# Introduction to Mathematical Optimization Modeling ## Objective and prerequisites The goal of this modeling example is to introduce the key components in the formulation of mixed integer programming (MIP) problems. For each component of a MIP problem formulation, we provide a description, the associated Python code, and the mathematical notation describing the component. To fully understand the content of this notebook, the reader should: * Be familiar with Python. * Have a background in any branch of engineering, computer science, economics, statistics, any branch of the “hard” sciences, or any discipline that uses quantitative models and methods. The reader should also consult the [documentation](https://www.gurobi.com/resources/?category-filter=documentation) of the Gurobi Python API. This notebook is explained in detail in our series of tutorial videos on mixed integer linear programming. You can watch these videos by clicking [here](https://www.gurobi.com/resource/tutorial-mixed-integer-linear-programming/) **Note:** You can download the repository containing this and other examples by clicking [here](https://github.com/Gurobi/modeling-examples/archive/master.zip). In order to run this Jupyter Notebook properly, you must have a Gurobi license. If you do not have one, you can request an [evaluation license](https://www.gurobi.com/downloads/request-an-evaluation-license/?utm_source=Github&utm_medium=website_JupyterME&utm_campaign=CommercialDataScience) as a *commercial user*, or download a [free license](https://www.gurobi.com/academia/academic-program-and-licenses/?utm_source=Github&utm_medium=website_JupyterME&utm_campaign=AcademicDataScience) as an *academic user*. ## Problem description Consider a consulting company that has three open positions: Tester, Java Developer, and Architect. The three top candidates (resources) for the positions are: Carlos, Joe, and Monika. 
The consulting company administered competency tests to each candidate in order to assess their ability to perform each of the jobs. The results of these tests are called *matching scores*. Assume that only one candidate can be assigned to a job, and at most one job can be assigned to a candidate. The problem is to determine an assignment of resources and jobs such that each job is fulfilled, each resource is assigned to at most one job, and the total matching scores of the assignments is maximized. ## Mathematical optimization Mathematical optimization (which is also known as mathematical programming) is a declarative approach where the modeler formulates an optimization problem that captures the key features of a complex decision problem. The Gurobi Optimizer solves the mathematical optimization problem using state-of-the-art mathematics and computer science. A mathematical optimization model has five components: * Sets * Parameters * Decision variables * Constraints * Objective function(s) The following Python code imports the Gurobi callable library and imports the ``GRB`` class into the main namespace. ``` import gurobipy as gp from gurobipy import GRB ``` ## Resource Assignment Problem ### Data The list $R$ contains the names of the three resources: Carlos, Joe, and Monika. The list $J$ contains the names of the job positions: Tester, Java Developer, and Architect. $r \in R$: index and set of resources. The resource $r$ belongs to the set of resources $R$. $j \in J$: index and set of jobs. The job $j$ belongs to the set of jobs $J$. ``` # Resource and job sets R = ['Carlos', 'Joe', 'Monika'] J = ['Tester', 'JavaDeveloper', 'Architect'] ``` The ability of each resource to perform each of the jobs is listed in the following matching scores table: ![scores](matching_score_data.PNG) For each resource $r$ and job $j$, there is a corresponding matching score $s$. The matching score $s$ can only take values between 0 and 100. 
That is, $s_{r,j} \in [0, 100]$ for all resources $r \in R$ and jobs $j \in J$. We use the Gurobi Python ``multidict`` function to initialize one or more dictionaries with a single statement. The function takes a dictionary as its argument. The keys represent the possible combinations of resources and jobs. ``` # Matching score data combinations, scores = gp.multidict({ ('Carlos', 'Tester'): 53, ('Carlos', 'JavaDeveloper'): 27, ('Carlos', 'Architect'): 13, ('Joe', 'Tester'): 80, ('Joe', 'JavaDeveloper'): 47, ('Joe', 'Architect'): 67, ('Monika', 'Tester'): 53, ('Monika', 'JavaDeveloper'): 73, ('Monika', 'Architect'): 47 }) ``` The following constructor creates an empty ``Model`` object “m”. We specify the model name by passing the string "RAP" as an argument. The ``Model`` object “m” holds a single optimization problem. It consists of a set of variables, a set of constraints, and the objective function. ``` # Declare and initialize model m = gp.Model('RAP') ``` ## Decision variables To solve this assignment problem, we need to identify which resource is assigned to which job. We introduce a decision variable for each possible assignment of resources to jobs. Therefore, we have 9 decision variables. To simplify the mathematical notation of the model formulation, we define the following indices for resources and jobs: ![variables](decision_variables.PNG) For example, $x_{2,1}$ is the decision variable associated with assigning the resource Joe to the job Tester. Therefore, decision variable $x_{r,j}$ equals 1 if resource $r \in R$ is assigned to job $j \in J$, and 0 otherwise. The ``Model.addVars()`` method creates the decision variables for a ``Model`` object. This method returns a Gurobi ``tupledict`` object that contains the newly created variables. We supply the ``combinations`` object as the first argument to specify the variable indices. The ``name`` keyword is used to specify a name for the newly created decision variables. 
By default, variables are assumed to be non-negative. ``` # Create decision variables for the RAP model x = m.addVars(combinations, name="assign") ``` ## Job constraints We now discuss the constraints associated with the jobs. These constraints need to ensure that each job is filled by exactly one resource. The job constraint for the Tester position requires that resource 1 (Carlos), resource 2 (Joe), or resource 3 (Monika) is assigned to this job. This corresponds to the following constraint. Constraint (Tester=1) $$ x_{1,1} + x_{2,1} + x_{3,1} = 1 $$ Similarly, the constraints for the Java Developer and Architect positions can be defined as follows. Constraint (Java Developer = 2) $$ x_{1,2} + x_{2,2} + x_{3,2} = 1 $$ Constraint (Architect = 3) $$ x_{1,3} + x_{2,3} + x_{3,3} = 1 $$ The job constraints are defined by the columns of the following table. ![jobs](jobs_constraints.PNG) In general, the constraint for the job Tester can be defined as follows. $$ x_{1,1} + x_{2,1} + x_{3,1} = \sum_{r=1}^{3} x_{r,1} = \sum_{r \in R} x_{r,1} = 1 $$ All of the job constraints can be defined in a similarly succinct manner. For each job $j \in J$, take the summation of the decision variables over all the resources. We can write the corresponding job constraint as follows. $$ \sum_{r \in R} x_{r,j} = 1 $$ The ``Model.addConstrs()`` method of the Gurobi/Python API defines the job constraints of the ``Model`` object “m”. This method returns a Gurobi ``tupledict`` object that contains the job constraints. The first argument of this method, "x.sum('*', j)", is the sum method and defines the LHS of the job constraints as follows: for each job $j$ in the set of jobs $J$, take the summation of the decision variables over all the resources. The $==$ defines an equality constraint, and the number "1" is the RHS of the constraints. These constraints are saying that exactly one resource should be assigned to each job. The second argument is the name of this type of constraint. 
``` # Create job constraints jobs = m.addConstrs((x.sum('*',j) == 1 for j in J), name='job') ``` ## Resource constraints The constraints for the resources need to ensure that at most one job is assigned to each resource. That is, it is possible that not all the resources are assigned. For example, we want a constraint that requires Carlos to be assigned to at most one of the jobs: either job 1 (Tester), job 2 (Java Developer), or job 3 (Architect). We can write this constraint as follows. Constraint (Carlos=1) $$ x_{1, 1} + x_{1, 2} + x_{1, 3} \leq 1. $$ This is a less-than-or-equal constraint to allow the possibility that Carlos is not assigned to any job. Similarly, the constraints for the resources Joe and Monika can be defined as follows: Constraint (Joe=2) $$ x_{2, 1} + x_{2, 2} + x_{2, 3} \leq 1. $$ Constraint (Monika=3) $$ x_{3, 1} + x_{3, 2} + x_{3, 3} \leq 1. $$ Observe that the resource constraints are defined by the rows of the following table. ![resources](resource_constraints.PNG) The constraint for the resource Carlos can be defined as follows. $$ x_{1, 1} + x_{1, 2} + x_{1, 3} = \sum_{j=1}^{3} x_{1,j} = \sum_{j \in J} x_{1,j} \leq 1. $$ Again, each of these constraints can be written in a succinct manner. For each resource $r \in R$, take the summation of the decision variables over all the jobs. We can write the corresponding resource constraint as follows. $$ \sum_{j \in J} x_{r,j} \leq 1. $$ The ``Model.addConstrs()`` method of the Gurobi/Python API defines the resource constraints of the ``Model`` object “m”. The first argument of this method, "x.sum(r, '*')", is the sum method and defines the LHS of the resource constraints as follows: for each resource $r$ in the set of resources $R$, take the summation of the decision variables over all the jobs. The $<=$ defines a less-than-or-equal constraint, and the number "1" is the RHS of the constraints. These constraints are saying that each resource can be assigned to at most one job. 
The second argument is the name of this type of constraints. ``` # Create resource constraints resources = m.addConstrs((x.sum(r,'*') <= 1 for r in R), name='resource') ``` ## Objective function The objective function is to maximize the total matching score of the assignments that satisfy the job and resource constraints. For the Tester job, the matching score is $53x_{1,1}$, if resource Carlos is assigned, or $80x_{2,1}$, if resource Joe is assigned, or $53x_{3,1}$, if resource Monika is assigned. Consequently, the matching score for the Tester job is as follows, where only one term in this summation will be nonzero. $$ 53x_{1,1} + 80x_{2,1} + 53x_{3,1}. $$ Similarly, the matching scores for the Java Developer and Architect jobs are defined as follows. The matching score for the Java Developer job is: $$ 27x_{1, 2} + 47x_{2, 2} + 73x_{3, 2}. $$ The matching score for the Architect job is: $$ 13x_{1, 3} + 67x_{2, 3} + 47x_{3, 3}. $$ The total matching score is the summation of each cell in the following table. ![objfcn](objective_function.PNG) The goal is to maximize the total matching score of the assignments. Therefore, the objective function is defined as follows. \begin{equation} \text{Maximize} \quad (53x_{1,1} + 80x_{2,1} + 53x_{3,1}) \; + \end{equation} \begin{equation} \quad (27x_{1, 2} + 47x_{2, 2} + 73x_{3, 2}) \; + \end{equation} \begin{equation} \quad (13x_{1, 3} + 67x_{2, 3} + 47x_{3, 3}). \end{equation} Each term in parenthesis in the objective function can be expressed as follows. \begin{equation} (53x_{1,1} + 80x_{2,1} + 53x_{3,1}) = \sum_{r \in R} s_{r,1}x_{r,1}. \end{equation} \begin{equation} (27x_{1, 2} + 47x_{2, 2} + 73x_{3, 2}) = \sum_{r \in R} s_{r,2}x_{r,2}. \end{equation} \begin{equation} (13x_{1, 3} + 67x_{2, 3} + 47x_{3, 3}) = \sum_{r \in R} s_{r,3}x_{r,3}. \end{equation} Hence, the objective function can be concisely written as: \begin{equation} \text{Maximize} \quad \sum_{j \in J} \sum_{r \in R} s_{r,j}x_{r,j}. 
\end{equation} The ``Model.setObjective()`` method of the Gurobi/Python API defines the objective function of the ``Model`` object “m”. The objective expression is specified in the first argument of this method. Notice that both the matching score parameters “score” and the assignment decision variables “x” are defined over the “combinations” keys. Therefore, we use the method “x.prod(score)” to obtain the summation of the elementwise multiplication of the "score" matrix and the "x" variable matrix. The second argument, ``GRB.MAXIMIZE``, is the optimization "sense." In this case, we want to *maximize* the total matching scores of all assignments. ``` # Objective: maximize total matching score of all assignments m.setObjective(x.prod(scores), GRB.MAXIMIZE) ``` We use the “write()” method of the Gurobi/Python API to write the model formulation to a file named "RAP.lp". ``` # Save model for inspection m.write('RAP.lp') ``` ![RAP](RAP_lp.PNG) We use the “optimize( )” method of the Gurobi/Python API to solve the problem we have defined for the model object “m”. ``` # Run optimization engine m.optimize() ``` The ``Model.getVars()`` method of the Gurobi/Python API retrieves a list of all variables in the Model object “m”. The ``.x`` variable attribute is used to query solution values and the ``.varName`` attribute is used to query the name of the decision variables. ``` # Display optimal values of decision variables for v in m.getVars(): if v.x > 1e-6: print(v.varName, v.x) # Display optimal total matching score print('Total matching score: ', m.objVal) ``` The optimal assignment is to assign: * Carlos to the Tester job, with a matching score of 53 * Joe to the Architect job, with a matching score of 67 * Monika to the Java Developer job, with a matching score of 73. The maximum total matching score is 193. ## Resource Assignment Problem with a budget constraint Now, assume there is a fixed cost $C_{r,j}$ associated with assigning a resource $r \in R$ to job $j \in J$. 
Assume also that there is a limited budget $B$ that can be used for job assignments. The cost of assigning Carlos, Joe, or Monika to any of the jobs is $\$1,000$, $\$2,000$, and $\$3,000$, respectively. The available budget is $\$5,000$. ### Data The list $R$ contains the names of the three resources: Carlos, Joe, and Monika. The list $J$ contains the names of the job positions: Tester, Java Developer, and Architect. The Gurobi Python ``multidict`` function initializes two dictionaries: * "scores" defines the matching scores for each resource and job combination. * "costs" defines the fixed cost associated with assigning a resource to a job. ``` # Resource and job sets R = ['Carlos', 'Joe', 'Monika'] J = ['Tester', 'JavaDeveloper', 'Architect'] # Matching score data # Cost is given in thousands of dollars combinations, scores, costs = gp.multidict({ ('Carlos', 'Tester'): [53, 1], ('Carlos', 'JavaDeveloper'): [27, 1], ('Carlos', 'Architect'): [13, 1], ('Joe', 'Tester'): [80, 2], ('Joe', 'JavaDeveloper'): [47, 2], ('Joe', 'Architect'): [67, 2], ('Monika', 'Tester'): [53, 3], ('Monika', 'JavaDeveloper'): [73, 3], ('Monika', 'Architect'): [47, 3] }) # Available budget (thousands of dollars) budget = 5 ``` The following constructor creates an empty ``Model`` object “m”. The ``Model`` object “m” holds a single optimization problem. It consists of a set of variables, a set of constraints, and the objective function. ``` # Declare and initialize model m = gp.Model('RAP2') ``` ### Decision variables The decision variable $x_{r,j}$ is 1 if $r \in R$ is assigned to job $j \in J$, and 0 otherwise. The ``Model.addVars()`` method defines the decision variables for the model object “m”. Because there is a budget constraint, it is possible that not all of the jobs will be filled. To account for this, we define a new decision variable that indicates whether or not a job is filled. Let $g_{j}$ be equal to 1 if job $j \in J$ is not filled, and 0 otherwise. 
This variable is a gap variable that indicates that a job cannot be filled. ***Remark:*** For the previous formulation of the RAP, we defined the assignment variables as non-negative and continuous, which is the default for the ``vtype`` argument of the ``Model.addVars()`` method. However, in this extension of the RAP, because of the budget constraint we added to the model, we need to explicitly define these variables as binary. The ``vtype=GRB.BINARY`` argument of the ``Model.addVars()`` method defines the assignment variables as binary. ``` # Create decision variables for the RAP model x = m.addVars(combinations, vtype=GRB.BINARY, name="assign") # Create gap variables for the RAP model g = m.addVars(J, name="gap") ``` ### Job constraints Since we have a limited budget to assign resources to jobs, it is possible that not all the jobs can be filled. For the job constraints, there are two possibilities: either a resource is assigned to fill the job, or the job cannot be filled and we need to declare a gap. This latter possibility is captured by the decision variable $g_j$. Therefore, the job constraints are written as follows. For each job $j \in J$, exactly one resource must be assigned to the job, or the corresponding $g_j$ variable must be set to 1: $$ \sum_{r \: \in \: R} x_{r,\; j} + g_{j} = 1. $$ ``` # Create job constraints jobs = m.addConstrs((x.sum('*',j) + g[j] == 1 for j in J), name='job') ``` ### Resource constraints The constraints for the resources need to ensure that at most one job is assigned to each resource. That is, it is possible that not all the resources are assigned. Therefore, the resource constraints are written as follows. For each resource $r \in R$, at most one job can be assigned to the resource: $$ \sum_{j \: \in \: J} x_{r,\; j} \leq 1. 
$$ ``` # Create resource constraints resources = m.addConstrs((x.sum(r,'*') <= 1 for r in R), name='resource') ``` ### Budget constraint This constraint ensures that the cost of assigning resources to fill job requirements does not exceed the available budget. The costs of assignment and the budget are in thousands of dollars. The cost of filling the Tester job is $1x_{1,1}$, if resource Carlos is assigned, or $2x_{2,1}$, if resource Joe is assigned, or $3x_{3,1}$, if resource Monika is assigned. Consequently, the cost of filling the Tester job is as follows, where at most one term in this summation will be nonzero. $$ 1x_{1,1} + 2x_{2,1} + 3x_{3,1}. $$ Similarly, the costs of filling the Java Developer and Architect jobs are defined as follows. The cost of filling the Java Developer job is: $$ 1x_{1, 2} + 2x_{2, 2} + 3x_{3, 2}. $$ The cost of filling the Architect job is: $$ 1x_{1, 3} + 2x_{2, 3} + 3x_{3, 3}. $$ Hence, the total cost of filling the jobs must be less than or equal to the available budget. \begin{equation} (1x_{1,1} + 2x_{2,1} + 3x_{3,1}) \; + \end{equation} \begin{equation} (1x_{1, 2} + 2x_{2, 2} + 3x_{3, 2}) \; + \end{equation} \begin{equation} (1x_{1, 3} + 2x_{2, 3} + 3x_{3, 3}) \leq 5 \end{equation} Each term in parentheses in the budget constraint can be expressed as follows. \begin{equation} (1x_{1,1} + 2x_{2,1} + 3x_{3,1}) = \sum_{r \in R} C_{r,1}x_{r,1}. \end{equation} \begin{equation} (1x_{1, 2} + 2x_{2, 2} + 3x_{3, 2}) = \sum_{r \in R} C_{r,2}x_{r,2}. \end{equation} \begin{equation} (1x_{1, 3} + 2x_{2, 3} + 3x_{3, 3}) = \sum_{r \in R} C_{r,3}x_{r,3}. \end{equation} Therefore, the budget constraint can be concisely written as: \begin{equation} \sum_{j \in J} \sum_{r \in R} C_{r,j}x_{r,j} \leq B. \end{equation} The ``Model.addConstr()`` method of the Gurobi/Python API defines the budget constraint of the ``Model`` object “m”. The first argument of this method, "x.prod(costs)", is the prod method and defines the LHS of the budget constraint. 
The $<=$ defines a less-than-or-equal constraint, and the available budget amount is the RHS of the constraint. This constraint is saying that the total cost of assigning resources to fill job requirements cannot exceed the available budget. The second argument is the name of this constraint. ``` # Create budget constraint # Note: store the constraint under a new name so the scalar budget value is not overwritten budgetConstr = m.addConstr((x.prod(costs) <= budget), name='budget') ``` ## Objective function The objective function is similar to the RAP. The first term in the objective is the total matching score of the assignments. In this extension of the RAP, it is possible that not all jobs are filled; however, we want to heavily penalize this possibility. For this purpose, we have a second term in the objective function that takes the summation of the gap variables over all the jobs and multiplies it by a big penalty $M$. Observe that the maximum value of a matching score is 100, and the value that we give to $M$ is 101. The rationale behind the value of $M$ is that having gaps heavily deteriorates the total matching score value. Consequently, the objective function is to maximize the total matching score of the assignments minus the penalty associated with gap variables taking a value equal to 1. $$ \max \; \sum_{j \; \in \; J} \sum_{r \; \in \; R} s_{r,j}x_{r,j} -M \sum_{j \in J} g_{j} $$ ``` # Penalty for not filling a job position M = 101 # Objective: maximize total matching score of assignments # Unfilled jobs are heavily penalized m.setObjective(x.prod(scores) - M*g.sum(), GRB.MAXIMIZE) # Run optimization engine m.optimize() ``` The definition of the objective function includes the penalty for not filling jobs. However, we are interested in the optimal total matching score value when not all the jobs are filled. For this purpose, we need to compute the total matching score value using the matching score values $s_{r,j}$ and the assignment decision variables $x_{r,j}$. 
``` # Compute total matching score from assignment variables total_matching_score = 0 for r, j in combinations: if x[r, j].x > 1e-6: print(x[r, j].varName, x[r, j].x) total_matching_score += scores[r, j]*x[r, j].x print('Total matching score: ', total_matching_score) ``` ### Analysis Recall that the budget is $\$5,000$, and the total cost associated with allocating all three resources is $\$6,000$. This means that there is not enough budget to allocate the three resources we have. Consequently, the Gurobi Optimizer must choose two resources to fill the job demand, leave one job unfilled, and maximize the total matching score. Notice that the two top matching scores are 80 (Joe for the Tester job) and 73 (Monika for the Java Developer job). Also, notice that the lowest score is 13 (Carlos for the Architect job). Assigning Joe to the Tester job, Monika to the Java Developer job, and nobody to the Architect job costs $\$5,000$ and yields a total matching score of 153. This is the optimal solution found by the Gurobi Optimizer. Copyright © 2020 Gurobi Optimization, LLC
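This result can be cross-checked without a solver. The sketch below is our own illustration (plain Python, reusing the score and cost data defined above; it is not part of the Gurobi API): it enumerates every assignment of at most one job per resource, discards over-budget choices, and applies the same penalized objective as the model.

```python
from itertools import product

# Same data as the notebook above (costs in thousands of dollars)
R = ['Carlos', 'Joe', 'Monika']
J = ['Tester', 'JavaDeveloper', 'Architect']
scores = {('Carlos', 'Tester'): 53, ('Carlos', 'JavaDeveloper'): 27, ('Carlos', 'Architect'): 13,
          ('Joe', 'Tester'): 80, ('Joe', 'JavaDeveloper'): 47, ('Joe', 'Architect'): 67,
          ('Monika', 'Tester'): 53, ('Monika', 'JavaDeveloper'): 73, ('Monika', 'Architect'): 47}
cost = {'Carlos': 1, 'Joe': 2, 'Monika': 3}
budget, M = 5, 101

best_obj, best_score, best_assignment = float('-inf'), None, None
# Each resource takes one job or None; no job may be taken twice
for choice in product(J + [None], repeat=len(R)):
    filled = [j for j in choice if j is not None]
    if len(filled) != len(set(filled)):
        continue  # a job was assigned to two resources
    if sum(cost[r] for r, j in zip(R, choice) if j is not None) > budget:
        continue  # over budget
    score = sum(scores[r, j] for r, j in zip(R, choice) if j is not None)
    obj = score - M * (len(J) - len(filled))  # same penalized objective as the model
    if obj > best_obj:
        best_obj, best_score, best_assignment = obj, score, dict(zip(R, choice))

print(best_assignment, best_score)
# → {'Carlos': None, 'Joe': 'Tester', 'Monika': 'JavaDeveloper'} 153
```

The brute force confirms the solver's answer: with a budget of 5, the best the company can do is fill two of the three jobs, for a total matching score of 153.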
# Comments In the previous lecture we learnt how to print "hello world". Let's try to do that again. Click on the cell below and hit the 'run' button. Go on, I'll wait for you... ``` # print("Hello World") ``` Woah!? Nothing happened!? Why is that? The reason “hello world” did not get printed to the console in this case is because that line of code is *“commented out”*. Comments in Python have a very simple syntax: # {Any text we want} # {More gibberish} Using comments is actually super easy: we just type the "#" character, and this makes Python ignore everything **on that line from that point on**. If we want comments to span multiple lines, then we have to remember to put a # at the start of every line. For example: ``` # Lots... # and lots... # of comments... ``` You can also place comments after some code, in which case the code executes. Here, let me show you: ``` print("this works") # this works because the "#" symbol is placed AFTER the bit of code we want to run! ``` ## What are comments for? Comments have a few uses, but the main one is to communicate with other human beings (including your future self!). Helpful comments can make reading and understanding code a lot less difficult, and that's why leaving good comments is a **GREAT** idea. Here, let me show you: ``` "abc" * 4 # ??? "a" * 3 # string * number repeats the string. Thus "a" * 2 = "aa" and "az" * 2 = "azaz". ``` In later lectures I'll explain how multiplying strings works. But for now just notice that in the first case you didn't know what was going on (because there were no helpful comments), but you understand what is happening in the second case because the comment explains the code. Also, as a quick heads up: generally speaking, if you feel you need to write comments explaining the code, then this is often a sign that your code isn't actually good code. Good code usually has the property that you can tell what it is doing just by looking at it. 
For example: ``` print( 4 * 2 ) # Very simply and clear code, you can tell what it does just by looking at it. print( int(chr(52)).__mul__(int(chr(50))) ) # A TERRIBLE and confusing way to calculate "4 * 2" ``` Complex code often requires comments to explain what it does, and that's why comments are (sort-of) bad. Make no mistake, comments are really useful and you SHOULD use them. However, it’s often a good idea to think of comments as a last-resort; only use them if you cannot think of a way to make your code less complicated. In later lectures, I'll discuss a number of techniques for writing simple code. ## Homework Your task for this week is to get the code in the box below to work, use your understanding of comments to fix it! ``` # print("\n * ,MMM8&&&. *\n MMMM88&&&&& .\n MMMM88&&&&&&&\n * MMM88&&&&&&&&\n MMM88&&&&&&&&\n 'MMM88&&&&&&'\n 'MMM8&&&' *\n |\\___/|\n ) ( . '\n =\\ /=\n )===( *\n / \\\n | |\n / \\\n \\ /\n _/\\_/\\_/\\__ _/_/\\_/\\_/\\_/\\_/\\_/\\_/\\_/\\_/\\_\n | | | |( ( | | | | | | | | | |\n | | | | ) ) | | | | | | | | | |\n | | | |(_( | | | | | | | | | |\n | | | | | | | | | | | | | | |\n | | | | | | | | | | | | | | |\n") ```
# Lost Luggage Distribution Problem ## Objective and Prerequisites In this example, you’ll learn how to use mathematical optimization to solve a vehicle routing problem with time windows, which involves helping a company figure out the minimum number of vans required to deliver pieces of lost or delayed baggage to their rightful owners and determining the optimal assignment of vans to customers. This model is example 27 from the fifth edition of Model Building in Mathematical Programming by H. Paul Williams on pages 287-289 and 343-344. This modeling example is at the advanced level, where we assume that you know Python and the Gurobi Python API and that you have advanced knowledge of building mathematical optimization models. Typically, the objective function and/or constraints of these examples are complex or require advanced features of the Gurobi Python API. **Download the Repository** <br /> You can download the repository containing this and other examples by clicking [here](https://github.com/Gurobi/modeling-examples/archive/master.zip). **Gurobi License** <br /> In order to run this Jupyter Notebook properly, you must have a Gurobi license. If you do not have one, you can request an [evaluation license](https://www.gurobi.com/downloads/request-an-evaluation-license/?utm_source=3PW&utm_medium=OT&utm_campaign=WW-MU-MUI-OR-O_LEA-PR_NO-Q3_FY20_WW_JPME_Lost_Luggage_Distribution_COM_EVAL_GitHub&utm_term=Lost%20Luggage%20Distribution&utm_content=C_JPM) as a *commercial user*, or download a [free license](https://www.gurobi.com/academia/academic-program-and-licenses/?utm_source=3PW&utm_medium=OT&utm_campaign=WW-MU-EDU-OR-O_LEA-PR_NO-Q3_FY20_WW_JPME_Lost_Luggage_Distribution_COM_EVAL_GitHub&utm_term=Lost%20Luggage%20Distribution&utm_content=C_JPM) as an *academic user*. 
## Problem Description A small company with six vans has a contract with a number of airlines to pick up lost or delayed baggage, belonging to customers in the London area, from Heathrow airport at 6 p.m. each evening. The contract stipulates that each customer must have their baggage delivered by 8 p.m. The company requires a model to advise them what is the minimum number of vans they need to use and to which customers each van should deliver and in what order. There is no practical capacity limitation on each van. Each van can hold all baggage that needs to be delivered in a two-hour period. To solve this problem, we can formulate an optimization model that minimizes the number of vans that need to be used. ## Model Formulation ### Sets and Indices $i,j \in \text{Locations} \equiv L=\{0,1..(n-1)\}$: Set of locations where $0$ is the index for the single depot -Heathrow airport, and $n$ is the number of locations. $k \in \text{Vans} \equiv V=\{0..K-1\}$: Index and set of vans, where $K$ is the number of vans. $S_k \in S $: Tour of van $k$, i.e. subset of locations visited by the van. ### Parameters $t_{i,j} \in \mathbb{R}^+$: Travel time from location $i$ to location $j$. ### Decision Variables $x_{i,j,k} \in \{0,1 \}$: This binary variable is equal 1, if van $k$ visits and goes directly from location $i$ to location $j$, and zero otherwise. $y_{i,k} \in \{0,1 \}$: This binary variable is equal 1, if van $k$ visits location $i$, and zero otherwise. $z_{k} \in \{0,1 \}$: This binary variable is equal 1, if van $k \in \{1,2..K\}$ is used, and zero otherwise. ### Objective Function **Number of vans**: Minimize number of vans used. \begin{equation} \text{Minimize} \quad \sum_{k = 1}^{K} z_k \end{equation} ### Constraints **Van utilization**: For all locations different from the depot, i.e. $i > 0$, if the location is visited by van $k$, then it is used. 
\begin{equation} y_{i,k} \leq z_{k} \quad \forall i \in L \setminus \{0\}, \; k \in V \end{equation} **Travel time**: No van travels for more than 120 min. Note that we do not consider the travel time to return to the depot. \begin{equation} \sum_{i \in L} \sum_{j \in L \setminus \{0\}} t_{i,j} \cdot x_{i,j,k} \leq 120 \quad \forall k \in V \end{equation} **Visit all customers**: Each customer location is visited by exactly one van. \begin{equation} \sum_{k \in V} y_{i,k} = 1 \quad \forall i \in L \setminus \{0\} \end{equation} **Depot**: Heathrow is visited by every van used. \begin{equation} \sum_{k \in V} y_{0,k} \geq \sum_{k \in V} z_k \end{equation} **Arriving at a location**: If location $j$ is visited by van $k$, then the van is coming from another location $i$. \begin{equation} \sum_{i \in L} x_{i,j,k} = y_{j,k} \quad \forall j \in L, \; k \in V \end{equation} **Leaving a location**: If van $k$ leaves location $j$, then the van is going to another location $i$. \begin{equation} \sum_{i \in L} x_{j,i,k} = y_{j,k} \quad \forall j \in L, \; k \in V \end{equation} **Breaking symmetry**: Vans with a smaller index visit at least as many locations as vans with a larger index, which removes equivalent solutions that merely permute van labels. \begin{equation} \sum_{i \in L} y_{i,k} \geq \sum_{i \in L} y_{i,k+1} \quad \forall k \in \{0..K-2\} \end{equation} **Subtour elimination**: These constraints ensure that no van route contains a cycle disconnected from the depot. \begin{equation} \sum_{i \in S_k} \sum_{j \in S_k \setminus \{i\}} x_{i,j,k} \leq |S_k|-1 \quad \forall k \in V, \; S_k \subseteq L \setminus \{0\} \end{equation} ## Python Implementation We import the Gurobi Python Module and other Python libraries. ``` import sys import math import random from itertools import permutations import gurobipy as gp from gurobipy import GRB # tested with Python 3.7.0 & Gurobi 9.1.0 ``` ## Input data We define all the input data for the model. The user defines the number of locations, including the depot, and the number of vans. We randomly determine the coordinates of each location and then calculate the Euclidean distance between each pair of locations. 
We assume a speed of 60 km/hr, which is 1 km/min. Hence travel time is equal to the distance. ``` # number of locations, including the depot. The index of the depot is 0 n = 17 locations = [*range(n)] # number of vans K = 6 vans = [*range(K)] # Create n random points # Depot is located at (0,0) coordinates random.seed(1) points = [(0, 0)] points += [(random.randint(0, 50), random.randint(0, 50)) for i in range(n-1)] # Dictionary of Euclidean distance between each pair of points # Assume a speed of 60 km/hr, which is 1 km/min. Hence travel time = distance time = {(i, j): math.sqrt(sum((points[i][k]-points[j][k])**2 for k in range(2))) for i in locations for j in locations if i != j} ``` ## Model Deployment We create a model and the variables. The decision variables determine the order in which each van visits a subset of customers, which customer is visited by each van, and whether a van is used at all. ``` m = gp.Model('lost_luggage_distribution') # Create variables: # x = 1, if van k visits and goes directly from location i to location j x = m.addVars(time.keys(), vans, vtype=GRB.BINARY, name='FromToBy') # y = 1, if customer i is visited by van k y = m.addVars(locations, vans, vtype=GRB.BINARY, name='visitBy') # Number of vans used is a decision variable z = m.addVars(vans, vtype=GRB.BINARY, name='used') # Travel time per van t = m.addVars(vans, ub=120, name='travelTime') # Maximum travel time s = m.addVar(name='maxTravelTime') ``` ## Constraints For all locations different from the depot, i.e. $i > 0$, if the location is visited by van $k$, then the van is used. ``` # Van utilization constraint visitCustomer = m.addConstrs((y[i,k] <= z[k] for k in vans for i in locations if i > 0), name='visitCustomer' ) ``` No van travels for more than 120 min. We make a small change from the original H.P. Williams version to introduce a variable, t[k], for the travel time of each van. 
```
# Travel time constraint
# Exclude the time to return to the depot
travelTime = m.addConstrs((gp.quicksum(time[i,j]*x[i,j,k] for i,j in time.keys() if j > 0) == t[k]
                           for k in vans),
                          name='travelTimeConstr')
```

Each customer location is visited by exactly one van.

```
# Visit all customers
visitAll = m.addConstrs((y.sum(i,'*') == 1 for i in locations if i > 0),
                        name='visitAll')
```

Heathrow (the depot) is visited by every van used.

```
# Depot constraint
depotConstr = m.addConstr(y.sum(0,'*') >= z.sum(), name='depotConstr')
```

If location $j$ is visited by van $k$, then the van comes from some other location $i$.

```
# Arriving at a customer location constraint
ArriveConstr = m.addConstrs((x.sum('*',j,k) == y[j,k] for j,k in y.keys()),
                            name='ArriveConstr')
```

If van $k$ leaves location $j$, then the van goes to some other location $i$.

```
# Leaving a customer location constraint
LeaveConstr = m.addConstrs((x.sum(j,'*',k) == y[j,k] for j,k in y.keys()),
                           name='LeaveConstr')
```

Breaking symmetry constraints.

```
breakSymm = m.addConstrs((y.sum('*',k-1) >= y.sum('*',k) for k in vans if k > 0),
                         name='breakSymm')
```

Relate the maximum travel time to the travel time of each van.

```
maxTravelTime = m.addConstrs((t[k] <= s for k in vans), name='maxTravelTimeConstr')

# Alternatively, as a general constraint:
# maxTravelTime = m.addConstr(s == gp.max_(t), name='maxTravelTimeConstr')
```

### Objective Function

We use two hierarchical objectives:
- First, minimize the number of vans used.
- Then, minimize the maximum travel time over all vans.

```
m.ModelSense = GRB.MINIMIZE
m.setObjectiveN(z.sum(), 0, priority=1, name="Number of vans")
m.setObjectiveN(s, 1, priority=0, name="Travel time")
```

### Callback Definition

Subtour constraints prevent a van from visiting a set of destinations without starting or ending at the Heathrow depot. Because there is an exponential number of these constraints, we don't want to add them all to the model.
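To make the last point concrete, the sketch below (plain Python; the 5-location instance is made up for illustration and no Gurobi is needed) shows what one violated subtour constraint looks like: a set $S$ of customer locations whose selected arcs form a closed loop that never touches the depot uses $|S|$ arcs, which exceeds the allowed $|S|-1$.

```python
def violates_subtour_constraint(edges, subset):
    """True if the selected arcs use more than |subset| - 1 arcs
    whose endpoints both lie in `subset`."""
    inside = sum(1 for i, j in edges if i in subset and j in subset)
    return inside > len(subset) - 1

# A disconnected candidate "solution": depot loop 0 -> 4 -> 0
# plus a stray cycle 1 -> 2 -> 3 -> 1 that never visits the depot
edges = [(0, 4), (4, 0), (1, 2), (2, 3), (3, 1)]

print(violates_subtour_constraint(edges, {1, 2, 3}))  # True: 3 arcs > |S| - 1 = 2
print(violates_subtour_constraint(edges, {1, 2}))     # False: 1 arc <= |S| - 1 = 1
```

Enumerating every such subset up front would require one constraint per subset of customers, hence the callback approach.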
Instead, we use a callback function to find violated subtour constraints and add them to the model as lazy constraints.

```
# Callback - use lazy constraints to eliminate sub-tours
def subtourelim(model, where):
    if where == GRB.Callback.MIPSOL:
        # make a list of edges selected in the solution
        vals = model.cbGetSolution(model._x)
        selected = gp.tuplelist((i,j) for i, j, k in model._x.keys()
                                if vals[i, j, k] > 0.5)
        # find the shortest cycle in the selected edge list
        tour = subtour(selected)
        if len(tour) < n:
            for k in vans:
                model.cbLazy(gp.quicksum(model._x[i, j, k]
                                         for i, j in permutations(tour, 2))
                             <= len(tour)-1)

# Given a tuplelist of edges, find the shortest subtour not containing depot (0)
def subtour(edges):
    unvisited = list(range(1, n))
    cycle = range(n+1)  # initial length has 1 more city
    while unvisited:
        thiscycle = []
        neighbors = unvisited
        while neighbors:
            current = neighbors[0]
            thiscycle.append(current)
            if current != 0:
                unvisited.remove(current)
            neighbors = [j for i, j in edges.select(current, '*')
                         if j == 0 or j in unvisited]
        if 0 not in thiscycle and len(cycle) > len(thiscycle):
            cycle = thiscycle
    return cycle
```

## Solve the model

```
# Verify model formulation
m.write('lost_luggage_distribution.lp')

# Run optimization engine
m._x = x
m.Params.LazyConstraints = 1
m.optimize(subtourelim)
```

## Analysis

Below we report the optimal route and travel time of each van used, together with the maximum travel time over all vans.

```
# Print optimal routes
for k in vans:
    route = gp.tuplelist((i,j) for i,j in time.keys() if x[i,j,k].X > 0.5)
    if route:
        i = 0
        print(f"Route for van {k}: {i}", end='')
        while True:
            i = route.select(i, '*')[0][1]
            print(f" -> {i}", end='')
            if i == 0:
                break
        print(f". Travel time: {round(t[k].X,2)} min")

print(f"Max travel time: {round(s.X,2)}")
```

## References

H. Paul Williams, Model Building in Mathematical Programming, fifth edition.

Copyright © 2020 Gurobi Optimization, LLC
github_jupyter
```
#-*- coding: utf-8 -*-
import re
from wxpy import *
import jieba
import numpy
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.pyplot import imread  # scipy.misc.imread was removed in SciPy >= 1.2
from wordcloud import WordCloud, ImageColorGenerator

def write_txt_file(path, txt):
    '''
    Append text to a txt file
    '''
    with open(path, 'a', encoding='gb18030', newline='') as f:
        f.write(txt)

def read_txt_file(path):
    '''
    Read a txt file
    '''
    with open(path, 'r', encoding='gb18030', newline='') as f:
        return f.read()

def login():
    # Initialize the bot and log in by scanning the QR code
    bot = Bot()
    # Get all friends
    my_friends = bot.friends()
    print(type(my_friends))
    return my_friends

def show_sex_ratio(friends):
    # Count male and female friends with a dictionary
    sex_dict = {'male': 0, 'female': 0}
    for friend in friends:
        # Tally by sex
        if friend.sex == 1:
            sex_dict['male'] += 1
        elif friend.sex == 2:
            sex_dict['female'] += 1
    print(sex_dict)

def get_area_distribution(friends):
    # Count friends per province with a dictionary
    province_dict = {'北京': 0, '上海': 0, '天津': 0, '重庆': 0, '河北': 0, '山西': 0, '吉林': 0,
                     '辽宁': 0, '黑龙江': 0, '陕西': 0, '甘肃': 0, '青海': 0, '山东': 0, '福建': 0,
                     '浙江': 0, '台湾': 0, '河南': 0, '湖北': 0, '湖南': 0, '江西': 0, '江苏': 0,
                     '安徽': 0, '广东': 0, '海南': 0, '四川': 0, '贵州': 0, '云南': 0, '内蒙古': 0,
                     '新疆': 0, '宁夏': 0, '广西': 0, '西藏': 0, '香港': 0, '澳门': 0}
    # Tally provinces
    for friend in friends:
        if friend.province in province_dict.keys():
            province_dict[friend.province] += 1

    # Convert to a JSON-Array-like structure for easier presentation
    data = []
    for key, value in province_dict.items():
        data.append({'name': key, 'value': value})
    return data

def show_area_distribution(friends):
    # Count friends per province with a dictionary
    province_dict = {'北京': 0, '上海': 0, '天津': 0, '重庆': 0, '河北': 0, '山西': 0, '吉林': 0,
                     '辽宁': 0, '黑龙江': 0, '陕西': 0, '甘肃': 0, '青海': 0, '山东': 0, '福建': 0,
                     '浙江': 0, '台湾': 0, '河南': 0, '湖北': 0, '湖南': 0, '江西': 0, '江苏': 0,
                     '安徽': 0, '广东': 0, '海南': 0, '四川': 0, '贵州': 0, '云南': 0, '内蒙古': 0,
                     '新疆': 0, '宁夏': 0, '广西': 0, '西藏': 0, '香港': 0, '澳门': 0}
    # Tally provinces
    for friend in friends:
        if friend.province in province_dict.keys():
            province_dict[friend.province] += 1

    # Convert to a JSON-Array-like structure for easier presentation
    data = []
    for key, value in province_dict.items():
        data.append({'name': key, 'value': value})
    print(data)

def show_signature(friends):
    # Tally signatures
    for friend in friends:
        # Clean the data: keep only Chinese characters, removing punctuation
        # and anything else that would distort the word counts
        pattern = re.compile(r'[一-龥]+')
        filterdata = re.findall(pattern, friend.signature)
        write_txt_file('signatures.txt', ''.join(filterdata))

    # Read the file
    content = read_txt_file('signatures.txt')
    segment = jieba.lcut(content)
    words_df = pd.DataFrame({'segment': segment})

    # Load stopwords
    stopwords = pd.read_csv("stopwords.txt", index_col=False, quoting=3, sep=" ",
                            names=['stopword'], encoding='utf-8')
    words_df = words_df[~words_df.segment.isin(stopwords.stopword)]
    # print(words_df)

    # Dict aggregation on a grouped Series was removed in newer pandas versions
    words_stat = words_df.groupby('segment')['segment'].agg(numpy.size).rename("计数")
    words_stat = words_stat.reset_index().sort_values(by=["计数"], ascending=False)

    # Word cloud settings
    color_mask = imread('background.jfif')
    wordcloud = WordCloud(font_path="simhei.ttf",   # font that can render Chinese
                          background_color="white", # background color
                          max_words=100,            # maximum number of words shown
                          mask=color_mask,          # background image mask
                          max_font_size=80,         # maximum font size
                          random_state=42,
                          width=1200, height=1060, margin=2,
                          # default image size; when a mask image is used, the saved
                          # picture follows the mask's size; margin is the spacing
                          # between words
                          )
    # Generate the cloud: either feed the raw text to generate(), or compute the
    # frequencies ourselves and use generate_from_frequencies()
    word_frequence = {x[0]: x[1] for x in words_stat.head(100).values}
    # print(word_frequence)
    word_frequence_dict = {}
    for key in word_frequence:
        word_frequence_dict[key] = word_frequence[key]
    wordcloud.generate_from_frequencies(word_frequence_dict)

    # Derive the color values from the background image
    image_colors = ImageColorGenerator(color_mask)
    # Recolor the cloud
    wordcloud.recolor(color_func=image_colors)
    # Save the image
    wordcloud.to_file('weixin_most_words.png')
    plt.imshow(wordcloud)
    plt.axis("off")
    plt.show()

if __name__ == '__main__':
    friends = login()
    show_sex_ratio(friends)
    show_area_distribution(friends)
    show_signature(friends)

from pyecharts import Bar, Line
from pyecharts.engine import create_default_environment

myfriends_pro = get_area_distribution(friends)
# Sort provinces by friend count
newlist = sorted(myfriends_pro, key=lambda k: k['value'], reverse=True)
pro_list, num_list = [], []
for onefriend in newlist:
    if onefriend["value"] > 0:
        pro_list.append(onefriend["name"])
        num_list.append(onefriend["value"])

bar = Bar("好友数量",
          "共有" + str(len(friends)) + "个好友。他们分布在" + str(len(num_list)) + "个省份",
          width=1200, height=600)
print(len(pro_list), len(num_list))
bar.add("我的微信好友分布", pro_list, num_list, is_stack=True)
bar.render(r"./我的微信好友分布.html")
bar

from pyecharts import Map

value = num_list
attr = pro_list
map = Map('我的好友分布', width=1000, height=800)
map.add("", attr, value, maptype='china', visual_range=[0, 10],
        visual_text_color="#fff", symbol_size=10, is_visualmap=True)
map.render(r"./我的微信好友分布地图.html")
map
```
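As an aside, the hand-maintained tally dictionaries in the script above can be replaced by `collections.Counter`. A standalone sketch with hypothetical stand-ins for `wxpy` friend objects (the real objects expose `.sex` and `.province` the same way):

```python
from collections import Counter
from types import SimpleNamespace

# Hypothetical stand-ins exposing only the attributes the script uses
friends = [
    SimpleNamespace(sex=1, province='北京'),
    SimpleNamespace(sex=2, province='上海'),
    SimpleNamespace(sex=1, province='北京'),
]

# sex == 1 is male, sex == 2 is female; other values are ignored
sex_count = Counter('male' if f.sex == 1 else 'female'
                    for f in friends if f.sex in (1, 2))
province_count = Counter(f.province for f in friends)

print(sex_count)                      # Counter({'male': 2, 'female': 1})
print(province_count.most_common(1))  # [('北京', 2)]
```

`Counter.most_common` also removes the need to sort the province list by hand before plotting.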
``` model_name= 'Burns_CNNBiLSTM' %load_ext autoreload %autoreload 2 import sys sys.path.append('../') import os import tensorflow import numpy as np import random seed_value = 123123 seed_value = None environment_name = sys.executable.split('/')[-3] print('Environment:', environment_name) os.environ[environment_name] = str(seed_value) np.random.seed(seed_value) random.seed(seed_value) tensorflow.random.set_seed(seed_value) from tensorflow.compat.v1 import ConfigProto from tensorflow.compat.v1 import InteractiveSession import tensorflow.compat.v1.keras.backend as K config = ConfigProto() config.gpu_options.allow_growth = True session = InteractiveSession(config=config) K.set_session(session) tensorflow.__version__ multiple_gpus = [0,1,2,3] #multiple_gpus = None import os import tensorflow as tf print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU'))) if multiple_gpus: devices = [] for gpu in multiple_gpus: devices.append('/gpu:' + str(gpu)) strategy = tensorflow.distribute.MirroredStrategy(devices=devices) else: # Get the GPU device name. 
device_name = tensorflow.test.gpu_device_name() # The device name should look like the following: if device_name == '/device:GPU:0': print('Using GPU: {}'.format(device_name)) else: raise SystemError('GPU device not found') os.environ["CUDA_VISIBLE_DEVICES"] = device_name os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" from wrappers.bioc_wrapper import bioc_to_docs, bioc_to_relevances from wrappers.pandas_wrapper import relevances_to_pandas, docs_to_pandasdocs from preprocessing.dl import DL_preprocessing from mlearning.dl_models import Burns_CNNBiLSTM from preprocessing.embeddings import compute_embedding_matrix, glove_embeddings_2 import numpy as np from sklearn.metrics import accuracy_score from sklearn.metrics import precision_score from sklearn.metrics import recall_score from sklearn.metrics import f1_score, matthews_corrcoef from sklearn.metrics import cohen_kappa_score from sklearn.metrics import roc_auc_score, auc, roc_curve, precision_recall_curve from sklearn.metrics import confusion_matrix from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint from tensorflow.keras.models import load_model from preprocessing.dl import plot_training_history from preprocessing.dl_config import DLConfig from preprocessing.dl import average_precision from tensorflow.keras.preprocessing import text from preprocessing.dl import plot_roc_n_pr_curves import nltk nltk.download('stopwords') nltk.download('wordnet') nltk.download('punkt') from nltk.corpus import stopwords import seaborn as sns import pandas as pd import os from keras import backend as K import pickle train_dataset_path = '../datasets/PMtask_Triage_TrainingSet.xml' test_dataset_path = '../datasets/PMtask_Triage_TestSet.xml' ``` ## Load Data ``` dl_config = DLConfig(model_name=model_name, seed_value=seed_value) dl_config.stop_words = set(stopwords.words('english')) #dl_config.stop_words = None dl_config.lower = False dl_config.remove_punctuation = False dl_config.split_by_hyphen = False 
dl_config.lemmatization = False
dl_config.stems = True

docs_train = bioc_to_docs(train_dataset_path, dl_config=dl_config)
relevances_train = bioc_to_relevances(train_dataset_path, 'protein-protein')

x_train_df = docs_to_pandasdocs(docs_train)
y_train_df = relevances_to_pandas(x_train_df, relevances_train)

x_train_df
y_train_df
x_train_df['Document'][0].title_string
x_train_df['Document'][0].abstract_string
```

### Embeddings and Deep Learning

```
# Parameters
dl_config.padding = 'post'     # 'pre' -> default; 'post' -> alternative
dl_config.truncating = 'post'  # 'pre' -> default; 'post' -> alternative
#####
dl_config.oov_token = 'OOV'
dl_config.epochs = 50
dl_config.batch_size = 32         # and consider increasing the batch size
dl_config.learning_rate = 0.0001  # try decreasing it
dl_config.max_sent_len = 300      # sentences will have a maximum of "max_sent_len" words  # 400/500
dl_config.max_nb_words = 100_000  # only the top "max_nb_words" words in the dataset will be considered
dl_config.embeddings = 'biowordvec'
dl_config.validation_percentage = 10

if not os.path.isdir('./embeddings'):
    !mkdir embeddings

if dl_config.embeddings == 'glove':
    if not os.path.isfile('./embeddings/glove.6B.zip'):
        # !wget -P ./embeddings http://nlp.stanford.edu/data/glove.6B.zip
        !unzip ./embeddings/glove.6B.zip -d ./embeddings
    dl_config.embedding_path = './embeddings/glove.6B.200d.txt'
    dl_config.embedding_dim = 200
    dl_config.embedding_format = 'glove'
elif dl_config.embeddings == 'biowordvec':  # 200 dimensions
    if not os.path.isfile('./embeddings/biowordvec'):
        !wget -O ./embeddings/biowordvec https://ndownloader.figshare.com/files/12551780
    dl_config.embedding_path = './embeddings/biowordvec'
    dl_config.embedding_dim = 200
    dl_config.embedding_format = 'word2vec'
elif dl_config.embeddings == 'pubmed_pmc':  # 200 dimensions
    if not os.path.isfile('./embeddings/pubmed_pmc.bin'):
        !wget -O ./embeddings/pubmed_pmc.bin http://evexdb.org/pmresources/vec-space-models/PubMed-and-PMC-w2v.bin
    dl_config.embedding_path =
'./embeddings/pubmed_pmc.bin' dl_config.embedding_dim = 200 dl_config.embedding_format = 'word2vec' elif dl_config.embeddings == 'pubmed_ncbi': #100 dimensions if not os.path.isfile('./embeddings/pubmed_ncbi.bin.gz'): !wget -O ./embeddings/pubmed_ncbi.bin.gz ftp://ftp.ncbi.nlm.nih.gov/pub/wilbur/EMBED/pubmed_s100w10_min.bin.gz dl_config.embedding_path = './embeddings/pubmed_ncbi.bin.gz' dl_config.embedding_dim = 100 dl_config.embedding_format = 'word2vec' else: raise Exception("Please Insert Embeddings Type") ``` ### Keras Callbacks ``` dl_config.keras_callbacks = True if dl_config.keras_callbacks: dl_config.patience = 5 #early-stopping patience checkpoint_path = str(dl_config.model_id_path) + '/checkpoint.hdf5' keras_callbacks = [ EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=dl_config.patience), ModelCheckpoint(checkpoint_path, monitor='val_loss', mode='min', verbose=1, save_best_only=True) ] else: keras_callbacks=None #Preprocessing for Training Data dl_config.tokenizer = text.Tokenizer(num_words=dl_config.max_nb_words, oov_token=dl_config.oov_token) print(x_train_df['Document'][0].fulltext_string) for i, tok in enumerate(x_train_df['Document'][0].fulltext_tokens): print(i, ': ', tok) x_train, y_train, x_val, y_val = DL_preprocessing(x_train_df, y_train_df, dl_config, dataset='train', validation_percentage = dl_config.validation_percentage, seed_value=dl_config.seed_value) x_train.shape x_train dl_config.embedding_matrix = compute_embedding_matrix(dl_config, embeddings_format = dl_config.embedding_format) protein_index = dl_config.tokenizer.word_index['protein'] print("Dimension of the word embedding for the word 'protein': ", len(dl_config.embedding_matrix[protein_index])) print("Index for the word 'protein': ", protein_index) # with open('saved_dl_config.pkl', 'rb') as fp: # dl_config = pickle.load(fp) # with open('saved_data.pkl', 'rb') as fp: # data = pickle.load(fp) # x_train, y_train, x_val, y_val = data[0], data[1], data[2], data[3] 
dl_config.embedding_matrix.shape[0] len(dl_config.tokenizer.word_index) #model = Hierarchical_Attention_v3(embedding_matrix, dl_config, seed_value=seed_value) #=> Best Results until now if multiple_gpus: with strategy.scope(): model = Burns_CNNBiLSTM(dl_config.embedding_matrix, dl_config, learning_rate=dl_config.learning_rate, seed_value=dl_config.seed_value) else: model = Burns_CNNBiLSTM(dl_config.embedding_matrix, dl_config, learning_rate=dl_config.learning_rate, seed_value=dl_config.seed_value) history = model.fit(x_train, y_train, epochs=dl_config.epochs, batch_size=dl_config.batch_size, validation_data=(x_val, y_val), callbacks=keras_callbacks) if dl_config.keras_callbacks: model.load_weights(checkpoint_path) ``` ## Evaluation - Training Set ``` train_loss, dl_config.train_acc = model.evaluate(x_train, y_train, verbose=0) print('Training Loss: %.3f' % (train_loss)) print('Training Accuracy: %.3f' % (dl_config.train_acc)) plot_training_history(history_dict = history, dl_config=dl_config) ``` # Test Set ### Load Data ``` docs_test = bioc_to_docs(test_dataset_path, dl_config=dl_config) relevances_test = bioc_to_relevances(test_dataset_path, 'protein-protein') x_test_df = docs_to_pandasdocs(docs_test) y_test_df = relevances_to_pandas(x_test_df, relevances_test) x_test, y_test = DL_preprocessing(x_test_df, y_test_df, dl_config, dataset = 'test') ``` ### Predictions ``` yhat_probs = model.predict(x_test, verbose=0) yhat_probs = yhat_probs[:, 0] yhat_classes = np.where(yhat_probs > 0.5, 1, yhat_probs) yhat_classes = np.where(yhat_classes < 0.5, 0, yhat_classes).astype(np.int64) ``` ## Evaluation - Test Set ### ROC and Precision-Recall curves ``` dl_config.test_roc_auc, dl_config.test_pr_auc = plot_roc_n_pr_curves(y_test, yhat_probs,dl_config = dl_config) dl_config.test_avg_prec = average_precision(y_test_df, yhat_probs) print('Average Precision: %f' % dl_config.test_avg_prec) # accuracy: (tp + tn) / (p + n) dl_config.test_acc = accuracy_score(y_test, yhat_classes) 
print('Accuracy: %f' % dl_config.test_acc) # precision tp / (tp + fp) dl_config.test_prec = precision_score(y_test, yhat_classes) print('Precision: %f' % dl_config.test_prec) # recall: tp / (tp + fn) dl_config.test_recall = recall_score(y_test, yhat_classes) print('Recall: %f' % dl_config.test_recall) # f1: 2 tp / (2 tp + fp + fn) dl_config.test_f1_score = f1_score(y_test, yhat_classes) print('F1 score: %f' % dl_config.test_f1_score) # ROC AUC print('ROC AUC: %f' % dl_config.test_roc_auc) # PR AUC print('PR AUC: %f' % dl_config.test_pr_auc) # kappa dl_config.test_kappa = cohen_kappa_score(y_test, yhat_classes) print('Cohens kappa: %f' % dl_config.test_kappa) dl_config.test_mcc = matthews_corrcoef(y_test, yhat_classes) print('MCC: %f' % dl_config.test_mcc) # confusion matrix matrix = confusion_matrix(y_test, yhat_classes) print('Confusion Matrix:\n %s \n' % matrix) dl_config.test_true_neg, dl_config.test_false_pos, dl_config.test_false_neg, dl_config.test_true_pos = confusion_matrix( y_test, yhat_classes).ravel() ``` ### Model ID ``` dl_config.model_id ``` ### Save DL_Config ``` dl_config.save() dl_config.path ``` ### Write Results ``` dl_config.write_report() ``` ### Model Save ``` model.save(dl_config.model_id_path / 'model_tf', save_format = 'tf') model_yaml = model.to_yaml() with open(dl_config.model_id_path / "model.yaml", "w") as yaml_file: yaml_file.write(model_yaml) import winsound frequency = 2500 # Set Frequency To 2500 Hertz duration = 1000 # Set Duration To 1000 ms == 1 second winsound.Beep(frequency, duration) ```
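The scikit-learn calls above follow the formulas quoted in the comments (precision = tp / (tp + fp), recall = tp / (tp + fn), F1 = 2·tp / (2·tp + fp + fn)). As a sanity check independent of any trained model, the same numbers can be reproduced on a toy prediction vector with plain NumPy:

```python
import numpy as np

# Toy labels and predictions for illustration only
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])

# Confusion-matrix cells, computed directly from the definitions
tp = int(np.sum((y_true == 1) & (y_pred == 1)))  # 3
fp = int(np.sum((y_true == 0) & (y_pred == 1)))  # 1
fn = int(np.sum((y_true == 1) & (y_pred == 0)))  # 1
tn = int(np.sum((y_true == 0) & (y_pred == 0)))  # 3

precision = tp / (tp + fp)          # 0.75
recall = tp / (tp + fn)             # 0.75
f1 = 2 * tp / (2 * tp + fp + fn)    # 0.75
accuracy = (tp + tn) / len(y_true)  # 0.75
print(precision, recall, f1, accuracy)
```

Running the scikit-learn functions on the same two arrays should reproduce these values exactly.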
``` import os import numpy as np import pandas as pd import statistics as st from scipy import signal import matplotlib.pyplot as plt import keras from keras.utils import to_categorical from sklearn.metrics import confusion_matrix from sklearn.metrics import confusion_matrix,classification_report data1n = [] data2n = [] root = 'Filtered' emosi = ['kaget','marah','santai','senang'] maindirs = 'IAPS hari 2' dirs = os.listdir(maindirs) def lowpass_filter(sinyal,fcl): sampleRate = 50 wnl = fcl/(sampleRate) b,a = signal.butter(3,wnl,'lowpass') fil = signal.filtfilt(b, a, sinyal) return fil def filtering(folder): print("Filter dimulai, harap tunggu sebentar") dirs = os.listdir(folder) for j in dirs: df = pd.read_csv(folder+'/'+str(j)) print(j) #wk = df["Waktu"] pp = df['Pipi'] al = df['Alis'] #wkt = list(wk) data1 = list(pp) data2 = list(al) t = [i for i in range(len(data1))] w = lowpass_filter(data1,2.0) x = lowpass_filter(data2,2.0) mn1 = min(w) mx1 = max(w) mn2 = min(x) mx2 = max(x) for i in range(len(w)): data1n.append((w[i]-mn1)/(mx1-mn1)) data2n.append((x[i]-mn2)/(mx2-mn2)) f = plt.figure() plt.xlabel('Data ke-') plt.ylabel('mV') plt.grid(True) plt.title(j) plt.plot(t,data1n) plt.plot(t,data2n) plt.savefig('Data_Plot3/'+j+'2.png') f.clear() plt.close(f) d_t = list(zip(data1n,data2n)) root = 'Data_filter4' finaldirs = os.path.join(root,j) df1 = pd.DataFrame(d_t,columns=['Pipi','Alis']) df1.to_csv(finaldirs) data1n.clear() data2n.clear() print('Filter Selesai !') filtering('IAPS hari 2') data1n = [] data2n = [] root_filter = 'Filtered' emosi = ['NH','NL','PH','PL'] # pasien = ['adit','agus','amin','eka','riznop'] pasien = ['adit','agus','amin','bagus','basith','eka','hanif','rizki'] stdvn1 = [] rrtn1 = [] mdn1 = [] stdvn2 = [] rrtn2 = [] mdn2 = [] emosi_list = [] count = 0 root_extract = 'IAPS hari 2' rawdata = [] pipi = [] alis = [] wkt = [] count = 0 header_list = ['Waktu','Pipi','Alis'] data = [] X = [] y = [] # int((len(dirs2)-1)/4)+1 def extract_feature(folder): 
dirs2 = os.listdir(folder) count = 0 root = 'IAPS3_extract' for y in pasien: for i in emosi: for j in range(1,6): df = pd.read_csv(folder+'/'+y+i+str(j)+'.csv') print(folder+'/'+y+i+str(j)+'.csv') data1 = list(df['Pipi'].to_numpy()) data2 = list(df['Alis'].to_numpy()) stdv1 = st.stdev(data1) rrt1 = st.mean(data1) md1 = st.median(data1) stdvn1.append(stdv1) rrtn1.append(rrt1) mdn1.append(md1) stdv2 = st.stdev(data2) rrt2 = st.mean(data2) md2 = st.median(data2) stdvn2.append(stdv2) rrtn2.append(rrt2) mdn2.append(md2) if(i == 'NH'): mk = 1 emosi_list.append(mk) elif(i == 'NL'): mk = 2 emosi_list.append(mk) elif(i == 'PH'): mk = 3 emosi_list.append(mk) elif(i == 'PL'): mk = 4 emosi_list.append(mk) # print('Selesai !') namafile = 'iaps4_extracted.csv' # namafile = 'tes_extracted2.csv' finaldirs = os.path.join(root,namafile) df1 = pd.DataFrame({'STDEV1' : stdvn1,'AVG1' : rrtn1,'MDN1' : mdn1, 'STDEV2':stdvn2,'AVG2' : rrtn2,'MDN2' : mdn2,'EMOSI' : emosi_list}) df1.to_csv(finaldirs,mode='w+',index=False) print(finaldirs) stdvn1.clear() rrtn1.clear() mdn1.clear() stdvn2.clear() rrtn2.clear() mdn2.clear() emosi_list.clear() print('Ekstraksi Fitur Selesai !') #namafile = i+'_extracted.csv' #finaldirs = os.path.join(root,namafile) #if(i == 'kaget'): # i = 1 #elif(i == 'marah'): # i = 2 #elif(i == 'santai'): # i = 3 #elif(i == 'senang'): # i = 4 #df1 = pd.DataFrame({'STDEV1' : stdvn1,'AVG1' : rrtn1,'MDN1' : mdn1, # 'STDEV2':stdvn2,'AVG2' : rrtn2,'MDN2' : mdn2,'EMOSI' : i}) #df1.to_csv(finaldirs,mode='w+') #print(finaldirs) #stdvn1.clear() #rrtn1.clear() #mdn1.clear() #stdvn2.clear() #rrtn2.clear() #mdn2.clear() def create_model(): model = keras.models.Sequential([ keras.layers.LSTM(8, return_sequences=True, input_shape=[None,6]), keras.layers.LSTM(8), keras.layers.Dense(4, activation='softmax') ]) model.compile( loss="categorical_crossentropy", optimizer=keras.optimizers.Adam(lr=0.01), metrics=["acc"] ) model.summary() return model # extract_feature(folder='Data_filter4') 
keras.backend.clear_session() model = create_model() X = [] y = [] maindirs = 'IAPS3_extract' dirs = os.listdir(maindirs) emosi = ['NH','NL','PH','PL'] df = pd.read_csv(maindirs+"/"+"janai.csv") d_t = df.drop('EMOSI',axis=1) label = pd.get_dummies(df['EMOSI']) data_len = int(len(d_t)) for i in range (0,data_len): temp = d_t.iloc[i] temp_list = temp.values.tolist() X.append(temp_list) for j in range(0,data_len): temp1 = label.iloc[j] temp1_list = temp1.values.tolist() y.append(temp1_list) X = np.array(X) y = np.array(y) length = 420 num_train = 278 index = np.random.randint(0,length, size=length) train_X = X[index[0:num_train]] train_Y = y[index[0:num_train]] test_X = X[index[num_train:]] test_Y = y[index[num_train:]] # train_X = X[0:num_train] # train_Y = y[0:num_train] # test_X = X[num_train:] # test_Y = y[num_train:] train_X = np.reshape(train_X, (train_X.shape[0],1,train_X.shape[1])) test_X = np.reshape(test_X, (test_X.shape[0],1,test_X.shape[1])) print(train_X.shape) print(train_Y.shape) print(test_X.shape) print(test_Y.shape) callback = keras.callbacks.EarlyStopping(monitor='loss', patience=3) history = model.fit( train_X, train_Y, batch_size = 20, epochs=30, callbacks=[callback], validation_data=(test_X,test_Y), ) inpoot = int(input("Apakah mau simpan model ? 
")) if inpoot == 1: nama_model = str(input('Nama model = ')) model.save(nama_model) model.save_weights(nama_model+'.h5') print("Model berhasil disimpan !") keras.backend.clear_session() else: print("ga disimpen") keras.backend.clear_session() history_dict = history.history loss_values = history_dict['loss'] val_loss_values = history_dict['val_loss'] epochs = range(1, len(loss_values) + 1) plt.plot(epochs, loss_values, 'bo', label='Training loss') plt.plot(epochs, val_loss_values, 'b', label='Validation loss') plt.title('Training and validation loss') plt.xlabel('Epochs') plt.ylabel('Loss') plt.legend() plt.show() acc_values = history_dict['acc'] val_acc_values = history_dict['val_acc'] plt.plot(epochs, acc_values, 'bo', label='Training acc') plt.plot(epochs, val_acc_values, 'b', label='Validation acc') plt.title('Training and validation accuracy') plt.xlabel('Epochs') plt.ylabel('Loss') plt.legend() plt.show() ```
Actually this is not PCA, but PCA-related questions in CS357. ``` import numpy as np import numpy.linalg as la def center_data(A): ''' Given a matrix A, we want every column to shift by their column mean ''' B = np.copy(A).astype(float) for i in range(B.shape[1]): B[:,i] -= np.mean(B[:,i]) return B # Generic Test Case for center_data X = np.array([[0.865, 1.043, -0.193], [-1.983, -1.250, 2.333], [0.305, 1.348, -0.024], [1.509, -0.069, -1.400]]) expected = np.array([ [ 0.691, 0.775, -0.372], [-2.157, -1.518, 2.154], [ 0.131, 1.08 , -0.203], [ 1.335, -0.337, -1.579] ]) actual = center_data(X) tol = 10**-4 assert la.norm(expected-actual) < tol def pc_needed(U, S, V, limit): ''' Given a singular value decomposition return the number of principal components needed to achieve a variance coverage equal to or higher than limit ''' var = S@S / np.sum(S@S) curr = 0.0 counter = 0 for i in range(len(S)): if curr >= limit: break curr += var[i,i] counter += 1 return counter # Generic Test Case for pc_needed limit = 0.57 U = np.array([[-0.1, 0.1, -0.1, -0.1, -0.1, 0.0, -0.3, 0.0, -0.6], [0.1, -0.5, 0.1, 0.1, 0.2, 0.1, -0.4, 0.2, 0.0], [0.2, 0.2, -0.1, 0.7, 0.3, -0.1, 0.3, 0.1, -0.4], [0.1, -0.1, -0.2, 0.0, -0.1, 0.0, 0.2, 0.1, 0.2], [-0.2, 0.0, -0.1, 0.4, -0.1, -0.2, -0.4, 0.1, 0.4], [0.1, -0.3, -0.1, -0.2, 0.2, -0.3, 0.0, 0.1, 0.0], [-0.4, 0.2, 0.0, -0.1, 0.2, 0.4, -0.3, 0.2, -0.1], [0.2, -0.3, -0.3, -0.2, 0.4, -0.2, 0.0, 0.1, 0.1], [0.6, 0.3, -0.2, 0.0, -0.2, 0.1, -0.3, 0.1, 0.1], [0.0, -0.1, 0.1, -0.3, -0.4, -0.2, 0.2, 0.5, -0.1], [-0.4, 0.1, -0.2, 0.0, 0.1, 0.0, 0.2, 0.4, 0.1], [0.3, 0.2, 0.3, -0.2, 0.3, 0.1, -0.1, 0.3, -0.1], [0.3, 0.2, 0.2, -0.2, -0.1, 0.0, 0.0, -0.4, 0.1], [-0.2, 0.0, 0.3, -0.2, 0.4, -0.4, 0.0, -0.2, 0.0], [-0.1, 0.3, -0.4, -0.1, -0.1, -0.4, -0.1, -0.1, -0.2], [0.0, 0.0, -0.6, -0.2, 0.2, 0.4, 0.1, -0.2, 0.0], [0.0, 0.0, -0.1, 0.0, -0.1, -0.4, -0.4, -0.1, -0.2], [-0.1, -0.5, 0.1, 0.1, -0.2, 0.2, 0.0, -0.3, -0.3]]) S = np.array([[39, 0, 0, 0, 0, 0, 0, 0, 0], 
[ 0, 36, 0, 0, 0, 0, 0, 0, 0], [ 0, 0, 32, 0, 0, 0, 0, 0, 0], [ 0, 0, 0, 25, 0, 0, 0, 0, 0], [ 0, 0, 0, 0, 24, 0, 0, 0, 0], [ 0, 0, 0, 0, 0, 23, 0, 0, 0], [ 0, 0, 0, 0, 0, 0, 17, 0, 0], [ 0, 0, 0, 0, 0, 0, 0, 12, 0], [ 0, 0, 0, 0, 0, 0, 0, 0, 6]]) V = np.array([[-0.5, -0.1, 0.6, 0.1, -0.2, 0.2, 0.5, -0.2, -0.1], [0.2, -0.1, 0.2, -0.5, -0.7, -0.4, 0.0, 0.0, 0.1], [0.0, -0.5, 0.1, 0.4, -0.1, 0.1, -0.3, -0.3, 0.6], [-0.5, 0.6, -0.3, 0.3, -0.2, -0.3, 0.0, 0.0, 0.3], [-0.1, -0.2, -0.7, -0.1, -0.3, 0.2, 0.3, -0.4, -0.2], [0.3, -0.1, 0.0, 0.0, 0.4, -0.3, 0.7, -0.2, 0.4], [0.1, -0.3, 0.0, 0.6, -0.1, -0.5, 0.0, 0.0, -0.5], [0.0, 0.2, 0.2, -0.2, 0.3, -0.2, -0.3, -0.8, -0.2], [0.6, 0.5, 0.1, 0.4, -0.3, 0.3, 0.1, -0.2, 0.0]]) expected = 3 actual = pc_needed(U, S, V, 0.57) assert expected == actual def singular_values_for_pca_mean(X): ''' Designed specifically for exam quesiton "Singular Values for PCA Mean" ''' X_zeroed = center_data(X) U, S, Vh = la.svd(X_zeroed) return X_zeroed, S[-1] # Workspace U = np.array([[0.0, -0.1, -0.4, -0.3, 0.4, -0.1, -0.1, -0.1, -0.1, 0.2], [-0.1, -0.1, 0.3, 0.4, 0.5, 0.0, -0.3, 0.0, 0.0, 0.2], [0.1, -0.2, -0.2, -0.1, -0.1, -0.4, -0.1, 0.4, 0.2, 0.1], [0.4, -0.3, -0.2, 0.1, 0.1, 0.3, -0.1, -0.3, -0.1, 0.0], [0.0, -0.3, -0.2, -0.1, -0.3, 0.1, -0.4, -0.3, 0.4, -0.2], [0.0, 0.3, 0.0, 0.1, -0.1, 0.0, -0.5, 0.0, -0.2, 0.5], [0.0, 0.6, -0.1, 0.0, -0.1, 0.1, -0.1, 0.0, 0.4, 0.1], [0.2, 0.0, -0.2, 0.3, 0.1, 0.3, 0.3, -0.1, 0.2, 0.0], [0.0, 0.1, -0.2, -0.4, 0.1, 0.4, 0.2, 0.2, -0.4, 0.0], [0.6, 0.2, -0.2, 0.3, 0.1, -0.3, 0.2, 0.0, 0.0, -0.1], [-0.1, -0.1, -0.1, -0.1, -0.1, -0.3, -0.3, -0.2, -0.2, -0.3], [0.1, 0.2, -0.5, 0.0, 0.0, -0.1, 0.0, -0.1, 0.1, 0.2], [-0.2, -0.2, 0.0, 0.2, -0.5, 0.0, 0.2, -0.2, -0.2, 0.5], [-0.4, -0.1, -0.3, 0.3, 0.2, 0.2, -0.1, 0.3, 0.3, 0.0], [-0.2, -0.3, 0.0, 0.0, 0.0, -0.4, 0.4, -0.1, 0.1, 0.2], [-0.5, 0.3, -0.3, 0.2, 0.1, -0.2, 0.1, -0.4, -0.2, -0.3], [0.0, -0.1, -0.3, 0.4, -0.3, 0.1, -0.2, 0.4, -0.4, -0.1]]) S = np.array([[32, 
0, 0, 0, 0, 0, 0, 0, 0, 0], [ 0, 29, 0, 0, 0, 0, 0, 0, 0, 0], [ 0, 0, 28, 0, 0, 0, 0, 0, 0, 0], [ 0, 0, 0, 25, 0, 0, 0, 0, 0, 0], [ 0, 0, 0, 0, 22, 0, 0, 0, 0, 0], [ 0, 0, 0, 0, 0, 21, 0, 0, 0, 0], [ 0, 0, 0, 0, 0, 0, 19, 0, 0, 0], [ 0, 0, 0, 0, 0, 0, 0, 12, 0, 0], [ 0, 0, 0, 0, 0, 0, 0, 0, 9, 0], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 2]]) V = np.array([[0.0, 0.7, -0.2, 0.0, -0.6, 0.0, 0.2, -0.2, 0.2, -0.1], [-0.1, -0.3, -0.5, 0.0, -0.2, -0.1, -0.1, 0.2, 0.7, 0.0], [0.2, -0.2, -0.3, 0.4, -0.4, -0.4, -0.2, 0.0, -0.4, -0.4], [-0.6, 0.2, -0.1, -0.2, 0.0, -0.5, -0.1, 0.3, -0.2, 0.3], [0.3, 0.2, 0.0, -0.1, 0.1, -0.2, -0.8, -0.3, 0.1, 0.3], [-0.5, -0.3, 0.5, -0.1, -0.3, -0.1, -0.2, -0.5, 0.2, -0.3], [-0.2, 0.2, 0.1, 0.8, 0.3, -0.1, 0.0, -0.1, 0.3, 0.1], [-0.2, -0.1, -0.5, -0.2, 0.4, -0.1, 0.2, -0.6, -0.1, -0.1], [-0.3, 0.2, -0.1, 0.0, 0.2, 0.4, -0.5, 0.3, -0.1, -0.6], [-0.3, -0.2, -0.3, 0.2, -0.3, 0.5, -0.2, -0.1, -0.3, 0.5]]) pc_needed(U, S, V, 0.73) ``` Remember to check for negative signs! ``` # Workspace 2 X = np.array([[-0.273, -1.366, -0.906], [-0.940, -0.957, -0.019], [-2.025, 0.228, -0.696], [0.469, -0.329, -0.059]]) mat, sca = singular_values_for_pca_mean(X) print(mat) print(sca) ```
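A final note on `pc_needed`: it expects $S$ as a dense diagonal matrix, whereas `numpy.linalg.svd` returns the singular values as a 1-D vector. The variance-coverage computation is a one-liner on that vector; a sketch using the nine singular values from the first test case above:

```python
import numpy as np

s = np.array([39, 36, 32, 25, 24, 23, 17, 12, 6], dtype=float)

var_ratio = s**2 / np.sum(s**2)          # fraction of variance per component
cum = np.cumsum(var_ratio)               # cumulative coverage
k = int(np.searchsorted(cum, 0.57) + 1)  # first k with coverage >= 0.57
print(np.round(cum, 3))
print(k)  # 3, matching pc_needed(U, S, V, 0.57)
```

When $S$ does arrive as a diagonal matrix, `np.diag(S)` recovers the vector form first.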
# Transient Fickian Diffusion

The package `OpenPNM` allows for the simulation of many transport phenomena in porous media, such as Stokes flow, Fickian diffusion, advection-diffusion, and transport of charged species. Both transient and steady-state simulations are supported. An example of a transient Fickian diffusion simulation through a `Cubic` pore network is shown here.

First, `OpenPNM` is imported.

```
import numpy as np
import openpnm as op
%config InlineBackend.figure_formats = ['svg']
np.random.seed(10)
%matplotlib inline
np.set_printoptions(precision=5)
```

## Define new workspace and project

```
ws = op.Workspace()
ws.settings["loglevel"] = 40
proj = ws.new_project()
```

## Generate a pore network

An arbitrary `Cubic` 3D pore network is generated, consisting of a layer of $29\times13$ pores with a constant center-to-center spacing of $10^{-4}\,m$.

```
shape = [13, 29, 1]
net = op.network.Cubic(shape=shape, spacing=1e-4, project=proj)
```

## Create a geometry

Next, a geometry object is created for the network. The geometry contains information about the size of the pores and throats in the network, such as length and diameter. `OpenPNM` has many prebuilt geometries that represent the microstructure of different materials, such as Toray090 carbon papers, sandstone, and electrospun fibers. In this example, a simple geometry known as `SpheresAndCylinders`, which assigns random diameter values to pores and throats subject to certain constraints, is used.

```
geo = op.geometry.SpheresAndCylinders(network=net, pores=net.Ps, throats=net.Ts)
```

## Add a phase

Then, a phase (water in this example) is added to the simulation and assigned to the network. The phase contains the physical properties of the fluid considered in the simulation, such as the viscosity. Many predefined phases are available in `OpenPNM`.

```
phase = op.phases.Water(network=net)
```

## Add a physics

Next, a physics object is defined.
The physics object stores information about the different physical models used in the simulation and is assigned to specific network, geometry, and phase objects. This ensures that each physical model only has access to information about the network, geometry, and phase objects to which it is assigned. Models (such as Stokes flow or Fickian diffusion) require information about the network (such as the connectivity between pores), the geometry (such as the pore and throat diameters), and the phase (such as the diffusivity coefficient). ``` phys = op.physics.GenericPhysics(network=net, phase=phase, geometry=geo) ``` The diffusivity coefficient of the considered chemical species in water is also defined. ``` phase['pore.diffusivity'] = 2e-09 ``` ## Defining a new model The physical model, consisting of Fickian diffusion, is defined and attached to the physics object previously defined. ``` mod = op.models.physics.diffusive_conductance.ordinary_diffusion phys.add_model(propname='throat.diffusive_conductance', model=mod, regen_mode='normal') ``` ## Define a transient Fickian diffusion algorithm Here, an algorithm for the simulation of transient Fickian diffusion is defined. It is assigned to the network and phase of interest so that it can retrieve all the information needed to build the systems of linear equations. ``` fd = op.algorithms.TransientFickianDiffusion(network=net, phase=phase) ``` ## Add boundary conditions Next, Dirichlet boundary conditions are added over the back and front boundaries of the network. ``` fd.set_value_BC(pores=net.pores('back'), values=0.5) fd.set_value_BC(pores=net.pores('front'), values=0.2) ``` ## Define initial conditions Initial conditions must be specified when `alg.run` is called, as `alg.run(x0=x0)`, where `x0` can be either a scalar (in which case it is broadcast to all pores) or an array. ## Set up the transient algorithm settings The settings of the transient algorithm are specified here.
When calling `alg.run`, you can pass the following arguments: - `x0`: initial conditions - `tspan`: integration time span - `saveat`: the interval at which the solution is to be stored ``` x0 = 0.2 tspan = (0, 100) saveat = 5 ``` ## Print the algorithm settings One can print the algorithm's settings as shown here. ``` print(fd.settings) ``` Note that `quantity` corresponds to the quantity solved for. ## Run the algorithm The algorithm is run here. ``` soln = fd.run(x0=x0, tspan=tspan, saveat=saveat) ``` ## Post process and export the results Once the simulation has been performed successfully, the solution at every time step is stored within the algorithm object. The algorithm's stored information is printed here. ``` print(fd) ``` Note that the stored solutions at the exported time steps are labeled with the `@` character followed by the time value. Here the solution is exported every $5s$, in addition to the final time step, which is not a multiple of $5$ in this example. To print the solution at $t=10s$: ``` soln(10) ``` The solution is stored in the phase here before export. ``` phase.update(fd.results()) ``` ## Visualization using Matplotlib One can perform post-processing and visualization using the exported files in external software such as `Paraview`. Additionally, the `Python` library `Matplotlib` can be used, as shown here, to plot the concentration color map at steady state. ``` import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1 import make_axes_locatable c = fd.x.reshape(shape) fig, ax = plt.subplots(figsize=(6, 6)) im = ax.imshow(c[:,:,0]) ax.set_title('concentration (mol/m$^3$)') divider = make_axes_locatable(ax) cax = divider.append_axes("right", size="4%", pad=0.1) plt.colorbar(im, cax=cax); ```
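The transient run above combines an initial condition `x0`, fixed-value boundaries, a time span, and a snapshot interval. As a minimal, OpenPNM-independent sketch of these same ingredients — an explicit 1-D finite-difference scheme, not the implicit pore-network solver OpenPNM actually uses — assuming the same diffusivity and boundary values:

```python
import numpy as np

def transient_diffusion_1d(n=25, D=2e-9, L=1e-4, x0=0.2,
                           c_left=0.5, c_right=0.2,
                           tspan=(0.0, 100.0), saveat=5.0):
    """Explicit finite-difference integration of dc/dt = D * d2c/dx2
    on a 1-D domain with fixed-value (Dirichlet) boundaries."""
    dx = L / (n - 1)
    dt = 0.4 * dx**2 / D                 # within the explicit stability limit
    c = np.full(n, float(x0))            # scalar x0 is broadcast to all nodes
    c[0], c[-1] = c_left, c_right        # Dirichlet boundary values
    t, next_save, snaps = tspan[0], tspan[0], {}
    while t < tspan[1]:
        if t >= next_save:               # store a snapshot roughly every `saveat`
            snaps[round(t, 6)] = c.copy()
            next_save += saveat
        c[1:-1] += D * dt / dx**2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])
        t += dt
    snaps[tspan[1]] = c.copy()           # always keep the final state
    return snaps

snaps = transient_diffusion_1d()
print(sorted(snaps))                     # approximate snapshot times
```

By $t=100s$ (roughly twenty diffusion times, $L^2/D = 5s$) the profile has essentially relaxed to the linear steady state between the two boundary values.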
``` epochs = 50 ``` # Part 7 - Federated Learning with FederatedDataset Here we introduce a new tool for using federated datasets. We have created a `FederatedDataset` class, which is intended to be used like the PyTorch Dataset class, and which is given to a federated data loader, `FederatedDataLoader`, that iterates over it in a federated fashion. Authors: - Andrew Trask - Twitter: [@iamtrask](https://twitter.com/iamtrask) - Théo Ryffel - GitHub: [@LaRiffle](https://github.com/LaRiffle) Translator: - Sayantan Das - GitHub: [@ucalyptus](https://github.com/ucalyptus) We use the sandbox that we discovered in the last lesson ``` import torch as th import syft as sy sy.create_sandbox(globals(), verbose=False) ``` We then search for a dataset ``` boston_data = grid.search("#boston", "#data") boston_target = grid.search("#boston", "#target") ``` We load a model and an optimizer ``` n_features = boston_data['alice'][0].shape[1] n_targets = 1 model = th.nn.Linear(n_features, n_targets) ``` Here we cast the data fetched into a `FederatedDataset`. See the workers that hold part of the data. ``` # Cast the result in BaseDatasets datasets = [] for worker in boston_data.keys(): dataset = sy.BaseDataset(boston_data[worker][0], boston_target[worker][0]) datasets.append(dataset) # Build the FederatedDataset object dataset = sy.FederatedDataset(datasets) print(dataset.workers) optimizers = {} for worker in dataset.workers: optimizers[worker] = th.optim.Adam(params=model.parameters(),lr=1e-2) ``` We put it in a `FederatedDataLoader` and specify options ``` train_loader = sy.FederatedDataLoader(dataset, batch_size=32, shuffle=False, drop_last=False) ``` And finally we iterate over epochs. You can see how comparable this is to pure and local PyTorch training!
``` for epoch in range(1, epochs + 1): loss_accum = 0 for batch_idx, (data, target) in enumerate(train_loader): model.send(data.location) optimizer = optimizers[data.location.id] optimizer.zero_grad() pred = model(data) loss = ((pred.view(-1) - target)**2).mean() loss.backward() optimizer.step() model.get() loss = loss.get() loss_accum += float(loss) if batch_idx % 8 == 0: print('Train Epoch: {} [{}/{} ({:.0f}%)]\tBatch loss: {:.6f}'.format( epoch, batch_idx, len(train_loader), 100. * batch_idx / len(train_loader), loss.item())) print('Total loss', loss_accum) ``` # Congratulations!!! - Time to Join the Community! Congratulations on completing this notebook tutorial! If you enjoyed it and would like to join the movement toward privacy preserving, decentralized ownership of AI and the AI supply chain (data), you can do so in the following ways! ### Star PySyft on GitHub The easiest way to help our community is just by starring the repositories! This helps raise awareness of the cool tools we're building. - [Star PySyft](https://github.com/OpenMined/PySyft) ### Join our Slack! The best way to keep up to date on the latest advancements is to join our community! You can do so by filling out the form at [http://slack.openmined.org](http://slack.openmined.org) ### Join a Code Project! The best way to contribute to our community is to become a code contributor! At any time you can go to the PySyft GitHub issues page and filter for "Projects". This will show you all the top-level tickets, giving an overview of which projects you can join!
If you don't want to join a project but would like to do a bit of coding, you can also look for more "one off" mini-projects by searching for GitHub issues marked "good first issue". - [PySyft Projects](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3AProject) - [Good First Issue Tickets](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) ### Donate If you don't have time to contribute to our codebase but would still like to lend support, you can also become a Backer on our Open Collective. All donations go toward our web hosting and other community expenses such as hackathons and meetups! [OpenMined's Open Collective Page](https://opencollective.com/openmined)
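Stripped of PySyft, the pattern in the training loop above — send the model to wherever a batch lives, take a gradient step there, then get the model back — can be sketched with plain NumPy. The worker names, data shapes, and learning rate below are illustrative assumptions, not the tutorial's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical workers, each holding a private shard of (X, y)
true_w = np.array([1.0, -2.0, 0.5])
workers = {}
for name in ("alice", "bob"):
    X = rng.normal(size=(64, 3))
    y = X @ true_w + 0.01 * rng.normal(size=64)
    workers[name] = (X, y)

w = np.zeros(3)                          # shared model parameters
lr = 0.1
for epoch in range(50):
    for name, (X, y) in workers.items():
        # "send" the model to the worker: the update uses only its shard
        grad = 2.0 * X.T @ (X @ w - y) / len(X)
        w -= lr * grad                   # local step, then "get" w back
print(w)                                 # approaches true_w
```

Only the model parameters move between workers; each shard of data stays where it is, which is the core idea of federated learning.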
# records ``` import vectorbt as vbt import numpy as np import pandas as pd from numba import njit from collections import namedtuple from datetime import datetime # Disable caching for performance testing # NOTE: Expect waterfall of executions, since some attributes depend on other attributes # that have to be calculated again and again vbt.settings.caching['enabled'] = False example_dt = np.dtype([ ('id', np.int64), ('idx', np.int64), ('col', np.int64), ('some_field1', np.float64), ('some_field2', np.float64) ], align=True) records_arr = np.asarray([ (0, 0, 0, 10, 21), (1, 1, 0, 11, 20), (2, 2, 0, 12, 19), (3, 0, 1, 13, 18), (4, 1, 1, 14, 17), (5, 2, 1, 13, 18), (6, 0, 2, 12, 19), (7, 1, 2, 11, 20), (8, 2, 2, 10, 21) ], dtype=example_dt) print(records_arr) print(records_arr.shape) columns = pd.MultiIndex.from_arrays([[0, 1, 1, 1], ['a', 'b', 'c', 'd']], names=['lvl1', 'lvl2']) wrapper = vbt.ArrayWrapper(index=['x', 'y', 'z'], columns=columns, ndim=2, freq='1 days') records = vbt.Records(wrapper, records_arr) records_grouped = vbt.Records(wrapper.copy(group_by=0), records_arr) big_records_arr = np.asarray(list(zip(*( np.arange(1000000), np.tile(np.arange(1000), 1000), np.repeat(np.arange(1000), 1000), np.random.randint(0, 100, size=1000000), np.random.randint(0, 100, size=1000000)))), dtype=example_dt) print(big_records_arr.shape) big_columns = pd.MultiIndex.from_arrays([np.repeat(np.array([0, 1]), 500), np.arange(1000)], names=['lvl1', 'lvl2']) big_wrapper = vbt.ArrayWrapper(index=np.arange(1000), columns=big_columns, ndim=2, freq='1 days') big_records = vbt.Records(big_wrapper, big_records_arr) big_records_grouped = vbt.Records(big_wrapper.copy(group_by=0), big_records_arr) records_nosort = records.copy(records_arr=records.records_arr[::-1]) print(records_nosort.records_arr) big_records_nosort = big_records.copy(records_arr=big_records.records_arr[::-1]) group_by = pd.Series(['first', 'first', 'second', 'second'], name='group') big_group_by = 
pd.Series(np.repeat(np.array([0, 1]), 500)) ``` ## ColumnMapper ``` print(records.col_mapper.col_arr) print(records.col_mapper.get_col_arr()) print(records_grouped.col_mapper.get_col_arr()) %timeit big_records_grouped.col_mapper.get_col_arr() print(records.col_mapper.col_range) %timeit big_records.col_mapper.col_range print(records.col_mapper.get_col_range()) print(records_grouped.col_mapper.get_col_range()) %timeit big_records_grouped.col_mapper.get_col_range() print(records.col_mapper.col_map) %timeit big_records.col_mapper.col_map print(records.col_mapper.get_col_map()) print(records_grouped.col_mapper.get_col_map()) %timeit big_records_grouped.col_mapper.get_col_map() print(records.col_mapper.is_sorted()) %timeit big_records.col_mapper.is_sorted() print(records_nosort.col_mapper.is_sorted()) %timeit big_records_nosort.col_mapper.is_sorted() ``` ## MappedArray ``` mapped_array = records.map_field('some_field1') big_mapped_array = big_records.map_field('some_field1') mapped_array_nosort = records_nosort.map_field('some_field1') big_mapped_array_nosort = big_records_nosort.map_field('some_field1') mapped_array_grouped = records_grouped.map_field('some_field1') big_mapped_array_grouped = big_records_grouped.map_field('some_field1') print(mapped_array[(0, 'a')].values) print(mapped_array[(0, 'a')].col_arr) print(mapped_array[(0, 'a')].wrapper.columns) print(mapped_array[(1, 'b')].values) print(mapped_array[(1, 'b')].col_arr) print(mapped_array[(1, 'b')].wrapper.columns) print(mapped_array[[(0, 'a'), (0, 'a')]].values) print(mapped_array[[(0, 'a'), (0, 'a')]].col_arr) print(mapped_array[[(0, 'a'), (0, 'a')]].wrapper.columns) print(mapped_array[[(0, 'a'), (1, 'b')]].values) print(mapped_array[[(0, 'a'), (1, 'b')]].col_arr) print(mapped_array[[(0, 'a'), (1, 'b')]].wrapper.columns) %timeit big_mapped_array.iloc[0] %timeit big_mapped_array.iloc[:] print(mapped_array_nosort[(0, 'a')].values) print(mapped_array_nosort[(0, 'a')].col_arr) print(mapped_array_nosort[(0, 
'a')].wrapper.columns) print(mapped_array_nosort[(1, 'b')].values) print(mapped_array_nosort[(1, 'b')].col_arr) print(mapped_array_nosort[(1, 'b')].wrapper.columns) print(mapped_array_nosort[[(0, 'a'), (0, 'a')]].values) print(mapped_array_nosort[[(0, 'a'), (0, 'a')]].col_arr) print(mapped_array_nosort[[(0, 'a'), (0, 'a')]].wrapper.columns) print(mapped_array_nosort[[(0, 'a'), (1, 'b')]].values) print(mapped_array_nosort[[(0, 'a'), (1, 'b')]].col_arr) print(mapped_array_nosort[[(0, 'a'), (1, 'b')]].wrapper.columns) %timeit big_mapped_array_nosort.iloc[0] %timeit big_mapped_array_nosort.iloc[:] print(mapped_array_grouped[0].wrapper.columns) # indexing on groups, not columns! print(mapped_array_grouped[0].wrapper.ndim) print(mapped_array_grouped[0].wrapper.grouped_ndim) print(mapped_array_grouped[0].wrapper.grouper.group_by) print(mapped_array_grouped[1].wrapper.columns) print(mapped_array_grouped[1].wrapper.ndim) print(mapped_array_grouped[1].wrapper.grouped_ndim) print(mapped_array_grouped[1].wrapper.grouper.group_by) print(mapped_array_grouped[[0]].wrapper.columns) print(mapped_array_grouped[[0]].wrapper.ndim) print(mapped_array_grouped[[0]].wrapper.grouped_ndim) print(mapped_array_grouped[[0]].wrapper.grouper.group_by) print(mapped_array_grouped[[0, 1]].wrapper.columns) print(mapped_array_grouped[[0, 1]].wrapper.ndim) print(mapped_array_grouped[[0, 1]].wrapper.grouped_ndim) print(mapped_array_grouped[[0, 1]].wrapper.grouper.group_by) %timeit big_mapped_array_grouped.iloc[0] %timeit big_mapped_array_grouped.iloc[:] print(mapped_array.wrapper.index) print(mapped_array.wrapper.columns) print(mapped_array.wrapper.ndim) print(mapped_array.wrapper.grouper.group_by) print(mapped_array_grouped.wrapper.index) print(mapped_array_grouped.wrapper.columns) print(mapped_array_grouped.wrapper.ndim) print(mapped_array_grouped.wrapper.grouper.group_by) print(mapped_array.values) print(mapped_array.col_arr) print(mapped_array.id_arr) print(mapped_array.idx_arr) 
print(mapped_array.is_sorted()) %timeit big_mapped_array.is_sorted() print(mapped_array_nosort.is_sorted()) %timeit big_mapped_array_nosort.is_sorted() print(mapped_array.is_sorted(incl_id=True)) %timeit big_mapped_array.is_sorted(incl_id=True) print(mapped_array_nosort.is_sorted(incl_id=True)) %timeit big_mapped_array_nosort.is_sorted(incl_id=True) print(mapped_array.sort().col_arr) print(mapped_array.sort().id_arr) %timeit big_mapped_array.sort() print(mapped_array_nosort.sort().col_arr) print(mapped_array_nosort.sort().id_arr) %timeit big_mapped_array_nosort.sort() print(mapped_array.sort(incl_id=True).col_arr) print(mapped_array.sort(incl_id=True).id_arr) %timeit big_mapped_array.sort(incl_id=True) print(mapped_array_nosort.sort(incl_id=True).col_arr) print(mapped_array_nosort.sort(incl_id=True).id_arr) %timeit big_mapped_array_nosort.sort(incl_id=True) mask = mapped_array.values >= mapped_array.values.mean() print(mapped_array.apply_mask(mask).values) big_mask = big_mapped_array.values >= big_mapped_array.values.mean() %timeit big_mapped_array.apply_mask(big_mask) %timeit big_mapped_array_nosort.apply_mask(big_mask) @njit def every_2_nb(inout, idxs, col, mapped_arr): inout[idxs[::2]] = True print(mapped_array.map_to_mask(every_2_nb)) %timeit big_mapped_array.map_to_mask(every_2_nb) %timeit big_mapped_array_nosort.map_to_mask(every_2_nb) print(mapped_array.values) print(mapped_array.top_n_mask(1)) %timeit big_mapped_array.top_n_mask(100) print(mapped_array.bottom_n_mask(1)) %timeit big_mapped_array.bottom_n_mask(100) print(mapped_array.top_n(1).values) %timeit big_mapped_array.top_n(100) print(mapped_array.bottom_n(1).values) %timeit big_mapped_array.bottom_n(100) print(mapped_array.is_expandable()) %timeit big_mapped_array.is_expandable() print(mapped_array.to_pd()) print(mapped_array.to_pd(fill_value=0.)) %timeit big_mapped_array.to_pd() print(mapped_array[(0, 'a')].to_pd(ignore_index=True)) %timeit big_mapped_array[0].to_pd(ignore_index=True) 
print(mapped_array.to_pd(ignore_index=True)) %timeit big_mapped_array.to_pd(ignore_index=True) print(mapped_array_grouped.to_pd(ignore_index=True)) print(mapped_array_grouped.to_pd(ignore_index=True, fill_value=0)) %timeit big_mapped_array_grouped.to_pd(ignore_index=True) @njit def mean_reduce_nb(col, a): return np.mean(a) print(mapped_array[(0, 'a')].reduce(mean_reduce_nb)) print(mapped_array[[(0, 'a'), (1, 'b')]].reduce(mean_reduce_nb)) print(mapped_array.reduce(mean_reduce_nb)) print(mapped_array.reduce(mean_reduce_nb, fill_value=0.)) print(mapped_array.reduce(mean_reduce_nb, fill_value=0., wrap_kwargs=dict(dtype=np.int_))) print(mapped_array.reduce(mean_reduce_nb, wrap_kwargs=dict(to_timedelta=True))) %timeit big_mapped_array.reduce(mean_reduce_nb) %timeit big_mapped_array_nosort.reduce(mean_reduce_nb) print(mapped_array_grouped[0].reduce(mean_reduce_nb)) print(mapped_array_grouped[[0]].reduce(mean_reduce_nb)) print(mapped_array_grouped.reduce(mean_reduce_nb)) print(mapped_array_grouped.reduce(mean_reduce_nb, group_by=False)) %timeit big_mapped_array_grouped.reduce(mean_reduce_nb) @njit def argmin_reduce_nb(col, a): return np.argmin(a) print(mapped_array.reduce(argmin_reduce_nb, returns_idx=True)) %timeit big_mapped_array.reduce(argmin_reduce_nb, returns_idx=True) print(mapped_array.reduce(argmin_reduce_nb, returns_idx=True, to_index=False)) %timeit big_mapped_array.reduce(argmin_reduce_nb, returns_idx=True, to_index=False) print(mapped_array_grouped.reduce(argmin_reduce_nb, returns_idx=True)) %timeit big_mapped_array_grouped.reduce(argmin_reduce_nb, returns_idx=True) @njit def min_max_reduce_nb(col, a): return np.array([np.min(a), np.max(a)]) print(mapped_array[(0, 'a')].reduce(min_max_reduce_nb, returns_array=True)) print(mapped_array[[(0, 'a'), (1, 'b')]].reduce(min_max_reduce_nb, returns_array=True)) print(mapped_array.reduce(min_max_reduce_nb, returns_array=True)) print(mapped_array.reduce(min_max_reduce_nb, returns_array=True, 
wrap_kwargs=dict(name_or_index=['min', 'max']))) print(mapped_array.reduce(min_max_reduce_nb, returns_array=True, wrap_kwargs=dict(name_or_index=['min', 'max']), fill_value=0.)) print(mapped_array.reduce(min_max_reduce_nb, returns_array=True, wrap_kwargs=dict(to_timedelta=True))) %timeit big_mapped_array.reduce(min_max_reduce_nb, returns_array=True) print(mapped_array_grouped[0].reduce(min_max_reduce_nb, returns_array=True)) print(mapped_array_grouped[[0]].reduce(min_max_reduce_nb, returns_array=True)) print(mapped_array_grouped.reduce(min_max_reduce_nb, returns_array=True)) print(mapped_array_grouped.reduce(min_max_reduce_nb, returns_array=True, group_by=False)) %timeit big_mapped_array_grouped.reduce(min_max_reduce_nb, returns_array=True) @njit def idxmin_idxmax_reduce_nb(col, a): return np.array([np.argmin(a), np.argmax(a)]) print(mapped_array.reduce(idxmin_idxmax_reduce_nb, returns_array=True, returns_idx=True)) %timeit big_mapped_array.reduce(idxmin_idxmax_reduce_nb, returns_array=True, returns_idx=True) print(mapped_array.reduce(idxmin_idxmax_reduce_nb, returns_array=True, returns_idx=True, to_index=False)) %timeit big_mapped_array.reduce(idxmin_idxmax_reduce_nb, returns_array=True, returns_idx=True, to_index=False) print(mapped_array_grouped.reduce(idxmin_idxmax_reduce_nb, returns_array=True, returns_idx=True)) %timeit big_mapped_array_grouped.reduce(idxmin_idxmax_reduce_nb, returns_array=True, returns_idx=True) print(mapped_array.nth(0)) print(mapped_array.nth(-1)) %timeit big_mapped_array.nth(0) print(mapped_array_grouped.nth(0)) %timeit big_mapped_array_grouped.nth(0) print(mapped_array.to_pd().vbt.min()) %timeit big_mapped_array.to_pd().vbt.min() print(mapped_array.min()) %timeit big_mapped_array.min() print(mapped_array_grouped.min()) %timeit big_mapped_array_grouped.min() print(mapped_array.to_pd().vbt.max()) %timeit big_mapped_array.to_pd().vbt.max() print(mapped_array.max()) %timeit big_mapped_array.max() print(mapped_array_grouped.max()) %timeit 
big_mapped_array_grouped.max() print(mapped_array.to_pd().vbt.mean()) %timeit big_mapped_array.to_pd().vbt.mean() print(mapped_array.mean()) %timeit big_mapped_array.mean() print(mapped_array_grouped.mean()) %timeit big_mapped_array_grouped.mean() print(mapped_array.to_pd().vbt.median()) %timeit big_mapped_array.to_pd().vbt.median() print(mapped_array.median()) %timeit big_mapped_array.median() print(mapped_array_grouped.median()) %timeit big_mapped_array_grouped.median() print(mapped_array.to_pd().vbt.std()) print(mapped_array.to_pd().vbt.std(ddof=0)) %timeit big_mapped_array.to_pd().vbt.std() print(mapped_array.std()) print(mapped_array.std(ddof=0)) %timeit big_mapped_array.std() print(mapped_array_grouped.std()) %timeit big_mapped_array_grouped.std() print(mapped_array.to_pd().vbt.sum()) %timeit big_mapped_array.to_pd().vbt.sum() print(mapped_array.sum()) %timeit big_mapped_array.sum() print(mapped_array_grouped.sum()) %timeit big_mapped_array_grouped.sum() print(mapped_array.to_pd().vbt.idxmin()) %timeit big_mapped_array.to_pd().vbt.idxmin() print(mapped_array.idxmin()) %timeit big_mapped_array.idxmin() print(mapped_array_grouped.idxmin()) %timeit big_mapped_array_grouped.idxmin() print(mapped_array.to_pd().vbt.idxmax()) %timeit big_mapped_array.to_pd().vbt.idxmax() print(mapped_array.idxmax()) %timeit big_mapped_array.idxmax() print(mapped_array_grouped.idxmax()) %timeit big_mapped_array_grouped.idxmax() print(mapped_array.to_pd().vbt.describe()) print(mapped_array.to_pd().vbt.describe(percentiles=[0.3, 0.7])) %timeit big_mapped_array.to_pd().vbt.describe() print(mapped_array.describe()) print(mapped_array.describe(percentiles=[0.3, 0.7])) %timeit big_mapped_array.describe() print(mapped_array_grouped.describe()) %timeit big_mapped_array_grouped.describe() print(mapped_array.to_pd().vbt.count()) %timeit big_mapped_array.to_pd().vbt.count() print(mapped_array.count()) %timeit big_mapped_array.count() print(mapped_array_grouped.count()) %timeit 
big_mapped_array_grouped.count() mapping = {x: str(x) for x in np.unique(mapped_array.values)} big_mapping = {x: str(x) for x in np.unique(big_mapped_array.values)} print(mapped_array[(0, 'a')].value_counts()) %timeit big_mapped_array[0].value_counts() print(mapped_array[(0, 'a')].value_counts(mapping=mapping)) %timeit big_mapped_array[0].value_counts(mapping=big_mapping) print(mapped_array.value_counts()) %timeit big_mapped_array.value_counts() print(mapped_array.value_counts(mapping=mapping)) %timeit big_mapped_array.value_counts(mapping=big_mapping) print(mapped_array_grouped.value_counts()) %timeit big_mapped_array_grouped.value_counts() print(mapped_array[(0, 'a')].stats()) %timeit big_mapped_array[0].stats(silence_warnings=True) print(mapped_array.stats(column=(0, 'a'))) %timeit big_mapped_array.stats(column=0, silence_warnings=True) print(mapped_array.stats()) %timeit big_mapped_array.stats(silence_warnings=True) print(mapped_array.copy(mapping=mapping).stats()) %timeit big_mapped_array.copy(mapping=big_mapping).stats(silence_warnings=True) mapped_array[(0, 'a')].histplot().show_svg() mapped_array.histplot().show_svg() mapped_array_grouped.histplot().show_svg() mapped_array[(0, 'a')].boxplot().show_svg() mapped_array.boxplot().show_svg() mapped_array_grouped.boxplot().show_svg() mapped_array[(0, 'a')].plots().show_svg() mapped_array.plots().show_svg() ``` ## Records ``` print(records[(0, 'a')].values) print(records[(0, 'a')].wrapper.columns) print(records[(1, 'b')].values) print(records[(1, 'b')].wrapper.columns) print(records[[(0, 'a'), (0, 'a')]].values) print(records[[(0, 'a'), (0, 'a')]].wrapper.columns) print(records[[(0, 'a'), (1, 'b')]].values) print(records[[(0, 'a'), (1, 'b')]].wrapper.columns) %timeit big_records.iloc[0] %timeit big_records.iloc[:] print(records_nosort[(0, 'a')].values) print(records_nosort[(0, 'a')].wrapper.columns) print(records_nosort[(1, 'b')].values) print(records_nosort[(1, 'b')].wrapper.columns) print(records_nosort[[(0, 
'a'), (0, 'a')]].values) print(records_nosort[[(0, 'a'), (0, 'a')]].wrapper.columns) print(records_nosort[[(0, 'a'), (1, 'b')]].values) print(records_nosort[[(0, 'a'), (1, 'b')]].wrapper.columns) %timeit big_records_nosort.iloc[0] %timeit big_records_nosort.iloc[:] print(records_grouped[0].wrapper.columns) # indexing on groups, not columns! print(records_grouped[0].wrapper.ndim) print(records_grouped[0].wrapper.grouped_ndim) print(records_grouped[0].wrapper.grouper.group_by) print(records_grouped[1].wrapper.columns) print(records_grouped[1].wrapper.ndim) print(records_grouped[1].wrapper.grouped_ndim) print(records_grouped[1].wrapper.grouper.group_by) print(records_grouped[[0]].wrapper.columns) print(records_grouped[[0]].wrapper.ndim) print(records_grouped[[0]].wrapper.grouped_ndim) print(records_grouped[[0]].wrapper.grouper.group_by) print(records_grouped[[0, 1]].wrapper.columns) print(records_grouped[[0, 1]].wrapper.ndim) print(records_grouped[[0, 1]].wrapper.grouped_ndim) print(records_grouped[[0, 1]].wrapper.grouper.group_by) %timeit big_records_grouped.iloc[0] %timeit big_records_grouped.iloc[:] print(records.wrapper.index) print(records.wrapper.columns) print(records.wrapper.ndim) print(records.wrapper.grouper.group_by) print(records_grouped.wrapper.index) print(records_grouped.wrapper.columns) print(records_grouped.wrapper.ndim) print(records_grouped.wrapper.grouper.group_by) print(records.values) print(records.recarray) %timeit big_records.recarray print(records.records) print(records.is_sorted()) %timeit big_records.is_sorted() print(records_nosort.is_sorted()) %timeit big_records_nosort.is_sorted() print(records.is_sorted(incl_id=True)) %timeit big_records.is_sorted(incl_id=True) print(records_nosort.is_sorted(incl_id=True)) %timeit big_records_nosort.is_sorted(incl_id=True) print(records.sort().records_arr) %timeit big_records.sort() print(records_nosort.sort().records_arr) %timeit big_records_nosort.sort() print(records.sort(incl_id=True).records_arr) 
%timeit big_records.sort(incl_id=True) print(records_nosort.sort(incl_id=True).records_arr) %timeit big_records_nosort.sort(incl_id=True) mask = records.values['some_field1'] >= records.values['some_field1'].mean() print(records.apply_mask(mask).values) big_mask = big_records.values['some_field1'] >= big_records.values['some_field1'].mean() %timeit big_records.apply_mask(big_mask) @njit def map_nb(record): return record.some_field1 + record.some_field2 * 2 print(records.map(map_nb).sum()) print(records_grouped.map(map_nb).sum()) print(records_grouped.map(map_nb, group_by=False).sum()) %timeit vbt.records.MappedArray(\ big_wrapper,\ big_records_arr['some_field1'] + big_records_arr['some_field2'] * 2,\ big_records_arr['col'],\ ) %timeit big_records.map(map_nb) # faster print(records.map_field('col').values) print(records.map_field('idx').values) print(records.map_field('some_field1').values) print(records.map_field('some_field2').values) print(records.map_field('some_field1').sum()) print(records_grouped.map_field('some_field1').sum()) print(records_grouped.map_field('some_field1', group_by=False).sum()) %timeit big_records.map_field('some_field1') print(records.map_array(records_arr['some_field1'] + records_arr['some_field2'] * 2).values) print(records_grouped.map_array(records_arr['some_field1'] + records_arr['some_field2'] * 2).sum()) print(records_grouped.map_array(records_arr['some_field1'] + records_arr['some_field2'] * 2, group_by=False).sum()) %timeit big_records.map_array(big_records_arr['some_field1'] + big_records_arr['some_field2'] * 2) print(records.count()) print(records_grouped.count()) print(records_grouped.count(group_by=False)) %timeit big_records.count() %timeit big_records_grouped.count() filter_mask = np.array([True, False, False, False, False, False, False, False, True]) print(records.apply_mask(filter_mask).count()) print(records_grouped.apply_mask(filter_mask).count()) print(records_grouped.apply_mask(filter_mask, group_by=False).count()) 
filtered_records = records.apply_mask(filter_mask) print(filtered_records.records) print(filtered_records[(0, 'a')].records) print(filtered_records[(0, 'a')].map_field('some_field1').values) print(filtered_records[(0, 'a')].map_field('some_field1').min()) print(filtered_records[(0, 'a')].count()) print(filtered_records[(1, 'b')].records) print(filtered_records[(1, 'b')].map_field('some_field1').values) print(filtered_records[(1, 'b')].map_field('some_field1').min()) print(filtered_records[(1, 'b')].count()) print(filtered_records[(1, 'c')].records) print(filtered_records[(1, 'c')].map_field('some_field1').values) print(filtered_records[(1, 'c')].map_field('some_field1').min()) print(filtered_records[(1, 'c')].count()) print(filtered_records[(1, 'd')].records) print(filtered_records[(1, 'd')].map_field('some_field1').values) print(filtered_records[(1, 'd')].map_field('some_field1').min()) print(filtered_records[(1, 'd')].count()) print(records[(0, 'a')].stats()) %timeit big_records[0].stats(silence_warnings=True) print(records.stats(column=(0, 'a'))) %timeit big_records.stats(column=0, silence_warnings=True) print(records.stats()) %timeit big_records.stats(silence_warnings=True) records[(0, 'a')].plots() ``` ## Drawdowns ``` ts = pd.DataFrame({ 'a': [2, 1, 3, 1, 4, 1], 'b': [1, 2, 1, 3, 1, 4], 'c': [1, 2, 3, 2, 1, 2], 'd': [1, 2, 3, 4, 5, 6] }, index=[ datetime(2020, 1, 1), datetime(2020, 1, 2), datetime(2020, 1, 3), datetime(2020, 1, 4), datetime(2020, 1, 5), datetime(2020, 1, 6) ]) np.random.seed(42) big_ts = pd.DataFrame(np.random.randint(1, 10, size=(1000, 1000))) drawdowns = vbt.Drawdowns.from_ts(ts, freq='1 days') print(drawdowns.values.shape) big_drawdowns = vbt.Drawdowns.from_ts(big_ts, freq='1 days') print(big_drawdowns.values.shape) %timeit vbt.Drawdowns.from_ts(big_ts, freq='1 days') print(drawdowns.records) print(drawdowns.ts) print(drawdowns['a'].records) print(drawdowns['a'].ts) %timeit big_drawdowns.iloc[0] %timeit big_drawdowns.iloc[:] 
print(drawdowns.records_readable)
print(drawdowns['a'].count())
print(drawdowns.count())
print(drawdowns.count(group_by=group_by))
print(drawdowns['a'].drawdown.to_pd())
print(drawdowns.drawdown.to_pd())
%timeit big_drawdowns.drawdown
print(drawdowns['a'].avg_drawdown())
print(drawdowns.avg_drawdown())
%timeit big_drawdowns.avg_drawdown()
print(drawdowns.avg_drawdown(group_by=group_by))
%timeit big_drawdowns.avg_drawdown(group_by=big_group_by)
print(drawdowns['a'].max_drawdown())
print(drawdowns.max_drawdown())
%timeit big_drawdowns.max_drawdown()
print(drawdowns.max_drawdown(group_by=group_by))
%timeit big_drawdowns.max_drawdown(group_by=big_group_by)
print(drawdowns['a'].recovery_return.to_pd())
print(drawdowns.recovery_return.to_pd())
%timeit big_drawdowns.recovery_return
print(drawdowns['a'].avg_recovery_return())
print(drawdowns.avg_recovery_return())
%timeit big_drawdowns.avg_recovery_return()
print(drawdowns.avg_recovery_return(group_by=group_by))
%timeit big_drawdowns.avg_recovery_return(group_by=big_group_by)
print(drawdowns['a'].max_recovery_return())
print(drawdowns.max_recovery_return())
%timeit big_drawdowns.max_recovery_return()
print(drawdowns.max_recovery_return(group_by=group_by))
%timeit big_drawdowns.max_recovery_return(group_by=big_group_by)
print(drawdowns['a'].duration.to_pd())
print(drawdowns.duration.to_pd())
%timeit big_drawdowns.duration
print(drawdowns['a'].avg_duration())
print(drawdowns.avg_duration())
%timeit big_drawdowns.avg_duration()
print(drawdowns.avg_duration(group_by=group_by))
%timeit big_drawdowns.avg_duration(group_by=big_group_by)
print(drawdowns['a'].max_duration())
print(drawdowns.max_duration())
%timeit big_drawdowns.max_duration()
print(drawdowns.max_duration(group_by=group_by))
%timeit big_drawdowns.max_duration(group_by=big_group_by)
print(drawdowns['a'].coverage())
print(drawdowns.coverage())
%timeit big_drawdowns.coverage()
print(drawdowns.coverage(group_by=group_by))
%timeit big_drawdowns.coverage(group_by=big_group_by)
print(drawdowns['a'].decline_duration.to_pd())
print(drawdowns.decline_duration.to_pd())
%timeit big_drawdowns.decline_duration
print(drawdowns['a'].recovery_duration.to_pd())
print(drawdowns.recovery_duration.to_pd())
%timeit big_drawdowns.recovery_duration
print(drawdowns['a'].recovery_duration_ratio.to_pd())
print(drawdowns.recovery_duration_ratio.to_pd())
%timeit big_drawdowns.recovery_duration_ratio
print(drawdowns.active)
print(drawdowns['a'].active.records)
print(drawdowns.active['a'].records)
print(drawdowns.active.records)
%timeit big_drawdowns.active
print(drawdowns.recovered)
print(drawdowns['a'].recovered.records)
print(drawdowns.recovered['a'].records)
print(drawdowns.recovered.records)
%timeit big_drawdowns.recovered
print(drawdowns['a'].active_drawdown())
print(drawdowns.active_drawdown())
%timeit big_drawdowns.active_drawdown()
print(drawdowns['a'].active_duration())
print(drawdowns.active_duration())
%timeit big_drawdowns.active_duration()
print(drawdowns['a'].active_recovery_return())
print(drawdowns.active_recovery_return())
%timeit big_drawdowns.active_recovery_return()
print(drawdowns['a'].active_recovery_duration())
print(drawdowns.active_recovery_duration())
%timeit big_drawdowns.active_recovery_duration()
print(drawdowns['a'].stats())
%timeit big_drawdowns[0].stats(silence_warnings=True)
print(drawdowns.stats(column='a'))
%timeit big_drawdowns.stats(column=0, silence_warnings=True)
print(drawdowns.stats())
%timeit big_drawdowns.stats(silence_warnings=True)
drawdowns['a'].plot().show_svg()
drawdowns.plot(column='a', top_n=1).show_svg()
drawdowns['a'].plots().show_svg()
```

## Orders

```
close = pd.Series([1, 2, 3, 4, 5, 6, 7, 8], index=[
    datetime(2020, 1, 1),
    datetime(2020, 1, 2),
    datetime(2020, 1, 3),
    datetime(2020, 1, 4),
    datetime(2020, 1, 5),
    datetime(2020, 1, 6),
    datetime(2020, 1, 7),
    datetime(2020, 1, 8)
]).vbt.tile(4, keys=['a', 'b', 'c', 'd'])
print(close)
big_close = pd.DataFrame(np.random.uniform(1, 10, size=(1000, 1000)))

from vectorbt.portfolio.enums import order_dt

records_arr = np.asarray([
    (0, 0, 0, 1. , 1., 0.01 , 0),
    (1, 0, 1, 0.1, 2., 0.002, 0),
    (2, 0, 2, 1. , 3., 0.03 , 1),
    (3, 0, 3, 0.1, 4., 0.004, 1),
    (4, 0, 5, 1. , 6., 0.06 , 0),
    (5, 0, 6, 1. , 7., 0.07 , 1),
    (6, 0, 7, 2. , 8., 0.16 , 0),
    (7, 1, 0, 1. , 1., 0.01 , 1),
    (8, 1, 1, 0.1, 2., 0.002, 1),
    (9, 1, 2, 1. , 3., 0.03 , 0),
    (10, 1, 3, 0.1, 4., 0.004, 0),
    (11, 1, 5, 1. , 6., 0.06 , 1),
    (12, 1, 6, 1. , 7., 0.07 , 0),
    (13, 1, 7, 2. , 8., 0.16 , 1),
    (14, 2, 0, 1. , 1., 0.01 , 0),
    (15, 2, 1, 0.1, 2., 0.002, 0),
    (16, 2, 2, 1. , 3., 0.03 , 1),
    (17, 2, 3, 0.1, 4., 0.004, 1),
    (18, 2, 5, 1. , 6., 0.06 , 0),
    (19, 2, 6, 2. , 7., 0.14 , 1),
    (20, 2, 7, 2. , 8., 0.16 , 0)
], dtype=order_dt)
print(records_arr.shape)
wrapper = vbt.ArrayWrapper.from_obj(close, freq='1 days')
orders = vbt.Orders(wrapper, records_arr, close)
orders_grouped = vbt.Orders(wrapper.regroup(group_by), records_arr, close)

big_records_arr = np.asarray(list(zip(*(
    np.arange(1000000),
    np.tile(np.arange(1000), 1000),
    np.repeat(np.arange(1000), 1000),
    np.full(1000000, 10),
    np.random.uniform(1, 10, size=1000000),
    np.full(1000000, 1),
    np.full(1000000, 1)
))), dtype=order_dt)
big_records_arr['side'][::2] = 0
print(big_records_arr.shape)
big_wrapper = vbt.ArrayWrapper.from_obj(big_close, freq='1 days')
big_orders = vbt.Orders(big_wrapper, big_records_arr, big_close)
big_orders_grouped = vbt.Orders(big_wrapper.copy(group_by=big_group_by), big_records_arr, big_close)

print(orders.records)
print(orders.close)
print(orders['a'].records)
print(orders['a'].close)
%timeit big_orders.iloc[0]
%timeit big_orders.iloc[:]
%timeit big_orders_grouped.iloc[0]
%timeit big_orders_grouped.iloc[:]
print(orders.records_readable)
print(orders.buy)
print(orders['a'].buy.records)
print(orders.buy['a'].records)
print(orders.buy.records)
%timeit big_orders.buy
print(orders.sell)
print(orders['a'].sell.records)
print(orders.sell['a'].records)
print(orders.sell.records)
%timeit big_orders.sell
print(orders['a'].stats())
%timeit big_orders[0].stats(silence_warnings=True)
print(orders.stats(column='a'))
%timeit big_orders.stats(column=0, silence_warnings=True)
print(orders.stats())
%timeit big_orders.stats(silence_warnings=True)
orders['a'].plot().show_svg()
orders.plot(column='a').show_svg()
orders.plots(column='a').show_svg()
```

## Trades

```
trades = vbt.ExitTrades.from_orders(orders)
trades_grouped = vbt.ExitTrades.from_orders(orders_grouped)
print(trades.values.shape)
big_trades = vbt.ExitTrades.from_orders(big_orders)
big_trades_grouped = vbt.ExitTrades.from_orders(big_orders_grouped)
print(big_trades.values.shape)
%timeit vbt.ExitTrades.from_orders(big_orders)
print(trades.records)
print(trades.close)
print(trades['a'].records)
print(trades['a'].close)
%timeit big_trades.iloc[0]
%timeit big_trades.iloc[:]
%timeit big_trades_grouped.iloc[0]
%timeit big_trades_grouped.iloc[:]
print(trades.records_readable)
print(trades['a'].count())
print(trades.count())
%timeit big_trades.count()
print(trades_grouped.count())
%timeit big_trades_grouped.count()
print(trades.winning)
print(trades['a'].winning.records)
print(trades.winning['a'].records)
print(trades.winning.records)
%timeit big_trades.winning
print(trades.losing)
print(trades['a'].losing.records)
print(trades.losing['a'].records)
print(trades.losing.records)
%timeit big_trades.losing
print(trades['a'].winning_streak.to_pd(ignore_index=True))
print(trades.winning_streak.to_pd(ignore_index=True))
%timeit big_trades.winning_streak
print(trades['a'].losing_streak.to_pd(ignore_index=True))
print(trades.losing_streak.to_pd(ignore_index=True))
%timeit big_trades.losing_streak
print(trades['a'].win_rate())
print(trades.win_rate())
%timeit big_trades.win_rate()
print(trades.win_rate(group_by=group_by))
%timeit big_trades.win_rate(group_by=big_group_by)
print(trades['a'].profit_factor())
print(trades.profit_factor())
%timeit big_trades.profit_factor()
print(trades_grouped.profit_factor())
%timeit big_trades_grouped.profit_factor()
print(trades['a'].expectancy())
print(trades.expectancy())
%timeit big_trades.expectancy()
print(trades_grouped.expectancy())
%timeit big_trades_grouped.expectancy()
print(trades['a'].sqn())
print(trades.sqn())
%timeit big_trades.sqn()
print(trades_grouped.sqn())
%timeit big_trades_grouped.sqn()
print(trades.long)
print(trades['a'].long.records)
print(trades.long['a'].records)
print(trades.long.records)
%timeit big_trades.long
print(trades.short)
print(trades['a'].short.records)
print(trades.short['a'].records)
print(trades.short.records)
%timeit big_trades.short
print(trades.open)
print(trades['a'].open.records)
print(trades.open['a'].records)
print(trades.open.records)
%timeit big_trades.open
print(trades.closed)
print(trades['a'].closed.records)
print(trades.closed['a'].records)
print(trades.closed.records)
%timeit big_trades.closed
print(trades['a'].stats())
%timeit big_trades[0].stats(silence_warnings=True)
print(trades.stats(column='a'))
%timeit big_trades.stats(column=0, silence_warnings=True)
print(trades.stats())
%timeit big_trades.stats(silence_warnings=True)
trades['c'].plot().show_svg()
trades['c'].plot_pnl().show_svg()
trades.plots(column='c').show_svg()
```

## Positions

```
positions = vbt.Positions.from_trades(trades)
positions_grouped = vbt.Positions.from_trades(trades_grouped)
print(positions.values.shape)
big_positions = vbt.Positions.from_trades(big_trades)
big_positions_grouped = vbt.Positions.from_trades(big_trades_grouped)
print(big_positions.values.shape)
%timeit vbt.Positions.from_trades(big_trades)
print(positions.records)
print(positions.close)
print(positions['a'].records)
print(positions['a'].close)
%timeit big_positions.iloc[0]
%timeit big_positions.iloc[:]
%timeit big_positions_grouped.iloc[0]
%timeit big_positions_grouped.iloc[:]
```
# Return vs. Risk: How Do We Measure Them?

<img style="float: left; margin: 15px 15px 15px 15px;" src="http://www.creative-commons-images.com/clipboard/images/return-on-investment.jpg" width="300" height="100" />

<img style="float: right; margin: 15px 15px 15px 15px;" src="https://upload.wikimedia.org/wikipedia/commons/5/5a/Risk-dice-example.jpg" title="github" width="300" height="100" />

> In competitive markets, higher **expected returns** come only at a price: you need to take on more **risk**.

*Objectives:*
- Review basic elements of probability.
- Understand the trade-off between return and risk.
- Understand the concept of risk.
- Develop quantitative measures of return and risk for assets.

General reference: notes from the course "Portfolio Selection and Risk Management", Rice University, available on Coursera.

___
## 0. Before we start... a probability refresher

### 0.1. Random variables

In probability theory, a random variable (random quantity or stochastic variable) is a variable whose possible values depend on the outcome of a random phenomenon. Commonly, those outcomes depend on certain physical (or economic) variables that are not fully understood or known. For example, when a fair coin is tossed, whether it lands heads or tails depends on uncertain physical quantities.

Reference: https://en.wikipedia.org/wiki/Probability_theory

In financial markets, stock prices respond to how the market aggregates information (a process that depends on the quality of the market). Even if that process were known, there are events we cannot anticipate. We therefore treat the prices and returns of financial instruments as random variables.

### 0.2. Discrete probability function

We consider a finite (or countable) set $\Omega$ of all the possible outcomes (or realizations) of a random variable $X$. Each element $x\in\Omega$ is assigned an intrinsic probability $P(X=x)$ that satisfies:

1. $0\leq P(X=x)\leq1$ for each $x\in\Omega$,
2. $\sum_{x\in\Omega} P(X=x)=1$.

Reference: https://en.wikipedia.org/wiki/Probability_theory

For our purposes, we will estimate the set $\Omega$ with a finite set of the form $\Omega=\left\lbrace x_j\,:\,j=1,\dots,m\right\rbrace$. The second condition can then be written as

$$\sum_{j=1}^m P(X=x_j)=1.$$

Equivalently, if we define $p_j=P(X=x_j)$,

$$\sum_{j=1}^m p_j=1.$$

### 0.3. Expected value

<img style="float: left; margin: 0px 15px 15px 15px;" src="https://upload.wikimedia.org/wikipedia/commons/f/f9/Largenumbers.svg" width="400" height="200" />

The expected value of a random variable is, intuitively, the long-run average value of repetitions of the experiment it represents. Informally, the law of large numbers states that the arithmetic mean of the outcomes of a random experiment converges to the expected value as the number of repetitions tends to infinity. *See the die example.*

For a discrete random variable $X$,

$$E[X]=\sum_{x\in\Omega} xP(X=x).$$

In the finite case,

$$E[X]=\sum_{j=1}^{m} p_jx_j.$$

### 0.4. Variance

<img style="float: right; margin: 0px 15px 15px 15px;" src="https://upload.wikimedia.org/wikipedia/commons/f/f9/Comparison_standard_deviations.svg" width="400" height="200" />

The variance is the expected value of the squared deviation of a random variable from its mean. Informally, it measures how dispersed (far) the data are from their expected value. The standard deviation is the square root of the variance.

For a discrete random variable $X$,

$$Var(X)=\sigma_X^2=E[(X-E[X])^2]=\sum_{x\in\Omega} P(X=x)(x-E[X])^2.$$

In the finite case,

$$\sigma_X^2=\sum_{j=1}^{m} p_j(x_j-E[X])^2.$$

___
## 1. Introduction

### 1.1. The return/risk trade-off

- When making an investment, you probably anticipate certain (expected) future returns.
- However, those future returns cannot be predicted precisely.
- There is always some associated risk.
- The realized return will almost always deviate from what was expected at the start of the investment period.

**Example:**

<img style="float: right; margin: 15px 15px 15px 15px;" src="https://upload.wikimedia.org/wikipedia/commons/7/7e/S_and_P_500_chart_1950_to_2016_with_averages.png" title="github" width="300" height="100" />

- What is the S&P 500 index?
- In its worst year, the S&P 500 index fell 46% (1926).
- In 2010, the index rose 55%. Investors never anticipated these extreme outcomes when they made their investments in those periods.
- Obviously, we would all prefer the highest possible expected investment returns.
- In economics there is no free lunch. If we want higher expected returns, we must accept a higher level of risk.

Why? Intuitively:
- If we could get more return from an asset without extra risk, everyone would buy that asset, its price would rise, and its return would fall.
- If returns were not correlated with risk, everyone would sell risky assets: why hold risky assets if returns are unrelated to risk? Better to hold assets with the same return and no risk.

**Conclusion: there is an equilibrium trade-off between return and risk.**

### 1.2. The concept of risk

<img style="float: left; margin: 0px 15px 15px 15px;" src="https://upload.wikimedia.org/wikipedia/commons/0/09/Playing-risk-venezuela.JPG" title="github" width="300" height="100" />

So far so good, but what is **risk**, and how is it measured?

- Risk means that we do not really know what is going to happen (uncertainty).
- "More things can happen than will happen."
- There are several possibilities, but we do not know which outcome will occur.

Instinctively, we associate risk with danger. However, just because more things can happen than will happen does not mean that bad things will happen: the outcome may be better than we initially expected.

- Think of the expression "I took a risk."
- Now, given the above, how could we quantify risk?
- It must be related to the *dispersion* of an asset's returns (their fluctuation).
- This is only part of the story; when we combine assets in a portfolio, we must also think about how the assets' returns move relative to each other.

___
## 2. Measuring return

### 2.1. Geometric mean return

The **return** obtained from investing in an asset over one period can be computed directly.

**Example:**
- Suppose you invest in a stock fund. Each share currently sells for $\$100$.
- Suppose your investment horizon is one year. If the share price at the end of the year is $\$110$ and the dividends over the year are $\$5$, what is your holding-period return?

**Example:** suppose you have a series of annual returns for the S&P 500 index

```
# Import the pandas library
import pandas as pd

# Create the table
tabla = pd.DataFrame(columns=['ret'], index=range(1, 6))
tabla.index.name = 'year'
tabla['ret'] = [-0.1189, -0.2210, 0.2869, 0.1088, 0.0491]
tabla
```

1. What is the holding-period return over the five years?
2. What is the average annual return across the five years?

$$r_T=\prod_{i=1}^{T}(1+r_i) - 1,$$

```
# Answer to question 1
(1 + tabla).prod() - 1

# Answer to question 2
(1 + tabla).prod()**(1 / 5) - 1
```

In general, the **geometric mean return** $\bar{r}_g$ satisfies

$$(1+\bar{r}_g)^T=\prod_{i=1}^{T}(1+r_i),$$

or equivalently

$$\bar{r}_g=\left[\prod_{i=1}^{T}(1+r_i)\right]^{1/T}-1.$$

### 2.2. Arithmetic mean return

If we could obtain likely scenarios for the economy, each with an associated probability, we could compute the **expected return** as the probability-weighted average (expected value) of the possible outcomes. That is,

$$E[r]=\sum_{j=1}^{m}p_jr_j,$$

where $r_j$, for $j=1,2,\dots,m$, are the possible returns and $p_j$ is the probability that return $r_j$ occurs.

**Example:**
- Suppose you invest in a stock fund. Each share currently sells for \$100.
- Suppose there are four possible future states of the economy, summarized in the following table

```
# Create the table
tabla2 = pd.DataFrame(columns=['prob', 'price', 'div', 'ret'],
                      index=['excellent', 'good', 'poor', 'crash'])
tabla2.index.name = 'state'
tabla2['prob'] = [0.25, 0.45, 0.25, 0.05]
tabla2['price'] = [126.50, 110.00, 89.75, 46.00]
tabla2['div'] = [4.50, 4.00, 3.50, 2.00]
# Fill in the missing column
tabla2['ret'] = (tabla2['price'] + tabla2['div'] - 100) / 100
tabla2
```

Compute the expected return

```
(tabla2['prob'] * tabla2['ret']).sum()
```

**Example:** for the series of annual S&P 500 returns, we could treat each observed return as an equally likely outcome... Then the expected return is obtained simply as the arithmetic mean of the returns

```
tabla
tabla.mean()
```

The first moment about zero is the mean, or expected value, of the random variable. The mean of a random variable is a numerical quantity around which the values of the random variable tend to cluster; hence, the mean is a measure of central tendency.

**Conclusion: expected returns are related to the mean (expected value), the first moment about zero.**

### 2.3. Exercises

This section leaves a few exercises for you. If time permits, we will work through them in class.

**Exercise.** Consider the following report of a stock's returns over the last three years

| Year | Return |
| --- | ----------- |
| 1 | -0.10 |
| 2 | 0.20 |
| 3 | 0.30 |

- Compute the geometric mean return. What does it mean?
- Compute the arithmetic mean return. What does it mean?

```
# import pandas

# create the data frame

# geometric mean return

# arithmetic mean return
```

## 3. Measuring risk

### 3.1. Volatility as a measure of risk

Since risk is closely related to *how much we do not know* about what will happen, we can quantify it with some measure of dispersion of the random variable of returns.

**Example:**
- We toss a fair coin.
- We define the random variable $X$, which takes the value $+1$ when the coin lands heads and $-1$ when it lands tails.
- Since the coin is fair, both events are equally likely: $P(X=1)=P(X=-1)=0.5$.

The expected value of the random variable $X$ is:

$$E[X]=P(X=1) \times (1) + P(X=-1) \times (-1)=0.5\times(1)+0.5\times(-1)=0.$$

Although the actual outcome will never be zero, the expected outcome is zero. *We need an additional measure to describe the distribution.*

**Example:**
- Suppose each share of company XYZ costs \$100 at $t=0$.
- There are three possibilities for the price of an XYZ share at $t=1$:
  - The price rises to \$140 (25% probability)
  - The price rises to \$110 (50% probability)
  - The price falls to \$80 (25% probability)

So, how do we describe a return distribution?

1. Central tendency:
   - We will use the expected value of the returns as their central tendency (we already saw why). $$E[r]=\sum_{j=1}^{m}p_jr_j.$$
2. Measure of dispersion:
   - We will use the standard deviation (volatility) or variance as the measure of dispersion for return distributions...

$$\sigma_r^2=\sum_{j=1}^{m} p_j(r_j-E[r])^2.$$

$$\sigma_r=\sqrt{\sum_{j=1}^{m} p_j(r_j-E[r])^2}.$$

In the previous example

```
# Create the table
tabla3 = pd.DataFrame({'prob': [0.25, 0.5, 0.25], 'precio': [140, 110, 80]})
P0 = 100
tabla3['rend'] = (tabla3['precio'] - P0) / P0
tabla3

# Compute the expected return
Er = (tabla3['prob'] * tabla3['rend']).sum()
Er

# Compute the variance
var_r = (tabla3['prob'] * (tabla3['rend'] - Er)**2).sum()
var_r

# Compute the volatility
vol_r = var_r**0.5
vol_r
```

**Conclusion: the variance and the standard deviation give us a measure of risk (uncertainty, dispersion, volatility) in the realizations.**

### 3.2. Exercises

This section leaves a few exercises for you. If time permits, we will work through them in class.

**Exercise 1.** From a financial advisor's analysis, the following return data were obtained for a computing asset and a telecommunications asset, relative to possible future economic conditions

| Economic condition | Computing asset return | Telecom asset return | Probability |
| ------------------ | ---------------------- | -------------------- | ------------ |
| Decline | -0.04 | 0.07 | 0.2 |
| Stable | 0.02 | 0.04 | 0.5 |
| Improving | 0.10 | 0.03 | 0.3 |

Compute, for each asset, its expected return and volatility.

**Exercise 2.** Based on the following return distribution for asset A, compute the standard deviation.

| Probability | Return |
| ------------ | ----------- |
| 0.3 | 0.10 |
| 0.4 | 0.05 |
| 0.3 | 0.30 |

```
# Create the table
```

### 3.3. More on measuring risk

- So, with what we have seen so far, an asset's return distribution can be described simply by its expected return and standard deviation.
- This is because every analysis in finance simplifies enormously if we can approximate returns with a normal distribution.

But what happens if the return distribution differs significantly from a normal one?

**Example.** Reference: Asset Management: A Systematic Approach to Factor Investing. Andrew Ang, 2014. ISBN: 9780199959327.

The following figures show the cumulative wealth of a \$1 investment in the S&P 500 index and in a volatility strategy on the same index.

![image1](figures/VolStrat_S&P500_1)

- A *volatility strategy* is an investment strategy that collects premiums during stable periods but suffers large losses in volatile periods.
- Let us look at histograms of the returns of the two strategies

<img style="float: left; margin: 15px 15px 15px 15px;" src="figures/VolStrat_S&P500_2" width="450" height="100" />

<img style="float: right; margin: 15px 15px 15px 15px;" src="figures/VolStrat_S&P500_3" width="450" height="100" />

Any notable differences? The four moments of each return series are summarized in the following table.

| Measure | Vol. strategy | S&P 500 index |
| -------------- | ---------------- | ------------- |
| Mean | 0.099 | 0.097 |
| Std. deviation | 0.152 | 0.151 |
| Skewness | -8.3 | -0.6 |
| Kurtosis | 104.4 | 4.0 |

**Skewness:**
- A normal distribution has zero skewness.
- When a distribution is skewed to the left, extreme negative values (far to the left of the mean) dominate and the measure is negative.
- When a distribution is skewed to the right, extreme positive values (far to the right of the mean) dominate and the measure is positive.
- Volatility understates risk when there is significant skewness.

Is skewness always undesirable? When is it? When is it not?

**Kurtosis:**
- It measures how heavy the tails are.
- A normal distribution has a kurtosis of 3.
- Heavy tails imply a higher probability of extreme events (far from the mean).
- Again, the standard deviation understates risk when there is significant kurtosis.
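As a standalone sketch of these two measures (not part of the lesson's original code; the simulated return series are illustrative assumptions), `scipy.stats` can compute skewness and excess kurtosis directly:

```
# Illustrative sketch: skewness and excess kurtosis of two simulated return series
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
normal_rets = rng.normal(0.001, 0.01, 100000)      # symmetric, thin tails
crash_rets = -rng.lognormal(0, 0.5, 100000) / 100  # heavy left tail, crash-prone

print(stats.skew(normal_rets))      # close to 0 for a normal sample
print(stats.kurtosis(normal_rets))  # excess kurtosis: close to 0 for a normal sample
print(stats.skew(crash_rets))       # clearly negative
print(stats.kurtosis(crash_rets))   # clearly positive: fat tails
```

Note that `stats.kurtosis` reports *excess* kurtosis by default, so a normal distribution gives roughly 0 rather than 3.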
### Recapping...

It is not that the standard deviation applies only to normal distributions. Rather, volatility does not capture well the probability of extreme events for non-normal distributions.

- For normally distributed returns, a return $2\sigma$ away from the expected return is very unlikely. A return $5\sigma$ away from the expected return is almost impossible.
- However, for certain hedging strategies, $E[r]\pm 2\sigma$ is common and $E[r]\pm 5\sigma$ can happen.

So volatility is a good measure of risk as long as the distributions are symmetric and without much risk of extreme events. Otherwise, it is not an appropriate measure.

Other risk measures that try to capture these phenomena are:

- VaR (value at risk)
- CVaR (conditional value at risk)

# Announcements

## 1. Quiz next class... be there early!

<footer id="attribution" style="float:right; color:#808080; background:#fff;"> Created with Jupyter by Esteban Jiménez Rodríguez. </footer>
# Computational Thinking

*Lesson Developer: Aaron Weeden aweeden@shodor.org*

```
# This code cell starts the necessary setup for Hour of CI lesson notebooks.
# First, it enables users to hide and unhide code by producing a 'Toggle raw code' button below.
# Second, it imports the hourofci package, which is necessary for lessons and interactive Jupyter Widgets.
# Third, it helps hide/control other aspects of Jupyter Notebooks to improve the user experience

# This is an initialization cell
# It is not displayed because the Slide Type is 'Skip'

from IPython.display import HTML, IFrame, Javascript, display
from ipywidgets import interactive
import ipywidgets as widgets
from ipywidgets import Layout
import getpass  # This library allows us to get the username (User agent string)

# import package for hourofci project
import sys
sys.path.append('../../supplementary')  # relative path (may change depending on the location of the lesson notebook)
import hourofci

# load javascript to initialize/hide cells, get user agent string, and hide output indicator
# hide code by introducing a toggle button "Toggle raw code"
HTML('''
<script type="text/javascript" src=\"../../supplementary/js/custom.js\"></script>
<style>
.output_prompt{opacity:0;}
</style>
<input id="toggle_code" type="button" value="Toggle raw code">
''')
```

## Use abstraction to hide details

As algorithms and computations get more *complex*, it can become difficult to keep track of all the details. This is where it can be helpful to utilize **abstraction** to help with computational thinking. If you can find a way to represent parts of an algorithm in a way that hides the details but still covers the entire computation, you can make things more **abstract** and have fewer things to think about.
You have already seen an example of an abstraction when you looked at *visualizations*; we used pictures of trees instead of colors or numerals to represent the trees, so you did not have to think about which type of tree was represented by which color or which numeral.

We can also use *abstraction* when we build an algorithm, for example by grouping together steps and then using the group over and over. In this way we can pay attention to the details only once to create an *abstraction*, and then we do not have to pay attention to all of the details each time we want to use that abstraction.

The example on the next page will take you through a forest fire algorithm that uses abstraction. Feel free to try it a few times to get a better idea of how it works.

```
IFrame(src="supplementary/abstraction.html", width=900, height=610)
```

Now that you've navigated this exploration, you've engaged with a significant number of computational thinking components. Take these pieces - decomposition, pattern recognition, abstraction, and algorithms - with you to future problems!
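To make the idea of grouping steps concrete, here is a small standalone sketch (hypothetical code, not the lesson's actual fire model): the steps for "a burning tree spreads fire to its neighbors" are collected into one named function, which can then be reused without re-reading the details each time.

```
# Hypothetical sketch: abstract "spread fire to neighbors" into one reusable function.

def spread_fire(forest, row, col):
    """Set the four neighbors of a burning tree on fire, if they are trees."""
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        r, c = row + dr, col + dc
        if 0 <= r < len(forest) and 0 <= c < len(forest[0]) and forest[r][c] == 'tree':
            forest[r][c] = 'fire'

forest = [['tree', 'tree', 'tree'],
          ['tree', 'fire', 'tree'],
          ['tree', 'tree', 'tree']]

# The details are hidden behind one call; we only think about "spread the fire here".
spread_fire(forest, 1, 1)
print(forest[0][1], forest[1][0], forest[0][0])  # fire fire tree
```

Once the abstraction exists, running one more simulation step is just another call to `spread_fire`, with no need to restate the neighbor-checking logic.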
# Developing an AI application

Going forward, AI algorithms will be incorporated into more and more everyday applications. For example, you might want to include an image classifier in a smart phone app. To do this, you'd use a deep learning model trained on hundreds of thousands of images as part of the overall application architecture. A large part of software development in the future will be using these types of models as common parts of applications.

In this project, you'll train an image classifier to recognize different species of flowers. You can imagine using something like this in a phone app that tells you the name of the flower your camera is looking at. In practice you'd train this classifier, then export it for use in your application. We'll be using [this dataset](http://www.robots.ox.ac.uk/~vgg/data/flowers/102/index.html) of 102 flower categories; you can see a few examples below.

<img src='assets/Flowers.png' width=500px>

The project is broken down into multiple steps:

* Load and preprocess the image dataset
* Train the image classifier on your dataset
* Use the trained classifier to predict image content

We'll lead you through each part, which you'll implement in Python. When you've completed this project, you'll have an application that can be trained on any set of labeled images. Here your network will be learning about flowers and end up as a command line application. But what you do with your new skills depends on your imagination and effort in building a dataset. For example, imagine an app where you take a picture of a car, it tells you what the make and model is, then looks up information about it. Go build your own dataset and make something new.

First up is importing the packages you'll need. It's good practice to keep all the imports at the beginning of your code. As you work through this notebook and find you need to import a package, make sure to add the import up here.
Please make sure if you are running this notebook in the workspace that you have chosen GPU rather than CPU mode.

```
# Imports here
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

import torch
from torch import nn
from torch import optim
from torchvision import datasets, transforms, models
```

## Load the data

Here you'll use `torchvision` to load the data ([documentation](http://pytorch.org/docs/0.3.0/torchvision/index.html)). The data should be included alongside this notebook, otherwise you can [download it here](https://s3.amazonaws.com/content.udacity-data.com/nd089/flower_data.tar.gz).

The dataset is split into three parts: training, validation, and testing. For the training, you'll want to apply transformations such as random scaling, cropping, and flipping. This will help the network generalize, leading to better performance. You'll also need to make sure the input data is resized to 224x224 pixels as required by the pre-trained networks.

The validation and testing sets are used to measure the model's performance on data it hasn't seen yet. For this you don't want any scaling or rotation transformations, but you'll need to resize then crop the images to the appropriate size.

The pre-trained networks you'll use were trained on the ImageNet dataset where each color channel was normalized separately. For all three sets you'll need to normalize the means and standard deviations of the images to what the network expects. For the means, it's `[0.485, 0.456, 0.406]` and for the standard deviations `[0.229, 0.224, 0.225]`, calculated from the ImageNet images. These values will shift each color channel to be centered at 0 and range from -1 to 1.
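As a quick standalone sanity check on the normalization arithmetic described above (the example pixel values are made up for illustration, not taken from the dataset):

```
# Channel-wise normalization: out = (in - mean) / std
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]

pixel = [0.485, 0.680, 0.181]  # hypothetical RGB values in [0, 1]
normalized = [(p - m) / s for p, m, s in zip(pixel, mean, std)]
print(normalized)  # roughly [0.0, 1.0, -1.0]
```

A value equal to the channel mean maps to 0, and values about one standard deviation away from the mean map to about ±1.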
```
data_dir = 'flowers'
train_dir = data_dir + '/train'
valid_dir = data_dir + '/valid'
test_dir = data_dir + '/test'

# TODO: Define your transforms for the training, validation, and testing sets
train_transforms = transforms.Compose([transforms.RandomRotation(30),
                                       transforms.RandomResizedCrop(224),
                                       transforms.RandomHorizontalFlip(),
                                       transforms.ToTensor(),
                                       transforms.Normalize([0.485, 0.456, 0.406],
                                                            [0.229, 0.224, 0.225])])

test_transforms = transforms.Compose([transforms.Resize(255),
                                      transforms.CenterCrop(224),
                                      transforms.ToTensor(),
                                      transforms.Normalize([0.485, 0.456, 0.406],
                                                           [0.229, 0.224, 0.225])])

vaild_transforms = transforms.Compose([transforms.Resize(255),
                                       transforms.CenterCrop(224),
                                       transforms.ToTensor(),
                                       transforms.Normalize([0.485, 0.456, 0.406],
                                                            [0.229, 0.224, 0.225])])

# TODO: Load the datasets with ImageFolder
train_data = datasets.ImageFolder(train_dir, transform=train_transforms)
test_data = datasets.ImageFolder(test_dir, transform=test_transforms)
vaild_data = datasets.ImageFolder(valid_dir, transform=vaild_transforms)

# TODO: Using the image datasets and the transforms, define the dataloaders
train_dataloaders = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)
test_dataloaders = torch.utils.data.DataLoader(test_data, batch_size=32)
vaild_dataloaders = torch.utils.data.DataLoader(vaild_data, batch_size=32)
```

### Label mapping

You'll also need to load in a mapping from category label to category name. You can find this in the file `cat_to_name.json`. It's a JSON object which you can read in with the [`json` module](https://docs.python.org/2/library/json.html). This will give you a dictionary mapping the integer encoded categories to the actual names of the flowers.
```
import json

with open('cat_to_name.json', 'r') as f:
    cat_to_name = json.load(f)

images, labels = next(iter(train_dataloaders))
print(len(images[0, 2]))
plt.imshow(images[0, 0])
```

# Building and training the classifier

Now that the data is ready, it's time to build and train the classifier. As usual, you should use one of the pretrained models from `torchvision.models` to get the image features. Build and train a new feed-forward classifier using those features.

We're going to leave this part up to you. Refer to [the rubric](https://review.udacity.com/#!/rubrics/1663/view) for guidance on successfully completing this section. Things you'll need to do:

* Load a [pre-trained network](http://pytorch.org/docs/master/torchvision/models.html) (If you need a starting point, the VGG networks work great and are straightforward to use)
* Define a new, untrained feed-forward network as a classifier, using ReLU activations and dropout
* Train the classifier layers using backpropagation using the pre-trained network to get the features
* Track the loss and accuracy on the validation set to determine the best hyperparameters

We've left a cell open for you below, but use as many as you need. Our advice is to break the problem up into smaller parts you can run separately. Check that each part is doing what you expect, then move on to the next. You'll likely find that as you work through each part, you'll need to go back and modify your previous code. This is totally normal!

When training make sure you're updating only the weights of the feed-forward network. You should be able to get the validation accuracy above 70% if you build everything right. Make sure to try different hyperparameters (learning rate, units in the classifier, epochs, etc) to find the best model. Save those hyperparameters to use as default values in the next part of the project.
One last important tip if you're using the workspace to run your code: To avoid having your workspace disconnect during the long-running tasks in this notebook, please read in the earlier page in this lesson called Intro to GPU Workspaces about Keeping Your Session Active. You'll want to include code from the workspace_utils.py module.

```
# TODO: Build and train your network

# Transfer learning
model = models.vgg16(pretrained=True)
model.cuda()

# Freeze the pretrained feature extractor
for param in model.parameters():
    param.requires_grad = False

num_features = model.classifier[0].in_features

## Build classifier
from collections import OrderedDict
classifier = nn.Sequential(OrderedDict([
    ('dropout', nn.Dropout(p=0.5)),
    ('fc1', nn.Linear(num_features, 500)),
    ('relu', nn.ReLU()),
    ('fc2', nn.Linear(500, 100)),
    ('relu2', nn.ReLU()),
    ('fc3', nn.Linear(100, 102)),
    ('output', nn.LogSoftmax(dim=1))
]))

model.classifier = classifier

# Select criterion and optimizer (only the classifier parameters are optimized)
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.classifier.parameters(), lr=0.001)
model.to('cuda')

# Training
epochs = 10
for e in range(epochs):
    running_loss = 0
    model.train()
    for images, labels in train_dataloaders:
        images, labels = images.to('cuda'), labels.to('cuda')
        log_ps = model(images)
        loss = criterion(log_ps, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    else:
        model.eval()
        accuracy = 0
        valid_loss = 0
        with torch.no_grad():
            for images, labels in vaild_dataloaders:
                images, labels = images.to('cuda'), labels.to('cuda')
                log_ps = model(images)
                valid_loss += criterion(log_ps, labels)
                ps = torch.exp(log_ps)
                top_p, top_class = ps.topk(1, dim=1)
                equals = top_class == labels.view(*top_class.shape)
                accuracy += torch.mean(equals.type(torch.FloatTensor))
        running_loss = running_loss / len(train_dataloaders)
        valid_loss = valid_loss / len(vaild_dataloaders)
        accuracy = accuracy / len(vaild_dataloaders)
        print("Epoch {}/{} Training loss: {} ".format(e + 1, epochs, running_loss),
              "Validation loss: {}".format(valid_loss),
              "Accuracy: {}".format(accuracy))
```

## Testing your network

It's good practice to test your trained network on test data, images the network has never seen either in training or validation. This will give you a good estimate for the model's performance on completely new images. Run the test images through the network and measure the accuracy, the same way you did validation. You should be able to reach around 70% accuracy on the test set if the model has been trained well.

```
# TODO: Do validation on the test set
size = 0
correct = 0
model.eval()
with torch.no_grad():
    for images, labels in test_dataloaders:
        images, labels = images.to('cuda'), labels.to('cuda')
        log_ps = model(images)
        ps = torch.exp(log_ps)
        top_p, top_class = ps.topk(1, dim=1)
        equals = top_class == labels.view(*top_class.shape)
        size += labels.size(0)
        correct += equals.sum().item()

test_accuracy = correct / size
print("Accuracy: {:.4f}".format(test_accuracy))
```

## Save the checkpoint

Now that your network is trained, save the model so you can load it later for making predictions. You probably want to save other things such as the mapping of classes to indices which you get from one of the image datasets: `image_datasets['train'].class_to_idx`. You can attach this to the model as an attribute which makes inference easier later on.

```model.class_to_idx = image_datasets['train'].class_to_idx```

Remember that you'll want to completely rebuild the model later so you can use it for inference. Make sure to include any information you need in the checkpoint. If you want to load the model and keep training, you'll want to save the number of epochs as well as the optimizer state, `optimizer.state_dict`. You'll likely want to use this trained model in the next part of the project, so best to save it now.
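The save-and-rebuild round trip described above can be sketched with a tiny stand-in model. Everything here is illustrative (the `nn.Linear` layer, the class mapping, and the file name are not the project's actual network or data):

```python
import torch
from torch import nn

# Stand-in model; the real project saves a pretrained network with a custom classifier
tiny_model = nn.Linear(4, 2)
tiny_model.class_to_idx = {'roses': 0, 'tulips': 1}  # illustrative mapping
tiny_optimizer = torch.optim.Adam(tiny_model.parameters(), lr=0.001)

# Bundle everything needed to rebuild (and optionally keep training)
tiny_checkpoint = {
    'epochs': 10,
    'state_dict': tiny_model.state_dict(),
    'optimizer_state': tiny_optimizer.state_dict(),
    'class_to_idx': tiny_model.class_to_idx,
}
torch.save(tiny_checkpoint, 'tiny_checkpoint.pth')

# Rebuild: recreate the architecture, then restore the saved state
rebuilt = nn.Linear(4, 2)
restored = torch.load('tiny_checkpoint.pth')
rebuilt.load_state_dict(restored['state_dict'])
rebuilt.class_to_idx = restored['class_to_idx']
```

The key point is that `torch.save` only captures state; the architecture itself must be recreated in code before `load_state_dict` can restore the weights.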
``` # TODO: Save the checkpoint model.class_to_idx = train_data.class_to_idx checkpoint = {'classifier': model.classifier, 'optimizer': optimizer, 'arch': "vgg16", 'state_dict': model.state_dict(), 'optimizer_state': optimizer.state_dict(), 'class_to_idx': model.class_to_idx } torch.save(checkpoint,'checkpoint.pth') ``` ## Loading the checkpoint At this point it's good to write a function that can load a checkpoint and rebuild the model. That way you can come back to this project and keep working on it without having to retrain the network. ``` # TODO: Write a function that loads a checkpoint and rebuilds the model def load_model(path): checkpoint = torch.load(path) classifier = checkpoint['classifier'] model = models.vgg16(pretrained=True) model.class_to_idx = checkpoint['class_to_idx'] model.classifier = classifier model.load_state_dict(checkpoint['state_dict']) return model model = load_model('checkpoint.pth') ``` # Inference for classification Now you'll write a function to use a trained network for inference. That is, you'll pass an image into the network and predict the class of the flower in the image. Write a function called `predict` that takes an image and a model, then returns the top $K$ most likely classes along with the probabilities. It should look like ```python probs, classes = predict(image_path, model) print(probs) print(classes) > [ 0.01558163 0.01541934 0.01452626 0.01443549 0.01407339] > ['70', '3', '45', '62', '55'] ``` First you'll need to handle processing the input image such that it can be used in your network. ## Image Preprocessing You'll want to use `PIL` to load the image ([documentation](https://pillow.readthedocs.io/en/latest/reference/Image.html)). It's best to write a function that preprocesses the image so it can be used as input for the model. This function should process the images in the same manner used for training. First, resize the images where the shortest side is 256 pixels, keeping the aspect ratio. 
This can be done with the [`thumbnail`](http://pillow.readthedocs.io/en/3.1.x/reference/Image.html#PIL.Image.Image.thumbnail) or [`resize`](http://pillow.readthedocs.io/en/3.1.x/reference/Image.html#PIL.Image.Image.resize) methods. Then you'll need to crop out the center 224x224 portion of the image.

Color channels of images are typically encoded as integers 0-255, but the model expects floats 0-1. You'll need to convert the values. It's easiest with a Numpy array, which you can get from a PIL image like so `np_image = np.array(pil_image)`.

As before, the network expects the images to be normalized in a specific way. For the means, it's `[0.485, 0.456, 0.406]` and for the standard deviations `[0.229, 0.224, 0.225]`. You'll want to subtract the means from each color channel, then divide by the standard deviation.

And finally, PyTorch expects the color channel to be the first dimension but it's the third dimension in the PIL image and Numpy array. You can reorder dimensions using [`ndarray.transpose`](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.ndarray.transpose.html). The color channel needs to be first and retain the order of the other two dimensions.

```
from PIL import Image

def process_image(image):
    ''' Scales, crops, and normalizes a PIL image for a PyTorch model,
        returns a PyTorch tensor
    '''
    img_pil = Image.open(image)
    transform = transforms.Compose([transforms.Resize(255),
                                    transforms.CenterCrop(224),
                                    transforms.ToTensor(),
                                    transforms.Normalize([0.485, 0.456, 0.406],
                                                         [0.229, 0.224, 0.225])])
    img_tensor = transform(img_pil)
    return img_tensor

# TODO: Process a PIL image for use in a PyTorch model
img = (data_dir + '/test' + '/1/' + 'image_06752.jpg')
img = process_image(img)
print(img.shape)
```

To check your work, the function below converts a PyTorch tensor and displays it in the notebook.
If your `process_image` function works, running the output through this function should return the original image (except for the cropped out portions).

```
def imshow(image, ax=None, title=None):
    if ax is None:
        fig, ax = plt.subplots()

    # PyTorch tensors assume the color channel is the first dimension,
    # but matplotlib assumes it is the third dimension
    image = image.numpy().transpose((1, 2, 0))

    # Undo preprocessing
    mean = np.array([0.485, 0.456, 0.406])
    std = np.array([0.229, 0.224, 0.225])
    image = std * image + mean

    # Image needs to be clipped between 0 and 1 or it looks like noise when displayed
    image = np.clip(image, 0, 1)

    ax.imshow(image)

    return ax
```

## Class Prediction

Once you can get images in the correct format, it's time to write a function for making predictions with your model. A common practice is to predict the top 5 or so (usually called top-$K$) most probable classes. You'll want to calculate the class probabilities then find the $K$ largest values.

To get the top $K$ largest values in a tensor use [`x.topk(k)`](http://pytorch.org/docs/master/torch.html#torch.topk). This method returns both the highest `k` probabilities and the indices of those probabilities corresponding to the classes. You need to convert from these indices to the actual class labels using `class_to_idx`, which hopefully you added to the model, or from an `ImageFolder` you used to load the data ([see here](#Save-the-checkpoint)). Make sure to invert the dictionary so you get a mapping from index to class as well.

Again, this method should take a path to an image and a model checkpoint, then return the probabilities and classes.

```python
probs, classes = predict(image_path, model)
print(probs)
print(classes)
> [ 0.01558163  0.01541934  0.01452626  0.01443549  0.01407339]
> ['70', '3', '45', '62', '55']
```

```
def predict(image_path, model, topk=5):
    ''' Predict the class (or classes) of an image using a trained deep learning model.
''' model.eval() model.to('cuda') img = process_image(image_path) img = img.to('cuda') img = img.unsqueeze_(0) img = img.float() with torch.no_grad(): output = model.forward(img) probability = torch.exp(output) prob, label = probability.topk(topk) folder_index = [] for i in np.array(label[0]): for folder, num in model.class_to_idx.items(): if num == i: folder_index += [folder] return prob, folder_index # TODO: Implement the code to predict the class from an image file img_path = (data_dir + '/test' + '/10/' + 'image_07104.jpg') prob, label = predict(img_path,model) flower_name = [cat_to_name[i] for i in label] flower_name ``` ## Sanity Checking Now that you can use a trained model for predictions, check to make sure it makes sense. Even if the testing accuracy is high, it's always good to check that there aren't obvious bugs. Use `matplotlib` to plot the probabilities for the top 5 classes as a bar graph, along with the input image. It should look like this: <img src='assets/inference_example.png' width=300px> You can convert from the class integer encoding to actual flower names with the `cat_to_name.json` file (should have been loaded earlier in the notebook). To show a PyTorch tensor as an image, use the `imshow` function defined above. ``` path = test_dir + '/19/image_06175.jpg' img = process_image(path) imshow(img) # TODO: Display an image along with the top 5 classes def check_sanity(): path = test_dir + '/1/image_06764.jpg' img = process_image(path) fig, ax1 = plt.subplots(figsize=(3.5,3.5), ncols=1) image = imshow(img) prob, label = predict(path,model) flower_name = [cat_to_name[i] for i in label] prob_1 = prob.data.cpu().numpy().squeeze() ax1.barh(flower_name, prob_1) print('The flower is', cat_to_name['1']) check_sanity() ```
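As an aside, the manual resize/crop/normalize/transpose recipe described in the Image Preprocessing section can also be sketched directly with `PIL` and Numpy, without torchvision transforms. This is a sketch of that recipe, not the notebook's actual `process_image`:

```python
import numpy as np
from PIL import Image

def process_image_manual(image_path):
    ''' Scales, crops, and normalizes an image the way the network expects,
        returning a (3, 224, 224) Numpy array. Sketch only.
    '''
    img = Image.open(image_path)

    # Resize so the shortest side is 256 px, keeping the aspect ratio
    w, h = img.size
    if w < h:
        img = img.resize((256, int(256 * h / w)))
    else:
        img = img.resize((int(256 * w / h), 256))

    # Center-crop to 224x224
    w, h = img.size
    left, top = (w - 224) // 2, (h - 224) // 2
    img = img.crop((left, top, left + 224, top + 224))

    # 0-255 ints -> 0-1 floats, then normalize per channel
    np_image = np.array(img) / 255.0
    mean = np.array([0.485, 0.456, 0.406])
    std = np.array([0.229, 0.224, 0.225])
    np_image = (np_image - mean) / std

    # Move the color channel from last to first
    return np_image.transpose((2, 0, 1))
```

The transforms-based version above is shorter, but spelling the steps out makes it clear exactly what the network receives.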
```
%load_ext watermark
%watermark -d -u -a 'Andreas Mueller, Kyle Kastner, Sebastian Raschka' -v -p numpy,scipy,matplotlib,scikit-learn
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
```

# Unsupervised Learning Part 2 -- Clustering

Clustering is the task of organizing samples into groups of similar items. In this section we will explore a basic clustering task on some synthetic and real-world datasets.

Here are some common applications of clustering algorithms:

- Data reduction and compression
- Data summarization, as a preprocessing step for recommender systems
- Similarly:
    - grouping related web news (e.g. Google News) and web search results
    - grouping related stock quotes for portfolio management
    - building customer profiles for market analysis
- Building a codebook of prototype samples for unsupervised feature extraction

Let's start by creating a simple, two-dimensional, synthetic dataset:

```
from sklearn.datasets import make_blobs
X, y = make_blobs(random_state=42)
X.shape

plt.scatter(X[:, 0], X[:, 1]);
```

In the scatter plot above, we can see three separate groups of data points, and we would like to recover them using clustering -- think of "discovering" the class labels that we already take for granted in a classification task.

Even if the groups are obvious in the data, it is hard to find them when the data lives in a high-dimensional space, which we can't visualize in a single histogram or scatter plot.

Let's start with one of the simplest clustering algorithms, K-means. This is an iterative algorithm which searches for three cluster centers such that the distance from each point to its cluster is minimized.

**Question**: What would you expect the output to look like?

```
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=3, random_state=42)
```

We can get the cluster labels either by calling fit and then accessing the ``labels_`` attribute of the K-means estimator, or by calling ``fit_predict``. Either way, the result contains the ID of the cluster that each point is assigned to.

```
labels = kmeans.fit_predict(X)
labels

all(y == labels)
```

Let's visualize the results:

```
plt.scatter(X[:, 0], X[:, 1], c=labels);
```

Compared to the true labels:

```
plt.scatter(X[:, 0], X[:, 1], c=y);
```

Here, we are probably satisfied with the clustering results. But in general we might want a more quantitative evaluation. How about comparing our cluster labels with the ground truth we got when generating the blobs?
```
from sklearn.metrics import confusion_matrix, accuracy_score

print('Accuracy score:', accuracy_score(y, labels))
print(confusion_matrix(y, labels))

np.mean(y == labels)
```

Even though we recovered the partitioning of the data into clusters perfectly, the cluster IDs we assigned were arbitrary, and we can not hope to recover them exactly. Therefore, we must use a different scoring metric, such as ``adjusted_rand_score``, which is invariant to permutations of the labels:

```
from sklearn.metrics import adjusted_rand_score
adjusted_rand_score(y, labels)
```

One of the "shortcomings" of K-means is that we have to specify the number of clusters, which we often don't know *a priori*. For example, let's have a look at what happens if we set the number of clusters to 2 in our synthetic 3-blob dataset:

```
kmeans = KMeans(n_clusters=2, random_state=42)
labels = kmeans.fit_predict(X)
plt.scatter(X[:, 0], X[:, 1], c=labels);
```

#### The Elbow Method

The elbow method is a "rule of thumb" for finding the optimal number of clusters. Here, we look at the within-cluster dispersion for different values of k:

```
distortions = []
for i in range(1, 11):
    km = KMeans(n_clusters=i, random_state=0)
    km.fit(X)
    distortions.append(km.inertia_)

plt.plot(range(1, 11), distortions, marker='o')
plt.xlabel('Number of clusters')
plt.ylabel('Distortion')
plt.show()
```

Then, we pick the value at the "elbow" of the curve. As we can see, this is k=3 in this case, which makes sense given our earlier visual look at the dataset.

**Clustering comes with assumptions**: A clustering algorithm finds clusters by making assumptions about how samples should be grouped together. Each algorithm makes different assumptions, and the quality and interpretability of your results will depend on whether the assumptions are satisfied for your goal. For K-means clustering, the model assumes that all clusters have equal, spherical variance.

**In general, there is no guarantee that the structure found by a clustering algorithm has anything to do with what you were interested in**.
We can easily create a dataset that has non-isotropic clusters, on which K-means will fail:

```
from sklearn.datasets import make_blobs

X, y = make_blobs(random_state=170, n_samples=600)
rng = np.random.RandomState(74)

transformation = rng.normal(size=(2, 2))
X = np.dot(X, transformation)

y_pred = KMeans(n_clusters=3).fit_predict(X)

plt.scatter(X[:, 0], X[:, 1], c=y_pred)
```

## Some Notable Clustering Routines

The following are some well-known clustering algorithms.

- `sklearn.cluster.KMeans`: <br/> The simplest, yet effective clustering algorithm. Needs to be provided with the number of clusters in advance, and assumes that the data is normalized as input
- `sklearn.cluster.MeanShift`: <br/> Can find better-looking clusters than KMeans but is not scalable to a high number of samples
- `sklearn.cluster.DBSCAN`: <br/> Can detect irregularly shaped clusters based on density, i.e. sparse regions in the input space are likely to become inter-cluster boundaries. Can also detect outliers (samples that are not part of any cluster)
- `sklearn.cluster.AffinityPropagation`: <br/> Clustering algorithm based on message passing between data points
- `sklearn.cluster.SpectralClustering`: <br/> KMeans applied to a projection of the normalized graph Laplacian
- `sklearn.cluster.Ward`: <br/> Implements hierarchical clustering based on the Ward algorithm, a variance-minimizing approach. At each step, it minimizes the sum of squared differences within all clusters (inertia criterion)

Of these, Ward, SpectralClustering, DBSCAN and Affinity propagation can also work with precomputed similarity matrices.

<img src="figures/cluster_comparison.png" width="900">

## Exercise: digits clustering

Perform K-means clustering on the digits data, searching for 10 clusters. Visualize the cluster centers as images (i.e. reshape each cluster center to 8 by 8 and use ``plt.imshow``). Do the clusters seem to be correlated with particular digits? What is the ``adjusted_rand_score``?

```
from sklearn.datasets import load_digits
digits = load_digits()
# ...

#%load solutions/08B_digits_clustering.py
```
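A minimal sketch of one possible solution to the exercise, assuming scikit-learn's bundled digits data (the canonical solution lives in `solutions/08B_digits_clustering.py`):

```python
from sklearn.datasets import load_digits
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

digits = load_digits()
km = KMeans(n_clusters=10, random_state=42)
cluster_labels = km.fit_predict(digits.data)

# Each cluster center is a 64-dim vector; reshape to 8x8 to view with plt.imshow
centers = km.cluster_centers_.reshape(10, 8, 8)

score = adjusted_rand_score(digits.target, cluster_labels)
print('adjusted_rand_score:', score)
```

Remember that the cluster IDs are arbitrary, so a confusion matrix against `digits.target` will generally have its large counts off the diagonal; that is exactly why ``adjusted_rand_score`` is the right metric here.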
# Transformer

```
from google.colab import drive
drive.mount('/content/drive')

# Uses a different CSV format (Version2) than Informer, ARIMA, Prophet, and LSTMa.
!pip install pandas
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

df = pd.read_csv('/content/drive/MyDrive/Colab Notebooks/Data/삼성전자_6M_NonST_Version2.csv', encoding='cp949')
df.head()

df.info()

data_start_date = df.columns[1]
data_end_date = df.columns[-1]
print('Data ranges from %s to %s' % (data_start_date, data_end_date))
```

### Train and Validation Series Partitioning

```
######################## CHECK #########################
# The base time unit is hours, so forecasting 7 days means 7*24 steps.
from datetime import timedelta

pred_steps = 24*4+23
pred_length = timedelta(hours=pred_steps)

first_day = pd.to_datetime(data_start_date)
last_day = pd.to_datetime(data_end_date)

val_pred_start = last_day - pred_length + timedelta(1)
val_pred_end = last_day
print(val_pred_start, val_pred_end)

train_pred_start = val_pred_start - pred_length
train_pred_end = val_pred_start - timedelta(days=1)
print(train_pred_start, train_pred_end)

enc_length = train_pred_start - first_day
print(enc_length)

train_enc_start = first_day
train_enc_end = train_enc_start + enc_length - timedelta(1)

val_enc_start = train_enc_start + pred_length
val_enc_end = val_enc_start + enc_length - timedelta(1)

print(train_enc_start, train_enc_end)
print(val_enc_start, val_enc_end)

# In the end, the model predicts the Val prediction interval.
print('Train encoding:', train_enc_start, '-', train_enc_end)
print('Train prediction:', train_pred_start, '-', train_pred_end, '\n')
print('Val encoding:', val_enc_start, '-', val_enc_end)
print('Val prediction:', val_pred_start, '-', val_pred_end)
print('\nEncoding interval:', enc_length.days)
print('Prediction interval:', pred_length.days)
```

## Data Formatting

```
# Apply np.log1p to the series.
date_to_index = pd.Series(index=pd.Index([pd.to_datetime(c) for c in df.columns[1:]]), data=[i for i in range(len(df.columns[1:]))]) series_array = df[df.columns[1:]].values.astype(np.float32) print(series_array) def get_time_block_series(series_array, date_to_index, start_date, end_date): inds = date_to_index[start_date:end_date] return series_array[:,inds] def transform_series_encode(series_array): series_array = np.log1p(np.nan_to_num(series_array)) # filling NaN with 0 series_mean = series_array.mean(axis=1).reshape(-1,1) series_array = series_array - series_mean series_array = series_array.reshape((series_array.shape[0],series_array.shape[1], 1)) return series_array, series_mean def transform_series_decode(series_array, encode_series_mean): series_array = np.log1p(np.nan_to_num(series_array)) # filling NaN with 0 series_array = series_array - encode_series_mean series_array = series_array.reshape((series_array.shape[0],series_array.shape[1], 1)) return series_array # sample of series from train_enc_start to train_enc_end encoder_input_data = get_time_block_series(series_array, date_to_index, train_enc_start, train_enc_end) encoder_input_data, encode_series_mean = transform_series_encode(encoder_input_data) # sample of series from train_pred_start to train_pred_end decoder_target_data = get_time_block_series(series_array, date_to_index, train_pred_start, train_pred_end) decoder_target_data = transform_series_decode(decoder_target_data, encode_series_mean) encoder_input_val_data = get_time_block_series(series_array, date_to_index, val_enc_start, val_enc_end) encoder_input_val_data, encode_series_mean = transform_series_encode(encoder_input_val_data) decoder_target_val_data = get_time_block_series(series_array, date_to_index, val_pred_start, val_pred_end) decoder_target_val_data = transform_series_decode(decoder_target_val_data, encode_series_mean) #for d in encoder_input_data: # print(d.shape) #train_dataset = 
tf.data.Dataset.from_tensor_slices((encoder_input_data, decoder_target_data)) #train_dataset = train_dataset.batch(54) #for d in train_dataset: # #print(f'features:{features_tensor} target:{target_tensor}') # print("-----") # print(d) ``` ### Transformer model ``` !pip install tensorflow_datasets import tensorflow_datasets as tfds import tensorflow as tf import time import numpy as np import matplotlib.pyplot as plt train_dataset = tf.data.Dataset.from_tensor_slices((encoder_input_data, decoder_target_data)) val_dataset = tf.data.Dataset.from_tensor_slices((encoder_input_val_data, decoder_target_val_data)) ### position def get_angles(pos, i, d_model): angle_rates = 1 / np.power(10000, (2 * (i//2)) / np.float32(d_model)) return pos * angle_rates def positional_encoding(position, d_model): angle_rads = get_angles(np.arange(position)[:, np.newaxis], np.arange(d_model)[np.newaxis, :], d_model) # apply sin to even indices in the array; 2i sines = np.sin(angle_rads[:, 0::2]) # apply cos to odd indices in the array; 2i+1 cosines = np.cos(angle_rads[:, 1::2]) pos_encoding = np.concatenate([sines, cosines], axis=-1) pos_encoding = pos_encoding[np.newaxis, ...] return tf.cast(pos_encoding, dtype=tf.float32) # Masking def create_padding_mask(seq): seq = tf.cast(tf.math.equal(seq, 0), tf.float32) # add extra dimensions so that we can add the padding # to the attention logits. return seq[:, tf.newaxis, tf.newaxis, :] # (batch_size, 1, 1, seq_len) x = tf.constant([[7, 6, 0, 0, 1], [1, 2, 3, 0, 0], [0, 0, 0, 4, 5]]) print(create_padding_mask(x)) def create_look_ahead_mask(size): mask = 1 - tf.linalg.band_part(tf.ones((size, size)), -1, 0) return mask # (seq_len, seq_len) x = tf.random.uniform((1, 4)) temp = create_look_ahead_mask(x.shape[1]) print(temp) # Scaled dot product attention def scaled_dot_product_attention(q, k, v, mask): """Calculate the attention weights. q, k, v must have matching leading dimensions. 
The mask has different shapes depending on its type(padding or look ahead) but it must be broadcastable for addition. Args: q: query shape == (..., seq_len_q, depth) k: key shape == (..., seq_len_k, depth) v: value shape == (..., seq_len_v, depth) mask: Float tensor with shape broadcastable to (..., seq_len_q, seq_len_k). Defaults to None. Returns: output, attention_weights """ matmul_qk = tf.matmul(q, k, transpose_b=True) # (..., seq_len_q, seq_len_k) # scale matmul_qk dk = tf.cast(tf.shape(k)[-1], tf.float32) scaled_attention_logits = matmul_qk / tf.math.sqrt(dk) # add the mask to the scaled tensor. if mask is not None: scaled_attention_logits += (mask * -1e9) # softmax is normalized on the last axis (seq_len_k) so that the scores # add up to 1. attention_weights = tf.nn.softmax(scaled_attention_logits, axis=-1) # (..., seq_len_q, seq_len_k) output = tf.matmul(attention_weights, v) # (..., seq_len_v, depth) return output, attention_weights # scaled dot product attetion test def print_out(q, k, v): temp_out, temp_attn = scaled_dot_product_attention( q, k, v, None) print ('Attention weights are:') print (temp_attn) print ('Output is:') print (temp_out) np.set_printoptions(suppress=True) temp_k = tf.constant([[10,0,0], [0,10,0], [0,0,10], [0,0,10]], dtype=tf.float32) # (4, 3) temp_v = tf.constant([[ 1,0], [ 10,0], [ 100,5], [1000,6]], dtype=tf.float32) # (4, 3) # This `query` aligns with the second `key`, # so the second `value` is returned. 
temp_q = tf.constant([[0, 10, 0]], dtype=tf.float32) # (1, 3) print_out(temp_q, temp_k, temp_v) # Multi Head Attention class MultiHeadAttention(tf.keras.layers.Layer): def __init__(self, d_model, num_heads): super(MultiHeadAttention, self).__init__() self.num_heads = num_heads self.d_model = d_model assert d_model % self.num_heads == 0 self.depth = d_model // self.num_heads self.wq = tf.keras.layers.Dense(d_model) self.wk = tf.keras.layers.Dense(d_model) self.wv = tf.keras.layers.Dense(d_model) self.dense = tf.keras.layers.Dense(d_model) def split_heads(self, x, batch_size): x = tf.reshape(x, (batch_size, -1, self.num_heads, self.depth)) return tf.transpose(x, perm=[0, 2, 1, 3]) def call(self, v, k, q, mask): batch_size = tf.shape(q)[0] q = self.wq(q) k = self.wk(k) v = self.wv(v) # (batch_size, seq_len, d_model) q = self.split_heads(q, batch_size) k = self.split_heads(k, batch_size) v = self.split_heads(v, batch_size) #(batch_size, num_head, seq_len_v, depth) # scaled_attention.shape == (batch_size, num_heads, seq_len_v, depth) # attention_weights.shape == (batch_size, num_heads, seq_len_q, seq_len_k) scaled_attention, attention_weights = scaled_dot_product_attention( q, k, v, mask) scaled_attention = tf.transpose(scaled_attention, perm=[0, 2, 1, 3]) # (batch_size, seq_len_v, num_heads, depth) concat_attention = tf.reshape(scaled_attention, (batch_size, -1, self.d_model)) # (batch_size, seq_len_v, d_model) output = self.dense(concat_attention) # (batch_size, seq_len_v, d_model) return output, attention_weights # multhead attention test temp_mha = MultiHeadAttention(d_model=512, num_heads=8) y = tf.random.uniform((1, 60, 512)) # (batch_size, encoder_sequence, d_model) out, attn = temp_mha(y, k=y, q=y, mask=None) out.shape, attn.shape # activation – the activation function of encoder/decoder intermediate layer, relu or gelu (default=relu). 
# Point wise feed forward network def point_wise_feed_forward_network(d_model, dff): return tf.keras.Sequential([ tf.keras.layers.Dense(dff, activation='relu'), # (batch_size, seq_len, dff) tf.keras.layers.Dense(d_model) # (batch_size, seq_len, d_model) ]) # Point wise feed forward network test sample_ffn = point_wise_feed_forward_network(512, 2048) sample_ffn(tf.random.uniform((64, 50, 512))).shape ``` ### Encoder and Decoder ``` # Encoder Layer class EncoderLayer(tf.keras.layers.Layer): def __init__(self, d_model, num_heads, dff, rate=0.1): super(EncoderLayer, self).__init__() self.mha = MultiHeadAttention(d_model, num_heads) self.ffn = point_wise_feed_forward_network(d_model, dff) self.layernorm1 = tf.keras.layers.BatchNormalization(epsilon=1e-6) self.layernorm2 = tf.keras.layers.BatchNormalization(epsilon=1e-6) self.dropout1 = tf.keras.layers.Dropout(rate) self.dropout2 = tf.keras.layers.Dropout(rate) def call(self, x, training, mask): attn_output, _ = self.mha(x, x, x, mask) # (batch_size, input_seq_len, d_model) attn_output = self.dropout1(attn_output, training=training) out1 = self.layernorm1(x + attn_output) ffn_output = self.ffn(out1) # (batch_size, input_seq_len, d_model) ffn_output = self.dropout2(ffn_output, training=training) out2 = self.layernorm2(out1 + ffn_output) # (batch_size, input_seq_len, d_model) return out2 # Encoder Layer Test sample_encoder_layer = EncoderLayer(512, 8, 2048) sample_encoder_layer_output = sample_encoder_layer( tf.random.uniform((64, 43, 512)), False, None) sample_encoder_layer_output.shape # (batch_size, input_seq_len, d_model) # Decoder Layer class DecoderLayer(tf.keras.layers.Layer): def __init__(self, d_model, num_heads, dff, rate=0.1): super(DecoderLayer, self).__init__() self.mha1 = MultiHeadAttention(d_model, num_heads) self.mha2 = MultiHeadAttention(d_model, num_heads) self.ffn = point_wise_feed_forward_network(d_model, dff) self.layernorm1 = tf.keras.layers.BatchNormalization(epsilon=1e-6) self.layernorm2 = 
tf.keras.layers.BatchNormalization(epsilon=1e-6) self.layernorm3 = tf.keras.layers.BatchNormalization(epsilon=1e-6) self.dropout1 = tf.keras.layers.Dropout(rate) self.dropout2 = tf.keras.layers.Dropout(rate) self.dropout3 = tf.keras.layers.Dropout(rate) def call(self, x, enc_output, training, look_ahead_mask, padding_mask): # enc_output.shape == (batch_size, input_seq_len, d_model) attn1, attn_weights_block1 = self.mha1(x, x, x, look_ahead_mask) attn1 = self.dropout1(attn1, training=training) out1 = self.layernorm1(attn1 + x) attn2, attn_weights_block2 = self.mha2( enc_output, enc_output, out1, padding_mask) attn2 = self.dropout2(attn2, training=training) out2 = self.layernorm2(attn2 + out1) ffn_output = self.ffn(out2) ffn_output = self.dropout3(ffn_output, training=training) out3 = self.layernorm3(ffn_output + out2) return out3, attn_weights_block1, attn_weights_block2 # Decoder layer test sample_decoder_layer = DecoderLayer(512, 8, 2048) sample_decoder_layer_output, _, _ = sample_decoder_layer( tf.random.uniform((64, 50, 512)), sample_encoder_layer_output, False, None, None) sample_decoder_layer_output.shape # (batch_size, target_seq_len, d_model) # Encoder class Encoder(tf.keras.layers.Layer): def __init__(self, num_layers, d_model, num_heads, dff, max_len=5000, rate=0.1): super(Encoder, self).__init__() self.d_model = d_model self.num_layers = num_layers self.embedding = tf.keras.layers.Dense(d_model, use_bias=False) self.pos_encoding = positional_encoding(max_len, self.d_model) self.enc_layers = [EncoderLayer(d_model, num_heads, dff, rate) for _ in range(num_layers)] self.dropout = tf.keras.layers.Dropout(rate) def call(self, x, training, mask): seq_len = tf.shape(x)[1] # adding embedding and position encoding x = self.embedding(x) # (batch_size, input_seq_len, d_model) x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32)) x += self.pos_encoding[:, :seq_len, :] x = self.dropout(x, training=training) for i in range(self.num_layers): x = self.enc_layers[i](x, 
training, mask) return x sample_encoder = Encoder(num_layers=2, d_model=512, num_heads=8, dff=2048) sample_encoder_output = sample_encoder(tf.random.uniform((64, 62,1)), training=False, mask=None) print (sample_encoder_output.shape) # (batch_size, input_seq_len, d_model) # Decoder class Decoder(tf.keras.layers.Layer): def __init__(self, num_layers, d_model, num_heads, dff, max_len=5000, rate=0.1): super(Decoder, self).__init__() self.d_model = d_model self.num_layers = num_layers self.embedding = tf.keras.layers.Dense(d_model, use_bias=False) self.pos_encoding = positional_encoding(max_len, self.d_model) self.dec_layers = [DecoderLayer(d_model, num_heads, dff, rate) for _ in range(num_layers)] self.dropout = tf.keras.layers.Dropout(rate) def call(self, x, enc_output, training, look_ahead_mask, padding_mask): seq_len = tf.shape(x)[1] attention_weights = {} x = self.embedding(x) x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32)) x += self.pos_encoding[:, :seq_len, :] x = self.dropout(x, training=training) for i in range(self.num_layers): x, block1, block2 = self.dec_layers[i](x, enc_output, training, look_ahead_mask, padding_mask) attention_weights['decoder_layer{}_block1'.format(i+1)] = block1 attention_weights['decoder_layer{}_block2'.format(i+1)] = block2 return x, attention_weights sample_decoder = Decoder(num_layers=2, d_model=512, num_heads=8, dff=2048) output, attn = sample_decoder(tf.random.uniform((64, 26,3)), enc_output=sample_encoder_output, training=False, look_ahead_mask=None, padding_mask=None) output.shape, attn['decoder_layer2_block2'].shape ``` ### Transfomer for TS ``` class Transformer(tf.keras.Model): def __init__(self, num_layers, d_model, num_heads, dff, out_dim, max_len=5000, rate=0.1): super(Transformer, self).__init__() self.encoder = Encoder(num_layers, d_model, num_heads, dff, max_len, rate) self.decoder = Decoder(num_layers, d_model, num_heads, dff, max_len, rate) self.final_layer = tf.keras.layers.Dense(out_dim) def call(self, inp, tar, 
training, enc_padding_mask, look_ahead_mask, dec_padding_mask):
        enc_output = self.encoder(inp, training, enc_padding_mask)
        dec_output, attention_weights = self.decoder(
            tar, enc_output, training, look_ahead_mask, dec_padding_mask)
        final_output = self.final_layer(dec_output)
        return final_output, attention_weights

sample_transformer = Transformer(
    num_layers=2, d_model=512, num_heads=8, dff=2048, out_dim=1)

temp_input = tf.random.uniform((64, 62, 1))
temp_target = tf.random.uniform((64, 23, 1))

fn_out, _ = sample_transformer(temp_input, temp_target, training=False,
                               enc_padding_mask=None,
                               look_ahead_mask=None,
                               dec_padding_mask=None)
fn_out.shape

# Set hyperparameters
# Shall we base these on the standard Transformer defaults?
# d_model – the number of expected features in the encoder/decoder inputs (default=512).
# nhead – the number of heads in the multiheadattention models (default=8).
# num_encoder_layers – the number of sub-encoder-layers in the encoder & decoder (default=6).
# num_decoder_layers – the number of sub-decoder-layers in the decoder (default=6).
# dff(dim_feedforward) – the dimension of the feedforward network model (default=2048).
# dropout – the dropout value (default=0.1).
num_layers = 1
d_model = 64
dff = 256
num_heads = 8
dropout_rate = 0.1

input_sequence_length = 4320-(24*4+23)  # Length of the sequence used by the encoder
target_sequence_length = 24*4+23        # Length of the sequence predicted by the decoder

batch_size = 2**11
train_dataset = train_dataset.batch(batch_size)
val_dataset = val_dataset.batch(batch_size)

# Optimizer
class CustomSchedule(tf.keras.optimizers.schedules.LearningRateSchedule):
    def __init__(self, d_model, warmup_steps=4000):
        super(CustomSchedule, self).__init__()
        self.d_model = tf.cast(d_model, tf.float32)
        self.warmup_steps = warmup_steps

    def __call__(self, step):
        arg1 = tf.math.rsqrt(step)
        arg2 = step * (self.warmup_steps ** -1.5)
        return tf.math.rsqrt(self.d_model) * tf.math.minimum(arg1, arg2)

learning_rate = CustomSchedule(64)
optimizer = tf.keras.optimizers.Adam(learning_rate, beta_1=0.9, beta_2=0.98,
                                     epsilon=1e-9)

temp_learning_rate_schedule = CustomSchedule(512)

plt.plot(temp_learning_rate_schedule(tf.range(40000, dtype=tf.float32)))
plt.ylabel("Learning Rate")
plt.xlabel("Train Step")

# Loss and metrics
loss_object = tf.keras.losses.MeanAbsoluteError()

def loss_function(real, pred):
    mask = tf.math.logical_not(tf.math.equal(real, 0))
    loss_ = loss_object(real, pred)

    mask = tf.cast(mask, dtype=loss_.dtype)
    loss_ *= mask

    return tf.reduce_mean(loss_)

train_loss = tf.keras.metrics.Mean(name='train_loss')
#train_accuracy = tf.keras.metrics.mean_absolute_error()
test_loss = tf.keras.metrics.Mean(name='test_loss')

# Training and checkpoint
transformer = Transformer(num_layers, d_model, num_heads, dff,
                          out_dim=1, rate=dropout_rate)

def create_masks(inp, tar):
    # Encoder padding mask
    enc_padding_mask = create_padding_mask(inp)

    # Used in the 2nd attention block in the decoder.
    # This padding mask is used to mask the encoder outputs.
    dec_padding_mask = create_padding_mask(inp)

    # Used in the 1st attention block in the decoder.
    # It is used to pad and mask future tokens in the input received by
    # the decoder.
    look_ahead_mask = create_look_ahead_mask(tf.shape(tar)[1])
    dec_target_padding_mask = create_padding_mask(tar)
    combined_mask = tf.maximum(dec_target_padding_mask, look_ahead_mask)
    return enc_padding_mask, combined_mask, dec_padding_mask

# Checkpoint
checkpoint_path = "./checkpoints/train"
ckpt = tf.train.Checkpoint(transformer=transformer, optimizer=optimizer)
ckpt_manager = tf.train.CheckpointManager(ckpt, checkpoint_path, max_to_keep=5)

# If a checkpoint exists, restore the latest checkpoint.
if ckpt_manager.latest_checkpoint:
    ckpt.restore(ckpt_manager.latest_checkpoint)
    print('Latest checkpoint restored!!')

# EPOCHS
EPOCHS = 3000

@tf.function
def train_step(inp, tar):
    last_inp = tf.expand_dims(inp[:,0,:], -1)
    tar_inp = tf.concat([last_inp, tar[:,:-1,:]], axis=1)
    tar_real = tar
    #enc_padding_mask, combined_mask, dec_padding_mask = create_masks(inp, tar_inp)
    look_ahead_mask = create_look_ahead_mask(tf.shape(tar)[1])

    with tf.GradientTape() as tape:
        predictions, _ = transformer(inp, tar_inp, True, None, look_ahead_mask, None)
        loss = loss_function(tar_real, predictions)

    gradients = tape.gradient(loss, transformer.trainable_variables)
    optimizer.apply_gradients(zip(gradients, transformer.trainable_variables))
    train_loss(loss)
    #train_accuracy(tar_real, predictions)

@tf.function
def test_step(inp, tar):
    last_inp = tf.expand_dims(inp[:,0,:], -1)
    tar_inp = tf.concat([last_inp, tar[:,:-1,:]], axis=1)
    tar_real = tar
    look_ahead_mask = create_look_ahead_mask(tf.shape(tar)[1])

    # Evaluation only: no GradientTape and no optimizer update here.
    predictions, _ = transformer(inp, tar_inp, False, None, look_ahead_mask, None)
    loss = loss_function(tar_real, predictions)
    test_loss(loss)

# Run the val_dataset through the model to predict the validation window
for epoch
in range(EPOCHS): start = time.time() train_loss.reset_states() test_loss.reset_states() # validation: for (batch, (inp, tar)) in enumerate(val_dataset): #print(inp, tar) test_step(inp, tar) if (epoch + 1) % 5 == 0: ckpt_save_path = ckpt_manager.save() print ('Saving checkpoint for epoch {} at {}'.format(epoch+1, ckpt_save_path)) #print ('Epoch {} Train Loss {:.4f}'.format(epoch + 1, #train_loss.result())) #train_accuracy.result())) print ('Epoch {} Test Loss {:.4f}'.format(epoch + 1, test_loss.result())) print ('Time taken for 1 epoch: {} secs\n'.format(time.time() - start)) MAX_LENGTH = target_sequence_length def evaluate(inp): encoder_input = inp #print(encoder_input) output = tf.expand_dims(encoder_input[:,-1,:],-1) #print(output) for i in range(MAX_LENGTH): look_ahead_mask = create_look_ahead_mask(tf.shape(output)[1]) predictions, attention_weights = transformer(encoder_input, output, False, None, look_ahead_mask, None) # select the last word from the seq_len dimension predictions = predictions[: ,-1:, :] # (batch_size, 1) #print("pred:", predictions) # output = tf.concat([output, predictions], axis=1) #print(output) return tf.squeeze(output, axis=0), attention_weights def mape(y_pred, y_true): return np.mean(np.abs((y_true - y_pred) / y_true)) * 100 from sklearn.metrics import mean_squared_error from sklearn.metrics import mean_absolute_error encode_series = encoder_input_val_data[0:1,:,:] #print(encode_series) pred_series, _ = evaluate(encode_series) pred_series = np.array([pred_series]) encode_series = encode_series.reshape(-1,1) pred_series = pred_series.reshape(-1,1)[1:,:] target_series = decoder_target_val_data[0,:,:1].reshape(-1,1) encode_series_tail = np.concatenate([encode_series[-999:],target_series[:1]]) x_encode = encode_series_tail.shape[0] print(mape(pred_series[:24*4+23-23]*80846+81652.04075, target_series*80846+81652.04075)) print(mean_squared_error(target_series*80846+81652.04075, pred_series[:24*4+23-23]*80846+81652.04075)) 
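# Aside: train_step/test_step above feed the decoder the target shifted right
# by one step, seeded with an encoder value (teacher forcing). A plain-list
# sketch of that shift (names here are illustrative only):
seed = [9]                         # stands in for the encoder value inp[:, 0, :]
target = [1, 2, 3, 4]
decoder_input = seed + target[:-1]
assert decoder_input == [9, 1, 2, 3]  # the model sees step t-1 when predicting step t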
print(mean_absolute_error(target_series*80846+81652.04075,
                          pred_series[:24*4+23-23]*80846+81652.04075))

x_encode

# Compare with the actual prices to see how they differ, and calibrate accordingly.
plt.figure(figsize=(20,6))
plt.plot(range(1, x_encode+1), encode_series_tail*80846+81652.04075)
plt.plot(range(x_encode, x_encode+pred_steps-23), target_series*80846+81652.04075, color='orange')
plt.plot(range(x_encode, x_encode+pred_steps-23), pred_series[:24*4+23-23]*80846+81652.04075, color='teal', linestyle='--')
plt.title('Encoder Series Tail of Length %d, Target Series, and Predictions' % 1000)
plt.legend(['Encoding Series','Target Series','Predictions'])
```

#Prophet

```
import pandas as pd
from fbprophet import Prophet
import matplotlib.pyplot as plt
import numpy as np

df = pd.read_csv("/content/drive/MyDrive/Colab Notebooks/Data/삼성전자_6M_NonST_Version1.csv", encoding='CP949')
df = df.drop(df.columns[0], axis=1)
df.columns = ["ds","y"]
df["ds"] = pd.to_datetime(df["ds"], dayfirst=True)
df.head()

m = Prophet()
m.fit(df[:-24*4])

future = m.make_future_dataframe(freq='H', periods=24*4)
future.tail()

forecast = m.predict(future)
forecast[['ds', 'yhat']].tail()

plt.figure(figsize=(20,5))
plt.plot(df["y"][1784:], label="real")
plt.plot(range(4320-24*4,4320), forecast['yhat'][-24*4:], label="Prophet")
plt.plot(range(4320-24*4,4320), pred_series[:24*4+23-23]*80846+81652.04075, label="Transformer")
plt.legend()
plt.show()
```

#LSTMa

```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from tqdm import trange
import random

data = pd.read_csv("/content/drive/MyDrive/Colab Notebooks/Data/삼성전자_6M_NonST_Version1.csv", encoding='CP949')
data.head()

from sklearn.preprocessing import MinMaxScaler
min_max_scaler = MinMaxScaler()
data["종가"] = min_max_scaler.fit_transform(data["종가"].to_numpy().reshape(-1,1))

train = data[:-24*4]
train = train["종가"].to_numpy()
test = data[-24*4:]
test = test["종가"].to_numpy()

import torch
import torch.nn as nn
from torch import optim
import torch.nn.functional as F

device =
torch.device("cuda", index=0) class lstm_encoder(nn.Module): def __init__(self, input_size, hidden_size, num_layers = 1): super(lstm_encoder, self).__init__() self.input_size = input_size self.hidden_size = hidden_size self.num_layers = num_layers self.lstm = nn.LSTM(input_size = input_size, hidden_size = hidden_size, num_layers = num_layers, batch_first=True) def forward(self, x_input): lstm_out, self.hidden = self.lstm(x_input) return lstm_out, self.hidden class lstm_decoder(nn.Module): def __init__(self, input_size, hidden_size, num_layers = 1): super(lstm_decoder, self).__init__() self.input_size = input_size self.hidden_size = hidden_size self.num_layers = num_layers self.lstm = nn.LSTM(input_size = input_size, hidden_size = hidden_size,num_layers = num_layers, batch_first=True) self.linear = nn.Linear(hidden_size, input_size) def forward(self, x_input, encoder_hidden_states): lstm_out, self.hidden = self.lstm(x_input.unsqueeze(-1), encoder_hidden_states) output = self.linear(lstm_out) return output, self.hidden class lstm_encoder_decoder(nn.Module): def __init__(self, input_size, hidden_size): super(lstm_encoder_decoder, self).__init__() self.input_size = input_size self.hidden_size = hidden_size self.encoder = lstm_encoder(input_size = input_size, hidden_size = hidden_size) self.decoder = lstm_decoder(input_size = input_size, hidden_size = hidden_size) def forward(self, inputs, targets, target_len, teacher_forcing_ratio): batch_size = inputs.shape[0] input_size = inputs.shape[2] outputs = torch.zeros(batch_size, target_len, input_size) _, hidden = self.encoder(inputs) decoder_input = inputs[:,-1, :] for t in range(target_len): out, hidden = self.decoder(decoder_input, hidden) out = out.squeeze(1) if random.random() < teacher_forcing_ratio: decoder_input = targets[:, t, :] else: decoder_input = out outputs[:,t,:] = out return outputs def predict(self, inputs, target_len): inputs = inputs.unsqueeze(0) self.eval() batch_size = inputs.shape[0] input_size = 
        inputs.shape[2]
        outputs = torch.zeros(batch_size, target_len, input_size)
        _, hidden = self.encoder(inputs)
        decoder_input = inputs[:,-1, :]
        for t in range(target_len):
            out, hidden = self.decoder(decoder_input, hidden)
            out = out.squeeze(1)
            decoder_input = out
            outputs[:,t,:] = out
        return outputs.detach().numpy()[0,:,0]

from torch.utils.data import DataLoader, Dataset

class windowDataset(Dataset):
    def __init__(self, y, input_window=80, output_window=20, stride=5):
        # total number of data points
        L = y.shape[0]
        # total number of samples produced when sliding by `stride`
        num_samples = (L - input_window - output_window) // stride + 1
        # inputs and outputs
        X = np.zeros([input_window, num_samples])
        Y = np.zeros([output_window, num_samples])
        for i in np.arange(num_samples):
            start_x = stride*i
            end_x = start_x + input_window
            X[:,i] = y[start_x:end_x]
            start_y = stride*i + input_window
            end_y = start_y + output_window
            Y[:,i] = y[start_y:end_y]
        X = X.reshape(X.shape[0], X.shape[1], 1).transpose((1,0,2))
        Y = Y.reshape(Y.shape[0], Y.shape[1], 1).transpose((1,0,2))
        self.x = X
        self.y = Y
        self.len = len(X)

    def __getitem__(self, i):
        return self.x[i], self.y[i]

    def __len__(self):
        return self.len

iw = 24*8
ow = 24*4

train_dataset = windowDataset(train, input_window=iw, output_window=ow, stride=1)
train_loader = DataLoader(train_dataset, batch_size=64)
# y_train_loader = DataLoader(y_train, batch_size=5)

model = lstm_encoder_decoder(input_size=1, hidden_size=16).to(device)
# model.train_model(X_train.to(device), y_train.to(device), n_epochs=100, target_len=ow, batch_size=5, training_bprediction="mixed_teacher_forcing", teacher_forcing_ratio=0.6, learning_rate=0.01, dynamic_tf=False)
# With 5000 epochs it takes too long and the error is larger, so we reduce it to 100.
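# Aside: a quick sanity check of the sliding-window count used by windowDataset
# above, num_samples = (L - input_window - output_window) // stride + 1:
def n_windows(L, input_window, output_window, stride):
    return (L - input_window - output_window) // stride + 1

assert n_windows(100, 80, 20, 5) == 1   # exactly one window fits
assert n_windows(105, 80, 20, 5) == 2   # one extra stride adds one sample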
learning_rate=0.01 epoch = 100 optimizer = optim.Adam(model.parameters(), lr = learning_rate) criterion = nn.MSELoss() from tqdm import tqdm model.train() with tqdm(range(epoch)) as tr: for i in tr: total_loss = 0.0 for x,y in train_loader: optimizer.zero_grad() x = x.to(device).float() y = y.to(device).float() output = model(x, y, ow, 0.6).to(device) loss = criterion(output, y) loss.backward() optimizer.step() total_loss += loss.cpu().item() tr.set_postfix(loss="{0:.5f}".format(total_loss/len(train_loader))) predict = model.predict(torch.tensor(train_dataset[0][0]).to(device).float(), target_len=ow) real = train_dataset[0][1] predict = model.predict(torch.tensor(train[-24*4*2:]).reshape(-1,1).to(device).float(), target_len=ow) real = data["종가"].to_numpy() predict = min_max_scaler.inverse_transform(predict.reshape(-1,1)) real = min_max_scaler.inverse_transform(real.reshape(-1,1)) real.shape plt.figure(figsize=(20,5)) plt.plot(range(3319,4320), real[3320:], label="real") plt.plot(range(4320-24*4,4320), predict[-24*4:], label="LSTMa") plt.plot(range(4320-24*4,4320),forecast['yhat'][-24*4:], label="Prophet") plt.plot(range(4320-24*4,4320),pred_series[:24*4+23-23]*80846+81652.04075, label="Transformer") plt.legend() plt.show() ``` #Informer ``` !git clone https://github.com/zhouhaoyi/Informer2020.git from google.colab import drive drive.mount('/content/drive') import sys if not 'Informer2020' in sys.path: sys.path += ['Informer2020'] import pandas as pd import numpy as np import matplotlib.pyplot as plt from sklearn.preprocessing import MinMaxScaler from datetime import timedelta import torch from torch import nn from torch import optim from torch.utils.data import DataLoader, Dataset from tqdm import tqdm from models.model import Informer class StandardScaler(): def __init__(self): self.mean = 0. self.std = 1. 
def fit(self, data): self.mean = data.mean(0) self.std = data.std(0) def transform(self, data): mean = torch.from_numpy(self.mean).type_as(data).to(data.device) if torch.is_tensor(data) else self.mean std = torch.from_numpy(self.std).type_as(data).to(data.device) if torch.is_tensor(data) else self.std return (data - mean) / std def inverse_transform(self, data): mean = torch.from_numpy(self.mean).type_as(data).to(data.device) if torch.is_tensor(data) else self.mean std = torch.from_numpy(self.std).type_as(data).to(data.device) if torch.is_tensor(data) else self.std return (data * std) + mean def time_features(dates, freq='h'): dates['month'] = dates.date.apply(lambda row:row.month,1) dates['day'] = dates.date.apply(lambda row:row.day,1) dates['weekday'] = dates.date.apply(lambda row:row.weekday(),1) dates['hour'] = dates.date.apply(lambda row:row.hour,1) dates['minute'] = dates.date.apply(lambda row:row.minute,1) dates['minute'] = dates.minute.map(lambda x:x//15) freq_map = { 'y':[],'m':['month'],'w':['month'],'d':['month','day','weekday'], 'b':['month','day','weekday'],'h':['month','day','weekday','hour'], 't':['month','day','weekday','hour','minute'], } return dates[freq_map[freq.lower()]].values def _process_one_batch(batch_x, batch_y, batch_x_mark, batch_y_mark): batch_x = batch_x.float().to(device) batch_y = batch_y.float() batch_x_mark = batch_x_mark.float().to(device) batch_y_mark = batch_y_mark.float().to(device) dec_inp = torch.zeros([batch_y.shape[0], pred_len, batch_y.shape[-1]]).float() dec_inp = torch.cat([batch_y[:,:label_len,:], dec_inp], dim=1).float().to(device) outputs = model(batch_x, batch_x_mark, dec_inp, batch_y_mark) batch_y = batch_y[:,-pred_len:,0:].to(device) return outputs, batch_y class Dataset_Pred(Dataset): def __init__(self, dataframe, size=None, scale=True): self.seq_len = size[0] self.label_len = size[1] self.pred_len = size[2] self.dataframe = dataframe self.scale = scale self.__read_data__() def __read_data__(self): self.scaler = 
        StandardScaler()
        df_raw = self.dataframe
        df_raw["date"] = pd.to_datetime(df_raw["date"])
        delta = df_raw["date"].iloc[1] - df_raw["date"].iloc[0]
        if delta >= timedelta(hours=1):
            self.freq = 'h'
        else:
            self.freq = 't'
        border1 = 0
        border2 = len(df_raw)
        cols_data = df_raw.columns[1:]
        df_data = df_raw[cols_data]
        if self.scale:
            self.scaler.fit(df_data.values)
            data = self.scaler.transform(df_data.values)
        else:
            data = df_data.values
        tmp_stamp = df_raw[['date']][border1:border2]
        tmp_stamp['date'] = pd.to_datetime(tmp_stamp.date)
        pred_dates = pd.date_range(tmp_stamp.date.values[-1], periods=self.pred_len+1, freq=self.freq)
        df_stamp = pd.DataFrame(columns=['date'])
        df_stamp.date = list(tmp_stamp.date.values) + list(pred_dates[1:])
        data_stamp = time_features(df_stamp, freq=self.freq)
        self.data_x = data[border1:border2]
        self.data_y = data[border1:border2]
        self.data_stamp = data_stamp

    def __getitem__(self, index):
        s_begin = index
        s_end = s_begin + self.seq_len
        r_begin = s_end - self.label_len
        r_end = r_begin + self.label_len + self.pred_len
        seq_x = self.data_x[s_begin:s_end]
        seq_y = self.data_y[r_begin:r_end]
        seq_x_mark = self.data_stamp[s_begin:s_end]
        seq_y_mark = self.data_stamp[r_begin:r_end]
        return seq_x, seq_y, seq_x_mark, seq_y_mark

    def __len__(self):
        return len(self.data_x) - self.seq_len - self.pred_len + 1

data = pd.read_csv("/content/drive/MyDrive/Colab Notebooks/Data/삼성전자_6M_NonST_Version1.csv", encoding='CP949')
data.head()

data["date"] = data["날짜"]
data["date"] = pd.to_datetime(data["date"], dayfirst=True)
data["value"] = data["종가"]
min_max_scaler = MinMaxScaler()
data["value"] = min_max_scaler.fit_transform(data["value"].to_numpy().reshape(-1,1)).reshape(-1)
data = data[["date", "value"]]
data_train = data.iloc[:-24*4].copy()

pred_len = 24*4
seq_len = pred_len    # encoder input length
label_len = pred_len  # length of known history the decoder sees
pred_len = pred_len   # length to predict
batch_size = 10
shuffle_flag = True
num_workers = 0
drop_last = True

dataset = Dataset_Pred(dataframe=data_train, scale=True, size=(seq_len, label_len, pred_len))
data_loader = DataLoader(dataset, batch_size=batch_size, shuffle=shuffle_flag, num_workers=num_workers, drop_last=drop_last)

enc_in = 1
dec_in = 1
c_out = 1
device = torch.device("cuda:0")
model = Informer(enc_in, dec_in, c_out, seq_len, label_len, pred_len, device=device).to(device)

learning_rate = 1e-4
criterion = nn.MSELoss()
model_optim = optim.Adam(model.parameters(), lr=learning_rate)

# For Informer, 100 epochs takes less time and also gives a smaller error.
train_epochs = 100
model.train()
progress = tqdm(range(train_epochs))
for epoch in progress:
    train_loss = []
    for i, (batch_x, batch_y, batch_x_mark, batch_y_mark) in enumerate(data_loader):
        model_optim.zero_grad()
        pred, true = _process_one_batch(batch_x, batch_y, batch_x_mark, batch_y_mark)
        loss = criterion(pred, true)
        train_loss.append(loss.item())
        loss.backward()
        model_optim.step()
    train_loss = np.average(train_loss)
    progress.set_description("loss: {:0.6f}".format(train_loss))

import time
now = time.time()

scaler = dataset.scaler
df_test = data_train.copy()
df_test["value"] = scaler.transform(df_test["value"])
df_test["date"] = pd.to_datetime(df_test["date"].values)
delta = df_test["date"][1] - df_test["date"][0]
for i in range(pred_len):
    df_test = df_test.append({"date": df_test["date"].iloc[-1]+delta}, ignore_index=True)
df_test = df_test.fillna(0)

df_test_x = df_test.iloc[-seq_len-pred_len:-pred_len].copy()
df_test_y = df_test.iloc[-label_len-pred_len:].copy()
df_test_numpy = df_test.to_numpy()[:,1:].astype("float")

test_time_x = time_features(df_test_x, freq=dataset.freq)  # input timestamps
test_data_x = df_test_numpy[-seq_len-pred_len:-pred_len]   # input data
test_time_y = time_features(df_test_y, freq=dataset.freq)  # output timestamps
test_data_y = df_test_numpy[-label_len-pred_len:]
test_data_y[-pred_len:] = np.zeros_like(test_data_y[-pred_len:])  # zero-fill the part to be predicted
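# Aside: the decoder input assembled above (and in _process_one_batch) is
# `label_len` known values followed by `pred_len` zero placeholders for the
# slots to be predicted. A plain-list sketch:
label_part = [5, 6, 7]                 # known history handed to the decoder
dec_inp_sketch = label_part + [0] * 4  # pred_len = 4 zeros to fill in
assert dec_inp_sketch == [5, 6, 7, 0, 0, 0, 0]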
test_time_x = test_time_x test_time_y = test_time_y test_data_y = test_data_y.astype(np.float64) test_data_x = test_data_x.astype(np.float64) _test = [(test_data_x,test_data_y,test_time_x,test_time_y)] _test_loader = DataLoader(_test,batch_size=1,shuffle=False) preds = [] with torch.no_grad(): for i, (batch_x,batch_y,batch_x_mark,batch_y_mark) in enumerate(_test_loader): batch_x = batch_x.float().to(device) batch_y = batch_y.float().to(device) batch_x_mark = batch_x_mark.float().to(device) batch_y_mark = batch_y_mark.float().to(device) outputs = model(batch_x, batch_x_mark, batch_y, batch_y_mark) preds = outputs.detach().cpu().numpy() preds = scaler.inverse_transform(preds[0]) df_test.iloc[-pred_len:, 1:] = preds print(time.time() - now) import matplotlib.pyplot as plt real = data["value"].to_numpy() result = df_test["value"].iloc[-24*4:].to_numpy() real = min_max_scaler.inverse_transform(real.reshape(-1,1)).reshape(-1) result = min_max_scaler.inverse_transform(result.reshape(-1,1)).reshape(-1) plt.figure(figsize=(20,5)) plt.plot(range(3319,4320),real[3320:], label="real") plt.plot(range(4320-24*4,4320),result, label="Informer") plt.plot(range(4320-24*4,4320), predict[-24*4:], label="LSTMa") plt.plot(range(4320-24*4,4320),forecast['yhat'][-24*4:], label="Prophet") plt.plot(range(4320-24*4,4320),pred_series[:24*4+23-23]*80846+81652.04075, label="Transformer") plt.legend() plt.show() ``` #ARIMA ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt df = pd.read_csv("/content/drive/MyDrive/Colab Notebooks/Data/삼성전자_6M_NonST_Version1.csv", encoding='CP949') df = df.drop(df.columns[0], axis=1) df.columns = ["ds","y"] df.head() df_train = df.iloc[:-24*4] from statsmodels.tsa.seasonal import seasonal_decompose import statsmodels.api as sm fig = plt.figure(figsize=(20,8)) ax1 = fig.add_subplot(211) fig = sm.graphics.tsa.plot_acf(df_train["y"], lags=20, ax=ax1) fig = plt.figure(figsize=(20,8)) ax1 = fig.add_subplot(212) fig = 
sm.graphics.tsa.plot_pacf(df_train["y"], lags=20, ax=ax1)

from statsmodels.tsa.arima_model import ARIMA
from statsmodels.tsa.statespace.sarimax import SARIMAX
import itertools
from tqdm import tqdm

p = range(0,3)
d = range(1,2)
q = range(0,6)
m = 24
pdq = list(itertools.product(p,d,q))
seasonal_pdq = [(x[0], x[1], x[2], m) for x in list(itertools.product(p,d,q))]

aic = []
params = []
with tqdm(total=len(pdq) * len(seasonal_pdq)) as pg:
    for i in pdq:
        for j in seasonal_pdq:
            pg.update(1)
            try:
                model = SARIMAX(df_train["y"], order=i, seasonal_order=j)
                model_fit = model.fit()
                # print("SARIMA:{}{}, AIC:{}".format(i, j, round(model_fit.aic,2)))
                aic.append(round(model_fit.aic,2))
                params.append((i,j))
            except:
                continue

optimal = [(params[i],j) for i,j in enumerate(aic) if j == min(aic)]

model_opt = SARIMAX(df_train["y"], order=optimal[0][0][0], seasonal_order=optimal[0][0][1])
model_opt_fit = model_opt.fit()
model_opt_fit.summary()

model = SARIMAX(df_train["y"], order=optimal[0][0][0], seasonal_order=optimal[0][0][1])
model_fit = model.fit(disp=0)
ARIMA_forecast = model_fit.forecast(steps=24*4)

plt.figure(figsize=(20,5))
plt.plot(range(0,4320), df["y"].iloc[1:], label="Real")
plt.plot(ARIMA_forecast, label="ARIMA")
plt.plot(range(4320-24*4,4320), result, label="Informer")
plt.plot(range(4320-24*4,4320), predict[-24*4:], label="LSTMa")
plt.plot(range(4320-24*4,4320), forecast['yhat'][-24*4:], label="Prophet")
plt.plot(range(4320-24*4,4320), pred_series[:24*4+23-23]*80846+81652.04075, label="Transformer")
plt.legend()
plt.show()

plt.figure(figsize=(20,5))
plt.plot(range(3319,4320), df["y"].iloc[3320:], label="Real")
plt.plot(ARIMA_forecast, label="ARIMA")
plt.plot(range(4320-24*4,4320), result, label="Informer")
plt.plot(range(4320-24*4,4320), predict[-24*4:], label="LSTMa")
plt.plot(range(4320-24*4,4320), forecast['yhat'][-24*4:], label="Prophet")
plt.plot(range(4320-24*4,4320), pred_series[:24*4+23-23]*80846+81652.04075, label="Transformer")
plt.legend()
plt.show()

from
sklearn.metrics import mean_squared_error from sklearn.metrics import mean_absolute_error def MAPEval(y_pred, y_true): return np.mean(np.abs((y_true - y_pred) / y_true)) * 100 def MSE(y_true, y_pred): return np.mean(np.square((y_true - y_pred))) def MAE(y_true, y_pred): return np.mean(np.abs((y_true - y_pred))) print('Transformer') print('-' * 40) print('MAPE: {} |\nMSE: {} |\nMAE : {}\n'.format(mape(pred_series[:24*4+23-23]*80846+81652.04075, target_series*80846+81652.04075), mean_squared_error(target_series*80846+81652.04075, pred_series[:24*4+23-23]*80846+81652.04075), mean_absolute_error(target_series*80846+81652.04075, pred_series[:24*4+23-23]*80846+81652.04075))) print('Informer') print('-' * 40) print('MAPE: {} |\nMSE: {} |\nMAE : {}\n'.format(mape(result, real[-24*4:]), mean_squared_error(real[-24*4:], result), mean_absolute_error(real[-24*4:], result))) print('ARIMA') print('-' * 40) print('MAPE: {} |\nMSE: {} |\nMAE : {}\n'.format(mape(ARIMA_forecast, df["y"].iloc[-24*4:]), mean_squared_error(df["y"].iloc[-24*4:], ARIMA_forecast), mean_absolute_error(df["y"].iloc[-24*4:], ARIMA_forecast))) print('Prophet') print('-' * 40) print('MAPE: {} |\nMSE: {} |\nMAE : {}\n'.format(mape(forecast['yhat'][4320-24*4:],df["y"][4320-24*4:]), mean_squared_error(df["y"][4320-24*4:], forecast['yhat'][4320-24*4:]), mean_absolute_error(df["y"][4320-24*4:], forecast['yhat'][4320-24*4:]))) print('LSTMa') print('-' * 40) print('MAPE: {} |\nMSE: {} |\nMAE : {}\n'.format(mape(predict[-24*4:],real[-24*4:]), mean_squared_error(real[-24*4:], predict[-24*4:]), mean_absolute_error(real[-24*4:], predict[-24*4:]))) ```
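All five models are scored with the same three errors. As a self-contained reminder of what those numbers mean, here are NumPy re-implementations matching the definitions used above:

```python
import numpy as np

def mape(y_pred, y_true):
    # mean absolute percentage error
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100

def mse(y_true, y_pred):
    # mean squared error
    return np.mean(np.square(y_true - y_pred))

def mae(y_true, y_pred):
    # mean absolute error
    return np.mean(np.abs(y_true - y_pred))

y_true = np.array([100.0, 200.0])
y_pred = np.array([110.0, 190.0])
assert np.isclose(mape(y_pred, y_true), 7.5)   # (10% + 5%) / 2
assert np.isclose(mse(y_true, y_pred), 100.0)  # (10**2 + 10**2) / 2
assert np.isclose(mae(y_true, y_pred), 10.0)
```

Note that MAPE divides by the true values, so it blows up near zero; the price series here stays far from zero, which is why it is a reasonable headline metric.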
``` %load_ext autoreload import os import sys import glob import pandas as pd import numpy as np import matplotlib.pyplot as plt %matplotlib inline from sklearn.model_selection import train_test_split from sklearn.metrics import mean_squared_error, r2_score import torch import torch.nn as nn module_path = os.path.abspath(os.path.join('../../py-conjugated/')) if module_path not in sys.path: sys.path.append(module_path) import morphology_networks as net import model_training as train import model_testing as test import physically_informed_loss_functions as pilf import network_utils as nuts torch.manual_seed(28) train_data_path = '/Volumes/Tatum_SSD-1/Grad_School/m2py/Morphology_labels/OPV_morph_maps/train_set/' test_data_path = '/Volumes/Tatum_SSD-1/Grad_School/m2py/Morphology_labels/OPV_morph_maps/test_set/' train_dataset = nuts.local_OPV_ImDataset(train_data_path) test_dataset = nuts.local_OPV_ImDataset(test_data_path) train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size = 13) test_dataloader = torch.utils.data.DataLoader(test_dataset, batch_size = 10) class ConvolutionalEncoder(nn.Module): """ ConvolutionalEncoder() relies on PyTorch API to convolve and compress image-like data from SPM analyses. 
    The stack of 2-dimensional SPM channels is encoded into a 1-dimensional
    torch.Tensor(), which describes the image encoding in feature-space
    """
    def __init__(self, im_z, fc_nodes):
        super(ConvolutionalEncoder, self).__init__()
        self.z = im_z
        self.fc_nodes = fc_nodes
        self.conv_pool1 = nn.Sequential(
            nn.Conv2d(im_z, 32, kernel_size = 5, stride = 1, padding = 4),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size = 2, stride = 2)
        )
        self.conv_pool2 = nn.Sequential(
            nn.Conv2d(32, 64, kernel_size = 3, stride = 1, padding = 1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size = 2, stride = 2)
        )
        self.conv_pool3 = nn.Sequential(
            nn.Conv2d(64, 128, kernel_size = 3, stride = 1, padding = 1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size = 2, stride = 2),
            nn.Flatten()
        )
        self.linear_layers = nn.Sequential(
            nn.Dropout(),  # to avoid over-fitting
            nn.Linear(self.fc_nodes, 5000),
            nn.ReLU(),
            nn.Linear(5000, 100),
            nn.ReLU()
        )

    def forward(self, im):
        conv_enc = self.conv_pool1(im)
        conv_enc = self.conv_pool2(conv_enc)
        conv_enc = self.conv_pool3(conv_enc)
        linear_enc = self.linear_layers(conv_enc)
        return linear_enc

class ConvolutionalDecoder(nn.Module):
    """
    ConvolutionalDecoder() relies on PyTorch API to convolve and decompress
    encodings of image-like data from SPM analyses.
    The stack of 2-dimensional SPM channels is decoded from a 1-dimensional
    torch.Tensor(), which reconstructs the image from feature-space
    """
    def __init__(self, im_z, fc_nodes):
        super(ConvolutionalDecoder, self).__init__()
        self.z = im_z
        self.fc_nodes = fc_nodes
        self.linear_layers = nn.Sequential(
            nn.Linear(100, 5000),
            nn.ReLU(),
            nn.Linear(5000, self.fc_nodes),
            nn.ReLU(),
        )
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 2, stride = 2),
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 2, stride = 2),
            nn.ReLU(),
            nn.ConvTranspose2d(32, im_z, 2, stride = 2),
            nn.ReLU()
        )

    def forward(self, linear_enc):
        im_enc = self.linear_layers(linear_enc)
        # un-flatten the encoding to (batch, 128, side, side) before the
        # transposed convolutions
        side = int((im_enc.shape[-1] // 128) ** 0.5)
        im_enc = im_enc.view(-1, 128, side, side)
        decoded_image = self.deconv(im_enc)
        return decoded_image

class VAE(nn.Module):
    """
    Variational Autoencoder based on PyTorch module framework.
    VAE takes in SPM images or m2py_labels as a stack of 2-dimensional channels
    """
    def __init__(self, im_z, im_x = 256, im_y = 256):
        super(VAE, self).__init__()
        self.x = im_x
        self.y = im_y
        self.z = im_z
        # After 3 convolution and pooling layers (each decreasing x and y by 50%),
        # there are 128 channels in the flattened output of the encoder.
        # May need modification if stride, padding, convolution layers, or the number
        # of channels in the convolution-and-pooling output are changed.
        self.fc_nodes = int((im_x / (2**3)) * (im_y / (2**3)) * 128)
        self.encoder = ConvolutionalEncoder(self.z, self.fc_nodes)
        self.decoder = ConvolutionalDecoder(self.z, self.fc_nodes)

    def forward(self, original_im):
        encoding = self.encoder(original_im)
        decoded_image = self.decoder(encoding)
        return decoded_image, encoding

def train(model, training_dataset, criterion, optimizer):
    model.train()
    loss_list = []
    batch_iterator = 0
    for images, labels in training_dataset:
        batch_iterator += 1
        print(f'image # {batch_iterator}')
        images = images.to(device)
        # labels = labels.to(device)
        # Run the forward pass
        optimizer.zero_grad()
        decoded_images, encoding = model(images)
        loss = criterion(decoded_images, images)
        torch.autograd.backward(loss)
        optimizer.step()
        loss_list.append(loss.item())
    samples = len(loss_list)
    epoch_loss = sum(loss_list) / samples
    return epoch_loss

def test(model, test_dataset, criterion):
    model.eval()
    with torch.no_grad():
        results_dict = {}
        loss_list = []
        batch_iterator = 0
        for images, labels in test_dataset:
            batch_iterator += 1
            print(f'image # {batch_iterator}')
            images = images.to(device)
            # labels = labels.to(device)
            # Run the forward pass (evaluation only: no gradients, no optimizer step)
            decoded_images, encoding = model(images)
            results_dict[batch_iterator] = {'original': images, 'decoded': decoded_images}
            loss = criterion(decoded_images, images)
            loss_list.append(loss.item())
    samples = len(loss_list)
    epoch_loss = sum(loss_list) / samples
    return epoch_loss, results_dict

model = VAE(im_z = 2)
criterion = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

epochs = np.arange(0, 10, 1)
train_losses = []
test_losses = []
test_results = {}
for ep in epochs:
    train_loss = train(model, train_dataloader, criterion, optimizer)
    train_losses.append(train_loss)
    test_loss, results_dict = test(model, test_dataloader, criterion)
    test_losses.append(test_loss)
    test_results[ep] = results_dict
```
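The `fc_nodes` computation in `VAE.__init__` is just shape bookkeeping: three stride-2 poolings shrink a 256×256 input to roughly 32×32, and the last conv stack emits 128 channels before flattening. A standalone check of the intended arithmetic (note the exponent must be `2**3`, not the XOR `2^3`):

```python
im_x = im_y = 256
fc_nodes = int((im_x / (2 ** 3)) * (im_y / (2 ** 3)) * 128)
assert fc_nodes == 32 * 32 * 128 == 131072
```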
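The state tests below repeatedly compare `reduced_potential` values. As a reminder, the reduced potential of a configuration at temperature T (constant volume, no pressure term) is u = βU with β = 1/(k_B·T). A minimal sketch in molar units, with k_B taken as the molar gas constant R in kJ/(mol·K) (function names here are illustrative only):

```python
KB = 0.008314462618  # molar gas constant R in kJ/(mol*K)

def reduced_potential(potential_kj_mol, temperature_k):
    beta = 1.0 / (KB * temperature_k)  # inverse thermal energy, mol/kJ
    return beta * potential_kj_mol     # dimensionless

# 10 kJ/mol at 300 K is about 4 thermal-energy units
u = reduced_potential(10.0, 300.0)
assert 4.00 < u < 4.02
```

This is the same β·U comparison made against `getPotentialEnergy() * pdf_state.beta` in `test_OpenMMPDFState` below.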
This is a collection of scratch work that i use to organize the tests for coddiwomple ``` from simtk import openmm from openmmtools.testsystems import HarmonicOscillator from coddiwomple.tests.utils import get_harmonic_testsystem from coddiwomple.tests.utils import HarmonicAlchemicalState from simtk import unit import numpy as np testsystem, period, collision_rate, timestep, alchemical_functions = get_harmonic_testsystem() testsystem.system.getNumForces() for force_index in range(testsystem.system.getNumForces()): force = testsystem.system.getForce(force_index) n_global_parameters = force.getNumGlobalParameters() print(n_global_parameters) for term in range(n_global_parameters): print(force.getGlobalParameterName(term)) def test_OpenMMPDFState(): """ conduct a class-wide test on coddiwomple.openmm.states.OpenMMPDFState with the `get_harmonic_testsystem` testsystem this will assert successes on __init__, set_parameters, get_parameters, reduced_potential methods """ temperature = 300 * unit.kelvin pressure = None from coddiwomple.openmm.states import OpenMMPDFState, OpenMMParticleState #create the default get_harmonic_testsystem testsystem, period, collision_rate, timestep, alchemical_functions = get_harmonic_testsystem(temperature = temperature) #test init method pdf_state = OpenMMPDFState(system = testsystem.system, alchemical_composability = HarmonicAlchemicalState, temperature = temperature, pressure = pressure) assert isinstance(pdf_state._internal_context, openmm.Context) print(f"pdf_state parameters: {pdf_state._parameters}") #test set_parameters new_parameters = {key : 1.0 for key, val in pdf_state._parameters.items() if val is not None} pdf_state.set_parameters(new_parameters) #this should set the new parameters, but now we have to make sure that the context actually has those parameters bound swig_parameters = pdf_state._internal_context.getParameters() context_parameters = {q: swig_parameters[q] for q in swig_parameters} assert 
    context_parameters['testsystems_HarmonicOscillator_x0'] == 1.
    assert context_parameters['testsystems_HarmonicOscillator_U0'] == 1.

    #test get_parameters
    returnable_parameters = pdf_state.get_parameters()
    assert len(returnable_parameters) == 2
    assert returnable_parameters['testsystems_HarmonicOscillator_x0'] == 1.
    assert returnable_parameters['testsystems_HarmonicOscillator_U0'] == 1.

    #test reduced_potential
    particle_state = OpenMMParticleState(positions = testsystem.positions) #make a particle state so that we can compute a reduced potential
    reduced_potential = pdf_state.reduced_potential(particle_state)
    externally_computed_reduced_potential = pdf_state._internal_context.getState(getEnergy=True).getPotentialEnergy() * pdf_state.beta
    assert np.isclose(reduced_potential, externally_computed_reduced_potential)

def test_OpenMMParticleState():
    """
    conduct a class-wide test on coddiwomple.openmm.states.OpenMMParticleState with the `get_harmonic_testsystem` testsystem;
    this will assert successes on __init__, as well as _all_ methods in the coddiwomple.particles.Particle class
    """
    temperature = 300 * unit.kelvin
    pressure = None
    from coddiwomple.openmm.states import OpenMMPDFState, OpenMMParticleState
    from coddiwomple.particles import Particle

    #create the default get_harmonic_testsystem
    testsystem, period, collision_rate, timestep, alchemical_functions = get_harmonic_testsystem(temperature = temperature)

    #test __init__ method
    particle_state = OpenMMParticleState(positions = testsystem.positions) #make a particle state
    particle = Particle(index = 0, record_state=False, iteration = 0)

    #test update_state
    assert particle.state is None
    assert not particle._record_states
    particle.update_state(particle_state)
    assert particle.state is not None

    #test update_iteration
    assert particle.iteration == 0
    particle.update_iteration()
    assert particle.iteration == 1

    #test update ancestry
    assert particle.ancestry == [0]
    particle.update_ancestry(1)
    assert particle.ancestry == [0, 1]

    #the rest of the methods are trivial or would be redundant to test

test_OpenMMPDFState()
test_OpenMMParticleState()

def test_OpenMMReporter():
    """
    test the OpenMMReporter object for its ability to make appropriate trajectory writes for particles;
    use the harmonic oscillator testsystem.

    NOTE : this class will conduct dynamics on 5 particles defined by the harmonic oscillator testsystem
    in accordance with the coddiwomple.openmm.propagators.OMMBIP equipped with the
    coddiwomple.openmm.integrators.OMMLI integrator, but will NOT explicitly conduct a full test
    on the propagators or integrators.
    """
    import os
    import shutil
    from coddiwomple.openmm.propagators import OMMBIP
    from coddiwomple.openmm.integrators import OMMLI
    from coddiwomple.openmm.states import OpenMMPDFState, OpenMMParticleState
    from coddiwomple.particles import Particle
    from coddiwomple.openmm.reporters import OpenMMReporter

    temperature = 300 * unit.kelvin
    pressure = None

    #create the default get_harmonic_testsystem
    testsystem, period, collision_rate, timestep, alchemical_functions = get_harmonic_testsystem(temperature = temperature)

    #create a particle state and 5 particles
    particles = []
    for i in range(5):
        particle_state = OpenMMParticleState(positions = testsystem.positions) #make a particle state
        particle = Particle(index = i, record_state=False, iteration = 0)
        particle.update_state(particle_state)
        particles.append(particle)

    #since we are copying over the positions, we need a simple assert statement to make sure that the hex(id(particle_state.positions)) are separate in memory
    position_hexes = [hex(id(particle.state.positions)) for particle in particles]
    assert len(position_hexes) == len(list(set(position_hexes))), f"positions are copied identically; this is a problem"

    #create a pdf_state
    pdf_state = OpenMMPDFState(system = testsystem.system, alchemical_composability = HarmonicAlchemicalState, temperature = temperature, pressure = pressure)

    #create an integrator
    integrator = OMMLI(temperature=temperature, collision_rate=collision_rate, timestep=timestep)

    #create a propagator
    propagator = OMMBIP(openmm_pdf_state = pdf_state, integrator = integrator)

    steps_per_application = 100

    #the only thing we want to do here is to run independent md for each of the particles and save trajectories;
    #at the end, we will delete the directory and the traj files
    temp_traj_dir, temp_traj_prefix = os.path.join(os.getcwd(), 'test_dir'), 'traj_prefix'
    reporter = OpenMMReporter(trajectory_directory = 'test_dir', trajectory_prefix='traj_prefix', md_topology=testsystem.mdtraj_topology)
    assert reporter.write_traj

    num_applications = 10
    for application_index in range(num_applications):
        returnables = [propagator.apply(particle.state, n_steps=100, reset_integrator=True, apply_pdf_to_context=True, randomize_velocities=True) for particle in particles]
        _save = True if application_index == num_applications - 1 else False
        reporter.record(particles, save_to_disk=_save)

    assert reporter.hex_counter == len(reporter.hex_dict)
    assert os.path.exists(temp_traj_dir)
    assert os.listdir(temp_traj_dir) is not None

    #then we can delete
    shutil.rmtree(temp_traj_dir)

def test_OMMLI():
    """
    test OMMLI (OpenMMLangevinIntegrator) in the baoab regime on the harmonic test system;
    specifically, we run MD to convergence and assert that the potential energy of the system
    and the standard deviation thereof is within a specified threshold.
    We also check the accumulation of shadow and proposal work, as well as the ability to reset,
    initialize, and subsume the integrator into an OMMBIP propagator
    """
    import tqdm
    from coddiwomple.openmm.propagators import OMMBIP
    from coddiwomple.openmm.integrators import OMMLI
    from coddiwomple.openmm.states import OpenMMPDFState, OpenMMParticleState
    from coddiwomple.particles import Particle

    temperature = 300 * unit.kelvin
    pressure = None

    #create the default get_harmonic_testsystem
    testsystem, period, collision_rate, timestep, alchemical_functions = get_harmonic_testsystem(temperature = temperature)

    particle_state = OpenMMParticleState(positions = testsystem.positions) #make a particle state
    particle = Particle(index = 0, record_state=False, iteration = 0)
    particle.update_state(particle_state)
    num_applications = 100

    #create a pdf_state
    pdf_state = OpenMMPDFState(system = testsystem.system, alchemical_composability = HarmonicAlchemicalState, temperature = temperature, pressure = pressure)

    #create an integrator
    integrator = OMMLI(temperature=temperature, collision_rate=collision_rate, timestep=timestep)

    #create a propagator
    propagator = OMMBIP(openmm_pdf_state = pdf_state, integrator = integrator)

    #expected reduced potential
    mean_reduced_potential = testsystem.get_potential_expectation(pdf_state) * pdf_state.beta
    std_dev_reduced_potential = testsystem.get_potential_standard_deviation(pdf_state) * pdf_state.beta
    reduced_pe = []

    #some sanity checks for propagator:
    global_integrator_variables_before_integration = propagator._get_global_integrator_variables()
    print(f"starting integrator variables: {global_integrator_variables_before_integration}")

    #some sanity checks for integrator:
    start_proposal_work = propagator.integrator._get_energy_with_units('proposal_work', dimensionless=True)
    start_shadow_work = propagator.integrator._get_energy_with_units('shadow_work', dimensionless=True)
    assert start_proposal_work == global_integrator_variables_before_integration['proposal_work']
    assert start_shadow_work == global_integrator_variables_before_integration['shadow_work']

    for app_num in tqdm.trange(num_applications):
        particle_state, proposal_work = propagator.apply(particle_state, n_steps=20, reset_integrator=False, apply_pdf_to_context=False, randomize_velocities=True)
        assert proposal_work == 0. #this must be the case since we did not pass a 'returnable_key'

        #sanity checks for inter-application methods
        assert propagator.integrator._get_energy_with_units('proposal_work', dimensionless=True) != 0. #this cannot be zero after a step of MD without resets
        assert propagator.integrator._get_energy_with_units('shadow_work', dimensionless=True) != 0. #this cannot be zero after a step of MD without resets
        reduced_pe.append(pdf_state.reduced_potential(particle_state))

    tol = 6 * std_dev_reduced_potential
    calc_mean_reduced_pe = np.mean(reduced_pe)
    calc_stddev_reduced_pe = np.std(reduced_pe)
    assert calc_mean_reduced_pe < mean_reduced_potential + tol and calc_mean_reduced_pe > mean_reduced_potential - tol, \
        f"the mean reduced energy and standard deviation ({calc_mean_reduced_pe}, {calc_stddev_reduced_pe}) is outside the tolerance of a theoretical mean potential energy of {mean_reduced_potential} +/- {tol}"
    print(f"the mean reduced energy/standard deviation is {calc_mean_reduced_pe, calc_stddev_reduced_pe} and the theoretical mean reduced energy and stddev are {mean_reduced_potential}")

    #some cleanup of the integrator
    propagator.integrator.reset() #this should reset proposal, shadow, and ghmc statistics (we omit ghmc stats)
    assert propagator.integrator._get_energy_with_units('proposal_work', dimensionless=True) == 0. #this should be zero after a reset
    assert propagator.integrator._get_energy_with_units('shadow_work', dimensionless=True) == 0. #this should be zero after a reset

def test_OMMBIP():
    """
    test OMMBIP (OpenMMBaseIntegratorPropagator) in the baoab regime on the harmonic test system;
    specifically, we validate the init, apply, _get_global_integrator_variables, and
    _get_context_parameters methods. For the sake of testing all of the internal methods,
    we equip an OMMLI integrator
    """
    from coddiwomple.openmm.propagators import OMMBIP
    from coddiwomple.openmm.integrators import OMMLI
    from coddiwomple.openmm.states import OpenMMPDFState, OpenMMParticleState

    temperature = 300 * unit.kelvin
    pressure = None

    #create the default get_harmonic_testsystem
    testsystem, period, collision_rate, timestep, alchemical_functions = get_harmonic_testsystem(temperature = temperature)

    particle_state = OpenMMParticleState(positions = testsystem.positions) #make a particle state
    num_applications = 100

    #create a pdf_state
    pdf_state = OpenMMPDFState(system = testsystem.system, alchemical_composability = HarmonicAlchemicalState, temperature = temperature, pressure = pressure)

    #create an integrator
    integrator = OMMLI(temperature=temperature, collision_rate=collision_rate, timestep=timestep)

    #create a propagator
    propagator = OMMBIP(openmm_pdf_state = pdf_state, integrator = integrator)

    #check the __init__ method for appropriate equipment
    assert hex(id(propagator.pdf_state)) == hex(id(pdf_state)) #the defined pdf state is tethered to the propagator (this is VERY important for SMC)

    #conduct null application
    prior_reduced_potential = pdf_state.reduced_potential(particle_state)
    return_state, proposal_work = propagator.apply(particle_state, n_steps=0)
    assert proposal_work == 0. #there is no proposal work if returnable_key is None
    assert pdf_state.reduced_potential(particle_state) == prior_reduced_potential
    propagator_state = propagator.context.getState(getEnergy=True)
    assert np.isclose(propagator_state.getPotentialEnergy() * pdf_state.beta, pdf_state.reduced_potential(particle_state))

    #check context update internals
    prior_reduced_potential = pdf_state.reduced_potential(particle_state)
    parameters = pdf_state.get_parameters()

    #change an alchemical parameter
    parameters['testsystems_HarmonicOscillator_U0'] = 1. #update parameter dict
    pdf_state.set_parameters(parameters) #set new params

    _ = propagator.apply(particle_state, n_steps=0, apply_pdf_to_context=False)
    # if we do not apply to context, then the internal_context should not be modified
    assert propagator._get_context_parameters()['testsystems_HarmonicOscillator_U0'] == 0.
    assert np.isclose(propagator.context.getState(getEnergy=True).getPotentialEnergy() * pdf_state.beta, prior_reduced_potential)

    _ = propagator.apply(particle_state, n_steps=0, apply_pdf_to_context=True)
    # if we do apply to context, then the internal_context should be modified
    assert propagator._get_context_parameters()['testsystems_HarmonicOscillator_U0'] == 1.
    assert np.isclose(prior_reduced_potential + 1.0 * unit.kilojoules_per_mole * pdf_state.beta, propagator.context.getState(getEnergy=True).getPotentialEnergy() * pdf_state.beta)

    #check gettable integrator variables
    integrator_vars = propagator._get_global_integrator_variables()

    #check propagator stability with integrator reset and velocity randomization
    _ = propagator.apply(particle_state, n_steps=1000, reset_integrator=True, apply_pdf_to_context=True, randomize_velocities=True)

test_OMMBIP()
```
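The reporter test above guards against aliased particle positions by comparing `hex(id(...))` values across particles. That pattern can be exercised without any OpenMM dependencies; a minimal sketch (the `copy.deepcopy` of a plain list is an illustrative stand-in for copying testsystem positions, not part of the original test):

```python
import copy

# five "particles" that each deep-copy the same template positions
template = [0.0, 0.0, 0.0]
positions = [copy.deepcopy(template) for _ in range(5)]

# every deep copy must live at a distinct memory address
hexes = [hex(id(p)) for p in positions]
assert len(hexes) == len(set(hexes)), "positions are copied identically; this is a problem"

# aliasing the same object instead would collapse the set of addresses
aliased = [template for _ in range(5)]
assert len(set(hex(id(p)) for p in aliased)) == 1
```

If the copies were aliased, mutating one particle's positions would silently mutate them all, which is why the test insists on distinct addresses.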
# ElasticNet with MinMaxScaler & Polynomial Features

This code template is for regression tasks using an ElasticNet linear regression model, combined with MinMaxScaler and the PolynomialFeatures feature transformation technique in a pipeline.

### Required Packages

```
import warnings as wr
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.preprocessing import LabelEncoder
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler, PolynomialFeatures
from sklearn.model_selection import train_test_split
from sklearn.linear_model import ElasticNet
from sklearn.metrics import mean_squared_error, r2_score, mean_absolute_error

wr.filterwarnings('ignore')
```

### Initialization

Filepath of the CSV file

```
#filepath
file_path = ''
```

List of features required for model training.

```
#x_values
features = []
```

Target feature for prediction.

```
#y_value
target = ''
```

### Data Fetching

Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools. We use pandas to read the CSV file from its storage path, and the head function to display the initial rows.

```
df = pd.read_csv(file_path) #reading file
df.head() #displaying initial entries
print('Number of rows are :', df.shape[0], ', and number of columns are :', df.shape[1])
df.columns.tolist()
```

### Data Preprocessing

Since the majority of the machine learning models in the sklearn library don't handle string category data or null values, we have to explicitly remove or replace null values. The snippet below contains functions that remove null values, if any exist, and convert string-class data in the dataset by encoding it to integer classes.

```
def NullClearner(df):
    if(isinstance(df, pd.Series) and (df.dtype in ["float64", "int64"])):
        df.fillna(df.mean(), inplace=True)
        return df
    elif(isinstance(df, pd.Series)):
        df.fillna(df.mode()[0], inplace=True)
        return df
    else:
        return df

def EncodeX(df):
    return pd.get_dummies(df)
```

#### Correlation Map

To check the correlation between the features, we plot a correlation matrix. It is effective in summarizing a large amount of data when the goal is to see patterns.

```
plt.figure(figsize = (15, 10))
corr = df.corr()
mask = np.triu(np.ones_like(corr, dtype = bool))
sns.heatmap(corr, mask = mask, linewidths = 1, annot = True, fmt = ".2f")
plt.show()

correlation = df[df.columns[1:]].corr()[target][:]
correlation
```

### Feature Selection

Feature selection is the process of reducing the number of input variables when developing a predictive model. It is used to reduce the computational cost of modelling and, in some cases, to improve the performance of the model.

We will assign all the required input features to X and the target/outcome to Y.

```
X = df[features]
Y = df[target]
```

Calling the preprocessing functions on the feature and target sets.

```
x = X.columns.to_list()
for i in x:
    X[i] = NullClearner(X[i])
X = EncodeX(X)
Y = NullClearner(Y)
X.head()
```

### Data Splitting

The train-test split is a procedure for evaluating the performance of an algorithm. It involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model; the second subset is used for prediction. The main motive is to estimate the performance of the model on new data.

```
#we can choose random_state and test_size as per our requirement
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.2, random_state = 1) #performing data splitting
```

## Model

### Data Scaling

**Used MinMaxScaler**

* Transform features by scaling each feature to a given range.
* This estimator scales and translates each feature individually such that it falls within the given range on the training set, e.g. between zero and one.

### Feature Transformation

Generate polynomial and interaction features: a new feature matrix consisting of all polynomial combinations of the features with degree less than or equal to the specified degree. Refer to the [API](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PolynomialFeatures.html) for parameters.

### ElasticNet

Elastic Net first emerged as a result of critique of Lasso, whose variable selection can be too dependent on the data and thus unstable. The solution is to combine the penalties of Ridge regression and Lasso to get the best of both worlds.

**Features of ElasticNet Regression:**

* It combines the L1 and L2 approaches.
* It performs a more efficient regularization process.
* It has two parameters to be set, λ and α.

#### Model Tuning Parameters

1. alpha : float, default=1.0
> Constant that multiplies the penalty terms. Defaults to 1.0. See the notes for the exact mathematical meaning of this parameter. alpha = 0 is equivalent to an ordinary least squares, solved by the LinearRegression object. For numerical reasons, using alpha = 0 with the Lasso object is not advised. Given this, you should use the LinearRegression object.

2. l1_ratio : float, default=0.5
> The ElasticNet mixing parameter, with 0 <= l1_ratio <= 1. For l1_ratio = 0 the penalty is an L2 penalty. For l1_ratio = 1 it is an L1 penalty. For 0 < l1_ratio < 1, the penalty is a combination of L1 and L2.

3. normalize : bool, default=False
> This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False.

4. max_iter : int, default=1000
> The maximum number of iterations.

5. tol : float, default=1e-4
> The tolerance for the optimization: if the updates are smaller than tol, the optimization code checks the dual gap for optimality and continues until it is smaller than tol.

6. selection : {‘cyclic’, ‘random’}, default=’cyclic’
> If set to ‘random’, a random coefficient is updated every iteration rather than looping over features sequentially by default. This (setting to ‘random’) often leads to significantly faster convergence, especially when tol is higher than 1e-4.

```
model = make_pipeline(MinMaxScaler(), PolynomialFeatures(), ElasticNet(random_state = 5))
model.fit(X_train, y_train)
```

#### Model Accuracy

For a regressor, the score() method returns the coefficient of determination R² of the prediction on the given test data and labels (not a classification accuracy).

```
print("Accuracy score {:.2f} %\n".format(model.score(X_test, y_test)*100))

#prediction on testing set
prediction = model.predict(X_test)
```

### Model Evaluation

**r2_score:** The r2_score function computes the proportion of the variability in the target that is explained by our model.

**MAE:** The mean absolute error function calculates the total error as the average absolute distance between the real data and the predicted data.

**MSE:** The mean squared error function squares the error, penalizing the model for large errors.

```
print('Mean Absolute Error:', mean_absolute_error(y_test, prediction))
print('Mean Squared Error:', mean_squared_error(y_test, prediction))
print('Root Mean Squared Error:', np.sqrt(mean_squared_error(y_test, prediction)))
print("R-squared score : ", r2_score(y_test, prediction))

#plotting actual and predicted
red = plt.scatter(np.arange(0, 80, 5), prediction[0:80:5], color = "red")
green = plt.scatter(np.arange(0, 80, 5), y_test[0:80:5], color = "green")
plt.title("Comparison of Regression Algorithms")
plt.xlabel("Index of Candidate")
plt.ylabel("target")
plt.legend((red, green), ('ElasticNet', 'REAL'))
plt.show()
```

### Prediction Plot

We plot the actual target values for the first 20 test records, then plot the model's predictions for the same records, so the two curves can be compared directly.

```
plt.figure(figsize=(10, 6))
plt.plot(range(20), y_test[0:20], color = "green")
plt.plot(range(20), model.predict(X_test[0:20]), color = "red")
plt.legend(["Actual", "prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
```

#### Creator: Snehaan Bhawal, Github: [Profile](https://github.com/Sbhawal)
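Because `file_path`, `features`, and `target` are left blank in the template, the whole pipeline can be smoke-tested on synthetic data before pointing it at a real CSV. A minimal sketch (the synthetic dataset from `make_regression` is an assumption for illustration, not part of the template):

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler, PolynomialFeatures
from sklearn.linear_model import ElasticNet
from sklearn.metrics import r2_score

# synthetic regression data standing in for the CSV contents
X, y = make_regression(n_samples=200, n_features=3, noise=0.1, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

# same pipeline as the template: scale -> expand polynomial features -> fit ElasticNet
model = make_pipeline(MinMaxScaler(), PolynomialFeatures(), ElasticNet(random_state=5))
model.fit(X_train, y_train)

prediction = model.predict(X_test)
print("R-squared score:", r2_score(y_test, prediction))
```

Running a template like this on synthetic data is a cheap way to catch wiring mistakes (wrong step order, shape mismatches) before committing to a real dataset.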
# RadiusNeighborsClassifier with PowerTransformer

This code template is for classification tasks using a simple RadiusNeighborsClassifier in a pipeline with the PowerTransformer feature transformation. It implements learning based on the number of neighbors within a fixed radius r of each training point, where r is a floating-point value specified by the user.

## Required Packages

```
!pip install imblearn

import numpy as np
import pandas as pd
import seaborn as se
import warnings
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.neighbors import RadiusNeighborsClassifier
from imblearn.over_sampling import RandomOverSampler
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import LabelEncoder, PowerTransformer
from sklearn.metrics import classification_report, plot_confusion_matrix

warnings.filterwarnings('ignore')
```

## Initialization

Filepath of the CSV file

```
#filepath
file_path = ""
```

List of features required for model training

```
#x_values
features = []
```

Target feature for prediction

```
#y_value
target = ''
```

## Data Fetching

Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools. We use pandas to read the CSV file from its storage path, and the head function to display the initial rows.

```
df = pd.read_csv(file_path)
df.head()
```

## Feature Selection

Feature selection is the process of reducing the number of input variables when developing a predictive model. It is used to reduce the computational cost of modelling and, in some cases, to improve the performance of the model.

We will assign all the required input features to X and the target/outcome to Y.

```
X = df[features]
Y = df[target]
```

## Data Preprocessing

Since the majority of the machine learning models in the sklearn library don't handle string category data or null values, we have to explicitly remove or replace null values. The snippet below contains functions that remove null values, if any exist, and convert string-class data in the dataset by encoding it to integer classes.

```
def NullClearner(df):
    if(isinstance(df, pd.Series) and (df.dtype in ["float64", "int64"])):
        df.fillna(df.mean(), inplace=True)
        return df
    elif(isinstance(df, pd.Series)):
        df.fillna(df.mode()[0], inplace=True)
        return df
    else:
        return df

def EncodeX(df):
    return pd.get_dummies(df)

def EncodeY(df):
    if len(df.unique()) <= 2:
        return df
    else:
        un_EncodedT = np.sort(pd.unique(df), axis=-1, kind='mergesort')
        df = LabelEncoder().fit_transform(df)
        EncodedT = [xi for xi in range(len(un_EncodedT))]
        print("Encoded Target: {} to {}".format(un_EncodedT, EncodedT))
        return df
```

Calling the preprocessing functions on the feature and target sets.

```
x = X.columns.to_list()
for i in x:
    X[i] = NullClearner(X[i])
X = EncodeX(X)
Y = EncodeY(NullClearner(Y))
X.head()
```

## Correlation Map

To check the correlation between the features, we plot a correlation matrix. It is effective in summarizing a large amount of data when the goal is to see patterns.

```
f, ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt='.1f', ax=ax, mask=matrix)
plt.show()
```

## Distribution Of Target Variable

```
plt.figure(figsize = (10, 6))
se.countplot(Y)
```

## Data Splitting

The train-test split is a procedure for evaluating the performance of an algorithm. It involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model; the second subset is used for prediction. The main motive is to estimate the performance of the model on new data.

```
x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=123)
```

## Handling Target Imbalance

The challenge of working with imbalanced datasets is that most machine learning techniques will ignore, and in turn have poor performance on, the minority class, although typically it is performance on the minority class that matters most. One approach to addressing imbalanced datasets is to oversample the minority class; the simplest approach involves duplicating examples in the minority class. We perform oversampling using the imblearn library.

```
x_train, y_train = RandomOverSampler(random_state=123).fit_resample(x_train, y_train)
```

## Feature Transformation

Power transforms are a family of parametric, monotonic transformations that are applied to make data more Gaussian-like. This is useful for modeling issues related to heteroscedasticity (non-constant variance), or other situations where normality is desired.

<a href="https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PowerTransformer.html">More about the PowerTransformer module</a>

## Model

RadiusNeighborsClassifier implements learning based on the number of neighbors within a fixed radius r of each training point, where r is a floating-point value specified by the user. In cases where the data is not uniformly sampled, radius-based neighbors classification can be a better choice.

### Tuning parameters

**radius:** Range of parameter space to use by default for radius_neighbors queries.

**algorithm:** Algorithm used to compute the nearest neighbors.

**leaf_size:** Leaf size passed to BallTree or KDTree.

**p:** Power parameter for the Minkowski metric.

**metric:** The distance metric to use for the tree.

**outlier_label:** Label for outlier samples.

**weights:** Weight function used in prediction.

<br><br>FOR MORE INFO : <a href="https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.RadiusNeighborsClassifier.html">API</a>

```
model = make_pipeline(PowerTransformer(), RadiusNeighborsClassifier())
model.fit(x_train, y_train)
```

## Model Accuracy

The score() method returns the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires that each label set be correctly predicted for each sample.

```
print("Accuracy score {:.2f} %\n".format(model.score(x_test, y_test)*100))
```

## Confusion Matrix

A confusion matrix is utilized to understand the performance of a classification model or algorithm in machine learning for a given test set where the results are known.

```
plot_confusion_matrix(model, x_test, y_test, cmap=plt.cm.Blues)
```

## Classification Report

A classification report is used to measure the quality of predictions from a classification algorithm: how many predictions are true and how many are false. Where:

- Precision: accuracy of positive predictions.
- Recall: fraction of positives that were correctly identified.
- f1-score: harmonic mean of precision and recall.
- support: the number of actual occurrences of the class in the specified dataset.

```
print(classification_report(y_test, model.predict(x_test)))
```

## Creator: Abhishek Garg, Github: <a href="https://github.com/abhishek-252">Profile</a>
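The fixed-radius idea behind RadiusNeighborsClassifier can be illustrated without scikit-learn; the following is a dependency-free sketch of a majority vote among training points within radius r, with an `outlier_label` fallback (illustrative only — this is not the library's implementation, and the tiny dataset is made up):

```python
from collections import Counter
import math

def radius_predict(X_train, y_train, x, r, outlier_label=-1):
    # collect labels of all training points within Euclidean distance r of x
    votes = [label for point, label in zip(X_train, y_train)
             if math.dist(point, x) <= r]
    # no neighbors inside the radius -> fall back to the outlier label
    if not votes:
        return outlier_label
    # majority vote among the in-radius neighbors
    return Counter(votes).most_common(1)[0][0]

X_train = [(0.0, 0.0), (0.1, 0.1), (1.0, 1.0), (1.1, 0.9)]
y_train = [0, 0, 1, 1]

print(radius_predict(X_train, y_train, (0.05, 0.05), r=0.5))  # -> 0
print(radius_predict(X_train, y_train, (1.05, 1.0), r=0.5))   # -> 1
print(radius_predict(X_train, y_train, (5.0, 5.0), r=0.5))    # -> -1 (outlier)
```

The third query shows why `outlier_label` matters: unlike k-nearest neighbors, a fixed-radius query can come back empty, and the classifier needs a defined answer for that case.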
## 10.1 Adding Program Source Code

In many situations it is necessary to include program source code in technical documents, because:

- In many documents (such as lab reports), source code is an important component and must be included as supporting material in an appendix at the end of the document.
- Source code directly shows the implementation process and details of computer programming, allows the authenticity of experiments to be assessed, and can also be studied and reused by readers.

In fact, when producing documents with LaTeX, adding source code is a seemingly simple but rather skill-testing task, because source code cannot be added to a document by simple copy-and-paste. We need to preserve the code's formatting from its original programming language, including the highlighting colors and monospaced font it uses, all so that the code is presented exactly as it originally appears.

In LaTeX, many packages are available for adding source code to the body text or an appendix of a document; the most commonly used are `listings` and `minted`. In addition, there is a particularly simple way to insert source code: the `\begin{verbatim} \end{verbatim}` environment.

### 10.1.1 Inserting source code with verbatim

In LaTeX, Python code can be inserted with the `verbatim` environment, that is, by placing the code between `\begin{verbatim}` and `\end{verbatim}`; the code text is set in a monospaced font. Note that this environment does not apply syntax highlighting to the source code.

**Example 1.** Use the `verbatim` environment to insert the following Python code:

```python
import numpy as np

x = np.random.rand(4)
print(x)
```

```tex
\documentclass[12pt]{article}

\begin{document}

Python code example:

\begin{verbatim}
import numpy as np

x = np.random.rand(4)
print(x)
\end{verbatim}

\end{document}
```

The compiled result is shown in Figure 10.1.1.

<p align="center">
<img align="middle" src="graphics/example10_1_1.png" width="450" />
</p>

<center><b>Figure 10.1.1</b> Compiled result</center>

### 10.1.2 Inserting source code with listings

To apply syntax highlighting to source code, you can use the dedicated code-typesetting package `listings`. Besides declaring the package in the preamble with `\usepackage{listings}`, you can also customize parameters as needed.

**Example 2.** Use the `listings` package to insert the following Python code:

```python
import numpy as np

x = np.random.rand(4)
print(x)
```

```tex
\documentclass[12pt]{article}
\usepackage{listings}

\begin{document}

Python code example:

\begin{lstlisting}[language = python]
import numpy as np

x = np.random.rand(4)
print(x)
\end{lstlisting}

\end{document}
```

The compiled result is shown in Figure 10.1.2.

<p align="center">
<img align="middle" src="graphics/example10_1_2.png" width="450" />
</p>

<center><b>Figure 10.1.2</b> Compiled result</center>

> See the listings user manual, [The Listings Package](https://texdoc.org/serve/listings.pdf/0).

**Example 3.** Use the `listings` package to insert Python code with custom highlighting.

```tex
\documentclass[12pt]{article}
\usepackage{listings}
\usepackage{color}

\definecolor{codegreen}{rgb}{0,0.6,0}
\definecolor{codegray}{rgb}{0.5,0.5,0.5}
\definecolor{codepurple}{rgb}{0.58,0,0.82}
\definecolor{backcolour}{rgb}{0.95,0.95,0.92}

\lstdefinestyle{mystyle}{
    backgroundcolor=\color{backcolour},
    commentstyle=\color{codegreen},
    keywordstyle=\color{magenta},
    numberstyle=\tiny\color{codegray},
    stringstyle=\color{codepurple},
    basicstyle=\ttfamily\footnotesize,
    breakatwhitespace=false,
    breaklines=true,
    captionpos=b,
    keepspaces=true,
    numbers=left,
    numbersep=5pt,
    showspaces=false,
    showstringspaces=false,
    showtabs=false,
    tabsize=2
}

\lstset{style=mystyle}

\begin{document}

Python code example:

\begin{lstlisting}[language = python]
import numpy as np

x = np.random.rand(4)
print(x)
\end{lstlisting}

\end{document}
```

The compiled result is shown in Figure 10.1.3.

<p align="center">
<img align="middle" src="graphics/example10_1_3.png" width="450" />
</p>

<center><b>Figure 10.1.3</b> Compiled result</center>

> See [Code listing](https://www.overleaf.com/learn/latex/Code_listing).

[Back] [**Introduction**](https://nbviewer.jupyter.org/github/xinychen/latex-cookbook/blob/main/chapter-10/section0.ipynb)

[Next] [**10.2 Algorithm Pseudocode**](https://nbviewer.jupyter.org/github/xinychen/latex-cookbook/blob/main/chapter-10/section2.ipynb)

### License

<div class="alert alert-block alert-danger">
<b>This work is released under the MIT license.</b>
</div>
# EASY pilot study downloads This notebook can be used to download the data associated with the "EASY Study - 75 Image, full featureset" study on the ISIC archive available here: https://isic-archive.com/api/v1/#!/study/study_find And from there we can see that this study has the (mongodb Object) ID "58d9189ad831133741859b5d", with which we can access all the data associated with that study. ## Notes on the ISIC API Here's a few important notes related to the ISIC API: - whenever you are logging in using a web browser, make sure the login is performed on the actual https://isic-archive.com/ domain, not the www.isic-archive.com domain, since they do not share cookies/headers, meaning that a login at the www. website will not carry over! To do so, visit https://isic-archive.com/admin - when accessing annotation images (masks) or superpixel arrays, the feature must be given as a URL mangled name, not as on the API documentation specified (featureId) ``` # username (leave at None if you don't have one) # username = None import getpass username = 'weberj3@mskcc.org' password = None # do NOT enter here, will be requested via getpass! 
if not username is None: password = getpass.getpass('Please enter the password for user %s: ' % (username)) # ObjectID for the "EASY Study - 75 Image, full featureset" study studyId = '5a32cde91165975cf58a469c' studyName = 'EasyPilot' # settings ImageFolder = 'ISICImages' AnnotationFolder = 'Annotations' # imports import os from ISICApi import ISICApi # get ISIC API reference api = ISICApi(None, username, password) # make folders if necessary if not os.path.exists(ImageFolder): os.mkdir(ImageFolder) if not os.path.exists(AnnotationFolder): os.mkdir(AnnotationFolder) # function for mangling feature names featureIds = {} def featureId(feature): if feature in featureIds: return featureIds[feature] letters = [letter if ord(letter) > 64 and ord(letter) < 123 else '%s%02x' % ('%', ord(letter)) for letter in feature] idVal = ''.join(letters) featureIds[feature] = idVal return idVal featureNames = {} def featureName(feature): if feature in featureNames: return featureNames[feature] letters = [letter for letter in feature if ord(letter) > 64 and ord(letter) < 123] nameVal = ''.join(letters) featureNames[feature] = nameVal return nameVal # get study information studyInfo = api.getJson('study/%s' % (studyId)) studyFeatures = studyInfo['features'] studyImage = studyInfo['images'] studyComplete = studyInfo['userCompletion'] # and users with at least 75 images annotated studyUsers = list(filter(lambda user: studyComplete[user['_id']] >= 140, studyInfo['users'])) print('Study "%s" has %s annotators with at least 140 images.' % (studyInfo['name'], len(studyUsers))) # download ISIC images (if not already in folder) imageIds = [] for ic in range(len(studyImage)): imageIds.append(studyImage[ic]['_id']) localFile = '%s/%s.jpg' % (ImageFolder, studyImage[ic]['name']) if not os.path.exists(localFile): print('Downloading %s...' 
% (studyImage[ic]['name'])) api.getFile('image/%s/download' % (studyImage[ic]['_id']), localFile) # download annotations for user in studyUsers: print('Downloading annotations for user %s %s...' % (user['firstName'], user['lastName'])) userId = user['_id'] userName = user['lastName'] annotations = api.getJsonList('annotation?studyId=%s&userId=%s&state=complete&detail=true' % (studyId, userId)) for annotation in annotations: annotationId = annotation['_id'] imageName = annotation['image']['name'] for markup in annotation['markups'].keys(): fId = featureId(markup) fName = featureName(markup) localFile = '%s/%s_%s_%s_%s.png' % (AnnotationFolder, studyName, imageName, userName, fName) if not os.path.exists(localFile): api.getFile('annotation/%s/%s/mask' % (annotationId, fId), localFile) ```
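The download loop depends on the `featureId` mangling defined above: characters in the ASCII range 65–122 pass through, and everything else is percent-encoded as lowercase hex. A standalone sketch of that scheme (not part of the ISIC client; the feature name below is invented for illustration):

```python
def feature_id(feature):
    # keep letters (and a few punctuation characters) in the ASCII
    # range 65..122; percent-encode everything else as lowercase hex
    return ''.join(
        letter if 64 < ord(letter) < 123 else '%%%02x' % ord(letter)
        for letter in feature
    )

# spaces (0x20) and the colon (0x3a) get encoded, letters pass through
print(feature_id('Globules : Regular'))  # Globules%20%3a%20Regular
```

This mangled form is what the `annotation/<annotationId>/<featureId>/mask` endpoint expects instead of the raw feature name.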
<a href="https://githubtocolab.com/giswqs/geemap/blob/master/examples/notebooks/28_voila.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab"/></a> Uncomment the following line to install [geemap](https://geemap.org) if needed. ``` # !pip install geemap ``` ## Deploy Earth Engine Apps using Voila and ngrok **Steps to deploy an Earth Engine App:** 1. Install ngrok by following the [instructions](https://ngrok.com/download) 2. Install voila by following the [instructions](https://voila.readthedocs.io/en/stable/install.html) 3. Download the notebook [28_voila.ipynb](https://github.com/giswqs/geemap/blob/master/examples/notebooks/28_voila.ipynb) 4. Run this from the command line: `voila --no-browser 28_voila.ipynb` 5. Run this from the command line: `ngrok http 8866` 6. Copy the link from the ngrok terminal window. The link looks like the following: https://randomstring.ngrok.io 7. Share the link with anyone. **Optional steps:** * To show code cells from your app, run this from the command line: `voila --no-browser --strip_sources=False 28_voila.ipynb` * To protect your app with a password, run this: `ngrok http -auth="username:password" 8866` * To run a simple Python HTTP server in the directory, run this: `sudo python -m http.server 80` ``` import os import ee import geemap import ipywidgets as widgets Map = geemap.Map() Map.add_basemap('HYBRID') Map style = {'description_width': 'initial'} title = widgets.Text( description='Title:', value='Landsat Timelapse', width=200, style=style ) bands = widgets.Dropdown( description='Select RGB Combo:', options=[ 'Red/Green/Blue', 'NIR/Red/Green', 'SWIR2/SWIR1/NIR', 'NIR/SWIR1/Red', 'SWIR2/NIR/Red', 'SWIR2/SWIR1/Red', 'SWIR1/NIR/Blue', 'NIR/SWIR1/Blue', 'SWIR2/NIR/Green', 'SWIR1/NIR/Red', ], value='NIR/Red/Green', style=style, ) hbox1 = widgets.HBox([title, bands]) hbox1 speed = widgets.IntSlider( description=' Frames per second:', tooltip='Frames per second:', value=10,
min=1, max=30, style=style, ) cloud = widgets.Checkbox( value=True, description='Apply fmask (remove clouds, shadows, snow)', style=style ) hbox2 = widgets.HBox([speed, cloud]) hbox2 start_year = widgets.IntSlider( description='Start Year:', value=1984, min=1984, max=2020, style=style ) end_year = widgets.IntSlider( description='End Year:', value=2020, min=1984, max=2020, style=style ) start_month = widgets.IntSlider( description='Start Month:', value=5, min=1, max=12, style=style ) end_month = widgets.IntSlider( description='End Month:', value=10, min=1, max=12, style=style ) hbox3 = widgets.HBox([start_year, end_year, start_month, end_month]) hbox3 font_size = widgets.IntSlider( description='Font size:', value=30, min=10, max=50, style=style ) font_color = widgets.ColorPicker( concise=False, description='Font color:', value='white', style=style ) progress_bar_color = widgets.ColorPicker( concise=False, description='Progress bar color:', value='blue', style=style ) hbox4 = widgets.HBox([font_size, font_color, progress_bar_color]) hbox4 create_gif = widgets.Button( description='Create timelapse', button_style='primary', tooltip='Click to create timelapse', style=style, ) download_gif = widgets.Button( description='Download GIF', button_style='primary', tooltip='Click to download timelapse', disabled=False, style=style, ) output = widgets.Output() hbox5 = widgets.HBox([create_gif]) hbox5 def submit_clicked(b): with output: output.clear_output() if start_year.value > end_year.value: print('The end year must be greater than the start year.') return if start_month.value > end_month.value: print('The end month must be greater than the start month.') return if start_year.value == end_year.value: add_progress_bar = False else: add_progress_bar = True start_date = str(start_month.value).zfill(2) + '-01' end_date = str(end_month.value).zfill(2) + '-30' print('Computing...') Map.add_landsat_ts_gif( roi=Map.user_roi, label=title.value, start_year=start_year.value,
end_year=end_year.value, start_date=start_date, end_date=end_date, bands=bands.value.split('/'), font_color=font_color.value, frames_per_second=speed.value, font_size=font_size.value, add_progress_bar=add_progress_bar, progress_bar_color=progress_bar_color.value, download=True, apply_fmask=cloud.value, ) create_gif.on_click(submit_clicked) output ```
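The `submit_clicked` callback above assembles its `MM-DD` date bounds by zero-padding the month with `zfill`. A standalone sketch of just that step (the function name is mine, not part of geemap); note that the hard-coded day `30` never includes the 31st and does not exist in February:

```python
def month_range(start_month, end_month):
    # zero-pad the month to two digits, as in the callback above
    start_date = str(start_month).zfill(2) + '-01'
    end_date = str(end_month).zfill(2) + '-30'
    return start_date, end_date

print(month_range(5, 10))  # ('05-01', '10-30')
```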
# **Classification of iris varieties within the same species** ## Introduction The aim of this notebook is to use the AI TRAINING product to train a simple model on the Iris dataset with the PyTorch library. It is an example of a neural network for data classification. ## Code The neural network will be set up in several steps. First, the libraries have to be imported. Next, the neural network model will be defined and the dataset split. Then, the model will be trained. Finally, the loss will be displayed. ### Step 1 - Library importation (and installation if required) ``` pip install pandas scikit-learn matplotlib %matplotlib inline import matplotlib.pyplot as plt import torch import torch.nn as nn import torch.nn.functional as F import pandas as pd from sklearn.model_selection import train_test_split from sklearn.metrics import accuracy_score from sklearn.datasets import load_iris ``` ### Step 2 - Define the neural network model ``` class Model(nn.Module): def __init__(self): super().__init__() # fully connected layer: 4 input features for the 4 parameters in X self.layer1 = nn.Linear(in_features=4, out_features=16) # fully connected layer self.layer2 = nn.Linear(in_features=16, out_features=12) # output layer: 3 output features for the 3 species self.output = nn.Linear(in_features=12, out_features=3) def forward(self, x): # activation function: ReLU x = F.relu(self.layer1(x)) x = F.relu(self.layer2(x)) x = self.output(x) return x ``` ### Step 3 - Load and split the Iris dataset ``` # load the data dataset = load_iris() # input of the neural network X = dataset.data # output of the neural network y = dataset.target Y = y.astype("float64") # train and test split: 20 % for the test and 80 % for the learning X_train,X_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=0) # convert split data from numpy arrays to PyTorch tensors X_train = torch.FloatTensor(X_train) X_test = torch.FloatTensor(X_test) y_train = torch.LongTensor(y_train) y_test =
torch.LongTensor(y_test) ``` ### Step 4 - Train model ``` # display the model architecture model = Model() print("Model display: ",model) # measure loss criterion = nn.CrossEntropyLoss() # optimizer Adam with a learning rate of 0.01 optimizer = torch.optim.Adam(model.parameters(), lr=0.01) # the model will be trained for 100 epochs epochs = 100 epoch_list = [] loss_list = [] perf= [] print("The loss is printed for each epoch: ") for i in range(epochs): optimizer.zero_grad() y_pred = model.forward(X_train) loss = criterion(y_pred, y_train) loss_list.append(loss.item()) loss.backward() epoch_list.append(i) optimizer.step() # the loss is printed for each epoch print(f'Epoch: {i} Loss: {loss}') ``` ### Step 5 - Prediction and loss display ``` # print the last loss last_loss = loss_list[-1] print('Last value of loss: ',round(last_loss,3)) # make prediction predict_out = model(X_test) _, predict_y = torch.max(predict_out, 1) # print the accuracy print('Prediction accuracy: ', accuracy_score(y_test.data, predict_y.data)) # display the graph of loss plt.plot(epoch_list,loss_list) plt.title('Evolution of the loss according to the number of epochs') plt.xlabel('Epochs') plt.ylabel('Loss') plt.show() ``` ## Conclusion - The loss of this neural network is really low (around 0.05). - The accuracy of the prediction is 100 %. It means that the prediction is always good with this model.
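As an aside on Step 4: `nn.CrossEntropyLoss` expects raw logits because it combines log-softmax and negative log-likelihood internally. A dependency-free sketch of what it computes for a single sample (illustrative only, not the PyTorch implementation):

```python
import math

def cross_entropy(logits, target):
    # numerically stable log-sum-exp (subtract the max before exponentiating)
    m = max(logits)
    log_sum_exp = m + math.log(sum(math.exp(z - m) for z in logits))
    # negative log-probability assigned to the target class
    return log_sum_exp - logits[target]

print(round(cross_entropy([2.0, 1.0, 0.1], target=0), 3))  # about 0.417
```

This is also why Step 5 can take `torch.max` directly over the raw logits: softmax is monotonic, so the argmax is unchanged.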
``` # Copyright 2021 Google LLC # Use of this source code is governed by an MIT-style # license that can be found in the LICENSE file or at # https://opensource.org/licenses/MIT. # Notebook authors: Kevin P. Murphy (murphyk@gmail.com) # and Mahmoud Soliman (mjs@aucegypt.edu) # This notebook reproduces figures for chapter 2 from the book # "Probabilistic Machine Learning: An Introduction" # by Kevin Murphy (MIT Press, 2021). # Book pdf is available from http://probml.ai ``` <a href="https://opensource.org/licenses/MIT" target="_parent"><img src="https://img.shields.io/github/license/probml/pyprobml"/></a> <a href="https://colab.research.google.com/github/probml/pml-book/blob/main/pml1/figure_notebooks/chapter2_probability_univariate_models_figures.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ## Figure 2.1:<a name='2.1'></a> <a name='multinom'></a> Some discrete distributions on the state space $\mathcal{X} = \{1,2,3,4\}$. (a) A uniform distribution with $p(x=k)=1/4$. (b) A degenerate distribution (delta function) that puts all its mass on $x=1$. Figure(s) generated by [discrete_prob_dist_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/discrete_prob_dist_plot.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n discrete_prob_dist_plot.py ``` ## Figure 2.2:<a name='2.2'></a> <a name='gaussianPdf'></a> (a) Plot of the cdf for the standard normal, $\mathcal N (0,1)$.
Figure(s) generated by [gauss_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/gauss_plot.py) [quantile_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/quantile_plot.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n gauss_plot.py try_deimport() %run -n quantile_plot.py ``` ## Figure 2.3:<a name='2.3'></a> <a name='roweis-xtimesy'></a> Computing $p(x,y) = p(x) p(y)$, where $ X \perp Y $. Here $X$ and $Y$ are discrete random variables; $X$ has 6 possible states (values) and $Y$ has 5 possible states. A general joint distribution on two such variables would require $(6 \times 5) - 1 = 29$ parameters to define it (we subtract 1 because of the sum-to-one constraint). 
By assuming (unconditional) independence, we only need $(6-1) + (5-1) = 9$ parameters to define $p(x,y)$ ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') ``` <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_2.3.png" width="256"/> ## Figure 2.4:<a name='2.4'></a> <a name='bimodal'></a> Illustration of a mixture of two 1d Gaussians, $p(x) = 0.5 \mathcal N (x|0,0.5) + 0.5 \mathcal N (x|2,0.5)$. Figure(s) generated by [bimodal_dist_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/bimodal_dist_plot.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n bimodal_dist_plot.py ``` ## Figure 2.5:<a name='2.5'></a> <a name='anscombe'></a> Illustration of Anscombe's quartet. All of these datasets have the same low order summary statistics. 
Figure(s) generated by [anscobmes_quartet.py](https://github.com/probml/pyprobml/blob/master/scripts/anscobmes_quartet.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n anscobmes_quartet.py ``` ## Figure 2.6:<a name='2.6'></a> <a name='datasaurus'></a> Illustration of the Datasaurus Dozen. All of these datasets have the same low order summary statistics. Adapted from Figure 1 of <a href='#Matejka2017'>[JG17]</a> . Figure(s) generated by [datasaurus_dozen.py](https://github.com/probml/pyprobml/blob/master/scripts/datasaurus_dozen.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n datasaurus_dozen.py ``` ## Figure 2.7:<a name='2.7'></a> <a name='boxViolin'></a> Illustration of 7 different datasets (left), the corresponding box plots (middle) and violin box plots (right). From Figure 8 of https://www.autodesk.com/research/publications/same-stats-different-graphs . 
Used with kind permission of Justin Matejka ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') ``` <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_2.7.png" width="256"/> ## Figure 2.8:<a name='2.8'></a> <a name='3d2d'></a> Any planar line-drawing is geometrically consistent with infinitely many 3-D structures. From Figure 11 of <a href='#Sinha1993'>[PE93]</a> . Used with kind permission of Pawan Sinha ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') ``` <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_2.8.png" width="256"/> ## Figure 2.9:<a name='2.9'></a> <a name='binomDist'></a> Illustration of the binomial distribution with $N=10$ and (a) $\theta =0.25$ and (b) $\theta =0.9$. 
Figure(s) generated by [binom_dist_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/binom_dist_plot.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n binom_dist_plot.py ``` ## Figure 2.10:<a name='2.10'></a> <a name='sigmoidHeaviside'></a> (a) The sigmoid (logistic) function $\sigma(a)=(1+e^{-a})^{-1}$. (b) The Heaviside function $\mathbb{I}(a>0)$. Figure(s) generated by [activation_fun_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/activation_fun_plot.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n activation_fun_plot.py ``` ## Figure 2.11:<a name='2.11'></a> <a name='iris-logreg-1d'></a> Logistic regression applied to a 1-dimensional, 2-class version of the Iris dataset.
Figure(s) generated by [iris_logreg.py](https://github.com/probml/pyprobml/blob/master/scripts/iris_logreg.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n iris_logreg.py ``` ## Figure 2.12:<a name='2.12'></a> <a name='softmaxDemo'></a> Softmax distribution $\mathcal S ( \bm a /T)$, where $ \bm a =(3,0,1)$, at temperatures of $T=100$, $T=2$ and $T=1$. When the temperature is high (left), the distribution is uniform, whereas when the temperature is low (right), the distribution is ``spiky'', with most of its mass on the largest element. Figure(s) generated by [softmax_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/softmax_plot.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n softmax_plot.py ``` ## Figure 2.13:<a name='2.13'></a> <a name='iris-logistic-2d-3class-prob'></a> Logistic regression on the 3-class, 2-feature version of the Iris dataset. Adapted from Figure of 4.25 <a href='#Geron2019'>[Aur19]</a> . 
Figure(s) generated by [iris_logreg.py](https://github.com/probml/pyprobml/blob/master/scripts/iris_logreg.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n iris_logreg.py ``` ## Figure 2.14:<a name='2.14'></a> <a name='hetero'></a> Linear regression using Gaussian output with mean $\mu (x)=b + w x$ and (a) fixed variance $\sigma ^2$(homoskedastic) or (b) input-dependent variance $\sigma (x)^2$(heteroscedastic). Figure(s) generated by [linreg_1d_hetero_tfp.py](https://github.com/probml/pyprobml/blob/master/scripts/linreg_1d_hetero_tfp.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n linreg_1d_hetero_tfp.py ``` ## Figure 2.15:<a name='2.15'></a> <a name='studentPdf'></a> (a) The pdf's for a $\mathcal N (0,1)$, $\mathcal T (\mu =0,\sigma =1,\nu =1)$, $\mathcal T (\mu =0,\sigma =1,\nu =2)$, and $\mathrm Lap (0,1/\sqrt 2 )$. The mean is 0 and the variance is 1 for both the Gaussian and Laplace. 
When $\nu =1$, the Student is the same as the Cauchy, which does not have a well-defined mean and variance. (b) Log of these pdf's. Note that the Student distribution is not log-concave for any parameter value, unlike the Laplace distribution. Nevertheless, both are unimodal. Figure(s) generated by [student_laplace_pdf_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/student_laplace_pdf_plot.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n student_laplace_pdf_plot.py ``` ## Figure 2.16:<a name='2.16'></a> <a name='robustDemo'></a> Illustration of the effect of outliers on fitting Gaussian, Student and Laplace distributions. (a) No outliers (the Gaussian and Student curves are on top of each other). (b) With outliers. We see that the Gaussian is more affected by outliers than the Student and Laplace distributions. Adapted from Figure 2.16 of <a href='#BishopBook'>[Bis06]</a> . 
Figure(s) generated by [robust_pdf_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/robust_pdf_plot.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n robust_pdf_plot.py ``` ## Figure 2.17:<a name='2.17'></a> <a name='gammaDist'></a> (a) Some beta distributions. If $a<1$, we get a ``spike'' on the left, and if $b<1$, we get a ``spike'' on the right. if $a=b=1$, the distribution is uniform. If $a>1$ and $b>1$, the distribution is unimodal. Figure(s) generated by [beta_dist_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/beta_dist_plot.py) [gamma_dist_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/gamma_dist_plot.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n beta_dist_plot.py try_deimport() %run -n gamma_dist_plot.py ``` ## Figure 2.18:<a name='2.18'></a> <a name='empiricalDist'></a> Illustration of the (a) empirical pdf and (b) empirical cdf derived from a set of $N=5$ samples. From https://bit.ly/3hFgi0e . 
Used with kind permission of Mauro Escudero ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') ``` <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_2.18_A.png" width="256"/> <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_2.18_B.png" width="256"/> ## Figure 2.19:<a name='2.19'></a> <a name='changeOfVar1d'></a> (a) Mapping a uniform pdf through the function $f(x) = 2x + 1$. (b) Illustration of how two nearby points, $x$ and $x+dx$, get mapped under $f$. If $\frac{dy}{dx}>0$, the function is locally increasing, but if $\frac{dy}{dx}<0$, the function is locally decreasing. From <a href='#JangBlog'>[Jan18]</a> .
Used with kind permission of Eric Jang ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') ``` <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_2.19_A.png" width="256"/> <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_2.19_B.png" width="256"/> ## Figure 2.20:<a name='2.20'></a> <a name='affine2d'></a> Illustration of an affine transformation applied to a unit square, $f(\bm{x}) = \mathbf{A}\bm{x} + \bm{b}$. (a) Here $\mathbf{A}=\mathbf{I}$. (b) Here $\bm{b} = \bm{0}$. From <a href='#JangBlog'>[Jan18]</a> . Used with kind permission of Eric Jang ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') ``` <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_2.20.png" width="256"/> ## Figure 2.21:<a name='2.21'></a> <a name='polar'></a> Change of variables from polar to Cartesian. The area of the shaded patch is $r \, dr \, d\theta$.
Adapted from Figure 3.16 of <a href='#Rice95'>[Ric95]</a> ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') ``` <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_2.21.png" width="256"/> ## Figure 2.22:<a name='2.22'></a> <a name='bellCurve'></a> Distribution of the sum of two dice rolls, i.e., $p(y)$ where $y=x_1 + x_2$ and $x_i \sim \mathrm{Unif}(\{1,2,\ldots,6\})$. From https://en.wikipedia.org/wiki/Probability\_distribution . Used with kind permission of Wikipedia author Tim Stellmach ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') ``` <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_2.22.png" width="256"/> ## Figure 2.23:<a name='2.23'></a> <a name='clt'></a> The central limit theorem in pictures. We plot a histogram of $\mu_N^s = \frac{1}{N}\sum_{n=1}^N x_{ns}$, where $x_{ns} \sim \mathrm{Beta}(1,5)$, for $s=1:10000$. As $N\rightarrow \infty $, the distribution tends towards a Gaussian. (a) $N=1$. (b) $N=5$.
Adapted from Figure 2.6 of <a href='#BishopBook'>[Bis06]</a> . Figure(s) generated by [centralLimitDemo.py](https://github.com/probml/pyprobml/blob/master/scripts/centralLimitDemo.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n centralLimitDemo.py ``` ## Figure 2.24:<a name='2.24'></a> <a name='changeOfVars'></a> Computing the distribution of $y=x^2$, where $p(x)$ is uniform (left). The analytic result is shown in the middle, and the Monte Carlo approximation is shown on the right. Figure(s) generated by [change_of_vars_demo1d.py](https://github.com/probml/pyprobml/blob/master/scripts/change_of_vars_demo1d.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n change_of_vars_demo1d.py ``` ## References: <a name='Geron2019'>[Aur19]</a> A. Géron "Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques for Building Intelligent Systems (2nd edition)". (2019). <a name='BishopBook'>[Bis06]</a> C. Bishop "Pattern recognition and machine learning". (2006).
<a name='Matejka2017'>[JG17]</a> J. Matejka and G. Fitzmaurice. "Same Stats, Different Graphs: Generating Datasets with Varied Appearance and Identical Statistics through Simulated Annealing". (2017). <a name='JangBlog'>[Jan18]</a> E. Jang "Normalizing Flows Tutorial". (2018). <a name='Sinha1993'>[PE93]</a> P. Sinha and E. Adelson. "Recovering reflectance and illumination in a world of painted polyhedra". (1993). <a name='Rice95'>[Ric95]</a> J. Rice "Mathematical statistics and data analysis". (1995).
# Parametric Dynamic Mode Decomposition In this tutorial we explore the usage of the class `pydmd.ParametricDMD`, which is implemented following the work presented in [arXiv:2110.09155](https://arxiv.org/pdf/2110.09155.pdf) (Andreuzzi, Demo, Rozza. _A dynamic mode decomposition extension for the forecasting of parametric dynamical systems_, 2021). The approach attempts to extend Dynamic Mode Decomposition to parametric problems, in order to obtain predictions at future time instants for untested parameters. In this tutorial we apply the parametric approach to a simple parametric time-dependent problem, for which we are going to construct a dataset _on the fly_. $$\begin{cases} f_1(x,t) &:= e^{2.3it} \cosh(x+3)^{-1}\\ f_2(x,t) &:= 2\, e^{2.8it} \tanh(x) \cosh(x)^{-1}\\ f^{\mu}(x,t) &:= \mu f_1(x,t) + (1-\mu) f_2(x,t), \qquad \mu \in [0,1] \end{cases}$$ First of all we import the modules needed for the tutorial, which include: + Several classes from `pydmd` (in addition to `ParametricDMD` we also import the class `DMD`, which actually performs the Dynamic Mode Decomposition); + The classes `POD` and `RBF` from `ezyrb`, which are used respectively to reduce the dimensionality before the interpolation and to perform the actual interpolation (see the reference for more details); + `NumPy` and `Matplotlib`. 
``` import warnings warnings.filterwarnings('ignore') from pydmd import ParametricDMD, DMD, HankelDMD from ezyrb import POD, RBF import numpy as np import matplotlib.pyplot as plt import matplotlib.colors as colors ``` First of all we define several functions to construct our system and gather the data needed to train the algorithm: ``` def f1(x,t): return 1./np.cosh(x+3)*np.exp(2.3j*t) def f2(x,t): return 2./np.cosh(x)*np.tanh(x)*np.exp(2.8j*t) def f(mu): def fmu(x,t): return mu*f1(x,t) + (1-mu)*f2(x,t) return fmu ``` Then we construct a discrete space-time grid with an acceptable number of sample points in both the dimensions: ``` N = 160 m = 500 x = np.linspace(-5, 5, m) t = np.linspace(0, 4*np.pi, N) xgrid, tgrid = np.meshgrid(x, t) ``` We can now construct our dataset by computing the value of `f` for several known parameters (since our problem is quite simple we consider only 10 samples): ``` training_params = np.round(np.linspace(0,1,10),1) print(training_params) training_snapshots = np.array([f(p)(xgrid, tgrid) for p in training_params]) print(training_snapshots.shape) ``` After defining several utility functions which we are going to use in the following sections, we visualize our dataset for several values of $\mu$: ``` def title(param): return '$\mu$={}'.format(param) def visualize(X, param, ax, log=False, labels_func=None): ax.set_title(title(param)) if labels_func != None: labels_func(ax) if log: return ax.pcolormesh(X.real, norm=colors.LogNorm(vmin=X.min(), vmax=X.max())) else: return ax.pcolormesh(X.real) def visualize_multiple(Xs, params, log=False, figsize=(20,6), labels_func=None): if log: Xs[Xs == 0] = np.min(Xs[Xs != 0]) fig = plt.figure(figsize=figsize) axes = fig.subplots(nrows=1, ncols=5, sharey=True) if labels_func is None: def labels_func_default(ax): ax.set_yticks([0, N//2, N]) ax.set_yticklabels(['0', '$\pi$', '2$\pi$']) ax.set_xticks([0, m//2, m]) ax.set_xticklabels(['-5', '0', '5']) labels_func = labels_func_default im = [visualize(X, 
param, ax, log, labels_func) for X, param, ax in zip(Xs, params, axes)][-1] fig.colorbar(im, ax=axes) plt.show() idxes = [0,2,4,6,8] visualize_multiple(training_snapshots[idxes], training_params[idxes]) ``` As you can see the parameter is 1-dimensional, but the approach also works with parameters living in multi-dimensional spaces. It is important to provide a sufficient number of _training_ parameters, otherwise the algorithm won't be able to explore the solution manifold in an acceptable way. We now select several _unknown_ (or _testing_) parameters in order to assess the results obtained using the parametric approach. As you can see we consider testing parameters at heterogeneous distances from our training parameters. ``` similar_testing_params = [1,3,5,7,9] testing_params = training_params[similar_testing_params] + np.array([5*pow(10,-i) for i in range(2,7)]) testing_params_labels = [str(training_params[similar_testing_params][i-2]) + '+$10^{{-{}}}$'.format(i) for i in range(2,7)] step = t[1]-t[0] N_predict = 40 N_nonpredict = 40 t2 = np.array([4*np.pi + i*step for i in range(-N_nonpredict+1,N_predict+1)]) print(t2.shape) xgrid2, tgrid2 = np.meshgrid(x, t2) testing_snapshots = np.array([f(p)(xgrid2, tgrid2) for p in testing_params]) plt.figure(figsize=(8,2)) plt.scatter(training_params, np.zeros(len(training_params)), label='training') plt.scatter(testing_params, np.zeros(len(testing_params)), label='testing') plt.legend() plt.title('Distribution of the parameters'); plt.xlabel('$\mu$') plt.yticks([],[]); ``` ## Mathematical formulation The reference mentioned above proposes two possible ways to achieve the parametrization of DMD, namely the _monolithic_ and _partitioned_ approaches. We briefly present each one before demonstrating how to use the class `ParametricDMD` in the two cases. 
The two methods share a common part, which consists in assembling the matrix $$\mathbf{X} = \begin{bmatrix} X_{\mu_1} & \dots & X_{\mu_p}\\ \end{bmatrix} \in \mathbb{R}^{m \times N p}$$ where $\mu_1, \dots, \mu_p$ are the parameters in the training set, and $X_{\mu_i} \in \mathbb{R}^{m \times N}$ collects the $N$ snapshots of the time-dependent system computed with the parameter $\mu_i$, each snapshot being a vector with $m$ components (which may be a sampling of a continuous field, like the pressure on a surface). In our formulation $m$ is the dimension of the space and $N$ is the number of known time instants for each instance of the parametric problem. The dimensionality of the problem is reduced using Proper Orthogonal Decomposition (POD) (retaining the first $n$ POD modes according to the singular value criterion), thus obtaining the (reduced) matrix $$\tilde{\mathbf{X}} = \begin{bmatrix} \tilde{X}_{\mu_1} & \dots & \tilde{X}_{\mu_p}\\ \end{bmatrix} \in \mathbb{R}^{n \times N p}$$ We now examine two ways to treat the reduced matrices $\tilde{X}_{\mu_1}, \dots, \tilde{X}_{\mu_p}$ to obtain an approximation of the system for untested parameters and future time instants. ## Monolithic approach This method consists in applying a single DMD to the matrix $$\begin{bmatrix} \tilde{X}_{\mu_1}\\ \vdots\\ \tilde{X}_{\mu_p} \end{bmatrix} \in \mathbb{R}^{np \times N}$$ Note that the index of the time instant increases along the rows of this matrix. For this reason the constructor requires, in this case, only one instance of `DMD`, since the reduced representations of the snapshots in the training dataset are treated as one single time-dependent non-parametric system. This allows us to obtain an approximation of the POD coefficients for the parameters in the training set in future time instants. 
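The assembly and reduction described above can be sketched with plain NumPy. This is a minimal illustration with toy random snapshots and made-up dimensions (in `ParametricDMD` the reduction is delegated to the `POD` object from `ezyrb`, not to this code):

```python
import numpy as np

# Toy sizes: p training parameters, N time instants,
# m-dimensional snapshots, n retained POD modes.
p, N, m, n = 3, 8, 50, 4

rng = np.random.default_rng(0)
# One m x N snapshot matrix per training parameter (random stand-ins).
snapshots = [rng.standard_normal((m, N)) for _ in range(p)]

# Assemble X = [X_mu1 ... X_mup], an m x (N p) matrix.
X = np.hstack(snapshots)

# POD modes are the leading left singular vectors of X.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
modes = U[:, :n]                      # m x n, orthonormal columns

# Reduced matrix X_tilde = modes^T X, of shape n x (N p).
X_tilde = modes.T @ X

# Each reduced block X_tilde_mu_i is a slice of N columns.
X_tilde_mu1 = X_tilde[:, :N]
print(X.shape, X_tilde.shape, X_tilde_mu1.shape)  # (50, 24) (4, 24) (4, 8)
```

The sketch only illustrates the shapes involved; the full-dimensional reconstruction later in this tutorial is the transpose of this projection, `modes @ X_tilde`.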
The POD coefficients are then interpolated separately for each unknown parameter, and the approximated full-dimensional system is reconstructed via a matrix multiplication with the matrix of POD modes. We choose to retain the first 20 POD modes for each parameter, and set `svd_rank=-1` for our DMD instance, in order to protect us from divergent DMD modes which may ruin the results. We also provide an instance of an `RBF` interpolator to be used for the interpolation of POD coefficients. ``` pdmd = ParametricDMD(DMD(svd_rank=-1), POD(rank=20), RBF()) pdmd.fit(training_snapshots, training_params) ``` We can now set the testing parameters by changing the property `parameters` of our instance of `ParametricDMD`, as well as the time-frame via the property `dmd_time` (see the other tutorials for an overview of the latter): ``` pdmd.parameters = testing_params pdmd.dmd_time['t0'] = pdmd.original_time['tend'] - N_nonpredict + 1 pdmd.dmd_time['tend'] = pdmd.original_time['tend'] + N_nonpredict print(len(pdmd.dmd_timesteps), pdmd.dmd_timesteps) approximation = pdmd.reconstructed_data approximation.shape ``` As you can see above we stored the result of the approximation (which comprises both the reconstruction of known time instants and the prediction of future time instants) in the variable `approximation`. 
``` # this is needed to visualize the time/space in the appropriate way def labels_func(ax): l = len(pdmd.dmd_timesteps) ax.set_yticks([0, l//2, l]) ax.set_yticklabels(['3$\pi$', '4$\pi$', '5$\pi$']) ax.set_xticks([0, m//2, m]) ax.set_xticklabels(['-5', '0', '5']) print('Approximation') visualize_multiple(approximation, testing_params_labels, figsize=(20,2.5), labels_func=labels_func) print('Truth') visualize_multiple(testing_snapshots, testing_params_labels, figsize=(20,2.5), labels_func=labels_func) print('Absolute error') visualize_multiple(np.abs(testing_snapshots.real - approximation.real), testing_params_labels, figsize=(20,2.5), labels_func=labels_func) ``` Below we plot the dependency of the mean point-wise error of the reconstruction on the distance between the (untested) parameter and the nearest tested parameter in the training set: ``` distances = np.abs(testing_params - training_params[similar_testing_params]) errors = np.mean(np.abs(testing_snapshots.real - approximation.real), axis=(1,2)) plt.loglog(distances, errors, 'ro-') plt.xlabel('Distance from nearest parameter') plt.ylabel('Mean point-wise error'); ``` ## Partitioned approach We now consider the second possible approach implemented in `ParametricDMD`. We consider again the matrices $\tilde{X}_{\mu_1}, \dots, \tilde{X}_{\mu_p}$ which we defined in [Mathematical formulation](#Mathematical-formulation). Unlike the *monolithic* approach, we now perform $p$ separate DMDs, one for each $\tilde{X}_{\mu_i}$. We then predict the POD coefficients in future time instants. The reconstruction phase is then the same as in the monolithic approach; for the details see the reference mentioned in the introduction. In order to apply this approach in `PyDMD`, you just need to pass a list of DMD instances in the constructor of `ParametricDMD`. Clearly you will need $p$ instances, where $p$ is the number of parameters in the training set. 
``` dmds = [DMD(svd_rank=-1) for _ in range(len(training_params))] p_pdmd = ParametricDMD(dmds, POD(rank=20), RBF()) p_pdmd.fit(training_snapshots, training_params) ``` We set untested parameters and the time frame in which we want to reconstruct the system in the same way we did in the monolithic approach: ``` # setting unknown parameters and time p_pdmd.parameters = testing_params p_pdmd.dmd_time['t0'] = p_pdmd.original_time['tend'] - N_nonpredict + 1 p_pdmd.dmd_time['tend'] = p_pdmd.original_time['tend'] + N_nonpredict ``` **Important**: Don't pass the same DMD instance $p$ times, since that would mean that this object is trained $p$ times on $p$ different training sets, so only the last fit is retained when the reconstruction is computed. ``` approximation = p_pdmd.reconstructed_data approximation.shape ``` Below we plot the point-wise absolute error: ``` visualize_multiple(np.abs(testing_snapshots.real - approximation.real), testing_params_labels, figsize=(20,2.5)) ``` As you can see there's not much difference in the absolute error, but for more complex problems (e.g. when new frequencies/modes turn on or off as the parameter moves in the domain) there are documented improvements when using the partitioned approach.
# LogisticRegression with StandardScaler & Polynomial Features This code template is for a classification task using LogisticRegression with the StandardScaler feature scaling technique and PolynomialFeatures as the feature transformation technique in a pipeline. ### Required Packages ``` !pip install imblearn --q import warnings import numpy as np import pandas as pd import seaborn as se import matplotlib.pyplot as plt from imblearn.over_sampling import RandomOverSampler from sklearn.pipeline import make_pipeline from sklearn.preprocessing import LabelEncoder,StandardScaler,PolynomialFeatures from sklearn.model_selection import train_test_split from sklearn.linear_model import LogisticRegression from sklearn.metrics import classification_report,plot_confusion_matrix warnings.filterwarnings('ignore') ``` ### Initialization Filepath of the CSV file ``` #filepath file_path= "" ``` List of features which are required for model training. ``` #x_values features=[''] ``` Target feature for prediction. ``` #y_value target='' ``` ### Data Fetching Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools. We will use the pandas library to read the CSV file using its storage path, and the head function to display the initial rows. ``` df=pd.read_csv(file_path) df.head() ``` ### Feature Selection This is the process of reducing the number of input variables when developing a predictive model. It is used to reduce the number of input variables to both reduce the computational cost of modelling and, in some cases, to improve the performance of the model. We will assign all the required input features to X and the target/outcome to Y. ``` X = df[features] Y = df[target] ``` ### Data Preprocessing Since the majority of the machine learning models in the Sklearn library don't handle string categorical data or null values, we have to explicitly remove or replace null values. 
The snippet below defines functions that remove null values, if any exist, and convert string class data in the dataset by encoding it to integer classes. ``` def NullClearner(df): if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])): df.fillna(df.mean(),inplace=True) return df elif(isinstance(df, pd.Series)): df.fillna(df.mode()[0],inplace=True) return df else:return df def EncodeX(df): return pd.get_dummies(df) def EncodeY(df): if len(df.unique())<=2: return df else: un_EncodedT=np.sort(pd.unique(df), axis=-1, kind='mergesort') df=LabelEncoder().fit_transform(df) EncodedT=[xi for xi in range(len(un_EncodedT))] print("Encoded Target: {} to {}".format(un_EncodedT,EncodedT)) return df x=X.columns.to_list() for i in x: X[i]=NullClearner(X[i]) X=EncodeX(X) Y=EncodeY(NullClearner(Y)) X.head() ``` #### Correlation Map In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns. ``` f,ax = plt.subplots(figsize=(18, 18)) matrix = np.triu(X.corr()) se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix) plt.show() ``` #### Distribution Of Target Variable ``` plt.figure(figsize = (10,6)) se.countplot(Y) ``` ### Data Splitting The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data. ``` x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123) ``` #### Handling Target Imbalance The challenge of working with imbalanced datasets is that most machine learning techniques will ignore, and in turn have poor performance on, the minority class, although typically it is performance on the minority class that is most important. 
One approach to addressing imbalanced datasets is to oversample the minority class. The simplest approach involves duplicating examples in the minority class. We will perform oversampling using the imblearn library. ``` x_train,y_train = RandomOverSampler(random_state=123).fit_resample(x_train, y_train) ``` ### Model Logistic regression is a statistical model that in its basic form uses a logistic function to model a binary dependent variable, although many more complex extensions exist. In regression analysis, logistic regression (or logit regression) estimates the parameters of a logistic model (a form of binary regression). This can be extended to model several classes of events. #### Model Tuning Parameters 1. penalty : {‘l1’, ‘l2’, ‘elasticnet’, ‘none’}, default=’l2’ > Used to specify the norm used in the penalization. The ‘newton-cg’, ‘sag’ and ‘lbfgs’ solvers support only l2 penalties. ‘elasticnet’ is only supported by the ‘saga’ solver. If ‘none’ (not supported by the liblinear solver), no regularization is applied. 2. C : float, default=1.0 > Inverse of regularization strength; must be a positive float. Like in support vector machines, smaller values specify stronger regularization. 3. tol : float, default=1e-4 > Tolerance for stopping criteria. 4. solver : {‘newton-cg’, ‘lbfgs’, ‘liblinear’, ‘sag’, ‘saga’}, default=’lbfgs’ > Algorithm to use in the optimization problem. For small datasets, ‘<code>liblinear</code>’ is a good choice, whereas ‘<code>sag</code>’ and ‘<code>saga</code>’ are faster for large ones. For multiclass problems, only ‘<code>newton-cg</code>’, ‘<code>sag</code>’, ‘<code>saga</code>’ and ‘<code>lbfgs</code>’ handle multinomial loss; ‘<code>liblinear</code>’ is limited to one-versus-rest schemes. * ‘<code>newton-cg</code>’, ‘<code>lbfgs</code>’, ‘<code>sag</code>’ and ‘<code>saga</code>’ handle L2 or no penalty. * ‘<code>liblinear</code>’ and ‘<code>saga</code>’ also handle L1 penalty. 
* ‘<code>saga</code>’ also supports ‘<code>elasticnet</code>’ penalty. * ‘<code>liblinear</code>’ does not support setting <code>penalty='none'</code>. 5. random_state : int, RandomState instance, default=None > Used when <code>solver</code> == ‘sag’, ‘saga’ or ‘liblinear’ to shuffle the data. 6. max_iter : int, default=100 > Maximum number of iterations taken for the solvers to converge. 7. multi_class : {‘auto’, ‘ovr’, ‘multinomial’}, default=’auto’ > If the option chosen is ‘<code>ovr</code>’, then a binary problem is fit for each label. For ‘<code>multinomial</code>’ the loss minimised is the multinomial loss fit across the entire probability distribution, even when the data is binary. ‘<code>multinomial</code>’ is unavailable when <code>solver</code>=’<code>liblinear</code>’. ‘auto’ selects ‘ovr’ if the data is binary, or if <code>solver</code>=’<code>liblinear</code>’, and otherwise selects ‘<code>multinomial</code>’. 8. verbose : int, default=0 > For the liblinear and lbfgs solvers set verbose to any positive number for verbosity. 9. n_jobs : int, default=None > Number of CPU cores used when parallelizing over classes if multi_class=’ovr’”. This parameter is ignored when the <code>solver</code> is set to ‘liblinear’ regardless of whether ‘multi_class’ is specified or not. <code>None</code> means 1 unless in a joblib.parallel_backend context. <code>-1</code> means using all processors ``` # Build Model here model = make_pipeline(StandardScaler(),PolynomialFeatures(),LogisticRegression(random_state = 123,n_jobs = -1)) model.fit(x_train, y_train) ``` #### Model Accuracy score() method return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. 
``` print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100)) ``` #### Confusion Matrix A confusion matrix is utilized to understand the performance of the classification model or algorithm in machine learning for a given test set where results are known. ``` plot_confusion_matrix(model,x_test,y_test,cmap=plt.cm.Blues) ``` #### Classification Report A classification report is used to measure the quality of predictions from a classification algorithm: how many predictions are true and how many are false. * **where**: - Precision:- accuracy of positive predictions. - Recall:- fraction of positives that were correctly identified. - f1-score:- harmonic mean of precision and recall. - support:- the number of actual occurrences of the class in the specified dataset. ``` print(classification_report(y_test,model.predict(x_test))) ``` #### Creator: Ganapathi Thota , Github: [Profile](https://github.com/Shikiz)
The objective of this experiment is to see whether words with similar or different meanings are equally far apart in a Bag-of-Words (BoW) representation, and whether the semantics (meaning) of words is preserved in Word2Vec (W2V). In this experiment we will be using a large dataset, the 20 Newsgroups classification dataset. This data set consists of 20000 messages taken from 20 newsgroups. ### Datasource http://archive.ics.uci.edu/ml/datasets/Twenty+Newsgroups To get a sense of our data, let us first start by counting the frequencies of the target classes in our news articles in the training set. #### Keywords * Numpy * Collections * Gensim * Bag-of-Words (Word Frequency, Pre-Processing) * Bag-of-Words representation ### Setup Steps ``` #@title Run this cell to complete the setup for this Notebook from IPython import get_ipython ipython = get_ipython() ipython.magic("sx wget https://www.dropbox.com/s/ir5kph0ocvibaqm/Setup_W1_D1_Exp1.sh?dl=1") ipython.magic("sx mv Setup_W1_D1_Exp1.sh?dl=1 Setup_W1_D1_Exp1.sh") ipython.magic("sx bash Setup_W1_D1_Exp1.sh -u standard -p pass123") ipython.magic("sx pip3 install gensim") # Importing required Packages import pickle import re import operator from collections import defaultdict import matplotlib.pyplot as plt import numpy as np import math import collections import gensim from nltk import ngrams import warnings warnings.filterwarnings("ignore") # Loading the dataset dataset = pickle.load(open('week1_exp1/AIML_DS_NEWSGROUPS_PICKELFILE.pkl','rb')) print(dataset.keys()) # Print frequencies of dataset print("Class : count") print("--------------") number_of_documents = 0 for key in dataset: print(key, ':', len(dataset[key])) ``` Next, let us split our dataset, which consists of 1000 samples per class, into training and test sets. We use 950 samples from each class in the training set, and the remaining 50 in the test set. As a mental exercise you should try reasoning about why it is important to ensure a nearly equal distribution of classes in your training and test sets. 
``` train_set = {} test_set = {} # Clean dataset for text encoding issues :- Very useful when dealing with non-unicode characters for key in dataset: dataset[key] = [[i.decode('utf-8', errors='replace').lower() for i in f] for f in dataset[key]] # Break dataset into 95-5 split for training and testing n_train = 0 n_test = 0 for k in dataset: split = int(0.95*len(dataset[k])) train_set[k] = dataset[k][0:split] test_set[k] = dataset[k][split:-1] n_train += len(train_set[k]) n_test += len(test_set[k]) for key in train_set: print(key) ``` ## 1. Bag-of-Words Let us begin our journey into text classification with one of the simplest but most commonly used feature representations for news documents - Bag-of-Words. As you might have realized, machine learning algorithms need good feature representations of different inputs. Concretely, we would like to represent each news article $D$ in terms of a feature vector $V$, which can be used for classification. Feature vector $V$ is made up of the number of occurences of each word in the vocabulary. Let us begin by counting the number of occurences of every word in the news documents in the training set. ### 1.1 Word frequency ### Let us try understanding the kind of words that appear frequently, and those that occur rarely. We now count the frequencies of words: ``` def frequency_words(train_set): frequency = defaultdict(int) for key in train_set: for f in train_set[key]: # Find all words which consist only of capital and lowercase characters and are between length of 2-9. 
# We ignore all special characters such as !.$ and words containing numbers words = re.findall(r'(\b[A-Za-z][a-z]{2,9}\b)', ' '.join(f)) for word in words: frequency[word] += 1 return frequency frequency_of_words = frequency_words(train_set) sorted_words = sorted(frequency_of_words.items(), key=operator.itemgetter(1), reverse=True) print("Top-10 most frequent words:") for word in sorted_words[:10]: print(word) print('----------------------------') print("10 least frequent words:") for word in sorted_words[-10:-1]: print(word) ``` Next, we attempt to plot a histogram of the counts of various words in descending order. Could you comment on the relationship between the frequency of the most frequent word and that of the second most frequent word? And what about the third most frequent word? (Hint - Check the relative frequencies of the first, second and third most frequent words) (After answering, you can visit https://en.wikipedia.org/wiki/Zipf%27s_law for further reading) ``` %matplotlib inline fig = plt.figure() fig.set_size_inches(20,10) plt.bar(range(len(sorted_words[:100])), [v for k, v in sorted_words[:100]] , align='center') plt.xticks(range(len(sorted_words[:100])), [k for k, v in sorted_words[:100]]) locs, labels = plt.xticks() plt.setp(labels, rotation=90) plt.show() ``` ### 1.2 Pre-processing to remove most and least frequent words We can see that different words appear with different frequencies. The most common words appear in almost all documents. Hence, for a classification task, having information about those words' frequencies does not matter much, since they appear frequently in every type of document. To get a good feature representation, we eliminate them since they do not add too much value. Additionally, notice how the least frequent words appear so rarely that they might not be useful either. 
Let us pre-process our news articles now to remove the most frequent and least frequent words by thresholding their counts: ``` def cleaning_vocabulary_words(list_of_grams): valid_words = defaultdict(int) print('Number of words before preprocessing:', len(list_of_grams)) # Ignore the 25 most frequent words, and the words which appear less than 100 times ignore_most_frequent = 25 freq_thresh = 100 feature_number = 0 for word, word_frequency in list_of_grams[ignore_most_frequent:]: if word_frequency > freq_thresh: valid_words[word] = feature_number feature_number += 1 elif '_' in word: valid_words[word] = feature_number feature_number += 1 print('Number of words after preprocessing:', len(valid_words)) vector_size = len(valid_words) return valid_words, vector_size valid_words, number_of_words = cleaning_vocabulary_words(sorted_words) dictionary = valid_words.keys() ``` ### 1.3 Bag-of-Words representation The simplest way to represent a document $D$ as a vector $V$ would be to now count the relevant words in the document. For each document, make a vector of the count of each of the words in the vocabulary (excluding the words removed in the previous step - the "stopwords"). 
``` def convert_to_BoW(dataset, number_of_documents): bow_representation = np.zeros((number_of_documents, number_of_words)) labels = np.zeros((number_of_documents, 1)) i=0 for label, class_name in enumerate(dataset): # For each file for f in dataset[class_name]: # words = re.findall(r'(\b[A-Za-z][a-z]{2,9}\b)', ' '.join(f)) for w in f: words=w.split() for word in words: if word in dictionary: bow_representation[i][valid_words[word]]+=1 labels[i] = label i+=1 return bow_representation, labels # Convert the dataset into their bag of words representation treating train and test separately train_bow_set, train_bow_labels = convert_to_BoW(train_set, n_train) test_bow_set, test_bow_labels = convert_to_BoW(test_set, n_test) print(train_bow_set) ``` ### 1.4 Document classification using Bag-of-Words For the test documents, use your favorite distance metric (Cosine, Euclidean, etc.) to find similar news articles from your training set and classify using kNN. ``` # Optimized K-NN:- This does the same thing as you've learned but in an optimized manner def dist(train_features, given_feature): # Calculate Euclidean distances between the training feature set and the given feature # Sum over axis=1 so that we get one distance per training example distances = np.sqrt(np.sum(np.square(train_features-given_feature),axis=1)) return distances ''' Optimized K-NN code. This code is the same as what you've already seen, but trades off memory efficiency for computational efficiency. ''' def kNN(k, train_features, train_labels, given_feature): distances = [] n = train_features.shape[0] # np.tile function repeats the given_feature n times. 
given_feature = np.tile(given_feature, (n, 1)) # Compute distance distances = dist(train_features, given_feature) sort_neighbors = np.argsort(distances) return np.concatenate((distances[sort_neighbors][:k].reshape(-1, 1), train_labels[sort_neighbors][:k].reshape(-1, 1)), axis = 1) def kNN_classify(k, train_features, train_labels, given_feature): tally = collections.Counter() tally.update(str(int(nn[1])) for nn in kNN(k, train_features, train_labels, given_feature)) return int(tally.most_common(1)[0][0]) ``` For example, you could classify the $0^{th}$ test document using 3 nearest neighbours. Computing accuracy for the bag-of-words features on the full test set: ``` accuracy = 0 for i, given_feature in enumerate(test_bow_set): print("Progress: {0:.04f}".format((i+1)/len(test_bow_set)), end="\r") predicted_class = kNN_classify(3, train_bow_set, train_bow_labels, given_feature) if predicted_class == int(test_bow_labels[i]): accuracy += 1 BoW_accuracy = (accuracy / len(test_bow_set)) print(BoW_accuracy) ```
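The notebook suggests trying other distance metrics such as cosine. A minimal self-contained sketch of a cosine-distance variant of `dist` (toy vectors rather than the news dataset; the query is tiled row-wise, mirroring the convention used in `kNN` above):

```python
import numpy as np

def cosine_dist(train_features, given_feature):
    # Cosine distance = 1 - cosine similarity, computed row-wise.
    num = np.sum(train_features * given_feature, axis=1)
    denom = (np.linalg.norm(train_features, axis=1)
             * np.linalg.norm(given_feature, axis=1))
    return 1.0 - num / denom

train = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 0.1]])
query = np.tile(np.array([1.0, 0.05]), (3, 1))
d = cosine_dist(train, query)
# train[2] points in exactly the same direction as the query,
# train[0] is very close, train[1] is nearly orthogonal.
print(np.argsort(d))  # [2 0 1]
```

Unlike Euclidean distance, cosine distance ignores document length, which often helps when articles vary a lot in size.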
## Computer vision data ``` %matplotlib inline from fastai.gen_doc.nbdoc import * from fastai.vision import * ``` This module contains the classes that define datasets handling [`Image`](/vision.image.html#Image) objects and their transformations. As usual, we'll start with a quick overview, before we get in to the detailed API docs. Before any work can be done a dataset needs to be converted into a [`DataBunch`](/basic_data.html#DataBunch) object, and in the case of the computer vision data - specifically into an [`ImageDataBunch`](/vision.data.html#ImageDataBunch) subclass. This is done with the help of [data block API](/data_block.html) and the [`ImageList`](/vision.data.html#ImageList) class and its subclasses. However, there is also a group of shortcut methods provided by [`ImageDataBunch`](/vision.data.html#ImageDataBunch) which reduce the multiple stages of the data block API, into a single wrapper method. These shortcuts methods work really well for: - Imagenet-style of datasets ([`ImageDataBunch.from_folder`](/vision.data.html#ImageDataBunch.from_folder)) - A pandas `DataFrame` with a column of filenames and a column of labels which can be strings for classification, strings separated by a `label_delim` for multi-classification or floats for a regression problem ([`ImageDataBunch.from_df`](/vision.data.html#ImageDataBunch.from_df)) - A csv file with the same format as above ([`ImageDataBunch.from_csv`](/vision.data.html#ImageDataBunch.from_csv)) - A list of filenames and a list of targets ([`ImageDataBunch.from_lists`](/vision.data.html#ImageDataBunch.from_lists)) - A list of filenames and a function to get the target from the filename ([`ImageDataBunch.from_name_func`](/vision.data.html#ImageDataBunch.from_name_func)) - A list of filenames and a regex pattern to get the target from the filename ([`ImageDataBunch.from_name_re`](/vision.data.html#ImageDataBunch.from_name_re)) In the last five factory methods, a random split is performed between train and 
validation; in the first one it can be a random split or a separation from a training and a validation folder. If you're just starting out you may choose to experiment with these shortcut methods, as they are also used in the first lessons of the fastai deep learning course. However, you can completely skip them and start building your code using the data block API from the very beginning. Internally, these shortcuts use this API anyway. The first part of this document is dedicated to the shortcut [`ImageDataBunch`](/vision.data.html#ImageDataBunch) factory methods. Then all the other computer vision data-specific methods that are used with the data block API are presented. ## Quickly get your data ready for training To get you started as easily as possible, fastai provides two helper functions to create a [`DataBunch`](/basic_data.html#DataBunch) object that you can directly use for training a classifier. To demonstrate them you'll first need to download and untar the file by executing the following cell. This will create a data folder containing an MNIST subset in `data/mnist_sample`. ``` path = untar_data(URLs.MNIST_SAMPLE); path ``` There are a number of ways to create an [`ImageDataBunch`](/vision.data.html#ImageDataBunch). One common approach is to use *Imagenet-style folders* (see further down the page for details) with [`ImageDataBunch.from_folder`](/vision.data.html#ImageDataBunch.from_folder): ``` tfms = get_transforms(do_flip=False) data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ``` Here the datasets will be automatically created in the structure of *Imagenet-style folders*. The parameters specified: - the transforms to apply to the images in `ds_tfms` (here with `do_flip`=False because we don't want to flip numbers), - the target `size` of our pictures (here 24). 
As with all [`DataBunch`](/basic_data.html#DataBunch) usage, a `train_dl` and a `valid_dl` are created that are of the type PyTorch [`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader). If you want to have a look at a few images inside a batch, you can use [`DataBunch.show_batch`](/basic_data.html#DataBunch.show_batch). The `rows` argument is the number of rows and columns to display. ``` data.show_batch(rows=3, figsize=(5,5)) ``` The second way to define the data for a classifier requires a structure like this: ``` path\ train\ test\ labels.csv ``` where the labels.csv file defines the label(s) of each image in the training set. This is the format you will need to use when each image can have multiple labels. It also works with single labels: ``` pd.read_csv(path/'labels.csv').head() ``` You can then use [`ImageDataBunch.from_csv`](/vision.data.html#ImageDataBunch.from_csv): ``` data = ImageDataBunch.from_csv(path, ds_tfms=tfms, size=28) data.show_batch(rows=3, figsize=(5,5)) ``` An example of multi-classification can be downloaded with the following cell. It's a sample of the [planet dataset](https://www.google.com/search?q=kaggle+planet&rlz=1C1CHBF_enFR786FR786&oq=kaggle+planet&aqs=chrome..69i57j0.1563j0j7&sourceid=chrome&ie=UTF-8). ``` planet = untar_data(URLs.PLANET_SAMPLE) ``` If we open the labels file, we see that each image has one or more tags, separated by a space. ``` df = pd.read_csv(planet/'labels.csv') df.head() data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim=' ', ds_tfms=get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.)) ``` The `show_batch` method will then show all the labels that correspond to each image. ``` data.show_batch(rows=3, figsize=(10,8), ds_type=DatasetType.Valid) ``` You can find more ways to build an [`ImageDataBunch`](/vision.data.html#ImageDataBunch) without the factory methods in [`data_block`](/data_block.html#data_block).
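The `label_delim=' '` argument above is what turns each entry of the tags column into several labels; the parsing it implies is simply a string split (the tag string below is a made-up example row, not from the actual dataset):

```python
# What label_delim=' ' does to a single entry of the planet labels column:
# one space-separated tag string becomes a list of labels.
tag_string = "agriculture clear primary water"  # made-up example row
labels = tag_string.split(' ')
print(labels)  # ['agriculture', 'clear', 'primary', 'water']
```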
``` show_doc(ImageDataBunch) ``` This is the same initialization as a regular [`DataBunch`](/basic_data.html#DataBunch) so you probably don't want to use this directly, but one of the factory methods instead. ### Factory methods If you quickly want to get an [`ImageDataBunch`](/vision.data.html#ImageDataBunch) and train a model, you should process your data to have it in one of the formats the following functions handle. ``` show_doc(ImageDataBunch.from_folder) ``` Refer to [`create_from_ll`](#ImageDataBunch.create_from_ll) to see all the `**kwargs` arguments. "*Imagenet-style*" datasets look something like this (note that the test folder is optional): ``` path\ train\ clas1\ clas2\ ... valid\ clas1\ clas2\ ... test\ ``` For example: ``` data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ``` Note that this (and all factory methods in this section) pass any `kwargs` to [`DataBunch.create`](/basic_data.html#DataBunch.create). ``` show_doc(ImageDataBunch.from_csv) ``` Refer to [`create_from_ll`](#ImageDataBunch.create_from_ll) to see all the `**kwargs` arguments. Create an [`ImageDataBunch`](/vision.data.html#ImageDataBunch) from `path` by splitting the data in `folder`, labelled in the file `csv_labels`, between a training and validation set. Use `valid_pct` to indicate the percentage of the total images to use as the validation set. An optional `test` folder contains unlabelled data and `suffix` contains an optional suffix to add to the filenames in `csv_labels` (such as '.jpg'). `fn_col` is the index (or the name) of the column containing the filenames and `label_col` is the index (indices) (or the name(s)) of the column(s) containing the labels. Use [`header`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html#pandas-read-csv) to specify the format of the csv header, and `delimiter` to specify a non-standard csv-field separator. In case your csv has no header, column parameters can only be specified as indices.
If `label_delim` is passed, split what's in the label column according to that separator. For example: ``` data = ImageDataBunch.from_csv(path, ds_tfms=tfms, size=24); show_doc(ImageDataBunch.from_df) ``` Refer to [`create_from_ll`](#ImageDataBunch.create_from_ll) to see all the `**kwargs` arguments. Same as [`ImageDataBunch.from_csv`](/vision.data.html#ImageDataBunch.from_csv), but passing in a `DataFrame` instead of a csv file. e.g ``` df = pd.read_csv(path/'labels.csv', header='infer') df.head() data = ImageDataBunch.from_df(path, df, ds_tfms=tfms, size=24) ``` Different datasets are labeled in many different ways. The following methods can help extract the labels from the dataset in a wide variety of situations. The way they are built in fastai is constructive: there are methods which do a lot for you but apply in specific circumstances and there are methods which do less for you but give you more flexibility. In this case the hierarchy is: 1. [`ImageDataBunch.from_name_re`](/vision.data.html#ImageDataBunch.from_name_re): Gets the labels from the filenames using a regular expression 2. [`ImageDataBunch.from_name_func`](/vision.data.html#ImageDataBunch.from_name_func): Gets the labels from the filenames using any function 3. [`ImageDataBunch.from_lists`](/vision.data.html#ImageDataBunch.from_lists): Labels need to be provided as an input in a list ``` show_doc(ImageDataBunch.from_name_re) ``` Refer to [`create_from_ll`](#ImageDataBunch.create_from_ll) to see all the `**kwargs` arguments. Creates an [`ImageDataBunch`](/vision.data.html#ImageDataBunch) from `fnames`, calling a regular expression (containing one *re group*) on the file names to get the labels, putting aside `valid_pct` for the validation. In the same way as [`ImageDataBunch.from_csv`](/vision.data.html#ImageDataBunch.from_csv), an optional `test` folder contains unlabelled data. Our previously created dataframe contains the labels in the filenames so we can leverage it to test this new method. 
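Since `from_name_re` only needs a regular expression with one capture group, the mechanics can be checked in plain Python before involving fastai at all. A sketch using the MNIST-sample naming scheme, where the label is the parent folder name (the example filename here is hypothetical):

```python
import re

# MNIST-sample filenames encode the label in the parent folder,
# e.g. .../train/3/12345.png -> label '3'.
pat = r"/(\d)/\d+\.png$"
fname = "data/mnist_sample/train/3/12345.png"
label = re.search(pat, fname).group(1)
print(label)  # 3
```

Whatever the single group captures is what `from_name_re` would use as the label for that file.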
[`ImageDataBunch.from_name_re`](/vision.data.html#ImageDataBunch.from_name_re) needs the exact path of each file so we will append the data path to each filename before creating our [`ImageDataBunch`](/vision.data.html#ImageDataBunch) object. ``` fn_paths = [path/name for name in df['name']]; fn_paths[:2] pat = r"/(\d)/\d+\.png$" data = ImageDataBunch.from_name_re(path, fn_paths, pat=pat, ds_tfms=tfms, size=24) data.classes show_doc(ImageDataBunch.from_name_func) ``` Refer to [`create_from_ll`](#ImageDataBunch.create_from_ll) to see all the `**kwargs` arguments. Works in the same way as [`ImageDataBunch.from_name_re`](/vision.data.html#ImageDataBunch.from_name_re), but instead of a regular expression it expects a function that will determine how to extract the labels from the filenames. (Note that `from_name_re` uses this function in its implementation). To test it we could build a function with our previous regex. Let's try another, similar approach to show that the labels can be obtained in a different way. ``` def get_labels(file_path): return '3' if '/3/' in str(file_path) else '7' data = ImageDataBunch.from_name_func(path, fn_paths, label_func=get_labels, ds_tfms=tfms, size=24) data.classes show_doc(ImageDataBunch.from_lists) ``` Refer to [`create_from_ll`](#ImageDataBunch.create_from_ll) to see all the `**kwargs` arguments. The most flexible factory function; pass in a list of `labels` that correspond to each of the filenames in `fnames`. To show an example we have to build the labels list outside our [`ImageDataBunch`](/vision.data.html#ImageDataBunch) object and give it as an argument when we call `from_lists`. Let's use our previously created function to create our labels list. ``` labels_ls = list(map(get_labels, fn_paths)) data = ImageDataBunch.from_lists(path, fn_paths, labels=labels_ls, ds_tfms=tfms, size=24) data.classes show_doc(ImageDataBunch.create_from_ll) ``` Use `bs`, `num_workers`, `collate_fn` and a potential `test` folder. 
`ds_tfms` is a tuple of two lists of transforms to be applied to the training and the validation (plus, optionally, the test) set. `tfms` are the transforms to apply to the [`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader). The `size` and the `kwargs` are passed to the transforms for data augmentation. ``` show_doc(ImageDataBunch.single_from_classes) jekyll_note('This method is deprecated, you should use DataBunch.load_empty now.') ``` ### Other methods In the next few methods we will use another dataset, CIFAR. This is because the second method will get the statistics for our dataset and we want to be able to show different statistics per channel. If we were to use MNIST, these statistics would be the same for every channel. White pixels are [255,255,255] and black pixels are [0,0,0] (or in normalized form [1,1,1] and [0,0,0]) so there is no variance between channels. ``` path = untar_data(URLs.CIFAR); path show_doc(channel_view) data = ImageDataBunch.from_folder(path, ds_tfms=tfms, valid='test', size=24) def channel_view(x:Tensor)->Tensor: "Make channel the first axis of `x` and flatten remaining axes" return x.transpose(0,1).contiguous().view(x.shape[1],-1) ``` This function takes a tensor and flattens all dimensions except the channels, which it keeps as the first axis. This function is used to feed [`ImageDataBunch.batch_stats`](/vision.data.html#ImageDataBunch.batch_stats) so that it can get the pixel statistics of a whole batch. Let's take as an example the dimensions of our CIFAR batches: 128, 3, 24, 24. ``` t = torch.Tensor(128, 3, 24, 24) t.size() tensor = channel_view(t) tensor.size() show_doc(ImageDataBunch.batch_stats) data.batch_stats() show_doc(ImageDataBunch.normalize) ``` In the fast.ai library we have `imagenet_stats`, `cifar_stats` and `mnist_stats` so we can add normalization easily with any of these datasets. Let's see an example with our dataset of choice: CIFAR.
``` data.normalize(cifar_stats) data.batch_stats() ``` ## Data normalization You may also want to normalize your data, which can be done by using the following functions. ``` show_doc(normalize) show_doc(denormalize) show_doc(normalize_funcs) ``` On MNIST the mean and std are 0.1307 and 0.3081 respectively (commonly quoted values). If you're using a pretrained model, you'll need to use the normalization that was used to train the model. The imagenet norm and denorm functions are stored as constants inside the library named <code>imagenet_norm</code> and <code>imagenet_denorm</code>. If you're training a model on CIFAR-10, you can also use <code>cifar_norm</code> and <code>cifar_denorm</code>. You may sometimes see warnings about *clipping input data* when plotting normalized data. That's because even though the data is automatically denormalized when plotting, floating point errors may push some values slightly out of the correct range. You can safely ignore these warnings in this case. ``` data = ImageDataBunch.from_folder(untar_data(URLs.MNIST_SAMPLE), ds_tfms=tfms, size=24) data.normalize() data.show_batch(rows=3, figsize=(6,6)) show_doc(get_annotations) ``` To use this dataset and collate samples into batches, you'll need the following function: ``` show_doc(bb_pad_collate) ``` Finally, to apply transformations to [`Image`](/vision.image.html#Image) in a [`Dataset`](https://pytorch.org/docs/stable/data.html#torch.utils.data.Dataset), we use this last class. ## ItemList specific to vision The vision application adds a few subclasses of [`ItemList`](/data_block.html#ItemList) specific to images. ``` show_doc(ImageList, title_level=3) ``` It inherits from [`ItemList`](/data_block.html#ItemList) and overrides [`ItemList.get`](/data_block.html#ItemList.get) to call [`open_image`](/vision.image.html#open_image) in order to turn an image file, given as a `Path` object, into an [`Image`](/vision.image.html#Image) object.
`label_cls` can be specified for the labels, `xtra` contains any extra information (usually in the form of a dataframe) and `processor` is applied to the [`ItemList`](/data_block.html#ItemList) after splitting and labelling. How does [`ImageList.__init__`](/vision.data.html#ImageList.__init__) extend [`ItemList.__init__`](/data_block.html#ItemList.__init__)? It creates additional attributes such as `convert_mode`, `after_open`, `c` and `size` on top of [`ItemList.__init__`](/data_block.html#ItemList.__init__); `convert_mode` and `sizes` in particular are needed by [`ImageList.get`](/vision.data.html#ImageList.get) (which also overrides [`ItemList.get`](/data_block.html#ItemList.get)) and [`ImageList.open`](/vision.data.html#ImageList.open). ``` show_doc(ImageList.from_folder) ``` How does [`ImageList.from_folder`](/vision.data.html#ImageList.from_folder) extend [`ItemList.from_folder`](/data_block.html#ItemList.from_folder)? It adds constraints on `extensions` so as to work with image files specifically, and accepts additional arguments such as `convert_mode` and `after_open` that are not available on [`ItemList`](/data_block.html#ItemList). ``` show_doc(ImageList.from_df) show_doc(get_image_files) show_doc(ImageList.open) ``` Let's get a feel for how `open` is used with the following example. ``` from fastai.vision import * path_data = untar_data(URLs.PLANET_TINY); path_data.ls() imagelistRGB = ImageList.from_folder(path_data/'train'); imagelistRGB ``` `open` takes a single input `fn`, the filename, as a `Path` or `str`.
``` imagelistRGB.items[10] imagelistRGB.open(imagelistRGB.items[10]) imagelistRGB[10] print(imagelistRGB[10]) ``` The reason `imagelistRGB[10]` prints out an image is that, behind the scenes, [`ImageList.get`](/vision.data.html#ImageList.get) calls [`ImageList.open`](/vision.data.html#ImageList.open), which calls [`open_image`](/vision.image.html#open_image), which uses `PIL.Image.open(fn).convert(convert_mode)` to open the image file and finally turns it into an `Image` object with shape (3, 128, 128). Internally, [`ImageList.open`](/vision.data.html#ImageList.open) passes `ImageList.convert_mode` and `ImageList.after_open` to [`open_image`](/vision.image.html#open_image) to adjust the appearance of the Image object. For example, setting `convert_mode` to `L` can make images black and white. ``` imagelistRGB.convert_mode = 'L' imagelistRGB.open(imagelistRGB.items[10]) show_doc(ImageList.show_xys) show_doc(ImageList.show_xyzs) show_doc(ObjectCategoryList, title_level=3) show_doc(ObjectItemList, title_level=3) show_doc(SegmentationItemList, title_level=3) show_doc(SegmentationLabelList, title_level=3) show_doc(PointsLabelList, title_level=3) show_doc(PointsItemList, title_level=3) show_doc(ImageImageList, title_level=3) ``` ## Building your own dataset This module also contains a few helper functions to allow you to build your own dataset for image classification. ``` show_doc(download_images) show_doc(verify_images) ``` It checks whether every image in this folder can be opened and has `n_channels`. If `n_channels` is 3, it will try to convert the image to RGB. If `delete=True`, the image will be removed if this fails. If `resume`, it will skip images already present in `dest`. If `max_size` is specified, the image is resized (keeping the same ratio) so that both sides are less than `max_size`, using `interp`. The result is stored in `dest`; `ext` forces an extension type, and `img_format` and `kwargs` are passed to `PIL.Image.save`. Uses `max_workers` CPUs.
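The `max_size` behaviour described above — shrink the image so both sides fit while keeping the aspect ratio, and never upscale — boils down to a few lines. This is a sketch of that logic only, not fastai's actual implementation:

```python
# Aspect-preserving downscale: shrink (h, w) so the larger side is at
# most max_size; images that already fit are returned unchanged.
def shrink_to_fit(h, w, max_size):
    ratio = max(h, w) / max_size
    if ratio <= 1:
        return h, w
    return round(h / ratio), round(w / ratio)

print(shrink_to_fit(1200, 800, 600))  # (600, 400)
print(shrink_to_fit(300, 200, 600))   # (300, 200)
```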
## Undocumented Methods - Methods moved below this line will intentionally be hidden ``` show_doc(PointsItemList.get) show_doc(SegmentationLabelList.new) show_doc(ImageList.from_csv) show_doc(ObjectCategoryList.get) show_doc(ImageList.get) show_doc(SegmentationLabelList.reconstruct) show_doc(ImageImageList.show_xys) show_doc(ImageImageList.show_xyzs) show_doc(ImageList.open) show_doc(PointsItemList.analyze_pred) show_doc(SegmentationLabelList.analyze_pred) show_doc(PointsItemList.reconstruct) show_doc(SegmentationLabelList.open) show_doc(ImageList.reconstruct) show_doc(resize_to) show_doc(ObjectCategoryList.reconstruct) show_doc(PointsLabelList.reconstruct) show_doc(PointsLabelList.analyze_pred) show_doc(PointsLabelList.get) ``` ## New Methods - Please document or move to the undocumented section ``` show_doc(ObjectCategoryList.analyze_pred) ```
``` %matplotlib inline ``` ============================= Model Surface Output ============================= Plot a surface map with mean sea level pressure (MSLP), 2m Temperature (F), and Wind Barbs (kt). Imports ``` from datetime import datetime import cartopy.crs as ccrs import cartopy.feature as cfeature import matplotlib.pyplot as plt from metpy.units import units from netCDF4 import num2date import numpy as np import scipy.ndimage as ndimage from siphon.ncss import NCSS ``` Helper functions ``` # Helper function for finding proper time variable def find_time_var(var, time_basename='time'): for coord_name in var.coordinates.split(): if coord_name.startswith(time_basename): return coord_name raise ValueError('No time variable found for ' + var.name) ``` Create NCSS object to access the NetcdfSubset --------------------------------------------- Data from NCEI GFS 0.5 deg Analysis Archive ``` base_url = 'https://www.ncei.noaa.gov/thredds/ncss/grid/gfs-g4-anl-files/' dt = datetime(2018, 1, 4, 12) ncss = NCSS('{}{dt:%Y%m}/{dt:%Y%m%d}/gfsanl_4_{dt:%Y%m%d}' '_{dt:%H}00_000.grb2'.format(base_url, dt=dt)) # Create lat/lon box for location you want to get data for query = ncss.query().time(dt) query.lonlat_box(north=65, south=15, east=310, west=220) query.accept('netcdf') # Request data for model "surface" data query.variables('Pressure_reduced_to_MSL_msl', 'Apparent_temperature_height_above_ground', 'u-component_of_wind_height_above_ground', 'v-component_of_wind_height_above_ground') data = ncss.get_data(query) ``` Begin data manipulation ----------------------- Data for the surface from a model is a bit complicated. The variables come from different levels and may have different data array shapes.
MSLP: Pressure_reduced_to_MSL_msl (time, lat, lon) 2m Temp: Apparent_temperature_height_above_ground (time, level, lat, lon) 10m Wind: u/v-component_of_wind_height_above_ground (time, level, lat, lon) Height above ground Temp from GFS has one level (2m) Height above ground Wind from GFS has three levels (10m, 80m, 100m) ``` # Pull out variables you want to use mslp = data.variables['Pressure_reduced_to_MSL_msl'][:].squeeze() temp = units.K * data.variables['Apparent_temperature_height_above_ground'][:].squeeze() u_wind = units('m/s') * data.variables['u-component_of_wind_height_above_ground'][:].squeeze() v_wind = units('m/s') * data.variables['v-component_of_wind_height_above_ground'][:].squeeze() lat = data.variables['lat'][:].squeeze() lon = data.variables['lon'][:].squeeze() time_var = data.variables[find_time_var(data.variables['Pressure_reduced_to_MSL_msl'])] # Convert winds to knots u_wind.ito('kt') v_wind.ito('kt') # Convert number of hours since the reference time into an actual date time = num2date(time_var[:].squeeze(), time_var.units) lev_10m = np.where(data.variables['height_above_ground3'][:] == 10)[0][0] u_wind_10m = u_wind[lev_10m] v_wind_10m = v_wind[lev_10m] # Combine 1D latitude and longitudes into a 2D grid of locations lon_2d, lat_2d = np.meshgrid(lon, lat) # Smooth MSLP a little # Be sure to only put in a 2D lat/lon or Y/X array for smoothing smooth_mslp = ndimage.gaussian_filter(mslp, sigma=3, order=0) * units.Pa smooth_mslp.ito('hPa') ``` Begin map creation ------------------ ``` # Set Projection of Data datacrs = ccrs.PlateCarree() # Set Projection of Plot plotcrs = ccrs.LambertConformal(central_latitude=[30, 60], central_longitude=-100) # Create new figure fig = plt.figure(figsize=(11, 8.5)) # Add the map and set the extent ax = plt.subplot(111, projection=plotcrs) plt.title('GFS Analysis MSLP, 2m Temperature (F), Wind Barbs (kt)' ' {0:%d %B %Y %H:%MZ}'.format(time), fontsize=16) ax.set_extent([235., 290., 20., 55.]) # Add state boundaries 
to plot states_provinces = cfeature.NaturalEarthFeature(category='cultural', name='admin_1_states_provinces_lakes', scale='50m', facecolor='none') ax.add_feature(states_provinces, edgecolor='black', linewidth=1) # Add country borders to plot country_borders = cfeature.NaturalEarthFeature(category='cultural', name='admin_0_countries', scale='50m', facecolor='none') ax.add_feature(country_borders, edgecolor='black', linewidth=1) # Plot MSLP Contours clev_mslp = np.arange(0, 1200, 4) cs = ax.contour(lon_2d, lat_2d, smooth_mslp, clev_mslp, colors='black', linewidths=1.5, linestyles='solid', transform=datacrs) plt.clabel(cs, fontsize=10, inline=1, inline_spacing=10, fmt='%i', rightside_up=True, use_clabeltext=True) # Plot 2m Temperature Contours clevtemp = np.arange(-60, 101, 10) cs2 = ax.contour(lon_2d, lat_2d, temp.to(units('degF')), clevtemp, colors='tab:red', linewidths=1.25, linestyles='dotted', transform=datacrs) plt.clabel(cs2, fontsize=10, inline=1, inline_spacing=10, fmt='%i', rightside_up=True, use_clabeltext=True) # Plot 10m Wind Barbs ax.barbs(lon_2d, lat_2d, u_wind_10m.magnitude, v_wind_10m.magnitude, length=6, regrid_shape=20, pivot='middle', transform=datacrs) plt.show() ```
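Since `find_time_var` (defined in the helper functions above) only inspects a variable's `coordinates` string, it can be sanity-checked without downloading any data. The `DummyVar` stand-in below is made up for illustration:

```python
# The helper from above, restated so this snippet is self-contained.
def find_time_var(var, time_basename='time'):
    for coord_name in var.coordinates.split():
        if coord_name.startswith(time_basename):
            return coord_name
    raise ValueError('No time variable found for ' + var.name)

class DummyVar:
    # Hypothetical stand-in mimicking a netCDF4 variable's attributes.
    coordinates = 'time1 lat lon'
    name = 'Pressure_reduced_to_MSL_msl'

print(find_time_var(DummyVar()))  # time1
```

This mirrors why the helper is needed in the first place: GFS files name the time coordinate inconsistently (`time`, `time1`, ...), so matching on the prefix is more robust than hard-coding a name.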
Collect stock and option data, price with BSM, compare accuracy ``` import pandas as pd from math import sqrt import numpy as np from scipy.stats import norm import seaborn as sns from datetime import datetime, timezone, timedelta # initial parameters ticker = 'GOOG' risk_free_rate = 0.08 option_type = 'put' # 2 weeks, 1m, 3m, 6m # expiration_datetime = datetime(2020, 12, 4, 0, 0, tzinfo=timezone.utc) # expiration_datetime = datetime(2020, 12, 11, 0, 0, tzinfo=timezone.utc) # expiration_datetime = datetime(2020, 12, 24, 0, 0, tzinfo=timezone.utc) # expiration_datetime = datetime(2021, 2, 19, 0, 0, tzinfo=timezone.utc) expiration_datetime = datetime(2021, 6, 18, 0, 0, tzinfo=timezone.utc) # get a UTC timestamp from a date. This is used to scrape data from Yahoo Finance. expiration_timestamp = int(expiration_datetime.timestamp()) print(expiration_timestamp) print(datetime.fromtimestamp(expiration_timestamp)) # this is in UTC, add 5 hours to this to get EST # feb 19 2021 # !pip install lxml ``` Download historical stock price data for Google (GOOG). 
I get a past year's worth from (https://finance.yahoo.com/quote/GOOG/history?p=GOOG) ``` # download annual historical data for the stock # stock_price_path = "~/documents/quant_finance/price_data/{}.csv".format(ticker) stock_price_path = "price_data/{}.csv".format(ticker) options_data_path = 'https://finance.yahoo.com/quote/{}/options?date={}&p={}'.format(ticker, expiration_timestamp, ticker) df = pd.read_csv(stock_price_path) df = df.sort_values(by="Date") df = df.dropna() # calculate returns df = df.assign(close_day_before=df['Adj Close'].shift(1)) df['returns'] = ((df['Adj Close'] - df.close_day_before)/df.close_day_before) # get options data options_data = pd.read_html(options_data_path) # returns two dataframes, for calls and puts if option_type == 'call': r = options_data[0] elif option_type == 'put': r = options_data[1] print(df) print(r) # get risk free rate data rfr_data_path = 'https://www.treasury.gov/resource-center/data-chart-center/interest-rates/Pages/TextView.aspx?data=yield' rfr_df = pd.read_html(rfr_data_path)[1] # returns two dataframes, for calls and puts rfr_df # plot the rfr rfr_date = rfr_df.iloc[-1][0] cur_rfr = rfr_df.iloc[-1][1:].astype(float) cur_rfr.plot(title="Treasury Curve for {}".format(rfr_date)) # BSM model # S is the spot price # K is the strike price # T is the fraction of days to strike date divided by 252 (stock market days) # r is the risk free rate # sigma is the annual volatility of the returns of the stock def black_scholes(S, K, T, r, sigma, option_type='call'): d1 = (np.log(S/K) + (r + sigma**2/2)*T)/(sigma * sqrt(T)) d2 = d1 - sigma * np.sqrt(T) if option_type=='call': return S * norm.cdf(d1) - K*np.exp(-r*T) * norm.cdf(d2) elif option_type=='put': return K*np.exp(-r*T) * norm.cdf(-d2) - S * norm.cdf(-d1) # get the (num days to expiration) / (trading days in a year) def get_time_to_expiration(expiration_datetime_utc): return (expiration_datetime_utc - datetime.now(timezone.utc)).days / 252 value_s = black_scholes(S = 69, 
K = 70, T = 6/12, r = .05, sigma = .35, option_type='put') print(value_s) # the rfr should vary with time, not be constant. cur_stock_price = df.iloc[-1]['Adj Close'] time_to_expiration = get_time_to_expiration(expiration_datetime) # Calculate the volatility as the annualized standard deviation of the stock returns sigma = np.sqrt(252) * df['returns'].std() print('cur_stock_price: {}, time to expiration: {}, rfr: {}, vol: {}'.format(cur_stock_price, time_to_expiration, risk_free_rate, sigma)) list_estimates = [] strike_start_idx, strike_end_idx = 0, r.shape[0] # strike_start_idx, strike_end_idx = 0, 21 # run BSM for different strikes for x in range(strike_start_idx,strike_end_idx): value_s = black_scholes(S = cur_stock_price, K = r['Strike'][x], T = time_to_expiration, r = risk_free_rate, sigma = sigma, option_type = option_type) list_estimates.append(value_s) # merge the real and computed dataframes to compare results df_list = pd.DataFrame(data=list_estimates, index=r.index[strike_start_idx:strike_end_idx]) df_list['estimate'] = df_list[0] del df_list[0] df_estimate = r.merge(df_list, right_index = True, left_index = True) df_estimate # plot option prices across strikes df_estimate.plot(x='Strike',y=['Last Price','estimate'], title='Computed v Market prices for {}, Expiry {}'.format(ticker, expiration_datetime.date())) df_estimate['estimate_error'] = ((df_estimate['Last Price'] - df_estimate['estimate'])/df_estimate['estimate'])*100 ax = sns.distplot(df_estimate['estimate_error']) print(df_estimate['estimate_error'].describe()) # across strikes, the median error is 34%, but there is a huge spread and outliers, # which mainly seem to be due to the higher strikes being very close to 0. # plot the error across strikes df_estimate.plot(x='Strike',y='estimate_error',title='estimate error across strikes') # what is a measure of error? 
# MSE: average of the sum of squared errors mse = np.sum((df_estimate['Last Price'] - df_estimate['estimate'])**2) / df_estimate.shape[0] rmse = np.sqrt(mse) print(mse, rmse) # could also try MAD # examine outliers df_estimate[df_estimate['estimate_error'].abs() > 1] # the model tends to undervalue puts especially at high strikes (deep out of the money). # how accurate are our vol estimates? df_estimate['Implied Volatility'] = df_estimate['Implied Volatility'].str.replace(',','').str.slice(stop=-1).astype('float') / 100 df_estimate['Historical Vol'] = sigma ax = df_estimate.plot(x='Strike',y=['Implied Volatility','Historical Vol'],title='volatility across strikes') df_estimate['Volume'] = df_estimate['Volume'].replace(to_replace='-',value=0) df_estimate['Volume'] = df_estimate['Volume'].astype('int') df_estimate['Open Interest'] = df_estimate['Open Interest'].replace(to_replace='-',value=0) df_estimate['Open Interest'] = df_estimate['Open Interest'].astype('int') df_estimate.plot(x='Strike',y=['Volume','Open Interest'],title='volume and open interest') sns.distplot(df_estimate['Open Interest']) print(df_estimate['Open Interest'].describe()) # filter out low volume options # lower_bound, upper_bound = cur_stock_price * .5, cur_stock_price * 1.5 # price_estimate_filtered = df_estimate[(df_estimate['Strike'] > lower_bound) & (df_estimate['Strike'] < upper_bound)].reset_index() price_estimate_filtered = df_estimate[(df_estimate['Open Interest'] > 15)].reset_index() ax = sns.distplot(price_estimate_filtered['estimate_error']) print(price_estimate_filtered['estimate_error'].describe()) price_estimate_filtered.plot(x='Strike',y=['Last Price','estimate'], title='Option prices for {}, Expiry {} (Filtered Strikes)'.format(ticker, expiration_datetime.date())) # if we use implied vol instead of historical vol, our estimate should improve def calculate_prices_across_strikes(options_df, cur_stock_price, time_to_expiration, risk_free_rate, sigma, option_type): 
print('cur_stock_price: {}, time to expiration: {}, rfr: {}, vol: {}'.format(cur_stock_price, time_to_expiration, risk_free_rate, sigma)) list_estimates = [] estimates_with_implied = [] strike_start_idx, strike_end_idx = 0, options_df.shape[0] # run BSM for different strikes for x in range(strike_start_idx,strike_end_idx): # print(options_df['Strike'][x]) value_s = black_scholes(S = cur_stock_price, K = options_df['Strike'][x], T = time_to_expiration, r = risk_free_rate, sigma = sigma, option_type = option_type) list_estimates.append(value_s) for x in range(strike_start_idx,strike_end_idx): # print(options_df['Strike'][x]) value_s = black_scholes(S = cur_stock_price, K = options_df['Strike'][x], T = time_to_expiration, r = risk_free_rate, sigma = options_df['Implied Volatility'][x], option_type = option_type) estimates_with_implied.append(value_s) # merge the real and computed dataframes to compare results df_estimate = options_df[['Strike','Last Price']] df_estimate['estimate'] = list_estimates df_estimate['estimate_with_implied'] = estimates_with_implied return df_estimate df_estimate_european = calculate_prices_across_strikes(price_estimate_filtered, cur_stock_price, time_to_expiration, risk_free_rate, sigma, option_type) df_estimate_european.plot(x='Strike',y=['Last Price','estimate','estimate_with_implied'], title='Option Prices for {}, Expiry {}, with Implied Vol'.format(ticker, expiration_datetime.date())) df_estimate_european['estimate_with_implied_error'] = ((df_estimate_european['Last Price'] - df_estimate_european['estimate_with_implied'])/df_estimate_european['estimate_with_implied'])*100 ax = sns.distplot(df_estimate_european['estimate_with_implied_error']) print(df_estimate_european['estimate_with_implied_error'].describe()) df_estimate_european.plot(x='Strike',y='estimate_with_implied_error',title='Estimate Percent Error Across Strikes (Filtered Strikes)') mse = np.sum((df_estimate_european['Last Price'] - 
df_estimate_european['estimate_with_implied'])**2) / df_estimate_european.shape[0] rmse = np.sqrt(mse) print(mse, rmse) # American options from functions.BS_pricer import BS_pricer from functions.Parameters import Option_param from functions.Processes import Diffusion_process from functions.cython.cython_functions import PSOR import numpy as np import scipy.stats as ss import matplotlib.pyplot as plt %matplotlib inline from IPython.display import display import sympy; sympy.init_printing() def display_matrix(m): display(sympy.Matrix(m)) # Longstaff Schwartz Method def price_american(S0, K, T, r, sigma, payoff='call'): # Creates the object with the parameters of the option opt_param = Option_param(S0=S0, K=K, T=T, exercise="American", payoff=payoff ) # Creates the object with the parameters of the process diff_param = Diffusion_process(r=r, sig=sigma) # Creates the pricer object BS = BS_pricer(opt_param, diff_param) # return BS.LSM(N=1000, paths=1000, order=2) # Longstaff Schwartz Method N_space = 8000 N_time = 5000 return BS.PDE_price((N_space,N_time)) # def plot_american(BS, x_start, x_end, y_start, y_end): # fig = plt.figure(figsize=(9,5)) # BS.plot([x_start,x_end,y_start,y_end]) # price_american(S0=100, K=100, T=1, r=0.1, sigma=0.2, payoff="put" ) # 1827.949951 1110.0 0.7817460317460317 0.08 0.38212781123440914 put price_american(S0=1827.949951, K=1110.0, T=0.7817460317460317, r=0.08, sigma=0.38212781123440914, payoff="put" ) %%time # if we use implied vol instead of historical vol, our estimate should improve def calculate_prices_across_strikes_american(options_df, cur_stock_price, time_to_expiration, risk_free_rate, sigma, option_type): print('cur_stock_price: {}, time to expiration: {}, rfr: {}, vol: {}'.format(cur_stock_price, time_to_expiration, risk_free_rate, sigma)) index = [] list_estimates = [] # estimates_with_implied = [] strike_start_idx, strike_end_idx = 0, options_df.shape[0] step = 10 # run BSM for different strikes for x in 
range(strike_start_idx,strike_end_idx, step): index.append(x) print('strike', options_df['Strike'][x]) value_s = price_american(S0 = cur_stock_price, K = options_df['Strike'][x], T = time_to_expiration, r = risk_free_rate, sigma = sigma, payoff = option_type) list_estimates.append(value_s) # print(cur_stock_price, options_df['Strike'][x], time_to_expiration, risk_free_rate, options_df['Implied Volatility'][x], option_type) # value_s = price_american(S0 = cur_stock_price, # K = options_df['Strike'][x], # T = time_to_expiration, # r = risk_free_rate, # sigma = options_df['Implied Volatility'][x], # payoff = option_type) # estimates_with_implied.append(value_s) # merge the real and computed dataframes to compare results price_estimates = pd.Series(data=list_estimates, index=index) print(price_estimates) # df_estimate['estimate_with_implied'] = estimates_with_implied df_estimate = options_df[['Strike','Last Price']] df_estimate['estimate'] = price_estimates return df_estimate df_estimate_american = calculate_prices_across_strikes_american(price_estimate_filtered, cur_stock_price, time_to_expiration, risk_free_rate, sigma, option_type) # df_estimate.plot(x='Strike',y=['Last Price','estimate','estimate_with_implied'], title='Option Prices for {}, Expiry {}, with Implied Vol'.format(ticker, expiration_datetime.date())) # df_estimate.plot(x='Strike',y=['Last Price','estimate'], title='Option Prices for {}, Expiry {}, with Implied Vol'.format(ticker, expiration_datetime.date())) ax1 = df_estimate_american.plot(kind='scatter', x='Strike',y=['Last Price'], title='Option Prices for {}, Expiry {}, with Implied Vol'.format(ticker, expiration_datetime.date())) df_estimate_american.plot(kind='scatter', x='Strike',y=['estimate'], ax=ax1, color='r') # compare with european df_estimate_american['european_estimate'] = df_estimate_european['estimate'] df_estimate_american['european_estimate_with_implied'] = df_estimate_european['estimate_with_implied'] ax1 = 
df_estimate_american.plot(kind='scatter', x='Strike',y=['Last Price'], label='Actual Price') df_estimate_american.plot(kind='scatter', x='Strike',y=['estimate'], ax=ax1, color='r', label='PDE solver') df_estimate_american.plot(kind='scatter', x='Strike',y=['european_estimate'], ax=ax1, color='g', label='European') # df_estimate.plot(kind='scatter', x='Strike',y=['estimate'], ax=ax1, color='r') ax1.set_title('Option Prices for {}, Expiry {}, Collected {}'.format(ticker, expiration_datetime.date(), datetime.now().date())) ax1.legend() datetime.now().date() # how accurate is BSM for different expiration dates? # import os # data_path = "/home/amao/options_data/2020-11-19 17:00:01.568118" # ticker = 'TSLA' # calls_data, puts_data = dict(), dict() # for filename in os.listdir(data_path): # if ticker in filename: # df = pd.read_csv(os.path.join(data_path, filename)) # if 'puts' in filename : # puts_data[filename] = df # elif 'calls' in filename: # calls_data[filename] = df # print(len(calls_data), len(puts_data)) # stock_price_df = df # for k,v in calls_data.items(): # print(k, v.head()) # bsm_estimate_df = calculate_prices_across_strikes(v, cur_stock_price, time_to_expiration, risk_free_rate, sigma): # binomial tree import math sigma = .2 rfr = .02 delta_t = 1 u = math.exp(sigma * math.sqrt(delta_t)) d = math.exp(-sigma * math.sqrt(delta_t)) print('u and d:',u,d) # a = math.exp(rfr * delta_t) a = math.exp((rfr - .01) * delta_t) p = (a - d)/(u - d) print('p', p) # N period binomial tree model def price_option_european(S_0, u, d, K, option_type: str, num_periods: int): if num_periods == 0: if option_type == 'call': return max(S_0-K, 0) elif option_type == 'put': return max(K-S_0, 0) else: S_u = S_0 * u S_d = S_0 * d f_u = price_option_european(S_0 * u,u,d,K,option_type,num_periods-1) f_d = price_option_european(S_0 * d,u,d,K,option_type,num_periods-1) # print('f_u and f_d:', f_u, f_d) return math.exp(-rfr*delta_t) * (p*f_u + (1-p)*f_d) S_0 = 200 K = 200 option_type = 
'put'

for i in range(5):
    price = price_option_european(S_0, u, d, K, option_type, num_periods=i)
    print('price', price)

def price_option_american(S_0, u, d, K, option_type: str, num_periods: int):
    if num_periods == 0:
        if option_type == 'call':
            return max(S_0 - K, 0)
        elif option_type == 'put':
            return max(K - S_0, 0)
    else:
        S_u = S_0 * u
        S_d = S_0 * d
        # continuation values from the two child subtrees
        f_u = price_option_american(S_0 * u, u, d, K, option_type, num_periods - 1)
        f_d = price_option_american(S_0 * d, u, d, K, option_type, num_periods - 1)
        # value of exercising early at each child node
        if option_type == 'call':
            f_u_early = max(S_u - K, 0)
            f_d_early = max(S_d - K, 0)
        elif option_type == 'put':
            f_u_early = max(K - S_u, 0)
            f_d_early = max(K - S_d, 0)
        # an American option is worth the max of continuation and early exercise
        f_u = max(f_u, f_u_early)
        f_d = max(f_d, f_d_early)
        # print('f_u and f_d:', f_u, f_d)
        return math.exp(-rfr*delta_t) * (p*f_u + (1-p)*f_d)

S_0 = 200
K = 200
option_type = 'put'
for i in range(5):
    price = price_option_american(S_0, u, d, K, option_type, num_periods=i)
    print('price', price)
```
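The recursive pricers above revisit the same tree nodes many times, so their cost grows exponentially with `num_periods`. A standard alternative is iterative backward induction on a Cox-Ross-Rubinstein tree, which is linear in memory per level and can be checked against the closed-form Black-Scholes price. The sketch below is self-contained and its function names are illustrative, not taken from the notebook:

```python
from math import exp, log, sqrt
from statistics import NormalDist

def bs_european(S0, K, T, r, sigma, option_type='call'):
    # closed-form Black-Scholes price for a European option
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf
    if option_type == 'call':
        return S0 * N(d1) - K * exp(-r * T) * N(d2)
    return K * exp(-r * T) * N(-d2) - S0 * N(-d1)

def crr_european(S0, K, T, r, sigma, option_type='call', steps=500):
    # iterative backward induction on a Cox-Ross-Rubinstein binomial tree
    dt = T / steps
    u = exp(sigma * sqrt(dt))
    d = 1.0 / u
    p = (exp(r * dt) - d) / (u - d)   # risk-neutral up probability
    disc = exp(-r * dt)
    # option values at the terminal nodes
    vals = [max(S0 * u**j * d**(steps - j) - K, 0.0) if option_type == 'call'
            else max(K - S0 * u**j * d**(steps - j), 0.0)
            for j in range(steps + 1)]
    # roll back one period at a time
    for n in range(steps, 0, -1):
        vals = [disc * (p * vals[j + 1] + (1 - p) * vals[j]) for j in range(n)]
    return vals[0]

print(bs_european(100, 100, 1.0, 0.05, 0.2, 'call'))   # ~10.45
print(crr_european(100, 100, 1.0, 0.05, 0.2, 'call'))  # converges to the same value
```

With 500 steps the tree price agrees with the closed form to a few hundredths, which is a useful regression test when modifying either pricer.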
<img src='./img/LogoWekeo_Copernicus_RGB_0.png' alt='Logo EU Copernicus EUMETSAT' align='right' width='20%'></img>
<br>

<a href="./00_index.ipynb"><< Index</a><br>
<a href="./10_sentinel5p_L2_retrieve.ipynb"><< 10 - Sentinel-5P Carbon Monoxide - Retrieve</a><span style="float:right;"><a href="./20_sentinel3_OLCI_L1_retrieve.ipynb">20 - Sentinel-3 OLCI Level 1B - Retrieve >></a></span>

<div class="alert alert-block alert-warning">
<b>LOAD, BROWSE AND VISUALIZE</b></div>

# Copernicus Sentinel-5 Precursor (Sentinel-5p) - Carbon Monoxide

The following example introduces you to Sentinel-5p data in general and to the total column of carbon monoxide sensed by Sentinel-5p in particular. Carbon monoxide is a useful trace gas for monitoring and tracking fire occurrences. The example is based on the Australian bushfires that severely affected Australia's east coast at the end of 2019 and the beginning of 2020.

#### Module outline:
* [1 - Load and browse Sentinel-5P TROPOMI data](#load_s5p)
* [2 - Create a geographical subset](#geographical_subset)
* [3 - Visualize Sentinel-5P Carbon Monoxide data](#visualize_s5p)

#### Load required libraries

```
%matplotlib inline
import os
import xarray as xr
import numpy as np
import netCDF4 as nc

import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm

import cartopy.crs as ccrs
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
import cartopy.feature as cfeature

from matplotlib.axes import Axes
from cartopy.mpl.geoaxes import GeoAxes
GeoAxes._pcolormesh_patched = Axes.pcolormesh
```

#### Load helper functions

```
from ipynb.fs.full.functions import visualize_pcolormesh, generate_geographical_subset
```

<hr>

## <a id="load_s5p"></a>Load and browse Sentinel-5P data

A Sentinel-5p file is organised in two groups: `PRODUCT` and `METADATA`. The `PRODUCT` group stores the main data fields of the product, including `latitude`, `longitude` and the variable itself. The `METADATA` group provides additional metadata items.
Sentinel-5p variables have the following dimensions:
* `scanline`: the number of measurements in the granule / along-track dimension index
* `ground_pixel`: the number of spectra in a measurement / across-track dimension index
* `time`: time reference for the data
* `corner`: pixel corner index
* `layer`: this dimension indicates the vertical grid of profile variables

Sentinel-5p data is disseminated in `netCDF`. You can load multiple `netCDF` files at once with the `open_mfdataset()` function of the xarray library. In order to load a variable from Sentinel-5p data files, you have to specify the following keyword arguments:
- `group='PRODUCT'`: to load the `PRODUCT` group
- `concat_dim='scanline'`: multiple files will be concatenated based on the scanline dimension
- `combine='nested'`: combine n-dimensional grids into one along each dimension of the grid

Let us load a Sentinel-5p data file as `xarray.Dataset` and inspect the data structure:

```
s5p = xr.open_mfdataset('../eodata/sentinel_5p/co/2019/12/29/*', concat_dim='scanline', combine='nested', group='PRODUCT')
s5p
```

You see that the Sentinel-5p data file contains the following dimensions and five data variables:
* **Dimensions**:
  * `scanline`
  * `ground_pixel`
  * `time`
  * `corner`

* **Data variables**:
  * `delta_time`: the offset of individual measurements within the granule, given in milliseconds
  * `time_utc`: valid time stamp of the data
  * `qa_value`: quality descriptor, varying between 0 (no data) and 1 (full-quality data)
  * `carbonmonoxide_total_column`: vertically integrated CO column density
  * `carbonmonoxide_total_column_precision`: standard error of the vertically integrated CO column

You can specify one variable of interest and get more detailed information about the variable. E.g. `carbonmonoxide_total_column` is the atmosphere mole content of carbon monoxide, has the unit `mol m-2`, and has three dimensions: `time`, `scanline` and `ground_pixel`.
```
s5p_co = s5p['carbonmonoxide_total_column']
s5p_co
```

You can do this for the available variables, but also for the coordinates `latitude` and `longitude`.

```
print('Latitude')
print(s5p_co.latitude)

print('Longitude')
print(s5p_co.longitude)
```

<br>

You can retrieve the array values of the variable with square brackets: `[:,:,:]`. One single time step can be selected by specifying one value of the time dimension, e.g. `[0,:,:]`.

```
s5p_co_2912 = s5p_co[0,:,:]
s5p_co_2912
```

The attributes of the data array hold the entry `multiplication_factor_to_convert_to_molecules_percm2`, which is a conversion factor that has to be applied to convert the data from `mol per m2` to `molecules per cm2`.

```
conversion_factor = s5p_co_2912.multiplication_factor_to_convert_to_molecules_percm2
conversion_factor
```

Additionally, you can save the attribute `long_name`, which you can make use of when visualizing the data.

```
longname = s5p_co.long_name
longname
```

## <a id='geographical_subset'></a>Create a geographical subset

You can zoom into a region by specifying a `bounding box` of interest. Let's set the extent to southeast Australia with the following bounding box information:

```
lonmin=130
lonmax=170
latmin=-50
latmax=-19
```

You can use the function [generate_geographical_subset()](./functions.ipynb#generate_geographical_subset) to subset an xarray DataArray based on a given bounding box.

```
s5p_co_subset = generate_geographical_subset(s5p_co_2912, latmin, latmax, lonmin, lonmax)
s5p_co_subset
```

<br>

## <a id="visualize_s5p"></a>Plotting example - Sentinel-5P data

You can plot data arrays of type `numpy` with matplotlib's `pcolormesh` function. In combination with the library [cartopy](https://scitools.org.uk/cartopy/docs/latest/), you can produce high-quality maps.

In order to make it easier to visualize the Carbon Monoxide values, we apply the conversion factor to the DataArray.
This converts the Carbon Monoxide values from mol per m<sup>2</sup> to molecules per cm<sup>2</sup>.

```
s5p_co_converted = s5p_co_subset*conversion_factor
s5p_co_converted
```

For visualization, you can use the function [visualize_pcolormesh](./functions#visualize_pcolormesh). The following kwargs have to be defined:
* `xarray DataArray`
* `longitude`
* `latitude`
* `projection`
* `color palette`
* `unit`
* `longname`
* `vmin`, `vmax`
* `extent (lonmin, lonmax, latmin, latmax)`
* `log`
* `set_global`

Now, let us apply the [visualize_pcolormesh](./functions#visualize_pcolormesh) function and visualize the vertically integrated carbon monoxide column sensed by the Sentinel-5P satellite on 29 December 2019.

Note: Multiplying the DataArray values by 1e-18 improves the readability of the map legend.

```
visualize_pcolormesh(s5p_co_converted*1e-18,
                     s5p_co_converted.longitude,
                     s5p_co_converted.latitude,
                     ccrs.PlateCarree(),
                     'viridis',
                     '*1e-18 molecules per cm2',
                     longname,
                     0, 8,
                     lonmin, lonmax, latmin, latmax,
                     log=False,
                     set_global=False)
```

<br>

<a href="./00_index.ipynb"><< Index</a><br>
<a href="./10_sentinel5p_L2_retrieve.ipynb"><< 10 - Sentinel-5P Carbon Monoxide - Retrieve</a><span style="float:right;"><a href="./20_sentinel3_OLCI_L1_retrieve.ipynb">20 - Sentinel-3 OLCI Level 1B - Retrieve >></a></span>

<hr>

<img src='./img/all_partners_wekeo.png' alt='Logo EU Copernicus EUMETSAT' align='right' width='100%'></img>
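As a closing sanity check on the unit conversion used above: physically, converting from mol per m<sup>2</sup> to molecules per cm<sup>2</sup> means multiplying by Avogadro's number and dividing by the number of cm<sup>2</sup> in a m<sup>2</sup>. The snippet below derives that value from first principles; treating it as the meaning of the file's `multiplication_factor_to_convert_to_molecules_percm2` attribute is an assumption worth verifying against your own data file.

```python
# mol m-2 -> molecules cm-2: multiply by Avogadro's number, divide by cm^2 per m^2
AVOGADRO = 6.02214076e23   # molecules per mole (SI defined value)
CM2_PER_M2 = 1.0e4         # 1 m^2 = 10^4 cm^2

conversion_factor = AVOGADRO / CM2_PER_M2
print(conversion_factor)   # ~6.022e19
```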
### SVD Estimation

For this example a square matrix of size 2 operates on a dense set of unit vectors distributed uniformly around a unit circle. The resulting set of vectors is searched for maximum and minimum lengths to find an estimate of the singular values of the matrix.

#### Note
This example is also available in ~/jvsip/python/python_examples/LinearAlgebraSimple as SVD_exploration.py to be run as a regular script. I decided to also make a notebook to support better documentation.

```
#! python
# Estimate two-norm (sigma0) and singular values (sigma0, sigma1)
# of a two by two matrix by searching through a dense space of vectors.
# Check by reproducing estimate for A matrix using Aest=U Sigma V^t
# Motivating text Numerical Linear Algebra, Trefethen and Bau
```

#### Goals
This example has three goals:
1) Explore the SVD
2) Explore simple pyJvsip coding
3) Learn a little about matplotlib

We need the pyJvsip package and the matplotlib package. We also need pi and a function to do arccos; here we get them from numpy.
```
import pyJvsip as pv
from numpy import pi, arccos
%matplotlib inline
from matplotlib.pyplot import *
```

#### Input unit vectors

$ X = \begin{bmatrix} \vec{x_0} \\\\ \vec{x_1}\end{bmatrix}$ where $\vec{x_0} $ and $\vec{x_1}$ are row vectors such that elements $ X[0,i]^2 +X[1,i]^2 = 1 $

```
N=1000 # number of points to search through for estimates
# create a vector 'arg' of evenly spaced angles between [zero, 2 pi)
arg=pv.create('vview_d',N).ramp(0.0,2.0 * pi/float(N))
# Create a matrix 'X' of unit vectors representing Unit Ball in R2
X=pv.create('mview_d',2,N,'ROW').fill(0.0)
x0=X.rowview(0);x1=X.rowview(1)
_=pv.sin(arg,x0);_=pv.cos(arg,x1)
```

#### Input matrix operator

$A = \begin{bmatrix} 1.0 & 2.0 \\\\ 0.0 & 2.0 \end{bmatrix}$

```
# create matrix 'A' to examine
A=pv.create('mview_d',2,2).fill(0.0)
A[0,0]=1.0;A[0,1]=2.0;
A[1,1]=2.0;A[1,0]=0.0
```

#### Operation

$ Y = A X $

```
# create matrix Y=AX
Y=A.prod(X)
y0=Y.rowview(0);y1=Y.rowview(1)
# create vector 'n' of 2-norms for Y col vectors (sqrt(y0_i^2+y1_i^2))
n=(y0.copy.sq+y1.copy.sq).sqrt
```

#### Estimate singular values

Singular values for matrix $A$ require an orthonormal basis associated with matrix $V$ and an orthonormal basis associated with matrix $U$ such that $A V = U {\;}\Sigma $ where $\Sigma$ is a diagonal matrix with the singular values on the diagonal ordered so that they are in decreasing order.
``` # Find index of (first) maximum value and (first) minimum value i=n.maxvalindx; j=n.minvalindx sigma0=n[i];sigma1=n[j] # create estimates for V, U, Sigma (S) V=A.empty.fill(0) U=V.copy S=V.copy S[0,0]=sigma0;S[1,1]=sigma1 U[0:2,0]=Y[0:2,i];U[0:2,1]=Y[0:2,j] U[0:2,0] /=sigma0;U[0:2,1]/=sigma1 #normalize u1,u2 V[0:2,0]=X[0:2,i];V[0:2,1]=X[0:2,j] #v0, v1 already normalized #estimate A from results of U,V,S Aest=U.prod(S).prod(V.transview) ``` #### Play with matplotlib ``` #PLOT Results stk='\\stackrel' #for creating small matrices spc='\\/' #for creating some space lft='\\left[' #left bracket rgt='\\right]' #right bracket #Plot results for two norm tne = "Maximum magnitude 'x':\n "+'(%.4f, %.4f)'%(y1[i], y0[i]) tne +='\nTwo norm estimate: \n ' + "%.4f"%sigma0 figure(1,figsize=(8.5,6)) subplot(1,2,1).set_aspect('equal') title(r'Unit Ball in $\Re^2$',fontsize=14) ylabel(r'$x_0$',fontsize=14) xlabel(r'$x_1$',fontsize=14) plot(x1.list,x0.list) a00='{'+'%.4f'%Aest[0,0]+'}';a01='{'+'%.4f'%Aest[0,1]+'}' a10='{'+'%.4f'%Aest[1,0]+'}';a11='{'+'%.4f'%Aest[1,1]+'}' eqn="r\'$"+lft+stk+"{y_0}{y_1}"+rgt+"="+lft+stk+a00+a10+\ spc+stk+a01+a11+rgt eqn+= lft+stk+"{x_0}{x_1}"+rgt+"$\'" text(-1.25,1.65,eval(eqn),fontsize=20) text(x1[i],x0[i],'x',horizontalalignment='center',\ verticalalignment='center') plot([0,x1[0]],[0,x0[0]],'k') plot([0,x1[N/4]],[0,x0[N/4]],'g') plot([0,x1[N/2]],[0,x0[N/2]],'r') plot([0,x1[3*N/4]],[0,x0[3*N/4]],'m') subplot(1,2,2).set_aspect('equal') title(r'Transformed Unit Ball in $\Re^2$',fontsize=14) ylabel(r'$y_0$',fontsize=14) xlabel(r'$y_1$',fontsize=14) plot(y1.list,y0.list) text(y1[i],y0[i],'x',horizontalalignment='center',\ verticalalignment='center') text(-1.9,2.0,tne,fontname='serif',fontsize=10) plot([0,y1[0]],[0,y0[0]],'k') plot([0,y1[N/4]],[0,y0[N/4]],'g') plot([0,y1[N/2]],[0,y0[N/2]],'r') plot([0,y1[3*N/4]],[0,y0[3*N/4]],'m') #Plot results for singular values a00='{'+'%.4f'%Aest[0,0]+'}';a01='{'+'%.4f'%Aest[0,1]+'}' 
a10='{'+'%.4f'%Aest[1,0]+'}';a11='{'+'%.4f'%Aest[1,1]+'}' s00='{'+'%.4f'%S[0,0]+'}';s01='{'+'%.4f'%S[0,1]+'}' s10='{'+'%.4f'%S[1,0]+'}';s11='{'+'%.4f'%S[1,1]+'}' u00='{'+'%.4f'%U[0,0]+'}';u01='{'+'%.4f'%U[0,1]+'}' u10='{'+'%.4f'%U[1,0]+'}';u11='{'+'%.4f'%U[1,1]+'}' v00='{'+'%.4f'%V[0,0]+'}';v01='{'+'%.4f'%V[0,1]+'}' v10='{'+'%.4f'%V[1,0]+'}';v11='{'+'%.4f'%V[1,1]+'}' estEqn="r\'$\\left[" +stk+a00+a10+spc+stk+a01+a11+"\\right] = " estEqn+="\\left[" +stk+u00+u10+spc+stk+u01+u11+"\\right] " estEqn+="\\left[" +stk+s00+s10+spc+stk+s01+s11+"\\right] " estEqn+="\\left[" +stk+v00+v01+spc+stk+v10+v11+"\\right]$\'" figure(2,figsize=(9,6.5)) subplot(1,2,1).set_aspect('equal') title(r'Unit Ball in $\Re^2$',fontsize=14) text(-1.6,1.50,eval(estEqn),fontsize=14) #Use a tilde to indicate these are estimates eqnUSV=r'$\mathrm{\tilde{A} = \tilde{U}\/\tilde{\Sigma}\/\tilde{V}^{\/T}}$' text(-1.4,1.75,eqnUSV,fontsize=16) #text(.5,V[0,0],r'$ \hat v_0$',rotation=25,fontsize=16) theta=180.0/pi * arg[i] text(V[1,0]*0.5,V[0,0]*0.5,r'$\hat v_0$',rotation=theta, fontsize=16,\ horizontalalignment='center',verticalalignment='bottom') #text(-.1,.5,r'$ \hat v_1$',rotation = -65,fontsize=16) theta=180.0/pi * arg[j] text(V[1,1]*0.5,V[0,1]*0.5,r'$\hat v_1$',rotation=theta, fontsize=16, \ horizontalalignment='center',verticalalignment='bottom') ylabel(r'$x_0$',fontsize=14) xlabel(r'$x_1$',fontsize=14) plot(x1.list,x0.list) text(x1[i],x0[i],'x',horizontalalignment='center',\ verticalalignment='center') text(x1[j],x0[j],'x',horizontalalignment='center',\ verticalalignment='center') plot([0,x1[i]],[0,x0[i]],'k') plot([0,x1[j]],[0,x0[j]],'g') subplot(1,2,2).set_aspect('equal') title(r'Transformed Unit Ball in $\Re^2$',fontsize=14) ylabel(r'$y_0$',fontsize=14) xlabel(r'$y_1$',fontsize=14) plot(y1.list,y0.list) text(y1[i],y0[i],'x',horizontalalignment='center',\ verticalalignment='center') text(y1[j],y0[j],'x',horizontalalignment='center',\ verticalalignment='center') 
cos0=y1[i]/sigma0;sin0=y0[i]/sigma0;cos1=y1[j]/sigma1;sin1=y0[j]/sigma1 theta=arccos(cos0)*180.0/pi text(y1[i]/2,y0[i]/2,r'$ \sigma_0 \hat u_0$',rotation=theta,fontsize=16,\ horizontalalignment='center',verticalalignment='top') theta=arccos(cos1)*180.0/pi text(y1[j]/2,y0[j]/2,r'$ \sigma_1 \hat u_1$',rotation=theta,fontsize=16,\ horizontalalignment='center',verticalalignment='bottom') plot([0,y1[i]],[0,y0[i]],'k') plot([0,y1[j]],[0,y0[j]],'g') ```
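The search-based estimate can be validated without pyJvsip: for a 2x2 matrix, the exact singular values are the square roots of the eigenvalues of $A^T A$, which have a closed form via the characteristic polynomial. A stdlib-only sketch using the same matrix the code builds ($A = [[1, 2], [0, 2]]$):

```python
import math

A = [[1.0, 2.0], [0.0, 2.0]]

# dense search: max/min of |A x| over unit vectors x on the circle
N = 100000
norms = []
for k in range(N):
    t = 2.0 * math.pi * k / N
    x = (math.cos(t), math.sin(t))
    y0 = A[0][0] * x[0] + A[0][1] * x[1]
    y1 = A[1][0] * x[0] + A[1][1] * x[1]
    norms.append(math.hypot(y0, y1))
sigma0_est, sigma1_est = max(norms), min(norms)

# exact: eigenvalues of A^T A via the 2x2 characteristic polynomial
ata00 = A[0][0]**2 + A[1][0]**2
ata01 = A[0][0]*A[0][1] + A[1][0]*A[1][1]
ata11 = A[0][1]**2 + A[1][1]**2
tr = ata00 + ata11
det = ata00*ata11 - ata01*ata01
disc = math.sqrt(tr*tr - 4.0*det)
sigma0 = math.sqrt((tr + disc) / 2.0)
sigma1 = math.sqrt((tr - disc) / 2.0)

print(sigma0_est, sigma0)  # both ~2.9208
print(sigma1_est, sigma1)  # both ~0.6847
```

Because the norm varies smoothly with the angle, the search error shrinks quadratically in the angular step, so 100000 samples already agree with the closed form to well below 1e-4.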
```
import sys
import os
from IPython.display import Image
os.getcwd()
sys.path.append('Your path to Snowmodel') # insert path to Snowmodel
import numpy as np
from model import *
sys.path
```

#### Initialize model geometry

In set_up_model_geometry you can choose several initial geometries via *geom* that are described in the docstring. 'FieldScale0.5m' is a snowpack with an initial height *Z* of 0.5 m and 101 computational nodes *nz*, which corresponds to a node distance *dz* of 0.005 m. *coord* contains the exact z-coordinates of all computational nodes.

```
geom = 'FieldScale0.5m'
[nz, dz, Z, coord] = set_up_model_geometry(geom)
```

#### Initialize time step and maximum iteration number

*it* is the maximum number of iterations. The first time step *dt* is set to 0.01 s and the time passed *t_passed* is 0 s. *iter_max* is the maximum iteration number.

```
it = 7000
[iter_max, dt, t_passed] = set_up_iter(it)
```

#### Set initial conditions for temperature and snow density

set_initial_conditions defines the initial conditions for temperature *T* and snow density *rho_eff*. *RHO_ini* and *T_ini* can be replaced by any of the options listed in the docstring of *set_initial_conditions*. *RHO_2Layer_Continuous_smooth* reflects a snowpack with two equally thick snow layers, of which the lower one is denser (150 kgm$^{-3}$) and the upper one is lighter (75 kgm$^{-3}$).
Ice volume fraction *phi* is then derived from snow density *rho_eff* with *retrieve_phi_from_rho_eff*.

```
T_ini = 'T_const_263'
RHO_ini = 'RHO_2Layer_Continuous_smooth'
[T, rho_eff] = set_initial_conditions(nz, Z, RHO_ini, T_ini)
phi = retrieve_phi_from_rho_eff (nz, rho_eff)
```

#### Set up matrices to store results for each time step

```
[all_D_eff, all_k_eff, all_FN, all_rhoC_eff, all_rho_v, all_T, all_c, all_phi, all_rho_eff, all_coord, \
 all_v, all_sigma, all_t_passed, all_dz] = set_up_matrices(iter_max, nz)
```

#### Initialize mesh Fourier number *FN* and deposition rate *c*

```
FN = np.zeros(nz)
c = np.zeros(nz)
```

#### Initialize model parameters

*D_eff* is the diffusion coefficient, *k_eff* the thermal conductivity, *rhoC_eff* the heat capacity, *rho_v* the saturation water vapor density, and *rho_v_dT* the temperature derivative of the saturation water vapor density. SWVD stands for saturation water vapor density and is an option to decide on the equation used for its computation. We choose 'Libbrecht'.

```
SWVD = 'Libbrecht'
[D_eff, k_eff, rhoC_eff, rho_v, rho_v_dT] =\
    update_model_parameters(phi, T, nz, coord, SWVD)
```

#### Initialize settling velocity

*v* is the settling velocity, *v_dz* is the z derivative of the velocity, and *sigma* is the vertical stress at each grid node. *SetVel* can be set to 'Y' or 'N' to include or exclude settling, respectively. *v_opt* is the option for the velocity computation; the available options are described in the docstring. We choose *continuous*. *viscosity* is an option for the viscosity computation. We choose *eta_constant_n1*.

\begin{equation}\label{eq:vvsstrainrate}
\nabla v = \dot{\epsilon},
\end{equation}

\begin{equation}
\dot{\epsilon} = \frac{1}{\eta} \sigma^m,
\end{equation}

\begin{equation}
\partial_z v = \frac{1}{\eta} \left( g \int_z^{H(t)} \phi_i\left(\zeta \right) \rho_i \, d \zeta \right)^{m}.
\end{equation} \begin{equation}\label{equ:velocity} v (z) = - \int_0^z \frac{1}{\eta} \left( g \int_{\tilde z}^{H(t)} \phi_i(\zeta)\rho_i d \zeta \right)^{m} \, d\tilde{z}, \end{equation} ``` SetVel = 'Y' v_opt = 'continuous' viscosity = 'eta_constant_n1' [v, v_dz, sigma] =\ settling_vel(T, nz, coord, phi, SetVel, v_opt, viscosity) Eterms = True import matplotlib.pyplot as plt plt.style.use('seaborn-colorblind') plt.style.use('seaborn-whitegrid') ``` Ice mass balance: \begin{equation} \partial_t \phi_i + \nabla \cdot (\mathbf{v}\,\phi_i) = \frac{c}{\rho_i}, \label{equ:icemassbalance} \end{equation} Water vapor transport: \begin{equation} \label{eq:vapormassbalance} \partial_t \left( \rho_v \, (1- \phi_i) \right) - \nabla \cdot \left( D_{eff} \, \nabla \rho_v \right) + \rho_v \, \nabla \cdot \left(v \,\phi_i \right) = -c, \end{equation} ``` %matplotlib notebook fig,(ax1, ax2, ax3, ax4,ax5) = plt.subplots(5,1, figsize=(10,13)) for t in range(iter_max): # if t_passed > 3600*(24*2) : # e.g. 
2 days # to_stop = 5 #print(t) if t%500 == 0: visualize_juypter(fig, ax1, ax2, ax3, ax4,ax5, T, c, phi, rho_v, v, t) [all_D_eff, all_k_eff, all_FN, all_rhoC_eff, all_rho_v, all_T,all_c,all_phi, all_rho_eff, all_coord, all_v, all_sigma, all_t_passed, all_dz] \ = store_results(all_D_eff, all_k_eff, all_FN, all_rhoC_eff, all_rho_v, all_T, all_c,all_phi, all_rho_eff, all_coord, all_v, all_sigma, all_t_passed,all_dz, D_eff, k_eff, FN, phi, rhoC_eff, rho_v, T, c, rho_eff, coord, v, sigma, t, iter_max, nz,dz,t_passed) T_prev = T # Module I solves for temperature - Diffusion (T, a, b) = solve_for_T(T, rho_v_dT, k_eff, D_eff, rhoC_eff, phi, nz, dt, dz, Eterms) # Module II solves for deposition rate - Diffusion c = solve_for_c(T, T_prev, phi, D_eff, rho_v_dT, nz, dt, dz, Eterms) # Module III solves for ice volume fraction and coordinate update - Advection (phi, coord, dz, v_dz, v, sigma) = coupled_update_phi_coord(T, c, dt, nz, phi, v_dz, coord, SetVel, v_opt, viscosity) [D_eff, k_eff, rhoC_eff, rho_v, rho_v_dT] = update_model_parameters(phi, T, nz, coord, SWVD) t_passed = t_total(t_passed, dt) #print(t_passed) ## find iteration number for specific time by placing a breakpoint at line 58: # activate next line if Module I and II are deactivated #dt = 100 # deactivate next line if Module I and/or II are deactivated [dt, FN] = comp_dt(t_passed, dz, a, b, v) def visualize_juypter(fig, ax1, ax2, ax3, ax4,ax5, T, c, phi, rho_v, v, t): ax1.plot(phi, coord, label='phi profile at t = ' + str(t)) ax1.set_xlabel('Liquid fraction [-]') ax1.set_ylabel('Height [m]' ) ax2.plot(T, coord, label='T profile at t = ' + str(t)) ax2.set_xlabel('Temperature [K]') ax2.set_ylabel('Height [m]' ) ax3.plot(c, coord, label='c at t = ' + str(t)) ax3.set_xlabel('Deposition rate') ax3.set_ylabel('Height [m]' ) ax4.plot(rho_v, coord, label ='rho_v t = ' + str(t)) ax4.set_xlabel('Water vapor density') ax4.set_ylabel('Height [m]' ) ax5.plot(v, coord, label='v profile at t = ' + str(t)) 
ax5.set_xlabel('Vertical velocity [m/s]') ax5.set_ylabel('Height [m]' ) fig.canvas.draw() ```
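The loop above lets `comp_dt` adapt the time step from the mesh Fourier number. For an explicit discretization of a diffusion term, the standard constraint is that FN = D·dt/dz² stays at or below 1/2. The sketch below illustrates that bound; the values of `D_eff` and `FN_MAX` are illustrative assumptions, not taken from the model code (only `dz = 0.005 m` matches the 'FieldScale0.5m' geometry described earlier).

```python
# stability bound for an explicit diffusion step: FN = D * dt / dz**2 <= 0.5
D_eff = 2.0e-5   # m^2/s, illustrative effective diffusivity (assumption)
dz = 0.005       # m, node spacing of the 'FieldScale0.5m' geometry
FN_MAX = 0.5     # illustrative stability limit for an explicit scheme

dt_max = FN_MAX * dz**2 / D_eff
print(dt_max)    # largest stable explicit step for these values, in seconds
```

Halving the node spacing quarters the allowed time step, which is why adaptive time stepping is worthwhile as the mesh deforms.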
# Large, three-generation CEPH families reveal post-zygotic mosaicism and variability in germline mutation accumulation ### Thomas A. Sasani, Brent S. Pedersen, Ziyue Gao, Lisa M. Baird, Molly Przeworski, Lynn B. Jorde, Aaron R. Quinlan ### Read in files containing DNMs identified in the second and third generations, as well as putative gonosomal and post-PGCS mosaic DNMs. ``` mosaic = read.csv("../data/post-pgcs.dnms.summary.csv") gonosomal = read.csv("../data/gonosomal.dnms.summary.csv") gen3 = read.csv("../data/third_gen.dnms.summary.csv") gen2 = read.csv("../data/second_gen.dnms.summary.csv") ``` ### Figure 2. Effects of parental age and sex on autosomal DNM counts and mutation types in the second generation > A) Numbers of phased paternal and maternal de novo variants as a function of parental age at birth. ``` plot_dnms <- function(df, adjust_ar=FALSE, alpha=1.) { library(ggplot2) library(cowplot) library(MASS) # fit the Poisson regressions for moms and dads separately d = glm(dad_dnms ~ dad_age, data=df, family=poisson(link="identity")) m = glm(mom_dnms ~ mom_age, data=df, family=poisson(link="identity")) # use the fitted GLM to predict the response variable d_pred = predict(d, type='response', se.fit=TRUE) m_pred = predict(m, type='response', se.fit=TRUE) # add CI intervals by calculating 1.96 standard deviations # from the mean in either direction df$dad_ci_lo = d_pred$fit - 1.96 * d_pred$se.fit df$dad_ci_hi = d_pred$fit + 1.96 * d_pred$se.fit df$mom_ci_lo = m_pred$fit - 1.96 * m_pred$se.fit df$mom_ci_hi = m_pred$fit + 1.96 * m_pred$se.fit # get min and max X and Y values for plot limits min_age = min(c(min(df$dad_age), min(df$mom_age))) max_age = max(c(max(df$dad_age), max(df$mom_age))) min_dnm = min(c(min(df$dad_dnms), min(df$mom_dnms))) max_dnm = max(c(max(df$dad_dnms), max(df$mom_dnms))) # set the upper Y limit if (max_dnm < 15) { max_dnm = max_dnm } else { max_dnm = max_dnm + 15 } # adjust the aspect ratio if needed. 
# these adjustments are specific to plotting either second-generation # DNMs or gonosomal DNMs, and are for aesthetic purposes only if (adjust_ar) { adjust = (0.075 * min_age/min_dnm) } else { adjust = 2.25 } p <- ggplot(df) + # plot the raw data geom_point(aes(x=dad_age, y=dad_dnms), size=3.5, pch=21, fill='#66c2a5', col='white', stroke=0.25) + geom_point(aes(x=mom_age, y=mom_dnms), size=3.5, pch=21, fill='#fc8d62', col='white', stroke=0.25) + # plot the predictions from the fitted GLM geom_line(data=cbind(df, pred_d=d_pred$fit), aes(x=dad_age, y=pred_d), col='#66c2a5') + geom_line(data=cbind(df, pred_m=m_pred$fit), aes(x=mom_age, y=pred_m), col='#fc8d62') + # plot confidence bands geom_ribbon(aes(x=dad_age, ymin=dad_ci_lo, ymax=dad_ci_hi), alpha=alpha, fill='#66c2a5') + geom_ribbon(aes(x=mom_age, ymin=mom_ci_lo, ymax=mom_ci_hi), alpha=alpha, fill='#fc8d62') + # aesthetics for the plot xlab('Parental age at birth') + ylab('Number of DNMs') + theme(text = element_text(size=16)) + theme(axis.text.x = element_text(size = 16)) + theme(axis.text.y = element_text(size = 16)) + xlim(c(min_age, max_age)) + ylim(c(0, max_dnm)) + coord_fixed(adjust) p } plot_dnms(gen2, # file of DNMs adjust_ar=T, # adjust the aspect ratio (for aesthetics only) alpha=0.25) # transparency value for the `geom_ribbon` ``` #### Get the slopes of each regression, as well as 95% CIs ``` get_model_params <- function(df) { d = glm(dad_dnms ~ dad_age, data=df, family=poisson(link="identity")) m = glm(mom_dnms ~ mom_age, data=df, family=poisson(link="identity")) # summaries of each model print(summary(d)) print(summary(m)) # 95% confidence intervals print(confint(d), level=0.95) print(confint(m), level=0.95) } get_model_params(gen2) ``` ### Figure 3. Parental age effects on autosomal germline mutation counts vary significantly among CEPH/Utah families > C: Total number of autosomal DNMs vs. 
paternal age at birth for each of the 40 CEPH families (i.e., combinations of second-generation parents and their third-generation children). > D: The slope of each family's Poisson regression +/- 95% confidence intervals, sorted in ascending order from top to bottom. ``` library(ggplot2) library(cowplot) library(MASS) # create a new dataframe, which consists of the # `gen3` dataframe plus four new columns. these columns # contain the slope, intercept, and 95% CI of the slope # estimate (in both directions) for the paternal age # effect in that family new_df = data.frame() for (sp in split(gen3, as.factor(gen3$family_id))) { m = glm(autosomal_dnms ~ dad_age, data=sp, family=poisson(link="identity")) s = summary(m) ci = confint(m, level=0.95) sp$slope_ci_lo = ci[[2]] sp$slope_ci_hi = ci[[4]] sp$slope = s$coefficients[[2]] sp$intercept = s$coefficients[[1]] new_df = rbind(new_df, sp) } # sort the dataframe by slope sorted_df = new_df[order(new_df$slope),] # set overall colorscheme, and color # the two most extreme families sorted_df$pt_color = "azure4" sorted_df$ci_color = "grey" sorted_df$pt_color[sorted_df$family_id == "24_C"] = "dodgerblue" sorted_df$ci_color[sorted_df$family_id == "24_C"] = "dodgerblue" sorted_df$pt_color[sorted_df$family_id == "16"] = "firebrick" sorted_df$ci_color[sorted_df$family_id == "16"] = "firebrick" # fit the poisson regression (with interaction) to the full # dataset of third-generation DNMs m = glm(autosomal_dnms ~ dad_age * family_id, data=sorted_df, family=poisson(link="identity")) # get model predictions m_predict = predict(m, sorted_df, type='response', se.fit=TRUE) # add columns for regression confidence interval bands sorted_df$ci_lo = m_predict$fit - 1.96 * m_predict$se.fit sorted_df$ci_hi = m_predict$fit + 1.96 * m_predict$se.fit # add a column to the dataframe that assigns a specific level # to each family so that we can sort the families in our plot sorted_df$facet_order = factor(sorted_df$family_id, levels = 
unique(sorted_df$family_id)) # plot `dad_age` vs. `autosomal_dnms` for each family, and # add regression lines + confidence bands p1 <- ggplot(sorted_df, aes(x = dad_age, y = autosomal_dnms)) + facet_wrap(~facet_order, nrow=10) + # plot 95% CI bands, regression lines, and raw data points geom_ribbon(data = sorted_df, aes(x=dad_age, ymin=ci_lo, ymax=ci_hi), fill=sorted_df$ci_color, alpha=1) + geom_line(data=cbind(sorted_df, p=m_predict$fit), aes(x=dad_age, y=p), col="white", size=0.1) + geom_point(pch=21, fill=sorted_df$pt_color, size=2, col='white', stroke=0.15) + # plot aesthetics, labels, font sizes xlab("Paternal age at birth") + ylab("Number of autosomal DNMs") + ylim(0,135) + xlim(15,50) + theme(text = element_text(size = 12)) + theme(strip.text = element_blank()) + theme(panel.spacing.y = unit(-5, "pt")) + theme(panel.spacing.x = unit(-5, "pt")) + theme(axis.text.x = element_text(size = 9)) + theme(axis.text.y = element_text(size = 9)) + theme(axis.line = element_line(colour = 'black', size = 0.5)) + theme(axis.ticks = element_line(colour = "black", size = 0.5)) + # adjust the aspect ratio of the plot (also for aesthetics) coord_fixed(ratio=0.15) # remove duplicate family IDs from the dataframe so that we # only plot one point per family in the next plot sorted_df = sorted_df[!duplicated(sorted_df[,c('family_id')]),] sorted_df$order = rev(c(1:nrow(sorted_df))) # calculate the overall paternal age effect ovl_age_effect = glm(autosomal_dnms ~ dad_age, data=gen3, family=poisson(link="identity")) ovl_slope = summary(ovl_age_effect)$coefficients[[2]] ci = confint(ovl_age_effect, level=0.95) ovl_slope_ci_lo = ci[[2]] ovl_slope_ci_hi = ci[[4]] print ("Overall slope estimate in the F2") print (ovl_slope) print (ovl_slope_ci_lo) print (ovl_slope_ci_hi) # plot the slope estimate (+/- 95% CI) for each family separately p2 <- ggplot(sorted_df) + # plot the overall slope, estimated using all samples together, regardless of family ID geom_vline(xintercept=ovl_slope, color 
= "black", linetype="dashed") + # plot the slope estimate for each family geom_point(aes(y=order, x=slope), col=sorted_df$ci_color) + scale_y_continuous(breaks=sorted_df$order, labels=sorted_df$family_id) + # plot 95% CI for each family geom_errorbarh(aes(y=order, xmax = slope_ci_hi, xmin = slope_ci_lo), col=sorted_df$ci_color) + # plot aesthetics, labels, font sizes theme(axis.text.x = element_text(size = 10)) + theme(axis.text.y = element_text(size = 9)) + theme(text = element_text(size = 12)) + ylab('Family ID') + xlab('Additional DNMs per year of \npaternal age (slope) +/- 95% CI') + background_grid(major="x", minor="none") + coord_fixed(0.4) p1 p2 ``` ### Figure 4: Identification of post-PGCS mosaicism in the second generation > C) Mosaic number as a function of paternal age at birth. ``` mosaic_number_vs_age <- function(mosaic, alpha=1.) { library(ggplot2) library(cowplot) mosaic_fit = glm(snv_dnms ~ dad_age, data=mosaic, family=poisson(link="identity")) mosaic_pred = predict(mosaic_fit, mosaic, type="response", se.fit=TRUE) # add columns for regression confidence interval bands mosaic$ci_lo = mosaic_pred$fit - 1.96 * mosaic_pred$se.fit mosaic$ci_hi = mosaic_pred$fit + 1.96 * mosaic_pred$se.fit p <- ggplot(mosaic, aes(x = dad_age, y = snv_dnms)) + geom_ribbon(data = mosaic, aes(x=dad_age, ymin=ci_lo, ymax=ci_hi), alpha=alpha, fill="firebrick") + geom_line(data=cbind(mosaic, p=mosaic_pred$fit), aes(x=dad_age, y=p), col="firebrick", size=1) + geom_point(fill="black", color='white', pch=21, size=3.5, stroke=0.25) + # plot aesthetics, labels, font sizes xlab("Paternal age at birth") + ylab("Number of sample's DNMs that is shared with siblings") + theme(text = element_text(size=14)) + theme(axis.text.x = element_text(size = 14)) + theme(axis.text.y = element_text(size = 14)) + coord_fixed(3) print(summary(mosaic_fit)) p } mosaic_number_vs_age(mosaic, # combined dataframe containing both post-PGCS and F2 germline DNMs alpha=0.25) # alpha value for `geom_ribbon` ``` 
> 4D: Comparison of germline and post-PGCS DNM age effects ``` germline_vs_pgcs <- function(gen3, mosaic) { library(ggplot2) library(cowplot) # fit the Poisson regression for post-PGCS DNMs mosaic_fit = glm(snv_dnms ~ dad_age, family=poisson(link="identity"), data=mosaic) mosaic_pred = predict(mosaic_fit, mosaic, type='response', se.fit=TRUE) mosaic$mosaic_ci_lo = mosaic_pred$fit - 1.96 * mosaic_pred$se.fit mosaic$mosaic_ci_hi = mosaic_pred$fit + 1.96 * mosaic_pred$se.fit # fit the Poisson regression for germline DNMs germline_fit = glm(snv_dnms ~ dad_age, family=poisson(link="identity"), data=gen3) germline_pred = predict(germline_fit, gen3, type='response', se.fit=TRUE) # add columns for regression confidence interval bands gen3$germline_ci_lo = germline_pred$fit - 1.96 * germline_pred$se.fit gen3$germline_ci_hi = germline_pred$fit + 1.96 * germline_pred$se.fit p <- ggplot() + geom_line(data=cbind(gen3, p=germline_pred$fit), aes(x=dad_age, y=p), col="dodgerblue") + geom_ribbon(data=gen3, aes(x=dad_age, ymin=germline_ci_lo, ymax=germline_ci_hi), fill="dodgerblue", alpha=0.25) + geom_line(data=cbind(mosaic, p=mosaic_pred$fit), aes(x=dad_age, y=p), col="firebrick") + geom_ribbon(data=mosaic, aes(x=dad_age, ymin=mosaic_ci_lo, ymax=mosaic_ci_hi), fill="firebrick", alpha=0.25) + # plot aesthetics, labels, font sizes xlab("Paternal age at birth") + ylab("Number of DNMs") + theme(text = element_text(size=14)) + theme(axis.text.x = element_text(size = 14)) + theme(axis.text.y = element_text(size = 14)) + xlim(18,50) + coord_fixed(0.1) p } germline_vs_pgcs(gen3, mosaic) ``` > 4E: Fraction of post-PGCS DNMs as a function of paternal age ``` mosaic_fraction_vs_age <- function(gen3, mosaic) { library(ggplot2) library(cowplot) # merge the third-generation and post-PGCS DNM dataframes by sample ID, so # that we can access metadata about each sample from both # dataframes at the same time combined = merge(gen3, mosaic, by="sample_id") # make a column representing the fraction of 
a sample's DNMs (SNVs only, # since we only looked at post-PGCS SNVs) that are shared with a sibling. # the `.x` and `.y` suffixes on column IDs simply refer to the dataframe that # the column came from. since both dataframes have the same column IDs, `snv_dnms.x` # represents the column from the third-generation dataframe, and `snv_dnms.y` represents the # column from the post-PGCS mosaic dataframe combined$mosaic_fraction = combined$snv_dnms.y / (combined$snv_dnms.y + combined$snv_dnms.x) # add a column representing the total number of DNMs (germline SNVs + # post-PGCS shared SNVs) in the samples combined$total_dnms = combined$snv_dnms.x + combined$snv_dnms.y # fit a model predicting the log of the mosaic fraction vs. dad age mosaic_fraction_fit = lm(log(mosaic_fraction) ~ dad_age.x, data=combined) print(summary(mosaic_fraction_fit)) p <- ggplot(combined, aes(x=dad_age.x, y=mosaic_fraction)) + geom_point(pch=21, size=2, color="white", fill="black", stroke=0.25) + geom_line(data=cbind(combined, p_frac=exp(predict(mosaic_fraction_fit, combined, type="response"))), aes(x=dad_age.x, y=p_frac), col="firebrick", size=1) + # plot aesthetics, labels, font sizes xlab("Paternal age at birth") + ylab("Fraction of sample's DNMs that is\n shared with siblings") + theme(text = element_text(size=14)) + theme(axis.text.x = element_text(size = 14)) + theme(axis.text.y = element_text(size = 14)) + xlim(18,50) + coord_fixed(80) p } mosaic_fraction_vs_age(gen3, mosaic) ``` #### Determine whether the mosaic fraction is dependent on the number of siblings in a family ``` combined = merge(gen3, mosaic, by="sample_id") # make a column representing the fraction of a sample's DNMs (SNVs only, # since we only looked at post-PGCS SNVs) that are shared with a sibling. # the `.x` and `.y` suffixes on column IDs simply refer to the dataframe that # the column came from. 
since both dataframes have the same column IDs, `snv_dnms.x` # represents the column from the third-generation dataframe, and `snv_dnms.y` represents the # column from the post-PGCS mosaic dataframe combined$mosaic_fraction = combined$snv_dnms.y / (combined$snv_dnms.y + combined$snv_dnms.x) # fit a model predicting the mosaic fraction as a function of age m = lm(mosaic_fraction ~ n_sibs.x, data=combined) summary(m) cor.test(combined$mosaic_fraction, combined$n_sibs.x) ``` #### Determine whether the mosaic number is dependent on the number of siblings in a family ``` m = lm(snv_dnms.y ~ n_sibs.x, data=combined) summary(m) cor.test(combined$snv_dnms.y, combined$n_sibs.x) ``` ### Figure 5: Identification of gonosomal mutations in the second generation > D: Numbers of phased paternal and maternal gonosomal variants as a function of parental age at birth ``` plot_dnms(gonosomal, # file of DNMs adjust_ar=F, # adjust the aspect ratio (for aesthetics only) alpha=0.25) # transparency value for the `geom_ribbon` get_model_params(gonosomal) ``` ### Supplementary Figure 3: Contribution of maternal and paternal age to de novo mutation rates > A) Contributions of maternal and paternal age to mutation rates in second-generation children ``` age_by_age <- function(df) { library(ggplot2) library(cowplot) # calculate the autosomal mutation rate df$rate = df$snv_autosomal_dnms / df$autosomal_callable_fraction / 2. p <- ggplot(df, aes(dad_age, mom_age)) + geom_point(aes(col = rate), size=2) + geom_abline(slope=1, intercept=0) + scale_colour_gradient(low = "blue", high = "red") + xlab("Paternal age at birth") + ylab("Maternal age at birth") + theme(text = element_text(size=16)) + theme(axis.text.x = element_text(size = 16)) + theme(axis.text.y = element_text(size = 16)) p } age_by_age(gen2) ``` > B) Contributions of maternal and paternal age to mutation rates in third-generation children ``` age_by_age(gen3) ```
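Every regression figure above builds its 95% confidence band the same way: predicted fit ± 1.96 × standard error of the prediction. As a minimal illustration of that pattern outside R, here is a sketch in Python with numpy on simulated data; it uses an ordinary least-squares fit rather than the identity-link Poisson GLM the notebook uses, and the `ci_band` helper and the fake age/count values are assumptions for the example.

```python
import numpy as np

def ci_band(x, y, z=1.96):
    """Fit y ~ x by ordinary least squares and return the fitted line
    plus a 95% confidence band for the mean prediction (fit +/- z * SE)."""
    X = np.column_stack([np.ones_like(x), x])        # design matrix [1, x]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)     # [intercept, slope]
    fit = X @ beta
    resid = y - fit
    sigma2 = resid @ resid / (len(x) - 2)            # residual variance
    XtX_inv = np.linalg.inv(X.T @ X)
    # pointwise SE of the mean prediction: sqrt(x_i' (X'X)^-1 x_i * sigma^2)
    se = np.sqrt(np.einsum("ij,jk,ik->i", X, XtX_inv, X) * sigma2)
    return fit, fit - z * se, fit + z * se

# simulated "paternal age" and "DNM count" values, for illustration only
x = np.arange(20, 40, dtype=float)
y = 1.5 * x + 10 + np.random.default_rng(0).normal(0, 3, x.size)
fit, lo, hi = ci_band(x, y)
```

The `lo`/`hi` arrays play the role of the `ci_lo`/`ci_hi` columns fed to `geom_ribbon` in the R code.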
<a href="https://colab.research.google.com/github/diogojorgebasso/bootcamp-python-igti/blob/main/modulo1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# Precious notes from Module I - IGTI course. Python Fundamentals

Import files in Google Colab

```
from google.colab import files
#uploaded = files.upload()

arquivo = open('sample_data/README.md', "r")
for linhas in arquivo:
    print(linhas)
arquivo.close()

arq = open('testing', 'r')
linha = arq.readline().rstrip()
contador = 0
while linha != 'Programming':
    contador += 1
    linha = arq.readline().rstrip()  # advance to the next line so the loop can terminate
print(contador)
arq.close()
```

## Testing inputs, data types, and file creation

```
arq = str(input('Please type the name of the file to be created: '))
nome = str(input('Type your name here: '))
idade = int(input('Type your age here: '))
cidade = str(input('Type your city: '))
arquivo_saida = open(arq, 'w')
arquivo_saida.write(f"Hi, I am {nome}, I am {idade} years old. I proudly live in {cidade}\n Made with love by Diogo's Programming")
arquivo_saida.close()
```

### Concatenation and addition depend on the data type

```
lista1 = [1, 2, 'IGTI']
lista2 = [3, 4, 'Harvard']
lista = lista1 + lista2
print(lista)

idade = int(input('Type it here'))
nova = idade + 1
print(nova)
```

## Studying repetition and loops

For

```
for x in range(1, 10, 2):
    print(x)

frutas = ['maçã', 'banana', 'uva', 'goiaba']
frutas.append('Diogo')
for x in frutas:
    if x == 'uva':
        break
    print(x)
```

While

```
n = 5
while n > 0:
    n -= 1
    print(n)
```

### <s>Imperative</s> semantic programming

```
lista = [int(i) for i in input('type the list of values here').split()]
print('The list analyzed by the program was: ', lista)
for x in lista:
    for y in lista:
        try:
            if x % y == 0:
                print(f'{x} is divisible by {y}')
        except ZeroDivisionError:
            continue
```

## Functions

```
# *args --> like the spread operator
def mostra_nome(*nomes):
    for nome in nomes:
        print(f'Hello, {nome}')

def mostra_frase(*frases):
    for msg in frases:
        print(msg, end=" ")

mostra_nome('Diogo', 'Camila', 'Ana')
mostra_frase('TOP', 'IGTI ROCKS', "Let's improve education in the country.")

def text_inverted(string):
    char = ""
    for letter in string:
        char = letter + char
    return char

texto = str(input())
while texto.strip() != '':
    print(f'the reverse of {texto} is {text_inverted(texto)}')
    choice = str(input('Would you like to type more? (y/n) '))
    if choice[0] == 'y':
        texto = str(input('Type here...'))
    else:
        break
print('Thank you very much for using this program.')
```

### Recursive

```
def fatorial(n):
    if n == 0:
        return 1
    return n * fatorial(n - 1)

print(fatorial(4))
```

## Reading XML in Python

```
from xml.dom import minidom

documento = minidom.parse('teste.xml')
print(documento.documentElement.tagName)  # root node of the XML file

tag_filho = documento.getElementsByTagName('Test')
principal = documento.getElementsByTagName('Input')

# accessing an attribute on the child element:
for item2 in tag_filho:
    print(item2.attributes['TestId'].value)

# accessing the elements of a file:
for item in principal:
    print(item.firstChild.data)
```

For more, see: https://docs.python.org/3/library/xml.dom.minidom.html#module-xml.dom.minidom

## Classes

### Below, we create a Python class modeling logistics

```
class Logistica:
    def __init__(self, tipo_veiculo='caminhão', cor='prata', numero_portas='4'):
        self.tipo_veiculo = tipo_veiculo
        self.cor = cor
        self.numero_portas = numero_portas

    def get_tipo_veiculo(self):
        return self.tipo_veiculo

    def get_numero_portas(self):
        return self.numero_portas

    def get_cor(self):
        return self.cor

    def set_cor(self, nova_cor):
        self.cor = nova_cor
        return self.cor

class carro(Logistica):
    '''this is a child class of Logistica'''
    def __init__(self, modelo='Astra'):
        super().__init__()  # initialize the parent attributes as well
        self.modelo = modelo

    def get_modelo(self):
        return self.modelo

entrada_1 = Logistica('carro', 'branco', '6')
print(entrada_1.get_numero_portas())

carro_1 = carro()
print(carro_1.get_tipo_veiculo())
```

### Root class:

```
class carro:
    def __init__(self, numero_portas, preco, peso):
        self.numero_portas = numero_portas
        self.preco = preco
        self.peso = peso
        print('constructor started')

    def get_portas(self):
        return self.numero_portas

    def set_portas(self, novo_num_porta):
        self.numero_portas = novo_num_porta

carro1 = carro(4, 50000, 4000)
carro2 = carro(4, 5000, 45)
print(carro2.get_portas())
carro1.set_portas(1)
print(carro1.get_portas())

class Retangulo:
    def __init__(self, comprimento, largura):
        self.comprimento = comprimento
        self.largura = largura

    def area(self):
        return self.comprimento * self.largura

class Quadrado(Retangulo):
    def __init__(self, comprimento):
        super().__init__(comprimento, comprimento)

quadrado_1 = Quadrado(6)
print(quadrado_1.area())

class Cubo(Quadrado):
    def area_superficie(self):
        area_total_superficie = super().area()
        return area_total_superficie * 6

    def volume(self):
        volume = super().area()
        return volume * self.comprimento
```

### Checking the parent class

```
cubo_1 = Cubo(6)
print(cubo_1.volume())

# checking for subclass and instance relationships, respectively:
print(issubclass(Cubo, Retangulo))
print(isinstance(cubo_1, Retangulo))
```

## Lambda functions

```
f = lambda x, y: x*y if x*y < 100 else x+y
f(4, 50)
```

## Extra: Module I *challenges*

```
def funcao_1(num1, num2):
    resultado = num1 * num2
    if resultado % 2 == 0:
        return resultado
    else:
        return num1 + num2

funcao_1(10, 20)

def funcao_2(num):
    numero_anterior = 0
    for i in range(num):
        resultado = numero_anterior + i
        print('Number A', i, 'Number B', numero_anterior, 'result: ', resultado)
        numero_anterior = i

funcao_2(10)

def funcao_3(s):
    for i in range(1, len(s) - 1, 2):
        print("index[", i, "]", s[i])

funcao_3('diegão')

def funcao_4(lista_numerica):
    print('value passed...', lista_numerica)
    a = lista_numerica[0]
    b = lista_numerica[-1]
    if a != b:
        return True
    else:
        return False

funcao_4([10, 20, 30, 40, 10])

class Classe_1:
    def funcao(self, string):
        dicionario = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}
        valor = 0
        for i in range(len(string)):
            if i > 0 and dicionario[string[i]] > dicionario[string[i-1]]:
                valor += dicionario[string[i]] - 2 * dicionario[string[i-1]]
            else:
                valor += dicionario[string[i]]
        return valor

teste = Classe_1()
teste.funcao('MC')

class A:
    def __init__(self):
        self.calcI(30)
        print('i from class A', self.i)

    def calcI(self, i):
        self.i = i * 2

class B(A):
    def __init__(self):
        super().__init__()

    def calcI(self, i):
        self.i = i * 3

b = B()

class Classe_2():
    def __init__(self, l, w):
        self.a = l
        self.b = w

    def metodo_1(self):
        return self.a * self.b

objeto_1 = Classe_2(12, 10)
objeto_1.metodo_1()
```
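The Roman numeral challenge above relies on one subtractive-pair rule. A standalone re-implementation of the same logic, written here as a plain function for illustration (the `roman_to_int` name is mine, not part of the course notes):

```python
def roman_to_int(s):
    # when a symbol is larger than its predecessor, add its value minus
    # twice the predecessor (which was already added on the previous step)
    values = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}
    total = 0
    for i, ch in enumerate(s):
        if i > 0 and values[ch] > values[s[i - 1]]:
            total += values[ch] - 2 * values[s[i - 1]]
        else:
            total += values[ch]
    return total

print(roman_to_int('MC'))     # → 1100
print(roman_to_int('MCMXC'))  # → 1990
```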
# 2A.eco - Putting sessions 1 and 2 into practice - Using pandas and visualization - correction

Correction of an exercise on data manipulation.

```
from jyquickhelper import add_notebook_menu
add_notebook_menu()

from pyensae.datasource import download_data
files = download_data("td2a_eco_exercices_de_manipulation_de_donnees.zip",
                      url="https://github.com/sdpython/ensae_teaching_cs/raw/master/_doc/notebooks/td2a_eco/data/")
files
```

## Exercise 1 - working with the datasets

Duration: 10 minutes

1. Import the dataset on the players of the 2014 World Cup
2. Determine the number of players on each team and build a dictionary {team: number of players}
3. Determine which 3 players covered the most distance. Is there a selection bias?
4. Among the players in the top decile of the fastest players, who spent most of their time running without the ball?

# Importing the file

```
import pandas as pd
data_players = pd.read_excel("Players_WC2014.xlsx", engine='openpyxl')
data_players.head()
```

## Number of players per team

```
data_players.groupby(['Team']).size().to_dict()
```

## Players who covered the most distance

```
## which players covered the most distance?
data_players['Distance Covered'] = data_players['Distance Covered'].str.replace('km', '')
data_players['Distance Covered'] = pd.to_numeric(data_players['Distance Covered'])
data_players.sort_values(['Distance Covered'], ascending=False).head(n=3)
```

There is a clear selection effect on this variable: the players whose teams went furthest in the competition are the ones who covered the most distance.

## Who was the most efficient?

We need to make the Top Speed variable numeric and create a new variable with the percentage of ball possession.

```
## Who was the fastest?
data_players['Top Speed'] = data_players['Top Speed'].str.replace('km/h', '')
data_players['Top Speed'] = pd.to_numeric(data_players['Top Speed'])
data_players.sort_values(['Top Speed'], ascending=False).head(n=3)

## Among the decile of fastest players, who spent most of their time running without the ball?
data_players['Distance Covered In Possession'] = data_players['Distance Covered In Possession'].str.replace('km', '')
data_players['Distance Covered In Possession'] = pd.to_numeric(data_players['Distance Covered In Possession'])
data_players['Share of Possession'] = data_players['Distance Covered In Possession']/data_players['Distance Covered']
data_players[data_players['Top Speed'] > data_players['Top Speed'].quantile(.90)].sort_values(['Share of Possession'], ascending=False).head()
```
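The unit-stripping and quantile-filtering pattern used in this correction generalizes to any column stored as text with a unit suffix. A minimal sketch on an invented dataframe (the `player` names and distances here are made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({
    "player": ["A", "B", "C", "D"],
    "Distance Covered": ["84km", "91km", "60km", "75km"],
})

# strip the unit suffix, then convert the remaining text to a numeric dtype
df["Distance Covered"] = pd.to_numeric(
    df["Distance Covered"].str.replace("km", "", regex=False))

# keep only the players above the median distance covered
top = df[df["Distance Covered"] > df["Distance Covered"].quantile(0.50)]
print(top.sort_values("Distance Covered", ascending=False))
```

Replacing `0.50` with `0.90` gives the top-decile filter used in the exercise.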
# Visualizing COVID-19 Data at the State and County Levels in Python ## Part I: Downloading and Organizing Data From casual observation, I surmise that the widespread stay-at-home orders initiated in March 2020 have left data scientists with a bit of extra time as, with each passing week, I find new sources for COVID-19 data and data visualizations. I have written before about the [proper](https://www.ndsu.edu/centers/pcpe/news/detail/58432/) and [improper](https://www.aier.org/article/visualizations-are-powerful-but-often-misleading/) uses of data. In this post, my purpose is pedagogical. I intend to teach the reader how to download and organize COVID-19 data and how to honestly and meaningfully visualize this data. First, a confession. I am a self-taught programmer. Like many of us, much of what I write can be described as *spaghetti code*, at least initially. I don't thoroughly plan a program before I write it. I roll up my sleeves and get to coding. This has its benefits. Rarely is the perfect the enemy of the good when I first construct a program. Since I'm not writing my code for commercial use, I am able to efficiently produce results. One benefit of building code on the fly is that you may not know at the start of a project what sorts of qualities will be useful to include. As with all creative processes, _discovery_ is often a process that is dependent upon context that emerges from the process of development. _Spaghetti code_ can also be swiftly repurposed, usually by creation of a new copy of the script and making some marginal adjustments. However, the more spaghetti code you write for a particular project and the more kinds of output demanded by the project, the greater the difficulty of maintaining quality code that is easy to read. At some point, you've got to standardize components of your code. 
When I find myself returning time and again to a particular template, I eventually consolidate the scripts that I have developed, modularizing the code to minimize the cost of editing it. Standardizing different blocks of code by creating and implementing functions allows the script to produce a variety of outputs. The script in this example is in the middle stages of this process of development, revision, and standardization. Much of the information concerning how the program works is included as notes in the code. These notes are preceded by the _#_ symbol.

### Downloading the COVID-19 data

We will use two datasets. First, we will import a shapefile to use with *geopandas*, which we will later use to generate a county level map that tracks COVID-19. The shapefile is provided for you in the Github folder housing this post. You can also download shapefiles from the U.S. Census [website](https://www.census.gov/geographies/mapping-files/time-series/geo/carto-boundary-file.html). We will create maps in [Part III](https://github.com/jlcatonjr/Learn-Python-for-Stats-and-Econ/blob/master/Projects/COVID19/Visualizing%20COVID-19%20Data%20at%20the%20State%20and%20County%20Levels%20in%20Python%20-%20Part%20III.ipynb).

We will download Johns Hopkins's COVID-19 data from the Associated Press's [account](https://data.world/associatedpress/johns-hopkins-coronavirus-case-tracker) at data.world using their [Python module](https://data.world/integrations/python). Follow [these instructions](https://github.com/datadotworld/data.world-py/) to install the *datadotworld* module and access their API.

Once we have installed the _datadotworld_ module, we can get to work. First, we will need to import our modules. While not all of these modules will be used in _Part I_ of this series, it will be convenient to import them now so that we can use them later.
``` #createCOVID19StateAndCountyVisualization.py import geopandas import numpy as np import pandas as pd # We won't actually use datetime directly. Since the dataframe index will use # data formatted as datetime64, I import it in case I need to use the datetime # module to troubleshoot later import datetime # you could technically call many of the submodules from matplotlib using mpl., #but for convenience we explicitly import submodules. These will be used for # constructing visualizations import matplotlib as mpl import matplotlib.pyplot as plt from matplotlib.animation import FuncAnimation from mpl_toolkits.axes_grid1 import make_axes_locatable from matplotlib.backends.backend_pdf import PdfPages import matplotlib.ticker as mtick import datadotworld as dw ``` Now we are ready to import the shapefile and download the COVID-19 data. Let's start by creating a function to import the shapefile. ``` def import_geo_data(filename, index_col = "Date", FIPS_name = "FIPS"): # import county level shapefile map_data = geopandas.read_file(filename = filename, index_col = index_col) # rename fips code to match variable name in COVID-19 data map_data.rename(columns={"State":"state"}, inplace = True) # Combine statefips and county fips to create a single fips value # that identifies each particular county without referencing the # state separately map_data[FIPS_name] = map_data["STATEFP"].astype(str) + \ map_data["COUNTYFP"].astype(str) map_data[FIPS_name] = map_data[FIPS_name].astype(np.int64) # set FIPS as index map_data.set_index(FIPS_name, inplace=True) return map_data ``` Next we create a function to download the COVID-19 data using the datadotworld API. ``` def import_covid_data(filename, FIPS_name): # Load COVID19 county data using datadotworld API # Data provided by Johns Hopkins, file provided by Associated Press dataset = dw.load_dataset("associatedpress/johns-hopkins-coronavirus-case-tracker", auto_update = True) # the dataset includes multiple dataframes. 
# We will only use #2
    covid_data = dataset.dataframes["2_cases_and_deaths_by_county_timeseries"]
    # Include only observations for political entities within states
    # (i.e., not territories, etc.); drop any nan fips values with covid_data[FIPS_name] > 0
    covid_data = covid_data[covid_data[FIPS_name] < 57000]
    covid_data = covid_data[covid_data[FIPS_name] > 0]
    # Transform FIPS codes into integers (not floats)
    covid_data[FIPS_name] = covid_data[FIPS_name].astype(int)
    covid_data.set_index([FIPS_name, "date"], inplace = True)
    # Prepare a column for state abbreviations. We will draw these from a
    # dictionary created in the next step.
    covid_data["state_abr"] = ""
    for state, abr in state_dict.items():
        covid_data.loc[covid_data["state"] == state, "state_abr"] = abr
    # Create "Location", which concatenates county name and state abbreviation
    covid_data["Location"] = covid_data["location_name"] + ", " + \
        covid_data["state_abr"]
    return covid_data
```

Finally, create the script that will employ the functions created above.

```
# I include this dictionary to conveniently cross reference state names and
# state abbreviations.
state_dict = {
    'Alabama': 'AL', 'Alaska': 'AK', 'Arizona': 'AZ', 'Arkansas': 'AR',
    'California': 'CA', 'Colorado': 'CO', 'Connecticut': 'CT', 'Delaware': 'DE',
    'District of Columbia': 'DC', 'Florida': 'FL', 'Georgia': 'GA', 'Hawaii': 'HI',
    'Idaho': 'ID', 'Illinois': 'IL', 'Indiana': 'IN', 'Iowa': 'IA', 'Kansas': 'KS',
    'Kentucky': 'KY', 'Louisiana': 'LA', 'Maine': 'ME', 'Maryland': 'MD',
    'Massachusetts': 'MA', 'Michigan': 'MI', 'Minnesota': 'MN', 'Mississippi': 'MS',
    'Missouri': 'MO', 'Montana': 'MT', 'Nebraska': 'NE', 'Nevada': 'NV',
    'New Hampshire': 'NH', 'New Jersey': 'NJ', 'New Mexico': 'NM', 'New York': 'NY',
    'North Carolina': 'NC', 'North Dakota': 'ND', 'Ohio': 'OH', 'Oklahoma': 'OK',
    'Oregon': 'OR', 'Pennsylvania': 'PA', 'Rhode Island': 'RI', 'South Carolina': 'SC',
    'South Dakota': 'SD', 'Tennessee': 'TN', 'Texas': 'TX', 'Utah': 'UT',
    'Vermont': 'VT', 'Virginia': 'VA', 'Washington': 'WA', 'West Virginia': 'WV',
    'Wisconsin': 'WI', 'Wyoming': 'WY'
}

# When we complete our script, we will add an if statement that ensures that we
# only download the data one time. This will prevent us from rudely wasting
# bandwidth from data.world.

# ignore warnings from NA values upon import.
fips_name = "fips_code"
covid_filename = "COVID19DataAP.csv"
# rename_FIPS matches map_data FIPS with COVID19 FIPS name
map_data = import_geo_data(filename = "countiesWithStatesAndPopulation.shp",
                           index_col = "Date", FIPS_name = fips_name)
covid_data = import_covid_data(filename = covid_filename, FIPS_name = fips_name)
```

Call both dataframes in the console to check that everything loaded properly.

```
map_data.iloc[:15]
covid_data.iloc[:15]
```

### Reconstructing the Data

Next we will generate state level data by summing the county level data. This is largely a pedagogical exercise, as we could download state data directly. It is helpful, however, to understand how the .sum() and .groupby() functions work in _pandas_, and the process for summing county level data is straightforward.
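Before working through the aggregation function below, it may help to see `.groupby()` and `.sum()` on a toy frame. The column names here mirror the COVID-19 dataframe, but the values and states are invented for illustration:

```python
import pandas as pd

county = pd.DataFrame({
    "state": ["ND", "ND", "MN", "MN"],
    "date":  ["2020-05-01"] * 4,
    "cumulative_cases": [10, 5, 20, 7],
})

# summing numeric county columns within each (state, date) pair yields
# state-level totals, indexed by a (state, date) MultiIndex
state_totals = county.groupby(["state", "date"]).sum(numeric_only=True)
print(state_totals)
```

Each group collapses to one row, which is exactly how the county observations roll up into state observations in the function that follows.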
```
def create_state_dataframe(covid_data):
    # the keys of state_dict are the names of the states
    states = list(state_dict.keys())
    # D.C. is included in the county level data, so I elect to remove D.C.
    # if you do not remove D.C., it will be called as a Series (i.e., not a DF),
    # and will require an extra step in the script
    states.remove("District of Columbia")
    # We want to sum data within each state by summing the county values for each
    # date
    state_data = covid_data.reset_index().set_index(["date", "state", "fips_code"])\
        .groupby(["state", "date"]).sum(numeric_only = True, ignore_index = False)
    # These values will be recalculated since the sum of the county values
    # would need to be weighted to be meaningful
    drop_cols = ['uid', 'location_name', 'cumulative_cases_per_100_000',
                 'cumulative_deaths_per_100_000', 'new_cases_per_100_000',
                 'new_deaths_per_100_000', 'new_cases_7_day_rolling_avg',
                 'new_deaths_7_day_rolling_avg']
    state_data.drop(drop_cols, axis = 1, inplace = True)
    # .sum() concatenated the strings in the dataframe, so we must correct for this
    # by redefining these values
    state_data["location_type"] = "state"
    for state in states:
        state_data.loc[state_data.index.get_level_values("state") == state,
                       "Location"] = state
        state_data.loc[state_data.index.get_level_values("state") == state,
                       "state_abr"] = state_dict[state]
    return state_data
```

At the bottom of the script, after the line where *covid_data* is defined, create *state_data*.

```
state_data = create_state_dataframe(covid_data)
```

Call the result to check that *state_data* was correctly constructed.

```
state_data[:15]
```

Now it is time to merge the COVID-19 data with the data from the U.S. Census shapefile. We created *state_data* before merging the county level data with the shapefile data since the state level dataframe does not need to include the data from the shapefile.
```
def create_covid_geo_dataframe(covid_data, map_data, dates):
    # create geopandas dataframe with multiindex for date
    # original geopandas dataframe had no dates, so copies of the df are
    # stacked vertically, with a new copy for each date in the covid_data index
    # (dates is a global)
    i = 0
    for date in dates:
        # select county observations from each date in dates
        df = covid_data[covid_data.index.get_level_values("date") == date]
        # use the fips_codes from the slice of covid_data to select counties
        # from the map_data index, making sure that the map_data index matches
        # the covid_data index
        counties = df.index.get_level_values("fips_code")
        agg_df = map_data.loc[counties]
        # each row of agg_df will reflect that date
        agg_df["date"] = date
        if i == 0:
            # create the geodataframe, select coordinate system (.crs) to
            # match map_data.crs
            matching_gpd = geopandas.GeoDataFrame(agg_df, crs = map_data.crs)
            i += 1
        else:
            # after the initial geodataframe is created, stack a dataframe for
            # each date in dates. Once completed, the index of matching_gpd
            # will match the index of covid_data
            matching_gpd = matching_gpd.append(agg_df, ignore_index = False)
    # Set the matching_gpd index as ["fips_code", "date"], like the covid_data index
    matching_gpd.reset_index(inplace = True)
    matching_gpd.set_index(["fips_code", "date"], inplace = True)
    # add each column from covid_data to matching_gpd
    for key, val in covid_data.items():
        matching_gpd[key] = val
    return matching_gpd

# dates will be used to create a geopandas DataFrame with multiindex
dates = sorted(list(set(covid_data.index.get_level_values("date"))))
covid_data = create_covid_geo_dataframe(covid_data, map_data, dates)
```

As before, let's check the result by calling the covid_data that we have redefined to include the shapefile.

```
covid_data.iloc[-15:]
```

The result is that covid_data is now a geodataframe that can be used to generate maps that reflect data at the county level. We will create these maps in Part III.
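The loop above stacks one copy of the county frame per date. With plain pandas objects, `pd.concat` with the `keys=` argument achieves the same stacking in a single call; this sketch uses invented FIPS codes and dates, and a plain `DataFrame` rather than a `GeoDataFrame`:

```python
import pandas as pd

base = pd.DataFrame({"name": ["Cass", "Burleigh"]},
                    index=pd.Index([38017, 38015], name="fips_code"))
dates = ["2020-04-01", "2020-04-02"]

# concatenating one copy of the frame per date with `keys=` builds the
# (date, fips_code) MultiIndex in one call, replacing the append loop
stacked = pd.concat([base] * len(dates), keys=dates, names=["date"])

# match the (fips_code, date) level order used by covid_data
stacked = stacked.swaplevel("date", "fips_code").sort_index()
print(stacked.index.names)
```

For a true `GeoDataFrame`, the result would still need to be wrapped with `geopandas.GeoDataFrame(..., crs=map_data.crs)`.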
Next we will generate data that normalizes the number of Cases per Million and Deaths per Million. For daily rates of both variables, we will create a 7-day moving average.

```
def create_new_vars(covid_data, moving_average_days):
    # use a for loop that performs the same operations on data for cases and for deaths
    for key in ["cases", "deaths"]:
        # create a version of the key with the first letter capitalized
        cap_key = key.title()
        covid_data[cap_key + " per Million"] = covid_data["cumulative_" + key]\
            .div(covid_data["total_population"]).mul(10 ** 6)
        # generate daily data normalized per million population by taking the daily difference within each
        # entity (covid_data.index.names[0]), dividing this value by population and multiplying that value
        # by 1 million (10 ** 6)
        covid_data["Daily " + cap_key + " per Million"] = \
            covid_data["cumulative_" + key].groupby(covid_data.index.names[0])\
            .diff(1).div(covid_data["total_population"]).mul(10 ** 6)
        # taking the rolling average; choice of number of days is passed as moving_average_days
        covid_data["Daily " + cap_key + " per Million MA"] = covid_data["Daily " + \
            cap_key + " per Million"].rolling(moving_average_days).mean()
```

At the bottom of the script, define the number of days for the rolling moving average. Call *create_new_vars()* to create new variables for *covid_data* and *state_data*.

```
moving_average_days = 7
create_new_vars(covid_data, moving_average_days)
create_new_vars(state_data, moving_average_days)
```

Now check the dataframes for the new variables.

```
covid_data.iloc[-15:]
state_data.iloc[-15:]
```

The new variables have been created successfully. You might notice that the value of Daily Deaths per Million MA is not exactly zero. This is a technicality, as Python may represent the number zero as an arbitrarily small float value.
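The diff-then-rolling pattern at the heart of `create_new_vars` can be seen on a toy series. This sketch uses invented values, a 3-day window instead of the 7-day window above, and applies the rolling mean within each group via `transform`:

```python
import pandas as pd

df = pd.DataFrame({
    "state": ["ND"] * 5,
    "cumulative_cases": [10, 12, 15, 15, 20],
})

# daily new cases: first-difference the cumulative series within each state
# rows become: NaN, 2, 3, 0, 5
df["new_cases"] = df.groupby("state")["cumulative_cases"].diff(1)

# smooth with a 3-day trailing moving average, computed per group
df["new_cases_ma"] = df.groupby("state")["new_cases"].transform(
    lambda s: s.rolling(3).mean())
print(df)
```

Grouping before `rolling` keeps the window from spilling across entity boundaries when several states or counties are stacked in one column.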
The last step will be to compare data from each geographic entity by aligning the data in relation to the first day that Cases per Million or Deaths per Million passed a given threshold. This aligned data will be recorded in the *zero_day_dict*. ``` def create_zero_day_dict(covid_data, start_date): # Data from each entity will be stored in the dictionary zero_day_dict = {} # The dictionary will have a total of 4 keys # "Cases per Million", "Daily Cases per Million MA", # "Deaths per Million", "Daily Deaths per Million MA" for key in ["Cases", "Deaths"]: zero_day_dict[key + " per Million"] = {} zero_day_dict["Daily " + key + " per Million MA"] = {} # Each key is associated with a minimal value that identifies day zero # For deaths, the value is drawn from "Deaths per Million" # For cases, the value is drawn from "Cases per Million" day_zero_val = {} for key in zero_day_dict: day_zero_val[key] = 2 if "Deaths" in key else 10 # create a list of entities (states or counties) entities = sorted(list(set(covid_data.index.get_level_values(0)))) # for each key, identify the full set of values for key in zero_day_dict.keys(): vals = covid_data[key] # select values that will be used to identify day zero thresh_vals = covid_data["Deaths per Million"] if "Deaths" in key else \ covid_data["Cases per Million"] # for each entity, select the slice of values greater than the minimum value for entity in entities: dpc = vals[vals.index.get_level_values(0) == entity][thresh_vals > day_zero_val[key]] zero_day_dict[key][entity] = dpc.copy() return zero_day_dict, day_zero_val start_date = "03-15-2020" end_date = dates[-1] county_zero_day_dict, day_zero_val = create_zero_day_dict(covid_data, start_date) state_zero_day_dict, day_zero_val = create_zero_day_dict(state_data, start_date) ``` Check a key from each dictionary to make sure that the data has actually been aligned. We will call only one element from each dictionary to check that the results are as expected. 
```
state_zero_day_dict["Deaths per Million"]["New York"].iloc[-15:]
county_zero_day_dict["Daily Deaths per Million MA"][1001].iloc[-15:]
```

If you call other fips values, you may notice that in the county-level dictionary, sometimes a key will point to an empty Series. In that case, the data for this county never passed the threshold value for day zero in the realigned data.

You have completed the last step. All that is left now is to create a feature that does not unnecessarily download and process the data again once all steps have been completed. We will add a variable, *data_processed*, that records whether the data has already been processed. At the beginning of the main program - after creating *state_dict* - we include an if statement that checks whether this variable has been created. If you want to redownload the data, simply delete *data_processed* or restart the kernel.

```
if "data_processed" not in locals():
    fips_name = "fips_code"
    covid_filename = "COVID19DataAP.csv"
    # rename_FIPS matches map_data FIPS with COVID19 FIPS name
    map_data = import_geo_data(filename = "countiesWithStatesAndPopulation.shp",
                               index_col = "Date", FIPS_name = fips_name)
    covid_data = import_covid_data(filename = covid_filename, FIPS_name = fips_name)
    state_data = create_state_dataframe(covid_data)
    # dates will be used to create a geopandas DataFrame with multiindex
    dates = sorted(list(set(covid_data.index.get_level_values("date"))))
    covid_data = create_covid_geo_dataframe(covid_data, map_data, dates)
    moving_average_days = 7
    create_new_vars(covid_data, moving_average_days)
    create_new_vars(state_data, moving_average_days)
    start_date = "03-15-2020"
    end_date = dates[-1]
    county_zero_day_dict, day_zero_val = create_zero_day_dict(covid_data, start_date)
    state_zero_day_dict, day_zero_val = create_zero_day_dict(state_data, start_date)
    # once the data is processed, it is saved in memory;
    # the if statement at the top of this block of code instructs the computer
    # not to repeat these operations
    data_processed = True
```

### Part II: Visualizing Aligned Data

Next we will create visualizations of the data in *county_zero_day_dict* and *state_zero_day_dict*. The challenge will be to create a single script that can accommodate both kinds of visualizations. We will make efficient use of generators and trailing if statements to switch between types of visualizations. In the process, you will not only create informative visualizations of COVID-19 data, you will also learn how to exercise control over the attributes of your visualizations.

We will need to indicate whether we are plotting daily rates or cumulative totals. We will also need to distinguish between cases and deaths as we visualize both in a single figure. This requires preparation. Below I present the main function along with three helper functions that execute procedures within it: *plot_double_lines()*, *identify_plot_locs()*, and *plot_lines_and_text()*. These helpers improve the readability of the code by isolating the features that distinguish plots of daily rates from plots of cumulative totals.
```
def plot_zero_day_data(state_name, state, covid_data, zero_day_dict, day_zero_val,
                       keys, entity_type, entities, pp, n_largest = 10,
                       bold_entities = None, daily = False):
    # initialize figure that will hold two plots, stacked vertically
    fig, a = plt.subplots(2, 1, figsize = (48, 32))
    for key in keys:
        # if daily is true, plot moving average of daily rates for key,
        # else plot values identified by key
        val_key = "Daily " + key + " MA" if daily else key
        # only plot if there are actually values in zero_day_dict
        if len(entities) > 0:
            # i will track color of bolded entities
            # j will track color of non-bolded entities
            i = 0
            j = 0
            # select the upper part - [0] - of the figure to plot "Cases"
            # and the lower part - [1] - to plot "Deaths"
            ax = a[0] if "Cases" in key else a[1]
            # For plotting levels, we will include lines that indicate a doubling of
            # cases or deaths from day zero
            # Function also identifies the maximum x and y values to determine size of plot
            max_x, max_y = plot_double_lines(ax, zero_day_dict, day_zero_val,
                                             val_key, entities, daily)
            # select entities to be visualized.
            # top_locs are either the n most populous counties or states selected
            # entities in top_locs will be bolded
            locs, top_locs = identify_plot_locs(state_name, covid_data, bold_entities, n_largest)
            # cycle through each entity within the set (states within the U.S.
            # or counties within states)
            for entity in entities:
                vals = zero_day_dict[val_key][entity]
                # D.C. only has a Series as its value; you might include D.C.
                # by using an if statement that does not call a subset of
                # entities if there is only one entity
                if len(vals) > 0 and entity != "District of Columbia":
                    # select only observations that include entity
                    loc = locs[locs.index.get_level_values(entity_type) == entity]["Location"][0]
                    # plot lines and increase i if entity is in top_locs, else increase j
                    i, j = plot_lines_and_text(ax, vals, state, state_dict, loc,
                                               top_locs, colors_dict, i, j)
            # set plot attributes
            if daily:
                # provide a bit of *breathing room* at the top of the visualization
                ax.set_ylim(bottom = 0, top = max_y * 1.08)
            else:
                # if observing totals, log the y-axis so you can compare rates.
                ax.set_yscale('log')
                # In some cases, max_y reads as np.nan, so this exception was necessary
                if max_y is not np.nan:
                    ax.set_ylim(bottom = np.e ** (np.log(day_zero_val[key])),
                                top = np.e ** (np.log(max_y * 4)))
                # make sure that axis is labeled with integers, not floats
                vals = ax.get_yticks()
                ax.set_yticklabels([int(y) if y >= 1 else round(y, 1) for y in vals])
            ax.set_ylabel(val_key)
            # provide space for entity names on the edge of the plot: see plot_lines_and_text()
            ax.set_xlim(right = max_x + 10)
            # key (not val_key) provides basis for index values that align data at day zero
            ax.set_xlabel("Days Since " + key + " Exceeded " + str(day_zero_val[key]))
    title = str(end_date)[:10] + "\n7 Day Moving Average" + "\nCOVID-19 in " + state_name \
        if daily else str(end_date)[:10] + "\nCOVID-19 in " + state_name
    # title for daily data takes up 3 lines instead of two, so move y_pos higher for daily data
    y_pos = .987 if daily else .95
    fig.suptitle(title, y = y_pos, fontsize = 75)
    pp.savefig(fig, bbox_inches = "tight")
    plt.savefig("statePlots/" + state + " " + val_key + ".png", bbox_inches = "tight")
    plt.show()
    plt.close()

# this function creates lines that indicate a doubling of the day_zero_value every X number of days
# the function also checks whether the lines indicating doubling reach the maximum extent of the plot
def plot_double_lines(ax, zero_day_dict, day_zero_val, key, entities, daily):
    max_x = max([len(zero_day_dict[key][entity]) for entity in entities])
    max_y = max([zero_day_dict[key][entity].max() for entity in entities])
    # Do not include doubling lines for daily rates
    if not daily:
        double_lines = {}
        for i in [2, 3, 5]:
            double_lines[i] = [day_zero_val[key] * 2 ** (k / i) for k in range(9 * i)]
            ax.plot(double_lines[i], label = None, alpha = .2, color = "k", linewidth = 5)
            # labels are placed at the end of each doubling line
            ax.text(len(double_lines[i]), double_lines[i][len(double_lines[i]) - 1],
                    "X2 every \n" + str(i) + " days", alpha = .2)
        # check if doubling lines press outside current max_x, max_y
        max_x2 = max(len(val) for val in double_lines.values())
        max_y2 = max(val[-1] for val in double_lines.values())
        max_x = max_x if max_x > max_x2 else max_x2
        max_y = max_y if max_y > max_y2 else max_y2
    return max_x, max_y

# the program must select which entities to plot with bold lines and larger text
# for counties, this selection occurs in light of population
def identify_plot_locs(state_name, covid_data, bold_entities, n_largest):
    if state_name not in state_dict.keys():
        # select states within the U.S. to bold
        locs = covid_data
        top_locs = covid_data[covid_data["state_abr"].isin(bold_entities)]
    else:
        # select counties within state to bold
        locs = covid_data[covid_data["state"] == state_name][["Location", "state_abr", "total_population"]]
        top_locs = locs[locs.index.get_level_values("date") == locs.index.get_level_values("date")[0]]
        top_locs = top_locs[top_locs["total_population"] >= top_locs["total_population"]\
            .nlargest(n_largest).min()]["Location"]
    return locs, top_locs

def plot_lines_and_text(ax, vals, state, state_dict, loc, top_locs, colors_dict, i, j):
    # procedure to select color
    def select_color(loc, top_locs, colors_dict, colors, i, j):
        val = i if loc in top_locs.values else 7  # j
        if loc not in colors_dict.keys():
            colors_dict[loc] = colors[val % 10]
        color = colors_dict[loc]
        if loc in top_locs.values:
            i += 1
        else:
            j += 1
        return color, i, j
    color, i, j = select_color(loc, top_locs, colors_dict, colors, i, j)
    # counties are in the form "San Bernardino, CA". To select the county name from loc,
    # remove the last 4 characters.
    # state abbreviations are selected for comparison between states
    label = state_dict[loc] if state not in state_dict.values() else loc[:-4].replace(" ", "\n")
    # choose different sets of characteristics for bolded entities vs. entities not emphasized
    linewidth, ls, fontsize, alpha = (7, "-", 36, 1) if loc in top_locs.values else (3, "--", 24, .5)
    ax.plot(vals.values, label = label, ls = ls, linewidth = linewidth, alpha = alpha, color = color)
    # write text at the end of the line
    ax.text(x = len(vals.values) - 1, y = vals.values[-1], s = label,
            fontsize = fontsize, color = color, alpha = alpha)
    return i, j
```

These functions are the backbone of the program that creates the data visualizations. Be sure to read carefully through the script to understand its structure. Now we will call the main function, *plot_zero_day_data()*, distinguishing between county-level and state-level data and also distinguishing between daily rates and cumulative totals.
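These functions lean heavily on one-line conditional expressions ("trailing ifs") to switch attributes between the two plot modes in a single statement. A minimal sketch of the pattern, with invented style values and a hypothetical helper name:

```python
def line_style(loc, top_locs, daily):
    # Tuple unpacking with a conditional expression: emphasized entities get
    # heavy solid lines, the rest get thin dashed ones (values are illustrative)
    linewidth, ls, alpha = (7, "-", 1.0) if loc in top_locs else (3, "--", 0.5)
    # A second conditional expression picks which column to plot
    val_key = "Daily Cases per Million MA" if daily else "Cases per Million"
    return linewidth, ls, alpha, val_key

print(line_style("New York", {"New York", "Texas"}, daily=True))
```

The same idiom appears throughout the plotting code, e.g. in `val_key = "Daily " + key + " MA" if daily else key`.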
We will use a boolean variable - _daily_ - to identify whether rates or totals should be plotted. In this case, I have chosen to plot states in the Upper Midwest.

```
plt.rcParams['axes.xmargin'] = 0
plt.rcParams.update({'font.size': 32})
keys = ["Cases per Million", "Deaths per Million"]
lines = {}
colors = ["C" + str(i) for i in range(10)]
colors_dict = {}
pp = PdfPages("covidDataByState.pdf")
n_largest = 10
for daily in [True, False]:
    if not daily:
        for state_name, state in state_dict.items():
            state_fips = sorted(list(set(covid_data[covid_data["state_abr"] == state]\
                .index.get_level_values("fips_code").copy())))
            plot_zero_day_data(state_name, state, covid_data, county_zero_day_dict,
                               day_zero_val, keys, "fips_code", state_fips, pp,
                               n_largest, daily = daily)
    else:
        plot_zero_day_data("Upper Midwest", "Upper Midwest", state_data, state_zero_day_dict,
                           day_zero_val, keys, "state", state_dict.keys(), pp,
                           bold_entities = ["IA", "MN", "NE", "ND", "SD", "WI"], daily = daily)
        plot_zero_day_data("Southwest", "Southwest", state_data, state_zero_day_dict,
                           day_zero_val, keys, "state", state_dict.keys(), pp,
                           bold_entities = ["AZ", "CA", "CO", "NM", "NV", "TX", "UT"], daily = daily)
        plot_zero_day_data("Northwest", "Northwest", state_data, state_zero_day_dict,
                           day_zero_val, keys, "state", state_dict.keys(), pp,
                           bold_entities = ["ID", "MT", "OR", "WA", "WY"], daily = daily)
        plot_zero_day_data("Midwest", "Midwest", state_data, state_zero_day_dict,
                           day_zero_val, keys, "state", state_dict.keys(), pp,
                           bold_entities = ["IL", "IN", "KS", "MI", "MO", "OH", "OK"], daily = daily)
        plot_zero_day_data("South", "South", state_data, state_zero_day_dict,
                           day_zero_val, keys, "state", state_dict.keys(), pp,
                           bold_entities = ["AL", "AR", "KY", "LA", "MS", "TN"], daily = daily)
        plot_zero_day_data("Southeast", "Southeast", state_data, state_zero_day_dict,
                           day_zero_val, keys, "state", state_dict.keys(), pp,
                           bold_entities = ["FL", "GA", "MD", "NC", "SC", "VA", "WV"], daily = daily)
        plot_zero_day_data("Northeast", "Northeast", state_data, state_zero_day_dict,
                           day_zero_val, keys, "state", state_dict.keys(), pp,
                           bold_entities = ["CT", "DE", "MA", "ME", "NH", "NJ", "NY", "PA", "RI", "VT"],
                           daily = daily)
pp.close()
```

Output will be generated in the following format:

<img src="https://github.com/jlcatonjr/Learn-Python-for-Stats-and-Econ/blob/master/Projects/COVID19/statePlots/Upper%20Midwest%20Daily%20Deaths%20per%20Million%20MA.png?raw=true" alt="U.S. Covid Rates" title="" />

<img src="https://github.com/jlcatonjr/Learn-Python-for-Stats-and-Econ/blob/master/Projects/COVID19/statePlots/NY%20Deaths%20per%20Million.png?raw=true" alt="New York Covid Levels" title="" />

If you would like to view the full set of output, go to the statePlots folder in the GitHub directory where this post is stored.

### Conclusion

Here we learned to organize COVID-19 data for the purpose of plotting. The result is that we created visualizations that convey two kinds of data - cases and deaths - in two different formats - levels and daily rates - for two different levels of analysis - states and counties. We have identified opportunities for efficiency improvements along the way that may be useful to develop if we continue to repurpose this script. In the next post, we will plot the data using dynamic maps where color reflects the level of Cases per Million and Deaths per Million. These visualizations will help convey the nature of the spread that became evident in mid-March.
```
# LSTM with window regression framing (adapted from the airline passengers example)
import numpy
import keras
import matplotlib.pyplot as plt
from pandas import read_csv
import math
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.layers import LSTM
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import OneHotEncoder
from sklearn.model_selection import train_test_split

# convert an array of values into a dataset matrix
def create_dataset(dataset, look_back=1):
    dataX, dataY = [], []
    for i in range(len(dataset)-look_back-1):
        a = dataset[i:(i+look_back), 0]
        dataX.append(a)
        dataY.append(dataset[i + look_back, 0])
    return numpy.array(dataX), numpy.array(dataY)

# fix random seed for reproducibility
numpy.random.seed(7)

# load the dataset
dataframe = read_csv('w_d_v.csv', usecols=[7], engine='python', skipfooter=3)
all_data = read_csv('all_data.csv', usecols=[7], engine='python', skipfooter=3)
dataset = dataframe.values
allData = all_data.values
look_back = 3
trainX, trainY = create_dataset(dataset, look_back)
AllX, AllY = create_dataset(allData, look_back)
trainY = numpy.reshape(trainY, (trainY.shape[0], -1))
AllY = numpy.reshape(AllY, (AllY.shape[0], -1))

encX = OneHotEncoder()
encX.fit(AllX)
print("enc.n_values_ is:", encX.n_values_)
print("enc.feature_indices_ is:", encX.feature_indices_)
encY = OneHotEncoder()
encY.fit(AllY)
print("enc.n_values_ is:", encY.n_values_)
print("enc.feature_indices_ is:", encY.feature_indices_)

trainX_one = encX.transform(trainX).toarray()
train_X = numpy.reshape(trainX_one, (trainX_one.shape[0], look_back, -1))
train_Y = encY.transform(trainY).toarray()
a_train, a_test, b_train, b_test = train_test_split(train_X, train_Y, test_size=0.1, random_state=42)
print(a_train.shape)

# create and fit the LSTM network
model = Sequential()
# model.add(Embedding(max_features, output_dim=256))
model.add(LSTM(512, return_sequences=True, input_shape=(3, a_train.shape[2])))  # returns a sequence of 512-dim vectors
model.add(LSTM(256))  # returns a single 256-dim vector
model.add(Dense(a_train.shape[2]))
# model.compile(loss='mean_squared_logarithmic_error', optimizer='rmsprop', metrics=['accuracy'])
# model.compile(loss='categorical_hinge', optimizer='rmsprop', metrics=['accuracy'])
# model.compile(loss='logcosh', optimizer='rmsprop', metrics=['accuracy'])
model.compile(loss='cosine_proximity', optimizer='rmsprop', metrics=['accuracy'])
# batch size
model.fit(a_train, b_train, epochs=100, batch_size=64, verbose=2, validation_data=(a_test, b_test))

# same network, log-cosh loss
model = Sequential()
model.add(LSTM(512, return_sequences=True, input_shape=(3, a_train.shape[2])))
model.add(LSTM(256))
model.add(Dense(a_train.shape[2]))
# model.compile(loss='mean_squared_logarithmic_error', optimizer='rmsprop', metrics=['accuracy'])
# model.compile(loss='categorical_hinge', optimizer='rmsprop', metrics=['accuracy'])
model.compile(loss='logcosh', optimizer='rmsprop', metrics=['accuracy'])
# model.compile(loss='cosine_proximity', optimizer='rmsprop', metrics=['accuracy'])
model.fit(a_train, b_train, epochs=100, batch_size=64, verbose=2, validation_data=(a_test, b_test))

# same network, categorical hinge loss
model = Sequential()
model.add(LSTM(512, return_sequences=True, input_shape=(3, a_train.shape[2])))
model.add(LSTM(256))
model.add(Dense(a_train.shape[2]))
# model.compile(loss='mean_squared_logarithmic_error', optimizer='rmsprop', metrics=['accuracy'])
model.compile(loss='categorical_hinge', optimizer='rmsprop', metrics=['accuracy'])
# model.compile(loss='logcosh', optimizer='rmsprop', metrics=['accuracy'])
# model.compile(loss='cosine_proximity', optimizer='rmsprop', metrics=['accuracy'])
model.fit(a_train, b_train, epochs=100, batch_size=64, verbose=2, validation_data=(a_test, b_test))

# same network, mean squared logarithmic error loss
model = Sequential()
model.add(LSTM(512, return_sequences=True, input_shape=(3, a_train.shape[2])))
model.add(LSTM(256))
model.add(Dense(a_train.shape[2]))
model.compile(loss='mean_squared_logarithmic_error', optimizer='rmsprop', metrics=['accuracy'])
# model.compile(loss='categorical_hinge', optimizer='rmsprop', metrics=['accuracy'])
# model.compile(loss='logcosh', optimizer='rmsprop', metrics=['accuracy'])
# model.compile(loss='cosine_proximity', optimizer='rmsprop', metrics=['accuracy'])
model.fit(a_train, b_train, epochs=100, batch_size=64, verbose=2, validation_data=(a_test, b_test))

# pip3 install h5py
import h5py

# define a bidirectional LSTM
from keras.layers import TimeDistributed
from keras.layers import Bidirectional
model = Sequential()
model.add(Bidirectional(LSTM(512, return_sequences=True), input_shape=(3, a_train.shape[2])))
# model.add(TimeDistributed(Dense(1, activation='sigmoid')))
model.add(LSTM(256))
model.add(Dense(a_train.shape[2]))
model.compile(loss='cosine_proximity', optimizer='rmsprop', metrics=['accuracy'])
model.fit(a_train, b_train, epochs=100, batch_size=64, verbose=2, validation_data=(a_test, b_test))

from keras.models import *
model.save('wdv.h5')  # HDF5 file; you have to pip3 install h5py if you don't have it
import os
print(os.path.abspath('.'))

from keras.models import load_model
model = load_model('wdv.h5')  # once tuned, save the model under each user's name

testdata = read_csv('weekend_day_test.csv', usecols=[7], engine='python', skipfooter=3)
Tdataset = testdata.values
look_back = 3
TX, TY = create_dataset(Tdataset, look_back)
encX = OneHotEncoder()
encX.fit(AllX)
TX = encX.transform(TX).toarray()
TestX = numpy.reshape(TX, (TX.shape[0], look_back, -1))
TY = numpy.reshape(TY, (TY.shape[0], -1))
encY = OneHotEncoder()
encY.fit(AllY)
TestY = encY.transform(TY).toarray()
model.evaluate(TestX, TestY, batch_size=64, verbose=2, sample_weight=None)
trainPredict = model.predict(a_train)
print(trainPredict[0])
print(b_train[0] * trainPredict[0])
```
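The `create_dataset` windowing used above turns a sequence into lagged input/target pairs. The same idea, reimplemented on a small array for illustration (note that the notebook's version stops one pair earlier because of its `-look_back-1` loop bound):

```python
import numpy as np

def make_windows(series, look_back=3):
    # Each row of X holds `look_back` consecutive values; y is the value that follows
    X, y = [], []
    for i in range(len(series) - look_back):
        X.append(series[i:i + look_back])
        y.append(series[i + look_back])
    return np.array(X), np.array(y)

X, y = make_windows(np.arange(10), look_back=3)
print(X.shape, y.shape)  # (7, 3) (7,)
```

Each window of three past values predicts the next value, so a length-10 sequence yields seven training pairs.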
# Map Benign Mutations to 3D Structure

This notebook maps a dataset of 63,197 missense mutations with allele frequencies >= 1% and < 25%, extracted from the ExAC database, to 3D structures in the Protein Data Bank. The dataset is described in:

[1] Niroula A, Vihinen M (2019) How good are pathogenicity predictors in detecting benign variants? PLoS Comput Biol 15(2): e1006481. doi: [10.1371/journal.pcbi.1006481](https://doi.org/10.1371/journal.pcbi.1006481)

```
# Disable Numba: temporary workaround for https://github.com/sbl-sdsc/mmtf-pyspark/issues/288
import os
os.environ['NUMBA_DISABLE_JIT'] = "1"

from pyspark.sql import SparkSession
from mmtfPyspark.datasets import dbSnpDataset, pdbjMineDataset
from ipywidgets import interact, IntSlider
import pandas as pd
import py3Dmol

# Initialize Spark
spark = SparkSession.builder.appName("BenignMutationsTo3DStructure").getOrCreate()

# Enable Arrow-based columnar data transfers between Spark and Pandas dataframes
spark.conf.set("spark.sql.execution.arrow.enabled", "true")
```

## Read ExAC_AAS dataset [1]

```
df = pd.read_excel('http://structure.bmc.lu.se/VariBench/ExAC_AAS_20171214.xlsx', dtype=str, nrows=63198)
df = df[df.RSID.str.startswith('rs')]  # keep only rows that contain rs ids
df = df[df.RSID.str.contains(';') == False]  # skip rows with a ';' in the RSID column
df['rs_id'] = df.RSID.str[2:].astype('int')  # create an integer column of rs ids
df.head()
```

Convert the Pandas dataframe to a Spark dataframe:

```
ds = spark.createDataFrame(df)
```

## Read file with dbSNP info

The following dataset was created from the NCBI dbSNP SNP3D_PDB_GRCH37 dataset by mapping non-synonymous SNPs to human proteins with >= 95% sequence identity in the PDB.
```
dn = dbSnpDataset.get_cached_dataset()
```

## Find the intersection between the two dataframes

```
pd.set_option('display.max_columns', None)  # show all columns
dp = dn.join(ds, dn.snp_id == ds.rs_id).toPandas()
dp = dp.sort_values(['chr', 'pos'])
dp.head()
```

## View mutations grouped by protein chain

Use the slider to view each protein chain.

```
chains = dp.groupby('pdbChainId')

def view_grouped_mutations(grouped_df, *args):
    chainIds = list(grouped_df.groups.keys())
    def view3d(show_bio_assembly=False, show_surface=False, show_labels=True, i=0):
        group = grouped_df.get_group(chainIds[i])
        pdb_id, chain_id = chainIds[i].split('.')
        viewer = py3Dmol.view(query='pdb:' + pdb_id, options={'doAssembly': show_bio_assembly})
        # polymer style
        viewer.setStyle({'cartoon': {'colorscheme': 'chain', 'width': 0.6, 'opacity': 0.9}})
        # non-polymer style
        viewer.setStyle({'hetflag': True}, {'stick': {'radius': 0.3, 'singleBond': False}})
        # highlight chain of interest in blue
        viewer.setStyle({'chain': chain_id}, {'cartoon': {'color': 'blue'}})
        rows = group.shape[0]
        for j in range(0, rows):
            res_num = str(group.iloc[j]['pdbResNum'])
            mod_res = {'resi': res_num, 'chain': chain_id}
            col = 'red'
            c_col = col + 'Carbon'
            viewer.addStyle(mod_res, {'stick': {'colorscheme': c_col, 'radius': 0.2}})
            viewer.addStyle(mod_res, {'sphere': {'color': col, 'opacity': 0.6}})
            if show_labels:
                label = 'rs' + str(group.iloc[j]['rs_id'])
                viewer.addLabel(label, {'fontSize': 10, 'fontColor': 'black', 'backgroundColor': 'ivory'},
                                {'resi': res_num, 'chain': chain_id})
        # print header
        print("PDB Id: " + pdb_id + " chain Id: " + chain_id)
        # print any specified additional columns from the dataframe
        for a in args:
            print(a + ": " + group.iloc[0][a])
        viewer.zoomTo({'chain': chain_id})
        if show_surface:
            viewer.addSurface(py3Dmol.SES, {'opacity': 0.8, 'color': 'lightblue'}, {'chain': chain_id})
        return viewer.show()
    s_widget = IntSlider(min=0, max=len(chainIds)-1, description='Structure', continuous_update=False)
    return interact(view3d,
                    show_bio_assembly=False, show_surface=False, show_labels=True, i=s_widget)

view_grouped_mutations(chains, 'uniprotId', 'Chromosome');

def view_single_mutation(df, distance_cutoff, *args):
    def view3d(show_bio_assembly=False, show_surface=False, show_labels=True, i=0):
        pdb_id, chain_id = df.iloc[i]['pdbChainId'].split('.')
        viewer = py3Dmol.view(query='pdb:' + pdb_id, options={'doAssembly': show_bio_assembly})
        # polymer style
        viewer.setStyle({'cartoon': {'colorscheme': 'chain', 'width': 0.6, 'opacity': 0.9}})
        # non-polymer style
        viewer.setStyle({'hetflag': True}, {'stick': {'radius': 0.3, 'singleBond': False}})
        # highlight chain of interest in blue
        viewer.setStyle({'chain': chain_id}, {'cartoon': {'color': 'blue', 'opacity': 0.5}})
        res_num = str(df.iloc[i]['pdbResNum'])
        label = 'rs' + str(df.iloc[i]['rs_id'])
        mod_res = {'resi': res_num, 'chain': chain_id}
        col = 'red'
        c_col = col + 'Carbon'
        viewer.addStyle(mod_res, {'stick': {'colorscheme': c_col, 'radius': 0.2}})
        viewer.addStyle(mod_res, {'sphere': {'color': col, 'opacity': 0.8}})
        if show_labels:
            viewer.addLabel(label, {'fontSize': 12, 'fontColor': 'black', 'backgroundColor': 'ivory'},
                            {'resi': res_num, 'chain': chain_id})
        # select neighboring residues by distance
        surroundings = {'chain': chain_id, 'resi': res_num, 'byres': True, 'expand': distance_cutoff}
        # residues surrounding mutation positions
        viewer.addStyle(surroundings, {'stick': {'colorscheme': 'orangeCarbon', 'radius': 0.15}})
        viewer.zoomTo(surroundings)
        if show_surface:
            viewer.addSurface(py3Dmol.SES, {'opacity': 0.8, 'color': 'lightblue'}, {'chain': chain_id})
        # print header
        print("PDB Id:", pdb_id, "chain Id:", chain_id, "residue:", res_num, "mutation:", label)
        # print any specified additional columns from the dataframe
        for a in args:
            print(a + ": " + str(df.iloc[i][a]))
        return viewer.show()
    s_widget = IntSlider(min=0, max=len(df)-1, description='Structure', continuous_update=False)
    return interact(view3d, show_bio_assembly=False, show_surface=False, show_labels=True,
                    i=s_widget)
```

## View one mutation at a time

Use the slider to view each mutation. Interacting residues within the distance_cutoff of 8 Å are rendered as orange sticks.

```
distance_cutoff = 8
view_single_mutation(dp, distance_cutoff, 'uniprotId', 'Chromosome', 'Position', 'Reference_allele',
                     'Altered_allele', 'Reference_AA', 'Altered_AA', 'clinsig', 'AF_Adj');
spark.stop()
```
## NumPy for Performance ### NumPy constructors We saw previously that NumPy's core type is the `ndarray`, or N-Dimensional Array: ``` import numpy as np np.zeros([3, 4, 2, 5])[2, :, :, 1] ``` The real magic of numpy arrays is that most python operations are applied, quickly, on an elementwise basis: ``` x = np.arange(0, 256, 4).reshape(8, 8) y = np.zeros((8, 8)) %%timeit for i in range(8): for j in range(8): y[i][j] = x[i][j] + 10 x + 10 ``` Numpy's mathematical functions also happen this way, and are said to be "vectorized" functions. ``` np.sqrt(x) ``` Numpy contains many useful functions for creating matrices. In our earlier lectures we've seen `linspace` and `arange` for evenly spaced numbers. ``` np.linspace(0, 10, 21) np.arange(0, 10, 0.5) ``` Here's one for creating matrices like coordinates in a grid: ``` xmin = -1.5 ymin = -1.0 xmax = 0.5 ymax = 1.0 resolution = 300 xstep = (xmax - xmin) / resolution ystep = (ymax - ymin) / resolution ymatrix, xmatrix = np.mgrid[ymin:ymax:ystep, xmin:xmax:xstep] print(ymatrix) ``` We can add these together to make a grid containing the complex numbers we want to test for membership in the Mandelbrot set. ``` values = xmatrix + 1j * ymatrix print(values) ``` ### Arraywise Algorithms We can use this to apply the mandelbrot algorithm to whole *ARRAYS* ``` z0 = values z1 = z0 * z0 + values z2 = z1 * z1 + values z3 = z2 * z2 + values print(z3) ``` So can we just apply our `mandel1` function to the whole matrix? ``` def mandel1(position,limit=50): value = position while abs(value) < 2: limit -= 1 value = value**2 + position if limit < 0: return 0 return limit mandel1(values) ``` No. The *logic* of our current routine would require stopping for some elements and not for others. We can ask numpy to **vectorise** our method for us: ``` mandel2 = np.vectorize(mandel1) data5 = mandel2(values) from matplotlib import pyplot as plt %matplotlib inline plt.imshow(data5, interpolation='none') ``` Is that any faster? 
```
%%timeit
data5 = mandel2(values)
```

This is not significantly faster. When we use *vectorize* it's just hiding a plain old Python for loop under the hood. We want to make the loop over matrix elements take place in the "**C Layer**".

What if we just apply the Mandelbrot algorithm without checking for divergence until the end:

```
def mandel_numpy_explode(position, limit=50):
    value = position
    while limit > 0:
        limit -= 1
        value = value**2 + position
        diverging = abs(value) > 2
    return abs(value) < 2

data6 = mandel_numpy_explode(values)
```

OK, we need to prevent it from running off to $\infty$

```
def mandel_numpy(position, limit=50):
    value = position
    while limit > 0:
        limit -= 1
        value = value**2 + position
        diverging = abs(value) > 2
        # Avoid overflow
        value[diverging] = 2
    return abs(value) < 2

data6 = mandel_numpy(values)
```

```
%%timeit
data6 = mandel_numpy(values)
```

```
from matplotlib import pyplot as plt
%matplotlib inline
plt.imshow(data6, interpolation='none')
```

Wow, that was TEN TIMES faster. There are quite a few NumPy tricks there; let's remind ourselves of how they work:

```
diverging = abs(z3) > 2
z3[diverging] = 2
```

When we apply a logical condition to a NumPy array, we get a logical array.

```
x = np.arange(10)
y = np.ones([10]) * 5
z = x > y
print(z)
```

Logical arrays can be used to index into arrays:

```
x[x > 3]
x[np.logical_not(z)]
```

And you can use such an index as the target of an assignment:

```
x[z] = 5
x
```

Note that we didn't compare two arrays to get our logical array, but an array to a scalar integer -- this was broadcasting again.

### More Mandelbrot

Of course, we didn't calculate the number-of-iterations-to-diverge, just whether the point was in the set.
Let's correct our code to do that: ``` def mandel4(position,limit=50): value = position diverged_at_count = np.zeros(position.shape) while limit > 0: limit -= 1 value = value**2 + position diverging = abs(value) > 2 first_diverged_this_time = np.logical_and(diverging, diverged_at_count == 0) diverged_at_count[first_diverged_this_time] = limit value[diverging] = 2 return diverged_at_count data7 = mandel4(values) plt.imshow(data7, interpolation='none') %%timeit data7 = mandel4(values) ``` Note that here, all the looping over mandelbrot steps was in Python, but everything below the loop-over-positions happened in C. The code was amazingly quick compared to pure Python. Can we do better by avoiding a square root? ``` def mandel5(position, limit=50): value = position diverged_at_count = np.zeros(position.shape) while limit > 0: limit -= 1 value = value**2 + position diverging = value * np.conj(value) > 4 first_diverged_this_time = np.logical_and(diverging, diverged_at_count == 0) diverged_at_count[first_diverged_this_time] = limit value[diverging] = 2 return diverged_at_count %%timeit data8 = mandel5(values) ``` Probably not worth the time I spent thinking about it! ### NumPy Testing Now, let's look at calculating those residuals, the differences between the different datasets. 
``` data8 = mandel5(values) data5 = mandel2(values) np.sum((data8 - data5)**2) ``` For our non-numpy datasets, numpy knows to turn them into arrays: ``` xmin = -1.5 ymin = -1.0 xmax = 0.5 ymax = 1.0 resolution = 300 xstep = (xmax-xmin)/resolution ystep = (ymax-ymin)/resolution xs = [(xmin + (xmax - xmin) * i / resolution) for i in range(resolution)] ys = [(ymin + (ymax - ymin) * i / resolution) for i in range(resolution)] data1 = [[mandel1(complex(x, y)) for x in xs] for y in ys] sum(sum((data1 - data7)**2)) ``` But this doesn't work for pure non-numpy arrays ``` data2 = [] for y in ys: row = [] for x in xs: row.append(mandel1(complex(x, y))) data2.append(row) data2 - data1 ``` So we have to convert to NumPy arrays explicitly: ``` sum(sum((np.array(data2) - np.array(data1))**2)) ``` NumPy provides some convenient assertions to help us write unit tests with NumPy arrays: ``` x = [1e-5, 1e-3, 1e-1] y = np.arccos(np.cos(x)) y np.testing.assert_allclose(x, y, rtol=1e-6, atol=1e-20) np.testing.assert_allclose(data7, data1) ``` ### Arraywise operations are fast Note that we might worry that we carry on calculating the mandelbrot values for points that have already diverged. 
```
def mandel6(position, limit=50):
    value = np.zeros(position.shape) + position
    calculating = np.ones(position.shape, dtype='bool')
    diverged_at_count = np.zeros(position.shape)
    while limit > 0:
        limit -= 1
        value[calculating] = value[calculating]**2 + position[calculating]
        diverging_now = np.zeros(position.shape, dtype='bool')
        diverging_now[calculating] = value[calculating] * \
            np.conj(value[calculating]) > 4
        calculating = np.logical_and(calculating, np.logical_not(diverging_now))
        diverged_at_count[diverging_now] = limit
    return diverged_at_count

data8 = mandel6(values)
```

```
%%timeit
data8 = mandel6(values)
```

```
plt.imshow(data8, interpolation='none')
```

This was **not faster**, even though it was **doing less work**. This often happens: on modern computers, **branches** (if statements, function calls) and **memory access** are usually the rate-determining steps, not maths. Complicating your logic to avoid calculations can therefore slow you down. The only way to know is to **measure**.

### Indexing with arrays

We've been using boolean arrays a lot to get access to some elements of an array. We can also do this with integers:

```
x = np.arange(64)
y = x.reshape([8, 8])
y
y[[2, 5]]
y[[0, 2, 5], [1, 2, 7]]
```

We can use a : to indicate we want all the values from a particular axis:

```
y[0:4:2, [0, 2]]
```

We can mix array selectors, boolean selectors, :s and ordinary array sequencers:

```
z = x.reshape([4, 4, 4])
z
z[:, [1, 3], 0:3]
```

We can manipulate shapes by adding new indices in selectors with np.newaxis:

```
z[:, np.newaxis, [1, 3], 0].shape
```

When we use basic indexing with integers and : expressions, we get a **view** on the matrix, so a copy is avoided:

```
a = z[:, :, 2]
a[0, 0] = -500
z
```

We can also use ... to specify ": for as many as possible intervening axes":

```
z[1]
z[..., 2]
```

However, boolean mask indexing and array filter indexing always cause a copy.
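That view-versus-copy distinction is easy to verify directly: a write through a basic slice shows up in the original array, while a write through a boolean-mask result does not.

```python
import numpy as np

a = np.arange(8)

view = a[2:5]        # basic slicing: a view sharing a's memory
view[0] = 99         # writing through the view modifies a itself

copy = a[a > 50]     # boolean mask indexing: always a fresh copy
copy[0] = -1         # writing to the copy leaves a untouched

print(a)             # a[2] is now 99, but no -1 anywhere
```

This is one reason the masked `mandel6` above allocates so much: every `value[calculating]` on the right-hand side materializes a new copy each iteration.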
Let's try again at avoiding doing unnecessary work by using new arrays containing the reduced data instead of a mask:

```
def mandel7(position, limit=50):
    positions = np.zeros(position.shape) + position
    value = np.zeros(position.shape) + position
    indices = np.mgrid[0:position.shape[0], 0:position.shape[1]]
    diverged_at_count = np.zeros(position.shape)
    while limit > 0:
        limit -= 1
        value = value**2 + positions
        diverging_now = value * np.conj(value) > 4
        diverging_now_indices = indices[:, diverging_now]
        carry_on = np.logical_not(diverging_now)
        value = value[carry_on]
        indices = indices[:, carry_on]
        positions = positions[carry_on]
        diverged_at_count[diverging_now_indices[0, :],
                          diverging_now_indices[1, :]] = limit
    return diverged_at_count

data9 = mandel7(values)
plt.imshow(data9, interpolation='none')

%%timeit
data9 = mandel7(values)
```

Still slower. Probably due to lots of copies -- the point here is that you need to *experiment* to see which optimisations will work. Performance programming needs to be empirical.

## Profiling

We've seen how to compare different functions by the time they take to run. However, we haven't obtained much information about where the code is spending more time. For that we need to use a profiler. IPython offers a profiler through the `%prun` magic. Let's use it to see how it works:

```
%prun mandel7(values)
```

`%prun` shows one line per function call, ordered by the total time spent on each. However, sometimes a line-by-line output may be more helpful. For that we can use the `line_profiler` package (you need to install it using `pip`). Once installed you can activate it in any notebook by running:

```
%load_ext line_profiler
```

And the `%lprun` magic should now be available:

```
%lprun -f mandel7 mandel7(values)
```

Here, it is clearer to see which operations are keeping the code busy.
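Outside IPython, the standard-library `cProfile` gives the same per-function breakdown as `%prun` (shown here on a stand-in function, since `mandel7` needs the notebook's `values` array):

```python
import cProfile
import io
import pstats

def busy_work(n=200000):
    # Stand-in for mandel7: something measurable to profile.
    return sum(i * i for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
busy_work()
profiler.disable()

# Print the five most expensive calls by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```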
``` !pip install BeautifulSoup4 !pip install selenium !pip install pandas_datareader !pip install pandas !pip install webdriver_manager !apt install chromium-chromedriver import os # declare a directory name dir_name= os.getcwd()+'/sentiment-data/' import pandas as pd hkex_files=os.path.join(dir_name,'stock_ticker_datasets/hkex.csv') hkex=pd.read_csv(hkex_files) # hkex[hkex.iloc['Category'] == 'Equity Securities'] hkex=hkex.loc[hkex['Category'] == 'Equity'] hkex['Ticker']=hkex['Ticker'].astype(str) hkex_input=hkex['Ticker'] n = 400 #chunk row size hkex_df = [hkex_input[i:i+n] for i in range(0,len(hkex_input),n)] # Collect all stocks using selenium from bs4 import BeautifulSoup from urllib.request import Request from urllib.request import urlopen import csv import datetime, time from selenium import webdriver from selenium.webdriver.chrome.options import Options from webdriver_manager.chrome import ChromeDriverManager chrome_options = Options() chrome_options.add_argument("--headless") chrome_options.add_argument('--no-sandbox') chrome_options.add_argument('--disable-dev-shm-usage') # target_time='2020-01-01' # target_time=datetime.datetime.strptime(target_time, '%Y-%m-%d') # get news from aastock.com def get_news_aastock(ticker,postfix_url,newstype): # driver = webdriver.Chrome(ChromeDriverManager().install()) driver = webdriver.Chrome(ChromeDriverManager().install(),options=chrome_options) driver.implicitly_wait(20) # WebDriverWait wait = new WebDriverWait(driver, 10); prefix_url='http://www.aastocks.com/en/stocks/analysis/stock-aafn/' # postfix_url='/0/research-report/1' try: SCROLL_PAUSE_TIME = 2 fill_ticker=ticker.zfill(5) url=prefix_url+fill_ticker+postfix_url driver.get(url) last_height = driver.execute_script("return document.body.scrollHeight") while True: # Scroll down to bottom driver.execute_script("window.scrollTo(0, document.body.scrollHeight);") # Wait to load page time.sleep(SCROLL_PAUSE_TIME) # Calculate new scroll height and compare with last scroll height
new_height = driver.execute_script("return document.body.scrollHeight") if new_height == last_height: break last_height = new_height html=BeautifulSoup(driver.page_source, 'lxml') dates=html.findAll("div", {"class": "newstime4"}) news=html.findAll("div", {"class": "newshead4"}) idx=0 path=os.path.join(dir_name,'data-news/data-aastock-equities/'+'data-'+fill_ticker+'-aastock.csv') if (len(dates)>0): with open(path,'a') as f: writer = csv.writer(f) for i in dates: if "/" in str(i.get_text()): print(idx) date=str(i.get_text()) if "Release Time" in date: date=date[13:23] elif (date[0]==" "): date=str(date[1:11]) else: date=str(date[0:10]) # print(date) text=news[idx].get_text() date_time_obj = datetime.datetime.strptime(date, '%Y/%m/%d') # if(date_time_obj>=target_time): date_time=date_time_obj.strftime('%Y-%m-%d') # print([date_time,text]) # print(([date_time,text,ticker,'news'])) writer.writerow([date_time,text,ticker,newstype]) idx+=1 except Exception as e: print(e) pass driver.quit() # Collect all stocks using selenium within a specific day from bs4 import BeautifulSoup from urllib.request import Request from urllib.request import urlopen import csv import datetime, time from selenium import webdriver from selenium.webdriver.chrome.options import Options from webdriver_manager.chrome import ChromeDriverManager chrome_options = Options() chrome_options.add_argument("--headless") chrome_options.add_argument('no-sandbox') chrome_options.add_argument('-disable-dev-shm-usage') # target_time='2020-01-01' # target_time=datetime.datetime.strptime(target_time, '%Y-%m-%d') # get news from aastock.com def get_news_aastock_time(ticker,postfix_url,newstype,days): # driver = webdriver.Chrome(ChromeDriverManager().install()) driver = webdriver.Chrome(ChromeDriverManager().install(),options=chrome_options) driver.implicitly_wait(20) # WebDriverWait wait = new WebDriverWait(driver, 10); prefix_url='http://www.aastocks.com/en/stocks/analysis/stock-aafn/' # 
postfix_url='/0/research-report/1' try: SCROLL_PAUSE_TIME = 2 fill_ticker=ticker.zfill(5) url=prefix_url+fill_ticker+postfix_url driver.get(url) last_height = driver.execute_script("return document.body.scrollHeight") while True: # Scroll down to bottom driver.execute_script("window.scrollTo(0, document.body.scrollHeight);") # Wait to load page time.sleep(SCROLL_PAUSE_TIME) # Calculate new scroll height and compare with last scroll height new_height = driver.execute_script("return document.body.scrollHeight") if new_height == last_height: break last_height = new_height html=BeautifulSoup(driver.page_source, 'lxml') dates=html.findAll("div", {"class": "newstime4"}) news=html.findAll("div", {"class": "newshead4"}) idx=0 path=os.path.join(dir_name,'data-news/data-aastock-equities/'+'data-'+fill_ticker+'-aastock.csv') if (len(dates)>0): with open(path,'a') as f: writer = csv.writer(f) for i in dates: if "/" in str(i.get_text()): print(idx) date=str(i.get_text()) if "Release Time" in date: date=date[13:23] elif (date[0]==" "): date=str(date[1:11]) else: date=str(date[0:10]) # print(date) text=news[idx].get_text() date_time_obj = datetime.datetime.strptime(date, '%Y/%m/%d') if(datetime.datetime.now()-date_time_obj).days<=days: date_time=date_time_obj.strftime('%Y-%m-%d') print([date_time,text]) print(([date_time,text,ticker,'news'])) writer.writerow([date_time,text,ticker,newstype]) idx+=1 except Exception as e: print(e) pass driver.quit() # collect all the news in aastock for each ticker def collect_news (ticker): news_report_postfix_url='/0/research-report' news_result_postfix_url='/0/result-announcement' news_daily_postfix_url='/0/hk-stock-news' news_indus_postfix_url='/0/industry-news' get_news_aastock(ticker,news_daily_postfix_url,'news-daily') get_news_aastock(ticker,news_report_postfix_url,'news-report') get_news_aastock(ticker,news_result_postfix_url,'news-result') get_news_aastock(ticker,news_indus_postfix_url,'news-indus') for tickers in hkex_df: for ticker in 
tickers: try: print (ticker) collect_news(ticker) except Exception as e: print(e) pass # collect all the news in aastock for each ticker within a specific time def collect_news_time (ticker,days): news_report_postfix_url='/0/research-report' news_result_postfix_url='/0/result-announcement' news_daily_postfix_url='/0/hk-stock-news' news_indus_postfix_url='/0/industry-news' get_news_aastock_time(ticker,news_daily_postfix_url,'news-daily',days) get_news_aastock_time(ticker,news_report_postfix_url,'news-report',days) get_news_aastock_time(ticker,news_result_postfix_url,'news-result',days) get_news_aastock_time(ticker,news_indus_postfix_url,'news-indus',days) for tickers in hkex_df: for ticker in tickers: try: print (ticker) collect_news_time(ticker,45) except Exception as e: print(e) pass import os import datetime, time import pandas as pd #agg function for all equities in hkex def agg_news(hkex_df,df_hkex): for ticker in df_hkex: try: print(ticker) news_path=os.path.join(dir_name,'data-news/data-aastock-equities/'+'data-'+ticker.zfill(5)+'-aastock.csv') df = pd.read_csv(news_path,names=['dates','news','ticker','newstype']) hkex_df=hkex_df.append(df) print(hkex_df) except Exception as e: print(e) pass return hkex_df import csv # get agg news for all equities in hkex df_hkex= pd.DataFrame() df=agg_news(df_hkex,hkex_input) print(df) ```
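The date-parsing branch inside `get_news_aastock` handles three raw formats; pulling it out into a standalone helper (hypothetical name, same slicing logic as the code above) makes it easier to test:

```python
import datetime

def normalise_aastock_date(raw):
    # Mirrors the branching in get_news_aastock:
    #   "Release Time 2021/03/05 ..."       -> characters 13:23
    #   " 2021/03/05 ..." (leading space)   -> characters 1:11
    #   "2021/03/05 ..."                    -> characters 0:10
    if "Release Time" in raw:
        date = raw[13:23]
    elif raw[0] == " ":
        date = raw[1:11]
    else:
        date = raw[0:10]
    return datetime.datetime.strptime(date, "%Y/%m/%d").strftime("%Y-%m-%d")

print(normalise_aastock_date("Release Time 2021/03/05 10:00"))  # 2021-03-05
```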
``` import matplotlib.pyplot as plt import matplotlib as mpl import numpy as np import pandas as pd %matplotlib inline from matplotlib import rc, font_manager ticks_font = font_manager.FontProperties(family='serif', style='normal', size=24, weight='normal', stretch='normal') import ternary import copy import math import os import json class read_inputjson_BCC_screw_single_calculation: def __init__(self,fh): if os.path.isfile(fh): self.file = fh self.data = json.load(open(fh)) else: print('input file not in current directory') quit() self.model = self.data["model"] self.name = self.data["material"] # properties self.properties = self.data['properties'] # adjustable scalers self.adjustable_scalers = self.data['adjustables'] # exp conditions self.conditions = self.data['conditions'] # output file try: self.savefilename = self.data['savefile'] except: self.savefilename = self.data["material"] + '_out' class ss_model_M_C_screw: # BCC screw dislocation model: Maresca-Curtin 2019: https://doi.org/10.1016/j.actamat.2019.10.007 def __init__(self, inputdata ): # adjustable scalers self.kink_width = inputdata.adjustable_scalers['kink_width'] self.Delta_V_p_scaler = inputdata.adjustable_scalers['Delta_V_p_scaler'] self.Delta_E_p_scaler = inputdata.adjustable_scalers['Delta_E_p_scaler'] # some constants self.boltzmann_J = 1.38064852*10**(-23) #J/K self.boltzmann_eV = 8.617333262145e-5 #eV self.J2eV = 6.2415093433*10**18 # covert J to eV self.eV2J = 1/self.J2eV # properties self.a = inputdata.properties['a'] * 10**(-10) #m # lattice constant self.a_p = self.a*np.sqrt(2/3) # Peierls spacing self.b = self.a*np.sqrt(3)/2 self.E_k = inputdata.properties['E_k'] * self.eV2J # J # kink formation energy self.Delta_E_p = self.Delta_E_p_scaler * inputdata.properties['Delta_E_p'] * self.eV2J # J # screw-solute interaction self.Delta_V_p = self.Delta_V_p_scaler * inputdata.properties['Delta_V_p'] * self.eV2J /self.b# J/b # Peierls barrier self.E_si = inputdata.properties['E_si'] * self.eV2J 
#J # formation energy of self-interstitial self.E_v = inputdata.properties['E_v'] * self.eV2J #J # formation energy of vacancy # exp conditions self.T = np.arange(inputdata.conditions['temperature']['min'], inputdata.conditions['temperature']['max']+inputdata.conditions['temperature']['inc'], inputdata.conditions['temperature']['inc']) self.strain_r = inputdata.conditions['strain_r'] # strain rate self.strain_r_0 = 10**4 # reference strain rate 10^4 /s self.Delta_H = self.boltzmann_J * self.T * np.log(self.strain_r_0/self.strain_r) #activation enthalpy self.w_k = self.kink_width * self.b # kink width self.xi_c = (1.083*self.E_k/self.Delta_E_p)**2*self.b # characteristic length of dislocation segment self.xi_si = self.xi_c * 15 self.xi_v = self.xi_c * 7.5 def M_C_screw_model(self): # cross-kink # self-interstitial self.tau_xk_0_si = np.pi * self.E_si / (self.a_p * self.b * self.xi_si ) self.tau_xk_si = self.tau_xk_0_si * (1-(self.Delta_H/self.E_si)**(2/3)) # vacancy self.tau_xk_0_v = np.pi * self.E_v / (self.a_p * self.b * self.xi_v ) self.tau_xk_v = self.tau_xk_0_v * (1-(self.Delta_H/self.E_v)**(2/3)) # select the larger value from si or vacancy strengthening self.tau_xk_T = np.array([self.tau_xk_si[i] if self.tau_xk_si[i]>=self.tau_xk_v[i] else self.tau_xk_v[i] for i in range(len(self.T)) ]) # kink glide self.tau_b = 1.08 * self.E_k / (self.a_p * self.b * self.xi_c) self.tau_k_0 = 6.3 * self.Delta_E_p / (self.a_p * self.b**2 * np.sqrt(self.w_k/self.b)) + self.tau_b self.Delta_E_k_0 = 1.37 * np.sqrt(self.w_k/self.b) * self.Delta_E_p self.tau_k_low_T = self.tau_b + \ (self.tau_k_0 - self.tau_b) / \ (np.exp(0.89*self.Delta_H/self.Delta_E_k_0 + \ 0.5*(self.Delta_H/self.Delta_E_k_0)**(1/4) + 0.6)-1) self.tau_k_high_T = self.tau_b - \ (self.tau_k_0 - self.tau_b) * self.w_k / (5.75 * self.xi_c) * \ (self.Delta_H/self.Delta_E_k_0 - np.log(5.75*self.xi_c/self.w_k+1)) self.tau_k_T = np.array([self.tau_k_low_T[i] if (self.tau_k_low_T[i]-self.tau_b)/(self.tau_k_0 - 
self.tau_b)>= (1/(5.75 * self.xi_c/self.w_k + 1)) else self.tau_k_high_T[i] for i in range(len(self.T))]) # Peierls self.Delta_E_b_p = (10*self.Delta_V_p*self.xi_c + 0.7 * self.E_k)**3/\ (20*self.Delta_V_p*self.xi_c + 0.7 * self.E_k)**2 self.tau_p_0 = np.pi*self.Delta_V_p/(self.b*self.a_p ) + \ 0.44 * self.E_k / (self.b*self.a_p * self.xi_c) * \ ( 1 - 5* self.Delta_V_p*self.xi_c/(20*self.Delta_V_p*self.xi_c+0.7*self.E_k)) self.tau_p_T = self.tau_p_0 * (1-(self.Delta_H/self.Delta_E_b_p)**(2/3)) # min of Peierls and kink glide self.min_tau_k_tau_p_T = np.minimum(self.tau_p_T,self.tau_k_T) # total strength self.tau_tot_T = np.maximum(self.min_tau_k_tau_p_T,np.zeros(len(self.T))) + self.tau_xk_T def calculate(self): self.M_C_screw_model() def writedata(self): self.calc_data = pd.DataFrame(data={}) self.calc_data['T'] = self.T self.calc_data['tau_y'] = np.round(self.tau_tot_T/1e6,2) inputdata = read_inputjson_BCC_screw_single_calculation('../sample_input_NbMo_BCC_screw.json') model = ss_model_M_C_screw(inputdata) model.calculate() plt.plot(model.T, model.tau_tot_T/1e6,label='total') plt.grid() plt.legend() inputdata.data model.Delta_V_p inputdata_TiNbZr = read_inputjson_BCC_screw_single_calculation('../sample_input_TiNbZr_BCC_screw.json') model_TiNbZr = ss_model_M_C_screw(inputdata_TiNbZr) model_TiNbZr.calculate() plt.plot(model_TiNbZr.T, model_TiNbZr.tau_tot_T/1e6*3.07,label='total') plt.plot(model_TiNbZr.T, model_TiNbZr.tau_k_T/1e6*3.07,label='k') plt.plot(model_TiNbZr.T, model_TiNbZr.tau_p_T/1e6*3.07,label='p') plt.plot(model_TiNbZr.T, model_TiNbZr.tau_xk_T/1e6*3.07,label='xk') plt.ylim(0,1600) plt.grid() plt.legend() inputdata_NbW = read_inputjson_BCC_screw_single_calculation('../sample_input_NbW_BCC_screw.json') model_NbW = ss_model_M_C_screw(inputdata_NbW) model_NbW.calculate() plt.plot(model_NbW.T, model_NbW.tau_tot_T/1e6,label='total') plt.grid() plt.legend() a_Fe = 2.866 * 10**(-10) a_Fe = a_Fe*np.sqrt(2/3) # Peierls spacing b_Fe = a_Fe*np.sqrt(3)/2 
Delta_V_p_Fe = 0.010977778 * model.eV2J np.pi*Delta_V_p_Fe/b_Fe/(b_Fe*a_Fe)/10**6 ```
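The model above converts between eV and J repeatedly, which is a classic source of silent unit bugs; a quick consistency check of the constants hard-coded in `ss_model_M_C_screw`:

```python
# Constants as defined in the class above.
J2eV = 6.2415093433e18            # J -> eV
eV2J = 1.0 / J2eV                 # eV -> J
boltzmann_J = 1.38064852e-23      # k_B in J/K
boltzmann_eV = 8.617333262145e-5  # k_B in eV/K

# Round-trip check, and cross-check that k_B agrees in both unit systems.
assert abs(1.0 * eV2J * J2eV - 1.0) < 1e-12
assert abs(boltzmann_J * J2eV - boltzmann_eV) / boltzmann_eV < 1e-6
print("unit constants consistent")
```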
# Landscape Expansion Index More details on the wiki - https://github.com/worldbank/GOST_Urban/wiki/Landscape-Expansion-Index ``` import os, sys, logging, importlib import geojson, rasterio import rasterio.features import geopandas as gpd import pandas as pd import numpy as np from shapely.geometry import shape, GeometryCollection from shapely.wkt import loads from matplotlib import pyplot from rasterio.plot import show, show_hist #Import GOST urban functions sys.path.append("../../") import src.LEI as lei try: sys.path.append('../../../gostrocks/src/') import GOSTRocks.rasterMisc as rMisc except: print("gostrocks is required to clip GHSL to AOI and warp the WSF to a metres projection") # Define input variables input_folder = "/home/wb411133/temp/LOME" if not os.path.exists(input_folder): os.makedirs(input_folder) input_ghsl = os.path.join(input_folder, "GHSL.tif") # This section will extract GHSL data from the global file, if you have the GHSL for the AOI extracted # define above as input_ghsl if not os.path.exists(input_ghsl): # clip from global GHSL file ghsl_vrt = "/home/public/Data/GLOBAL/GHSL/ghsl.vrt" aoi = os.path.join(input_folder, 'Grand_lome_dissolve.shp') in_ghsl = rasterio.open(ghsl_vrt) inA = gpd.read_file(aoi) if not inA.crs == in_ghsl.crs: inA = inA.to_crs(in_ghsl.crs) rMisc.clipRaster(in_ghsl, inA, input_ghsl) # This calculates the change from 1990 and 2000 lei_raw = lei.calculate_LEI(input_ghsl, old_list = [5,6], new_list=[4]) lei_90_00 = pd.DataFrame(lei_raw, columns=['geometry', 'old', 'total']) lei_90_00['LEI'] = lei_90_00['old'] / lei_90_00['total'] lei_90_00.head() # This calculates the change from 2000 and 2014 lei_raw = lei.calculate_LEI(input_ghsl, old_list = [4,5,6], new_list=[3]) lei_00_14 = pd.DataFrame(lei_raw, columns=['geometry', 'old', 'total']) lei_00_14['LEI'] = lei_00_14['old'] / lei_00_14['total'] lei_00_14.head() importlib.reload(lei) #Calculate summaries of lei lei.summarize_LEI(lei_90_00, leap_val=0.05, exp_val=0.75)/1000000 
#Calculate summaries of lei lei.summarize_LEI(lei_00_14, leap_val=0.05, exp_val=0.75)/1000000 # write raw LEI results to file lei_90_00.to_csv(os.path.join(input_folder, "GHSL_LEI_90_00.csv")) lei_00_14.to_csv(os.path.join(input_folder, "GHSL_LEI_00_14.csv")) ``` # Re-run analysis using the World Settlement Footprint ``` # Define input variables input_folder = "/home/wb411133/temp/LOME" if not os.path.exists(input_folder): os.makedirs(input_folder) input_WSF = os.path.join(input_folder, "LOME_WSF.tif") input_WSF_proj = os.path.join(input_folder, "LOME_WSF_PROJ.tif") # This section will extract GHSL data from the global file, if you have the GHSL for the AOI extracted # define above as input_ghsl if not os.path.exists(input_WSF_proj): # clip from global GHSL file wsf = "/home/public/Data/GLOBAL/WSF/Togo/Togo_WSF_evolution.tif" aoi = os.path.join(input_folder, 'Grand_lome_dissolve.shp') in_ghsl = rasterio.open(wsf) inA = gpd.read_file(aoi) if not inA.crs == in_ghsl.crs: inA = inA.to_crs(in_ghsl.crs) rMisc.clipRaster(in_ghsl, inA, input_WSF) # WSF is stored in WGS84, making buffering and area calculations impossible. 
# Instead we will standardize to the GHSL raster ghsl_raster = os.path.join(input_folder, "GHSL.tif") in_wsf = rasterio.open(input_WSF) in_ghsl = rasterio.open(ghsl_raster) rMisc.standardizeInputRasters(in_wsf, in_ghsl, input_WSF_proj, 'C') # This calculates the change from 1990 and 2000 lei_raw = lei.calculate_LEI(input_WSF_proj, old_list = list(range(1985,1991)), new_list=list(range(1991,2001))) lei_90_00 = pd.DataFrame(lei_raw, columns=['geometry', 'old', 'total']) lei_90_00['LEI'] = lei_90_00['old'] / lei_90_00['total'] lei_90_00.head() # This calculates the change from 2000 and 2015 lei_raw = lei.calculate_LEI(input_WSF_proj, old_list = list(range(1985,2001)), new_list=list(range(2001,2016))) lei_00_14 = pd.DataFrame(lei_raw, columns=['geometry', 'old', 'total']) lei_00_14['LEI'] = lei_00_14['old'] / lei_00_14['total'] lei_00_14.head() #Calculate summaries of lei lei.summarize_LEI(lei_90_00, leap_val=0.05, exp_val=0.75)/1000000 #Calculate summaries of lei lei.summarize_LEI(lei_00_14, leap_val=0.05, exp_val=0.75)/1000000 # write raw LEI results to file lei_90_00.to_csv(os.path.join(input_folder, "WSF_LEI_90_00.csv")) lei_00_14.to_csv(os.path.join(input_folder, "WSF_LEI_00_14.csv")) ```
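The `leap_val` and `exp_val` thresholds passed to `summarize_LEI` split new growth into three classes by the LEI ratio (the share of a new patch's neighbourhood that was already built up). A minimal sketch of that rule, with hypothetical threshold semantics inferred from the calls above:

```python
def classify_lei(lei, leap_val=0.05, exp_val=0.75):
    # Low overlap with existing built-up area -> leapfrog growth;
    # high overlap -> infill; everything in between -> edge expansion.
    if lei < leap_val:
        return "Leapfrog"
    elif lei < exp_val:
        return "Expansion"
    else:
        return "Infill"

print([classify_lei(v) for v in (0.01, 0.40, 0.90)])
```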
# This is an even easier demo This demo is designed only just for showing how to use the pretrained model, nothing else. ``` import torch import torchvision from torchvision import transforms transform = transforms.Compose([ transforms.Resize(( 224,224)), transforms.ToTensor() ] ) class CarMakeModelClassifier(torch.nn.Module): def __init__(self, device): super(CarMakeModelClassifier, self).__init__() self.device = device self.make_classifier = torchvision.models.resnet34(pretrained=False, progress=True) self.make_classifier.fc = torch.nn.Linear(in_features = 512, out_features = 163) self.model_classifier = torchvision.models.resnet50(pretrained=False, progress=True) self.model_classifier.fc = torch.nn.Linear(in_features = 2048, out_features = 2004) self.make_classifier = self.make_classifier.to(device) self.model_classifier = self.model_classifier.to(device) def forward(self, x): make = self.make_classifier(x.to(self.device)) model = self.model_classifier(x.to(self.device)) return {'make':make, 'model':model} device = torch.device('cpu') classifier = CarMakeModelClassifier(device) optimizer_model = torch.optim.Adam(classifier.model_classifier.parameters()) optimizer_make = torch.optim.Adam(classifier.make_classifier.parameters()) criterion = torch.nn.CrossEntropyLoss() checkpoint = torch.load('checkpoint4/10_epoch.tar', map_location=torch.device('cpu')) classifier.make_classifier.load_state_dict(checkpoint['make_model_state_dict']) optimizer_make.load_state_dict(checkpoint['make_optimizer_state_dict']) classifier.model_classifier.load_state_dict(checkpoint['model_model_state_dict']) optimizer_model.load_state_dict(checkpoint['model_optimizer_state_dict']) import scipy.io as skio mapping_model = skio.loadmat('../Dataset/data/misc/make_model_name.mat')['model_names'][:, 0] mapping_make = skio.loadmat('../Dataset/data/misc/make_model_name.mat')['make_names'][:, 0] import matplotlib.pyplot as plt from PIL import Image def visualize_perf(imagefile, 
classifier,transform): classifier.eval() image = Image.open(imagefile) plt.imshow(image) plt.show() image_tensor = transform(image).unsqueeze_(0).to(device) # print(model) output = classifier(image_tensor) # print(type(output)) output_make = output['make'] output_model = output['model'] _, predicted_make = output_make.max(1) _, predicted_model = output_model.max(1) print('Predicted Make '+str(mapping_make[predicted_make.item()-1])) print('Predicted Model '+str(mapping_model[predicted_model.item()-1])) visualize_perf('/home/billy/Documents/CompCarDemo/Dataset/data/image/1/1101/2011/07b90decb92ba6.jpg', classifier, transform) ```
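A small aside on the `- 1` used when indexing `mapping_make` and `mapping_model` above: the predicted class ids appear to be 1-based while the loaded numpy name arrays are 0-based, so the offset is needed. A toy illustration (made-up names, not the real CompCars mapping):

```python
import numpy as np

mapping = np.array(["MakeA", "MakeB", "MakeC"])  # 0-based array of class names
predicted = 2  # a 1-based class id, as assumed in visualize_perf above
print(mapping[predicted - 1])  # MakeB
```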
# Hello, TensorFlow ## A beginner-level, getting started, basic introduction to TensorFlow TensorFlow is a general-purpose system for graph-based computation. A typical use is machine learning. In this notebook, we'll introduce the basic concepts of TensorFlow using some simple examples. TensorFlow gets its name from [tensors](https://en.wikipedia.org/wiki/Tensor), which are arrays of arbitrary dimensionality. A vector is a 1-d array and is known as a 1st-order tensor. A matrix is a 2-d array and a 2nd-order tensor. The "flow" part of the name refers to computation flowing through a graph. Training and inference in a neural network, for example, involves the propagation of matrix computations through many nodes in a computational graph. When you think of doing things in TensorFlow, you might want to think of creating tensors (like matrices), adding operations (that output other tensors), and then executing the computation (running the computational graph). In particular, it's important to realize that when you add an operation on tensors, it doesn't execute immediately. Rather, TensorFlow waits for you to define all the operations you want to perform. Then, TensorFlow optimizes the computation graph, deciding how to execute the computation, before generating the data. Because of this, a tensor in TensorFlow isn't so much holding the data as a placeholder for holding the data, waiting for the data to arrive when a computation is executed. ## Adding two vectors in TensorFlow Let's start with something that should be simple. Let's add two length four vectors (two 1st-order tensors): $\begin{bmatrix} 1. & 1. & 1. & 1.\end{bmatrix} + \begin{bmatrix} 2. & 2. & 2. & 2.\end{bmatrix} = \begin{bmatrix} 3. & 3. & 3. 
& 3.\end{bmatrix}$ ``` import tensorflow as tf with tf.Session(): input1 = tf.constant([1.0, 1.0, 1.0, 1.0]) input2 = tf.constant([2.0, 2.0, 2.0, 2.0]) output = tf.add(input1, input2) result = output.eval() print result ``` What we're doing is creating two vectors, [1.0, 1.0, 1.0, 1.0] and [2.0, 2.0, 2.0, 2.0], and then adding them. Here's equivalent code in raw Python and using numpy: ``` print [x + y for x, y in zip([1.0] * 4, [2.0] * 4)] import numpy as np x, y = np.full(4, 1.0), np.full(4, 2.0) print "{} + {} = {}".format(x, y, x + y) ``` ## Details of adding two vectors in TensorFlow The example above of adding two vectors involves a lot more than it seems, so let's look at it in more depth. >`import tensorflow as tf` This import brings TensorFlow's public API into our IPython runtime environment. >`with tf.Session():` When you run an operation in TensorFlow, you need to do it in the context of a `Session`. A session holds the computation graph, which contains the tensors and the operations. When you create tensors and operations, they are not executed immediately, but wait for other operations and tensors to be added to the graph, only executing when finally requested to produce the results of the session. Deferring the execution like this provides additional opportunities for parallelism and optimization, as TensorFlow can decide how to combine operations and where to run them after TensorFlow knows about all the operations. >>`input1 = tf.constant([1.0, 1.0, 1.0, 1.0])` >>`input2 = tf.constant([2.0, 2.0, 2.0, 2.0])` The next two lines create tensors using a convenience function called `constant`, which is similar to numpy's `array` and numpy's `full`. If you look at the code for `constant`, you can see the details of what it is doing to create the tensor. In summary, it creates a tensor of the necessary shape and applies the constant operator to it to fill it with the provided values. The values to `constant` can be Python or numpy arrays. 
`constant` can take an optional shape parameter, which works similarly to numpy's `fill` if provided, and an optional name parameter, which can be used to put a more human-readable label on the operation in the TensorFlow operation graph.

>>`output = tf.add(input1, input2)`

You might think `add` just adds the two vectors now, but it doesn't quite do that. What it does is put the `add` operation into the computational graph. The results of the addition aren't available yet. They've been put in the computation graph, but the computation graph hasn't been executed yet.

>>`result = output.eval()`

>>`print result`

`eval()` is also slightly more complicated than it looks. Yes, it does get the value of the vector (tensor) that results from the addition. It returns this as a numpy array, which can then be printed. But, it's important to realize it also runs the computation graph at this point, because we demanded the output from the operation node of the graph; to produce that, it had to run the computation graph. So, this is the point where the addition is actually performed, not when `add` was called, as `add` just put the addition operation into the TensorFlow computation graph.

## Multiple operations

To use TensorFlow, you add operations on tensors that produce tensors to the computation graph, then execute that graph to run all those operations and calculate the values of all the tensors in the graph. Here's a simple example with two operations:

```
import tensorflow as tf

with tf.Session():
    input1 = tf.constant(1.0, shape=[4])
    input2 = tf.constant(2.0, shape=[4])
    input3 = tf.constant(3.0, shape=[4])
    output = tf.add(tf.add(input1, input2), input3)
    result = output.eval()
    print result
```

This version uses `constant` in a way similar to numpy's `fill`, specifying the optional shape and having the values copied out across it.
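The build-then-run split described above can be mimicked in a few lines of plain Python, which may help make the deferred-execution idea concrete (a toy sketch, not TensorFlow's actual machinery):

```python
# A Node records an operation instead of executing it; eval() walks the graph.
class Node:
    def __init__(self, fn, *inputs):
        self.fn = fn
        self.inputs = inputs

    def eval(self):
        return self.fn(*[node.eval() for node in self.inputs])

def constant(value):
    # Leaf node: simply returns its value when the graph is run.
    return Node(lambda: value)

def add(a, b):
    # Records elementwise addition; nothing is computed until eval().
    return Node(lambda x, y: [i + j for i, j in zip(x, y)], a, b)

output = add(constant([1.0] * 4), constant([2.0] * 4))  # builds the graph only
print(output.eval())  # running the graph performs the addition
```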
The `add` operator supports operator overloading, so you could try writing it inline as `input1 + input2`, as well as experimenting with other operators.

```
with tf.Session():
    input1 = tf.constant(1.0, shape=[4])
    input2 = tf.constant(2.0, shape=[4])
    output = input1 + input2
    print output.eval()
```

## Adding two matrices

Next, let's do something very similar, adding two matrices:

$\begin{bmatrix} 1. & 1. & 1. \\ 1. & 1. & 1. \\ \end{bmatrix} + \begin{bmatrix} 1. & 2. & 3. \\ 4. & 5. & 6. \\ \end{bmatrix} = \begin{bmatrix} 2. & 3. & 4. \\ 5. & 6. & 7. \\ \end{bmatrix}$

```
import tensorflow as tf
import numpy as np

with tf.Session():
    input1 = tf.constant(1.0, shape=[2, 3])
    input2 = tf.constant(np.reshape(np.arange(1.0, 7.0, dtype=np.float32), (2, 3)))
    output = tf.add(input1, input2)
    print output.eval()
```

Recall that you can pass numpy or Python arrays into `constant`. In this example, the matrix with values from 1 to 6 is created in numpy and passed into `constant`, but TensorFlow also has `range`, `reshape`, and `tofloat` operators. Doing this entirely within TensorFlow could be more efficient if this were a very large matrix.

Try experimenting with this code a bit -- maybe modify some of the values, use the numpy version, add another operation, or do this using TensorFlow's `range` function.

## Multiplying matrices

Let's move on to matrix multiplication. This time, let's use a bit vector and some random values, which is a good step toward some of what we'll need to do for regression and neural networks.
``` #@test {"output": "ignore"} import tensorflow as tf import numpy as np with tf.Session(): input_features = tf.constant(np.reshape([1, 0, 0, 1], (1, 4)).astype(np.float32)) weights = tf.constant(np.random.randn(4, 2).astype(np.float32)) output = tf.matmul(input_features, weights) print "Input:" print input_features.eval() print "Weights:" print weights.eval() print "Output:" print output.eval() ``` Above, we're taking a 1 x 4 vector [1 0 0 1] and multiplying it by a 4 by 2 matrix full of random values from a normal distribution (mean 0, stdev 1). The output is a 1 x 2 matrix. You might try modifying this example. Running the cell multiple times will generate new random weights and a new output. Or, change the input, e.g., to \[0 0 0 1]), and run the cell again. Or, try initializing the weights using the TensorFlow op, e.g., `random_normal`, instead of using numpy to generate the random weights. What we have here is the basics of a simple neural network already. If we are reading in the input features, along with some expected output, and change the weights based on the error with the output each time, that's a neural network. ## Use of variables Let's look at adding two small matrices in a loop, not by creating new tensors every time, but by updating the existing values and then re-running the computation graph on the new data. This happens a lot with machine learning models, where we change some parameters each time such as gradient descent on some weights and then perform the same computations over and over again. ``` #@test {"output": "ignore"} import tensorflow as tf import numpy as np with tf.Session() as sess: # Set up two variables, total and weights, that we'll change repeatedly. total = tf.Variable(tf.zeros([1, 2])) weights = tf.Variable(tf.random_uniform([1,2])) # Initialize the variables we defined above. tf.initialize_all_variables().run() # This only adds the operators to the graph right now. 
The assignment # and addition operations are not performed yet. update_weights = tf.assign(weights, tf.random_uniform([1, 2], -1.0, 1.0)) update_total = tf.assign(total, tf.add(total, weights)) for _ in range(5): # Actually run the operation graph, so randomly generate weights and then # add them into the total. Order does matter here. We need to update # the weights before updating the total. sess.run(update_weights) sess.run(update_total) print weights.eval(), total.eval() ```

This is more complicated. At a high level, we create two variables and add operations over them, then, in a loop, repeatedly execute those operations. Let's walk through it step by step.

Starting off, the code creates two variables, `total` and `weights`. `total` is initialized to \[0, 0\] and `weights` is initialized to random values between -1 and 1.

Next, two assignment operators are added to the graph, one that updates weights with random values from [-1, 1], the other that updates the total with the new weights. Again, the operators are not executed here. In fact, this isn't even inside the loop. We won't execute these operations until the `run` calls inside the loop.

Finally, in the for loop, we run each of the operators. In each iteration of the loop, this executes the operators we added earlier, first putting random values into the weights, then updating the totals with the new weights. These calls use `run` on the session; the code could also have called `eval` on the operators (e.g. `update_weights.eval()`).

It can be a little hard to wrap your head around exactly what computation is done when. The important thing to remember is that computation is only performed on demand.

Variables can be useful in cases where you have a large amount of computation and data that you want to use over and over again with just a minor change to the input each time.
That happens quite a bit with neural networks, for example, where you just want to update the weights each time you go through the batches of input data, then run the same operations over again. ## What's next? This has been a gentle introduction to TensorFlow, focused on what TensorFlow is and the very basics of doing anything in TensorFlow. If you'd like more, the next tutorial in the series is Getting Started with TensorFlow, also available in the [notebooks directory](http://127.0.0.1:8888/tree).
``` from __future__ import division import os import urllib, cStringIO import pymongo as pm import numpy as np import scipy.stats as stats import pandas as pd import json import re from PIL import Image import base64 import sys import matplotlib from matplotlib import pylab, mlab, pyplot %matplotlib inline from IPython.core.pylabtools import figsize, getfigs plt = pyplot import seaborn as sns sns.set_context('talk') sns.set_style('white') import warnings warnings.filterwarnings("ignore", category=DeprecationWarning) warnings.filterwarnings("ignore", message="numpy.dtype size changed") warnings.filterwarnings("ignore", message="numpy.ufunc size changed") ``` ### setup ``` # directory & file hierarchy proj_dir = os.path.abspath('../../..') analysis_dir = os.getcwd() results_dir = os.path.join(proj_dir,'results') plot_dir = os.path.join(results_dir,'plots') csv_dir = os.path.join(results_dir,'csv') exp_dir = os.path.abspath(os.path.join(proj_dir,'experiments')) sketch_dir = os.path.abspath(os.path.join(proj_dir,'sketches')) ## add helpers to python path if os.path.join(proj_dir,'analysis','python') not in sys.path: sys.path.append(os.path.join(proj_dir,'analysis','python')) if not os.path.exists(results_dir): os.makedirs(results_dir) if not os.path.exists(plot_dir): os.makedirs(plot_dir) if not os.path.exists(csv_dir): os.makedirs(csv_dir) # Assign variables within imported analysis helpers import df_generation_helpers as h if sys.version_info[0]>=3: from importlib import reload reload(h) # set vars auth = pd.read_csv('auth.txt', header = None) # this auth.txt file contains the password for the sketchloop user pswd = auth.values[0][0] user = 'sketchloop' host = 'stanford-cogsci.org' ## cocolab ip address # have to fix this to be able to analyze from local import pymongo as pm conn = pm.MongoClient('mongodb://sketchloop:' + pswd + '@127.0.0.1') db = conn['3dObjects'] coll = db['graphical_conventions'] # which iteration name should we use? 
iterationName = 'run5_submitButton' ## list of researcher mturk worker ID's to ignore jefan = ['A1MMCS8S8CTWKU','A1MMCS8S8CTWKV','A1MMCS8S8CTWKS'] hawkrobe = ['A1BOIDKD33QSDK'] megsano = ['A1DVQQLVZR7W6I'] researchers = jefan + hawkrobe + megsano ## get total number of stroke and clickedObj events in the collection as a whole S = coll.find({ '$and': [{'iterationName':iterationName}, {'eventType': 'stroke'}]}).sort('time') C = coll.find({ '$and': [{'iterationName':iterationName}, {'eventType': 'clickedObj'}]}).sort('time') print str(S.count()) + ' stroke records in the database.' print str(C.count()) + ' clickedObj records in the database.' from IPython.display import clear_output reload(h) ## get list of all candidate games games = coll.find({'iterationName':iterationName}).distinct('gameid') ## get list of complete and valid games run5_complete_games = h.get_complete_and_valid_games(games,coll,iterationName,researchers=researchers, tolerate_undefined_worker=False) reload(h) ## generate actual dataframe and get only valid games (filtering out games with low accuracy, timeouts) D_run5 = h.generate_dataframe(coll, run5_complete_games, iterationName, csv_dir) ## filter crazies and add column D = h.find_crazies(D_run5) ## add features for recognition experiment D = h.add_recog_session_ids(D) D = h.add_distractors_and_shapenet_ids(D) # save out master dataframe D.to_csv(os.path.join(csv_dir, 'graphical_conventions_group_data_{}.csv'.format(iterationName)), index=False) ## load in run5 dataframe (no viewer timing feedback) D = pd.read_csv(os.path.join(csv_dir,'graphical_conventions_group_data_{}.csv'.format(iterationName))) D = h.preprocess_dataframe(D) ## columns to detect how many of each combination we ran D = D.assign(category_subset = pd.Series(D['category'] + D['subset'])) D = D.assign(category_subset_condition = pd.Series(D['category'] + D['subset'] + D['condition'])) #### load in original run3/run4 experiment (with viewer timing feedback) D0 =
pd.read_csv(os.path.join(csv_dir,'graphical_conventions_group_data_run3run4.csv')) D0 = h.preprocess_dataframe(D0) ### print how many games using each context combination have been run from collections import Counter d = [] w = [] g = [] for name, group in D.groupby('gameID'): d.append(sorted(group['category_subset_condition'].unique())[0]) w.append(sorted(group['category_subset_condition'].unique())[1]) g.append(name) C = pd.DataFrame([d,w,g]) C = C.transpose() C.columns = ['dining','waiting','gameID'] C = C.assign(version=C['dining']+C['waiting']) Counter(C.version.values) Counter(C.waiting.values) Counter(C.dining.values) # D.groupby(['gameID','condition'])['outcome'].mean() sns.set_context('talk') plt.figure(figsize=(14,4)) plt.subplot(121) plt.title('refgame 1.2') sns.lineplot(x='repetition',hue='condition',y='drawDuration',data=D0) plt.ylim(0,18) plt.subplot(122) plt.title('refgame 2.0') sns.lineplot(x='repetition',hue='condition',y='drawDuration',data=D) plt.ylim(0,18) # D.groupby(['gameID','condition'])['outcome'].mean() sns.set_context('talk') plt.figure(figsize=(14,4)) plt.subplot(121) plt.title('refgame 1.2') sns.lineplot(x='repetition',hue='condition',y='numStrokes',data=D0) plt.ylim(0,10) plt.subplot(122) plt.title('refgame 2.0') sns.lineplot(x='repetition',hue='condition',y='numStrokes',data=D) plt.ylim(0,10) # D.groupby(['gameID','condition'])['outcome'].mean() sns.set_context('talk') plt.figure(figsize=(14,4)) plt.subplot(121) plt.title('refgame 1.2') sns.lineplot(x='repetition',hue='condition',y='meanPixelIntensity',data=D0) plt.ylim(0,0.06) plt.subplot(122) plt.title('refgame 2.0') sns.lineplot(x='repetition',hue='condition',y='meanPixelIntensity',data=D) plt.ylim(0,0.06) # D.groupby(['gameID','condition'])['outcome'].mean() sns.set_context('talk') sns.set_style('whitegrid') plt.figure(figsize=(14,4)) plt.subplot(121) plt.title('refgame 1.2') sns.lineplot(x='repetition',hue='condition',y='outcome',data=D0) plt.ylim(0,1) plt.subplot(122) plt.title('refgame 2.0')
sns.lineplot(x='repetition',hue='condition',y='outcome',data=D) plt.ylim(0,1) fig = plt.figure(figsize=(4,4)) D2 = D.groupby(['gameID','condition'])['outcome'].mean().reset_index() D3 = D2.pivot(index='gameID',columns='condition',values='outcome').reset_index() D3.plot.scatter(x='repeated', y='control') plt.xlim(0,1.05) plt.ylim(0,1.05) plt.plot([0,1],[0,1],'k:') ## save out BIS version of dataframe reload(h) h.save_bis(D, csv_dir, iterationName) ## ['dining','waiting'] ## ['A','B'] ## ['repeated','control'] ```
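The per-game accuracy scatter above is built with a groupby-then-pivot pattern. Here is a self-contained sketch of that pattern on invented data (the game IDs and outcomes below are made up for illustration):

```python
import pandas as pd

# Toy trial-level data: one row per trial.
df = pd.DataFrame({
    'gameID':    ['g1'] * 4 + ['g2'] * 4,
    'condition': ['repeated', 'repeated', 'control', 'control'] * 2,
    'outcome':   [1, 1, 1, 0, 1, 0, 0, 0],
})

# Mean accuracy per game and condition, as in D.groupby(...).mean() above.
means = df.groupby(['gameID', 'condition'])['outcome'].mean().reset_index()

# One row per game, one column per condition: the shape needed for
# the repeated-vs-control scatter plot.
wide = means.pivot(index='gameID', columns='condition', values='outcome').reset_index()
```

The pivot step is what lets each game contribute a single (repeated, control) point to the scatter.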
## UCI Adult Data Set ### Dataset URL: https://archive.ics.uci.edu/ml/datasets/adult Predict whether income exceeds $50K/yr based on census data. Also known as "Census Income" dataset. ``` import shutil import math from datetime import datetime import multiprocessing import pandas as pd import numpy as np import tensorflow as tf from tensorflow import data from tensorflow.python.feature_column import feature_column print(tf.__version__) MODEL_NAME = 'census-model-01' TRAIN_DATA_FILES_PATTERN = 'data/adult.data.csv' TEST_DATA_FILES_PATTERN = 'data/adult.test.csv' RESUME_TRAINING = False PROCESS_FEATURES = True EXTEND_FEATURE_COLUMNS = True MULTI_THREADING = True ``` ## Define Dataset Metadata ``` HEADER = ['age', 'workclass', 'fnlwgt', 'education', 'education_num', 'marital_status', 'occupation', 'relationship', 'race', 'gender', 'capital_gain', 'capital_loss', 'hours_per_week', 'native_country', 'income_bracket'] HEADER_DEFAULTS = [[0], [''], [0], [''], [0], [''], [''], [''], [''], [''], [0], [0], [0], [''], ['']] NUMERIC_FEATURE_NAMES = ['age', 'education_num', 'capital_gain', 'capital_loss', 'hours_per_week'] CATEGORICAL_FEATURE_NAMES_WITH_VOCABULARY = { 'gender': ['Female', 'Male'], 'race': ['Amer-Indian-Eskimo', 'Asian-Pac-Islander', 'Black', 'Other', 'White'], 'education': ['Bachelors', 'HS-grad', '11th', 'Masters', '9th', 'Some-college', 'Assoc-acdm', 'Assoc-voc', '7th-8th', 'Doctorate', 'Prof-school', '5th-6th', '10th', '1st-4th', 'Preschool', '12th'], 'marital_status': ['Married-civ-spouse', 'Divorced', 'Married-spouse-absent', 'Never-married', 'Separated', 'Married-AF-spouse', 'Widowed'], 'relationship': ['Husband', 'Not-in-family', 'Wife', 'Own-child', 'Unmarried', 'Other-relative'], 'workclass': ['Self-emp-not-inc', 'Private', 'State-gov', 'Federal-gov', 'Local-gov', '?', 'Self-emp-inc', 'Without-pay', 'Never-worked'] } CATEGORICAL_FEATURE_NAMES_WITH_BUCKET_SIZE = { 'occupation': 50, 'native_country' : 100 } CATEGORICAL_FEATURE_NAMES =
list(CATEGORICAL_FEATURE_NAMES_WITH_VOCABULARY.keys()) + list(CATEGORICAL_FEATURE_NAMES_WITH_BUCKET_SIZE.keys()) FEATURE_NAMES = NUMERIC_FEATURE_NAMES + CATEGORICAL_FEATURE_NAMES TARGET_NAME = 'income_bracket' TARGET_LABELS = ['<=50K', '>50K'] WEIGHT_COLUMN_NAME = 'fnlwgt' UNUSED_FEATURE_NAMES = list(set(HEADER) - set(FEATURE_NAMES) - {TARGET_NAME} - {WEIGHT_COLUMN_NAME}) print("Header: {}".format(HEADER)) print("Numeric Features: {}".format(NUMERIC_FEATURE_NAMES)) print("Categorical Features: {}".format(CATEGORICAL_FEATURE_NAMES)) print("Target: {} - labels: {}".format(TARGET_NAME, TARGET_LABELS)) print("Unused Features: {}".format(UNUSED_FEATURE_NAMES)) ``` ## Load and Analyse Dataset ``` TRAIN_DATA_SIZE = 32561 TEST_DATA_SIZE = 16278 train_data = pd.read_csv(TRAIN_DATA_FILES_PATTERN, header=None, names=HEADER ) train_data.head(10) train_data.describe() ``` ### Compute Scaling Statistics for Numeric Columns ``` means = train_data[NUMERIC_FEATURE_NAMES].mean(axis=0) stdvs = train_data[NUMERIC_FEATURE_NAMES].std(axis=0) maxs = train_data[NUMERIC_FEATURE_NAMES].max(axis=0) mins = train_data[NUMERIC_FEATURE_NAMES].min(axis=0) df_stats = pd.DataFrame({"mean":means, "stdv":stdvs, "max":maxs, "min":mins}) df_stats.head(15) ``` ### Save Scaling Statistics ``` df_stats.to_csv(path_or_buf="data/adult.stats.csv", header=True, index=True) ``` ## Define Data Input Function ### a. Parsing and preprocessing logic ``` def parse_csv_row(csv_row): columns = tf.decode_csv(csv_row, record_defaults=HEADER_DEFAULTS) features = dict(zip(HEADER, columns)) for column in UNUSED_FEATURE_NAMES: features.pop(column) target = features.pop(TARGET_NAME) return features, target def process_features(features): capital_indicator = features['capital_gain'] > features['capital_loss'] features['capital_indicator'] = tf.cast(capital_indicator, dtype=tf.int32) return features ``` ### b. 
Data pipeline input function ``` def csv_input_fn(files_name_pattern, mode=tf.estimator.ModeKeys.EVAL, skip_header_lines=0, num_epochs=None, batch_size=200): shuffle = True if mode == tf.estimator.ModeKeys.TRAIN else False num_threads = multiprocessing.cpu_count() if MULTI_THREADING else 1 print("") print("* data input_fn:") print("================") print("Input file(s): {}".format(files_name_pattern)) print("Batch size: {}".format(batch_size)) print("Epoch Count: {}".format(num_epochs)) print("Mode: {}".format(mode)) print("Thread Count: {}".format(num_threads)) print("Shuffle: {}".format(shuffle)) print("================") print("") file_names = tf.matching_files(files_name_pattern) dataset = data.TextLineDataset(filenames=file_names) dataset = dataset.skip(skip_header_lines) if shuffle: dataset = dataset.shuffle(buffer_size=2 * batch_size + 1) dataset = dataset.batch(batch_size) dataset = dataset.map(lambda csv_row: parse_csv_row(csv_row), num_parallel_calls=num_threads) if PROCESS_FEATURES: dataset = dataset.map(lambda features, target: (process_features(features), target), num_parallel_calls=num_threads) dataset = dataset.repeat(num_epochs) iterator = dataset.make_one_shot_iterator() features, target = iterator.get_next() return features, target features, target = csv_input_fn(files_name_pattern="") print("Features in CSV: {}".format(list(features.keys()))) print("Target in CSV: {}".format(target)) ``` ## Define Feature Columns ### a. Load scaling params ``` df_stats = pd.read_csv("data/adult.stats.csv", header=0, index_col=0) df_stats['feature_name'] = NUMERIC_FEATURE_NAMES df_stats.head(10) ``` ### b. 
Create feature columns ``` def extend_feature_columns(feature_columns, hparams): age_buckets = tf.feature_column.bucketized_column( feature_columns['age'], boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65]) education_X_occupation = tf.feature_column.crossed_column( ['education', 'occupation'], hash_bucket_size=int(1e4)) age_buckets_X_race = tf.feature_column.crossed_column( [age_buckets, feature_columns['race']], hash_bucket_size=int(1e4)) native_country_X_occupation = tf.feature_column.crossed_column( ['native_country', 'occupation'], hash_bucket_size=int(1e4)) native_country_embedded = tf.feature_column.embedding_column( feature_columns['native_country'], dimension=hparams.embedding_size) occupation_embedded = tf.feature_column.embedding_column( feature_columns['occupation'], dimension=hparams.embedding_size) education_X_occupation_embedded = tf.feature_column.embedding_column( education_X_occupation, dimension=hparams.embedding_size) native_country_X_occupation_embedded = tf.feature_column.embedding_column( native_country_X_occupation, dimension=hparams.embedding_size) feature_columns['age_buckets'] = age_buckets feature_columns['education_X_occupation'] = education_X_occupation feature_columns['age_buckets_X_race'] = age_buckets_X_race feature_columns['native_country_X_occupation'] = native_country_X_occupation feature_columns['native_country_embedded'] = native_country_embedded feature_columns['occupation_embedded'] = occupation_embedded feature_columns['education_X_occupation_embedded'] = education_X_occupation_embedded feature_columns['native_country_X_occupation_embedded'] = native_country_X_occupation_embedded return feature_columns def standard_scaler(x, mean, stdv): return (x-mean)/(stdv) def maxmin_scaler(x, max_value, min_value): return (x-min_value)/(max_value-min_value) def get_feature_columns(hparams): numeric_columns = {} for feature_name in NUMERIC_FEATURE_NAMES: feature_mean = df_stats[df_stats.feature_name == feature_name]['mean'].values[0] 
feature_stdv = df_stats[df_stats.feature_name == feature_name]['stdv'].values[0] # bind this feature's statistics as default arguments now; a plain lambda would close over the loop variables, and every column would end up using the last feature's stats normalizer_fn = lambda x, mean=feature_mean, stdv=feature_stdv: standard_scaler(x, mean, stdv) numeric_columns[feature_name] = tf.feature_column.numeric_column(feature_name, normalizer_fn=normalizer_fn ) CONSTRUCTED_NUMERIC_FEATURES_NAMES = [] if PROCESS_FEATURES: for feature_name in CONSTRUCTED_NUMERIC_FEATURES_NAMES: numeric_columns[feature_name] = tf.feature_column.numeric_column(feature_name) categorical_column_with_vocabulary = \ {item[0]: tf.feature_column.categorical_column_with_vocabulary_list(item[0], item[1]) for item in CATEGORICAL_FEATURE_NAMES_WITH_VOCABULARY.items()} CONSTRUCTED_INDICATOR_FEATURES_NAMES = ['capital_indicator'] categorical_column_with_identity = {} for feature_name in CONSTRUCTED_INDICATOR_FEATURES_NAMES: categorical_column_with_identity[feature_name] = tf.feature_column.categorical_column_with_identity(feature_name, num_buckets=2, default_value=0) categorical_column_with_hash_bucket = \ {item[0]: tf.feature_column.categorical_column_with_hash_bucket(item[0], item[1], dtype=tf.string) for item in CATEGORICAL_FEATURE_NAMES_WITH_BUCKET_SIZE.items()} feature_columns = {} if numeric_columns is not None: feature_columns.update(numeric_columns) if categorical_column_with_vocabulary is not None: feature_columns.update(categorical_column_with_vocabulary) if categorical_column_with_identity is not None: feature_columns.update(categorical_column_with_identity) if categorical_column_with_hash_bucket is not None: feature_columns.update(categorical_column_with_hash_bucket) if EXTEND_FEATURE_COLUMNS: feature_columns = extend_feature_columns(feature_columns, hparams) return feature_columns feature_columns = get_feature_columns(tf.contrib.training.HParams(num_buckets=5,embedding_size=3)) print("Feature Columns: {}".format(feature_columns)) ``` ## Define a DNN Estimator Creation Function ### a.
Get wide and deep feature columns ``` def get_wide_deep_columns(): feature_columns = list(get_feature_columns(hparams).values()) dense_columns = list( filter(lambda column: isinstance(column, feature_column._NumericColumn) | isinstance(column, feature_column._EmbeddingColumn), feature_columns ) ) categorical_columns = list( filter(lambda column: isinstance(column, feature_column._VocabularyListCategoricalColumn) | isinstance(column, feature_column._IdentityCategoricalColumn) | isinstance(column, feature_column._BucketizedColumn), feature_columns) ) sparse_columns = list( filter(lambda column: isinstance(column,feature_column._HashedCategoricalColumn) | isinstance(column, feature_column._CrossedColumn), feature_columns) ) indicator_columns = list( map(lambda column: tf.feature_column.indicator_column(column), categorical_columns) ) deep_feature_columns = dense_columns + indicator_columns wide_feature_columns = categorical_columns + sparse_columns return wide_feature_columns, deep_feature_columns ``` ### b. Define the estimator ``` def create_DNNComb_estimator(run_config, hparams, print_desc=False): wide_feature_columns, deep_feature_columns = get_wide_deep_columns() estimator = tf.estimator.DNNLinearCombinedClassifier( n_classes=len(TARGET_LABELS), label_vocabulary=TARGET_LABELS, dnn_feature_columns = deep_feature_columns, linear_feature_columns = wide_feature_columns, weight_column=WEIGHT_COLUMN_NAME, dnn_hidden_units= hparams.hidden_units, dnn_optimizer= tf.train.AdamOptimizer(), dnn_activation_fn= tf.nn.relu, config= run_config ) if print_desc: print("") print("*Estimator Type:") print("================") print(type(estimator)) print("") print("*deep columns:") print("==============") print(deep_feature_columns) print("") print("wide columns:") print("=============") print(wide_feature_columns) print("") return estimator ``` ## 6. Run Experiment ### a. 
Set HParam and RunConfig ``` TRAIN_SIZE = TRAIN_DATA_SIZE NUM_EPOCHS = 100 BATCH_SIZE = 500 EVAL_AFTER_SEC = 60 TOTAL_STEPS = (TRAIN_SIZE/BATCH_SIZE)*NUM_EPOCHS hparams = tf.contrib.training.HParams( num_epochs = NUM_EPOCHS, batch_size = BATCH_SIZE, embedding_size = 4, hidden_units= [64, 32, 16], max_steps = TOTAL_STEPS ) model_dir = 'trained_models/{}'.format(MODEL_NAME) run_config = tf.estimator.RunConfig( log_step_count_steps=5000, tf_random_seed=19830610, model_dir=model_dir ) print(hparams) print("Model Directory:", run_config.model_dir) print("") print("Dataset Size:", TRAIN_SIZE) print("Batch Size:", BATCH_SIZE) print("Steps per Epoch:",TRAIN_SIZE/BATCH_SIZE) print("Total Steps:", TOTAL_STEPS) print("That is 1 evaluation step after each",EVAL_AFTER_SEC," training seconds") ``` ### b. Define TrainSpec and EvalSpec ``` train_spec = tf.estimator.TrainSpec( input_fn = lambda: csv_input_fn( TRAIN_DATA_FILES_PATTERN, mode = tf.estimator.ModeKeys.TRAIN, num_epochs=hparams.num_epochs, batch_size=hparams.batch_size ), max_steps=hparams.max_steps, hooks=None ) eval_spec = tf.estimator.EvalSpec( input_fn = lambda: csv_input_fn( TRAIN_DATA_FILES_PATTERN, mode=tf.estimator.ModeKeys.EVAL, num_epochs=1, batch_size=hparams.batch_size, ), throttle_secs = EVAL_AFTER_SEC, steps=None ) ``` ### c.
Run Experiment via train_and_evaluate ``` if not RESUME_TRAINING: print("Removing previous artifacts...") shutil.rmtree(model_dir, ignore_errors=True) else: print("Resuming training...") tf.logging.set_verbosity(tf.logging.INFO) time_start = datetime.utcnow() print("Experiment started at {}".format(time_start.strftime("%H:%M:%S"))) print(".......................................") estimator = create_DNNComb_estimator(run_config, hparams, True) tf.estimator.train_and_evaluate( estimator=estimator, train_spec=train_spec, eval_spec=eval_spec ) time_end = datetime.utcnow() print(".......................................") print("Experiment finished at {}".format(time_end.strftime("%H:%M:%S"))) print("") time_elapsed = time_end - time_start print("Experiment elapsed time: {} seconds".format(time_elapsed.total_seconds())) ``` ## Evaluate the Model ``` TRAIN_SIZE = TRAIN_DATA_SIZE TEST_SIZE = TEST_DATA_SIZE train_input_fn = lambda: csv_input_fn(files_name_pattern= TRAIN_DATA_FILES_PATTERN, mode= tf.estimator.ModeKeys.EVAL, batch_size= TRAIN_SIZE) test_input_fn = lambda: csv_input_fn(files_name_pattern= TEST_DATA_FILES_PATTERN, mode= tf.estimator.ModeKeys.EVAL, batch_size= TEST_SIZE) estimator = create_DNNComb_estimator(run_config, hparams) train_results = estimator.evaluate(input_fn=train_input_fn, steps=1) print() print("######################################################################################") print("# Train Measures: {}".format(train_results)) print("######################################################################################") test_results = estimator.evaluate(input_fn=test_input_fn, steps=1) print() print("######################################################################################") print("# Test Measures: {}".format(test_results)) print("######################################################################################") ``` ## Prediction ``` import itertools predict_input_fn = lambda: csv_input_fn(TEST_DATA_FILES_PATTERN, mode= 
tf.estimator.ModeKeys.PREDICT, batch_size= 10) predictions = list(itertools.islice(estimator.predict(input_fn=predict_input_fn),10)) print("") print("* Predicted Classes: {}".format(list(map(lambda item: item["class_ids"][0] ,predictions)))) print("* Predicted Probabilities: {}".format(list(map(lambda item: list(item["probabilities"]) ,predictions)))) ```
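The numeric columns in this model are normalized with `standard_scaler`, i.e. ordinary z-scoring with statistics computed on the training split (which is why `df_stats` is saved to and reloaded from `adult.stats.csv`). A self-contained sketch with toy values standing in for the census columns:

```python
import pandas as pd

def standard_scaler(x, mean, stdv):
    # Same definition as in the notebook: z-score with training-set statistics.
    return (x - mean) / stdv

# Toy "training" column standing in for one numeric census feature.
train = pd.Series([20.0, 30.0, 40.0, 50.0])
mean, stdv = train.mean(), train.std()

scaled = standard_scaler(train, mean, stdv)   # roughly zero mean, unit spread

# At serving time a new value must be scaled with the stored *training*
# statistics, not statistics recomputed on the new data.
new_value = standard_scaler(45.0, mean, stdv)
```

Keeping the training statistics fixed is what makes training-time and serving-time inputs comparable.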
``` import pandas as pd from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score, confusion_matrix data_dir = '../data/' out_dir = '../data/output/' annotated_out_dir = '../data/output/annotated_output/' output_validation_data = pd.read_excel(data_dir + 'DrugVisData - All Annotations V2.xlsx',\ sheet_name = 'DrugVisData - Copy') output_data_1 = pd.read_csv(out_dir + 'DrugVisData_Negated_Output.csv') output_data_2 = pd.read_excel(out_dir + 'DrugVisData_Negated_Output_2.xlsx', sheet_name='DrugVisData_Negated_Output_2') output_data_3 = pd.read_excel(out_dir + 'DrugVisData_Negated_Output_Parsed.xlsx', sheet_name='DrugVisData_Negated_Output_Pars') output_data_4 = pd.read_csv(out_dir + 'DrugVisdata_Negation_Output_Parsed_D.csv') output_data_5 = pd.read_csv(out_dir + 'DrugVisdata_Negation_Output_Negspacy.csv') output_data_6 = pd.read_csv(annotated_out_dir + 'compare_1_step.csv') output_data_7 = pd.read_csv(annotated_out_dir + 'compare_2_step.csv') output_data_1 = output_data_1.join(output_validation_data[['Unnamed: 0','annotation_expert_1','Treatment']].set_index('Unnamed: 0'),\ on='Unnamed: 0', how = 'inner',\ lsuffix = '_out', rsuffix = '_val')\ .dropna(subset=['annotation_expert_1']) output_data_2 = output_data_2.join(output_validation_data[['Unnamed: 0','annotation_expert_1','Treatment']].set_index('Unnamed: 0'),\ on='Unnamed: 0', how = 'inner',\ lsuffix = '_out', rsuffix = '_val')\ .dropna(subset=['annotation_expert_1']) output_data_3 = output_data_3.join(output_validation_data[['Unnamed: 0','annotation_expert_1','Treatment']].set_index('Unnamed: 0'),\ on='Unnamed: 0', how = 'inner',\ lsuffix = '_out', rsuffix = '_val')\ .dropna(subset=['annotation_expert_1']) output_data_4 = output_data_4.join(output_validation_data[['Unnamed: 0','annotation_expert_1','Treatment']].set_index('Unnamed: 0'),\ on='Unnamed: 0', how = 'inner',\ lsuffix = '_out', rsuffix = '_val')\ .dropna(subset=['annotation_expert_1']) output_data_5 = 
output_data_5.join(output_validation_data[['Unnamed: 0','annotation_expert_1','Treatment']].set_index('Unnamed: 0'),\ on='Unnamed: 0', how = 'inner',\ lsuffix = '_out', rsuffix = '_val')\ .dropna(subset=['annotation_expert_1']) output_data_6 = output_data_6.join(output_validation_data[['sentence','annotation_expert_1','drug','Treatment']].set_index('sentence'),\ on='sentence', how = 'inner',\ lsuffix = '_out', rsuffix = '_val')\ .dropna(subset=['annotation_expert_1']) output_data_7 = output_data_7.join(output_validation_data[['sentence','annotation_expert_1','drug','Treatment']].set_index('sentence'),\ on='sentence', how = 'inner',\ lsuffix = '_out', rsuffix = '_val')\ .dropna(subset=['annotation_expert_1']) output_data_6['preds'] = [1 if p>=0.5 else 0 for p in output_data_6['preds']] output_data_7['preds'] = [1 if p>=0.5 else 0 for p in output_data_7['preds']] output_data_1.to_csv(annotated_out_dir + 'DrugVisData_Negated_Output.csv') output_data_2.to_csv(annotated_out_dir + 'DrugVisData_Negated_Output_2.csv') output_data_3.to_csv(annotated_out_dir + 'DrugVisData_Negated_Output_Parsed.csv') output_data_4.to_csv(annotated_out_dir + 'DrugVisdata_Negation_Output_Parsed_D.csv') output_data_5.to_csv(annotated_out_dir + 'DrugVisdata_Negation_Output_Negspacy.csv') print(output_data_1['annotation_expert_1'].value_counts()) print(output_data_2['annotation_expert_1'].value_counts()) print(output_data_3['annotation_expert_1'].value_counts()) print(output_data_4['annotation_expert_1'].value_counts()) print(output_data_5['annotation_expert_1'].value_counts()) output_data_3['Is_Negated_Parsed'].value_counts() ``` ## Confusion Matrix ``` print('Confusion Matrix for model using trigger terms V1 (using StanfordCoreNLP to tokenize):\n') print(confusion_matrix(output_data_1['annotation_expert_1'], output_data_1['Is_Negated'])) print('Confusion Matrix for model using trigger terms V1 (without StanfordCoreNLP to tokenize):\n') 
print(confusion_matrix(output_data_2['annotation_expert_1'], output_data_2['Is_Negated'])) print('Confusion Matrix for model using trigger terms and syntactic parsing to identify negated part\nChecks if negated part contains drug name (done in excel):\n') print(confusion_matrix(output_data_3['annotation_expert_1'], output_data_3['Is_Negated_Parsed'])) print('Confusion Matrix for model using scispacy dependency parsing:\n') print(confusion_matrix(output_data_4['annotation_expert_1'], output_data_4['Is_Negated'])) print('Confusion Matrix for model using scispacy entity recognition and negspacy:\n') print(confusion_matrix(output_data_5['annotation_expert_1'], output_data_5['Is_Negated'])) ``` ## Precision, Recall and Accuracy ``` print('Model using trigger terms V1 (using StanfordCoreNLP to tokenize):\n') print('Overall accuracy: '\ + str(accuracy_score(output_data_1['annotation_expert_1'], output_data_1['Is_Negated'] ))) print('Precision: '\ + str(precision_score(output_data_1['annotation_expert_1'], output_data_1['Is_Negated'] ))) print('Recall: '\ + str(recall_score(output_data_1['annotation_expert_1'], output_data_1['Is_Negated'] ))) print('F1 score: '\ + str(f1_score(output_data_1['annotation_expert_1'], output_data_1['Is_Negated'] ))) print('\n') print('Model using trigger terms V1 (without StanfordCoreNLP to tokenize):\n') print('Overall accuracy: '\ + str(accuracy_score(output_data_2['annotation_expert_1'], output_data_2['Is_Negated'] ))) print('Precision: '\ + str(precision_score(output_data_2['annotation_expert_1'], output_data_2['Is_Negated'] ))) print('Recall: '\ + str(recall_score(output_data_2['annotation_expert_1'], output_data_2['Is_Negated'] ))) print('F1 score: '\ + str(f1_score(output_data_2['annotation_expert_1'], output_data_2['Is_Negated'] ))) print('\n') print('Model using trigger terms and syntactic parsing to identify negated part\nChecks if negated part contains drug name (done in excel):\n') print('Overall accuracy: '\ + 
str(accuracy_score(output_data_3['annotation_expert_1'], output_data_3['Is_Negated_Parsed'] ))) print('Precision: '\ + str(precision_score(output_data_3['annotation_expert_1'], output_data_3['Is_Negated_Parsed'] ))) print('Recall: '\ + str(recall_score(output_data_3['annotation_expert_1'], output_data_3['Is_Negated_Parsed'] ))) print('F1 score: '\ + str(f1_score(output_data_3['annotation_expert_1'], output_data_3['Is_Negated_Parsed'] ))) print('\n') print('Model using scispacy dependency parsing:\n') print('Overall accuracy: '\ + str(accuracy_score(output_data_4['annotation_expert_1'], output_data_4['Is_Negated'] ))) print('Precision: '\ + str(precision_score(output_data_4['annotation_expert_1'], output_data_4['Is_Negated'] ))) print('Recall: '\ + str(recall_score(output_data_4['annotation_expert_1'], output_data_4['Is_Negated'] ))) print('F1 score: '\ + str(f1_score(output_data_4['annotation_expert_1'], output_data_4['Is_Negated'] ))) print('\n') print('Model using scispacy entity recognition and negspacy:\n') print('Overall accuracy: '\ + str(accuracy_score(output_data_5['annotation_expert_1'], output_data_5['Is_Negated'] ))) print('Precision: '\ + str(precision_score(output_data_5['annotation_expert_1'], output_data_5['Is_Negated'] ))) print('Recall: '\ + str(recall_score(output_data_5['annotation_expert_1'], output_data_5['Is_Negated'] ))) print('F1 score: '\ + str(f1_score(output_data_5['annotation_expert_1'], output_data_5['Is_Negated'] ))) ``` ## Look for patterns in mis-classified sentences ``` print('\nMedian length:'+str(output_data_1.sentence.str.len().median())) print('75% percentile length:'+str(output_data_1.sentence.str.len().quantile(.75))) print('Max length:'+str(output_data_1.sentence.str.len().max())) print('Model using trigger terms V1 (using StanfordCoreNLP to tokenize):\n') print('Total misclassifications:') misclass_1 = output_data_1.loc[output_data_1.annotation_expert_1 != 
output_data_1.Is_Negated,['sentence','drug','Treatment','annotation_expert_1','Is_Negated']].drop_duplicates() print(misclass_1[['Treatment','sentence']].groupby('Treatment').count().sort_values(by='sentence',ascending=False)) print('\nFalse positives:') print(misclass_1.loc[misclass_1.Is_Negated==1,['Treatment','sentence']].groupby('Treatment').count().sort_values(by='sentence',ascending=False)) print('\nFalse negatives:') print(misclass_1.loc[misclass_1.Is_Negated==0,['Treatment','sentence']].groupby('Treatment').count().sort_values(by='sentence',ascending=False)) print('\nMedian length:'+str(misclass_1.sentence.str.len().median())) print('75% percentile length:'+str(misclass_1.sentence.str.len().quantile(.75))) print('Max length:'+str(misclass_1.sentence.str.len().max())) #Remove cases with multiple sentences and check again print('Model using scispacy entity recognition and negspacy:\n') print('Total misclassifications:') misclass_5 = output_data_5.loc[output_data_5.annotation_expert_1 != output_data_5.Is_Negated,['sentence','drug','Treatment','annotation_expert_1','Is_Negated']].drop_duplicates() print(misclass_5[['Treatment','sentence']].groupby('Treatment').count().sort_values(by='sentence',ascending=False)) print('\nFalse positives:') print(misclass_5.loc[misclass_5.Is_Negated==1,['Treatment','sentence']].groupby('Treatment').count().sort_values(by='sentence',ascending=False)) print('\nFalse negatives:') print(misclass_5.loc[misclass_5.Is_Negated==0,['Treatment','sentence']].groupby('Treatment').count().sort_values(by='sentence',ascending=False)) print('\nMedian length:'+str(misclass_5.sentence.str.len().median())) print('75% percentile length:'+str(misclass_5.sentence.str.len().quantile(.75))) print('Max length:'+str(misclass_5.sentence.str.len().max())) #Remove cases with multiple sentences and check again print('Model using Roberta Base fine tuned on Biocorpus abstracts:\n') print('Total misclassifications:') misclass_6 = 
output_data_6.loc[output_data_6.annotation_expert_1 != output_data_6.preds,['sentence','drug','Treatment','annotation_expert_1','preds']].drop_duplicates() print(misclass_6[['Treatment','sentence']].groupby('Treatment').count().sort_values(by='sentence',ascending=False)) print('\nFalse positives:') print(misclass_6.loc[misclass_6.preds==1,['Treatment','sentence']].groupby('Treatment').count().sort_values(by='sentence',ascending=False)) print('\nFalse negatives:') print(misclass_6.loc[misclass_6.preds==0,['Treatment','sentence']].groupby('Treatment').count().sort_values(by='sentence',ascending=False)) print('\nMedian length:'+str(misclass_6.sentence.str.len().median())) print('75% percentile length:'+str(misclass_6.sentence.str.len().quantile(.75))) print('Max length:'+str(misclass_6.sentence.str.len().max())) #Remove cases with multiple sentences and check again print('Model using Roberta Base fine tuned on Biocorpus abstracts = annotated DrugVisData:\n') print('Total misclassifications:') misclass_7 = output_data_7.loc[output_data_7.annotation_expert_1 != output_data_7.preds,['sentence','drug','Treatment','annotation_expert_1','preds']].drop_duplicates() print(misclass_7[['Treatment','sentence']].groupby('Treatment').count().sort_values(by='sentence',ascending=False)) print('\nFalse positives:') print(misclass_7.loc[misclass_7.preds==1,['Treatment','sentence']].groupby('Treatment').count().sort_values(by='sentence',ascending=False)) print('\nFalse negatives:') print(misclass_7.loc[misclass_7.preds==0,['Treatment','sentence']].groupby('Treatment').count().sort_values(by='sentence',ascending=False)) print('\nMedian length:'+str(misclass_7.sentence.str.len().median())) print('75% percentile length:'+str(misclass_7.sentence.str.len().quantile(.75))) print('Max length:'+str(misclass_7.sentence.str.len().max())) #Remove cases with multiple sentences and check again ```
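All of the scores reported above are straight scikit-learn calls; here is a minimal reproducible sketch with hand-made labels (both arrays below are invented for illustration, not the annotation data):

```python
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score)

# Invented gold annotations and model predictions.
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]

cm = confusion_matrix(y_true, y_pred)   # rows = true class, cols = predicted
acc = accuracy_score(y_true, y_pred)    # (TP + TN) / total
prec = precision_score(y_true, y_pred)  # TP / (TP + FP)
rec = recall_score(y_true, y_pred)      # TP / (TP + FN)
f1 = f1_score(y_true, y_pred)           # harmonic mean of precision and recall
```

With these toy labels there are 3 true positives, 1 false positive, and 1 false negative, so precision, recall, and F1 all come out to 0.75.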
``` import sys, os, glob, warnings, logging import numpy as np import pandas as pd import matplotlib import matplotlib.pyplot as plt import seaborn as sns from scipy import stats from sw_plotting import change_bar_width, plotCountBar from sw_utilities import tukeyTest # logging.basicConfig(stream=sys.stdout, format='%(asctime)s - %(levelname)s - %(message)s', level=logging.DEBUG) logging.basicConfig(stream=sys.stdout, format='%(asctime)s - %(levelname)s - %(message)s', level=logging.INFO) # ignore warnings warnings.filterwarnings('ignore') # plotting configuration font = {'family' : 'Arial', 'size' : 7} matplotlib.rc('font', **font) plt.rcParams['svg.fonttype'] = 'none' # Make a folder if it is not already there to store exported figures !mkdir ../jupyter_figures # Bud count data for single bud culture from dissected E13 SMG epithelial rudiments df = pd.read_csv('../data/SMG-bud-count-single-bud-single-cell-culture/20200917-E13epi-K14-RFP-single-bud-zStack-day1-budCount.txt') df.columns = ['file_name', 'bud_count'] df['time'] = ['day1'] * len(df) df1 = df df = pd.read_csv('../data/SMG-bud-count-single-bud-single-cell-culture/20200918-E13epi-K14-RFP-single-bud-zStack-day2-budCount.txt') df.columns = ['file_name', 'bud_count'] df['time'] = ['day2'] * len(df) df2 = df df = pd.concat([df1, df2]) # df.head() # calculate bud count ratios # day 0, all bud count is 1, so ratio is the say as day 1 counts ratio_d1to0 = df[df.time=='day1']['bud_count'].values ratio_d2to1 = df[df.time=='day2']['bud_count'].values / df[df.time=='day1']['bud_count'].values # annoate ratios with group id ratios = ratio_d1to0.tolist() + ratio_d2to1.tolist() groups = ['ratio_d1to0']*len(ratio_d1to0) + ['ratio_d2to1']*len(ratio_d2to1) # quick visulization sns.swarmplot(groups, ratios) # temporary storage variables ratios1 = ratios groups1 = groups # Raw bud count plotting outputPrefix = '20200916-18-SMG-single-bud-culture-bud-count' outputFigPath = "../jupyter_figures/" + outputPrefix + ".svg" 
fig_width=0.5 fig_height=0.9 fig = plt.figure(figsize=(fig_width,fig_height), dpi=300) ax = fig.add_axes([0.1, 0.1, 0.8, 0.8]) ax = sns.swarmplot(x='time', y='bud_count', data=df, color="blue", size=2.5, alpha=.6) ax = sns.barplot(x='time', y='bud_count', data=df, color=".7", alpha=1.0, errwidth=.7, errcolor="k", capsize=.2, ci=95) plt.ylim(0, 22) plt.yticks([0, 10, 20]) plt.xlabel(None) plt.ylabel("Bud count") # make the bar width narrower change_bar_width(ax, .6) # ax.set_xticklabels(labels=plot_order, rotation=45, ha="right") for o in fig.findobj(): o.set_clip_on(False) for o in ax.findobj(): o.set_clip_on(False) if outputFigPath is not None: plt.savefig(outputFigPath) # Bud count data for single cell culture from dissected E13 SMG epithelial rudiments df = pd.read_csv('../data/SMG-bud-count-single-bud-single-cell-culture/20200919-E13epi-K14-RFP-single-cell-unsorted-zStack-day1-budCount.txt') df.columns = ['file_name', 'bud_count'] df['time'] = ['day1'] * len(df) df1 = df df = pd.read_csv('../data/SMG-bud-count-single-bud-single-cell-culture/20200920-E13epi-K14-RFP-single-cell-unsorted-zStack-day2-budCount.txt') df.columns = ['file_name', 'bud_count'] df['time'] = ['day2'] * len(df) df2 = df df = pd.concat([df1, df2]) # df.head() # calculate bud count ratios # day 0, all bud count is 1, so ratio is the say as day 1 counts ratio_d1to0 = df[df.time=='day1']['bud_count'].values ratio_d2to1 = df[df.time=='day2']['bud_count'].values / df[df.time=='day1']['bud_count'].values # annoate ratios with group id ratios = ratio_d1to0.tolist() + ratio_d2to1.tolist() groups = ['ratio_d1to0']*len(ratio_d1to0) + ['ratio_d2to1']*len(ratio_d2to1) # quick visulization sns.swarmplot(groups, ratios) # temporary storage variables ratios2 = ratios groups2 = groups outputPrefix = '20200916-18-SMG-single-cell-culture-bud-count' outputFigPath = "../jupyter_figures/" + outputPrefix + ".svg" fig_width=0.5 fig_height=0.9 fig = plt.figure(figsize=(fig_width,fig_height), dpi=300) ax = 
fig.add_axes([0.1, 0.1, 0.8, 0.8]) ax = sns.swarmplot(x='time', y='bud_count', data=df, color="blue", size=2.5, alpha=.6) ax = sns.barplot(x='time', y='bud_count', data=df, color=".7", alpha=1.0, errwidth=.7, errcolor="k", capsize=.2, ci=95) plt.ylim(0, 22) plt.yticks([0, 10, 20]) plt.xlabel(None) plt.ylabel("Bud count") # make the bar width narrower change_bar_width(ax, .6) # ax.set_xticklabels(labels=plot_order, rotation=45, ha="right") for o in fig.findobj(): o.set_clip_on(False) for o in ax.findobj(): o.set_clip_on(False) if outputFigPath is not None: plt.savefig(outputFigPath) # Bud count data for intact culture for direct comparison df = pd.read_csv('../data/SMG-bud-count-single-bud-single-cell-culture/20210128-30-SMG-intact-culture-for-comparison-4104-budCount.txt', sep='\t') df.columns = ['file_name', 'bud_count'] # annotate experimental groups # group 1: day 0 from ~E13 stage (13 total) # 6 glands from 1 mouse and 7 glands from another # group 2: day 0 from ~E12 stage (9 total) # 5 glands from 1 mouse and 4 glands from another # groups 3-4: above glands in order, day 1 # groups 5-6: above glands in order, day 2 df['initial_stage'] = ['E13'] * 13 + ['E12'] * 9 + ['E13'] * 13 + ['E12'] * 9 + ['E13'] * 13 + ['E12'] * 9 df['time'] = ['day0'] * 22 + ['day1'] * 22 + ['day2'] * 22 df['group'] = ['E13_d0'] * 13 + ['E12_d0'] * 9 + ['E13_d1'] * 13 + ['E12_d1'] * 9 + ['E13_d2'] * 13 + ['E12_d2'] * 9 # seperate out the 2 groups from different initial stage df_E13 = df[df.initial_stage=='E13'] df_E12 = df[df.initial_stage=='E12'] df.head() # calculate bud count ratios ratio_d1to0 = df[df.time=='day1']['bud_count'].values / df[df.time=='day0']['bud_count'].values ratio_d2to1 = df[df.time=='day2']['bud_count'].values / df[df.time=='day1']['bud_count'].values # annoate ratios with group id ratios = ratio_d1to0.tolist() + ratio_d2to1.tolist() groups = ['ratio_d1to0']*len(ratio_d1to0) + ['ratio_d2to1']*len(ratio_d2to1) # quick visulization sns.swarmplot(groups, ratios) # 
temporary storage variables ratios3 = ratios groups3 = groups # concatenate all ratios_all = ratios3 + ratios1 + ratios2 groups_all = groups3 + groups1 + groups2 experiment_groups = ['intact']*len(ratios3) + ['single_bud']*len(ratios1) + ['single_cell']*len(ratios2) group_id = [experiment_groups[i] + '_' + groups_all[i] for i in range(len(groups_all))] # quick visulization sns.swarmplot(group_id, ratios_all) tukeyTest(ratios_all, group_id) pd.unique(group_id) outputPrefix = 'SMG-intact-single-bud-single-cell-bud-ratios' outputFigPath = "../jupyter_figures/" + outputPrefix + ".svg" fig_width=1.5 fig_height=0.9 fig = plt.figure(figsize=(fig_width,fig_height), dpi=300) ax = fig.add_axes([0.1, 0.1, 0.8, 0.8]) plotting_order = ['intact_ratio_d1to0', 'single_bud_ratio_d1to0', 'single_cell_ratio_d1to0', 'intact_ratio_d2to1', 'single_bud_ratio_d2to1', 'single_cell_ratio_d2to1'] ax = sns.swarmplot(x=group_id, y=ratios_all, order=plotting_order, color="blue", size=2.0, alpha=.5) ax = sns.barplot(x=group_id, y=ratios_all, order=plotting_order, color=".7", alpha=1.0, errwidth=.7, errcolor="k", capsize=.2, ci=95) plt.ylim(0, 12) # plt.yticks([0, 5, 10]) plt.xlabel(None) plt.ylabel("Bud ratios") # make the bar width narrower change_bar_width(ax, .5) # rotate x tick labels if necessary # x_labels = ax.get_xticklabels() x_labels = ['intact culture', 'single bud', 'single cell', 'intact culture', 'single bud', 'single cell'] ax.set_xticklabels(labels=x_labels, rotation=30, ha="right") for o in fig.findobj(): o.set_clip_on(False) for o in ax.findobj(): o.set_clip_on(False) if outputFigPath is not None: plt.savefig(outputFigPath) outputPrefix = 'SMG-intact-single-bud-single-cell-bud-ratios-only-day2-to-1' outputFigPath = "../jupyter_figures/" + outputPrefix + ".svg" fig_width=0.5 fig_height=0.9 fig = plt.figure(figsize=(fig_width,fig_height), dpi=300) ax = fig.add_axes([0.1, 0.1, 0.8, 0.8]) plotting_order = ['intact_ratio_d2to1', 'single_bud_ratio_d2to1', 'single_cell_ratio_d2to1'] 
ax = sns.swarmplot(x=group_id, y=ratios_all, order=plotting_order, color="blue", size=2.0, alpha=.5) ax = sns.barplot(x=group_id, y=ratios_all, order=plotting_order, color=".7", alpha=1.0, errwidth=.7, errcolor="k", capsize=.2, ci=95) plt.ylim(0, 7) plt.yticks([0, 3, 6]) plt.xlabel(None) plt.ylabel("Bud ratios") # make the bar width narrower change_bar_width(ax, .5) # rotate x tick labels if necessary # x_labels = ax.get_xticklabels() x_labels = ['Intact culture', 'Single bud', 'Single cell'] ax.set_xticklabels(labels=x_labels, rotation=45, ha="right") for o in fig.findobj(): o.set_clip_on(False) for o in ax.findobj(): o.set_clip_on(False) if outputFigPath is not None: plt.savefig(outputFigPath) ```
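The `change_bar_width` helper used throughout this notebook is imported from the local `sw_plotting` module and is not shown here. A plausible stand-in (an assumption, not the original helper) narrows each bar patch while keeping it centered on its category:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

def change_bar_width(ax, new_width):
    """Set every bar patch on `ax` to `new_width`, keeping each bar centered."""
    for patch in ax.patches:
        current = patch.get_width()
        patch.set_width(new_width)
        # shift x so the bar stays centered on its original position
        patch.set_x(patch.get_x() + (current - new_width) / 2)
```

After the `sns.barplot` call above, `change_bar_width(ax, .6)` would shrink the default-width bars in place without moving their category positions.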
``` import pandas as pd import matplotlib.pyplot as plt from matplotlib.colors import ListedColormap from sklearn.model_selection import train_test_split from sklearn.metrics import accuracy_score import numpy as np from scipy.spatial import distance from sklearn.tree import DecisionTreeClassifier d_tree = DecisionTreeClassifier() from sklearn.naive_bayes import GaussianNB gnb = GaussianNB() plt.rcParams["figure.figsize"] = (12,10) plt.rcParams.update({'font.size': 12}) ``` # **K-Nearest Neighbors Algorithm** **K-nearest neighbors is a non-parametric algorithm that can be utilized for classification or regression. For classification, the algorithm predicts the classification for a given input by finding the closest point or points to that input.** **The algorithm requires a distance calculation to measure between points and the assumption that points that are close together are similar.** <sup>Reference: [Machine Learning A Probabilistic Perspective by Kevin Murphy](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUKEwiossWtlvXyAhVvhOAKHaHYDNUQFnoECAQQAQ&url=http%3A%2F%2Fnoiselab.ucsd.edu%2FECE228%2FMurphy_Machine_Learning.pdf&usg=AOvVaw0ivnxQoBAr1Kn4BwTBbNxe)</sup> <sup>Reference: [Data Science from Scratch First Principles with Python by Joel Grus](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUKEwjO1s20-f7yAhW3GFkFHZwsBcEQFnoECAIQAQ&url=http%3A%2F%2Fmath.ecnu.edu.cn%2F~lfzhou%2Fseminar%2F%5BJoel_Grus%5D_Data_Science_from_Scratch_First_Princ.pdf&usg=AOvVaw3bJ0pcZM201kEXZjeTiLrr)</sup> ``` ``` ## **KNN from scratch** ``` ``` ### **Euclidean Distance between 2 Points** $d = \sqrt{(q_1 - p_1)^2 + (q_2 - p_2)^2}$ ### **Small Data Set Example** ``` ``` ### **Coding the KNN algorithm as a function** ``` ``` ### **Asteroid Data** ``` ``` #### **Measuring how well KNN predicts values** $\text{Accuracy Score =} \frac{\text{True Positive }+\text{ True Negative}}{\text{Total Number of Observed Values}}$ ``` ``` ## **KNN using 
Scikit-Learn**

```
from sklearn.neighbors import KNeighborsClassifier

X = df_features.to_numpy()
y = df_target.to_numpy()

cmap_light = ListedColormap(['orange', 'cyan'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00'])

# The cell that created the classifier was not preserved; the value of
# `knn_neighbors` below is an assumption (15 is the value used in the
# scikit-learn example this plot follows).
knn_neighbors = 15
classifier = KNeighborsClassifier(n_neighbors=knn_neighbors)

h = .02  # step size of the decision-boundary mesh
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))

classifier.fit(X, y)
Z = classifier.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)

plt.figure()
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title(f'KNN Classifier - Neighbors = {knn_neighbors}')
plt.show()
```

<sup>Source: [Nearest Neighbors Classification from scikit-learn](https://scikit-learn.org/stable/auto_examples/neighbors/plot_classification.html#sphx-glr-auto-examples-neighbors-plot-classification-py)</sup>

# **Issues with KNN**

**There are two significant drawbacks to the KNN algorithm.**

**1. The algorithm suffers from the "curse of dimensionality": it does not handle data sets with a large number of features well.**

**2. The KNN algorithm is slow at classifying new observations relative to other algorithms.**

## **Curse of dimensionality**

### **Euclidean Distance for higher dimensions**

$d = \sqrt{(q_1 - p_1)^2 + (q_2 - p_2)^2 + ... + (q_i - p_i)^2 + ...
+ (q_n - p_n)^2}$ ``` ``` <sup>Reference: [Data Science from Scratch First Principles with Python by Joel Grus](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUKEwjO1s20-f7yAhW3GFkFHZwsBcEQFnoECAIQAQ&url=http%3A%2F%2Fmath.ecnu.edu.cn%2F~lfzhou%2Fseminar%2F%5BJoel_Grus%5D_Data_Science_from_Scratch_First_Princ.pdf&usg=AOvVaw3bJ0pcZM201kEXZjeTiLrr)</sup> ## **Comparing Time to Run** ``` ``` # **References and Additional Learning** ## **Data Sets** - **[NASA JPL Asteroid Data set from Kaggle](https://www.kaggle.com/sakhawat18/asteroid-dataset)** - **[Microsoft Malware Data Set from Kaggle](https://www.kaggle.com/c/microsoft-malware-prediction)** ## **Textbooks** - **[Machine Learning A Probabilistic Perspective by Kevin Murphy](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUKEwiossWtlvXyAhVvhOAKHaHYDNUQFnoECAQQAQ&url=http%3A%2F%2Fnoiselab.ucsd.edu%2FECE228%2FMurphy_Machine_Learning.pdf&usg=AOvVaw0ivnxQoBAr1Kn4BwTBbNxe)** - **[Understanding Machine Learning: From Theory to Algorithms](https://www.cs.huji.ac.il/~shais/UnderstandingMachineLearning/understanding-machine-learning-theory-algorithms.pdf)** - **[Data Science from Scratch First Principles with Python by Joel Grus](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUKEwjO1s20-f7yAhW3GFkFHZwsBcEQFnoECAIQAQ&url=http%3A%2F%2Fmath.ecnu.edu.cn%2F~lfzhou%2Fseminar%2F%5BJoel_Grus%5D_Data_Science_from_Scratch_First_Princ.pdf&usg=AOvVaw3bJ0pcZM201kEXZjeTiLrr)** ## **Videos** - **[StatQuest: K-nearest neighbors, Clearly Explained by Josh Starmer](https://www.youtube.com/watch?v=HVXime0nQeI&t=219s)** - **[K-Nearest Neighbor by ritvikmath](https://www.youtube.com/watch?v=UR2ag4lbBtc&t=196s)**
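The code cells in the from-scratch section above were left empty. A minimal version of what they describe, a Euclidean distance function plus a majority-vote classifier, could look like the following sketch (the function names are assumptions, not from the notebook):

```python
import numpy as np
from collections import Counter

def euclidean(p, q):
    """n-dimensional form of the distance formula shown above."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sqrt(np.sum((q - p) ** 2)))

def knn_predict(X_train, y_train, x_new, k=3):
    """Label x_new by majority vote among its k nearest training points."""
    dists = [euclidean(x, x_new) for x in X_train]
    nearest = np.argsort(dists)[:k]          # indices of the k closest points
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]
```

With the asteroid data, predictions from `knn_predict` could then be scored with the accuracy formula given in the "Measuring how well KNN predicts values" section.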
# Chernoff Faces, Deep Learning In this notebook, we use convolutional neural networks (CNNs) to classify the Chernoff faces generated from [chernoff-faces.ipynb](chernoff-faces.ipynb). We want to see if framing a numerical problem as an image problem and using CNNs to classify the data (images) would be a promising approach. ## Boilerplate code Below are the boilerplate code to get data loaders, train the model, and assess its different performance measures. ``` import torch import torch.nn as nn import torch.optim as optim from torch.optim import lr_scheduler import numpy as np import torchvision from torchvision import datasets, models, transforms import matplotlib.pyplot as plt import time import os import copy from collections import namedtuple from sklearn.metrics import multilabel_confusion_matrix from collections import namedtuple import random def get_dataloaders(input_size=256, batch_size=4): data_transforms = { 'train': transforms.Compose([ transforms.Resize(input_size), transforms.CenterCrop(input_size), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) ]), 'test': transforms.Compose([ transforms.Resize(input_size), transforms.CenterCrop(input_size), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) ]), 'valid': transforms.Compose([ transforms.Resize(input_size), transforms.CenterCrop(input_size), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) ]) } shuffles = { 'train': True, 'test': True, 'valid': False } data_dir = './faces' samples = ['train', 'test', 'valid'] image_datasets = { x: datasets.ImageFolder(os.path.join(data_dir, x), transform=data_transforms[x]) for x in samples } dataloaders = { x: torch.utils.data.DataLoader(image_datasets[x], batch_size=batch_size, shuffle=shuffles[x], num_workers=4) for x in samples } dataset_sizes = { x: len(image_datasets[x]) for x in samples } class_names = 
image_datasets['train'].classes return dataloaders, dataset_sizes, class_names, len(class_names) def train_model(model, criterion, optimizer, scheduler, dataloaders, dataset_sizes, num_epochs=25, is_inception=False): since = time.time() best_model_wts = copy.deepcopy(model.state_dict()) best_acc = 0.0 for epoch in range(num_epochs): results = [] # Each epoch has a training and validation phase for phase in ['train', 'test']: if phase == 'train': optimizer.step() scheduler.step() model.train() # Set model to training mode else: model.eval() # Set model to evaluate mode running_loss = 0.0 running_corrects = 0 # Iterate over data. for inputs, labels in dataloaders[phase]: inputs = inputs.to(device) labels = labels.to(device) # zero the parameter gradients optimizer.zero_grad() # forward # track history if only in train with torch.set_grad_enabled(phase == 'train'): if is_inception and phase == 'train': outputs, aux_outputs = model(inputs) loss1 = criterion(outputs, labels) loss2 = criterion(aux_outputs, labels) loss = loss1 + 0.4*loss2 else: outputs = model(inputs) loss = criterion(outputs, labels) _, preds = torch.max(outputs, 1) # backward + optimize only if in training phase if phase == 'train': loss.backward() optimizer.step() # statistics running_loss += loss.item() * inputs.size(0) running_corrects += torch.sum(preds == labels.data) epoch_loss = running_loss / dataset_sizes[phase] epoch_acc = running_corrects.double() / dataset_sizes[phase] result = Result(phase, epoch_loss, float(str(epoch_acc.cpu().numpy()))) results.append(result) # deep copy the model if phase == 'test' and epoch_acc > best_acc: best_acc = epoch_acc best_model_wts = copy.deepcopy(model.state_dict()) results = ['{} loss: {:.4f} acc: {:.4f}'.format(r.phase, r.loss, r.acc) for r in results] results = ' | '.join(results) print('Epoch {}/{} | {}'.format(epoch, num_epochs - 1, results)) time_elapsed = time.time() - since print('Training complete in {:.0f}m {:.0f}s'.format( time_elapsed // 60, 
time_elapsed % 60)) print('Best val Acc: {:4f}'.format(best_acc)) # load best model weights model.load_state_dict(best_model_wts) return model def get_metrics(model, dataloaders, class_names): y_true = [] y_pred = [] was_training = model.training model.eval() with torch.no_grad(): for i, (inputs, labels) in enumerate(dataloaders['valid']): inputs = inputs.to(device) labels = labels.to(device) cpu_labels = labels.cpu().numpy() outputs = model(inputs) _, preds = torch.max(outputs, 1) for j in range(inputs.size()[0]): cpu_label = f'{cpu_labels[j]:02}' clazz_name = class_names[preds[j]] y_true.append(cpu_label) y_pred.append(clazz_name) model.train(mode=was_training) cmatrices = multilabel_confusion_matrix(y_true, y_pred, labels=class_names) metrics = [] for clazz in range(len(cmatrices)): cmatrix = cmatrices[clazz] tn, fp, fn, tp = cmatrix[0][0], cmatrix[0][1], cmatrix[1][0], cmatrix[1][1] sen = tp / (tp + fn) spe = tn / (tn + fp) acc = (tp + tn) / (tp + fp + fn + tn) f1 = (2.0 * tp) / (2 * tp + fp + fn) mcc = (tp * tn - fp * fn) / np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) metric = Metric(clazz, tn, fp, fn, tp, sen, spe, acc, f1, mcc) metrics.append(metric) return metrics def print_metrics(metrics): for m in metrics: print('{}: sen = {:.5f}, spe = {:.5f}, acc = {:.5f}, f1 = {:.5f}, mcc = {:.5f}' .format(m.clazz, m.sen, m.spe, m.acc, m.f1, m.mcc)) random.seed(1299827) torch.manual_seed(1299827) torch.backends.cudnn.deterministic = True torch.backends.cudnn.benchmark = False device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") print('device = {}'.format(device)) Result = namedtuple('Result', 'phase loss acc') Metric = namedtuple('Metric', 'clazz tn fp fn tp sen spe acc f1 mcc') ``` ## Train, Test, Validate Below, we applied different image classification networks to the data and perserve the results in the comments. 
The different networks tried were * ResNet-18 * ResNet-152 * AlexNet * VGG-19 with batch normalization * SqueezeNet 1.1 * Inception v3 * Densenet-201 * GoogleNet * ShuffleNet V2 * MobileNet V2 * ResNeXt-101-32x8d The most promising initial results were with Inception V3 and so we used that network to learn. You will notice that we use transfer learning (the model with pre-trained weights) to bootstrap learning the weights. Also, we do 2 rounds of 50 epoch learning. ``` # Best val Acc: 0.900000 # 0: sen = 1.00000, spe = 1.00000, acc = 1.00000, f1 = 1.00000, mcc = 1.00000 # 1: sen = 1.00000, spe = 0.96000, acc = 0.97000, f1 = 0.94340, mcc = 0.92582 # 2: sen = 0.64000, spe = 0.94667, acc = 0.87000, f1 = 0.71111, mcc = 0.63509 # 3: sen = 0.72000, spe = 0.88000, acc = 0.84000, f1 = 0.69231, mcc = 0.58521 # dataloaders, dataset_sizes, class_names, num_classes = get_dataloaders(input_size=224) # model = models.resnet18(pretrained=True) # model.fc = nn.Linear(model.fc.in_features, num_classes) # is_inception = False # Best val Acc: 0.900000 # 0: sen = 1.00000, spe = 1.00000, acc = 1.00000, f1 = 1.00000, mcc = 1.00000 # 1: sen = 1.00000, spe = 0.97333, acc = 0.98000, f1 = 0.96154, mcc = 0.94933 # 2: sen = 0.72000, spe = 0.90667, acc = 0.86000, f1 = 0.72000, mcc = 0.62667 # 3: sen = 0.64000, spe = 0.90667, acc = 0.84000, f1 = 0.66667, mcc = 0.56249 # dataloaders, dataset_sizes, class_names, num_classes = get_dataloaders(input_size=224) # model = models.resnet152(pretrained=True) # model.fc = nn.Linear(model.fc.in_features, num_classes) # is_inception = False # Best val Acc: 0.730000 # 0: sen = 1.00000, spe = 0.90667, acc = 0.93000, f1 = 0.87719, mcc = 0.84163 # 1: sen = 1.00000, spe = 0.93333, acc = 0.95000, f1 = 0.90909, mcc = 0.88192 # 2: sen = 0.20000, spe = 0.92000, acc = 0.74000, f1 = 0.27778, mcc = 0.16607 # 3: sen = 0.44000, spe = 0.78667, acc = 0.70000, f1 = 0.42308, mcc = 0.22108 # dataloaders, dataset_sizes, class_names, num_classes = 
get_dataloaders(input_size=224) # model = models.alexnet(pretrained=True) # model.classifier[6] = nn.Linear(4096, num_classes) # is_inception = False # Best val Acc: 0.910000 # 0: sen = 1.00000, spe = 1.00000, acc = 1.00000, f1 = 1.00000, mcc = 1.00000 # 1: sen = 1.00000, spe = 0.98667, acc = 0.99000, f1 = 0.98039, mcc = 0.97402 # 2: sen = 0.68000, spe = 0.96000, acc = 0.89000, f1 = 0.75556, mcc = 0.69282 # 3: sen = 0.84000, spe = 0.89333, acc = 0.88000, f1 = 0.77778, mcc = 0.69980 # dataloaders, dataset_sizes, class_names, num_classes = get_dataloaders(input_size=224) # model = models.vgg19_bn(pretrained=True) # model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes) # is_inception = False # Best val Acc: 0.780000 # 0: sen = 1.00000, spe = 1.00000, acc = 1.00000, f1 = 1.00000, mcc = 1.00000 # 1: sen = 1.00000, spe = 0.88000, acc = 0.91000, f1 = 0.84746, mcc = 0.80440 # 2: sen = 0.40000, spe = 0.90667, acc = 0.78000, f1 = 0.47619, mcc = 0.35351 # 3: sen = 0.48000, spe = 0.84000, acc = 0.75000, f1 = 0.48980, mcc = 0.32444 # dataloaders, dataset_sizes, class_names, num_classes = get_dataloaders(input_size=224) # model = models.squeezenet1_1(pretrained=True) # model.classifier[1] = nn.Conv2d(512, num_classes, kernel_size=(1,1), stride=(1,1)) # model.num_classes = num_classes # is_inception = False # Best val Acc: 0.920000 # 0: sen = 1.00000, spe = 1.00000, acc = 1.00000, f1 = 1.00000, mcc = 1.00000 # 1: sen = 1.00000, spe = 0.97333, acc = 0.98000, f1 = 0.96154, mcc = 0.94933 # 2: sen = 0.76000, spe = 0.93333, acc = 0.89000, f1 = 0.77551, mcc = 0.70296 # 3: sen = 0.72000, spe = 0.92000, acc = 0.87000, f1 = 0.73469, mcc = 0.64889 dataloaders, dataset_sizes, class_names, num_classes = get_dataloaders(input_size=299) model = models.inception_v3(pretrained=True) model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, num_classes) model.fc = nn.Linear(model.fc.in_features, num_classes) is_inception = True # Best val Acc: 0.910000 # 0: sen = 
1.00000, spe = 1.00000, acc = 1.00000, f1 = 1.00000, mcc = 1.00000 # 1: sen = 1.00000, spe = 0.96000, acc = 0.97000, f1 = 0.94340, mcc = 0.92582 # 2: sen = 0.72000, spe = 0.94667, acc = 0.89000, f1 = 0.76596, mcc = 0.69687 # 3: sen = 0.72000, spe = 0.90667, acc = 0.86000, f1 = 0.72000, mcc = 0.62667 # dataloaders, dataset_sizes, class_names, num_classes = get_dataloaders(input_size=224) # model = models.densenet201(pretrained=True) # model.classifier = nn.Linear(model.classifier.in_features, num_classes) # is_inception = False # Best val Acc: 0.900000 # 0: sen = 1.00000, spe = 1.00000, acc = 1.00000, f1 = 1.00000, mcc = 1.00000 # 1: sen = 0.96000, spe = 1.00000, acc = 0.99000, f1 = 0.97959, mcc = 0.97333 # 2: sen = 0.64000, spe = 0.93333, acc = 0.86000, f1 = 0.69565, mcc = 0.60952 # 3: sen = 0.84000, spe = 0.88000, acc = 0.87000, f1 = 0.76364, mcc = 0.68034 # dataloaders, dataset_sizes, class_names, num_classes = get_dataloaders(input_size=224) # model = models.googlenet(pretrained=True) # model.fc = nn.Linear(model.fc.in_features, num_classes) # is_inception = False # Best val Acc: 0.500000 # 0: sen = 1.00000, spe = 0.49333, acc = 0.62000, f1 = 0.56818, mcc = 0.44246 # 1: sen = 0.96000, spe = 0.82667, acc = 0.86000, f1 = 0.77419, mcc = 0.70554 # 2: sen = 0.00000, spe = 1.00000, acc = 0.75000, f1 = 0.00000, mcc = nan # 3: sen = 0.00000, spe = 1.00000, acc = 0.75000, f1 = 0.00000, mcc = nan # dataloaders, dataset_sizes, class_names, num_classes = get_dataloaders(input_size=224) # model = models.shufflenet_v2_x0_5(pretrained=True) # model.fc = nn.Linear(model.fc.in_features, num_classes) # is_inception = False # Best val Acc: 0.910000 # 0: sen = 1.00000, spe = 1.00000, acc = 1.00000, f1 = 1.00000, mcc = 1.00000 # 1: sen = 0.88000, spe = 0.98667, acc = 0.96000, f1 = 0.91667, mcc = 0.89175 # 2: sen = 0.72000, spe = 0.92000, acc = 0.87000, f1 = 0.73469, mcc = 0.64889 # 3: sen = 0.76000, spe = 0.88000, acc = 0.85000, f1 = 0.71698, mcc = 0.61721 # dataloaders, 
dataset_sizes, class_names, num_classes = get_dataloaders(input_size=224) # model = models.mobilenet_v2(pretrained=True) # model.classifier[1] = nn.Linear(model.classifier[1].in_features, num_classes) # is_inception = False # Best val Acc: 0.890000 # 0: sen = 1.00000, spe = 1.00000, acc = 1.00000, f1 = 1.00000, mcc = 1.00000 # 1: sen = 1.00000, spe = 0.90667, acc = 0.93000, f1 = 0.87719, mcc = 0.84163 # 2: sen = 0.76000, spe = 0.90667, acc = 0.87000, f1 = 0.74510, mcc = 0.65812 # 3: sen = 0.44000, spe = 0.92000, acc = 0.80000, f1 = 0.52381, mcc = 0.41499 # dataloaders, dataset_sizes, class_names, num_classes = get_dataloaders(input_size=224) # model = models.resnext101_32x8d(pretrained=True) # model.fc = nn.Linear(model.fc.in_features, num_classes) # is_inception = False model = model.to(device) criterion = nn.CrossEntropyLoss() optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9) scheduler = lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1) model = train_model(model, criterion, optimizer, scheduler, dataloaders, dataset_sizes, num_epochs=50, is_inception=is_inception) print_metrics(get_metrics(model, dataloaders, class_names)) ```
<a href="https://colab.research.google.com/github/njaramillov07/MDigital/blob/main/Clase2%2005_08_21.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> **Operadores Aritmeticos** --- Los operadores permiten realizar diferentes procesos de calculo en cualquier lenguaje de programación. Los operadores más basicos son: 1. Suma 2. Resta 3. Multiplicacion 4. División **Suma** --- símbolo suma(+) el cual se utilizara en medio de la declaración de las variables a operar. símbolo resta(-) ``` print(10+300) a=20 b=35 print(a+b) print(10-300) print(a-b) numero_botellas=12 print(numero_botellas + 8) numero_botellas=12 numero_botellas=numero_botellas + 8 print(numero_botellas) ``` **sumas entre los diferentes tipos de variables** --- Recordemos los tipos de variables vistos en la clase de ayer: 1. Declaracion de variables numericas enteros. 2. Flotantes(incorpora y flexibiliza los decimales) 3. Tipo cadena ``` q1=100 q2=24.5 print(type(q1)) print(type(q2)) print(type(q1+q2)) xq2=235 xq3=400 print(xq2-xq3) v1="quiero" v2="un celular" v3=v1+" "+v2 print(v1+v2) print(v3) xq2=235 xq3=400 print(xq2-xq3) print(int(xq2-xq3)) xq2=24.66 xq3=4 print(type(xq2-xq3)) ``` **IMPORTANTE** La resta no funciona directamente para varable tipo cadena.(Ejemplo error) ``` v1="quiero" v2="un celular" print(v1-v2) ``` **Multiplicación** --- Definamos la multiplicacion como la suma de varias veces un número, cantidad de veces indicadas por el otro número. Ejemplo: 4x2=4+4=8 El simbolo que manejaria es (*) ``` cali=5 palmira=6 print(cali*palmira) ``` **IMPORTANTE** Se aplica la ley conmutativa. Recordemos que la ley conmutativa es que el orden de los factores no altera el resultado (Aplica para suma y multiplicacion) ``` cali=5 palmira=6 print(palmira*cali) ``` **Division** --- ACLARACION: **NO** aplica la ley conmutativa. El simbolo de division (/). 
```
a = 20
b = 4
print(a/b)
print(20/4)
print(a/b, b/a, a-b)
```

In **print** we can separate several operations or variables with commas (`,`), which does not affect how each result is printed.

**Types of Division**

---

1. Integer (floor) division (`//`)
2. True division (`/`)

```
print(15/3)
print(15/4)
print(15//4)
```

Note: just as in ordinary arithmetic, it is **NOT** possible to divide by zero, because in mathematics the result is undefined.

Continuing the list of arithmetic operators:

4. Exponentiation
5. Modulo

**Exponentiation**

---

In Python the symbol is `**`. The first value is the base and the last value is the exponent.

```
print(3**3)
print(2**4)
```

**Modulo**

---

The modulo is the remainder of a division. The symbol used to obtain the remainder is `%`. The first value is the dividend and the last value is the divisor.

```
print(88%9)
```

**Comparison Operators**

---

Comparison operators are used to compare values. In everyday life, when making decisions, we have to choose between one path and another — for example: purchases, life decisions, routes to take.

**Equality**

---

The symbol in Python is `==`.

```
x = 30
z = 31
print(x==z)

x = "casa"
z = "caza"
print(x==z)
```

**Inequality**

---

Unlike the equality operator, this symbol is combined with the exclamation mark, that is, `!=`.

```
x = "casa"
z = "caza"
print(x!=z)
```

**Greater than**

---

The symbol in Python is `>`. In a comparison, which value counts as greater or smaller is always read relative to the value on the left of the operator.

```
print(30>29.8)
print(20>10)
```

**Less than**

---

For the opposite case the symbol is `<`.

```
x = 4
y = 3
print(x<y)
```

**Greater than or equal to**

---

The symbol is similar to "greater than", with the difference that it also returns True when both values are equal, in addition to returning True when the left value is greater than the right one. The symbol is `>=`.

```
x = 10
y = 10
print(x>=y)

x = 10
y = 5
print(x>=y)
```

**Less than or equal to**

---

The symbol is `<=`.

```
c = 45
print(c<=46)
```

**Keep in mind:** in comparisons, the result returned is always of Boolean type, that is: True or False.

**LOGICAL OPERATORS**

---

Python includes three basic logical operators:

1. AND
2. OR
3. NOT

Logical operators work only with Boolean values or variables and, in turn, return Boolean values.

**The AND operator**

---

This operator returns True only if both values are True.

How do we use the AND operator? Simply write a Boolean value, then the word `and`, and finally another Boolean value.

```
print(True and False)
```

**Contextualization exercise 2**

1. True and True = True
2. True and False = False
3. False and False = False
4. False and True = False

**The OR operator**

---

Unlike the AND operator, OR returns True unless both values are **FALSE**.

```
print(True or False)
print(False or False)
```

**Contextualization exercise for OR**

1. False or False = False
2. True or True = True
3. True or False = True
4. False or True = True

```
a = 3
b = 2
print(a>4 or a==4)
print(b==b and a>=b)
```

**The NOT operator**

---

This operator is negation. It is a special operator: by itself it performs no operation; all it does is negate — or, in simpler words, invert — the Boolean value that comes after it.

```
print(not False)
print(not True)
```

**OPERATOR PRECEDENCE**

---

It is very important to follow the order of operations when evaluating expressions, so as not to run into errors in our code. The order is:

1. Left to right (among operators of equal precedence)
2. Parentheses
3. Exponentiation or roots
4. Multiplication and division
5. Addition and subtraction

**Example:** 4 - 2(10)

1. First the parentheses: 4 - 2*10
2. Next the multiplication: 4 - 20
3. Finally the subtraction: 4 - 20 = -16

**What do you think the result of the following operation is?**

```
a = False
b = True
c = True
print(not c and (a or b))
print(3-2*(13-5))
```

**Summary**

---

1. The commutative law applies only to addition and multiplication (among the basic operators).
2. Results produced by logical operators are of Boolean type.
3. Subtraction **cannot** be performed between string variables (if they contain numbers, convert them first).
4. With the AND operator, the only way the result is True is for both values to be True.
5. With OR, if either of the two values is True, the result is True.
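As a quick sanity check of the rules summarized above, the following standalone snippet (not part of the original lesson) verifies the truth tables and the precedence examples with assertions:

```python
# Illustrative checks of the operator rules covered in this lesson.

# AND is True only when both operands are True.
assert (True and True) is True
assert (True and False) is False

# OR is False only when both operands are False.
assert (False or False) is False
assert (True or False) is True

# NOT inverts the Boolean value that follows it.
assert (not False) is True

# Comparisons always yield Booleans.
assert isinstance(30 > 29.8, bool)

# Precedence: parentheses first, then multiplication, then subtraction.
assert 4 - 2 * 10 == -16
assert 3 - 2 * (13 - 5) == -13

print("all checks passed")
```

Running the snippet should print `all checks passed`; any violated rule would raise an `AssertionError` instead.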
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import random
import math
import json
from torch.autograd import Variable
from torch.distributions import Categorical
from utils.qf_data import normalize, load_observations
from tools.ddpg.replay_buffer import ReplayBuffer
from tools.ddpg.ornstein_uhlenbeck import OrnsteinUhlenbeckActionNoise
import tensorflow as tf
from tensorboardX import SummaryWriter
import os

os.environ["CUDA_VISIBLE_DEVICES"] = "0"
C_CUDA = torch.cuda.is_available()

def obs_normalizer(observation):
    # Normalize the observation into the close/open ratio
    if isinstance(observation, tuple):
        observation = observation[0]
    observation = observation[:, :, 3:4] / observation[:, :, 0:1]
    observation = normalize(observation)
    return observation

# Define the actor network -- CNN
class Actor(nn.Module):
    def __init__(self, product_num, win_size):
        super(Actor, self).__init__()
        self.conv1 = nn.Conv2d(
            in_channels=1,
            out_channels=32,
            kernel_size=(1, 3),
            # stride=(1, 3)
        )
        self.conv2 = nn.Conv2d(
            in_channels=32,
            out_channels=32,
            kernel_size=(1, win_size - 2),
            # stride=(1, win_size - 2)
        )
        self.linear1 = nn.Linear((product_num + 1) * 1 * 32, 64)
        self.linear2 = nn.Linear(64, 64)
        self.linear3 = nn.Linear(64, product_num + 1)

    def reset_parameters(self):
        # hidden_init is assumed to be defined elsewhere in the project
        self.linear1.weight.data.uniform_(*hidden_init(self.linear1))
        self.linear2.weight.data.uniform_(*hidden_init(self.linear2))
        self.linear3.weight.data.uniform_(-3e-3, 3e-3)

    def forward(self, state):
        conv1_out = self.conv1(state)
        conv1_out = F.relu(conv1_out)
        conv2_out = self.conv2(conv1_out)
        conv2_out = F.relu(conv2_out)
        # Flatten
        conv2_out = conv2_out.view(conv2_out.size(0), -1)
        fc1_out = self.linear1(conv2_out)
        fc1_out = F.relu(fc1_out)
        fc2_out = self.linear2(fc1_out)
        fc2_out = F.relu(fc2_out)
        fc3_out = self.linear3(fc2_out)
        fc3_out = F.softmax(fc3_out, dim=1)
        return fc3_out

# Define the policy gradient actor network -- LSTM
class Policy(nn.Module):
    def __init__(self, product_num, win_size, action_size):
        super(Policy, self).__init__()
        self.lstm = nn.LSTM(win_size, 32, 1)
        self.linear1 = nn.Linear((product_num + 1) * 1 * 32, 64)
        self.linear2 = nn.Linear(64, 64)
        self.linear3 = nn.Linear(64, action_size)
        # Define the vars for recording log prob and reward
        self.saved_log_probs = []
        self.rewards = []

    def reset_parameters(self):
        self.linear1.weight.data.uniform_(*hidden_init(self.linear1))
        self.linear2.weight.data.uniform_(*hidden_init(self.linear2))
        self.linear3.weight.data.uniform_(-3e-3, 3e-3)

    def forward(self, state):
        state = torch.reshape(state, (-1, 1, 3))
        lstm_out, _ = self.lstm(state)
        batch_n, win_s, hidden_s = lstm_out.shape
        lstm_out = lstm_out.view(batch_n, win_s * hidden_s)
        lstm_out = torch.reshape(lstm_out, (-1, product_num + 1, 32))
        lstm_out = lstm_out.view(lstm_out.size(0), -1)
        fc1_out = self.linear1(lstm_out)
        # fc1_out = F.relu(fc1_out)
        fc2_out = self.linear2(fc1_out)
        # fc2_out = F.relu(fc2_out)
        fc3_out = self.linear3(fc2_out)
        fc3_out = F.softmax(fc3_out, dim=1)
        return fc3_out

from environment.QF_env_2 import envs

model_add = 'models/'
model_name = 'QFPIS_DDPG_2'
pg_model_name = 'QFPIS_PG_2'
product_num = 9

def load_actor():
    actor = Actor(product_num=9, win_size=3).cuda()
    actor.load_state_dict(torch.load(model_add + model_name))
    return actor

def load_policy():
    test = Policy(product_num=9, win_size=3, action_size=3).cuda()
    test.load_state_dict(torch.load(model_add + pg_model_name))
    return test

def test_model(env, actor, policy):
    eps = 1e-8
    actions = []
    weights = []
    observation, info = env.reset()
    observation = obs_normalizer(observation)
    observation = observation.transpose(2, 0, 1)
    done = False
    ep_reward = 0
    wealth = 10000
    wealths = [wealth]
    while not done:
        observation = torch.tensor(observation, dtype=torch.float).unsqueeze(0).cuda()
        action = actor(observation).squeeze(0).cpu().detach().numpy()
        # Here is the code for the policy gradient
        actions_prob = policy(observation)
        m = Categorical(actions_prob)
        # Select an action by sampling from the action probabilities
        action_policy = m.sample()
        actions.append(action_policy.cpu().numpy())
        w1 = np.clip(action, 0, 1)  # np.array([cash_bias] + list(action))  # [w0, w1, ...]
        w1 /= (w1.sum() + eps)
        weights.append(w1)
        observation, reward, policy_reward, done, info = env.step(action, action_policy)
        r = info['log_return']
        wealth = wealth * math.exp(r)
        wealths.append(wealth)
        ep_reward += reward
        observation = obs_normalizer(observation)
        observation = observation.transpose(2, 0, 1)
    return actions, weights, wealths

# Testing environment
data_add = 'Data/'
train_ratio = 0.8
window_size = 1
window_length = 3
market_feature = ['Open', 'High', 'Low', 'Close', 'QPL1', 'QPL-1', 'QPL2', 'QPL-2']
feature_num = 8
product_num = 9
product_list = ["AUDCAD", "AUDUSD", "EURAUD", "EURCAD", "EURUSD", "GBPUSD", "NZDCHF", "NZDUSD", "USDCHF"]
observations, ts_d_len = load_observations(window_size, market_feature, feature_num, product_list)
train_size = int(train_ratio * ts_d_len)
# print(train_size)
test_observations = observations[int(train_ratio * observations.shape[0]):]
test_observations = np.squeeze(test_observations)
test_observations = test_observations.transpose(2, 0, 1)
mode = "Test"
steps = 405
env = envs(product_list, market_feature, feature_num, steps, window_length, mode,
           start_index=train_size + 282, start_date='2019-6-25')
actor = load_actor()
policy = load_policy()
test_actions, test_weight, wealths1 = test_model(env, actor, policy)

from environment.QF_env_1 import envs

model_name = 'QFPIS_DDPG_1'
pg_model_name = 'QFPIS_PG_1'

def load_actor():
    actor = Actor(product_num=9, win_size=3).cuda()
    actor.load_state_dict(torch.load(model_add + model_name))
    return actor

def load_policy():
    test = Policy(product_num=9, win_size=3, action_size=2).cuda()
    test.load_state_dict(torch.load(model_add + pg_model_name))
    return test

def test_model(env, actor, policy):
    eps = 1e-8
    actions = []
    weights = []
    observation, info = env.reset()
    observation = obs_normalizer(observation)
    observation = observation.transpose(2, 0, 1)
    done = False
    ep_reward = 0
    wealth = 10000
    wealths = [wealth]
    while not done:
        observation = torch.tensor(observation, dtype=torch.float).unsqueeze(0).cuda()
        action = actor(observation).squeeze(0).cpu().detach().numpy()
        # Here is the code for the policy gradient
        actions_prob = policy(observation)
        m = Categorical(actions_prob)
        # Select an action by sampling from the action probabilities
        action_policy = m.sample()
        actions.append(action_policy.cpu().numpy())
        w1 = np.clip(action, 0, 1)  # np.array([cash_bias] + list(action))  # [w0, w1, ...]
        w1 /= (w1.sum() + eps)
        weights.append(w1)
        observation, reward, policy_reward, done, info = env.step(action, action_policy)
        r = info['log_return']
        wealth = wealth * math.exp(r)
        wealths.append(wealth)
        ep_reward += reward
        observation = obs_normalizer(observation)
        observation = observation.transpose(2, 0, 1)
    return actions, weights, wealths

data_add = 'Data/'
train_ratio = 0.8
window_size = 1
window_length = 3
market_feature = ['Open', 'High', 'Low', 'Close', 'QPL1', 'QPL-1']
feature_num = 6
product_list = ["AUDCAD", "AUDUSD", "EURAUD", "EURCAD", "EURUSD", "GBPUSD", "NZDCHF", "NZDUSD", "USDCHF"]
observations, ts_d_len = load_observations(window_size, market_feature, feature_num, product_list)
train_size = int(train_ratio * ts_d_len)
test_observations = observations[int(train_ratio * observations.shape[0]):]
test_observations = np.squeeze(test_observations)
test_observations = test_observations.transpose(2, 0, 1)
mode = "Test"
steps = 405
env = envs(product_list, market_feature, feature_num, steps, window_length, mode,
           start_index=train_size + 282, start_date='2019-6-25')
actor = load_actor()
policy = load_policy()
test_actions, test_weight, wealths2 = test_model(env, actor, policy)

from environment.QF_env import envs

model_name = 'DDPG'

def load_model():
    actor = Actor(product_num=9, win_size=3).cuda()
    actor.load_state_dict(torch.load(model_add + model_name))
    return actor

def test_model(env, model):
    observation, info = env.reset()
    observation = obs_normalizer(observation)
    observation = observation.transpose(2, 0, 1)
    done = False
    ep_reward = 0
    counter = 0
    wealth = 10000
    wealths = [wealth]
    while not done:
        observation = torch.tensor(observation, dtype=torch.float).unsqueeze(0).cuda()
        action = model(observation).squeeze(0).cpu().detach().numpy()
        observation, reward, done, info = env.step(action)
        ep_reward += reward
        r = info['log_return']
        wealth = wealth * math.exp(r)
        wealths.append(wealth)
        observation = obs_normalizer(observation)
        observation = observation.transpose(2, 0, 1)
    return wealths

data_add = 'Data/'
train_ratio = 0.8
window_size = 1
window_length = 3
market_feature = ['Open', 'High', 'Low', 'Close']
feature_num = 4
product_list = ["AUDCAD", "AUDUSD", "EURAUD", "EURCAD", "EURUSD", "GBPUSD", "NZDCHF", "NZDUSD", "USDCHF"]
observations, ts_d_len = load_observations(window_size, market_feature, feature_num, product_list)
train_size = int(train_ratio * ts_d_len)
test_observations = observations[int(train_ratio * observations.shape[0]):]
test_observations = np.squeeze(test_observations)
test_observations = test_observations.transpose(2, 0, 1)
mode = "Test"
steps = 405
env = envs(product_list, market_feature, feature_num, steps, window_length, mode,
           start_index=train_size + 282, start_date='2019-6-25')
model = load_model()
wealths3 = test_model(env, model)

ts = pd.read_csv("Data/AUDUSD.csv")
ts = ts[::-1].copy()
date = ts['date'][0:407]
date = pd.to_datetime(date)
date = date[::-1].copy()

plt.figure()
plt.rcParams['figure.figsize'] = (8.0, 5.0)
plt.plot(date, wealths2, label='QFPIS-1')
plt.plot(date, wealths1, label='QFPIS-2')
plt.plot(date, wealths3, label='DDPG')
plt.grid(True)
plt.xlabel("Date", fontsize=15)
plt.xticks(rotation=45)
plt.legend(loc='upper left')
plt.ylabel("Portfolio value", fontsize=15)
plt.savefig("figure/backtest/backtest.png", dpi=600, bbox_inches='tight')
```
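The portfolio-weight post-processing used in `test_model` above — clip the raw actor output to [0, 1], then renormalize so the weights sum to 1 — can be checked in isolation. This is a standalone sketch with made-up inputs, not part of the original notebook:

```python
import numpy as np

eps = 1e-8  # guards against division by zero when every weight is clipped to 0

def to_portfolio_weights(action):
    """Clip raw actor outputs to [0, 1] and renormalize into a valid allocation."""
    w = np.clip(np.asarray(action, dtype=float), 0, 1)
    return w / (w.sum() + eps)

raw = np.array([0.5, -0.2, 1.3, 0.1])  # raw network output, possibly out of range
w = to_portfolio_weights(raw)
print(w, w.sum())  # non-negative weights summing to (almost exactly) 1
```

The small `eps` term means the weights sum to slightly less than 1; for backtesting purposes the discrepancy is negligible.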
```
import numpy as np
import matplotlib.pyplot as plt
import scipy.interpolate
```

# Polynomial interpolation

Suppose we know the values $f_k=f(x_k)$ of some function $f(x)$ only on a certain set of arguments $x_k\in\mathbb R$, $k=1..K$. We want to evaluate $f$ at points $x$ lying between the interpolation nodes $x_k$, for which we construct an interpolating function $p(x)$ that is defined for all values of $x$ and coincides with $f$ at the interpolation nodes: $p(x_k)=f_k$, $k=1..K$. A common (but not the only) choice for the function $p$ is a polynomial of degree $K-1$:
$$p(x)=\sum_{n=0}^{K-1} p_n x^n,$$
whose number of coefficients equals the number of known function values, which allows these coefficients to be determined uniquely. Formally, the coefficients are found from the system of equations:
$$p(x_k)=f_k=\sum_{n=0}^{K-1} p_n x_k^n,$$
or in matrix form $MP=F$, where
$$
F=\begin{pmatrix}f_1\\\vdots\\f_K\end{pmatrix},\quad P=\begin{pmatrix}p_0\\\vdots\\p_{K-1}\end{pmatrix},\quad M=\begin{pmatrix} x_1^0 & \cdots & x_1^{K-1} \\ \vdots & \ddots & \vdots \\ x_K^0 & \cdots & x_K^{K-1} \\ \end{pmatrix}.
$$
The matrix $M$ is called the [Vandermonde matrix](https://ru.wikipedia.org/wiki/%D0%9E%D0%BF%D1%80%D0%B5%D0%B4%D0%B5%D0%BB%D0%B8%D1%82%D0%B5%D0%BB%D1%8C_%D0%92%D0%B0%D0%BD%D0%B4%D0%B5%D1%80%D0%BC%D0%BE%D0%BD%D0%B4%D0%B0). Let us try to construct the interpolation polynomial this way.

```
# Numpy already provides a class for polynomials: numpy.poly1d.
# For completeness of exposition we implement our own class.
class Poly():
    def __init__(self, pn):
        """
        Creates a polynomial with coefficients pn:
        self(x) = sum_n pn[n] * x**n.
        The coefficients pn are listed in order of increasing monomial degree.
        """
        self.pn = pn

    def __call__(self, x):
        """
        Evaluates the polynomial on a vector of values x.
        """
        a = 1.  # Here we accumulate the powers x**n.
        p = 0.  # Here we accumulate the sum of monomials.
        for pn in self.pn:
            p += a * pn  # Account for the next monomial.
            a *= x       # Raise the monomial degree.
        return p

# Helper function for computing the Vandermonde matrix.
def vandermonde(xn):
    return np.power(xn[:, None], np.arange(len(xn))[None, :])

# A function that finds the interpolation polynomial by solving the linear system.
def interp_naive(xn, fn):
    """
    Returns the interpolation polynomial taking the values fn at the points xn.
    """
    M = vandermonde(xn)
    # We use the numpy function for solving linear systems.
    # Methods for solving linear systems are discussed in another lab.
    pn = np.linalg.solve(M, fn)
    return Poly(pn)

# Take a logarithmic grid on the interval [1E-6,1] and see how accurately we recover the polynomial.
N = 8  # Number of interpolation nodes.
x = np.logspace(-6, 0, N)  # Points of the logarithmic grid
# We will interpolate a polynomial of degree N-1, defined by random coefficients.
f = Poly(np.random.randn(N))
y = f(x)  # Values of the polynomial on the grid.
p = interp_naive(x, y)  # Construct the interpolation polynomial.
z = p(x)  # Values of the interpolation polynomial on the grid.
print("Absolute error of values", np.linalg.norm(z - y))
print("Absolute error of coefficients", np.linalg.norm(f.pn - p.pn))
```

The constructed interpolation polynomial takes values close to the prescribed values at the nodes. But although the values at the nodes should determine the polynomial uniquely, the interpolating polynomial and the interpolated polynomial have significantly different coefficients. The values of the two polynomials between the nodes also differ significantly.

```
t = np.linspace(0, 1, 100)
_, ax = plt.subplots(figsize=(10, 7))
ax.plot(t, f(t), '-k')
ax.plot(t, p(t), '-r')
ax.plot(x, f(x), '.')
ax.set_xlabel("Argument")
ax.set_ylabel("Function")
ax.legend(["$f(x)$", "$p(x)$"])
plt.show()
```

## Tasks

1. Modify the `__call__` method so that it implements [Horner's scheme](https://ru.wikipedia.org/wiki/Схема_Горнера). Why is this scheme better?
2. Why does finding the coefficients of the interpolation polynomial by solving the linear system give an erroneous answer?
3. Find the determinant of the Vandermonde matrix theoretically and numerically.
4. Find the condition numbers of the Vandermonde matrix. Compare the experimentally obtained errors of the system's solution and its residual with the theoretical prediction.

In practice, the interpolation polynomial is usually found in the form of a [Lagrange polynomial](https://ru.wikipedia.org/wiki/Интерполяционный_многочлен_Лагранжа):
$$
p(x)=\sum_{k=1}^K f_k L_k(x),\;\text{where}\; L_k(x)=\prod_{j\neq k}\frac{x-x_j}{x_k-x_j}.
$$
To speed up the evaluation of the Lagrange polynomial, the recursive [Aitken scheme](https://ru.wikipedia.org/wiki/Схема_Эйткена) is used. Denote by $p_{i,\ldots,j}$ the Lagrange polynomial constructed on the interpolation nodes $(x_i,f_i),\ldots,(x_j,f_j)$; in particular, the desired polynomial is $p=p_{1,\ldots,K}$. The following relation holds, expressing the interpolation polynomial through polynomials of the same kind with fewer nodes:
$$
p_{i,\ldots,j}(x)=\frac{(x-x_i)p_{i+1,\ldots,j}(x)-(x-x_j)p_{i,\ldots,j-1}(x)}{x_j-x_i}.
$$
The base of the recursion is given by the obvious equality $p_{i}(x)=f_i$.

[Newton's interpolation formulas](ru.wikipedia.org/wiki/Интерполяционные_формулы_Ньютона) give another popular way to write the interpolation polynomial:
$$
p(x)=\sum_{k=1}^K[x_1,\ldots,x_k]f\prod_{i=1}^{k-1}(x-x_i),
$$
where the divided differences $[\ldots]f$ are defined recursively:
$$
[x_1,\ldots,x_k,x]f=\frac{[x_1,\ldots,x_{k-1},x]f-[x_1,\ldots,x_{k-1},x_k]f}{x-x_k}.
$$
These formulas can be viewed as a discrete analogue of Taylor's formula. Based on Newton's formulas, [Neville's algorithm](https://en.wikipedia.org/wiki/Neville%27s_algorithm) was developed for evaluating the interpolation polynomial; it is essentially equivalent to Aitken's scheme.

## Tasks

5. Implement Aitken's method for evaluating the interpolation polynomial.
6. If we try to recover a polynomial from its values at points, as in task 2, will Aitken's method give a more accurate answer than solving the linear system?
7. Scipy contains a ready-made implementation of the Lagrange interpolation polynomial, [`scipy.interpolate.lagrange`](docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.interpolate.lagrange.html). The documentation notes that the method is numerically unstable. What does this mean?
8. Errors in the input data used to build the interpolation polynomial cause errors when the polynomial is evaluated at intermediate points. For which placement of the nodes does Lagrange interpolation have the smallest error? How is this related to numerical stability?

Let us now consider how well the interpolation polynomial approximates the interpolated function.

```
# As the function to interpolate, take f(x) = x*sin(2x).
def f(x):
    return x * np.sin(2 * x)

# We will interpolate the function on the interval [x0-r, x0+r], where
x0 = 10
r = 1
# As interpolation nodes, take a uniform grid of N nodes.
N = 5
xn = np.linspace(x0 - r, x0 + r, N)
# Construct the interpolation polynomial.
p = interp_naive(xn, f(xn))
# Estimate the accuracy of the approximation as the maximum deviation
# of the polynomial values from the function values on the interval.
# Since we cannot consider all points, we restrict ourselves to a dense grid.
tn = np.linspace(x0 - r, x0 + r, 10000)
error = np.abs(f(tn) - p(tn))
print("Error", np.max(error))
_, ax = plt.subplots(figsize=(10, 5))
ax.semilogy(tn, error)
ax.set_xlabel("Argument")
ax.set_ylabel("Absolute error")
plt.show()
```

# Tasks

9. Find the error of approximating the function $f$ by the interpolation polynomial $p$ for $x_0=10, 100, 1000$ and for $N=5, 10, 15$. Explain the results you obtain.
10. Plot the dependence of the error on the number of interpolation nodes $N$ for $x_0=100$ and $r=5$ in the range $5\leq N \leq 50$.
11. Repeat tasks 9 and 10 for the Chebyshev interpolation nodes:
$$x_n=x_0+r\cos\left(\frac{\pi}{2}\frac{2n-1}{N}\right),\; n=1\ldots N.$$
12. Compare the distribution of the error inside the interval $x\in[x_0-r,x_0+r]$ for uniformly spaced nodes and for Chebyshev nodes.
13. Repeat tasks 9 and 10 for the function $f(x)=|x-1|$, $x_0=1$, $r=1$. Explain the observed differences.

Using an interpolation polynomial of very high degree often leads to very large approximation errors at some points. Instead of a single high-degree polynomial approximating the function on the whole interval, one can use several polynomials of lower degree, each of which approximates the function only on a subinterval. If the function has some degree of smoothness — for example, several of its derivatives are continuous — then it is natural to require the same smoothness from the resulting family of interpolation polynomials, which imposes constraints on their coefficients. The resulting piecewise-polynomial function is called a [spline](https://ru.wikipedia.org/wiki/%D0%A1%D0%BF%D0%BB%D0%B0%D0%B9%D0%BD).

A [cubic spline](https://ru.wikipedia.org/wiki/%D0%9A%D1%83%D0%B1%D0%B8%D1%87%D0%B5%D1%81%D0%BA%D0%B8%D0%B9_%D1%81%D0%BF%D0%BB%D0%B0%D0%B9%D0%BD) of defect 1 is a function that:

1. on each interval $[x_{k-1}, x_k]$ is a polynomial of degree three (or lower);
2. has continuous first and second derivatives at all points;
3. coincides with the interpolated function at the nodes $x_k$.

# Tasks

14. For the function from task 9, construct a cubic spline of defect 1 with the nodes from task 9. You may use the functions `scipy.interpolate.splrep` and `scipy.interpolate.splev`, or implement your own analogues.
15. Study the dependence of the spline approximation error on the number of interpolation nodes. Compare with the result from task 10. When do the errors coincide?
16. How can the interpolation methods studied here be generalized to curves in a multidimensional space?
17. How can functions of several variables be interpolated?
18. What other methods of interpolation exist?
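As a sketch for Task 1 (one possible solution, using the same coefficient convention as the `Poly` class above), Horner's scheme evaluates the polynomial with one multiplication and one addition per coefficient and tends to be more numerically accurate than summing explicitly computed monomials:

```python
def horner(pn, x):
    """Evaluate sum_n pn[n] * x**n by Horner's scheme.

    pn lists coefficients in order of increasing degree,
    matching the convention of the Poly class.
    """
    p = 0.0
    for c in reversed(pn):  # start from the highest-degree coefficient
        p = p * x + c
    return p

# 1 + 2x + 3x^2 at x = 2 equals 1 + 4 + 12 = 17
print(horner([1.0, 2.0, 3.0], 2.0))  # 17.0
```

Compared with the monomial-summing `__call__` above, Horner's scheme avoids forming large intermediate powers $x^n$, which is the main source of its improved accuracy.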
# <p style="text-align: center;">Clustering and the K-means Algorithm</p>

Organizing data into groups is one of the most fundamental ways of understanding and learning. For example, organisms in a biological system are classified into domain, kingdom, phylum, class, and so on. Cluster analysis is the formal study of methods and algorithms for grouping objects according to similar measurements or characteristics. Cluster analysis, at its core, does not use category labels that mark objects with prior identifiers, i.e., class labels. The absence of category information distinguishes data clustering (unsupervised learning) from classification or discriminant analysis (supervised learning). The goal of clustering is to find structure in data, and it is therefore exploratory in nature.

The clustering technique has a long and rich history across a variety of scientific fields. One of the most popular and simplest clustering algorithms, K-means, was first published in 1955. Although K-means was proposed more than 50 years ago and thousands of clustering algorithms have been published since then, K-means is still widely used.

Source: Anil K. Jain, Data clustering: 50 years beyond K-means, Pattern Recognition Letters, Volume 31, Issue 8, 2010

# Objective

- Implement the functions of the K-means algorithm step by step
- Compare the implementation with the Scikit-Learn algorithm
- Code the Elbow Method

# Loading the test data

Load the provided data and identify visually how many groups the data appear to be distributed into.

```
# import libraries
# linear algebra
import numpy as np
# data processing
import pandas as pd
# data visualization
from matplotlib import pyplot as plt

# load the data with pandas
dataset = pd.read_csv('dataset.csv', header=None)
dataset = np.array(dataset)

plt.scatter(dataset[:,0], dataset[:,1], s=10)
plt.show()
```

## Creating a new dataset for practice

```
# Choose three centroids
cluster_center_1 = np.array([2,3])
cluster_center_2 = np.array([6,6])
cluster_center_3 = np.array([10,1])

# Generate random samples around the chosen centroids
cluster_data_1 = np.random.randn(100, 2) + cluster_center_1
cluster_data_2 = np.random.randn(100, 2) + cluster_center_2
cluster_data_3 = np.random.randn(100, 2) + cluster_center_3

new_dataset = np.concatenate((cluster_data_1, cluster_data_2, cluster_data_3), axis=0)

plt.scatter(new_dataset[:,0], new_dataset[:,1], s=10)
plt.show()
```

# 1. Implementing the K-means algorithm

In this part you will implement the functions that make up the K-means algorithm, one by one. It is important to understand and read the documentation of each function, especially the dimensions of the data expected in the output.

## 1.1 Initializing the centroids

The first step of the algorithm consists of initializing the centroids randomly. This step is one of the most important in the algorithm, and a good initialization can greatly reduce the time to convergence. To initialize the centroids you can take into account prior knowledge about the data, even without knowing the number of groups or their distribution.
> Tip: https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.uniform.html

```
def calculate_initial_centers(dataset, k):
    """
    Initializes the initial centroids arbitrarily

    Arguments:
    dataset -- Data set - [m,n]
    k -- Desired number of centroids

    Returns:
    centroids -- List of computed centroids - [k,n]
    """

    #### CODE HERE ####
    minimum = np.min(dataset, axis=0)
    maximum = np.max(dataset, axis=0)
    shape = [k, dataset.shape[1]]
    centroids = np.random.uniform(minimum, maximum, size=shape)
    ### END OF CODE ###

    return centroids
```

Test the function you created and visualize the computed centroids.

```
k = 3
centroids = calculate_initial_centers(dataset, k)

plt.scatter(dataset[:,0], dataset[:,1], s=10)
plt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red', s=100)
plt.show()
```

## 1.2 Defining the clusters

In the second step of the algorithm, the group of each data point is determined according to the computed centroids.

### 1.2.1 Distance function

Code the Euclidean distance function between two points __(a, b)__, defined by the equation:

$$ dist(a, b) = \sqrt{(a_1-b_1)^{2}+(a_2-b_2)^{2}+ ... + (a_n-b_n)^{2}} $$

$$ dist(a, b) = \sqrt{\sum_{i=1}^{n}(a_i-b_i)^{2}} $$

```
def euclidean_distance(a, b):
    """
    Computes the Euclidean distance between points a and b

    Arguments:
    a -- A point in space - [1,n]
    b -- A point in space - [1,n]

    Returns:
    distance -- Euclidean distance between the points
    """

    #### CODE HERE ####
    distance = np.sqrt(np.sum(np.square(a-b)))
    ### END OF CODE ###

    return distance
```

Test the function you created.

```
a = np.array([1, 5, 9])
b = np.array([3, 7, 8])

if (euclidean_distance(a,b) == 3):
    print("Distance computed correctly!")
else:
    print("Incorrect distance function")
```

### 1.2.2 Finding the nearest centroid

Using the distance function coded above, complete the function below to find the centroid nearest to an arbitrary point.

> Tip: https://docs.scipy.org/doc/numpy/reference/generated/numpy.argmin.html

```
def nearest_centroid(a, centroids):
    """
    Computes the index of the centroid nearest to point a

    Arguments:
    a -- A point in space - [1,n]
    centroids -- List of centroids - [k,n]

    Returns:
    nearest_index -- Index of the nearest centroid
    """

    #### CODE HERE ####
    distance_zeros = np.zeros(centroids.shape[0])
    for index, centroid in enumerate(centroids):
        distance = euclidean_distance(a, centroid)
        distance_zeros[index] = distance
    nearest_index = np.argmin(distance_zeros)
    ### END OF CODE ###

    return nearest_index
```

Test the function you created.

```
# Pick a random point in the dataset
index = np.random.randint(dataset.shape[0])
a = dataset[index,:]

# Use the function to find the nearest centroid
idx_nearest_centroid = nearest_centroid(a, centroids)

# Plot the data ------------------------------------------------
plt.scatter(dataset[:,0], dataset[:,1], s=10)
# Plot the chosen random point in a different color
plt.scatter(a[0], a[1], c='magenta', s=30)

# Plot the centroids
plt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red', s=100)
# Plot the nearest centroid in a different color
plt.scatter(centroids[idx_nearest_centroid,0],
            centroids[idx_nearest_centroid,1],
            marker='^', c='springgreen', s=100)

# Draw a line from the chosen point to the selected centroid
plt.plot([a[0], centroids[idx_nearest_centroid,0]],
         [a[1], centroids[idx_nearest_centroid,1]], c='orange')
plt.annotate('CENTROID', (centroids[idx_nearest_centroid,0],
                          centroids[idx_nearest_centroid,1],))
plt.show()
```

### 1.2.3 Finding the nearest centroid for every point in the dataset

Using the previous function, which returns the index of the nearest centroid, compute the nearest centroid for every point in the dataset.
```
def all_nearest_centroids(dataset, centroids):
    """
    Computes the index of the nearest centroid for each point in the dataset

    Arguments:
    dataset -- Data set - [m,n]
    centroids -- List of centroids - [k,n]

    Returns:
    nearest_indexes -- Indices of the nearest centroids - [m,1]
    """

    #### CODE HERE ####
    nearest_indexes = np.zeros(dataset.shape[0])
    for index, a in enumerate(dataset):
        nearest_indexes[index] = nearest_centroid(a, centroids)
    ### END OF CODE ###

    return nearest_indexes
```

Test the function you created by visualizing the resulting clusters.

```
nearest_indexes = all_nearest_centroids(dataset, centroids)

plt.scatter(dataset[:,0], dataset[:,1], c=nearest_indexes)
plt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red', s=100)
plt.show()
```

## 1.3 Evaluation metric

After forming the clusters, how do we know whether the result is good? For that, we need to define an evaluation metric. The K-means algorithm aims to choose centroids that minimize the sum of squared distances between the points of a cluster and its centroid. This metric is known as __inertia__.

$$\sum_{i=0}^{n}\min_{c_j \in C}(||x_i - c_j||^2)$$

Inertia, or the within-cluster sum-of-squares criterion, can be seen as a measure of how internally coherent the clusters are, but it suffers from some drawbacks:

- Inertia assumes that clusters are convex and isotropic, which is not always the case. It may therefore represent elongated clusters, or manifolds with irregular shapes, poorly.
- Inertia is not a normalized metric: we only know that lower values are better and zero is optimal. But in very high-dimensional spaces, Euclidean distances tend to become inflated (this is an instance of the so-called "curse of dimensionality"). Running a dimensionality-reduction algorithm, such as PCA, can alleviate this problem and speed up the computations.

Source: https://scikit-learn.org/stable/modules/clustering.html

So that we can evaluate our clusters, code the inertia metric below; you may use the Euclidean distance function built earlier.

$$inertia = \sum_{i=0}^{n}\min_{c_j \in C} (dist(x_i, c_j))^2$$

```
def inertia(dataset, centroids, nearest_indexes):
    """
    Sum of squared distances of the samples to the nearest cluster center.

    Arguments:
    dataset -- Data set - [m,n]
    centroids -- List of centroids - [k,n]
    nearest_indexes -- Indices of the nearest centroids - [m,1]

    Returns:
    inertia -- Total sum of squared distances between the points of a
               cluster and its centroid
    """

    #### CODE HERE ####
    inertia = 0
    for index, centroid in enumerate(centroids):
        dataframe = dataset[nearest_indexes == index, :]
        for a in dataframe:
            inertia += np.square(euclidean_distance(a, centroid))
    ### END OF CODE ###

    return inertia
```

Test the coded function by running the code below.

```
tmp_data = np.array([[1,2,3],[3,6,5],[4,5,6]])
tmp_centroide = np.array([[2,3,4]])

tmp_nearest_indexes = all_nearest_centroids(tmp_data, tmp_centroide)
if inertia(tmp_data, tmp_centroide, tmp_nearest_indexes) == 26:
    print("Inertia computed correctly!")
else:
    print("Incorrect inertia function!")

# Use the function to check the inertia of your clusters
inertia(dataset, centroids, nearest_indexes)
```

## 1.4 Updating the clusters

In this step, the centroids are recomputed. The new value of each centroid is the mean of all the data points assigned to that cluster.
``` def update_centroids(dataset, centroids, nearest_indexes): """ Atualiza os centroids Argumentos: dataset -- Conjunto de dados - [m,n] centroids -- Lista com os centróides - [k,n] nearest_indexes -- Índices do centróides mais próximos - [m,1] Retornos: centroids -- Lista com centróides atualizados - [k,n] """ #### CODE HERE #### for index, centroid in enumerate(centroids): dataframe = dataset[nearest_indexes == index,:] if(dataframe.size != 0): centroids[index] = np.mean(dataframe, axis=0) ### END OF CODE ### return centroids ``` Visualize os clusters formados ``` nearest_indexes = all_nearest_centroids(dataset, centroids) # Plota os os cluster ------------------------------------------------ plt.scatter(dataset[:,0], dataset[:,1], c=nearest_indexes) # Plota os centroids plt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red', s=100) for index, centroid in enumerate(centroids): dataframe = dataset[nearest_indexes == index,:] for data in dataframe: plt.plot([centroid[0], data[0]], [centroid[1], data[1]], c='lightgray', alpha=0.3) plt.show() ``` Execute a função de atualização e visualize novamente os cluster formados ``` centroids = update_centroids(dataset, centroids, nearest_indexes) ``` # 2. K-means ## 2.1 Algoritmo completo Utilizando as funções codificadas anteriormente, complete a classe do algoritmo K-means! 
``` class KMeans(): def __init__(self, n_clusters=8, max_iter=300): self.n_clusters = n_clusters self.max_iter = max_iter def fit(self,X): # Inicializa os centróides self.cluster_centers_ = calculate_initial_centers(X, self.n_clusters) # Computa o cluster de cada amostra self.labels_ = all_nearest_centroids(X, self.cluster_centers_) # Calcula a inércia inicial old_inertia = inertia(X, self.cluster_centers_, self.labels_) for index in range(self.max_iter): #### CODE HERE #### self.cluster_centers_ = update_centroids(X, self.cluster_centers_, self.labels_) self.labels_ = all_nearest_centroids(X, self.cluster_centers_) self.inertia_ = inertia(X, self.cluster_centers_, self.labels_) if(old_inertia == self.inertia_): break else: old_inertia = self.inertia_ ### END OF CODE ### return self def predict(self, X): return all_nearest_centroids(X, self.cluster_centers_) ``` Verifique o resultado do algoritmo abaixo! ``` kmeans = KMeans(n_clusters=3) kmeans.fit(dataset) print("Inércia = ", kmeans.inertia_) plt.scatter(dataset[:,0], dataset[:,1], c=kmeans.labels_) plt.scatter(kmeans.cluster_centers_[:,0], kmeans.cluster_centers_[:,1], marker='^', c='red', s=100) plt.show() ``` ## 2.2 Comparar com algoritmo do Scikit-Learn Use a implementação do algoritmo do scikit-learn do K-means para o mesmo conjunto de dados. Mostre o valor da inércia e os conjuntos gerados pelo modelo. Você pode usar a mesma estrutura da célula de código anterior. > Dica: https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans ``` from sklearn.cluster import KMeans as scikit_KMeans scikit_kmeans = scikit_KMeans(n_clusters=3) scikit_kmeans.fit(dataset) print("Inércia = ", scikit_kmeans.inertia_) plt.scatter(dataset[:,0], dataset[:,1], c=scikit_kmeans.labels_) plt.scatter(scikit_kmeans.cluster_centers_[:,0], scikit_kmeans.cluster_centers_[:,1], c='red') plt.show() ``` # 3. Método do cotovelo Implemete o método do cotovelo e mostre o melhor K para o conjunto de dados. 
```
n_clusters_test = 8
n_sequence = np.arange(1, n_clusters_test + 1)
inertia_vec = np.zeros(n_clusters_test)

for index, n_cluster in enumerate(n_sequence):
    inertia_vec[index] = KMeans(n_clusters=n_cluster).fit(dataset).inertia_

plt.plot(n_sequence, inertia_vec, 'ro-')
plt.show()
```
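Beyond eyeballing the plot, the "best K" can also be picked programmatically. One possible heuristic (not part of the original exercise; the function name and example numbers are made up for illustration) is to choose the K where the inertia curve bends the most, i.e. where the second difference of the curve is largest:

```
import numpy as np

def elbow_k(inertias):
    """Pick k where the inertia curve bends the most
    (largest second difference). inertias[i] corresponds to k = i + 1."""
    d2 = np.diff(inertias, 2)      # discrete curvature of the curve
    return int(np.argmax(d2)) + 2  # map the index back to a value of k

# Synthetic inertia curve that flattens after k = 3
print(elbow_k(np.array([1000.0, 700.0, 150.0, 120.0, 100.0, 90.0])))
```

In practice one would pass `inertia_vec` from the cell above to `elbow_k`.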
## Regression Inference and OLS Asymptotics

In this notebook, we are going to study and demonstrate the use of *Python* to perform **statistical inference** to test our regression models. We are also going to explore **asymptotic theory** and understand how it allows us to relax some assumptions needed to derive the sampling distribution of estimators if the sample size is large enough.

**Topics:**
1. The *t* Test
2. Confidence Intervals
3. Linear Restrictions: *F* Tests
4. Simulation Exercises
5. LM Test

Section 4.1 of Wooldridge (2019) adds assumption MLR.6 (normal distribution of the error term) to the previous assumptions MLR.1 through MLR.5. Together, these assumptions constitute the classical linear model (CLM). The main additional result we get from this assumption is stated in Theorem 4.1: The OLS parameter estimators are normally distributed (conditional on the regressors $x_1, x_2, ..., x_k$). The benefit of this result is that it allows us to do statistical inference similar to the approaches discussed for the simple estimator of the mean of a normally distributed random variable.

### 1. The *t* Test

After the sign and magnitude of the estimated parameters, empirical research typically pays most attention to the results of *t* tests discussed in this section.

#### General Setup

An important type of hypothesis we are often interested in is of the form

$$H_0: \beta_j = a_j$$

where $a_j$ is some given number, very often $a_j = 0$. For the most common case of two-tailed tests, the alternative hypothesis is

$$H_1: \beta_j \neq a_j$$

and for one-tailed tests it is either one of

$$H_1: \beta_j > a_j$$

or

$$H_1: \beta_j < a_j$$

These hypotheses can be conveniently tested using a *t* test which is based on the test statistic

$$t = \frac{\hat{\beta}_j - a_j}{se(\hat{\beta}_j)}$$

If $H_0$ is in fact true and the CLM assumptions hold, then this statistic has a *t* distribution with *n - k - 1* degrees of freedom.
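As a minimal numeric sketch of this test statistic (the values for $\hat{\beta}_j$, its standard error, $a_j$, and the degrees of freedom below are made up for illustration):

```
import scipy.stats as stats

# Hypothetical values: estimate, standard error, hypothesized a_j, df = n - k - 1
beta_hat, se_beta, a_j, df = 0.75, 0.25, 0.0, 137

t = (beta_hat - a_j) / se_beta       # t statistic
p = 2 * stats.t.cdf(-abs(t), df)     # two-tailed p value
c = stats.t.ppf(1 - 0.05 / 2, df)    # critical value for alpha = 5%

print(t, p, abs(t) > c)              # reject H0 at the 5% level if True
```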
#### Standard Case

Very often, we want to test whether there is any relation at all between the dependent variable *y* and a regressor $x_j$ and do not want to impose a sign on the partial effect *a priori*. This is a mission for the standard two-sided *t* test with the hypothetical value $a_j = 0$, so

$$H_0: \beta_j = 0$$

$$H_1: \beta_j \neq 0$$

$$t_{\hat{\beta}_j} = \frac{\hat{\beta}_j}{se(\hat{\beta}_j)}$$

The subscript on the *t* statistic indicates that this is **the** *t* value for $\hat{\beta}_j$ for this frequent version of the test. Under $H_0$, it has the *t* distribution with *n - k - 1* degrees of freedom, implying that the probability that $|t_{\hat{\beta}_j}| > c$ is equal to $\alpha$ if *c* is the $1 - \frac{\alpha}{2}$ quantile of this distribution. If $\alpha$ is our significance level (e.g. $\alpha = 5\%$), then we reject $H_0$ if $|t_{\hat{\beta}_j}| > c$ in our sample. For the typical significance level $\alpha = 5\%$, the critical value *c* will be around 2 for reasonably large degrees of freedom and approach the counterpart of 1.96 from the standard normal distribution in very large samples.

The *p* value indicates the smallest value of the significance level $\alpha$ for which we would still reject $H_0$ using our sample. So it is the probability for a random variable *T* with the respective *t* distribution that $|T| > |t_{\hat{\beta}_j}|$, where $t_{\hat{\beta}_j}$ is the value of the *t* statistic in our particular sample. In our two-tailed test, it can be calculated as

$$p_{\hat{\beta}_j} = 2 \cdot F_{t_{n-k-1}}(-|t_{\hat{\beta}_j}|)$$

where $F_{t_{n-k-1}}(\cdot)$ is the CDF of the *t* distribution with *n - k - 1* degrees of freedom. If our software provides us with the relevant *p* values, they are easy to use: We reject $H_0$ if $p_{\hat{\beta}_j} \leq \alpha$.
Since this standard case of a *t* test is so common, **statsmodels** provides us with the relevant *t* and *p* values directly in the **summary** of the estimation results we already saw in the previous notebooks. The regression table includes for all regressors and the intercept:

- parameter estimates and standard errors
- test statistics $t_{\hat{\beta}_j}$ in column **t**
- respective *p* values $p_{\hat{\beta}_j}$ in the column **P>|t|**
- respective 95% confidence intervals in columns **[0.025 and 0.975]**

#### Wooldridge, Example 4.3: Determinants of College GPA

We have repeatedly used the data set *GPA1* in the previous notebooks. This example uses three regressors and estimates a regression model of the form

$$colGPA = \beta_0 + \beta_1 \cdot hsGPA + \beta_2 \cdot ACT + \beta_3 \cdot skipped + u$$

For the critical values of the *t* test, using the normal approximation instead of the exact *t* distribution with *n - k - 1* = 137 d.f. doesn't make much of a difference:

```
# Import Modules
import scipy.stats as stats
import numpy as np

# CV for alpha = 5% and 1% using the t distribution with 137 d.f.
alpha = np.array([0.05, 0.01])
cv_t = stats.t.ppf(1 - alpha / 2, 137)
print(f'Critical Values by t Distribution: {cv_t}\n')

# CV for alpha = 5% and 1% using the normal approximation
cv_n = stats.norm.ppf(1 - alpha / 2)
print(f'Critical Values by Normal Distribution: {cv_n}\n')
```

We present the standard **summary** which directly contains all the information to test the hypotheses for all parameters. The *t* statistics for all coefficients except $\beta_2$ are larger in absolute value than the *critical value* c = 2.61 (or c = 2.58 using the normal approximation) for $\alpha$ = 1%. So we would reject $H_0$ for all usual significance levels. By construction, we draw the same conclusions from the *p* values.

In order to confirm that **statsmodels** uses exactly the formulas of Wooldridge (2019), we next reconstruct the *t* and *p* values manually.
We extract the coefficients (**params**) and standard errors (**bse**) from the regression results, and simply apply the $t_{\hat{\beta}_j}$ and $p_{\hat{\beta}_j}$ equations.

```
# Import Modules
import wooldridge as woo
import scipy.stats as stats
import numpy as np
import statsmodels.formula.api as smf

# Import data set gpa1
gpa1 = woo.dataWoo('gpa1')

# Store and display results:
reg = smf.ols(formula = 'colGPA ~ hsGPA + ACT + skipped', data = gpa1)
results = reg.fit()
print(f'Regression Summary: \n{results.summary()}\n')

# Manually confirm the formulas:
# Extract coefficients and SE
b = results.params
se = results.bse

# Reproduce t statistic
tstat = b / se
print(f't Statistics: \n{tstat}\n')

# Reproduce p value
pval = 2 * stats.t.cdf(-abs(tstat), 137)
print(f'P Value: \n{pval}\n')
```

#### Other Hypotheses

For a one-tailed test, the critical value *c* of the *t* test and the *p* values have to be adjusted appropriately. Wooldridge (2019) provides a general discussion in Section 4.2. For testing the null hypothesis $H_0: \beta_j = a_j$, the tests for the three common alternative hypotheses are summarized in the following table.

**One- and Two-tailed *t* Tests for $H_0: \beta_j = a_j$**

| $H_1$: | $\beta_j \neq a_j$ | $\beta_j > a_j$ | $\beta_j < a_j$ |
| :---: | :---: | :---: | :---: |
| *c* = quantile | $1 - \frac{\alpha}{2}$ | $1 - \alpha$ | $1 - \alpha$ |
| Reject $H_0$ if | $|t_{\hat{\beta}_j}| > c$ | $t_{\hat{\beta}_j} > c$ | $t_{\hat{\beta}_j} < -c$ |
| *p* value | $2 \cdot F_{t_{n - k - 1}}(-|t_{\hat{\beta}_j}|)$ | $F_{t_{n - k - 1}}(-t_{\hat{\beta}_j})$ | $F_{t_{n - k - 1}}(t_{\hat{\beta}_j})$ |

Given the standard regression output including the *p* value for two-sided tests $p_{\hat{\beta}_j}$, we can easily do one-sided *t* tests for the null hypothesis $H_0: \beta_j = 0$ in two steps:

* Is $\hat{\beta}_j$ positive (if $H_1: \beta_j > 0$) or negative (if $H_1: \beta_j < 0$)?
  - No -> Do not reject $H_0$ since this cannot be evidence against $H_0$.
  - Yes -> The relevant *p* value is half of the reported $p_{\hat{\beta}_j}$.
    - Reject $H_0$ if $p = \frac{1}{2} p_{\hat{\beta}_j} < \alpha$.

#### Wooldridge, Example 4.1: Hourly Wage Equation

We have already estimated the wage equation

$$log(wage) = \beta_0 + \beta_1 \cdot educ + \beta_2 \cdot exper + \beta_3 \cdot tenure + u$$

Now we are ready to test $H_0: \beta_2 = 0$ against $H_1: \beta_2 > 0$. For the critical values of the *t* test, using the normal approximation instead of the exact *t* distribution with *n - k - 1* = 522 d.f. doesn't make any relevant difference:

```
# Import Modules
import scipy.stats as stats
import numpy as np

# CV for alpha = 5% and 1% using the t distribution with 522 d.f.
alpha = np.array([0.05, 0.01])
cv_t = stats.t.ppf(1 - alpha, 522)
print(f'Critical Values by t Distribution: {cv_t}\n')

# CV for alpha = 5% and 1% using the normal approximation
cv_n = stats.norm.ppf(1 - alpha)
print(f'Critical Values by Normal Distribution: {cv_n}\n')
```

In this example, we show the standard regression output. The reported *t* statistic for the parameter of *exper* is $t_{\hat{\beta}_2}$ = 2.391, which is larger than the critical value *c* = 2.33 for the significance level $\alpha$ = 1%, so we reject $H_0$. By construction, we get the same answer from looking at the *p* value. Like always, the reported $p_{\hat{\beta}_j}$ value is for a two-sided test, so we have to divide it by 2. The resulting value $p = \frac{0.017}{2} = 0.0085 < 0.01$, so we reject $H_0$ using an $\alpha$ = 1% significance level.

```
# Import Modules
import wooldridge as woo
import numpy as np
import statsmodels.formula.api as smf

# Import data set 'wage1'
wage1 = woo.dataWoo('wage1')

# Construct the regression model and print the model summary
reg = smf.ols(formula = 'np.log(wage) ~ educ + exper + tenure', data = wage1)
results = reg.fit()
print(f'Regression Summary Output: \n{results.summary()}\n')
```

### 2. Confidence Intervals

We have already looked at confidence intervals (CI) for the mean of a normally distributed random variable in the previous notebook. CI for the regression parameters are equally easy to construct and closely related to the *t* test. Wooldridge (2019, Section 4.3) provides a succinct discussion. The 95% confidence interval for parameter $\beta_j$ is simply

$$\hat{\beta}_j \pm c \cdot se(\hat{\beta}_j)$$

where *c* is the same critical value as for the two-sided *t* test using a significance level $\alpha$ = 5%. Wooldridge (2019) shows examples of how to manually construct these CI. **statsmodels** provides the 95% confidence intervals for all parameters in the regression table. If you use the method **conf_int()** on the object with the regression results, you can compute other significance levels. The example below demonstrates the procedure.

#### Wooldridge, Example 4.8: Model of R&D Expenditures

We study the relationship between the R&D expenditures of a firm, its size, and the profit margin for a sample of 32 firms in the chemical industry. The regression equation is

$$log(rd) = \beta_0 + \beta_1 \cdot log(sales) + \beta_2 \cdot profmarg + u$$

Here, we present the regression results as well as the 95% and 99% CI. See Wooldridge (2019) for the manual calculation of the CI and comments on the results.

```
# Import Modules
import wooldridge as woo
import numpy as np
import statsmodels.formula.api as smf

# Import data set 'rdchem'
rdchem = woo.dataWoo('rdchem')

# OLS regression
reg = smf.ols(formula = 'np.log(rd) ~ np.log(sales) + profmarg', data = rdchem)
results = reg.fit()
print(f'Regression Summary Output: \n{results.summary()}\n')

# 95% and 99% Confidence Interval:
ci95 = results.conf_int(0.05)
ci99 = results.conf_int(0.01)
print(f'95% Confidence Interval: {ci95}\n')
print(f'99% Confidence Interval: {ci99}\n')
```

### 3. Linear Restrictions: *F* Tests

Wooldridge (2019, Sections 4.4 and 4.5) discusses more general tests than those for the null hypotheses for individual estimated parameters. They can involve one or more hypotheses involving one or more population parameters in a linear fashion.

We follow the illustrative example of Wooldridge (2019, Section 4.5) and analyze major league baseball players' salaries using the data set *MLB1* and the regression model

$$log(salary) = \beta_0 + \beta_1 \cdot years + \beta_2 \cdot gamesyr + \beta_3 \cdot bavg + \beta_4 \cdot hrunsyr + \beta_5 \cdot rbisyr + u$$

We want to test whether the performance measures batting average (*bavg*), home runs per year (*hrunsyr*), and runs batted in per year (*rbisyr*) have an impact on the salary once we control for the number of years as an active player (*years*) and the number of games played per year (*gamesyr*). So we state our null hypothesis as $H_0: \beta_3 = \beta_4 = \beta_5 = 0$ versus $H_1: H_0$ is false, i.e. at least one of the performance measures matters.

The test statistic of the *F* test is based on the relative difference between the sums of squared residuals in the general (unrestricted) model and in a restricted model in which the hypotheses are imposed, $SSR_{ur}$ and $SSR_{r}$, respectively. In our example, the restricted model is one in which *bavg*, *hrunsyr*, and *rbisyr* are excluded as regressors. If both models involve the same dependent variable, it can also be written in terms of the coefficient of determination in the unrestricted and the restricted model, $R_{ur}^2$ and $R_{r}^2$, respectively:

$$F = \frac{SSR_{r} - SSR_{ur}}{SSR_{ur}} \cdot \frac{n - k - 1}{q} = \frac{R_{ur}^2 - R_{r}^2}{1 - R_{ur}^2} \cdot \frac{n - k - 1}{q}$$

where *q* is the number of restrictions (in our example, *q* = 3). Intuitively, if the null hypothesis is correct, then imposing it as a restriction will not lead to a significant drop in the model fit and the *F* test statistic should be relatively small.
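As a quick numeric check of the $R^2$-based version of the formula, here is the computation using the rounded fit measures Wooldridge (2019) reports for this example ($R_{ur}^2 \approx 0.6278$, $R_{r}^2 \approx 0.5971$); because of the rounding, the result differs slightly from the exact *F* = 9.55:

```
n, k, q = 353, 5, 3            # observations, regressors, restrictions
r2_ur, r2_r = 0.6278, 0.5971   # rounded R-squared values from Wooldridge (2019)

F = (r2_ur - r2_r) / (1 - r2_ur) * (n - k - 1) / q
print(round(F, 2))             # close to the exact value of 9.55
```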
It can be shown that under the CLM assumptions and the null hypothesis, this statistic has an *F* distribution with *q* numerator and *n - k - 1* denominator degrees of freedom. We reject $H_0$ if *F* > *c*, where the critical value *c* is the $1 - \alpha$ quantile of the relevant $F_{q, n-k-1}$ distribution. In our example, *n* = 353, *k* = 5, *q* = 3. So with $\alpha$ = 1%, the critical value is 3.84 and can be calculated using the **f.ppf()** function in **scipy.stats** as

``` Python
stats.f.ppf(1 - 0.01, 3, 347)
```

Here, we show the calculation for this example. The result is *F* = 9.55 > 3.84, so we clearly reject $H_0$. We also calculate the *p* value for this test. It is $p = 4.47 \cdot 10^{-6} = 0.00000447$, so we reject $H_0$ for any reasonable significance level.

```
# Import Modules
import wooldridge as woo
import numpy as np
import statsmodels.formula.api as smf
import scipy.stats as stats

# Import data set "mlb1"
mlb1 = woo.dataWoo('mlb1')
n = mlb1.shape[0]

# Unrestricted OLS Regression
reg_ur = smf.ols(
    formula = 'np.log(salary) ~ years + gamesyr + bavg + hrunsyr + rbisyr',
    data = mlb1)
fit_ur = reg_ur.fit()
r2_ur = fit_ur.rsquared
print(f'R Square for Unrestricted Model: {r2_ur}\n')

# Restricted OLS Regression
reg_r = smf.ols(formula = 'np.log(salary) ~ years + gamesyr', data = mlb1)
fit_r = reg_r.fit()
r2_r = fit_r.rsquared
print(f'R Square for Restricted Model: {r2_r}\n')

# F statistic
fstat = (r2_ur - r2_r) / (1 - r2_ur) * (n - 6) / 3
print(f'F Statistic: {fstat}\n')

# Critical Value for alpha = 1%
cv = stats.f.ppf(1 - 0.01, 3, 347)
print(f'Critical Value at 1% Significance Level: {cv}\n')

# p value = 1 - cdf of the appropriate F distribution
fpval = 1 - stats.f.cdf(fstat, 3, 347)
print(f'p-value for the F Test: {fpval}\n')
```

It should not be surprising that there is a more convenient way to do this. The module **statsmodels** provides a method **f_test()** which is well suited for these kinds of tests.
Given the object with regression results, for example **results**, an *F* test is conducted with

``` Python
hypotheses = ['var_name1 = 0', 'var_name2 = 0']
ftest = results.f_test(hypotheses)
```

where **hypotheses** collects the null hypotheses to be tested. It is a list of length *q* where each restriction is described as a text in which the variable name takes the place of its parameter. In our example, $H_0$ is that the three parameters of *bavg*, *hrunsyr*, and *rbisyr* are all equal to zero, which translates as **hypotheses = ['bavg = 0', 'hrunsyr = 0', 'rbisyr = 0']**. We implement this for the same test as the manual calculations done in the previous example, and it results in exactly the same *F* statistic and *p* value.

```
# Import Modules
import wooldridge as woo
import numpy as np
import statsmodels.formula.api as smf

# Import data set "mlb1"
mlb1 = woo.dataWoo('mlb1')

# OLS Regression
reg = smf.ols(
    formula = 'np.log(salary) ~ years + gamesyr + bavg + hrunsyr + rbisyr',
    data = mlb1)
results = reg.fit()

# Automated F test
hypotheses = ['bavg = 0', 'hrunsyr = 0', 'rbisyr = 0']
ftest = results.f_test(hypotheses)
fstat = ftest.statistic[0][0]
fpval = ftest.pvalue
print(f'F Statistic: {fstat}\n')
print(f'p value for the F Test: {fpval}\n')
```

This method can also be used to test more complicated null hypotheses. For example, suppose a sports reporter claims that the batting average plays no role and that the number of home runs has twice the impact as the number of runs batted in. This translates (using variable names instead of numbers as subscripts) as $H_0: \beta_{bavg} = 0, \beta_{hrunsyr} = 2 \cdot \beta_{rbisyr}$. For *Python*, we translate it as **hypotheses = ['bavg = 0', 'hrunsyr = 2 * rbisyr']**. The output shows the results of this test. The *p* value is 0.6, so we cannot reject $H_0$.
```
# Automated F test
hypotheses = ['bavg = 0', 'hrunsyr = 2 * rbisyr']
ftest = results.f_test(hypotheses)
fstat = ftest.statistic[0][0]
fpval = ftest.pvalue
print(f'F Statistic: {fstat}\n')
print(f'p value for the F Test: {fpval}\n')
```

Both the most important and the most straightforward *F* test is the one for **overall significance**. The null hypothesis is that all parameters except for the constant are equal to zero. If this null hypothesis holds, the regressors do not have any joint explanatory power for *y*. The result of such a test is automatically included in the upper part of the **summary** output as **F-statistic** (F statistic) and **Prob(F-statistic)** (*p* value).

******

Asymptotic theory allows us to relax some assumptions needed to derive the sampling distribution of estimators if the sample size is large enough. For running a regression in a software package, it does not matter whether we rely on the stronger assumptions or on asymptotic arguments, so we don't have to learn anything new regarding the implementation. Instead, we aim to improve our intuition regarding the workings of asymptotics by looking at some simulation exercises. Section 5 briefly discusses the implementation of the regression-based Lagrange multiplier (LM) test presented by Wooldridge (2019, Section 5.2).

### 4. Simulation Exercises

In the previous notebook, we already used Monte Carlo simulation methods to study the mean and variance of OLS estimators under the assumptions SLR.1 - SLR.5. Here, we will conduct similar experiments but will look at the whole sampling distribution of OLS estimators. Remember that the sampling distribution is important since confidence intervals, *t* and *F* tests and other tools of inference rely on it. Theorem 4.1 of Wooldridge (2019) gives the normal distribution of the OLS estimators (conditional on the regressors) based on assumptions MLR.1 through MLR.6.
In contrast, Theorem 5.2 states that *asymptotically*, the distribution is normal by assumptions MLR.1 through MLR.5 only. Assumption MLR.6 - the normal distribution of the error terms - is not required if the sample is large enough to justify asymptotic arguments. In other words: In small samples, the parameter estimates have a normal sampling distribution only if

- the error terms are normally distributed and
- we condition on the regressors

To see how this works out in practice, we set up a series of simulation experiments. The first case simulates a model consistent with MLR.1 through MLR.6 and keeps the regressors fixed. Theory suggests that the sampling distribution of $\hat{\beta}$ is normal, independent of the sample size. The second case simulates a violation of assumption MLR.6. Normality of $\hat{\beta}$ only holds asymptotically, so for small sample sizes we suspect a violation. Finally, we will look closer into what "conditional on the regressors" means and simulate a (very plausible) violation of this in the last case.

#### Normally Distributed Error Terms (Case 1)

Here, we draw 10,000 samples of a given size (which has to be stored in variable *n* before) from a population that is consistent with assumptions MLR.1 through MLR.6. The error terms are specified to be standard normal. The slope estimate $\hat{\beta}_1$ is stored for each of the generated samples in the array **b1**.

```
# Import Modules
import pandas as pd
import numpy as np
import statsmodels.formula.api as smf
import scipy.stats as stats

# Set the random seed
np.random.seed(835742)

# Set sample size and number of simulations
n = 100
r = 10000

# Set true parameters
beta0 = 1
beta1 = 0.5
sx = 1
ex = 4

# Initialize b1 to store results later
b1 = np.empty(r)

# Draw a sample of x, fixed over replications
x = stats.norm.rvs(ex, sx, size = n)

# Repeat the simulation r times
for i in range(r):
    # Draw a sample of u (std. normal)
    u = stats.norm.rvs(0, 1, size = n)
    y = beta0 + beta1 * x + u
    df = pd.DataFrame({"y": y, "x": x})

    # Estimate conditional OLS
    reg = smf.ols(formula = 'y ~ x', data = df)
    results = reg.fit()
    b1[i] = results.params['x']
```

The code was run for different sample sizes. The density estimate together with the corresponding normal density are shown in Figure 5.1. Not surprisingly, all distributions look very similar to the normal distribution - that is what Theorem 4.1 predicted. Note that the fact that the sampling variance decreases as *n* rises is only obvious if we pay attention to the different scales of the axes.

**Figure 5.1:** Density of $\hat{\beta}_1$ with Different Sample Sizes: Normal Error Terms

| | |
| :---: | :---: |
| n = 5 | n = 10 |
| ![alt](images/MCSim-olsasy-norm-n5.png) | ![alt](images/MCSim-olsasy-norm-n10.png) |
| n = 100 | n = 1000 |
| ![alt](images/MCSim-olsasy-norm-n100.png) | ![alt](images/MCSim-olsasy-norm-n1000.png) |

#### Non-Normal Error Terms (Case 2)

The next step is to simulate a violation of assumption MLR.6. In order to implement a rather drastic violation of the normality assumption, we use a "standardized" $\chi^2$ distribution with one degree of freedom. More specifically, let *v* be distributed as $\chi_{[1]}^2$. Because this distribution has a mean of 1 and a variance of 2, the error term $u = \frac{v - 1}{\sqrt{2}}$ has a mean of 0 and a variance of 1. This simplifies the comparison to the exercise with the standard normal errors above. Figure 5.2 plots the density functions of the standard normal distribution used above and the "standardized" $\chi^2$ distribution. Both have a mean of 0 and a variance of 1 but very different shapes.
The only line of code we changed compared to the previous case is the sampling of *u*, where we replace drawing from a standard normal distribution using **u = stats.norm.rvs(0, 1, size = n)** with sampling from the standardized $\chi_{[1]}^2$ distribution with

``` Python
u = (stats.chi2.rvs(1, size = n) - 1) / np.sqrt(2)
```

**Figure 5.2:** Density Functions of the Simulated Error Terms

![alt](images/MCSim-olsasy-stdchisq.png)

For each of the same sample sizes used above, we again estimate the slope parameter for 10,000 samples. The densities of $\hat{\beta}_1$ are plotted in Figure 5.3 together with the respective normal distributions with the corresponding variances. For the small sample sizes, the deviation from the normal distribution is strong. Note that the dashed normal distributions have the same mean and variance. The main difference is the kurtosis, which is larger than 8 in the simulations for n = 5, compared to the normal distribution for which the kurtosis is equal to 3. For larger sample sizes, the sampling distribution of $\hat{\beta}_1$ converges to the normal distribution. For *n* = 100, the difference is much smaller but still discernible. For *n* = 1,000, it cannot be detected anymore in our simulation exercise. How large the sample needs to be depends among other things on the severity of the violations of MLR.6. If the distribution of the error terms is not as extremely non-normal as in our simulation, smaller sample sizes like the rule of thumb *n* = 30 might suffice for valid asymptotics.
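The claim that the standardized $\chi_{[1]}^2$ draws have mean 0 and variance 1 - while remaining strongly right-skewed - is easy to verify by simulation (a quick sanity check, not part of the original exercise):

```
import numpy as np
import scipy.stats as stats

np.random.seed(123)
v = stats.chi2.rvs(1, size = 1000000)  # chi-squared with 1 d.f.
u = (v - 1) / np.sqrt(2)               # standardized errors

# Sample mean and variance should be close to 0 and 1, respectively,
# but the distribution remains strongly right-skewed
print(u.mean(), u.var(), stats.skew(u))
```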
**Figure 5.3:** Density of $\hat{\beta}_1$ with Different Sample Sizes: Non-Normal Error Terms

| | |
| :---: | :---: |
| n = 5 | n = 10 |
| ![alt](images/MCSim-olsasy-chisq-n5.png) | ![alt](images/MCSim-olsasy-chisq-n10.png) |
| n = 100 | n = 1000 |
| ![alt](images/MCSim-olsasy-chisq-n100.png) | ![alt](images/MCSim-olsasy-chisq-n1000.png) |

```
# Import Modules
import pandas as pd
import numpy as np
import statsmodels.formula.api as smf
import scipy.stats as stats

# Set the random seed
np.random.seed(835742)

# Set sample size and number of simulations
n = 100
r = 10000

# Set true parameters
beta0 = 1
beta1 = 0.5
sx = 1
ex = 4

# Initialize b1 to store results later
b1 = np.empty(r)

# Draw a sample of x, fixed over replications
x = stats.norm.rvs(ex, sx, size = n)

# Repeat the simulation r times
for i in range(r):
    # Draw a sample of u (standardized chi-squared)
    u = (stats.chi2.rvs(1, size = n) - 1) / np.sqrt(2)
    y = beta0 + beta1 * x + u
    df = pd.DataFrame({"y": y, "x": x})

    # Estimate conditional OLS
    reg = smf.ols(formula = 'y ~ x', data = df)
    results = reg.fit()
    b1[i] = results.params['x']
```

#### Unconditioning on the Regressors (Case 3)

There is a more subtle difference between the finite-sample results regarding the variance (Theorem 3.2) and distribution (Theorem 4.1) on the one hand and the corresponding asymptotic results (Theorem 5.2) on the other. The former results describe the sampling distribution "conditional on the sample values of the independent variables". This implies that as we draw different samples, the values of the regressors $x_1, x_2, x_3, ..., x_k$ remain the same and only the error terms and dependent variables change. In our previous simulation exercises, this is implemented by making random draws of *x* outside of the simulation loop. This is a realistic description of how data is generated only in some simple experiments: The experimenter chooses the regressors for the sample, conducts the experiment, and measures the dependent variable.
In most applications we are concerned with, this is an unrealistic description of how we obtain our data. If we draw a sample of individuals, both their dependent and independent variables differ across samples. In these cases, the distribution "conditional on the sample values of the independent variables" can only serve as an approximation of the actual distribution with varying regressors. For large samples, this distinction is irrelevant and the asymptotic distribution is the same.

Let's see how this plays out in an example. The code in this example differs from case 1 only by moving the generation of the regressors into the loop in which the 10,000 samples are generated. This is inconsistent with Theorem 4.1, so for small samples, we don't know the distribution of $\hat{\beta}_1$. Theorem 5.2 is applicable, so for (very) large samples, we know that the estimator is normally distributed. Figure 5.4 shows the distribution of the 10,000 estimates generated for *n* = 5, 10, 100, and 1,000. As we expected from theory, the distribution is (close to) normal for large samples. For small samples, it deviates quite a bit. The kurtosis is 8.7 for a sample size of *n* = 5, which is far away from the kurtosis of 3 of a normal distribution.
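The kurtosis values quoted here can be reproduced from the simulated **b1** array with **scipy.stats.kurtosis** (`fisher=False` yields the Pearson definition, which equals 3 for a normal distribution). Since the **b1** array itself requires running the full simulation, here is a minimal check on plain normal draws:

```
import numpy as np
import scipy.stats as stats

np.random.seed(835742)
draws = stats.norm.rvs(0, 1, size = 100000)

# Pearson kurtosis (fisher=False): 3 for the normal distribution;
# for the simulated b1 with n = 5 it comes out far larger (around 8.7)
print(stats.kurtosis(draws, fisher = False))
```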
**Figure 5.4:** Density of $\hat{\beta}_1$ with Different Sample Sizes: Varying Regressors

| | |
| :---: | :---: |
| n = 5 | n = 10 |
| ![alt](images/MCSim-olsasy-uncond-n5.png) | ![alt](images/MCSim-olsasy-uncond-n10.png) |
| n = 100 | n = 1000 |
| ![alt](images/MCSim-olsasy-uncond-n100.png) | ![alt](images/MCSim-olsasy-uncond-n1000.png) |

```
# Import Modules
import pandas as pd
import numpy as np
import statsmodels.formula.api as smf
import scipy.stats as stats

# Set the random seed
np.random.seed(835742)

# Set sample size and number of simulations
n = 100
r = 10000

# Set true parameters
beta0 = 1
beta1 = 0.5
sx = 1
ex = 4

# Initialize b1 to store results later
b1 = np.empty(r)

# Repeat the simulation r times
for i in range(r):
    # Draw a sample of x, varying over replications
    x = stats.norm.rvs(ex, sx, size = n)

    # Draw a sample of u (std. normal)
    u = stats.norm.rvs(0, 1, size = n)
    y = beta0 + beta1 * x + u
    df = pd.DataFrame({"y": y, "x": x})

    # Estimate OLS
    reg = smf.ols(formula = 'y ~ x', data = df)
    results = reg.fit()
    b1[i] = results.params['x']
```

### 5. LM Test

As an alternative to the *F* tests discussed previously, *LM* tests for the same sort of hypotheses can be very useful with large samples. In the linear regression setup, the test statistic is

$$LM = n \cdot R_{\tilde{u}}^2$$

where *n* is the sample size and $R_{\tilde{u}}^2$ is the usual $R^2$ statistic in a regression of the residuals $\tilde{u}$ from the restricted model on the unrestricted set of regressors. Under the null hypothesis, it is asymptotically distributed as $\chi_{q}^2$ with *q* denoting the number of restrictions. Details are given in Wooldridge (2019, Section 5.2). The implementation in **statsmodels** is straightforward if we remember that the residuals can be obtained with the **resid** attribute.

#### Wooldridge, Example 5.3: Economic Model of Crime

We analyze the same data on the number of arrests as in the crime example.
The unrestricted regression model is

$$narr86 = \beta_0 + \beta_1 pcnv + \beta_2 avgsen + \beta_3 tottime + \beta_4 ptime86 + \beta_5 qemp86 + u$$

The dependent variable *narr86* reflects the number of times a man was arrested and is explained by the proportion of prior arrests that led to a conviction (*pcnv*), previous average sentences (*avgsen*), the time spent in prison before 1986 (*tottime*), the number of months in prison in 1986 (*ptime86*), and the number of quarters unemployed in 1986 (*qemp86*). The joint null hypothesis is

$$H_0: \beta_2 = \beta_3 = 0$$

so the restricted set of regressors excludes *avgsen* and *tottime*. In this example, we show an implementation of this *LM* test. The restricted model is estimated and its residuals **utilde =** $\tilde{u}$ are calculated. They are regressed on the unrestricted set of regressors. The $R^2$ from this regression is 0.001494, so the *LM* test statistic is calculated to be around *LM* = 0.001494 $\cdot$ 2725 = 4.071. This is smaller than the critical value for a significance level of $\alpha$ = 10%, so we do not reject the null hypothesis.

We can also easily calculate the *p* value using the $\chi^2$ CDF **chi2.cdf()**. It turns out to be 0.1306. The same hypothesis can be tested with an *F* test using the method **f_test()**. In this example, it delivers the same *p* value up to three digits.

```
# Import Modules
import wooldridge as woo
import statsmodels.formula.api as smf
import scipy.stats as stats

# Import the crime1 data set
crime1 = woo.dataWoo('crime1')

# 1. Estimate restricted model
reg_r = smf.ols(formula = 'narr86 ~ pcnv + ptime86 + qemp86', data = crime1)
fit_r = reg_r.fit()
r2_r = fit_r.rsquared
print(f'R^2 for Restricted Model: {r2_r}\n')

# 2. Regression of residuals from restricted model
crime1['utilde'] = fit_r.resid
reg_LM = smf.ols(formula = 'utilde ~ pcnv + ptime86 + qemp86 + avgsen + tottime',
                 data = crime1)
fit_LM = reg_LM.fit()
r2_LM = fit_LM.rsquared
print(f'R^2 for Residuals from Restricted Model: {r2_LM}\n')

# 3. Calculation of LM test statistic
LM = r2_LM * fit_LM.nobs
print(f'LM Test Statistic: {LM}\n')

# 4. Critical value from chi-squared distribution, alpha = 10%
cv = stats.chi2.ppf(1 - 0.10, 2)
print(f'Critical Value at 10% Significance Level: {cv}\n')

# 5. p value (alternative to critical value)
pval = 1 - stats.chi2.cdf(LM, 2)
print(f'p value for the Test: {pval}\n')

# 6. Compare to F test
reg = smf.ols(formula = 'narr86 ~ pcnv + ptime86 + qemp86 + avgsen + tottime',
              data = crime1)
results = reg.fit()
hypotheses = ['avgsen = 0', 'tottime = 0']
ftest = results.f_test(hypotheses)
fstat = ftest.statistic[0][0]
fpval = ftest.pvalue
print(f'F Statistic: {fstat}\n')
print(f'p value: {fpval}\n')
```
# Benchmarking cell2location pyro model using softplus/exp for scales using 5x larger data ``` import sys, ast, os import scanpy as sc import anndata import pandas as pd import numpy as np import os import matplotlib.pyplot as plt import matplotlib as mpl data_type='float32' import cell2location_model from matplotlib import rcParams rcParams['pdf.fonttype'] = 42 # enables correct plotting of text import seaborn as sns ``` ## Read datasets and train cell2location Data can be downloaded as follows: ```bash wget https://cell2location.cog.sanger.ac.uk/paper/synthetic_with_tissue_zones/synth_adata_real_mg_20210131.h5ad wget https://cell2location.cog.sanger.ac.uk/paper/synthetic_with_tissue_zones/training_5705STDY8058280_5705STDY8058281_20210131.h5ad ``` ``` sc_data_folder = '/nfs/team205/vk7/sanger_projects/cell2location_paper/notebooks/selected_data/mouse_visium_snrna/' sp_data_folder = '/nfs/team205/vk7/sanger_projects/cell2location_paper/notebooks/selected_results/benchmarking/with_tissue_zones/data/' results_folder = '/nfs/team205/vk7/sanger_projects/cell2location_paper/notebooks/selected_results/benchmarking/with_tissue_zones/real_mg/pyro/' # read synthetic data adata_vis = anndata.read(f'{sp_data_folder}synth_adata_real_mg_20210131.h5ad') adata_vis.uns['spatial'] = {'x': 'y'} # select 5 samples (each with 2500 observations) adata_vis = adata_vis[adata_vis.obs['sample'].isin([f'exper{i}' for i in range(5)]),:] # read scRNA-seq data with reference cell types adata_snrna_raw = anndata.read(f'{sp_data_folder}training_5705STDY8058280_5705STDY8058281_20210131.h5ad') import scipy adata_snrna_raw.X = scipy.sparse.csr_matrix(adata_snrna_raw.X) adata_vis.X = scipy.sparse.csr_matrix(adata_vis.X) ``` Add counts matrix as `adata.raw` ``` adata_snrna_raw.raw = adata_snrna_raw adata_vis.raw = adata_vis # compute average for each cluster aver = cell2location_model.get_cluster_averages(adata_snrna_raw, 'annotation_1') # make sure the order of gene matches between aver and x_data 
aver = aver.loc[adata_vis.var_names,:] # generate one-hot encoded matrix telling which obs belong to which samples obs2sample_df = pd.get_dummies(adata_vis.obs['sample']) adata_vis ``` ## Model training ``` results_folder #import torch #torch.set_default_tensor_type(torch.cuda.FloatTensor) mod = cell2location_model.LocationModelLinearDependentWMultiExperiment( n_obs=adata_vis.n_obs, n_vars=adata_vis.n_vars, n_factors=aver.shape[1], n_exper=obs2sample_df.shape[1], batch_size=None, cell_state_mat=aver.values.astype(data_type), ) from pyro.distributions import constraints mod.guide.scale_constraint = constraints.positive #mod.guide.scale_constraint = constraints.softplus_positive #mod.to('cuda') mod._train_full_data(x_data=adata_vis.raw.X.toarray().astype(data_type), obs2sample=obs2sample_df.values.astype(data_type), n_epochs=30000, lr=0.005) means = mod.guide.median() means = {k: means[k].cpu().detach().numpy() for k in means.keys()} means mod_s = cell2location_model.LocationModelLinearDependentWMultiExperiment( n_obs=adata_vis.n_obs, n_vars=adata_vis.n_vars, n_factors=aver.shape[1], n_exper=obs2sample_df.shape[1], batch_size=None, cell_state_mat=aver.values.astype(data_type), ) from pyro.distributions import constraints from pyro.distributions.transforms import SoftplusTransform mod_s.guide.scale_constraint = constraints.softplus_positive mod_s._train_full_data(x_data=adata_vis.raw.X.toarray().astype(data_type), obs2sample=obs2sample_df.values.astype(data_type), n_epochs=30000, lr=0.005) means_softplus = mod_s.guide.median() means_softplus = {k: means_softplus[k].cpu().detach().numpy() for k in means_softplus.keys()} means_softplus mod_s3 = cell2location_model.LocationModelLinearDependentWMultiExperiment( n_obs=adata_vis.n_obs, n_vars=adata_vis.n_vars, n_factors=aver.shape[1], n_exper=obs2sample_df.shape[1], batch_size=None, cell_state_mat=aver.values.astype(data_type), ) from pyro.distributions import constraints from torch.distributions import biject_to,
transform_to @biject_to.register(constraints.positive) @transform_to.register(constraints.positive) def _transform_to_positive(constraint): return SoftplusTransform() mod_s3.guide.scale_constraint = constraints.positive mod_s3._train_full_data(x_data=adata_vis.raw.X.toarray().astype(data_type), obs2sample=obs2sample_df.values.astype(data_type), n_epochs=30000, lr=0.005) ``` ### Compare ELBO as training progresses ``` #plt.plot(range(2000, len(mod.hist)), np.array(mod.hist)[2000:]); plt.plot(range(2000, len(mod_s.hist)), np.array(mod_s.hist)[2000:]); plt.plot(range(2000, len(mod_s3.hist)), np.array(mod_s3.hist)[2000:]); plt.legend(labels=['exp', 'softplus scales', 'all_softplus']); plt.xlim(0, len(mod_s.hist)); #plt.plot(range(5000, len(mod.hist)), np.array(mod.hist)[5000:]); plt.plot(range(5000, len(mod_s.hist)), np.array(mod_s.hist)[5000:]); plt.plot(range(5000, len(mod_s3.hist)), np.array(mod_s3.hist)[5000:]); plt.legend(labels=['exp', 'softplus scales', 'all_softplus']); plt.xlim(0, len(mod_s.hist)); plt.ylim(1.57e+8, 1.59e+8); plt.title('zoom in on y-axis'); ``` ### Evaluate accuracy using $R^2$ with ground truth data ``` #means = mod.guide.median() #means = {k: means[k].cpu().detach().numpy() # for k in means.keys()} means_softplus = mod_s.guide.median() means_softplus = {k: means_softplus[k].cpu().detach().numpy() for k in means_softplus.keys()} means_softplus_all = mod_s3.guide.median() means_softplus_all = {k: means_softplus_all[k].cpu().detach().numpy() for k in means_softplus_all.keys()} from re import sub cell_count = adata_vis.obs.loc[:, ['cell_abundances_' in i for i in adata_vis.obs.columns]] cell_count.columns = [sub('cell_abundances_', '', i) for i in cell_count.columns] cell_count_columns = cell_count.columns #infer_cell_count = pd.DataFrame(means['w_sf'], index=adata_vis.obs_names, # columns=aver.columns) #infer_cell_count = infer_cell_count[cell_count.columns] infer_cell_count_softplus = pd.DataFrame(means_softplus['w_sf'], 
index=adata_vis.obs_names, columns=aver.columns) infer_cell_count_softplus = infer_cell_count_softplus[cell_count.columns] infer_cell_count_softplus_all = pd.DataFrame(means_softplus_all['w_sf'], index=adata_vis.obs_names, columns=aver.columns) infer_cell_count_softplus_all = infer_cell_count_softplus_all[cell_count.columns] #infer_cell_count.iloc[0:5,0:5], infer_cell_count_softplus.iloc[0:5,0:5], infer_cell_count_softplus_all.iloc[0:5,0:5] ``` ``` rcParams['figure.figsize'] = 4, 4 rcParams["axes.facecolor"] = "white" plt.hist2d(cell_count.values.flatten(), infer_cell_count_softplus.values.flatten(), bins=[50, 50], norm=mpl.colors.LogNorm()); plt.xlabel('Simulated cell abundance'); plt.ylabel('Estimated cell abundance'); plt.title(r'softplus, $R^2$: ' \ + str(np.round(np.corrcoef(cell_count.values.flatten(), infer_cell_count_softplus.values.flatten()), 3)[0,1])); plt.tight_layout() rcParams['figure.figsize'] = 4, 4 rcParams["axes.facecolor"] = "white" plt.hist2d(cell_count.values.flatten(), infer_cell_count_softplus_all.values.flatten(), bins=[50, 50], norm=mpl.colors.LogNorm()); plt.xlabel('Simulated cell abundance'); plt.ylabel('Estimated cell abundance'); plt.title(r'softplus all, $R^2$: ' \ + str(np.round(np.corrcoef(cell_count.values.flatten(), infer_cell_count_softplus_all.values.flatten()), 3)[0,1])); plt.tight_layout() ``` Original implementation of cell2location in pymc3 has $R^2 = 0.791$. ## Evaluate with PR curves ``` import matplotlib as mpl from matplotlib import pyplot as plt import numpy as np from scipy import interpolate with plt.style.context('seaborn'): seaborn_colors = mpl.rcParams['axes.prop_cycle'].by_key()['color'] def compute_precision_recall(pos_cell_count, infer_cell_proportions, mode='macro'): r""" Plot precision-recall curves on average and for each cell type. 
:param pos_cell_count: binary matrix showing which cell types are present in which locations :param infer_cell_proportions: inferred locations (the higher the more cells) """ from sklearn.metrics import precision_recall_curve from sklearn.metrics import average_precision_score ### calculating ### predictor = infer_cell_proportions.values + np.random.gamma(20, 1e-12, infer_cell_proportions.shape) # For each cell type precision = dict() recall = dict() average_precision = dict() for i, c in enumerate(infer_cell_proportions.columns): precision[c], recall[c], _ = precision_recall_curve(pos_cell_count[:, i], predictor[:, i]) average_precision[c] = average_precision_score(pos_cell_count[:, i], predictor[:, i], average=mode) average_precision["averaged"] = average_precision_score(pos_cell_count, predictor, average=mode) # A "micro-average": quantifying score on all classes jointly if mode == 'micro': precision_, recall_, threshold = precision_recall_curve(pos_cell_count.ravel(), predictor.ravel()) #precision_[threshold < 0.1] = 0 precision["averaged"], recall["averaged"] = precision_, recall_ elif mode == 'macro': precisions = [] recall_grid = np.linspace(0, 1, 2000) for i, c in enumerate(infer_cell_proportions.columns): f = interpolate.interp1d(recall[c], precision[c]) precision_interp = f(recall_grid) precisions.append(precision_interp) precision["averaged"] = np.mean(precisions, axis=0) recall['averaged'] = recall_grid return precision, recall, average_precision def compare_precision_recall(pos_cell_count, infer_cell_proportions, method_title, title='', legend_loc=(0, -.37), colors=sc.pl.palettes.default_102, mode='macro', curve='PR'): r""" Plot precision-recall curves on average and for each cell type. 
:param pos_cell_count: binary matrix showing which cell types are present in which locations :param infer_cell_proportions: inferred locations (the higher the more cells), list of inferred parameters for several methods :param method_title: title for each infer_cell_proportions :param title: plot title """ # setup plot details from itertools import cycle colors = cycle(colors) lines = [] labels = [] roc = {} ### plotting ### for i, color in zip(range(len(infer_cell_proportions)), colors): if curve == 'PR': precision, recall, average_precision = compute_precision_recall(pos_cell_count, infer_cell_proportions[i], mode=mode) xlabel = 'Recall' ylabel = 'Precision' l, = plt.plot(recall["averaged"], precision["averaged"], color=color, lw=3) elif curve == 'ROC': FPR, TPR, average_precision = compute_roc(pos_cell_count, infer_cell_proportions[i], mode=mode) xlabel = 'FPR' ylabel = 'TPR' l, = plt.plot(FPR["averaged"], TPR["averaged"], color=color, lw=3) lines.append(l) labels.append(method_title[i] + '(' + curve + ' score = {0:0.2f})' ''.format(average_precision["averaged"])) roc[method_title[i]] = average_precision["averaged"] fig = plt.gcf() fig.subplots_adjust(bottom=0.25) plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel(xlabel) plt.ylabel(ylabel) plt.title(title) if legend_loc is not None: plt.legend(lines, labels, loc=legend_loc, prop=dict(size=8)) #plt.show() return roc rcParams['figure.figsize'] = 6, 3 rcParams['font.size'] = 8 results = [ #infer_cell_count, infer_cell_count_softplus, infer_cell_count_softplus_all ] names = [ #'exp', 'softplus scales', 'all_softplus' ] compare_precision_recall(cell_count.values > 0.1, results, method_title=names, legend_loc=(1.1, 0.5)) plt.tight_layout() ``` Original implementation of cell2location in pymc3 has PR score = 0.66. 
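The PR curves above start from a binarised ground truth (`cell_count.values > 0.1`). At any single abundance threshold, precision and recall reduce to counting true and false positive presence calls. A minimal, self-contained sketch with toy numbers (not the notebook's data):

```python
# Precision/recall for presence/absence calls at one threshold,
# mirroring the `cell_count > 0.1` binarisation used above.
def precision_recall(true_present, predicted_abundance, threshold):
    pred_present = [a > threshold for a in predicted_abundance]
    tp = sum(t and p for t, p in zip(true_present, pred_present))
    fp = sum((not t) and p for t, p in zip(true_present, pred_present))
    fn = sum(t and (not p) for t, p in zip(true_present, pred_present))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

truth = [True, True, False, False, True]  # toy presence labels
estimate = [0.8, 0.05, 0.3, 0.02, 0.6]    # hypothetical inferred abundances
p, r = precision_recall(truth, estimate, threshold=0.1)
print(round(p, 3), round(r, 3))  # 0.667 0.667
```

Sweeping the threshold over all distinct abundance values traces out the full PR curve that `precision_recall_curve` computes.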
```
import sys

for module in sys.modules:
    try:
        print(module, sys.modules[module].__version__)
    except:
        try:
            if type(sys.modules[module].version) is str:
                print(module, sys.modules[module].version)
            else:
                print(module, sys.modules[module].version())
        except:
            try:
                print(module, sys.modules[module].VERSION)
            except:
                pass
```
### Introduction to K-Nearest Neighbors (KNN)

The k-Nearest Neighbors (kNN) algorithm is simple, effective, and arguably the easiest machine learning algorithm to understand: building the model consists only of storing the training dataset.

#### Making Predictions with KNN

KNN makes predictions using the training dataset directly. Predictions are made for a new data point by searching the entire training set for the K most similar instances (the neighbors) and summarizing the output variable for those K instances. For regression this might be the mean output value; for classification it might be the mode (most common) class value.

To determine which of the K instances in the training dataset are most similar to a new input, a distance measure is used. For real-valued input variables, the most popular distance measure is Euclidean distance. Euclidean distance is calculated as the square root of the sum of the squared differences between a point *a* and a point *b* across all input attributes *i*.

![eucldist.png](attachment:eucldist.png)

There are many other distance measures that can be used, such as Tanimoto, Jaccard, Mahalanobis, and cosine distance. You can choose the best distance metric based on the properties of your data. If you are unsure, you can experiment with different distance metrics and different values of K together and see which mix results in the most accurate models. Euclidean distance is a good measure to use if the input variables are similar in type (e.g. all measured widths and heights). Manhattan distance is a good measure to use if the input variables are not similar in type (such as age, gender, and height).

The value for K can be found by algorithm tuning. It is a good idea to try many different values for K (e.g. values from 1 to 21) and see what works best for your problem. The computational complexity of KNN increases with the size of the training dataset.
For very large training sets, KNN can be made stochastic by taking a sample from the training dataset from which to calculate the K most similar instances. KNN can be used for regression and classification problems.

##### When KNN is used for regression problems, the prediction is based on the mean or the median of the K most similar instances.

##### When KNN is used for classification, the output can be calculated as the class with the highest frequency among the K most similar instances.

#### Curse of Dimensionality

KNN works well with a small number of input variables (p), but struggles when the number of inputs is very large. Each input variable can be considered a dimension of a p-dimensional input space. For example, if you had two input variables X1 and X2, the input space would be 2-dimensional. As the number of dimensions increases, the volume of the input space grows at an exponential rate. In high dimensions, points that may be similar can still have very large distances: all points end up far away from each other, and our intuition for distances in simple 2- and 3-dimensional spaces breaks down. This might feel unintuitive at first, but this general problem is called the Curse of Dimensionality.

#### Preparing Data For KNN

1. Rescale Data: KNN performs much better if all of the data has the same scale. Normalizing your data to the range between 0 and 1 is a good idea. It may also be a good idea to standardize your data if it has a Gaussian distribution.
2. Address Missing Data: Missing data will mean that the distance between samples cannot be calculated. These samples could be excluded or the missing values could be imputed.
3. Lower Dimensionality: KNN is suited to lower-dimensional data. You can try it on high-dimensional data (hundreds or thousands of input variables), but be aware that it may not perform as well as other techniques. KNN can benefit from feature selection that reduces the dimensionality of the input feature space.
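The distance-concentration effect described above can be demonstrated numerically. The sketch below (pure Python, illustrative only) draws random points in the unit hypercube and measures the relative contrast between the nearest and farthest neighbour of a query point; the contrast collapses as the dimension grows:

```python
import random

def distance_contrast(dim, n_points=200, seed=0):
    """Relative gap (max - min) / min between the farthest and nearest
    neighbour distances of a random query point in `dim` dimensions."""
    rng = random.Random(seed)
    query = [rng.random() for _ in range(dim)]
    dists = []
    for _ in range(n_points):
        point = [rng.random() for _ in range(dim)]
        dists.append(sum((a - b) ** 2 for a, b in zip(query, point)) ** 0.5)
    return (max(dists) - min(dists)) / min(dists)

print(distance_contrast(2))     # large: neighbours are clearly distinguishable
print(distance_contrast(1000))  # close to 0: all points look roughly equidistant
```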
#### KNN Tutorial ##### KNN from Scratch ``` import pandas as pd import numpy as np import os data_2C = pd.read_csv("column_2C_weka.csv") data_2C.head() data_2C.describe().transpose() data_2C.dtypes # feature columns colnames_numeric = data_2C.columns[0:6] colnames_numeric # Scaling the data is always a good idea when using KNN from sklearn.preprocessing import MinMaxScaler scaler = MinMaxScaler() data_2C[colnames_numeric] = scaler.fit_transform(data_2C[colnames_numeric]) # X_scaled = scale * X + min - X.min(axis=0) * scale # where scale = (max - min) / (X.max(axis=0) - X.min(axis=0)) data_2C.head() data_2C.shape df = data_2C.values.tolist() df # Breaking the data into training and test sets import random def train_test_split(data, split, trainingSet = [], testSet = []): for x in range(len(data)): if random.random() < split: trainingSet.append(data[x]) else: testSet.append(data[x]) trainingSet = [] testSet = [] split = 0.66 train_test_split(df, split, trainingSet, testSet) len(trainingSet) len(testSet) # Define Euclidean distance import math def Euclideandist(x,xi, length): d = 0.0 for i in range(length): d += pow(float(x[i])- float(xi[i]),2) return math.sqrt(d) # Getting the K neighbours with the closest Euclidean distance to the test instance import operator def getNeighbors(trainingSet, testInstance, k): distances = [] length = len(testInstance)-1 for x in range(len(trainingSet)): dist = Euclideandist(testInstance, trainingSet[x], length) distances.append((trainingSet[x], dist)) distances.sort(key=operator.itemgetter(1)) neighbors = [] for x in range(k): neighbors.append(distances[x][0]) return neighbors # After counting the neighbours' classes, use majority voting to give the final class of the test instance import operator def getResponse(neighbors): classVotes = {} for x in range(len(neighbors)): response = neighbors[x][-1] if response in classVotes: classVotes[response] += 1 else: classVotes[response] = 1 sortedVotes
= sorted(classVotes.items(), key=operator.itemgetter(1), reverse=True) # Sorting it based on votes return sortedVotes[0][0] # Note: we need the top-voted class, hence [0][0] # Getting the accuracy def getAccuracy(testSet, predictions): correct = 0 for x in range(len(testSet)): if testSet[x][-1] == predictions[x]: correct += 1 return (correct/float(len(testSet))) * 100.0 predictions=[] k = 2 neighbors = getNeighbors(trainingSet, testSet[0], k) print(neighbors) result = getResponse(neighbors) # result predictions.append(result) predictions # generate predictions predictions=[] k = 3 for x in range(len(testSet)): neighbors = getNeighbors(trainingSet, testSet[x], k) result = getResponse(neighbors) predictions.append(result) print('> predicted=' + repr(result) + ', actual=' + repr(testSet[x][-1])) accuracy = getAccuracy(testSet, predictions) print('Accuracy: ' + repr(accuracy) + '%') ``` ##### Implementing KNN using scikit-learn ``` import pandas as pd import numpy as np import os df = pd.read_csv('column_2C_weka.csv') df.head() df.shape numeric_cols = df.columns[0:6] # Scaling the data is always a good idea when using KNN from sklearn.preprocessing import MinMaxScaler scaler = MinMaxScaler() df[numeric_cols] = scaler.fit_transform(df[numeric_cols]) df.head() from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(df[numeric_cols], df['class'], random_state = 0) X_test.shape from sklearn.neighbors import KNeighborsClassifier from sklearn.metrics import accuracy_score knn = KNeighborsClassifier(n_neighbors = 3) knn.fit(X_train,y_train) prediction = knn.predict(X_test) # print('Prediction: {}'.format(prediction)) # print(prediction) print('With KNN (K=3) accuracy is: ',knn.score(X_test,y_test)) # accuracy y_test ```
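As noted earlier, the value of K is best found by trying several values and measuring accuracy. A minimal, self-contained sketch of that tuning loop, using a from-scratch KNN on a synthetic two-blob problem (illustrative only; the tutorial's real data is `column_2C_weka.csv`):

```python
import random

# Two well-separated Gaussian blobs, one per class
rng = random.Random(42)
data = [([rng.gauss(0, 0.5), rng.gauss(0, 0.5)], 0) for _ in range(40)] + \
       [([rng.gauss(5, 0.5), rng.gauss(5, 0.5)], 1) for _ in range(40)]
rng.shuffle(data)
train, test = data[:60], data[60:]

def knn_predict(train, x, k):
    # k nearest points by squared Euclidean distance, then majority vote
    nearest = sorted(train, key=lambda t: sum((a - b) ** 2 for a, b in zip(t[0], x)))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

for k in (1, 3, 5, 7):
    acc = sum(knn_predict(train, x, k) == y for x, y in test) / len(test)
    print(k, acc)  # the blobs are separable, so accuracy is 1.0 for all these K
```

On real data the accuracies differ across K, and the best K is the one with the highest held-out accuracy.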
# Bis438 Final Project Problem 2 ## Import Python Libraries ``` import numpy as np import deepchem as dc from MPP.model import GCN, MLP from MPP.utils import process_prediction, make_feature, split_data ``` ## Build GraphConv Model ``` batch_size = 50 gcn_model = GCN(batch_size=batch_size) # build model ``` ## Training GraphConv Model and Calculate ROC-AUC ``` # define metric as roc_auc_score metric = dc.metrics.Metric(dc.metrics.roc_auc_score, task_averager=np.mean, verbose=False, mode='classification') num_models = 10 # the number of iterations roc_auc_train = list() # save roc_auc value for training dataset roc_auc_valid = list() # save roc_auc value for validation dataset roc_auc_test = list() # save roc_auc value for test dataset # Do featurization conv_feature = make_feature(data_name='BACE', feature_name='GraphConv') for i in range(num_models): # Load ith dataset with GraphConv featurizer and random split train_dataset, valid_dataset, test_dataset = split_data(conv_feature) # Fitting ith model with training dataset gcn_model.fit(train_dataset, epochs=3) # fit for 3 training epochs # Evaluating model # save roc_auc for training dataset pred_train = gcn_model.predict(train_dataset) pred_train = process_prediction(y_true=train_dataset.y, y_pred=pred_train) train_scores = metric.compute_metric(y_true=train_dataset.y, y_pred=pred_train, w=train_dataset.w) roc_auc_train.append(train_scores) # save roc_auc for valid dataset pred_valid = gcn_model.predict(valid_dataset) pred_valid = process_prediction(y_true=valid_dataset.y, y_pred=pred_valid) valid_scores = metric.compute_metric(y_true=valid_dataset.y, y_pred=pred_valid, w=valid_dataset.w) roc_auc_valid.append(valid_scores) # save roc_auc for test dataset pred_test = gcn_model.predict(test_dataset) pred_test = process_prediction(y_true=test_dataset.y, y_pred=pred_test) test_scores = metric.compute_metric(y_true=test_dataset.y, y_pred=pred_test, w=test_dataset.w) roc_auc_test.append(test_scores) # print roc_auc
result print(f'\nEvaluating model number {i+1:02d}.') # 1-based indexing of model number. print(f'Train ROC-AUC Score: {train_scores:.3f}, ' f'Valid ROC-AUC Score: {valid_scores:.3f}, Test ROC-AUC Score: {test_scores:.3f}.\n') ``` ## Calculate the mean ROC-AUC and use one standard deviation for the error bars in the GCN model ``` gcn_values = list() gcn_values.append(np.mean(roc_auc_train)) gcn_values.append(np.mean(roc_auc_valid)) gcn_values.append(np.mean(roc_auc_test)) gcn_stds = list() gcn_stds.append(np.std(roc_auc_train)) gcn_stds.append(np.std(roc_auc_valid)) gcn_stds.append(np.std(roc_auc_test)) ``` ## Build Multi Layer Perceptron using keras ``` batch_size = 50 dense_model = MLP(batch_size=batch_size) ``` ## Training Multi Layer Perceptron Model and Calculate ROC-AUC ``` # define metric as roc_auc_score metric = dc.metrics.Metric(dc.metrics.roc_auc_score, task_averager=np.mean, verbose=False, mode='classification') num_models = 10 # the number of iterations roc_auc_train = list() # save roc_auc value for training dataset roc_auc_valid = list() # save roc_auc value for validation dataset roc_auc_test = list() # save roc_auc value for test dataset # Do featurization ecfp_feature = make_feature(data_name='BACE', feature_name='ECFP') for i in range(num_models): # Load ith dataset with ECFP featurizer and random split train_dataset, valid_dataset, test_dataset = split_data(ecfp_feature) # Fitting ith model with training dataset dense_model.fit(train_dataset, epochs=3) # fit for 3 training epochs # Evaluating model # save roc_auc for training dataset pred_train = dense_model.predict(train_dataset) pred_train = process_prediction(y_true=train_dataset.y, y_pred=pred_train) train_scores = metric.compute_metric(y_true=train_dataset.y, y_pred=pred_train) roc_auc_train.append(train_scores) # save roc_auc for valid dataset pred_valid = dense_model.predict(valid_dataset) pred_valid = process_prediction(y_true=valid_dataset.y, y_pred=pred_valid) valid_scores =
metric.compute_metric(y_true=valid_dataset.y, y_pred=pred_valid) roc_auc_valid.append(valid_scores) # save roc_auc for test dataset pred_test = dense_model.predict(test_dataset) pred_test = process_prediction(y_true=test_dataset.y, y_pred=pred_test) test_scores = metric.compute_metric(y_true=test_dataset.y, y_pred=pred_test) roc_auc_test.append(test_scores) # print roc_auc result print(f'\nEvaluating model number {i+1:02d}.') print(f'Train ROC-AUC Score: {train_scores:.3f}, ' f'Valid ROC-AUC Score: {valid_scores:.3f}, Test ROC-AUC Score: {test_scores:.3f}.\n') ``` ## Calculate the mean ROC-AUC and use one standard deviation for the error bars in the MLP model ``` mlp_values = list() mlp_values.append(np.mean(roc_auc_train)) mlp_values.append(np.mean(roc_auc_valid)) mlp_values.append(np.mean(roc_auc_test)) mlp_stds = list() mlp_stds.append(np.std(roc_auc_train)) mlp_stds.append(np.std(roc_auc_valid)) mlp_stds.append(np.std(roc_auc_test)) ``` ## Plot ROC-AUC Score ``` from matplotlib import pyplot as plt %matplotlib inline topics = ['train', 'valid', 'test'] def create_x(t, w, n, d): return [t*x + w*n for x in range(d)] gcn_values_x = create_x(2, 0.8, 1, 3) mlp_values_x = create_x(2, 0.8, 2, 3) ax = plt.subplot() p1 = ax.bar(gcn_values_x, gcn_values, yerr=gcn_stds, capsize=1) p2 = ax.bar(mlp_values_x, mlp_values, yerr=mlp_stds, capsize=1) middle_x = [(a+b)/2 for (a,b) in zip(gcn_values_x, mlp_values_x)] ax.set_title('Mean ROC-AUC Score for GCN/MLP Model') ax.set_xlabel('Dataset') ax.set_ylabel('ROC-AUC Score') ax.legend((p1[0], p2[0]), ('GCN', 'MLP'), fontsize=15) ax.set_xticks(middle_x) ax.set_xticklabels(topics) plt.show() ```
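The ROC-AUC used above has a useful interpretation: it is the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one, with ties counting one half (the Mann-Whitney formulation). A minimal from-scratch sketch of that definition:

```python
def roc_auc(y_true, y_score):
    """ROC-AUC as the fraction of (positive, negative) pairs ranked correctly."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```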
``` import numpy as np from tensorflow import keras import matplotlib.pyplot as plt import os import cv2 import random import sklearn.model_selection as model_selection import datetime from model import createModel from contextlib import redirect_stdout from tensorflow.keras import layers import tensorflow as tf policy = tf.keras.mixed_precision.experimental.Policy('mixed_float16') tf.keras.mixed_precision.experimental.set_policy(policy) ``` # The following model and its predictions should not be considered valid medical advice ``` categories = ["NonDemented", "MildDemented", "ModerateDemented", "VeryMildDemented"] # data classes: 0 NonDemented, 1 MildDemented, 2 ModerateDemented, 3 VeryMildDemented SIZE = 120 def getData(): rawdata = [] data = [] dir = "./data/" for category in categories: path = os.path.join(dir, category) class_num = categories.index(category) for img in os.listdir(path): try: rawdata = cv2.imread(os.path.join(path, img), cv2.IMREAD_GRAYSCALE) new_data = cv2.resize(rawdata, (SIZE, SIZE)) data.append([new_data, class_num]) except Exception as e: pass random.shuffle(data) img_data = [] img_labels = [] for features, label in data: img_data.append(features) img_labels.append(label) img_data = np.array(img_data).reshape(-1, SIZE, SIZE, 1) img_data = img_data / 255.0 img_labels = np.array(img_labels) return img_data, img_labels data, labels = getData() labels train_data, test_data, train_labels, test_labels = model_selection.train_test_split(data, labels, test_size=0.20) train_data, val_data, train_labels, val_labels = model_selection.train_test_split(train_data, train_labels, test_size=0.10) model = keras.Sequential([ keras.Input(shape=train_data.shape[1:]), layers.Conv2D(64, kernel_size=(3, 3), activation="relu"), layers.MaxPooling2D(pool_size=(2, 2)), layers.Conv2D(64, kernel_size=(3, 3), activation="relu"), layers.MaxPooling2D(pool_size=(2, 2)), layers.Flatten(), layers.Dense(128, activation="relu"), layers.Dropout(0.5), layers.Dense(4,
activation="softmax") ]) checkpoint = keras.callbacks.ModelCheckpoint(filepath='./model/model.h5', save_best_only=True, monitor='val_loss', mode='min') opt = keras.optimizers.Adam(learning_rate=0.001) model.compile(optimizer=opt, loss="sparse_categorical_crossentropy", metrics=["accuracy"], ) history = model.fit(train_data, train_labels, epochs=10, validation_data=(val_data, val_labels)) import shap explainer = shap.DeepExplainer(model,train_data[:120]) val = explainer.shap_values(train_data[:120]) shap.initjs() shap.force_plot(explainer.expected_value[0],val[0][0],train_data) ```
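Beyond SHAP attributions, a per-class confusion matrix is a useful sanity check for a four-class model like this one, since overall accuracy can hide systematic confusion between classes. The sketch below uses hypothetical label vectors (not this model's actual predictions) purely to illustrate the computation:

```python
categories = ["NonDemented", "MildDemented", "ModerateDemented", "VeryMildDemented"]

def confusion_matrix(y_true, y_pred, n_classes):
    # rows = true class, columns = predicted class
    m = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        m[t][p] += 1
    return m

y_true = [0, 0, 1, 2, 3, 3]  # example ground-truth class indices
y_pred = [0, 3, 1, 2, 3, 0]  # example predicted class indices
m = confusion_matrix(y_true, y_pred, len(categories))
for name, row in zip(categories, m):
    print(f"{name:>16}: {row}")
accuracy = sum(m[i][i] for i in range(len(categories))) / len(y_true)
print(round(accuracy, 3))  # 4 of 6 correct -> 0.667
```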
``` # default_exp utils.clusterization ! pip install pyclustering ``` ## clusterization ``` #export import logging import sentencepiece as sp from pyclustering.cluster.kmedoids import kmedoids from pyclustering.utils.metric import euclidean_distance_square, euclidean_distance from pyclustering.cluster.silhouette import silhouette, silhouette_ksearch_type, silhouette_ksearch from sklearn.cluster import KMeans from sklearn.decomposition import PCA from sklearn.manifold import TSNE from sklearn.metrics import silhouette_score, pairwise_distances_argmin_min import umap import numpy as np from abc import ABC from typing import Tuple, Optional # Configs logger = logging.getLogger(__name__) logger.setLevel(logging.INFO) ``` ## Distance metrics In order to allow flexible implementation of several clustering techniques, a base CustomDistance class is defined. ``` # export class CustomDistance(ABC): def compute_distance(self, x, y) -> float: """ Computes the distance between 2 vectors according to a particular distance metric :param x: Vector :param y: Vector :return: """ pass ``` Euclidean distance is the most commonly used distance metric. 
This distance is used by default in some of the methods. ``` # export class EuclideanSquareDistance(CustomDistance): """Euclidean (square) distance.""" def compute_distance(self, x, y) -> float: return euclidean_distance_square(x, y) # export class EuclideanDistance(CustomDistance): """Euclidean distance.""" def compute_distance(self, x, y) -> float: return euclidean_distance(x, y) ``` ## Utils ## Dimensionality reduction ``` # export # Uses PCA first and then t-SNE def reduce_dims_pca_tsne(feature_vectors, dims = 2): """Reduce dimensionality with PCA (to 50 components) followed by t-SNE.""" # hyperparameters from https://towardsdatascience.com/visualising-high-dimensional-datasets-using-pca-and-t-sne-in-python-8ef87e7915b pca = PCA(n_components=50) pca_features = pca.fit_transform(feature_vectors) logging.info("Reduced dims via PCA.") tsne = TSNE(n_components=dims, verbose=1, perplexity=40, n_iter=300) tsne_features = tsne.fit_transform(pca_features) logging.info("Reduced dims via t-SNE.") return tsne_features # export def reduce_dims_tsne(vectors, dims=2): """ Perform dimensionality reduction using t-SNE (from sklearn) :param vectors: Data vectors to be reduced :param dims: Optional[int] indicating the number of dimensions of the desired space :return: Vectors with the desired dimensionality """ tsne = TSNE(n_components=dims, verbose=1, perplexity=40, n_iter=300) tsne_feats = tsne.fit_transform(vectors) logging.info("Reduced dims via t-SNE") return tsne_feats # export def reduce_dims_pca(vectors, dims=2): """ Perform dimensionality reduction using PCA (from sklearn) :param vectors: Data vectors to be reduced :param dims: Optional[int] indicating the number of dimensions of the desired space :return: Vectors with the desired dimensionality """ pca = PCA(n_components=dims) pca_feats = pca.fit_transform(vectors) logging.info("Reduced dims via PCA.") return pca_feats # export def get_silhouette(samples1, samples2): cluster1, medoid_id1, kmedoid_instance1 = run_kmedoids(samples1, 1) cluster2, medoid_id2, kmedoid_instance2 =
run_kmedoids(samples2, 1) cluster2 = np.array([[len(samples1) + x for x in cluster2[0]]]) samples = np.concatenate((samples1, samples2), axis=0) clusters = np.concatenate((cluster1, cluster2), axis=0) score = sum(silhouette(samples, clusters).process().get_score()) / len(samples) return score ``` Check UMAP details at [the official documentation](https://umap-learn.readthedocs.io/en/latest/) ``` # export def reduce_dims_umap(vectors, n_neighbors: Optional[int]=15, min_dist: Optional[float]=0.1, dims: Optional[int]=2, metric: Optional[str]='euclidean') -> np.ndarray: """ Perform dimensionality reduction using UMAP :param vectors: Data vectors to be reduced :param dims: Optional[int] indicating the number of dimensions of the desired space :return: Vectors with the desired dimensionality """ reducer = umap.UMAP( n_neighbors=n_neighbors, min_dist=min_dist, n_components=dims, metric=metric ) umap_vectors = reducer.fit_transform(vectors) return umap_vectors ``` ## k-means ``` # export def k_means(feature_vectors, k_range=[2, 3]): # finding best k bst_k = k_range[0] bst_silhouette = -1 bst_labels = None bst_centroids = None bst_kmeans = None for k in k_range: kmeans = KMeans(n_clusters = k) kmeans.fit(feature_vectors) labels = kmeans.predict(feature_vectors) centroids = kmeans.cluster_centers_ silhouette_avg = silhouette_score(feature_vectors, labels) if silhouette_avg > bst_silhouette: bst_k = k bst_silhouette = silhouette_avg bst_labels = labels bst_centroids = centroids bst_kmeans = kmeans logger.info(f'Best k = {bst_k} with a silhouette score of {bst_silhouette}') centroid_mthds = pairwise_distances_argmin_min(bst_centroids, feature_vectors) return bst_labels, bst_centroids, bst_kmeans, centroid_mthds # export def clusterize(feature_vecs, k_range = [2], dims = 2): # feature_vectors = reduce_dims(np.array(list(zip(*feature_vecs))[1]), dims = dims) feature_vectors = reduce_dims_umap(np.array(list(zip(*feature_vecs))[1]), dims=dims) experimental_vectors = 
feature_vectors#[:len(feature_vectors) * 0.1] labels, centroids, kmeans, centroid_mthds = k_means(experimental_vectors, k_range = k_range) return (feature_vectors, centroid_mthds, labels, centroids, kmeans) # export def find_best_k(samples): logging.info("Searching best k for clustering.") search_instance = silhouette_ksearch(samples, 2, 10, algorithm=silhouette_ksearch_type.KMEDOIDS).process() amount = search_instance.get_amount() scores = search_instance.get_scores() logging.info(f"Best Silhouette Score for k = {amount}: {scores[amount]}") return amount # export def run_kmedoids(samples, k): initial_medoids = list(range(k)) # Create instance of K-Medoids algorithm. kmedoids_instance = kmedoids(samples, initial_medoids) kmedoids_instance.process() clusters = kmedoids_instance.get_clusters() medoid_ids = kmedoids_instance.get_medoids() return clusters, medoid_ids, kmedoids_instance # export def perform_clusterize_kmedoids(data: np.array, reduct_dist='euclidean', dims: int = 2) -> Tuple: """ Perform clusterization of the dataset by means of k-medoids :param data: Data to be clusterized :param reduct_dist: Distance metric to be used for dimensionality reduction :param dims: Number of dims to get with umap before clustering :return: Tuple (reduced_vectors, clusters, medoid_ids, pyclustering kmedoids instance) """ reduced_data = reduce_dims_umap(data, dims = dims) k = find_best_k(reduced_data) clusters, medoid_ids, kmedoids_instance = run_kmedoids(reduced_data, k) return reduced_data, clusters, medoid_ids, kmedoids_instance # export def clusterize_kmedoids(data: np.array, distance_metric='euclidean', dims: int = 2) -> Tuple: """ Performs clusterization (k-medoids) using UMAP for dim. 
reduction """ reduced_data = reduce_dims_umap(data, dims = dims, metric=distance_metric) logging.info('Reduced dimensionality via UMAP') k = find_best_k(reduced_data) clusters, medoid_ids, kmedoids_instance = run_kmedoids(reduced_data, k) return reduced_data, clusters, medoid_ids, kmedoids_instance # export def new_clusterize_kmedoids(h_samples, m1_samples, m2_samples, m3_samples, dims = 2): samples = np.concatenate((h_samples, m1_samples, m2_samples, m3_samples), axis=0) samples = reduce_dims(samples, dims = dims) # np.array(list(zip(*samples)))[0], dims = dims) h_samples, m1_samples, m2_samples, m3_samples = samples[:len(h_samples)], samples[len(h_samples):len(h_samples) + len(m1_samples)], samples[len(h_samples) + len(m1_samples):len(h_samples) + len(m1_samples) + len(m2_samples)], samples[len(h_samples) + len(m1_samples) + len(m2_samples):] h_k = find_best_k(h_samples) h_clusters, h_medoid_ids, h_kmedoids_instance = run_kmedoids(h_samples, h_k) m1_k = find_best_k(m1_samples) m1_clusters, m1_medoid_ids, m1_kmedoids_instance = run_kmedoids(m1_samples, m1_k) m2_k = find_best_k(m2_samples) m2_clusters, m2_medoid_ids, m2_kmedoids_instance = run_kmedoids(m2_samples, m2_k) m3_k = find_best_k(m3_samples) m3_clusters, m3_medoid_ids, m3_kmedoids_instance = run_kmedoids(m3_samples, m3_k) return ( (h_samples, h_clusters, h_medoid_ids, h_kmedoids_instance), (m1_samples, m1_clusters, m1_medoid_ids, m1_kmedoids_instance), (m2_samples, m2_clusters, m2_medoid_ids, m2_kmedoids_instance), (m3_samples, m3_clusters, m3_medoid_ids, m3_kmedoids_instance) ) ``` ## Prototypes and criticisms ``` # export def gen_criticisms(samples, prototypes, n = None, distance = None): if n is None: n = len(prototypes) if distance is None: distance = EuclideanDistance() crits = [] for x in samples: mean_dist_x = 0. for x_i in samples: mean_dist_x += distance.compute_distance(x, x_i) mean_dist_x = mean_dist_x / len(x) mean_dist_proto = 0. 
for z_j in prototypes: mean_dist_proto += distance.compute_distance(x, z_j) mean_dist_proto = mean_dist_proto / len(prototypes) crits.append(mean_dist_x - mean_dist_proto) crits = np.array(crits) crit_ids = crits.argsort()[-n:][::-1] return crits, crit_ids from nbdev.export import notebook2script notebook2script() ```
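To see what the criticism scores select, the distance-based rule (mean distance of a point to all samples minus its mean distance to the prototypes) can be exercised on toy data. This standalone sketch uses a plain distance function as a stand-in for the notebook's `CustomDistance` classes:

```python
import numpy as np

def euclidean(x, y):
    return float(np.linalg.norm(np.asarray(x) - np.asarray(y)))

def score_criticisms(samples, prototypes, distance=euclidean):
    # Mean distance of x to all samples minus mean distance of x to the prototypes
    scores = []
    for x in samples:
        mean_all = float(np.mean([distance(x, s) for s in samples]))
        mean_proto = float(np.mean([distance(x, p) for p in prototypes]))
        scores.append(mean_all - mean_proto)
    return np.array(scores)

samples = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0]])
prototypes = samples[:1]  # one prototype inside the tight cluster
scores = score_criticisms(samples, prototypes)
print(scores)
```

Note that with a plain distance the far-away point's score is *negative* (it is closer, on average, to the prototype than to "all samples"); kernel-similarity variants of this idea flip the sign behavior, which is why the choice of `distance` matters.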
``` // #r ".\binaries2\bossspad.dll" // #r ".\binaries2\XNSEC.dll" // #r "C:\BoSSS_Binaries\bossspad.dll" // #r "C:\BoSSS_Binaries\XNSEC.dll" #r "C:\BoSSS\experimental\public\src\L4-application\BoSSSpad\bin\Release\net5.0\bossspad.dll" #r "C:\BoSSS\experimental\public\src\L4-application\BoSSSpad\bin\Release\net5.0\XNSEC.dll" using System; using System.Collections.Generic; using System.Linq; using System.IO; using System.Data; using System.Globalization; using System.Threading; using ilPSP; using ilPSP.Utils; using BoSSS.Platform; using BoSSS.Foundation; using BoSSS.Foundation.Grid; using BoSSS.Foundation.Grid.Classic; using BoSSS.Foundation.IO; using BoSSS.Solution; using BoSSS.Solution.Control; using BoSSS.Solution.GridImport; using BoSSS.Solution.Statistic; using BoSSS.Solution.Utils; using BoSSS.Solution.Gnuplot; using BoSSS.Application.BoSSSpad; using static BoSSS.Application.BoSSSpad.BoSSSshell; using BoSSS.Foundation.Grid.RefElements; using BoSSS.Platform.LinAlg; using BoSSS.Solution.NSECommon; using BoSSS.Application.XNSEC; Init(); ``` ## Case configuration ``` int[] multiplierS = new int[]{5}; int[] DGDegrees = new int[] {1,2,3,4}; int[] nCellsArray = new int[] {4}; // double[] xCellsMultiplier = new double[] {1,2,3,4,5}; double[] xCellsMultiplier = new double[] {3,4,5,6}; double InitialMassFuelIn = 0.02400; //kg/m2s double InitialMassAirIn = 0.02400 *3; //kg/m2s bool parabolicVelocityProfile = false; bool chemicalReactionActive = true; int numberOfMpiCores = 8; bool useFullGeometry = false; bool wallBounded = true; bool AMRinEachNewtonIteration = true; ``` ## Open Database ``` static var myDb = OpenOrCreateDatabase(@"\\hpccluster\hpccluster-scratch\gutierrez\CDF_TemperatureStudy_4"); myDb.Summary() myDb.Sessions[1].Export().Do() BoSSSshell.WorkflowMgm.Init("CounterFlowFlame_VariableCP_TempStudy2"); // BoSSSshell.WorkflowMgm.SetNameBasedSessionJobControlCorrelation(); BoSSSshell.WorkflowMgm.Sessions var myBatch = BoSSSshell.ExecutionQueues[3]; // 
MiniBatchProcessor.Server.StartIfNotRunning(true); myBatch.AllowedDatabasesPaths.Add(new AllowedDatabasesPair(myDb.Path,"")); // ================================== // setup Client & Workflow & Database // ================================== // var myBatch = (SlurmClient)ExecutionQueues[1]; // var AddSbatchCmds = new List<string>(); // AddSbatchCmds.AddRange(new string[]{"#SBATCH -N 4","#SBATCH -p test24", "#SBATCH -C avx512", "#SBATCH --mem-per-cpu="+2000}); // myBatch.AllowedDatabasesPaths.Add(new AllowedDatabasesPair(myDb.Path,"")); // myBatch.AdditionalBatchCommands = AddSbatchCmds.ToArray(); // myBatch.AdditionalBatchCommands ``` ## Create grid ``` public static class GridFactory { public static Grid2D GenerateGrid(int nCells, bool wallBounded, double xDensity) { double L = 0.02; double LRef = L; double R_dim = L / 2; double separation = 1.0; // nondimensional double xleft = 0; double xright = separation; double radius_inlet = R_dim / LRef; double ybot = -separation * 3; double ytop = separation * 3; // Define Grid double[] _xNodes; double[] _yNodes; // X-NODES double xNodesDensity = xDensity; _xNodes = GenericBlas.Linspace(xleft, xright, (int)(nCells * xNodesDensity + 1)); double sf3 = 0.80; var myCenterNodes = GenericBlas.SinLinSpacing(0, radius_inlet*2,0.7, nCells*2 + 1).ToList(); // center var centerNodes = myCenterNodes.GetSubVector(0, myCenterNodes.Count / 2 + 1); // Take only "bottom side" of node array List<double> yNodesTop = (GenericBlas.SinLinSpacing(radius_inlet, (ytop - radius_inlet) * 2 + radius_inlet, sf3, nCells * 2 + 1).ToList()); // Nodes corresponding to the oxidizer inlet, right part var myYnodesTop = yNodesTop.GetSubVector(0, yNodesTop.Count / 2 + 1); // Take only "bottom side" of node array List<double> yUpperPart = new List<double>(); yUpperPart.AddRange(centerNodes.SkipLast(1)); yUpperPart.AddRange(myYnodesTop.SkipLast(0)); var yBottomPart = (yUpperPart.ToArray()).CloneAs(); yBottomPart.ScaleV(-1.0); Array.Reverse(yBottomPart); var yNodes 
= new List<double>(); yNodes.AddRange(yBottomPart.SkipLast(1)); yNodes.AddRange(yUpperPart.SkipLast(0)); var grd = Grid2D.Cartesian2DGrid(_xNodes, yNodes.ToArray()); // Define Edge tags grd.EdgeTagNames.Add(1, "Velocity_Inlet_CH4"); grd.EdgeTagNames.Add(2, "Velocity_Inlet_O2"); grd.EdgeTagNames.Add(3, "Pressure_Outlet"); if(wallBounded){ grd.EdgeTagNames.Add(4, "Wall"); } grd.DefineEdgeTags(delegate (double[] X) { double x = X[0]; double y = X[1]; //Edge tags //1: Velocity inlet O_2 //2: Velocity inlet CH_4 //3: Pressure outlet if(Math.Abs(x - xleft) < 1e-8) { // Left boundary if(Math.Abs(y) - radius_inlet < 1e-8) { // Fuel Inlet return 1; } else { if(wallBounded){ return 4; } else{ return 3; } } } if(Math.Abs(x - xright) < 1e-8) { // right boundary if(Math.Abs(y) - radius_inlet < 1e-8) { // oxy Inlet return 2;//2 } else { if(wallBounded){ return 4; } else{ return 3; } } } else { return 3; //3 Pressure outlet } }); myDb.SaveGrid(ref grd, true); return grd; } } public static class BoundaryValueFactory { public static string GetPrefixCode(double ConstVal, double inletRadius, double uInFuel, double uInAir, double sigma) { using(var stw = new System.IO.StringWriter()) { stw.WriteLine("static class BoundaryValues {"); stw.WriteLine(" static public double ConstantValue(double[] X) {"); stw.WriteLine(" return "+ ConstVal +";"); stw.WriteLine(" }"); stw.WriteLine(" static public double ParabolaVelocityFuel(double[] X) {"); stw.WriteLine(" return (1.0 - Math.Pow(X[1] / "+inletRadius+", 2)) * "+uInFuel+" ;"); stw.WriteLine(" }"); stw.WriteLine(" static public double ParabolaVelocityAir(double[] X) {"); stw.WriteLine(" return -(1.0 - Math.Pow(X[1] / "+inletRadius+", 2)) * "+uInAir+";"); stw.WriteLine(" }"); stw.WriteLine(" static public double RegularizedPlugFlowFuel(double[] X) {"); stw.WriteLine(" double res = 0;"); stw.WriteLine(" if(X[1] > 0) { "); stw.WriteLine(" double H = 0.5 * (1.0 + Math.Tanh("+sigma+" * (X[1] - "+inletRadius+"))); "); stw.WriteLine(" res = 
"+uInFuel+" * (1 - 2*H); "); stw.WriteLine(" } "); stw.WriteLine(" else { "); stw.WriteLine(" double H = 0.5 * (1.0 + Math.Tanh("+sigma+" * (X[1] + ("+inletRadius+")))); "); stw.WriteLine(" res = "+uInFuel+" * ( 2*H-1); "); stw.WriteLine(" } "); stw.WriteLine("return res;"); stw.WriteLine(" }"); stw.WriteLine(" static public double RegularizedPlugFlowAir(double[] X) {"); stw.WriteLine(" double res = 0;"); stw.WriteLine(" if(X[1] > 0) { "); stw.WriteLine(" double H = 0.5 * (1.0 + Math.Tanh("+sigma+" * (X[1] - "+inletRadius+"))); "); stw.WriteLine(" res = "+uInAir+" * (1 - 2*H)*(-1); "); stw.WriteLine(" } "); stw.WriteLine(" else { "); stw.WriteLine(" double H = 0.5 * (1.0 + Math.Tanh("+sigma+" * (X[1] + ("+inletRadius+")))); "); stw.WriteLine(" res = "+uInAir+" * ( 2*H-1)*(-1); "); stw.WriteLine(" } "); stw.WriteLine("return res;"); stw.WriteLine(" }"); stw.WriteLine("}"); return stw.ToString(); } } static public Formula Get_ConstantValue(double ConstVal, double inletRadius, double uInFuel, double uInAir, double sigma){ return new Formula("BoundaryValues.ConstantValue", AdditionalPrefixCode:GetPrefixCode(ConstVal, inletRadius, uInFuel, uInAir,sigma)); } static public Formula Get_ParabolaVelocityFuel(double ConstVal, double inletRadius, double uInFuel, double uInAir, double sigma){ return new Formula("BoundaryValues.ParabolaVelocityFuel", AdditionalPrefixCode:GetPrefixCode(ConstVal, inletRadius, uInFuel, uInAir,sigma)); } static public Formula Get_ParabolaVelocityAir(double ConstVal, double inletRadius, double uInFuel, double uInAir, double sigma){ return new Formula("BoundaryValues.ParabolaVelocityAir", AdditionalPrefixCode:GetPrefixCode(ConstVal, inletRadius, uInFuel, uInAir,sigma)); } static public Formula Get_RegularizedPlugFlowFuel(double ConstVal, double inletRadius, double uInFuel, double uInAir, double sigma){ return new Formula("BoundaryValues.RegularizedPlugFlowFuel", AdditionalPrefixCode:GetPrefixCode(ConstVal, inletRadius, uInFuel, uInAir,sigma)); } 
static public Formula Get_RegularizedPlugFlowAir(double ConstVal, double inletRadius, double uInFuel, double uInAir, double sigma){ return new Formula("BoundaryValues.RegularizedPlugFlowAir", AdditionalPrefixCode:GetPrefixCode(ConstVal, inletRadius, uInFuel, uInAir,sigma)); } } ``` ## Create base control file In this ControlFile basic configuration of the CounterDiffusionFlame is defined. ``` static XNSEC_Control GiveMeTheCtrlFile(int dg, int nCells, bool isMF, double massFuelIn, double massAirIn, bool parabolicVelocityProfile, bool UsefullGeometry, bool wallBounded, double xNodesMultiplier) { var CC = new ChemicalConstants(); var C = isMF ? new XNSEC_MF_Control() : new XNSEC_Control(); // C.AlternateDbPaths = new[] { // (@"S:\work\scratch\jg11bano\"+dirname, "PCMIT30"), // (@"/work/scratch/jg11bano/"+dirname,"")}; // C.AlternateDbPaths = new[] { // (winpath, ""), // (dirname,"")}; C.NumberOfChemicalSpecies = 4; C.SetDGdegree(dg); // C.SetGrid(GridFactory.GenerateGrid(nCells, wallBounded, xNodesMultiplier)); // C.MatParamsMode = MaterialParamsMode.Sutherland; // // Problem Definition //=================== double L = 0.02; // separation between the two inlets, meters double TemperatureInFuel = 300; // double TemperatureInOxidizer = 300; // double AtmPressure = 101325; // Pa double[] FuelInletConcentrations = new double[] { 0.2, 0.0, 0.0, 0.0, 0.8 }; // double[] FuelInletConcentrations = new double[] { 1.0, 0.0, 0.0, 0.0, 0.0 }; double[] OxidizerInletConcentrations = new double[] { 0.0, 0.23, 0.0, 0.0, 0.77 }; // double[] MWs = new double[] { CC.MW_CH4, CC.MW_O2, CC.MW_CO2, CC.MW_H2O, CC.MW_N2 }; double mwFuel = CC.getAvgMW(MWs, FuelInletConcentrations); double mwAir = CC.getAvgMW(MWs, OxidizerInletConcentrations); double densityAirIn = AtmPressure * mwAir / (CC.R_gas * TemperatureInOxidizer * 1000); // kg / m3 double densityFuelIn = AtmPressure * mwFuel / (CC.R_gas * TemperatureInFuel * 1000); // kg / m3. 
double uInFuel = massFuelIn / densityFuelIn; // double uInAir = massAirIn / densityAirIn; // Console.WriteLine("VelocityFuel" + uInFuel); Console.WriteLine("VelocityAir" + uInAir); // Reference values //=================== // Basic units to be used: Kg, m, s, mol, pa, double TRef = TemperatureInOxidizer;// Reference temperature is the inlet temperature, (K) double pRef = AtmPressure; // Pa double uRef = Math.Max(uInFuel, uInAir); // m/s double LRef = L; C.GravityDirection = new double[] { 0.0, 0.0, 0.0 }; //No gravity. // Solver configuration // ======================= C.smoothingFactor = 80*0-1*1; // C.NonLinearSolver.ConvergenceCriterion = 1e-8; // C.LinearSolver.ConvergenceCriterion = 1e-10; C.NonLinearSolver.verbose = true; C.NonLinearSolver.SolverCode = NonLinearSolverCode.Newton; C.NonLinearSolver.MaxSolverIterations = 10; C.LinearSolver.SolverCode = LinearSolverCode.classic_pardiso; C.LinearSolver.verbose = false; C.TimesteppingMode = AppControl._TimesteppingMode.Steady; C.saveperiod = 1; C.PenaltyViscMomentum = 1.0; /////////////////////////////////////////////////////////////////////////////////////////////////////////////////////// C.PenaltyHeatConduction = 1.0;/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////// C.YFuelInlet = FuelInletConcentrations[0]; C.YOxInlet = OxidizerInletConcentrations[1]; C.FuelInletConcentrations = FuelInletConcentrations; C.OxidizerInletConcentrations = OxidizerInletConcentrations; C.TFuelInlet = 1.0; C.TOxInlet = 1.0; C.PhysicalParameters.IncludeConvection = true; // Chemical related parameters double s = (CC.nu_O2 * CC.MW_O2) / (CC.nu_CH4 * CC.MW_CH4); C.phi = s * C.YFuelInlet / C.YOxInlet; C.zSt = 1.0 / (1.0 + C.phi); var MLC = new MaterialLawCombustion(300, new double[] { }, C.MatParamsMode, C.rhoOne, true, 1.0, 1, 1, C.YOxInlet, C.YFuelInlet, C.zSt, CC, 0.75); var ThermoProperties = new ThermodynamicalProperties(); //========================== //Derived 
reference values //========================== C.uRef = uRef; // Reference velocity C.LRef = LRef; // reference length C.pRef = AtmPressure; // reference pressure C.TRef = TemperatureInFuel;// reference temperature C.MWRef = MLC.getAvgMW(MWs, C.OxidizerInletConcentrations); // Air mean molecular weight C.rhoRef = C.pRef * C.MWRef / (8.314 * C.TRef * 1000); // Kg/m3. ok ; C.cpRef = 1.3;//ThermoProperties.Calculate_Cp_Mixture(new double[] { 0.23, 0.77 }, new string[] { "O2", "N2" }, 300); // 1.219185317353029;// Representative value, KJ/Kg K ========> 1.31 for the one-step kinetic model C.muRef = MLC.getViscosityDim(300); C.MolarMasses = new double[] { C.CC.MW_CH4, C.CC.MW_O2, C.CC.MW_CO2, C.CC.MW_H2O, C.CC.MW_N2 }; C.MolarMasses.ScaleV(1.0 / C.MWRef); //NonDimensionalized Molar masses C.T_ref_Sutherland = 300; double heatRelease_Ref = (C.TRef * C.cpRef); C.HeatRelease = C.CC.HeatReleaseMass / heatRelease_Ref; C.B = CC.PreExponentialFactor; C.StoichiometricCoefficients = new double[] { -1, -2, 1, 2, 0 }; C.Damk = C.rhoRef * C.LRef * C.B / (C.uRef * C.MWRef); C.Reynolds = C.rhoRef * C.uRef * C.LRef / C.muRef; C.Prandtl = 0.75;////////////////////0.75; C.Schmidt = C.Prandtl; // Because Lewis number is assumed as 1.0 (Le = Pr/Sc) // C.Lewis = new double[] { 0.97, 1.11, 1.39, 0.83, 1.0 }; C.Lewis = new double[] { 1.0, 1.0, 1.0, 1.0, 1.0 }; double g = 9.8; // m/s2 C.Froude = Math.Sqrt(uRef * uRef / (C.LRef * g)); // Not used C.T_ref_Sutherland = 300; //////// Check this C.ReactionRateConstants = new double[] { C.Damk, CC.Ta / TRef, 1.0, 1.0 }; // NOTE! 
activation temperature is also nondimensional //========================== // Initial conditions //========================== double dummy = 0; double Radius = 0.5; C.AddInitialValue(VariableNames.VelocityX, BoundaryValueFactory.Get_ConstantValue(0.0, Radius, uInFuel / C.uRef, uInAir / C.uRef,dummy)); C.AddInitialValue(VariableNames.VelocityY, BoundaryValueFactory.Get_ConstantValue(0.0, Radius, uInFuel / C.uRef, uInAir / C.uRef,dummy)); C.AddInitialValue(VariableNames.Pressure, BoundaryValueFactory.Get_ConstantValue(0.0, Radius, uInFuel / C.uRef, uInAir / C.uRef,dummy)); //========================== // Boundary conditions //========================== double sigma = 10; if(parabolicVelocityProfile) { C.AddBoundaryValue("Velocity_Inlet_CH4", VariableNames.Velocity_d(0), BoundaryValueFactory.Get_ParabolaVelocityFuel(dummy, Radius, uInFuel / C.uRef, dummy, dummy)); C.AddBoundaryValue("Velocity_Inlet_O2", VariableNames.Velocity_d(0), BoundaryValueFactory.Get_ParabolaVelocityAir(dummy, Radius, dummy, uInAir / C.uRef, dummy)); } else { // Plug Flow // C.AddBoundaryValue("Velocity_Inlet_CH4", VariableNames.Velocity_d(0), BoundaryValueFactory.Get_ConstantValue(uInFuel / C.uRef, dummy, dummy, dummy, dummy)); // C.AddBoundaryValue("Velocity_Inlet_O2", VariableNames.Velocity_d(0), BoundaryValueFactory.Get_ConstantValue((-1) * uInAir / C.uRef, dummy, dummy, dummy, dummy)); C.AddBoundaryValue("Velocity_Inlet_CH4", VariableNames.Velocity_d(0), BoundaryValueFactory.Get_RegularizedPlugFlowFuel(dummy, Radius, uInFuel / C.uRef, dummy, sigma)); C.AddBoundaryValue("Velocity_Inlet_O2", VariableNames.Velocity_d(0), BoundaryValueFactory.Get_RegularizedPlugFlowAir( dummy, Radius, dummy, uInAir / C.uRef, sigma)); } C.AddBoundaryValue("Velocity_Inlet_CH4", VariableNames.Velocity_d(1), BoundaryValueFactory.Get_ConstantValue(0.0, dummy, dummy, dummy,dummy)); C.AddBoundaryValue("Velocity_Inlet_O2", VariableNames.Velocity_d(1), BoundaryValueFactory.Get_ConstantValue(0.0, dummy, dummy, 
dummy,dummy)); if(wallBounded){ C.AddBoundaryValue("Wall", VariableNames.VelocityX, BoundaryValueFactory.Get_ConstantValue(0.0, dummy, dummy, dummy,dummy)); C.AddBoundaryValue("Wall", VariableNames.VelocityY, BoundaryValueFactory.Get_ConstantValue(0.0, dummy, dummy, dummy,dummy)); } return C; } ``` ## Starting the MixtureFraction simulation Configuration for the simulation using the mixture fraction approach, where an infinite reaction rate is assumed. Used to find adequate starting solution for the full problem. ``` static XNSEC_Control GiveMeTheMixtureFractionCtrlFile(int dg, int nCells, double massFuelIn, double massAirIn, bool parabolicVelocityProfile, int multiplier, bool useFullGeometry, bool wallBounded, double xNodesMultiplier){ var C_MixtureFraction = GiveMeTheCtrlFile(dg, nCells, true,massFuelIn, massAirIn, parabolicVelocityProfile, useFullGeometry, wallBounded , xNodesMultiplier); C_MixtureFraction.physicsMode = PhysicsMode.MixtureFraction; C_MixtureFraction.ProjectName = "CounterDifFlame"; string name = C_MixtureFraction.ProjectName + "P" + dg + "K" + nCells + "Multiplier" +multiplier + "xNodesMultiplier"+xNodesMultiplier; C_MixtureFraction.SessionName = "FS_" + name; C_MixtureFraction.UseSelfMadeTemporalOperator = false; C_MixtureFraction.ChemicalReactionActive = false; C_MixtureFraction.physicsMode = PhysicsMode.MixtureFraction; C_MixtureFraction.NonLinearSolver.MaxSolverIterations = 50; C_MixtureFraction.dummycounter = multiplier; // Boundary and initial conditions double dummy = -11111111; C_MixtureFraction.AddInitialValue(VariableNames.MixtureFraction,BoundaryValueFactory.Get_ConstantValue(1.0,dummy,dummy , dummy, dummy)); C_MixtureFraction.AddBoundaryValue("Velocity_Inlet_CH4", VariableNames.MixtureFraction, BoundaryValueFactory.Get_ConstantValue(1.0,dummy,dummy , dummy, dummy)); C_MixtureFraction.AddBoundaryValue("Velocity_Inlet_O2", VariableNames.MixtureFraction, BoundaryValueFactory.Get_ConstantValue(0.0,dummy,dummy , dummy, dummy)); 
C_MixtureFraction.Tags.Add(multiplier.ToString()); // Tag used for restart of the full problem double radius_inlet = 0.5; // radius is 1/2 from length separation var troubledPoints = new List<double[]>(); if(useFullGeometry){ troubledPoints.Add(new double[]{2, +radius_inlet }); troubledPoints.Add(new double[]{2, -radius_inlet }); troubledPoints.Add(new double[]{1, +radius_inlet }); troubledPoints.Add(new double[]{1, -radius_inlet }); } else { troubledPoints.Add(new double[]{0, +radius_inlet }); troubledPoints.Add(new double[]{0, -radius_inlet }); troubledPoints.Add(new double[]{1, +radius_inlet }); troubledPoints.Add(new double[]{1, -radius_inlet }); } // Dictionary<string, Tuple<double, double>> Bounds = new Dictionary<string, Tuple<double, double>>(); // double eps = 0.05; // Bounds.Add(VariableNames.MixtureFraction, new Tuple<double, double>(0- eps, 1 + eps)); // C_MixtureFraction.VariableBounds = Bounds; bool useHomotopy = false; if(useHomotopy) { C_MixtureFraction.HomotopyApproach = XNSEC_Control.HomotopyType.Automatic; // C_MixtureFraction.HomotopyVariable = XNSEC_Control.HomotopyVariableEnum.VelocityInletMultiplier; // C_MixtureFraction.homotopieAimedValue = multiplier; C_MixtureFraction.HomotopyVariable = XNSEC_Control.HomotopyVariableEnum.Reynolds; C_MixtureFraction.homotopieAimedValue = C_MixtureFraction.Reynolds; } C_MixtureFraction.AdaptiveMeshRefinement = false; C_MixtureFraction.TimesteppingMode = BoSSS.Solution.Control.AppControl._TimesteppingMode.Steady; // C_MixtureFraction.activeAMRlevelIndicators.Add( new BoSSS.Application.XNSEC.AMR_onProblematicPoints(troubledPoints,C_MixtureFraction.AMR_startUpSweeps) ); // C_MixtureFraction.activeAMRlevelIndicators.Add(new BoSSS.Application.XNSEC.AMR_RefineAroundProblematicPoints(troubledPoints, 1, 0.01)); // C_MixtureFraction.activeAMRlevelIndicators.Add( new BoSSS.Application.XNSEC.AMR_onFlameSheet(C_MixtureFraction.zSt,3) ); return C_MixtureFraction; } ``` ## Send and run jobs ``` bool HLLRCalculation = 
true;
// foreach(double xDensityMult in xCellsMultiplier)
// foreach(int nCells in nCellsArray){
//     int dg = 2;
//     foreach(int i in multiplierS){
//         double massFuelIn = InitialMassFuelIn*i;
//         double massAirIn = InitialMassAirIn*i;
//         Type solver_MF = typeof(BoSSS.Application.XNSEC.XNSEC_MixtureFraction);
//         var C_MixtureFraction = GiveMeTheMixtureFractionCtrlFile(dg, nCells, massFuelIn, massAirIn, parabolicVelocityProfile, i, useFullGeometry, wallBounded, xDensityMult);
//         string jobName = C_MixtureFraction.SessionName;
//         Console.WriteLine(jobName);
//         var oneJob = new Job(jobName, solver_MF);
//         oneJob.NumberOfMPIProcs = 8;
//         // oneJob.ExecutionTime = "2:00:00";
//         // oneJob.UseComputeNodesExclusive = true;
//         oneJob.SetControlObject(C_MixtureFraction);
//         oneJob.Activate(myBatch);
//     }
// }
BoSSSshell.WorkflowMgm.BlockUntilAllJobsTerminate();
```

## Starting the finite-rate chemistry simulation

Now that the simulation for an "infinite" reaction rate is done, we use it to initialize the system with a finite reaction rate. The goal is to obtain solutions of the counterflow diffusion flame for increasing strain values.
We start with a low strain (larger Damköhler number), which is increased until extinction is (hopefully) reached.

```
static XNSEC_Control GiveMeTheFullCtrlFile(int dg, int nCells, double massFuelIn, double massAirIn, ISessionInfo SessionToRestart, bool parabolicVelocityProfile, bool chemReactionActive, bool useFullGeometry, bool wallBounded, int mult, int counter, double xNodesMultiplier) {
    var C_OneStep = GiveMeTheCtrlFile(dg, nCells, false, massFuelIn, massAirIn, parabolicVelocityProfile, useFullGeometry, wallBounded, xNodesMultiplier);
    C_OneStep.physicsMode = PhysicsMode.Combustion;
    C_OneStep.ProjectName = "CounterDifFlame";
    string name = C_OneStep.ProjectName + "P" + dg + "K" + nCells + "mult" + mult + "_c" + counter;
    C_OneStep.SessionName = "Full_" + name;
    C_OneStep.VariableOneStepParameters = true;
    // C_OneStep.Tags.Add("VelocityMultiplier" + mult);
    C_OneStep.Tags.Add(mult.ToString());
    C_OneStep.dummycounter = counter;
    C_OneStep.UseSelfMadeTemporalOperator = false;
    C_OneStep.myThermalWallType = SIPDiffusionTemperature.ThermalWallType.Adiabatic;
    C_OneStep.Timestepper_LevelSetHandling = BoSSS.Solution.XdgTimestepping.LevelSetHandling.None;
    C_OneStep.UseMixtureFractionsForCombustionInitialization = true;
    C_OneStep.LinearSolver.SolverCode = LinearSolverCode.exp_Kcycle_schwarz;
    C_OneStep.LinearSolver.NoOfMultigridLevels = 5;
    C_OneStep.ChemicalReactionActive = chemReactionActive;
    C_OneStep.AdaptiveMeshRefinement = true;
    C_OneStep.HeatCapacityMode = MaterialLaw_MultipleSpecies.CpCalculationMode.mixture;

    bool AMRinEachNewtonStep = false;
    if(AMRinEachNewtonStep) {
        C_OneStep.NoOfTimesteps = 4;
        C_OneStep.NonLinearSolver.MaxSolverIterations = 8; // Do only a few Newton iterations before refining
        C_OneStep.NonLinearSolver.MinSolverIterations = 8;
    } else {
        C_OneStep.NoOfTimesteps = 1; // The steady solution will be calculated again and do AMR
        C_OneStep.NonLinearSolver.MaxSolverIterations = 50;
    }
    C_OneStep.AMR_startUpSweeps = 0;
if(C_OneStep.ChemicalReactionActive){ C_OneStep.activeAMRlevelIndicators.Add(new AMR_onReactiveZones(C_OneStep.MolarMasses, 2, 0.2)); C_OneStep.activeAMRlevelIndicators.Add(new AMR_BasedOnVariableLimits("Temperature", new double[] { -100, 4 },3)); // Refine all cells with T > 5 (and T < -100) // C_OneStep.activeAMRlevelIndicators.Add(new AMR_BasedOnFieldGradient(2, 0.9, VariableNames.Temperature)); // C_OneStep.activeAMRlevelIndicators.Add(new AMR_BasedOnPerssonSensor(VariableNames.Temperature, 2)); } // C_OneStep.activeAMRlevelIndicators.Add(new AMR_BasedOnPerssonSensor(VariableNames.Temperature, 3)); // C_OneStep.NonLinearSolver.MaxSolverIterations = 10; // limiting of variable values Dictionary<string, Tuple<double, double>> Bounds = new Dictionary<string, Tuple<double, double>>(); double eps = 1e-2; Bounds.Add(VariableNames.Temperature, new Tuple<double, double>(1.0 - eps, 10)); // Min temp should be the inlet temperature. Bounds.Add(VariableNames.MassFraction0, new Tuple<double, double>(0.0 - 1e-4, 1.0 + 1e-4)); // Between 0 and 1 per definition Bounds.Add(VariableNames.MassFraction1, new Tuple<double, double>(0.0 - 1e-4, 1.0 + 1e-4)); Bounds.Add(VariableNames.MassFraction2, new Tuple<double, double>(0.0 - 1e-4, 1.0 + 1e-4)); Bounds.Add(VariableNames.MassFraction3, new Tuple<double, double>(0.0 - 1e-4, 1.0 + 1e-4)); C_OneStep.VariableBounds = Bounds; // Boundary conditions double dummy = 0; if(SessionToRestart != null) { C_OneStep.SetRestart(SessionToRestart); } else { C_OneStep.AddInitialValue(VariableNames.Temperature, BoundaryValueFactory.Get_ConstantValue(1.0, dummy, dummy, dummy, dummy)); C_OneStep.AddInitialValue(VariableNames.MassFraction0, BoundaryValueFactory.Get_ConstantValue(0.0, dummy, dummy, dummy, dummy)); C_OneStep.AddInitialValue(VariableNames.MassFraction1, BoundaryValueFactory.Get_ConstantValue(0.23, dummy, dummy, dummy, dummy)); C_OneStep.AddInitialValue(VariableNames.MassFraction2, BoundaryValueFactory.Get_ConstantValue(0.0, dummy, dummy, 
dummy, dummy)); C_OneStep.AddInitialValue(VariableNames.MassFraction3, BoundaryValueFactory.Get_ConstantValue(0.0, dummy, dummy, dummy, dummy)); } if(wallBounded){ C_OneStep.AddBoundaryValue("Wall", VariableNames.Temperature, BoundaryValueFactory.Get_ConstantValue(1.0, dummy, dummy, dummy, dummy)); } C_OneStep.AddBoundaryValue("Velocity_Inlet_CH4", VariableNames.Temperature, BoundaryValueFactory.Get_ConstantValue(1.0, dummy, dummy, dummy, dummy)); C_OneStep.AddBoundaryValue("Velocity_Inlet_CH4", VariableNames.MassFraction0, BoundaryValueFactory.Get_ConstantValue(C_OneStep.FuelInletConcentrations[0], dummy, dummy, dummy, dummy)); C_OneStep.AddBoundaryValue("Velocity_Inlet_CH4", VariableNames.MassFraction1, BoundaryValueFactory.Get_ConstantValue(C_OneStep.FuelInletConcentrations[1], dummy, dummy, dummy, dummy)); C_OneStep.AddBoundaryValue("Velocity_Inlet_CH4", VariableNames.MassFraction2, BoundaryValueFactory.Get_ConstantValue(C_OneStep.FuelInletConcentrations[2], dummy, dummy, dummy, dummy)); C_OneStep.AddBoundaryValue("Velocity_Inlet_CH4", VariableNames.MassFraction3, BoundaryValueFactory.Get_ConstantValue(C_OneStep.FuelInletConcentrations[3], dummy, dummy, dummy, dummy)); C_OneStep.AddBoundaryValue("Velocity_Inlet_O2", VariableNames.Temperature, BoundaryValueFactory.Get_ConstantValue(1.0, dummy, dummy, dummy, dummy)); C_OneStep.AddBoundaryValue("Velocity_Inlet_O2", VariableNames.MassFraction0, BoundaryValueFactory.Get_ConstantValue(C_OneStep.OxidizerInletConcentrations[0], dummy, dummy, dummy, dummy)); C_OneStep.AddBoundaryValue("Velocity_Inlet_O2", VariableNames.MassFraction1, BoundaryValueFactory.Get_ConstantValue(C_OneStep.OxidizerInletConcentrations[1], dummy, dummy, dummy, dummy)); C_OneStep.AddBoundaryValue("Velocity_Inlet_O2", VariableNames.MassFraction2, BoundaryValueFactory.Get_ConstantValue(C_OneStep.OxidizerInletConcentrations[2], dummy, dummy, dummy, dummy)); C_OneStep.AddBoundaryValue("Velocity_Inlet_O2", VariableNames.MassFraction3, 
BoundaryValueFactory.Get_ConstantValue(C_OneStep.OxidizerInletConcentrations[3], dummy, dummy, dummy, dummy)); return C_OneStep; } Type solver = typeof(BoSSS.Application.XNSEC.XNSEC); ``` Calculate the full solution for the initial value ``` int counter = 0; foreach (double xDensityNodes in xCellsMultiplier){ foreach (int nCells in nCellsArray) { foreach (int dg in DGDegrees) { foreach (int i in multiplierS) { // var sess =(myDb.Sessions.Where(s=>Convert.ToInt64(s.Tags.ToArray()[0]) == i)).FirstOrDefault(); var sess = (myDb.Sessions.Where(s => s.Name == "FS_CounterDifFlameP" + 2 + "K" + nCells + "Multiplier" + i + "xNodesMultiplier" + xDensityNodes)).FirstOrDefault(); var C = GiveMeTheFullCtrlFile(dg, nCells, InitialMassFuelIn * i, InitialMassAirIn * i, sess, parabolicVelocityProfile, chemicalReactionActive, useFullGeometry, wallBounded, i, counter, xDensityNodes); string jobName = C.SessionName + "TemperatureStudy"; Console.WriteLine(jobName); var oneJob = new Job(jobName, solver); oneJob.NumberOfMPIProcs = 12; oneJob.SetControlObject(C); oneJob.Activate(myBatch); counter++; } } } } BoSSSshell.WorkflowMgm.BlockUntilAllJobsTerminate(); ``` ## Postprocessing ## Calculate max temperature ``` Dictionary<string, double> MaxTemperature = new Dictionary<string, double>(); Dictionary<double, double> MaxTemperaturek1 = new Dictionary<double, double>(); Dictionary<double, double> MaxTemperaturek2 = new Dictionary<double, double>(); Dictionary<double, double> MaxTemperaturek3 = new Dictionary<double, double>(); Dictionary<double, double> MaxTemperaturek4 = new Dictionary<double, double>(); foreach(var sess in myDb.Sessions){ if(sess.Name.StartsWith("Full") ){ var timestep_FullChem = sess.Timesteps.Last(); double min = 0; double max = 0; timestep_FullChem.Fields.Pick(5).GetExtremalValues( out min, out max); double nCells =Convert.ToDouble( sess.KeysAndQueries["Grid:NoOfCells"]); double hMin = Convert.ToDouble( sess.KeysAndQueries["Grid:hMin"]); int k = 
Convert.ToInt32(sess.KeysAndQueries["DGdegree:Temperature"]); Console.WriteLine("nCells=" +nCells + "____" + "hmin" + hMin + "____k" + k + "__MaxVal___" + max*300); // switch(k){ // case 1: // MaxTemperaturek1.Add(hMin, max*300); // break; // case 2: // MaxTemperaturek2.Add(hMin, max*300); // break; // case 3: // MaxTemperaturek3.Add(hMin, max*300); // break; // case 4: // MaxTemperaturek4.Add(hMin, max*300); // break; // default: throw new Exception(); // } } } myDb.Sessions[1].KeysAndQueries["Lewis"] myDb.Sessions.KeysAndQueries ```
github_jupyter
# Ames Housing Prices - Step 4: Modeling We are now ready to begin building our regression model to predict prices. This notebook demonstrates how to use the previous work (cleaning, feature prep) to quickly build up the engineered features we need to train our ML model. ``` # Basic setup %run config.ipynb # Connect to Cortex 5 and create a Builder instance cortex = Cortex.client() builder = cortex.builder() ``` ### Training Data We will start with the training dataset from our previous steps and run the _features_ pipeline to get cleaned and prepared data. ``` train_ds = cortex.dataset('kaggle/ames-housing-train') train_df = train_ds.as_pandas() pipeline = train_ds.pipeline('features') train_df = pipeline.run(train_df) ``` ### Feature Framing We now need to split out our target variable from the training data and convert our categorical values into _dummies_. ``` y = train_df['SalePrice'] train_df.shape def drop_target(pipeline, df): df.drop('SalePrice', axis=1, inplace=True) def get_dummies(pipeline, df): return pd.get_dummies(df) pipeline = train_ds.pipeline('engineer', depends=['features']) pipeline.reset() pipeline.add_step(drop_target) pipeline.add_step(get_dummies) # Run the feature engineering pipeline to prepare for model training train_df = pipeline.run(train_ds.as_pandas()) # Remember the full set of engineered columns we need to produce for the model pipeline.set_context('columns', train_df.columns.tolist()) # Save the dataset to persist pipeline changes train_ds.save() print('\nTrain shape: (%d, %d)' % train_df.shape) ``` ## Model Training, Validation, and Experimentation We are going to try a variety of algorithms and parameters to achieve optimal results. This will be an iterative process that Cortex 5 will help us track and reproduce in the future by recording the data pipeline used, the model parameters, model metrics, and model artifacts in Experiments.
``` from sklearn.linear_model import LinearRegression, RidgeCV, LassoCV, ElasticNetCV, Ridge, Lasso, ElasticNet from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split, GridSearchCV def train(x, y, **kwargs): alphas = kwargs.get('alphas', [1, 0.1, 0.001, 0.0001]) # Select algorithm mtype = kwargs.get('model_type') if mtype == 'Lasso': model = LassoCV(alphas=alphas) elif mtype == 'Ridge': # model = RidgeCV(alphas=alphas) model = GridSearchCV(Ridge(), param_grid={'alpha': np.logspace(0, 1, num=10), 'normalize': [True, False], 'solver': ['auto', 'svd']}, scoring=['explained_variance', 'r2', 'neg_mean_squared_error'], n_jobs=-1, cv=10, refit='neg_mean_squared_error') elif mtype == 'ElasticNet': model = ElasticNetCV(alphas=alphas) else: model = LinearRegression() # Train model model.fit(x, y) if hasattr(model, 'best_estimator_'): return model.best_estimator_, model.best_params_ return model, alphas def predict_and_score(model, x, y): predictions = model.predict(x) rmse = np.sqrt(mean_squared_error(predictions, y)) return [predictions, rmse] X_train, X_test, y_train, y_test = train_test_split(train_df, y.values, test_size=0.20, random_state=10) ``` ### Experiment Management We are ready to run our train and validation loop and select the optimal model. As we run our experiment, Cortex will track each run and record the key params, metrics, and artifacts needed to reproduce and/or deploy the model later.
``` %%time best_model = None best_model_type = None best_rmse = 1.0 exp = cortex.experiment('kaggle/ames-housing-regression') # exp.reset() exp.set_meta('style', 'supervised') exp.set_meta('function', 'regression') with exp.start_run() as run: alphas = [1, 0.1, 0.001, 0.0005] for model_type in ['Linear', 'Lasso', 'Ridge', 'ElasticNet']: print('---'*30) print('Training model using {} regression algorithm'.format(model_type)) model, params = train(X_train, y_train, model_type=model_type, alphas=alphas) print('Params: ', params) [predictions, rmse] = predict_and_score(model, X_train, y_train) print('Training error:', rmse) [predictions, rmse] = predict_and_score(model, X_test, y_test) print('Testing error:', rmse) if rmse < best_rmse: best_rmse = rmse best_model = model best_model_type = model_type r2 = best_model.score(X_test, y_test) run.log_metric('r2', r2) run.log_metric('rmse', best_rmse) run.log_param('model_type', best_model_type) run.log_param('alphas', alphas) run.log_artifact('model', best_model) print('---'*30) print('Best model: ' + best_model_type) print('Best testing error: %.6f' % best_rmse) print('R2 score: %.6f' % r2) exp ```
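The RMSE metric reported by `predict_and_score` above is just the square root of the mean squared error; a quick standalone check of the arithmetic (toy values, not the Ames data):

```python
import numpy as np
from sklearn.metrics import mean_squared_error

# Tiny illustration of the RMSE computation used by predict_and_score
y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5,  0.0, 2.0, 8.0])

rmse = np.sqrt(mean_squared_error(y_true, y_pred))
# MSE = (0.25 + 0.25 + 0 + 1) / 4 = 0.375, so RMSE = sqrt(0.375)
print(round(rmse, 4))  # 0.6124
```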
``` import SimPEG as simpeg import simpegMT as simpegmt import numpy as np, os import matplotlib.pyplot as plt ## Setup the modelling # Setting up 1D mesh and conductivity models to forward model data. # Frequency nFreq = 31 freqs = np.logspace(3,-3,nFreq) # Set mesh parameters ct = 20 air = simpeg.Utils.meshTensor([(ct,16,1.4)]) core = np.concatenate( ( np.kron(simpeg.Utils.meshTensor([(ct,10,-1.3)]),np.ones((5,))) , simpeg.Utils.meshTensor([(ct,5)]) ) ) bot = simpeg.Utils.meshTensor([(core[0],10,-1.4)]) x0 = -np.array([np.sum(np.concatenate((core,bot)))]) # Make the model m1d = simpeg.Mesh.TensorMesh([np.concatenate((bot,core,air))], x0=x0) # Setup model variables active = m1d.vectorCCx<0. layer1 = (m1d.vectorCCx<-500.) & (m1d.vectorCCx>=-800.) layer2 = (m1d.vectorCCx<-3500.) & (m1d.vectorCCx>=-5000.) # Set the conductivity values sig_half = 2e-3 sig_air = 1e-8 sig_layer1 = .2 sig_layer2 = .2 # Make the true model sigma_true = np.ones(m1d.nCx)*sig_air sigma_true[active] = sig_half sigma_true[layer1] = sig_layer1 sigma_true[layer2] = sig_layer2 # Extract the model m_true = np.log(sigma_true[active]) # Make the background model sigma_0 = np.ones(m1d.nCx)*sig_air sigma_0[active] = sig_half m_0 = np.log(sigma_0[active]) # Set the mapping actMap = simpeg.Maps.ActiveCells(m1d, active, np.log(1e-8), nC=m1d.nCx) mappingExpAct = simpeg.Maps.ExpMap(m1d) * actMap ## Setup the layout of the survey, set the sources and the connected receivers # Receivers rxList = [] for rxType in ['z1dr','z1di']: rxList.append(simpegmt.SurveyMT.RxMT(simpeg.mkvc(np.array([0.0]),2).T,rxType)) # Source list srcList =[] for freq in freqs: srcList.append(simpegmt.SurveyMT.srcMT_polxy_1Dprimary(rxList,freq)) # Make the survey survey = simpegmt.SurveyMT.SurveyMT(srcList) survey.mtrue = m_true # Set the problem problem = simpegmt.ProblemMT1D.eForm_psField(m1d,sigmaPrimary=sigma_0,mapping=mappingExpAct) from pymatsolver import MumpsSolver problem.solver = MumpsSolver problem.pair(survey) ## Forward
model observed data # Project the data d_true = survey.dpred(m_true) survey.dtrue = d_true # Add noise to the true data std = 0.05 # 5% std noise = std*abs(survey.dtrue)*np.random.randn(*survey.dtrue.shape) # Assign the dobs survey.dobs = survey.dtrue + noise survey.std = survey.dobs*0 + std # Assign the data weight survey.Wd = 1/(abs(survey.dobs)*std) from glob import glob modList = [] modFiles = glob('*52.npy') modFiles.sort() for f in modFiles: modList.append(np.load(f)) simpegmt.Utils.dataUtils.plotMT1DModelData(problem,[m_0,modList[10],modList[15],modList[24]]) plt.show() len(modList) ```
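The noise model above scales zero-mean Gaussian noise by 5% of each datum's magnitude, and the data weights are the reciprocal of that noise level; a plain-numpy sketch of the same recipe (toy values, not the MT response):

```python
import numpy as np

# Sketch of the relative-noise model used above: zero-mean Gaussian noise
# whose standard deviation is a fixed fraction (5%) of each datum's magnitude
rng = np.random.default_rng(0)
std = 0.05
d_true = np.array([10.0, -2.0, 100.0, 0.5])

noise = std * np.abs(d_true) * rng.standard_normal(d_true.shape)
d_obs = d_true + noise

# The implied data weights are the reciprocal of each datum's noise level
Wd = 1.0 / (np.abs(d_obs) * std)
print(noise.shape, d_obs.shape)  # (4,) (4,)
```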
# Receiver Operating Characteristic (ROC) with cross validation on Iris In this notebook we find an example of Receiver Operating Characteristic (ROC) Curves. An ROC curve represents the discriminative ability of a binary classifier in a graphical plot. In an ROC curve the true positive rate of the binary classifier on the Y axis is plotted against the false positive rate of the binary classifier on the X axis. A method, known as K-fold cross-validation, can be used to visualize the confidence of the ROC curve. The idea behind K-fold cross-validation is to split up the Iris dataset into K folds. For each of the K folds, one fold is kept separated, and a classifier is trained on the remaining folds. Then this classifier is evaluated on the separated fold. The K-fold cross-validation method yields K classifiers with K validations. The variance of the curves roughly shows how the classifier output is affected by the changes in the training data, and gives an indication of the confidence intervals. We took the example below from the [SKLearn documentation](https://scikit-learn.org/stable/auto_examples/model_selection/plot_roc_crossval.html#sphx-glr-auto-examples-model-selection-plot-roc-crossval-py).
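Before the full example, the fold mechanics can be verified in isolation; a small sketch (toy data, not Iris) showing that the test folds are disjoint and together cover every sample exactly once:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Sketch of the K-fold idea: each sample appears in exactly one test fold
X = np.arange(12).reshape(-1, 1)
y = np.array([0, 1] * 6)

cv = StratifiedKFold(n_splits=3)
test_indices = []
for train, test in cv.split(X, y):
    # train and test are disjoint index arrays
    assert len(np.intersect1d(train, test)) == 0
    test_indices.extend(test)

# Together the test folds partition the whole dataset
print(sorted(test_indices) == list(range(12)))  # True
```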
``` import numpy as np from numpy import interp import matplotlib.pyplot as plt from sklearn import svm, datasets from sklearn.metrics import roc_curve, auc from sklearn.model_selection import StratifiedKFold # ############################################################################# # Data IO and generation # Import some data to play with iris = datasets.load_iris() X = iris.data y = iris.target X, y = X[y != 2], y[y != 2] n_samples, n_features = X.shape # Add noisy features random_state = np.random.RandomState(0) X = np.c_[X, random_state.randn(n_samples, 200 * n_features)] # ############################################################################# # Classification and ROC analysis # Run classifier with cross-validation and plot ROC curves cv = StratifiedKFold(n_splits=6) classifier = svm.SVC(kernel='linear', probability=True, random_state=random_state) tprs = [] aucs = [] mean_fpr = np.linspace(0, 1, 100) total_labels = np.array([]) total_probas = np.array([]) i = 0 for train, test in cv.split(X, y): probas_ = classifier.fit(X[train], y[train]).predict_proba(X[test]) # Compute ROC curve and area under the curve fpr, tpr, thresholds = roc_curve(y[test], probas_[:, 1]) total_labels = np.concatenate([total_labels, y[test]], axis=None) total_probas = np.concatenate([total_probas, probas_[:, 1]], axis=None) tprs.append(interp(mean_fpr, fpr, tpr)) tprs[-1][0] = 0.0 roc_auc = auc(fpr, tpr) aucs.append(roc_auc) plt.plot(fpr, tpr, lw=1, alpha=0.3, label='ROC fold %d (AUC = %0.2f)' % (i, roc_auc)) i += 1 plt.plot([0, 1], [0, 1], linestyle='--', lw=2, color='r', label='Chance', alpha=.8) mean_tpr = np.mean(tprs, axis=0) mean_tpr[-1] = 1.0 mean_auc = auc(mean_fpr, mean_tpr) std_auc = np.std(aucs) plt.plot(mean_fpr, mean_tpr, color='b', label=r'Mean ROC (AUC = %0.2f $\pm$ %0.2f)' % (mean_auc, std_auc), lw=2, alpha=.8) std_tpr = np.std(tprs, axis=0) tprs_upper = np.minimum(mean_tpr + std_tpr, 1) tprs_lower = np.maximum(mean_tpr - std_tpr, 0) plt.fill_between(mean_fpr,
tprs_lower, tprs_upper, color='b', alpha=.2, label=r'$\pm$ 1 std. dev.') plt.xlim([-0.05, 1.05]) plt.ylim([-0.05, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Receiver operating characteristic example') plt.legend(loc="lower right") plt.show() ``` # ROC confidence intervals with pyroc and cross validation Our next goal is to create ROC confidence intervals (1 sigma) for the cross validation predictions. In the previous example we performed a K-fold cross validation, yielding K validations on K disjoint folds. We create an ROC curve for all K-fold validations together. Next we plot the confidence interval for 1-sigma. As p-value we should take p = 1 - (0.68 + 0.32 / 2) = 1 - 0.84 = 0.16. We compare the results with the SKLearn results from the example above. ``` import pyroc roc = pyroc.ROC(total_labels, total_probas) fig, ax = plt.subplots(figsize=(8, 8)) roc.plot( x_label='False Positive Rate', y_label='True Positive Rate', title='Receiver operating characteristic example', label=f'Mean ROC curve (area = {roc.auc})', color='green', bootstrap=True, p_value=0.16, ax=ax) ax.plot(mean_fpr, mean_tpr, color='b', label=r'Mean ROC (AUC = %0.2f $\pm$ %0.2f)' % (mean_auc, std_auc), lw=2, alpha=.8) ax.fill_between(mean_fpr, tprs_lower, tprs_upper, color='b', alpha=.2, label=r'$\pm$ 1 std. dev.') ax.plot([0, 1], [0, 1], linestyle='--', lw=2, color='r', label='Chance', alpha=.8) ax.legend(loc="lower right") ```
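The 1-sigma interval idea can also be sketched with a plain percentile bootstrap in numpy and scikit-learn; this is a simplified stand-in for pyroc's `bootstrap=True` option, with invented toy labels and scores:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Percentile-bootstrap sketch of a 1-sigma confidence interval for AUC
rng = np.random.default_rng(0)
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1] * 10)
scores = labels * 0.5 + rng.uniform(size=labels.size)

n = labels.size
aucs = []
for _ in range(200):
    idx = rng.integers(0, n, size=n)      # resample (label, score) pairs with replacement
    if len(np.unique(labels[idx])) < 2:   # an AUC needs both classes present
        continue
    aucs.append(roc_auc_score(labels[idx], scores[idx]))

# Taking p = 0.16 on each side brackets roughly +/- 1 standard deviation
lo, hi = np.percentile(aucs, [16, 84])
print(lo <= np.median(aucs) <= hi)  # True
```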
# Neural networks with PyTorch Deep learning networks tend to be massive with dozens or hundreds of layers; that's where the term "deep" comes from. You can build one of these deep networks using only weight matrices as we did in the previous notebook, but in general it's very cumbersome and difficult to implement. PyTorch provides a module, `nn`, that makes building large neural networks efficient and straightforward. ``` # Import necessary packages %matplotlib inline %config InlineBackend.figure_format = 'retina' import numpy as np import torch import helper import matplotlib.pyplot as plt ``` Now we're going to build a larger network that can solve a (formerly) difficult problem, identifying text in an image. Here we'll use the MNIST dataset which consists of greyscale handwritten digits. Each image is 28x28 pixels, you can see a sample below <img src='assets/mnist.png'> Our goal is to build a neural network that can take one of these images and predict the digit in the image. First up, we need to get our dataset. This is provided through the `torchvision` package. The code below will download the MNIST dataset, then create training and test datasets for us. Don't worry too much about the details here, you'll learn more about this later.
``` # The MNIST datasets are hosted on yann.lecun.com, which has moved under CloudFlare protection # Run this script to enable the datasets download # Reference: https://github.com/pytorch/vision/issues/1938 from six.moves import urllib opener = urllib.request.build_opener() opener.addheaders = [('User-agent', 'Mozilla/5.0')] urllib.request.install_opener(opener) ### Run this cell from torchvision import datasets, transforms # Define a transform to normalize the data transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,)), ]) # Download and load the training data trainset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True, train=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True) ``` We have the training data loaded into `trainloader` and we make that an iterator with `iter(trainloader)`. Later, we'll use this to loop through the dataset for training, like ```python for image, label in trainloader: ## do things with images and labels ``` You'll notice I created the `trainloader` with a batch size of 64, and `shuffle=True`. The batch size is the number of images we get in one iteration from the data loader and pass through our network, often called a *batch*. And `shuffle=True` tells it to shuffle the dataset every time we start going through the data loader again. But here I'm just grabbing the first batch so we can check out the data. We can see below that `images` is just a tensor with size `(64, 1, 28, 28)`. So, 64 images per batch, 1 color channel, and 28x28 images. ``` dataiter = iter(trainloader) images, labels = next(dataiter) print(type(images)) print(images.shape) print(labels.shape) ``` This is what one of the images looks like. ``` plt.imshow(images[1].numpy().squeeze(), cmap='Greys_r'); ``` First, let's try to build a simple network for this dataset using weight matrices and matrix multiplications.
Then, we'll see how to do it using PyTorch's `nn` module which provides a much more convenient and powerful method for defining network architectures. The networks you've seen so far are called *fully-connected* or *dense* networks. Each unit in one layer is connected to each unit in the next layer. In fully-connected networks, the input to each layer must be a one-dimensional vector (which can be stacked into a 2D tensor as a batch of multiple examples). However, our images are 28x28 2D tensors, so we need to convert them into 1D vectors. Thinking about sizes, we need to convert the batch of images with shape `(64, 1, 28, 28)` to have a shape of `(64, 784)`, 784 is 28 times 28. This is typically called *flattening*, we flattened the 2D images into 1D vectors. Previously you built a network with one output unit. Here we need 10 output units, one for each digit. We want our network to predict the digit shown in an image, so what we'll do is calculate probabilities that the image is of any one digit or class. This ends up being a discrete probability distribution over the classes (digits) that tells us the most likely class for the image. That means we need 10 output units for the 10 classes (digits). We'll see how to convert the network output into a probability distribution next. > **Exercise:** Flatten the batch of images `images`. Then build a multi-layer network with 784 input units, 256 hidden units, and 10 output units using random tensors for the weights and biases. For now, use a sigmoid activation for the hidden layer. Leave the output layer without an activation, we'll add one that gives us a probability distribution next.
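As a hint for the flattening step, the reshape can be sketched with plain numpy (the torch equivalents are `torch.flatten(images, start_dim=1)` and `Tensor.view`):

```python
import numpy as np

# A fake batch shaped like the MNIST loader output: 64 images, 1 channel, 28x28
images = np.zeros((64, 1, 28, 28))

# Flatten everything after the batch dimension: (64, 1, 28, 28) -> (64, 784)
flat = images.reshape(images.shape[0], -1)
print(flat.shape)  # (64, 784)
```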
``` ## Your solution ## Activation function def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Neural network def multi_Layer_NW(inputUnits, hiddenUnits, outputUnits): torch.manual_seed(7) # Set the random seed so things are predictable # Define the size of each layer in our network n_input = inputUnits # Number of input units, must match number of input features n_hidden = hiddenUnits # Number of hidden units n_output = outputUnits # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) return W1,W2,B1,B2 def calc_output(features,W1,W2,B1,B2): h = activation(torch.matmul(features,W1).add_(B1)) output = activation(torch.matmul(h,W2).add_(B2)) return output # Features are flattened batch input features = torch.flatten(images,start_dim=1) W1,W2,B1,B2 = multi_Layer_NW(features.shape[1],256,10) out = calc_output(features,W1,W2,B1,B2) # output of your network, should have shape (64,10) ``` Now we have 10 outputs for our network. We want to pass in an image to our network and get out a probability distribution over the classes that tells us the likely class(es) the image belongs to. Something that looks like this: <img src='assets/image_distribution.png' width=500px> Here we see that the probability for each class is roughly the same. This is representing an untrained network, it hasn't seen any data yet so it just returns a uniform distribution with equal probabilities for each class. To calculate this probability distribution, we often use the [**softmax** function](https://en.wikipedia.org/wiki/Softmax_function). 
Mathematically this looks like $$ \Large \sigma(x_i) = \cfrac{e^{x_i}}{\sum_{k=1}^{K}{e^{x_k}}} $$ What this does is squish each input $x_i$ between 0 and 1 and normalizes the values to give you a proper probability distribution where the probabilities sum up to one. > **Exercise:** Implement a function `softmax` that performs the softmax calculation and returns probability distributions for each example in the batch. Note that you'll need to pay attention to the shapes when doing this. If you have a tensor `a` with shape `(64, 10)` and a tensor `b` with shape `(64,)`, doing `a/b` will give you an error because PyTorch will try to do the division across the columns (called broadcasting) but you'll get a size mismatch. The way to think about this is for each of the 64 examples, you only want to divide by one value, the sum in the denominator. So you need `b` to have a shape of `(64, 1)`. This way PyTorch will divide the 10 values in each row of `a` by the one value in each row of `b`. Pay attention to how you take the sum as well. You'll need to define the `dim` keyword in `torch.sum`. Setting `dim=0` takes the sum across the rows while `dim=1` takes the sum across the columns. ``` def softmax(x): ## TODO: Implement the softmax function here return torch.exp(x) / torch.sum(torch.exp(x), dim=1).view(-1, 1) # Here, out should be the output of the network in the previous exercise with shape (64,10) probabilities = softmax(out) # Does it have the right shape? Should be (64, 10) print(probabilities.shape) # Does it sum to 1? print(probabilities.sum(dim=1)) ``` ## Building networks with PyTorch PyTorch provides a module `nn` that makes building networks much simpler. Here I'll show you how to build the same one as above with 784 inputs, 256 hidden units, 10 output units and a softmax output.
``` from torch import nn class Network(nn.Module): def __init__(self): super().__init__() # Inputs to hidden layer linear transformation self.hidden = nn.Linear(784, 256) # Output layer, 10 units - one for each digit self.output = nn.Linear(256, 10) # Define sigmoid activation and softmax output self.sigmoid = nn.Sigmoid() self.softmax = nn.Softmax(dim=1) def forward(self, x): # Pass the input tensor through each of our operations x = self.hidden(x) x = self.sigmoid(x) x = self.output(x) x = self.softmax(x) return x ``` Let's go through this bit by bit. ```python class Network(nn.Module): ``` Here we're inheriting from `nn.Module`. Combined with `super().__init__()` this creates a class that tracks the architecture and provides a lot of useful methods and attributes. It is mandatory to inherit from `nn.Module` when you're creating a class for your network. The name of the class itself can be anything. ```python self.hidden = nn.Linear(784, 256) ``` This line creates a module for a linear transformation, $x\mathbf{W} + b$, with 784 inputs and 256 outputs and assigns it to `self.hidden`. The module automatically creates the weight and bias tensors which we'll use in the `forward` method. You can access the weight and bias tensors once the network (`net`) is created with `net.hidden.weight` and `net.hidden.bias`. ```python self.output = nn.Linear(256, 10) ``` Similarly, this creates another linear transformation with 256 inputs and 10 outputs. ```python self.sigmoid = nn.Sigmoid() self.softmax = nn.Softmax(dim=1) ``` Here I defined operations for the sigmoid activation and softmax output. Setting `dim=1` in `nn.Softmax(dim=1)` calculates softmax across the columns. ```python def forward(self, x): ``` PyTorch networks created with `nn.Module` must have a `forward` method defined. It takes in a tensor `x` and passes it through the operations you defined in the `__init__` method. 
```python x = self.hidden(x) x = self.sigmoid(x) x = self.output(x) x = self.softmax(x) ``` Here the input tensor `x` is passed through each operation and reassigned to `x`. We can see that the input tensor goes through the hidden layer, then a sigmoid function, then the output layer, and finally the softmax function. It doesn't matter what you name the variables here, as long as the inputs and outputs of the operations match the network architecture you want to build. The order in which you define things in the `__init__` method doesn't matter, but you'll need to sequence the operations correctly in the `forward` method. Now we can create a `Network` object. ``` # Create the network and look at its text representation model = Network() model ``` You can define the network somewhat more concisely and clearly using the `torch.nn.functional` module. This is the most common way you'll see networks defined as many operations are simple element-wise functions. We normally import this module as `F`, `import torch.nn.functional as F`. ``` import torch.nn.functional as F class Network(nn.Module): def __init__(self): super().__init__() # Inputs to hidden layer linear transformation self.hidden = nn.Linear(784, 256) # Output layer, 10 units - one for each digit self.output = nn.Linear(256, 10) def forward(self, x): # Hidden layer with sigmoid activation x = F.sigmoid(self.hidden(x)) # Output layer with softmax activation x = F.softmax(self.output(x), dim=1) return x ``` ### Activation functions So far we've only been looking at the sigmoid activation function, but in general any function can be used as an activation function. The only requirement is that for a network to approximate a non-linear function, the activation functions must be non-linear. Here are a few more examples of common activation functions: Tanh (hyperbolic tangent), and ReLU (rectified linear unit). 
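The output ranges of these activations can be checked numerically; a plain-numpy sketch (in PyTorch the equivalents are `torch.sigmoid`, `torch.tanh`, and `torch.relu`):

```python
import numpy as np

# Plain-numpy versions of the three activations discussed above
def sigmoid(x): return 1.0 / (1.0 + np.exp(-x))
def tanh(x):    return np.tanh(x)
def relu(x):    return np.maximum(0.0, x)

x = np.linspace(-5, 5, 101)
# sigmoid squashes into (0, 1), tanh into (-1, 1), relu clips negatives to 0
print(sigmoid(x).min() > 0 and sigmoid(x).max() < 1)  # True
print(abs(tanh(x)).max() < 1)                         # True
print(relu(x).min() == 0.0)                           # True
```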
<img src="assets/activation.png" width=700px> In practice, the ReLU function is used almost exclusively as the activation function for hidden layers. ### Your Turn to Build a Network <img src="assets/mlp_mnist.png" width=600px> > **Exercise:** Create a network with 784 input units, a hidden layer with 128 units and a ReLU activation, then a hidden layer with 64 units and a ReLU activation, and finally an output layer with a softmax activation as shown above. You can use a ReLU activation with the `nn.ReLU` module or `F.relu` function. It's good practice to name your layers by their type of network, for instance 'fc' to represent a fully-connected layer. As you code your solution, use `fc1`, `fc2`, and `fc3` as your layer names. ``` ## Your solution here ``` ### Initializing weights and biases The weights and such are automatically initialized for you, but it's possible to customize how they are initialized. The weights and biases are tensors attached to the layer you defined, you can get them with `model.fc1.weight` for instance. ``` print(model.fc1.weight) print(model.fc1.bias) ``` For custom initialization, we want to modify these tensors in place. These are actually autograd *Variables*, so we need to get back the actual tensors with `model.fc1.weight.data`. Once we have the tensors, we can fill them with zeros (for biases) or random normal values. ``` # Set biases to all zeros model.fc1.bias.data.fill_(0) # sample from random normal with standard dev = 0.01 model.fc1.weight.data.normal_(std=0.01) ``` ### Forward pass Now that we have a network, let's see what happens when we pass in an image. 
``` # Grab some data dataiter = iter(trainloader) images, labels = next(dataiter) # Resize images into a 1D vector, new shape is (batch size, color channels, image pixels) images.resize_(64, 1, 784) # or images.resize_(images.shape[0], 1, 784) to automatically get batch size # Forward pass through the network img_idx = 0 ps = model.forward(images[img_idx,:]) img = images[img_idx] helper.view_classify(img.view(1, 28, 28), ps) ``` As you can see above, our network has basically no idea what this digit is. It's because we haven't trained it yet, all the weights are random! ### Using `nn.Sequential` PyTorch provides a convenient way to build networks like this where a tensor is passed sequentially through operations, `nn.Sequential` ([documentation](https://pytorch.org/docs/master/nn.html#torch.nn.Sequential)). Using this to build the equivalent network: ``` # Hyperparameters for our network input_size = 784 hidden_sizes = [128, 64] output_size = 10 # Build a feed-forward network model = nn.Sequential(nn.Linear(input_size, hidden_sizes[0]), nn.ReLU(), nn.Linear(hidden_sizes[0], hidden_sizes[1]), nn.ReLU(), nn.Linear(hidden_sizes[1], output_size), nn.Softmax(dim=1)) print(model) # Forward pass through the network and display output images, labels = next(iter(trainloader)) images.resize_(images.shape[0], 1, 784) ps = model.forward(images[0,:]) helper.view_classify(images[0].view(1, 28, 28), ps) ``` Here our model is the same as before: 784 input units, a hidden layer with 128 units, ReLU activation, 64 unit hidden layer, another ReLU, then the output layer with 10 units, and the softmax output. The operations are available by passing in the appropriate index. For example, if you want to get the first Linear operation and look at the weights, you'd use `model[0]`. ``` print(model[0]) model[0].weight ``` You can also pass in an `OrderedDict` to name the individual layers and operations, instead of using incremental integers.
Note that dictionary keys must be unique, so _each operation must have a different name_. ``` from collections import OrderedDict model = nn.Sequential(OrderedDict([ ('fc1', nn.Linear(input_size, hidden_sizes[0])), ('relu1', nn.ReLU()), ('fc2', nn.Linear(hidden_sizes[0], hidden_sizes[1])), ('relu2', nn.ReLU()), ('output', nn.Linear(hidden_sizes[1], output_size)), ('softmax', nn.Softmax(dim=1))])) model ``` Now you can access layers either by integer or by name ``` print(model[0]) print(model.fc1) ``` In the next notebook, we'll see how we can train a neural network to accurately predict the numbers appearing in the MNIST images.
<a href="https://colab.research.google.com/github/dcshapiro/AI-Feynman/blob/master/AI_Feynman_2_0.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # AI Feynman 2.0: Learning Regression Equations From Data ### Clone repository and install dependencies ``` !git clone https://github.com/SJ001/AI-Feynman.git ``` Look at what we downloaded ``` !ls /content/AI-Feynman # %pycat AI-Feynman/requirements.txt if you need to fix the dependencies ``` Fix broken requirements file (may not be needed if later versions fix this). ``` %%writefile AI-Feynman/requirements.txt torch>=1.4.0 matplotlib sympy==1.4 pandas scipy sortedcontainers ``` Install dependencies not already installed in Google Colab ``` !pip install -r AI-Feynman/requirements.txt ``` Check that Fortran is installed ``` !gfortran --version ``` Check the OS version ``` !lsb_release -a ``` Install the csh shell ``` !sudo apt-get install csh ``` Set loose permissions to avoid some reported file permissions issues ``` !chmod +777 /content/AI-Feynman/Code/* ``` ### Compile the Fortran code Look at the code directory ``` !ls -l /content/AI-Feynman/Code ``` Compile .f files into .x files ``` !cd /content/AI-Feynman/Code/ && ./compile.sh ``` ### Run the first example from the AI-Feynman repository Change working directory to the Code directory ``` import os os.chdir("/content/AI-Feynman/Code/") print(os.getcwd()) !pwd ``` Check that the bruteforce code runs without errors ``` from S_brute_force import brute_force brute_force("/content/AI-Feynman/example_data/","example1.txt",30,"14ops.txt") ``` Look at the first line of the example 1 file ``` !head -n 1 /content/AI-Feynman/example_data/example1.txt # Example 1 has data generated from an equation, where the last column is the regression target, and the rest of the columns are the input data # The following example shows the relationship between the first line of the file example1.txt and the formula used to
make the data
x=[1.6821347439986711,1.1786188905177983,4.749225735259924,1.3238356535004034,3.462199507094163]
x0,x1,x2,x3=x[0],x[1],x[2],x[3]
(x0**2 - 2*x0*x1 + x1**2 + x2**2 - 2*x2*x3 + x3**2)**0.5
```

Run the code. It takes a long time, so go get some coffee.

```
from S_run_aifeynman import run_aifeynman

# Run example 1 as the regression dataset
run_aifeynman("/content/AI-Feynman/example_data/", "example1.txt", 30, "14ops.txt", polyfit_deg=3, NN_epochs=400)
```

### Assess the results

```
!cat results.dat
```

We found a candidate with an excellent fit; let's see what we got.

```
!ls -l /content/AI-Feynman/Code/results/
!ls -l /content/AI-Feynman/Code/results/NN_trained_models/models
!cat /content/AI-Feynman/Code/results/solution_example1.txt
```

Note in the cell above that the solution with the lowest error is the formula this data was generated from.

### Try our own dataset generation and equation learning

The code below generates our regression example dataset. We generate points for 4 columns, where x0 is from the same equation as x1, and x2 is from the same equation as x3. The last column is Y.

```
import os
import random

os.chdir("/content/AI-Feynman/example_data")

def getY(x01, x23):
    y = -0.5*x01 + 0.5*x23 + 3
    return y

def getRow():
    [x0, x2] = [random.random() for x in range(2)]
    x1 = x0
    x3 = x2
    y = getY(x1, x3)
    return str(x0)+" "+str(x1)+" "+str(x2)+" "+str(x3)+" "+str(y)+"\n"

with open("duplicateVarsExample.txt", "w") as f:
    for _ in range(10000):
        f.write(getRow())

# switch back to the code directory
os.chdir("/content/AI-Feynman/Code")
```

Let's look at our data

```
!head -n 10 ../example_data/duplicateVarsExample.txt
```

Let's also plot the data for x01 and x23 against Y

```
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
plt.style.use('seaborn-whitegrid')
import numpy as np

df = pd.read_csv("../example_data/duplicateVarsExample.txt", sep=" ", header=None)
df.plot.scatter(x=0, y=4)
df.plot.scatter(x=2, y=4)
```

Now we run the experiment, and go get more coffee, because this is not going to be fast...

```
from S_run_aifeynman import run_aifeynman

run_aifeynman("/content/AI-Feynman/example_data/", "duplicateVarsExample.txt", 30, "14ops.txt", polyfit_deg=3, NN_epochs=400)
```

Initial models quickly mapped to x0 and x2 (the system realized x1 and x3 are duplicates and so not needed). Later on, the system found 3.000000000000+log(sqrt(exp((x2-x1)))), which is a bit crazy but looks like a plane. We can see on Wolfram Alpha that an equivalent form of this equation is (x2 - x1)/2 + 3.000000000000, which is what we used to generate the dataset! Link: https://www.wolframalpha.com/input/?i=3.000000000000%2Blog%28sqrt%28exp%28%28x2-x1%29%29%29%29

```
!ls -l /content/AI-Feynman/Code/results/
!cat /content/AI-Feynman/Code/results/solution_duplicateVarsExample.txt
```

The solver settled on *log(sqrt(exp(-x1 + x3))) + 3.0*, which we know is correct. Now, that was a bit of a softball problem, as it has an exact solution. Let's now add noise to the dataset and see how the library holds up.

### Let's add a small amount of noise to every variable and see the fit quality

We do the same thing as before, but now we add or subtract noise to x0,x1,x2,x3 after generating y.

```
import os
import random
import numpy as np

os.chdir("/content/AI-Feynman/example_data")

def getY(x01, x23):
    y = -0.5*x01 + 0.5*x23 + 3
    return y

def getRow():
    x = [random.random() for x in range(4)]
    x[1] = x[0]
    x[3] = x[2]
    y = getY(x[1], x[3])
    mu = 0
    sigma = 0.05
    noise = np.random.normal(mu, sigma, 4)
    x = x + noise
    return str(x[0])+" "+str(x[1])+" "+str(x[2])+" "+str(x[3])+" "+str(y)+"\n"

with open("duplicateVarsWithNoise100k.txt", "w") as f:
    for _ in range(100000):
        f.write(getRow())

# switch back to the code directory
os.chdir("/content/AI-Feynman/Code")
```

Let's have a look at the data

```
!head -n 20 ../example_data/duplicateVarsWithNoise100k.txt
```

Now let's plot the data

```
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
plt.style.use('seaborn-whitegrid')
import numpy as np

df = pd.read_csv("../example_data/duplicateVarsWithNoise100k.txt", sep=" ", header=None)
df.plot.scatter(x=0, y=4)
df.plot.scatter(x=1, y=4)
df.plot.scatter(x=2, y=4)
df.plot.scatter(x=3, y=4)

from S_run_aifeynman import run_aifeynman

run_aifeynman("/content/AI-Feynman/example_data/", "duplicateVarsWithNoise100k.txt", 30, "14ops.txt", polyfit_deg=3, NN_epochs=600)

!cat /content/AI-Feynman/Code/results/solution_duplicateVarsWithNoise100k.txt

!cp -r /content/AI-Feynman /content/gdrive/My\ Drive/Lemay.ai_research/

# from S_run_aifeynman import run_aifeynman
# run_aifeynman("/content/gdrive/My Drive/Lemay.ai_research/AI-Feynman/example_data/","duplicateVarsWithNoise.txt",30,"19ops.txt", polyfit_deg=3, NN_epochs=1000)

import os
import random
import numpy as np

os.chdir("/content/gdrive/My Drive/Lemay.ai_research/AI-Feynman/example_data")

def getY(x01, x23):
    y = -0.5*x01 + 0.5*x23 + 3
    return y

def getRow():
    x = [0 for x in range(4)]
    x[1] = random.random()
    x[3] = random.random()
    y = getY(x[1], x[3])
    mu = 0
    sigma = 0.05
    noise = np.random.normal(mu, sigma, 4)
    x = x + noise
    return str(x[1])+" "+str(x[3])+" "+str(y)+"\n"

with open("varsWithNoise.txt", "w") as f:
    for _ in range(100000):
        f.write(getRow())

# switch back to the code directory
os.chdir("/content/gdrive/My Drive/Lemay.ai_research/AI-Feynman/Code")

%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
plt.style.use('seaborn-whitegrid')
import numpy as np

df = pd.read_csv("../example_data/varsWithNoise.txt", sep=" ", header=None)
df.plot.scatter(x=0, y=2)
df.plot.scatter(x=1, y=2)

from S_run_aifeynman import run_aifeynman

run_aifeynman("/content/gdrive/My Drive/Lemay.ai_research/AI-Feynman/example_data/", "varsWithNoise.txt", 30, "14ops.txt", polyfit_deg=3, NN_epochs=1000)
```
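The equivalence claimed above — that the recovered form `log(sqrt(exp(x3 - x1))) + 3.0` is the same plane as the generating equation `-0.5*x01 + 0.5*x23 + 3` — is easy to verify numerically with a short, self-contained check (pure Python; no AI-Feynman needed):

```python
import math
import random

def generating_eq(x1, x3):
    # the equation used to build the dataset
    return -0.5 * x1 + 0.5 * x3 + 3

def recovered_eq(x1, x3):
    # the form the solver reported: log(sqrt(exp(x3 - x1))) + 3
    return math.log(math.sqrt(math.exp(x3 - x1))) + 3.0

random.seed(0)
for _ in range(1000):
    x1, x3 = random.random(), random.random()
    assert abs(generating_eq(x1, x3) - recovered_eq(x1, x3)) < 1e-9
print("identical on 1000 random points")
```

This works because `log(sqrt(exp(a))) = log(exp(a/2)) = a/2`, so the two forms are algebraically identical.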
## AI for Medicine Course 1 Week 1 lecture exercises

# Data Exploration

In the first assignment of this course, you will work with chest x-ray images taken from the public [ChestX-ray8 dataset](https://arxiv.org/abs/1705.02315). In this notebook, you'll get a chance to explore this dataset and familiarize yourself with some of the techniques you'll use in the first graded assignment.

<img src="xray-image.png" alt="U-net Image" width="300" align="middle"/>

The first step before jumping into writing code for any machine learning project is to explore your data. A standard Python package for analyzing and manipulating data is [pandas](https://pandas.pydata.org/docs/#). With the next two code cells, you'll import `pandas` and a package called `numpy` for numerical manipulation, then use `pandas` to read a csv file into a dataframe and print out the first few rows of data.

```
# Import necessary packages
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import os
import seaborn as sns
sns.set()

# Read csv file containing training data
train_df = pd.read_csv("nih/train-small.csv")
# Print first 5 rows
print(f'There are {train_df.shape[0]} rows and {train_df.shape[1]} columns in this data frame')
train_df.head()
```

Have a look at the various columns in this csv file. The file contains the names of chest x-ray images (the "Image" column), and the columns filled with ones and zeros identify which diagnoses were given based on each x-ray image.

### Data types and null values check

Run the next cell to explore the data types present in each column and whether any null values exist in the data.

```
# Look at the data type of each column and whether null values are present
train_df.info()
```

### Unique IDs check

"PatientId" has an identification number for each patient.
One thing you'd like to know about a medical dataset like this is whether you're looking at repeated data for certain patients or whether each image represents a different person.

```
print(f"The total patient ids are {train_df['PatientId'].count()}, from those the unique ids are {train_df['PatientId'].value_counts().shape[0]} ")
```

As you can see, the number of unique patients in the dataset is less than the total number, so there must be some overlap. For patients with multiple records, you'll want to make sure they do not show up in both training and test sets in order to avoid data leakage (covered later in this week's lectures).

### Explore data labels

Run the next two code cells to create a list of the names of each patient condition or disease.

```
columns = train_df.keys()
columns = list(columns)
print(columns)

# Remove unnecessary elements
columns.remove('Image')
columns.remove('PatientId')
# Get the total classes
print(f"There are {len(columns)} columns of labels for these conditions: {columns}")
```

Run the next cell to print out the number of positive labels (1's) for each condition.

```
# Print out the number of positive labels for each class
for column in columns:
    print(f"The class {column} has {train_df[column].sum()} samples")
```

Have a look at the counts for the labels in each class above. Does this look like a balanced dataset?

### Data Visualization

Using the image names listed in the csv file, you can retrieve the image associated with each row of data in your dataframe. Run the cell below to visualize a random selection of images from the dataset.
```
# Extract numpy values from Image column in data frame
images = train_df['Image'].values

# Extract 9 random images from it
random_images = [np.random.choice(images) for i in range(9)]

# Location of the image dir
img_dir = 'nih/images-small/'

print('Display Random Images')

# Adjust the size of your images
plt.figure(figsize=(20,10))

# Iterate and plot random images
for i in range(9):
    plt.subplot(3, 3, i + 1)
    img = plt.imread(os.path.join(img_dir, random_images[i]))
    plt.imshow(img, cmap='gray')
    plt.axis('off')

# Adjust subplot parameters to give specified padding
plt.tight_layout()
```

### Investigate a single image

Run the cell below to look at the first image in the dataset and print out some details of the image contents.

```
# Get the first image that was listed in the train_df dataframe
sample_img = train_df.Image[0]
raw_image = plt.imread(os.path.join(img_dir, sample_img))
plt.imshow(raw_image, cmap='gray')
plt.colorbar()
plt.title('Raw Chest X Ray Image')
print(f"The dimensions of the image are {raw_image.shape[0]} pixels width and {raw_image.shape[1]} pixels height, one single color channel")
print(f"The maximum pixel value is {raw_image.max():.4f} and the minimum is {raw_image.min():.4f}")
print(f"The mean value of the pixels is {raw_image.mean():.4f} and the standard deviation is {raw_image.std():.4f}")
```

### Investigate pixel value distribution

Run the cell below to plot up the distribution of pixel values in the image shown above.

```
# Plot a histogram of the distribution of the pixels
sns.distplot(raw_image.ravel(),
             label=f'Pixel Mean {np.mean(raw_image):.4f} & Standard Deviation {np.std(raw_image):.4f}', kde=False)
plt.legend(loc='upper center')
plt.title('Distribution of Pixel Intensities in the Image')
plt.xlabel('Pixel Intensity')
plt.ylabel('# Pixels in Image')
```

<a name="image-processing"></a>

# Image Preprocessing in Keras

Before training, you'll first modify your images to be better suited for training a convolutional neural network.
For this task you'll use the Keras [ImageDataGenerator](https://keras.io/preprocessing/image/) function to perform data preprocessing and data augmentation. Run the next two cells to import this function and create an image generator for preprocessing.

```
# Import data generator from keras
from keras.preprocessing.image import ImageDataGenerator

# Normalize images
image_generator = ImageDataGenerator(
    samplewise_center=True,              # Set each sample mean to 0.
    samplewise_std_normalization=True    # Divide each input by its standard deviation
)
```

### Standardization

The `image_generator` you created above will act to adjust your image data such that the new mean of the data will be zero, and the standard deviation of the data will be 1. In other words, the generator will replace each pixel value in the image with a new value calculated by subtracting the mean and dividing by the standard deviation.

$$\frac{x_i - \mu}{\sigma}$$

Run the next cell to pre-process your data using the `image_generator`. In this step you will also be reducing the image size down to 320x320 pixels.
```
# Flow from directory with specified batch size and target image size
generator = image_generator.flow_from_dataframe(
    dataframe=train_df,
    directory="nih/images-small/",
    x_col="Image",          # features
    y_col=['Mass'],         # labels
    class_mode="raw",       # 'Mass' column should be in train_df
    batch_size=1,           # images per batch
    shuffle=False,          # shuffle the rows or not
    target_size=(320,320)   # width and height of output image
)
```

Run the next cell to plot up an example of a pre-processed image.

```
# Plot a processed image
sns.set_style("white")
generated_image, label = generator.__getitem__(0)
plt.imshow(generated_image[0], cmap='gray')
plt.colorbar()
plt.title('Pre-processed Chest X Ray Image')
print(f"The dimensions of the image are {generated_image.shape[1]} pixels width and {generated_image.shape[2]} pixels height")
print(f"The maximum pixel value is {generated_image.max():.4f} and the minimum is {generated_image.min():.4f}")
print(f"The mean value of the pixels is {generated_image.mean():.4f} and the standard deviation is {generated_image.std():.4f}")
```

Run the cell below to see a comparison of the distribution of pixel values in the new pre-processed image versus the raw image.
```
# Include a histogram of the distribution of the pixels
sns.set()
plt.figure(figsize=(10, 7))

# Plot histogram for original image
sns.distplot(raw_image.ravel(),
             label=f'Original Image: mean {np.mean(raw_image):.4f} - Standard Deviation {np.std(raw_image):.4f} \n '
                   f'Min pixel value {np.min(raw_image):.4} - Max pixel value {np.max(raw_image):.4}',
             color='blue', kde=False)

# Plot histogram for generated image
sns.distplot(generated_image[0].ravel(),
             label=f'Generated Image: mean {np.mean(generated_image[0]):.4f} - Standard Deviation {np.std(generated_image[0]):.4f} \n'
                   f'Min pixel value {np.min(generated_image[0]):.4} - Max pixel value {np.max(generated_image[0]):.4}',
             color='red', kde=False)

# Place legends
plt.legend()
plt.title('Distribution of Pixel Intensities in the Image')
plt.xlabel('Pixel Intensity')
plt.ylabel('# Pixels')
```

#### That's it for this exercise; you should now be a bit more familiar with the dataset you'll be using in this week's assignment!
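The $(x_i - \mu)/\sigma$ standardization compared above can be reproduced outside Keras with a few lines of NumPy. This is a minimal sketch — the `fake_xray` array below is a synthetic stand-in for a real image, not data from the dataset:

```python
import numpy as np

def samplewise_standardize(img):
    # Center to zero mean and scale to unit standard deviation: (x - mu) / sigma
    return (img - img.mean()) / img.std()

rng = np.random.default_rng(42)
fake_xray = rng.uniform(0, 255, size=(320, 320))  # synthetic stand-in image
out = samplewise_standardize(fake_xray)

# After standardization the sample has (near-)zero mean and unit std
assert abs(out.mean()) < 1e-9
assert abs(out.std() - 1.0) < 1e-9
```

Conceptually, this is what `samplewise_center=True` and `samplewise_std_normalization=True` achieve per image, which is why the generated-image histogram above is centered near zero.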
```
!nvidia-smi
```

# **Intro to Generative Adversarial Networks (GANs)**

Generative adversarial networks (GANs) are algorithmic architectures that use two neural networks, pitting one against the other (thus the “adversarial”), in order to generate new, synthetic instances of data that can pass for real data. They are used widely in image generation, video generation, and voice generation.

GANs were introduced in [a paper by Ian Goodfellow](https://arxiv.org/abs/1406.2661) and other researchers at the University of Montreal, including Yoshua Bengio, in 2014. Referring to GANs, Facebook’s AI research director Yann LeCun called adversarial training “the most interesting idea in the last 10 years in ML.”

## **Some cool demos**:

* Progress over the last several years, from [Ian Goodfellow tweet](https://twitter.com/goodfellow_ian/status/1084973596236144640)

<img src='http://drive.google.com/uc?export=view&id=1PSfze4ZHgAn4BAjLuZhqAZO_HJQ1NEHX' width=1000 height=350/>

A generative adversarial network (GAN) has two parts:

* The **generator** learns to generate plausible data. The generated instances become negative training examples for the discriminator.
* The **discriminator** learns to distinguish the generator's fake data from real data. The discriminator penalizes the generator for producing implausible results.
When training begins, the generator produces obviously fake data, and the discriminator quickly learns to tell that it's fake:

<img src='http://drive.google.com/uc?export=view&id=1Auxzsi3395vL0K80GfYlAEvWufTMTZ59' width=1000 height=350/>

# **<font color='Darkblue'>Import Required Libraries:</font>**

```
from __future__ import print_function
#%matplotlib inline
import random
import torch
from torch import nn
from tqdm.auto import tqdm
from torchvision import transforms
from torchvision import datasets  # Training dataset
from torchvision.utils import make_grid
from torchvision import utils
from torch.utils.data import DataLoader
import matplotlib.pyplot as plt
import numpy as np
import matplotlib.animation as animation
from IPython.display import HTML

# Decide which device we want to run on
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print("My device: => ", device)

# Set random seed for reproducibility
my_seed = 123
random.seed(my_seed)
torch.manual_seed(my_seed);
```

# **Fashion-MNIST Dataset:**

`Fashion-MNIST` is a dataset of [Zalando](https://jobs.zalando.com/en/tech/?gh_src=281f2ef41us)'s article images, consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. Fashion-MNIST is intended to serve as a direct drop-in replacement for the original [MNIST dataset](http://yann.lecun.com/exdb/mnist/) for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits.
<img src='https://raw.githubusercontent.com/zalandoresearch/fashion-mnist/master/doc/img/fashion-mnist-sprite.png' width=1000 height=700/>

```
batch_size = 128

transform = transforms.Compose([transforms.ToTensor()])
data_train = datasets.FashionMNIST('./data', download=True, train=True, transform=transform)
train_loader = DataLoader(data_train, batch_size=batch_size, shuffle=True)

classes = ['T-shirt/top','Trouser','Pullover','Dress','Coat','Sandal','Shirt','Sneaker','Bag','Ankle Boot']

dataiter = iter(train_loader)
images, labels = dataiter.next()

images_arr = []
labels_arr = []
for i in range(0, 30):
    images_arr.append(images[i].unsqueeze(0))
    labels_arr.append(labels[i].item())

fig = plt.figure(figsize=(25, 10))
for i in range(30):
    ax = fig.add_subplot(3, 10, i+1, xticks=[], yticks=[])
    ax.imshow(images_arr[i].resize_(1, 28, 28).numpy().squeeze(), cmap='gray')
    ax.set_title("{}".format(classes[labels_arr[i]]), color=("blue"))
```

# **<font color='darkorange'>Generator Part:</font>**

The generator part of a GAN learns to create fake data by incorporating feedback from the discriminator. It learns to make the discriminator classify its output as real. Generator training requires tighter integration between the generator and the discriminator than discriminator training requires.
The portion of the GAN that trains the generator includes:

* random input
* generator network, which transforms the random input into a data instance
* discriminator network, which classifies the generated data
* discriminator output
* generator loss, which penalizes the generator for failing to fool the discriminator

<img src='http://drive.google.com/uc?export=view&id=1dbk5FmAHE3LHwspYm8qxL-qHBLXBq29i' width=1000 height=350/>

### **Generator Block:**

```
def get_generator_block(input_dim, output_dim):
    seq = nn.Sequential(
        nn.Linear(input_dim, output_dim),
        nn.BatchNorm1d(output_dim),
        nn.LeakyReLU(negative_slope=0.2, inplace=False),
        nn.Dropout(0.3),
    )
    return seq
```

### **Generator Class:**

```
class Generator(nn.Module):
    def __init__(self, z_dim=10, img_dim=28*28, hidden_dim=128):
        super(Generator, self).__init__()
        self.gen = nn.Sequential(
            get_generator_block(z_dim, hidden_dim),
            get_generator_block(hidden_dim, hidden_dim * 2),
            get_generator_block(hidden_dim * 2, hidden_dim * 4),
            get_generator_block(hidden_dim * 4, hidden_dim * 8),
            nn.Linear(hidden_dim * 8, img_dim),
            nn.Sigmoid(),
        )

    def forward(self, noise):
        gen_output = self.gen(noise)
        return gen_output

# Generate Noise:
def get_generator_noise(n_sample, z_dim, device='cpu'):
    my_noise = torch.randn(n_sample, z_dim, device=device)
    return my_noise
```

# **<font color='darkorange'>Discriminator Part:</font>**

The discriminator in a GAN is simply a classifier. It tries to distinguish real data from the data created by the generator. It could use any network architecture appropriate to the type of data it's classifying. The discriminator's training data comes from two sources:

* **Real data** instances, such as real pictures of people. The discriminator uses these instances as positive examples during training.
* **Fake data** instances created by the generator. The discriminator uses these instances as negative examples during training.
<img src='http://drive.google.com/uc?export=view&id=1A3_gYqcPORqXFio1wNHAsc8ZndY3zpIP' width=1000 height=350/>

```
# Discriminator Block
def get_discriminator_block(input_dim, output_dim):
    seq = nn.Sequential(
        nn.Linear(input_dim, output_dim),
        nn.LeakyReLU(negative_slope=0.2, inplace=False),
    )
    return seq
```

<img src='https://miro.medium.com/max/1400/1*siH_yCvYJ9rqWSUYeDBiRA.png' width=800 height=400/>

```
# Discriminator Class:
class Discriminator(nn.Module):
    def __init__(self, img_dim=28*28, hidden_dim=128):
        super(Discriminator, self).__init__()
        self.disc = nn.Sequential(
            get_discriminator_block(img_dim, hidden_dim * 4),
            get_discriminator_block(hidden_dim * 4, hidden_dim * 2),
            get_discriminator_block(hidden_dim * 2, hidden_dim),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, image):
        state = self.disc(image)
        return state
```

# **<font color='deepskyblue'>Training Process:</font>**

Because a GAN contains two separately trained networks, its training algorithm must address two complications:

* GANs must juggle two different kinds of training (generator and discriminator).
* GAN convergence is hard to identify.
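The training setup that follows uses `nn.BCEWithLogitsLoss`, i.e. binary cross-entropy applied to the discriminator's raw logits (note the `Discriminator` above has no final sigmoid). As a minimal pure-Python sketch of what that criterion computes per element — PyTorch's actual implementation uses a numerically stabler log-sum-exp form, but the value is the same:

```python
import math

def sigmoid(z):
    # logistic function mapping a logit to a probability
    return 1.0 / (1.0 + math.exp(-z))

def bce_with_logits(z, y):
    # binary cross-entropy on a raw logit z against a target y in {0, 1}:
    # -[y * log(sigmoid(z)) + (1 - y) * log(1 - sigmoid(z))]
    p = sigmoid(z)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

# A confident, correct "real" prediction costs almost nothing...
assert bce_with_logits(4.0, 1.0) < 0.02
# ...while the same confident logit judged against a "fake" target costs a lot
assert bce_with_logits(4.0, 0.0) > 4.0
```

This is why the loss functions below pass `torch.ones_like(...)` for real images and `torch.zeros_like(...)` for generated ones: the targets select which branch of the cross-entropy penalizes the network.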
### **Set Hyperparameters:**

```
# Set your parameters
criterion = nn.BCEWithLogitsLoss()
num_epochs = 51
z_dim = 64
display_step = 100
lr = 0.0001
size = (1, 28, 28)
device = 'cuda'

# Generator:
generator = Generator(z_dim).to(device)
gen_optimizer = torch.optim.Adam(generator.parameters(), lr=lr)

# Discriminator:
discriminator = Discriminator().to(device)
disc_optimizer = torch.optim.Adam(discriminator.parameters(), lr=lr)

# Discriminator Loss:
def get_discriminator_loss(gen, disc, criterion, real, num_images, z_dim, device):
    noise = get_generator_noise(num_images, z_dim, device=device)
    gen_output = gen(noise)
    disc_out_fake = disc(gen_output.detach())
    disc_loss_fake = criterion(disc_out_fake, torch.zeros_like(disc_out_fake))
    disc_out_real = disc(real)
    disc_loss_real = criterion(disc_out_real, torch.ones_like(disc_out_real))
    disc_loss = (disc_loss_fake + disc_loss_real) / 2
    return disc_loss

# Generator Loss:
def get_generator_loss(gen, disc, criterion, num_images, z_dim, device):
    noise = get_generator_noise(num_images, z_dim, device=device)
    gen_output = gen(noise)
    disc_preds = disc(gen_output)  # gen_output.detach()
    gen_loss = criterion(disc_preds, torch.ones_like(disc_preds))
    return gen_loss

# Show Images Function:
def show_tensor_images(real, fake, num_images=25, size=(1, 28, 28)):
    plt.figure(figsize=(15,15))
    image_unflat_real = real.detach().cpu().view(-1, *size)
    image_grid_real = make_grid(image_unflat_real[:num_images], nrow=5, normalize=True, padding=2)
    plt.subplot(1,2,1)
    plt.axis("off")
    plt.title("Real Images")
    plt.imshow(image_grid_real.permute(1, 2, 0).squeeze())
    image_unflat_fake = fake.detach().cpu().view(-1, *size)
    image_grid_fake = make_grid(image_unflat_fake[:num_images], nrow=5, normalize=True, padding=2)
    plt.subplot(1,2,2)
    plt.axis("off")
    plt.title("Fake Images")
    plt.imshow(image_grid_fake.permute(1, 2, 0).squeeze())
    plt.show()

# Training Loop
img_list = []
G_losses = []
D_losses = []
iters = 0
cur_step = 0
img_show = 3
mean_generator_loss = 0
mean_discriminator_loss = 0

for epoch in range(num_epochs):
    for real, _ in tqdm(train_loader):
        cur_batch_size = len(real)
        real = real.view(cur_batch_size, -1).to(device)

        disc_optimizer.zero_grad()
        disc_loss = get_discriminator_loss(generator, discriminator, criterion, real, cur_batch_size, z_dim, device)
        disc_loss.backward()
        disc_optimizer.step()

        gen_optimizer.zero_grad()
        gen_loss = get_generator_loss(generator, discriminator, criterion, cur_batch_size, z_dim, device)
        gen_loss.backward()
        gen_optimizer.step()

        mean_discriminator_loss += disc_loss.item() / cur_batch_size
        mean_generator_loss += gen_loss.item() / cur_batch_size
        G_losses.append(mean_generator_loss)
        D_losses.append(mean_discriminator_loss)

        if cur_step % display_step == 0 and cur_step >= 0:
            print(f"[Epoch: {epoch}/{num_epochs}] | [Step: {cur_step}/{num_epochs*len(train_loader)}], Generator Loss: {mean_generator_loss}, Discriminator Loss: {mean_discriminator_loss}")
            fake_noise = get_generator_noise(cur_batch_size, z_dim, device=device)
            fake = generator(fake_noise)
            img_list.append(make_grid(fake.detach().cpu().view(-1, *size)[:36], nrow=6, normalize=True, padding=2))
            mean_discriminator_loss = 0
            mean_generator_loss = 0
        cur_step += 1

    if epoch % img_show == 0:
        fake_noise = get_generator_noise(cur_batch_size, z_dim, device=device)
        fake = generator(fake_noise)
        show_tensor_images(real, fake)
```

# **Visualization:**

```
plt.figure(figsize=(20, 7))
plt.title("Generator and Discriminator Loss During Training")
plt.plot(G_losses, label="Generator")
plt.plot(D_losses, label="Discriminator")
plt.xlabel("steps")
plt.ylabel("Loss")
plt.legend()
plt.show()

fig = plt.figure(figsize=(8, 8))
plt.axis("off")
imgs = [[plt.imshow(np.transpose(img, (1,2,0)), animated=True)] for img in img_list]
anim = animation.ArtistAnimation(fig, imgs, interval=100, repeat_delay=1000, blit=True)
HTML(anim.to_jshtml())
```
# Iris Training and Prediction with Sagemaker Scikit-learn

### Modified Version of AWS Example: https://github.com/awslabs/amazon-sagemaker-examples/blob/master/sagemaker-python-sdk/scikit_learn_iris/Scikit-learn%20Estimator%20Example%20With%20Batch%20Transform.ipynb

The following modifications were made:

1. Incorporated scripts for local mode hosting
2. Added Train and Test Channels
3. Visualize results (confusion matrix and reports)
4. Added steps to deploy using model artifacts stored in S3

The following script changes were made:

1. RandomForest Algorithm
2. Refactored script to follow the template provided in the tensorflow example: https://github.com/awslabs/amazon-sagemaker-examples/blob/master/sagemaker-python-sdk/tensorflow_script_mode_training_and_serving/tensorflow_script_mode_training_and_serving.ipynb

This tutorial shows you how to use [Scikit-learn](https://scikit-learn.org/stable/) with Sagemaker by utilizing the pre-built container. Scikit-learn is a popular Python machine learning framework. It includes a number of different algorithms for classification, regression, clustering, dimensionality reduction, and data/feature pre-processing.

The [sagemaker-python-sdk](https://github.com/aws/sagemaker-python-sdk) module makes it easy to take existing scikit-learn code, which we will show by training a model on the IRIS dataset and generating a set of predictions. For more information about the Scikit-learn container, see the [sagemaker-scikit-learn-containers](https://github.com/aws/sagemaker-scikit-learn-container) repository and the [sagemaker-python-sdk](https://github.com/aws/sagemaker-python-sdk) repository.

For more on Scikit-learn, please visit the Scikit-learn website: <http://scikit-learn.org/stable/>.
### Table of contents

* [Upload the data for training](#upload_data)
* [Create a Scikit-learn script to train with](#create_sklearn_script)
* [Create the SageMaker Scikit Estimator](#create_sklearn_estimator)
* [Train the SKLearn Estimator on the Iris data](#train_sklearn)
* [Using the trained model to make inference requests](#inference)
  * [Deploy the model](#deploy)
  * [Choose some data and use it for a prediction](#prediction_request)
  * [Endpoint cleanup](#endpoint_cleanup)
* [Batch Transform](#batch_transform)
  * [Prepare Input Data](#prepare_input_data)
  * [Run Transform Job](#run_transform_job)
  * [Check Output Data](#check_output_data)

First, let's create our Sagemaker session and role, and create an S3 prefix to use for the notebook example.

### Local Mode Execution - requires docker compose configured

### The below setup script is from AWS SageMaker Python SDK Examples: tf-eager-sm-scriptmode.ipynb

```
!/bin/bash ./setup.sh

import os
import sys
import sagemaker
from sagemaker import get_execution_role
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import itertools
from sklearn import preprocessing
from sklearn.metrics import classification_report, confusion_matrix

# SageMaker SKLearn Estimator
from sagemaker.sklearn.estimator import SKLearn

sagemaker_session = sagemaker.Session()
role = get_execution_role()
region = sagemaker_session.boto_session.region_name
```

## Training Data

```
column_list_file = 'iris_train_column_list.txt'
train_file = 'iris_train.csv'
test_file = 'iris_validation.csv'

columns = ''
with open(column_list_file,'r') as f:
    columns = f.read().split(',')

# Specify your bucket name
bucket_name = 'chandra-ml-sagemaker'

training_folder = r'iris/train'
test_folder = r'iris/test'
model_folder = r'iris/model/'

training_data_uri = r's3://' + bucket_name + r'/' + training_folder
testing_data_uri = r's3://' + bucket_name + r'/' + test_folder
model_data_uri = r's3://' + bucket_name + r'/' + model_folder
training_data_uri, testing_data_uri, model_data_uri

sagemaker_session.upload_data(train_file, bucket=bucket_name, key_prefix=training_folder)
sagemaker_session.upload_data(test_file, bucket=bucket_name, key_prefix=test_folder)
```

Once we have the data locally, we can use the tools provided by the SageMaker Python SDK to upload the data to a default bucket.

## Create a Scikit-learn script to train with <a class="anchor" id="create_sklearn_script"></a>

SageMaker can now run a scikit-learn script using the `SKLearn` estimator. When executed on SageMaker, a number of helpful environment variables are available to access properties of the training environment, such as:

* `SM_MODEL_DIR`: A string representing the path to the directory to write model artifacts to. Any artifacts saved in this folder are uploaded to S3 for model hosting after the training job completes.
* `SM_OUTPUT_DIR`: A string representing the filesystem path to write output artifacts to. Output artifacts may include checkpoints, graphs, and other files to save, not including model artifacts. These artifacts are compressed and uploaded to S3 to the same S3 prefix as the model artifacts.

Supposing two input channels, 'train' and 'test', were used in the call to the `SKLearn` estimator's `fit()` method, the following environment variables will be set, following the format `SM_CHANNEL_[channel_name]`:

* `SM_CHANNEL_TRAIN`: A string representing the path to the directory containing data in the 'train' channel
* `SM_CHANNEL_TEST`: Same as above, but for the 'test' channel.

A typical training script loads data from the input channels, configures training with hyperparameters, trains a model, and saves the model to model_dir so that it can be hosted later. Hyperparameters are passed to your script as arguments and can be retrieved with an `argparse.ArgumentParser` instance.
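Putting those conventions together, a minimal argument-parsing skeleton for such a script might look like the sketch below. The argument names and defaults here are illustrative, not necessarily those used by `scikit_learn_iris.py`, and the channel variables follow whatever keys are passed to `fit()` (this notebook uses 'training' and 'testing', which would surface as `SM_CHANNEL_TRAINING` and `SM_CHANNEL_TESTING`):

```python
import argparse
import os

def parse_args(argv=None):
    # Hypothetical skeleton of a SageMaker-style training script entry point.
    parser = argparse.ArgumentParser()

    # Hyperparameters arrive as ordinary command-line arguments
    parser.add_argument('--n_estimators', type=int, default=50)
    parser.add_argument('--max_depth', type=int, default=5)

    # SageMaker paths arrive as SM_* environment variables; fall back to the
    # conventional container paths when the variables are not set
    parser.add_argument('--model-dir',
                        default=os.environ.get('SM_MODEL_DIR', '/opt/ml/model'))
    parser.add_argument('--train',
                        default=os.environ.get('SM_CHANNEL_TRAINING', '/opt/ml/input/data/training'))
    parser.add_argument('--test',
                        default=os.environ.get('SM_CHANNEL_TESTING', '/opt/ml/input/data/testing'))

    return parser.parse_args(argv)

# Simulate SageMaker passing hyperparameters on the command line
args = parse_args(['--n_estimators', '100'])
print(args.n_estimators, args.max_depth, args.model_dir)
```

A real script would follow this with data loading from `args.train`, model fitting, and `joblib.dump(model, os.path.join(args.model_dir, ...))`, all inside a `if __name__ == '__main__':` guard as described below.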
For example, the script that we will run in this notebook is the below:

```
!pygmentize 'scikit_learn_iris.py'
```

Because the Scikit-learn container imports your training script, you should always put your training code in a main guard `(if __name__=='__main__':)` so that the container does not inadvertently run your training code at the wrong point in execution.

For more information about training environment variables, please visit https://github.com/aws/sagemaker-containers.

## Create SageMaker Scikit Estimator <a class="anchor" id="create_sklearn_estimator"></a>

To run our Scikit-learn training script on SageMaker, we construct a `sagemaker.sklearn.estimator.sklearn` estimator, which accepts several constructor arguments:

* __entry_point__: The path to the Python script SageMaker runs for training and prediction.
* __role__: Role ARN
* __train_instance_type__ *(optional)*: The type of SageMaker instances for training. __Note__: Because Scikit-learn does not natively support GPU training, Sagemaker Scikit-learn does not currently support training on GPU instance types.
* __sagemaker_session__ *(optional)*: The session used to train on Sagemaker.
* __hyperparameters__ *(optional)*: A dictionary passed to the train function as hyperparameters.
To see the code for the SKLearn Estimator, see here: https://github.com/aws/sagemaker-python-sdk/tree/master/src/sagemaker/sklearn

```
#instance_type='ml.m5.xlarge'
instance_type='local'

# Reference: http://sagemaker.readthedocs.io/en/latest/estimators.html
# SDK 2.x version does not require train prefix for instance count and type
# Specify framework and python version
estimator = SKLearn(entry_point='scikit_learn_iris.py',
                    framework_version="0.20.0",
                    py_version='py3',
                    instance_type=instance_type,
                    role=role,
                    output_path=model_data_uri,
                    base_job_name='sklearn-iris',
                    hyperparameters={'n_estimators': 50, 'max_depth': 5})
```

## Train SKLearn Estimator on Iris data <a class="anchor" id="train_sklearn"></a>

Training is very simple — just call `fit` on the Estimator! This will start a SageMaker Training job that will download the data for us, invoke our scikit-learn code (in the provided script file), and save any model artifacts that the script creates.

```
estimator.fit({'training':training_data_uri,'testing':testing_data_uri})

estimator.latest_training_job.job_name

estimator.model_data
```

## Using the trained model to make inference requests <a class="anchor" id="inference"></a>

### Deploy the model <a class="anchor" id="deploy"></a>

Deploying the model to SageMaker hosting just requires a `deploy` call on the fitted model. This call takes an instance count and instance type.

```
predictor = estimator.deploy(initial_instance_count=1, instance_type=instance_type)
```

### Choose some data and use it for a prediction <a class="anchor" id="prediction_request"></a>

In order to do some predictions, we'll extract some of the data we used for training and do predictions against it. This is, of course, bad statistical practice, but a good way to see how the mechanism works.
``` df = pd.read_csv(test_file, names=columns) import itertools from sklearn import preprocessing from sklearn.metrics import classification_report, confusion_matrix # Encode class labels to integers labels = [0, 1, 2] classes = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica'] le = preprocessing.LabelEncoder() le.fit(classes) # Encode the true class (assumed to be the first column) so it can be # compared against the predicted class below df['encoded_class'] = le.transform(df.iloc[:, 0]) df.head() X_test = df.iloc[:,1:] print(X_test[:5]) result = predictor.predict(X_test) result df['predicted_class'] = result df.head() ``` <h2>Confusion Matrix</h2> A confusion matrix is a table that summarizes the performance of a classification model.<br><br> ``` # Reference: # https://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html def plot_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues): """ This function prints and plots the confusion matrix. Normalization can be applied by setting `normalize=True`. """ if normalize: cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorbar() tick_marks = np.arange(len(classes)) plt.xticks(tick_marks, classes, rotation=45) plt.yticks(tick_marks, classes) fmt = '.2f' if normalize else 'd' thresh = cm.max() / 2. 
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])): plt.text(j, i, format(cm[i, j], fmt), horizontalalignment="center", color="white" if cm[i, j] > thresh else "black") plt.ylabel('True label') plt.xlabel('Predicted label') plt.tight_layout() # Compute confusion matrix cnf_matrix = confusion_matrix(df['encoded_class'], df['predicted_class'], labels=labels) cnf_matrix # Plot raw-count confusion matrix plt.figure() plot_confusion_matrix(cnf_matrix, classes=classes, title='Confusion matrix - Count') # Plot normalized confusion matrix plt.figure() plot_confusion_matrix(cnf_matrix, classes=classes, title='Confusion matrix - Normalized', normalize=True) print(classification_report( df['encoded_class'], df['predicted_class'], labels=labels, target_names=classes)) ``` ### Endpoint cleanup <a class="anchor" id="endpoint_cleanup"></a> When you're done with the endpoint, you'll want to clean it up. ``` # SDK 2 predictor.delete_endpoint() ``` ## Another way to deploy an endpoint: using trained model artifacts https://sagemaker.readthedocs.io/en/stable/sagemaker.sklearn.html#scikit-learn-predictor https://sagemaker.readthedocs.io/en/stable/using_sklearn.html#working-with-existing-model-data-and-training-jobs ``` model_data = estimator.model_data model_data import sagemaker.sklearn model = sagemaker.sklearn.model.SKLearnModel(model_data=model_data, role=role, entry_point='scikit_learn_iris.py', framework_version='0.20.0', py_version='py3') predictor_2 = model.deploy(initial_instance_count=1, instance_type=instance_type) predictor_2.predict(X_test[:5]) predictor_2.delete_endpoint() ```
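Besides redeploying, the artifact referenced by `estimator.model_data` can also be sanity-checked locally without any endpoint. The sketch below is illustrative, not part of the notebook: it assumes the training script serialized the model with joblib as `model.joblib` inside `model.tar.gz` (a common layout for SageMaker scikit-learn scripts), and it fabricates such an artifact so the example is self-contained.

```python
import os
import tarfile
import tempfile

import joblib
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

workdir = tempfile.mkdtemp()

# Simulate a training job's output artifact so the sketch runs end-to-end:
# train a model with the notebook's hyperparameters and pack it the way the
# (assumed) training script would.
X, y = load_iris(return_X_y=True)
clf = RandomForestClassifier(n_estimators=50, max_depth=5, random_state=0).fit(X, y)
joblib.dump(clf, os.path.join(workdir, 'model.joblib'))
with tarfile.open(os.path.join(workdir, 'model.tar.gz'), 'w:gz') as tar:
    tar.add(os.path.join(workdir, 'model.joblib'), arcname='model.joblib')

# With a real model.tar.gz downloaded from estimator.model_data, the steps
# from here on are identical: unpack the archive and load the model locally.
with tarfile.open(os.path.join(workdir, 'model.tar.gz')) as tar:
    tar.extractall(workdir)
model = joblib.load(os.path.join(workdir, 'model.joblib'))
preds = model.predict(X[:5])
print(preds)
```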
``` %matplotlib inline import pyvista from pyvista import set_plot_theme set_plot_theme('document') pyvista._wrappers['vtkPolyData'] = pyvista.PolyData ``` Load data using a Reader {#reader_example} ======================== To have more control over reading data files, use a class-based reader. This class allows for more fine-grained control over reading datasets from files. See `pyvista.get_reader`{.interpreted-text role="func"} for a list of supported file types. ``` import pyvista from pyvista import examples import numpy as np from tempfile import NamedTemporaryFile ``` An XML PolyData file in `.vtp` format is created. It will be saved in a temporary file for this example. ``` temp_file = NamedTemporaryFile('w', suffix=".vtp") temp_file.name ``` `pyvista.Sphere`{.interpreted-text role="class"} already includes `Normals` point data. Additionally, `height` point data and `id` cell data are added. ``` mesh = pyvista.Sphere() mesh['height'] = mesh.points[:, 1] mesh['id'] = np.arange(mesh.n_cells) mesh.save(temp_file.name) ``` The `pyvista.read`{.interpreted-text role="func"} function reads all the data in the file. This provides a quick and easy one-liner to read data from files. ``` new_mesh = pyvista.read(temp_file.name) print(f"All arrays: {new_mesh.array_names}") ``` Using `pyvista.get_reader`{.interpreted-text role="func"} enables more fine-grained control of reading data files. Reading in a `.vtp` file uses the `pyvista.XMLPolyDataReader`{.interpreted-text role="class"}. ``` reader = pyvista.get_reader(temp_file.name) reader # Alternative method: reader = pyvista.XMLPolyDataReader(temp_file.name) ``` Some reader classes, including this one, offer the ability to inspect the data file before loading all the data. For example, we can access the number and names of point and cell arrays. 
``` print(f"Number of point arrays: {reader.number_point_arrays}") print(f"Available point data: {reader.point_array_names}") print(f"Number of cell arrays: {reader.number_cell_arrays}") print(f"Available cell data: {reader.cell_array_names}") ``` We can select which data to read by selectively disabling or enabling specific arrays or all arrays. Here we disable all the cell arrays and the `Normals` point array to leave only the `height` point array. The data is finally read into a pyvista object that only has the `height` point array. ``` reader.disable_all_cell_arrays() reader.disable_point_array('Normals') print(f"Point array status: {reader.all_point_arrays_status}") print(f"Cell array status: {reader.all_cell_arrays_status}") reader_mesh = reader.read() print(f"Read arrays: {reader_mesh.array_names}") ``` We can reuse the reader object to choose different variables if needed. ``` reader.enable_all_cell_arrays() reader_mesh_2 = reader.read() print(f"New read arrays: {reader_mesh_2.array_names}") ``` Some Readers support setting different time points or iterations. In both cases, this is done using the time point functionality. The NACA dataset has two such points with density. This dataset is in EnSight format, which uses the `pyvista.EnSightReader`{.interpreted-text role="class"} class. ``` filename = examples.download_naca(load=False) reader = pyvista.get_reader(filename) time_values = reader.time_values print(reader) print(f"Available time points: {time_values}") print(f"Available point arrays: {reader.point_array_names}") ``` First both time points are read in, and then the difference in density is calculated and saved on the second mesh. The read method of `pyvista.EnSightReader`{.interpreted-text role="class"} returns a `pyvista.MultiBlock`{.interpreted-text role="class"} instance. In this dataset, there are 3 blocks and the new scalar must be applied on each block. 
``` reader.set_active_time_value(time_values[0]) mesh_0 = reader.read() reader.set_active_time_value(time_values[1]) mesh_1 = reader.read() for block_0, block_1 in zip(mesh_0, mesh_1): block_1['DENS_DIFF'] = block_1['DENS'] - block_0['DENS'] ``` The value of [DENS]{.title-ref} is plotted on the left column for both time points, and the difference on the right. ``` plotter = pyvista.Plotter(shape='2|1') plotter.subplot(0) plotter.add_mesh(mesh_0, scalars='DENS',show_scalar_bar=False) plotter.add_text(f"{time_values[0]}") plotter.subplot(1) plotter.add_mesh(mesh_1, scalars='DENS', show_scalar_bar=False) plotter.add_text(f"{time_values[1]}") # pyvista currently cannot plot the same mesh twice with different scalars plotter.subplot(2) plotter.add_mesh(mesh_1.copy(), scalars='DENS_DIFF', show_scalar_bar=False) plotter.add_text("DENS Difference") plotter.link_views() plotter.camera_position= ((0.5, 0, 8), (0.5, 0, 0), (0, 1, 0)) plotter.show() ``` Reading time points or iterations can also be utilized to make a movie. Compare to `gif_movie_example`{.interpreted-text role="ref"}, but here a set of files are read in through a ParaView Data format file. This file format and reader also return a `pyvista.MultiBlock`{.interpreted-text role="class"} mesh. ``` filename = examples.download_wavy(load=False) reader = pyvista.get_reader(filename) print(reader) ``` For each time point, plot the mesh colored by the height. Put iteration value in top left ``` plotter = pyvista.Plotter(notebook=False, off_screen=True) # Open a gif plotter.open_gif("wave_pvd.gif") for time_value in reader.time_values: reader.set_active_time_value(time_value) mesh = reader.read()[0] # This dataset only has 1 block plotter.add_mesh(mesh, scalars='z', show_scalar_bar=False) plotter.add_text(f"Time: {time_value:.0f}", color="black") plotter.render() plotter.write_frame() plotter.clear() plotter.close() ```
# Quantum State Tomography (Unsupervised Learning) Quantum state tomography (QST) is a machine learning task which aims to reconstruct the full quantum state from measurement results. __Aim__: Given a variational ansatz $\Psi(\lbrace \boldsymbol{\beta} \rbrace)$ and a set of measurement results, we want to find the parameters $\boldsymbol{\beta}$ which best reproduce the probability distribution of the measurements. __Training Data__: A set of single shot measurements in some basis, e.g. $\lbrace(100010, \textrm{XZZZYZ}), (011001, \textrm{ZXXYZZ}), \dots \rbrace$ ``` import netket as nk import numpy as np import matplotlib.pyplot as plt import math import sys ``` While in practice, the measurement results can be experimental data e.g. from a quantum computer/simulator, for our purpose, we shall construct our data set by making single shot measurement on a wavefunction that we have obtained via exact diagonalisation. For this tutorial, we shall focus on the one-dimensional anti-ferromagnetic transverse-field Ising model defined by $$H = \sum_{i} Z_{i}Z_{i+1} + h \sum_{i} X_{i}$$ ``` # Define the Hamiltonian N = 8 g = nk.graph.Hypercube(length=N, n_dim=1, pbc=False) hi = nk.hilbert.Spin(g, s=0.5) ha = nk.operator.Ising(hi, h=1.0) ``` Next, perform exact diagonalisation to obtain the ground state wavefunction. ``` # Obtain the ED wavefunction res = nk.exact.lanczos_ed(ha, first_n=1, compute_eigenvectors=True) psi = res.eigenvectors[0] print("Ground state energy =", res.eigenvalues[0]) ``` Finally, to construct the dataset, we will make single shot measurements in various bases. To obtain a single shot measurement, we need to sample from the wavefunction in the relevant basis. Since the wavefunction we obtained is in the computational basis (i.e. 
$Z$ basis), to obtain a single shot measurement in another basis, one would need to transform the wavefunction as follows (this is similar to how one would perform a measurement in a different basis on a quantum computer): X Basis: $$ |\Psi\rangle \rightarrow I_n \otimes \frac{1}{\sqrt{2}}\begin{pmatrix}1 & 1 \\ 1 & -1\end{pmatrix} \otimes I_m \, |\Psi\rangle$$ Y Basis: $$ |\Psi\rangle \rightarrow I_n \otimes \frac{1}{\sqrt{2}}\begin{pmatrix}1 & -i \\ 1 & i\end{pmatrix} \otimes I_m \, |\Psi\rangle$$ ``` def build_rotation(hi, basis): localop = nk.operator.LocalOperator(hi, 1.0) U_X = 1.0 / (math.sqrt(2)) * np.asarray([[1.0, 1.0], [1.0, -1.0]]) U_Y = 1.0 / (math.sqrt(2)) * np.asarray([[1.0, -1j], [1.0, 1j]]) assert len(basis) == hi.size for j in range(hi.size): if basis[j] == "X": localop *= nk.operator.LocalOperator(hi, U_X, [j]) if basis[j] == "Y": localop *= nk.operator.LocalOperator(hi, U_Y, [j]) return localop n_basis = 2*N n_shots = 1000 rotations = [] training_samples = [] training_bases = [] np.random.seed(1234) for m in range(n_basis): basis = np.random.choice( list("XYZ"), size=N, p=[1.0 / N, 1.0 / N, (N - 2.0) / N] ) # build_rotation returns the identity operator for an all-Z basis, # so the rotation can be built, applied and recorded unconditionally rotation = build_rotation(hi, basis) psi_rotated = rotation.to_sparse().dot(psi) psi_square = np.square(np.absolute(psi_rotated)) rand_n = np.random.choice(hi.n_states, p=psi_square, size=n_shots) for rn in rand_n: training_samples.append(hi.number_to_state(rn)) training_bases += [m] * n_shots rotations.append(rotation) print('Number of bases:', n_basis) print('Number of shots:', n_shots) print('Total size of the dataset:', n_basis*n_shots) print('Some single shot results: (sample, basis)\n', list(zip(training_samples[:3], training_bases[:3]))) ``` The basis rotations are contained in ``rotations`` and the single shot measurements are stored in ``training_samples``. ``training_bases`` is a list of integers which labels each sample in ``training_samples`` according to its basis. 
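As a toy illustration of the $I_n \otimes U \otimes I_m$ rotation above (plain numpy, not NetKet code; the state and qubit index are made up), one can build the full rotation with Kronecker products and draw single-shot outcomes from $|\Psi|^2$:

```python
import numpy as np

n = 3  # three qubits, 2**n basis states
rng = np.random.default_rng(0)

# A random normalised complex wavefunction standing in for the ED state.
psi = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
psi /= np.linalg.norm(psi)

# Hadamard-like rotation into the X basis.
U_X = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def rotate(psi, U, j, n):
    # Apply I_{2^j} (x) U (x) I_{2^(n-j-1)}: rotate qubit j
    # (most-significant-first ordering of the state index).
    full = np.kron(np.kron(np.eye(2**j), U), np.eye(2**(n - j - 1)))
    return full @ psi

psi_x1 = rotate(psi, U_X, 1, n)       # measure qubit 1 in the X basis
probs = np.abs(psi_x1)**2
probs /= probs.sum()                   # guard against rounding error
shots = rng.choice(2**n, p=probs, size=5)  # five single-shot outcomes
print(shots)
```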
Having obtained the dataset, we can proceed to define the variational ansatz one wishes to train. We shall simply use the Restricted Boltzmann Machine (RBM) with real parameters defined as: $$ \tilde\psi (\boldsymbol{\sigma}) = p_{\boldsymbol{\lambda}}(\boldsymbol{\sigma}) e^{i \phi_{\boldsymbol{\mu}}(\boldsymbol{\sigma})} $$ where $\phi_{\boldsymbol{\mu}}(\boldsymbol{\sigma}) = \log p_{\boldsymbol{\mu}}(\boldsymbol{\sigma})$ and $p_{\boldsymbol{\lambda/\mu}}$ are standard RBM real probability distributions. Notice that the amplitude part $p_{\boldsymbol{\lambda}}$ completely defines the measurements in the Z basis and vice versa. ``` # Define the variational wavefunction ansatz ma = nk.machine.RbmSpinPhase(hilbert=hi, alpha=1) ``` With the variational ansatz as well as the dataset, the quantum state tomography can now be performed. Recall that the aim is to reconstruct a wavefunction $|\Psi_{U}\rangle$ (for our case, the ground state of the 1D TFIM) given single shot measurements of the wavefunction in various bases. The single shot measurements are governed by a probability distribution $$P_{b}(\boldsymbol{\sigma}) = | \langle \boldsymbol{\sigma}| \hat{U}_{b} |\Psi_{U}\rangle|^{2}$$ which depends on $|\Psi_{U}\rangle$ and the basis $b$ in which the measurement is performed. $\hat{U}_{b}$ is simply the unitary which rotates the wavefunction into the corresponding basis. Similarly, the variational wavefunction $|\tilde\psi\rangle$ also defines a set of basis dependent probability distributions $\tilde P_{b}(\boldsymbol{\sigma})$. The optimisation procedure is then basically a minimisation task to find the set of parameters $\boldsymbol{\kappa}$ which minimises the total Kullback–Leibler divergence between $\tilde P_{b}$ and $P_{b}$, i.e. 
$$ \Xi(\boldsymbol{\kappa}) = \sum_{b} \mathbb{KL}_{b}(\boldsymbol{\kappa}) $$ where $$\mathbb{KL}_{b}(\boldsymbol{\kappa}) = \sum_{\boldsymbol{\sigma}} P_{b}(\boldsymbol{\sigma}) \log \frac{ P_{b}(\boldsymbol{\sigma})}{\tilde P_{b}(\boldsymbol{\sigma})}.$$ This minimisation can be achieved by gradient descent. Although one does not have access to the underlying probability distributions $P_{b}(\boldsymbol{\sigma})$, the total KL divergence can be estimated by summing over the dataset $\lbrace D_{b} \rbrace$: $$ \Xi(\boldsymbol{\kappa}) \approx -\sum_{b} \sum_{\boldsymbol{\sigma} \in D_{b}} \log \tilde P_{b}(\boldsymbol{\sigma}) + S, $$ where $S$ is the constant entropy of $P_{b}(\boldsymbol{\sigma})$, which can be ignored. The details regarding the computation of the gradients can be found in arXiv:1703.05334. In addition to reconstructing the quantum state, we would also like to investigate how the size of the dataset affects the quality of the reconstruction. To that end, we shall run the optimisation with different dataset sizes. 
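The dataset estimate of $\Xi$ is simply a negative log-likelihood over the samples. A toy numpy sketch (illustrative, not NetKet code; the model distribution and samples are made up) makes the sum concrete:

```python
import numpy as np

# p_model[b, sigma] plays the role of the variational probabilities
# \tilde P_b(sigma): one normalised distribution per measurement basis.
rng = np.random.default_rng(0)
n_states, n_bases = 4, 2
p_model = rng.random((n_bases, n_states))
p_model /= p_model.sum(axis=1, keepdims=True)  # normalise each basis row

# The dataset: (state_index, basis_index) pairs, i.e. single shots
# tagged with the basis they were measured in.
samples = [(0, 0), (2, 0), (1, 1), (1, 1)]

# Up to the constant entropy term S, this sum estimates Xi(kappa).
nll = -sum(np.log(p_model[b, s]) for s, b in samples)
print(nll)
```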
``` # Shuffle our datasets import random temp = list(zip(training_bases, training_samples)) random.shuffle(temp) training_bases, training_samples = zip(*temp) # Sampler sa = nk.sampler.MetropolisLocal(machine=ma) # Optimizer op = nk.optimizer.AdaDelta() dataset_sizes = [2000, 4000, 8000, 16000] # During the optimisation, we would like to keep track # of the fidelities and energies fidelities = {} energies = {} for size in dataset_sizes: # First remember to reinitialise the machine ma.init_random_parameters(seed=1234, sigma=0.01) # Quantum State Tomography object # Note: samples and their basis labels must be sliced together; # rotations keeps one entry per measurement basis qst = nk.unsupervised.Qsr( sampler=sa, optimizer=op, batch_size=300, n_samples=300, rotations=tuple(rotations), samples=training_samples[:size], bases=training_bases[:size], method="Gd" ) qst.add_observable(ha, "Energy") # Run the optimisation using the iterator print("Starting optimisation for dataset size", size) fidelities_temp = [] energies_temp = [] for step in qst.iter(2000, 100): # Compute fidelity with exact state psima = ma.to_array() fidelity = np.abs(np.vdot(psima, psi)) fidelities_temp.append(fidelity) fidelities[size] = fidelities_temp energies[size] = energies_temp for size in fidelities: plt.plot(np.arange(1,21,1),fidelities[size], label='dataset size = '+str(size)) plt.axhline(y=1, linewidth=3, color='k', label='Perfect Fidelity') plt.ylabel('Fidelity') plt.xlabel('Iteration') plt.xticks(range(0,21,4)) plt.legend() plt.show() ``` As expected, the final fidelity increases with the size of the dataset.
# Visualization We used dpi=300 to generate the plots in the paper, but for easier viewing here we use dpi=50. ``` DPI = 50 ``` ## Figure 1(b)-(f): Data Statistics ``` import os import numpy as np import random import pandas as pd from sklearn import metrics from operator import itemgetter import json import seaborn as sns import matplotlib.pyplot as plt fig, ax = plt.subplots(figsize=(2,4),dpi=DPI) x = np.array([1,2]) value1 = [55.3, 43.4] value2 = [56.7, 42.5] bar1 = plt.bar(x-0.1, height = value1, width = 0.2,alpha = 0.8,label = 'Positive') bar2 = plt.bar(x+0.1,value2,width = 0.2,alpha = 0.8,label = 'Negative') plt.tick_params(labelsize=20) plt.ylabel('Percentage %',fontsize = 20) plt.xlabel('Gender',fontsize = 20) plt.xticks([1,2],['Male','Female'],fontsize = 18) ax.spines['bottom'].set_linewidth(2); ax.spines['left'].set_linewidth(2); ax.spines['right'].set_linewidth(0); ax.spines['top'].set_linewidth(0); #plt.legend(fontsize=20) plt.show() plt.close() fig, ax = plt.subplots(figsize=(7,4),dpi=DPI) x = np.array([1,2,3,4,5,6,7]) value1 = [3.8,20.3,29.5,24.7,12.1,5.6,2.3] value2 = [5.4, 23.5, 28.2, 24.3, 11.3, 4.3, 1.9] bar1 = plt.bar(x-0.15, height = value1, width = 0.3,alpha = 0.8,label = 'Positive') bar2 = plt.bar(x+0.15,value2,width = 0.3,alpha = 0.8,label = 'Negative') plt.tick_params(labelsize=20) plt.ylabel('Percentage %',fontsize = 20) plt.xlabel('Age',fontsize = 20) plt.xticks([1,2,3,4,5,6,7],['16-19','20-29','30-39','40-49','50-59','60-69','70-'],fontsize = 18) ax.spines['bottom'].set_linewidth(2); ax.spines['left'].set_linewidth(2); ax.spines['right'].set_linewidth(0); ax.spines['top'].set_linewidth(0); plt.legend(fontsize=20) plt.show() plt.close() fig, ax = plt.subplots(figsize=(2.5,4),dpi=DPI) x = np.array([1,2,3]) value1 = [64.8, 17.5, 16.7] value2 = [55.6, 19.2, 23.8 ] bar1 = plt.bar(x-0.15, height = value1, width = 0.3,alpha = 0.8,label = 'Positive') bar2 = plt.bar(x+0.15,value2,width = 0.3,alpha = 0.8,label = 'Negative') 
plt.tick_params(labelsize=20) plt.ylabel('Percentage %',fontsize = 20) plt.xlabel('Smoking',fontsize = 20) plt.xticks([1,2,3],['Never','Ex','Now'],fontsize = 18) ax.spines['bottom'].set_linewidth(2); ax.spines['left'].set_linewidth(2); ax.spines['right'].set_linewidth(0); ax.spines['top'].set_linewidth(0); #plt.legend(fontsize=20) plt.show() plt.close() fig, ax = plt.subplots(figsize=(2.5,4),dpi=DPI) x = np.array([1,2,3]) value1 = [22.1, 11.1, 4.2] value2 = [43.7, 30.1, 11.1] bar1 = plt.bar(x-0.15, height = value1, width = 0.3,alpha = 0.8,label = 'Positive') bar2 = plt.bar(x+0.15,value2,width = 0.3,alpha = 0.8,label = 'Negative') plt.tick_params(labelsize=20) plt.ylabel('Percentage %',fontsize = 20) plt.xlabel('Country',fontsize = 20) plt.xticks([1,2,3],['US','UK','DE'],fontsize = 18) ax.spines['bottom'].set_linewidth(2); ax.spines['left'].set_linewidth(2); ax.spines['right'].set_linewidth(0); ax.spines['top'].set_linewidth(0); #plt.legend(fontsize=20) plt.show() plt.close() fig, ax = plt.subplots(figsize=(2,4),dpi=DPI) x = np.array([1,2]) value1 = [16,84] value2 = [51,49] bar1 = plt.bar(x-0.1, height = value1, width = 0.2,alpha = 0.8,label = 'Positive') bar2 = plt.bar(x+0.1,value2,width = 0.2,alpha = 0.8,label = 'Negative') plt.tick_params(labelsize=20) plt.ylabel('Percentage %',fontsize = 20) plt.xlabel('Symptom',fontsize = 20) plt.xticks([1,2],['Asym.','Sym.'],fontsize = 18) ax.spines['bottom'].set_linewidth(2); ax.spines['left'].set_linewidth(2); ax.spines['right'].set_linewidth(0); ax.spines['top'].set_linewidth(0); #plt.legend(fontsize=20) plt.show() plt.close() fig, ax = plt.subplots(figsize=(2,4),dpi=DPI) x = np.array([1,2]) value1 = [5.6,94.4] value2 = [0.5,99.3] bar1 = plt.bar(x-0.1, height = value1, width = 0.2,alpha = 0.8,label = 'Positive') bar2 = plt.bar(x+0.1,value2,width = 0.2,alpha = 0.8,label = 'Negative') plt.tick_params(labelsize=20) plt.ylabel('Percentage %',fontsize = 20) plt.xlabel('Hospitalised',fontsize = 20) 
plt.xticks([1,2],['Yes','No'],fontsize = 18) ax.spines['bottom'].set_linewidth(2); ax.spines['left'].set_linewidth(2); ax.spines['right'].set_linewidth(0); ax.spines['top'].set_linewidth(0); #plt.legend(fontsize=20) plt.show() plt.close() ``` ## Figure 2(a), supplementary Figure 1(a) and 2(a): ROC-AUC curves ``` import numpy as np import csv, glob, os import pandas as pd import matplotlib.pyplot as plt from sklearn.svm import SVC from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import classification_report, accuracy_score, confusion_matrix, roc_curve, auc, average_precision_score from sklearn import preprocessing from sklearn.metrics import make_scorer from sklearn.metrics import recall_score,roc_auc_score,f1_score,precision_score from sklearn import metrics from sklearn.metrics import precision_recall_curve def get_metrics(probs,label): predicted = [] for i in range(len(probs)): if probs[i]> 0.5: predicted.append(1) else: predicted.append(0) pre = metrics.precision_score(label, predicted) acc = metrics.accuracy_score(label, predicted) auc = metrics.roc_auc_score(label, probs) precision, recall, _ = metrics.precision_recall_curve(label, probs) rec = metrics.recall_score(label, predicted) TN, FP, FN, TP = metrics.confusion_matrix(label,predicted).ravel() # Sensitivity, hit rate, recall, or true positive rate TPR = TP*1.0/(TP+FN) # Specificity or true negative rate TNR = TN*1.0/(TN+FP) return auc, TPR, TNR, 0 def get_CI(data, AUC, Sen, Spe): AUCs = [] TPRs = [] TNRs = [] for s in range(1000): np.random.seed(s) #Para2 sample = np.random.choice(range(len(data)), len(data), replace=True) samples = [data[i] for i in sample] sample_pro = [x[0] for x in samples] sample_label = [x[1] for x in samples] try: get_metrics(sample_pro,sample_label) except ValueError: np.random.seed(1001) #Para2 sample = np.random.choice(range(len(data)), len(data), replace=True) samples = [data[i] for i in sample] sample_pro = [x[0] for x in samples] sample_label = [x[1] 
for x in samples] else: auc, TPR, TNR, _ = get_metrics(sample_pro,sample_label) AUCs.append(auc) TPRs.append(TPR) TNRs.append(TNR) q_0 = pd.DataFrame(np.array(AUCs)).quantile(0.025)[0] #2.5% percentile q_1 = pd.DataFrame(np.array(AUCs)).quantile(0.975)[0] #97.5% percentile q_2 = pd.DataFrame(np.array(TPRs)).quantile(0.025)[0] #2.5% percentile q_3 = pd.DataFrame(np.array(TPRs)).quantile(0.975)[0] #97.5% percentile q_4 = pd.DataFrame(np.array(TNRs)).quantile(0.025)[0] #2.5% percentile q_5 = pd.DataFrame(np.array(TNRs)).quantile(0.975)[0] #97.5% percentile return('&' + str(AUC.round(2)) + '(' + str(q_0.round(2)) + '-' + str(q_1.round(2)) + ')' + '&' + str(Sen.round(2)) + '(' + str(q_2.round(2)) + '-' + str(q_3.round(2)) + ')' '&' + str(Spe.round(2)) + '(' + str(q_4.round(2)) + '-' + str(q_5.round(2)) + ')' ) mean_fpr = np.linspace(0, 1, 100) fig, ax = plt.subplots(figsize=(8,7),dpi=DPI) ax.plot([0, 1], [0, 1], linestyle='--', lw=2, color='r',label='Chance (AUC=0.5)', alpha=.8) ############################################################################### File = 'output/main_test_B.txt' user = [] probs = [] labels = [] data = [] negative_user = [] with open(File) as f: for line in f: uid, pro, label = line.split() UID, date = uid.split('/') if True: user.append(UID) pro = float(pro) label = float(label) label = 1 if label > 0 else 0 probs.append(pro) data.append([pro,label]) labels.append(label) tprs = [] # # Plot ROC curve for s in range(1000): np.random.seed(s) #Para2 sample = np.random.choice(range(len(data)), len(data), replace=True) samples = [data[i] for i in sample] sample_pro = [x[0] for x in samples] sample_label = [x[1] for x in samples] fpr_rf, tpr_rf, _ = roc_curve(sample_label, sample_pro) interp_tpr= np.interp(mean_fpr, fpr_rf, tpr_rf) interp_tpr[0] = 0.0 tprs.append(interp_tpr) #ax.plot(fpr_rf, tpr_rf, lw=2, alpha=0.3, linestyle=linestyle, label='ROC fold %d (AUC = %0.2f)' % (fold_index, auc[-1])) mean_tpr = np.mean(tprs, axis=0) mean_tpr[-1] = 1.0 
mean_auc = metrics.auc(mean_fpr, mean_tpr) ax.plot(mean_fpr, mean_tpr, color='green',label=r'Breathing (AUC=0.62)',lw=2, alpha=.4) std_tpr = np.std(tprs, axis=0) tprs_upper = np.minimum(mean_tpr + std_tpr, 1) tprs_lower = np.maximum(mean_tpr - std_tpr, 0) ax.fill_between(mean_fpr, tprs_lower, tprs_upper, color='green', alpha=.08 ) #,label=r'95% CIs.') ############################################################################### File = 'output/main_test_C.txt' user = [] probs = [] labels = [] data = [] negative_user = [] with open(File) as f: for line in f: uid, pro, label = line.split() UID, date = uid.split('/') if True: user.append(UID) pro = float(pro) label = float(label) label = 1 if label > 0 else 0 probs.append(pro) data.append([pro,label]) labels.append(label) tprs = [] # # Plot ROC curve for s in range(1000): np.random.seed(s) #Para2 sample = np.random.choice(range(len(data)), len(data), replace=True) samples = [data[i] for i in sample] sample_pro = [x[0] for x in samples] sample_label = [x[1] for x in samples] fpr_rf, tpr_rf, _ = roc_curve(sample_label, sample_pro) interp_tpr= np.interp(mean_fpr, fpr_rf, tpr_rf) interp_tpr[0] = 0.0 tprs.append(interp_tpr) #ax.plot(fpr_rf, tpr_rf, lw=2, alpha=0.3, linestyle=linestyle, label='ROC fold %d (AUC = %0.2f)' % (fold_index, auc[-1])) mean_tpr = np.mean(tprs, axis=0) mean_tpr[-1] = 1.0 mean_auc = metrics.auc(mean_fpr, mean_tpr) ax.plot(mean_fpr, mean_tpr, color='pink',label=r'Cough (AUC=0.66)',lw=2, alpha=.9) std_tpr = np.std(tprs, axis=0) tprs_upper = np.minimum(mean_tpr + std_tpr, 1) tprs_lower = np.maximum(mean_tpr - std_tpr, 0) ax.fill_between(mean_fpr, tprs_lower, tprs_upper, color='pink', alpha=.3 ) #,label=r'95% CIs.') ############################################################################### File = 'output/main_test_V.txt' user = [] probs = [] labels = [] data = [] negative_user = [] with open(File) as f: for line in f: uid, pro, label = line.split() UID, date = uid.split('/') if True: 
user.append(UID) pro = float(pro) label = float(label) label = 1 if label > 0 else 0 probs.append(pro) data.append([pro,label]) labels.append(label) tprs = [] # # Plot ROC curve for s in range(1000): np.random.seed(s) #Para2 sample = np.random.choice(range(len(data)), len(data), replace=True) samples = [data[i] for i in sample] sample_pro = [x[0] for x in samples] sample_label = [x[1] for x in samples] fpr_rf, tpr_rf, _ = roc_curve(sample_label, sample_pro) interp_tpr= np.interp(mean_fpr, fpr_rf, tpr_rf) interp_tpr[0] = 0.0 tprs.append(interp_tpr) #ax.plot(fpr_rf, tpr_rf, lw=2, alpha=0.3, linestyle=linestyle, label='ROC fold %d (AUC = %0.2f)' % (fold_index, auc[-1])) mean_tpr = np.mean(tprs, axis=0) mean_tpr[-1] = 1.0 mean_auc = metrics.auc(mean_fpr, mean_tpr) ax.plot(mean_fpr, mean_tpr, color='olive',label=r'Voice (AUC=0.61)',lw=2, alpha=.5) std_tpr = np.std(tprs, axis=0) tprs_upper = np.minimum(mean_tpr + std_tpr, 1) tprs_lower = np.maximum(mean_tpr - std_tpr, 0) ax.fill_between(mean_fpr, tprs_lower, tprs_upper, color='olive', alpha=.13 ) #,label=r'95% CIs.') ############################################################################### File = 'output/main_task_0.5.txt' user = [] probs = [] labels = [] data = [] negative_user = [] with open(File) as f: for line in f: uid, pro, label = line.split() UID, date = uid.split('/') if True: user.append(UID) pro = float(pro) label = float(label) label = 1 if label > 0 else 0 probs.append(pro) data.append([pro,label]) labels.append(label) tprs = [] # # Plot ROC curve for s in range(1000): np.random.seed(s) #Para2 sample = np.random.choice(range(len(data)), len(data), replace=True) samples = [data[i] for i in sample] sample_pro = [x[0] for x in samples] sample_label = [x[1] for x in samples] fpr_rf, tpr_rf, _ = roc_curve(sample_label, sample_pro) interp_tpr= np.interp(mean_fpr, fpr_rf, tpr_rf) interp_tpr[0] = 0.0 tprs.append(interp_tpr) #ax.plot(fpr_rf, tpr_rf, lw=2, alpha=0.3, linestyle=linestyle, label='ROC fold %d (AUC 
= %0.2f)' % (fold_index, auc[-1])) mean_tpr = np.mean(tprs, axis=0) mean_tpr[-1] = 1.0 mean_auc = metrics.auc(mean_fpr, mean_tpr) ax.plot(mean_fpr, mean_tpr, color='b',label=r'All modalities (AUC=0.71)',lw=2, alpha=.8) std_tpr = np.std(tprs, axis=0) tprs_upper = np.minimum(mean_tpr + std_tpr, 1) tprs_lower = np.maximum(mean_tpr - std_tpr, 0) ax.fill_between(mean_fpr, tprs_lower, tprs_upper, color='b', alpha=.1 ) #,label=r'95% CIs.') # ax.set(xlim=[-0.05, 1.05], ylim=[-0.05, 1.05] ) plt.tick_params(labelsize=20) plt.xlabel('1-Specificity',fontsize = 20) plt.ylabel('Sensitivity',fontsize = 20) ax.legend(loc="lower right",fontsize = 17) #plt.savefig("figures/rocCurves-"+ str(tasks_index[task_id]) +".pdf", format="pdf") ax.spines['bottom'].set_linewidth(2); ax.spines['left'].set_linewidth(2); ax.spines['right'].set_linewidth(0); ax.spines['top'].set_linewidth(0); plt.grid() #plt.savefig("AUC.pdf", format="pdf") plt.show() plt.close() ############################################################################### map_dict = {'Male': 0, 'Female': 1, 'Sex':2, '0-19':3, '20-29':4, '30-39':5, '40-49':6, '50-59':7, '60-69':8, '70-79':9, '80-89':10, '90-':11, 'Age': 12, 'ecig':13, 'ex': 14, 'never':15, 'ltOnce':16, '1to10':17, '21+':18, 'Smoke':19, 'it':20, 'en':21, 'es':22, 'de':23, 'Others':24 } #Uid;Age;Sex;Medhistory;Smoking;Language;Date;Folder Name;Symptoms;Covid-Tested;Hospitalized;Location;Voice filename;Cough filename;Breath filename def get_covid(temp): cot, sym, med, smo = temp[9], temp[8], temp[3], temp[4] #print(cot, sym, med, smo) sym_dict = {'drycough':0.0, 'smelltasteloss':0.0, 'headache':0.0,'sorethroat':0.0, 'muscleache':0.0,'wetcough':0.0,'shortbreath':0.0,'tightness':0.0, 'fever':0.0,'dizziness':0.0,'chills':0.0,'runnyblockednose':0.0, 'None': 0.0} syms = sym.split(',') for s in syms: if s == 'tighness': s = 'tightness' if s == 'drycoough': s = 'drycough' if s == 'runny': s = 'runnyblockednose' if s == 'none' or s == '': s = 'None' if s in sym_dict: 
sym_dict[s] = 1 sym_feature = [sym_dict[s] for s in sorted(sym_dict)] if cot == 'last14' or cot == 'yes' or cot == 'positiveLast14': #or cot == 'positiveOver14' or cot == 'over14' : if sym in ['None','','none']: #'pnts' label = 'covidnosym' else: label = 'covidsym' elif cot == 'negativeNever': if sym in ['None','','none']: label = 'healthnosym' else: label = 'healthsym' else: label = 'negativeLast14_over14' return label,sym_feature def get_demo(temp): dis = [0]*25 uid, age, sex, smo, lan = temp[0], temp[1], temp[2], temp[4], temp[5] #print(uid, age, sex, smo, lan) if age in map_dict: dis[map_dict[age]] = 1.0 else: dis[map_dict['Age']] = 1.0 if sex in map_dict: dis[map_dict[sex]] = 1.0 else: dis[map_dict['Sex']] = 1.0 if smo in map_dict: dis[map_dict[smo]] = 1.0 else: dis[map_dict['Smoke']] = 1.0 lan = 'en' if lan in map_dict: dis[map_dict[lan]] = 1.0 else: dis[map_dict['Others']] = 1.0 return dis demo_dict = {} with open('../COVID19_prediction/data/preprocess/results_all_raw_0426.csv') as f: for index, line in enumerate(f): if index>0: temp = line.strip().split(';') demo_dict[temp[0] + '/' + temp[7]] = temp demo_label = sorted(map_dict.items(),key = lambda x:x[1],reverse = False) mean_fpr = np.linspace(0, 1, 100) fig, ax = plt.subplots(figsize=(8,7),dpi=DPI) ax.plot([0, 1], [0, 1], linestyle='--', lw=2, color='r',label='Chance (AUC=0.5)', alpha=.8) ############################################################################### File = 'output/main_task_0.5.txt' user = [] probs = [] labels = [] data = [] negative_user = [] with open(File) as f: for line in f: uid, pro, label = line.split() UID, date = uid.split('/') temp = demo_dict[uid] covid,sym = get_covid(temp) if True: user.append(UID) pro = float(pro) label = float(label) label = 1 if label > 0 else 0 if covid == 'covidsym' or covid == 'healthsym': probs.append(pro) data.append([pro,label]) labels.append(label) tprs = [] # # Plot ROC curve for s in range(1000): np.random.seed(s) #Para2 sample = 
np.random.choice(range(len(data)), len(data), replace=True) samples = [data[i] for i in sample] sample_pro = [x[0] for x in samples] sample_label = [x[1] for x in samples] fpr_rf, tpr_rf, _ = roc_curve(sample_label, sample_pro) interp_tpr= np.interp(mean_fpr, fpr_rf, tpr_rf) interp_tpr[0] = 0.0 tprs.append(interp_tpr) #ax.plot(fpr_rf, tpr_rf, lw=2, alpha=0.3, linestyle=linestyle, label='ROC fold %d (AUC = %0.2f)' % (fold_index, auc[-1])) mean_tpr = np.mean(tprs, axis=0) mean_tpr[-1] = 1.0 mean_auc = metrics.auc(mean_fpr, mean_tpr) ax.plot(mean_fpr, mean_tpr, color='b',label=r'Symptomatic(AUC=0.66)',lw=2, alpha=.8) std_tpr = np.std(tprs, axis=0) tprs_upper = np.minimum(mean_tpr + std_tpr, 1) tprs_lower = np.maximum(mean_tpr - std_tpr, 0) ax.fill_between(mean_fpr, tprs_lower, tprs_upper, color='b', alpha=.1 ) #,label=r'95% CIs.') # ax.set(xlim=[-0.05, 1.05], ylim=[-0.05, 1.05] ) plt.tick_params(labelsize=20) plt.xlabel('1-Specificity',fontsize = 20) plt.ylabel('Sensitivity',fontsize = 20) ax.legend(loc="lower right",fontsize = 17) #plt.savefig("figures/rocCurves-"+ str(tasks_index[task_id]) +".pdf", format="pdf") ax.spines['bottom'].set_linewidth(2); ax.spines['left'].set_linewidth(2); ax.spines['right'].set_linewidth(0); ax.spines['top'].set_linewidth(0); plt.grid() #plt.savefig("AUC.pdf", format="pdf") plt.show() plt.close() mean_fpr = np.linspace(0, 1, 100) fig, ax = plt.subplots(figsize=(8,7),dpi=DPI) ax.plot([0, 1], [0, 1], linestyle='--', lw=2, color='r',label='Chance (AUC=0.5)', alpha=.8) ############################################################################### File = 'output/main_task_0.5.txt' user = [] probs = [] labels = [] data = [] negative_user = [] with open(File) as f: for line in f: uid, pro, label = line.split() UID, date = uid.split('/') temp = demo_dict[uid] covid,sym = get_covid(temp) if True: user.append(UID) pro = float(pro) label = float(label) label = 1 if label > 0 else 0 if covid == 'covidnosym' or covid == 'healthnosym': 
probs.append(pro) data.append([pro,label]) labels.append(label) tprs = [] # # Plot ROC curve for s in range(1000): np.random.seed(s) #Para2 sample = np.random.choice(range(len(data)), len(data), replace=True) samples = [data[i] for i in sample] sample_pro = [x[0] for x in samples] sample_label = [x[1] for x in samples] fpr_rf, tpr_rf, _ = roc_curve(sample_label, sample_pro) interp_tpr= np.interp(mean_fpr, fpr_rf, tpr_rf) interp_tpr[0] = 0.0 tprs.append(interp_tpr) #ax.plot(fpr_rf, tpr_rf, lw=2, alpha=0.3, linestyle=linestyle, label='ROC fold %d (AUC = %0.2f)' % (fold_index, auc[-1])) mean_tpr = np.mean(tprs, axis=0) mean_tpr[-1] = 1.0 mean_auc = metrics.auc(mean_fpr, mean_tpr) ax.plot(mean_fpr, mean_tpr, color='b',label=r'Asymptomatic(AUC=0.75)',lw=2, alpha=.8) std_tpr = np.std(tprs, axis=0) tprs_upper = np.minimum(mean_tpr + std_tpr, 1) tprs_lower = np.maximum(mean_tpr - std_tpr, 0) ax.fill_between(mean_fpr, tprs_lower, tprs_upper, color='b', alpha=.1 ) #,label=r'95% CIs.') # ax.set(xlim=[-0.05, 1.05], ylim=[-0.05, 1.05] ) plt.tick_params(labelsize=20) plt.xlabel('1-Specificity',fontsize = 20) plt.ylabel('Sensitivity',fontsize = 20) ax.legend(loc="lower right",fontsize = 17) #plt.savefig("figures/rocCurves-"+ str(tasks_index[task_id]) +".pdf", format="pdf") ax.spines['bottom'].set_linewidth(2); ax.spines['left'].set_linewidth(2); ax.spines['right'].set_linewidth(0); ax.spines['top'].set_linewidth(0); plt.grid() #plt.savefig("AUC.pdf", format="pdf") plt.show() plt.close() ``` ## Figure 3(b): boxplot comparison ``` File = 'output/main_test_all.txt' #main results All_negative = [] neg_asthma = [] neg_hbp = [] neg_none = [] neg_now = [] neg_ex = [] neg_never = [] user = [] probs = [] labels = [] data = [] negative_user = [] with open(File) as f: for line in f: uid, pro, label = line.split() UID, date = uid.split('/') temp = demo_dict[uid] demo = get_demo(temp) if 'asthma' in temp: user.append(UID) pro = float(pro) label = float(label) label = 1 if label > 0 else 0 
probs.append(pro) data.append([pro,label]) labels.append(label) if label == 0: negative_user.append(pro) neg_asthma.append(pro) auc, TPR, TNR, _ = get_metrics(probs,labels) All_negative.append(negative_user[:150]) user = [] probs = [] labels = [] data = [] negative_user = [] with open(File) as f: for line in f: uid, pro, label = line.split() UID, date = uid.split('/') temp = demo_dict[uid] demo = get_demo(temp) if 'hbp' in temp: user.append(UID) pro = float(pro) label = float(label) label = 1 if label > 0 else 0 probs.append(pro) data.append([pro,label]) labels.append(label) if label == 0: negative_user.append(pro) neg_hbp.append(pro) auc, TPR, TNR, _ = get_metrics(probs,labels) All_negative.append(negative_user[:150]) user = [] probs = [] labels = [] data = [] negative_user = [] with open(File) as f: for line in f: uid, pro, label = line.split() UID, date = uid.split('/') temp = demo_dict[uid] demo = get_demo(temp) if temp[3] == '' or temp[3] == 'none' or temp[3] == 'None': user.append(UID) pro = float(pro) label = float(label) label = 1 if label > 0 else 0 probs.append(pro) data.append([pro,label]) labels.append(label) if label == 0: negative_user.append(pro) neg_none.append(pro) auc, TPR, TNR, _ = get_metrics(probs,labels) All_negative.append(negative_user[:150]) ############################################################################################################################### user = [] probs = [] labels = [] data = [] negative_user = [] with open(File) as f: for line in f: uid, pro, label = line.split() UID, date = uid.split('/') temp = demo_dict[uid] demo = get_demo(temp) if temp[4] == 'never': user.append(UID) pro = float(pro) label = float(label) label = 1 if label > 0 else 0 probs.append(pro) data.append([pro,label]) labels.append(label) if label == 0: negative_user.append(pro) neg_never.append(pro) auc, TPR, TNR, _ = get_metrics(probs,labels) All_negative.append(negative_user[:150]) user = [] probs = [] labels = [] data = [] negative_user = [] 
with open(File) as f: for line in f: uid, pro, label = line.split() UID, date = uid.split('/') temp = demo_dict[uid] demo = get_demo(temp) if temp[4] == 'ex': user.append(UID) pro = float(pro) label = float(label) label = 1 if label > 0 else 0 probs.append(pro) data.append([pro,label]) labels.append(label) if label == 0: negative_user.append(pro) neg_ex.append(pro) auc, TPR, TNR, _ = get_metrics(probs,labels) All_negative.append(negative_user[:150]) user = [] probs = [] labels = [] data = [] negative_user = [] with open(File) as f: for line in f: uid, pro, label = line.split() UID, date = uid.split('/') temp = demo_dict[uid] demo = get_demo(temp) if temp[4] == 'ltOnce' or temp[4] == '1to10' or temp[4] == '10to20' or temp[4] == ' 21+' : user.append(UID) pro = float(pro) label = float(label) label = 1 if label > 0 else 0 probs.append(pro) data.append([pro,label]) labels.append(label) if label == 0: negative_user.append(pro) neg_now.append(pro) All_negative.append(negative_user[:150]) fig, ax = plt.subplots(figsize=(7,5),dpi=DPI) sns.boxplot(data=All_negative,whis=[0, 100], width=.5, palette="vlag") # Add in points to show each observation sns.stripplot(data=All_negative,size=3, color=".3", linewidth=0) # Tweak the visual presentation ax.xaxis.grid(True) ax.set(ylabel="") sns.despine(trim=True, left=True) plt.tick_params(labelsize=20) plt.ylabel('Predicted Probability',fontsize = 20) plt.xticks([0,1,2,3,4,5],['Asthma','Hpb','Non-','Never-','Ex-','Now-'],fontsize = 18) ax.spines['bottom'].set_linewidth(2); ax.spines['left'].set_linewidth(2); ax.spines['right'].set_linewidth(2); ax.spines['top'].set_linewidth(2); #plt.savefig("pros_negative.pdf", format="pdf") plt.show() plt.close() ################################################### from scipy.stats import kruskal stat, p = kruskal(neg_asthma, neg_hbp, neg_none) print(stat,p) stat, p = kruskal(neg_now, neg_ex, neg_never) print(stat,p) ``` ## Figure 4, Suplementary Figure 3-5: Bias comparison ``` import os import numpy 
as np import random import pandas as pd from sklearn import metrics from operator import itemgetter import json import seaborn as sns import matplotlib.pyplot as plt fig, ax = plt.subplots(figsize=(6,4),dpi=DPI) x = np.array([1,2,3,4]) value1 = [0.65,0.71,0.84,0.63] value2 = [0.69,0.74,0.78,0.71] bar1 = plt.bar(x-0.15, height = value1, width = 0.3,alpha = 0.8,label = 'Sensitivity', color = 'steelblue') bar2 = plt.bar(x+0.15,value2,width = 0.3,alpha = 0.8,label = 'Specificity', color = 'thistle') CIs1 = [[0.58,0.72],[0.63,0.77],[0.75,0.92],[0.53,0.72]] CIs2 = [[0.62,0.76],[0.68,0.81],[0.68,0.87],[0.60,0.81]] for i in range(1,5): plt.plot([i-0.15,i-0.15],CIs1[i-1],'black',linewidth=2) plt.plot([i+0.15,i+0.15],CIs2[i-1],'black',linewidth=2) plt.tick_params(labelsize=18) #plt.ylabel('Percentage %',fontsize = 20) #plt.xlabel('Group',fontsize = 20) plt.xticks([1,2,3,4],['User-splits','Random-splits','Random(Seen)', 'Random(Unseen)'],fontsize = 17, rotation=15) ax.spines['bottom'].set_linewidth(2); ax.spines['left'].set_linewidth(2); ax.spines['right'].set_linewidth(0); ax.spines['top'].set_linewidth(0); plt.ylim([0.4,1]) plt.legend(fontsize=16) plt.show() plt.close() fig, ax = plt.subplots(figsize=(2.8,4),dpi=DPI) x = np.array([1.2,2]) value1 = [0.71,0.23] value2 = [0.65,0.93] bar1 = plt.bar(x-0.1, height = value1, width = 0.2,alpha = 0.8,label = 'Sensitivity', color = 'steelblue') bar2 = plt.bar(x+0.1,value2,width = 0.2,alpha = 0.8,label = 'Specificity', color = 'thistle') CIs1 = [[0.61,0.81],[0.14,0.33]] CIs2 = [[0.55,0.75],[0.90,0.97]] plt.plot([1+0.1,1+0.1],CIs1[0],'black',linewidth=2) plt.plot([1.9,1.9],CIs1[1],'black',linewidth=2) plt.plot([1+0.3,1+0.3],CIs2[0],'black',linewidth=2) plt.plot([2.1,2.1],CIs2[1],'black',linewidth=2) plt.tick_params(labelsize=18) #plt.ylabel('Percentage %',fontsize = 20) #plt.xlabel('Group',fontsize = 20) plt.xticks(x,['Controlled(Female)','Biased(Female)'],fontsize = 17, rotation=15) ax.spines['bottom'].set_linewidth(2); 
ax.spines['left'].set_linewidth(2); ax.spines['right'].set_linewidth(0); ax.spines['top'].set_linewidth(0); plt.ylim([0.1,1]) #plt.legend(fontsize=16) plt.show() plt.close() fig, ax = plt.subplots(figsize=(2.8,4),dpi=DPI) x = np.array([1.2,2]) value1 = [0.88,0.25] value2 = [0.88,0.79] bar1 = plt.bar(x-0.1, height = value1, width = 0.2,alpha = 0.8,label = 'Sensitivity', color = 'steelblue') bar2 = plt.bar(x+0.1,value2,width = 0.2,alpha = 0.8,label = 'Specificity', color = 'thistle') CIs1 = [[0.6,1],[0.,0.60]] CIs2 = [[0.69,1],[0.68,0.89]] plt.plot([1+0.1,1+0.1],CIs1[0],'black',linewidth=2) plt.plot([1.9,1.9],CIs1[1],'black',linewidth=2) plt.plot([1+0.3,1+0.3],CIs2[0],'black',linewidth=2) plt.plot([2.1,2.1],CIs2[1],'black',linewidth=2) plt.tick_params(labelsize=18) #plt.ylabel('Percentage %',fontsize = 20) #plt.xlabel('Group',fontsize = 20) plt.xticks(x,['Controlled(Aged60-)','Biased(Aged60-)'],fontsize = 17, rotation=15) ax.spines['bottom'].set_linewidth(2); ax.spines['left'].set_linewidth(2); ax.spines['right'].set_linewidth(0); ax.spines['top'].set_linewidth(0); plt.ylim([0.0,1]) #plt.legend(fontsize=16) plt.show() plt.close() fig, ax = plt.subplots(figsize=(2.8,4),dpi=DPI) x = np.array([1.2,2]) value1 = [0.57,0.44] value2 = [0.65,0.69] bar1 = plt.bar(x-0.1, height = value1, width = 0.2,alpha = 0.8,label = 'Sensitivity', color = 'steelblue') bar2 = plt.bar(x+0.1,value2,width = 0.2,alpha = 0.8,label = 'Specificity', color = 'thistle') CIs1 = [[0.46,0.68],[0.34,0.55]] CIs2 = [[0.55,0.75],[0.60,0.77]] plt.plot([1+0.1,1+0.1],CIs1[0],'black',linewidth=2) plt.plot([1.9,1.9],CIs1[1],'black',linewidth=2) plt.plot([1+0.3,1+0.3],CIs2[0],'black',linewidth=2) plt.plot([2.1,2.1],CIs2[1],'black',linewidth=2) plt.tick_params(labelsize=18) #plt.ylabel('Percentage %',fontsize = 20) #plt.xlabel('Group',fontsize = 20) plt.xticks(x,['Controlled(Aged16-39)','Biased(Aged16-39)'],fontsize = 17, rotation=15) ax.spines['bottom'].set_linewidth(2); ax.spines['left'].set_linewidth(2); 
ax.spines['right'].set_linewidth(0); ax.spines['top'].set_linewidth(0); plt.ylim([0.0,1]) #plt.legend(fontsize=16) plt.show() plt.close() fig, ax = plt.subplots(figsize=(6,4),dpi=DPI) x = np.array([1,2,3,4]) value1 = [0.88,0.25,0.57,0.44] value2 = [0.88,0.79,0.65,0.69] bar1 = plt.bar(x-0.15, height = value1, width = 0.3,alpha = 0.8,label = 'Sensitivity', color = 'steelblue') bar2 = plt.bar(x+0.15,value2,width = 0.3,alpha = 0.8,label = 'Specificity', color = 'thistle') CIs1 = [[0.6,1],[0.,0.60],[0.46,0.68],[0.34,0.55]] CIs2 = [[0.69,1],[0.68,0.89],[0.55,0.75],[0.60,0.77]] for i in range(1,5): plt.plot([i-0.15,i-0.15],CIs1[i-1],'black',linewidth=2) plt.plot([i+0.15,i+0.15],CIs2[i-1],'black',linewidth=2) plt.tick_params(labelsize=18) #plt.ylabel('Percentage %',fontsize = 20) #plt.xlabel('Group',fontsize = 20) plt.xticks([1,2,3,4],['Controlled(Aged60-)','Biased(Aged60-)','Controlled(Aged16-39)','Biased(Aged16-39)'],fontsize = 17, rotation=15) ax.spines['bottom'].set_linewidth(2); ax.spines['left'].set_linewidth(2); ax.spines['right'].set_linewidth(0); ax.spines['top'].set_linewidth(0); plt.ylim([0.0,1]) #plt.legend(fontsize=16) plt.show() plt.close() fig, ax = plt.subplots(figsize=(5,4),dpi=DPI) x = np.array([1,2,3,4]) value1 = [0.65,0.60,0.25,0.98] value2 = [0.69,0.78,0.81,0.0] bar1 = plt.bar(x-0.15, height = value1, width = 0.3,alpha = 0.8,label = 'Sensitivity', color = 'steelblue') bar2 = plt.bar(x+0.15,value2,width = 0.3,alpha = 0.8,label = 'Specificity', color = 'thistle') CIs1 = [[0.58,0.72],[0.51,0.68],[0.15,0.36],[0.95,1]] CIs2 = [[0.62,0.76],[0.72,0.84],[0.74,0.86],[0.,0.02]] for i in range(1,5): plt.plot([i-0.15,i-0.15],CIs1[i-1],'black',linewidth=2) plt.plot([i+0.15,i+0.15],CIs2[i-1],'black',linewidth=2) plt.tick_params(labelsize=18) #plt.ylabel('Percentage %',fontsize = 20) #plt.xlabel('Group',fontsize = 20) plt.xticks([1,2,3,4],['Controlled','Biased(En+It)','Biased(En)','Biased(It)'],fontsize = 17, rotation=15) ax.spines['bottom'].set_linewidth(2); 
ax.spines['left'].set_linewidth(2); ax.spines['right'].set_linewidth(0); ax.spines['top'].set_linewidth(0); plt.ylim([0.0,1]) #plt.legend(fontsize=16) plt.show() plt.close() fig, ax = plt.subplots(figsize=(5,4),dpi=DPI) x = np.array([1,2,3,4]) value1 = [0.59,0.62,0.55,0.70] value2 = [0.66,0.59,0.60,0.33] bar1 = plt.bar(x-0.15, height = value1, width = 0.3,alpha = 0.8,label = 'Sensitivity', color = 'steelblue') bar2 = plt.bar(x+0.15,value2,width = 0.3,alpha = 0.8,label = 'Specificity', color = 'thistle') CIs1 = [[0.51,0.66],[0.53,0.70],[0.44,0.67],[0.59,0.81]] CIs2 = [[0.58,0.73],[0.51,0.66],[0.52,0.68],[0.0,0.75]] for i in range(1,5): plt.plot([i-0.15,i-0.15],CIs1[i-1],'black',linewidth=2) plt.plot([i+0.15,i+0.15],CIs2[i-1],'black',linewidth=2) plt.tick_params(labelsize=18) #plt.ylabel('Percentage %',fontsize = 20) #plt.xlabel('Group',fontsize = 20) plt.xticks([1,2,3,4],['Controlled','Biased(En+It)','Biased(En)','Biased(It)'],fontsize = 17, rotation=15) ax.spines['bottom'].set_linewidth(2); ax.spines['left'].set_linewidth(2); ax.spines['right'].set_linewidth(0); ax.spines['top'].set_linewidth(0); plt.ylim([0.0,1]) #plt.legend(fontsize=16) plt.show() plt.close() fig, ax = plt.subplots(figsize=(5,4),dpi=DPI) x = np.array([1,2,3,4]) value1 = [0.57,0.59,0.22,1] value2 = [0.60,0.82,0.85,0] bar1 = plt.bar(x-0.15, height = value1, width = 0.3,alpha = 0.8,label = 'Sensitivity', color = 'steelblue') bar2 = plt.bar(x+0.15,value2,width = 0.3,alpha = 0.8,label = 'Specificity', color = 'thistle') CIs1 = [[0.49,0.64],[0.5,0.67],[0.12,0.32],[0.99,1.1]] CIs2 = [[0.52,0.67],[0.76,0.88],[0.7,0.9],[0,0.02]] for i in range(1,5): plt.plot([i-0.15,i-0.15],CIs1[i-1],'black',linewidth=2) plt.plot([i+0.15,i+0.15],CIs2[i-1],'black',linewidth=2) plt.tick_params(labelsize=18) #plt.ylabel('Percentage %',fontsize = 20) #plt.xlabel('Group',fontsize = 20) plt.xticks([1,2,3,4],['Controlled','Biased(En+It)','Biased(En)','Biased(It)'],fontsize = 17, rotation=15) ax.spines['bottom'].set_linewidth(2); 
ax.spines['left'].set_linewidth(2); ax.spines['right'].set_linewidth(0); ax.spines['top'].set_linewidth(0); plt.ylim([0.0,1]) #plt.legend(fontsize=16) plt.show() plt.close() ```
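The ROC plots in this notebook repeat the same bootstrap pattern: resample the (probability, label) pairs with replacement, compute an ROC curve for each resample, interpolate it onto a common false-positive-rate grid, and average the interpolated curves, shading one standard deviation around the mean. A minimal self-contained sketch of that procedure on synthetic scores (the function name and the data here are illustrative, not taken from the study's output files):

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

def bootstrap_mean_roc(scores, labels, n_boot=200, seed=0):
    """Resample (score, label) pairs with replacement, interpolate each
    bootstrap ROC onto a common FPR grid, and average the curves."""
    rng = np.random.default_rng(seed)
    scores, labels = np.asarray(scores), np.asarray(labels)
    mean_fpr = np.linspace(0, 1, 100)
    tprs = []
    for _ in range(n_boot):
        idx = rng.choice(len(scores), len(scores), replace=True)
        if len(np.unique(labels[idx])) < 2:
            continue  # an ROC curve needs both classes in the resample
        fpr, tpr, _ = roc_curve(labels[idx], scores[idx])
        interp_tpr = np.interp(mean_fpr, fpr, tpr)
        interp_tpr[0] = 0.0
        tprs.append(interp_tpr)
    mean_tpr = np.mean(tprs, axis=0)
    mean_tpr[-1] = 1.0
    std_tpr = np.std(tprs, axis=0)
    upper = np.minimum(mean_tpr + std_tpr, 1)
    lower = np.maximum(mean_tpr - std_tpr, 0)
    return mean_fpr, mean_tpr, upper, lower, auc(mean_fpr, mean_tpr)

# Synthetic scores: positives score higher on average (illustrative only)
rng = np.random.default_rng(1)
labels = np.array([0] * 100 + [1] * 100)
scores = np.concatenate([rng.normal(0.4, 0.15, 100), rng.normal(0.6, 0.15, 100)])
mean_fpr, mean_tpr, tpr_upper, tpr_lower, mean_auc = bootstrap_mean_roc(scores, labels)
print(round(mean_auc, 2))
```

The plots above then draw `mean_tpr` against `mean_fpr` and fill between the lower and upper curves as a one-standard-deviation band, which is what `ax.fill_between` does in each cell.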

## Result Visualizations with Comparison Analysis

### CHAPTER 02 - *Model Explainability Methods*

From **Applied Machine Learning Explainability Techniques** by [**Aditya Bhattacharya**](https://www.linkedin.com/in/aditya-bhattacharya-b59155b6/), published by **Packt**

### Objective

In this notebook, we will implement some of the concepts related to the comparison-analysis part of the result-visualization-based explainability methods discussed in Chapter 2 - Model Explainability Methods.

### Installing the modules

Install the following libraries in Google Colab or your local environment, if not already installed.

```
!pip install --upgrade pandas numpy matplotlib seaborn scikit-learn statsmodels
```

### Loading the modules

```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style="whitegrid")

import warnings
warnings.filterwarnings("ignore")
np.random.seed(5)

from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans
from statsmodels.tsa.arima.model import ARIMA
import random
```

### About the data

Kaggle Data Source Link - [Kaggle | Pima Indians Diabetes Database](https://www.kaggle.com/uciml/pima-indians-diabetes-database?select=diabetes.csv)

The Pima Indian Diabetes dataset is used to predict whether or not a diagnosed patient has diabetes (a binary classification problem), based on the diagnostic feature values provided. The dataset used for this analysis is obtained from Kaggle, although it is originally from the National Institute of Diabetes and Digestive and Kidney Diseases. The patients in this dataset are female, at least 21 years old, and of Pima Indian heritage. The datasets used here may be derived and transformed from the original datasets.
The sources of the original datasets are mentioned, and I strongly recommend looking at the original data for more details on the data description and for a more detailed analysis.

Kaggle Data Source Link - [Kaggle | Daily Female Births Dataset](https://www.kaggle.com/dougcresswell/daily-total-female-births-in-california-1959)

This is a time series dataset used for building predictive models. The data records the total number of female births in California, USA in 1959. It consists of a time index and the count of female births, with around 365 records.

### Loading the data

```
data = pd.read_csv('Datasets/diabetes.csv')
data.head()
data.shape
```

### Data Preprocessing

We will perform some preliminary pre-processing and very basic exploration, as our main focus is on the comparison analysis. Since some of these methods are already covered in sufficient detail in the other notebook tutorials provided, I will jump straight to the steps that matter for the comparison analysis.

```
data[(data['BMI'] == 0) & (data['Glucose'] == 0) & (data['BloodPressure'] == 0)]
data[(data['Glucose'] == 0)]
```

From the above observation, it looks like the data has a lot of noise, as there are multiple records where some of the key features are 0. Following human intuition, since blood glucose level is one of the key features for observing diabetes, I will drop all records where the Glucose value is 0.
```
cleaned_data = data[(data['Glucose'] != 0)]
cleaned_data.shape

feature_engg_data = cleaned_data.copy()
outlier_data = cleaned_data.copy()
factor = 3
# Using a factor of 3, following Nelson's rule 1 to remove outliers - https://en.wikipedia.org/wiki/Nelson_rules
# Include only non-categorical columns with suspected outliers
columns_to_include = ['Pregnancies','Glucose','BloodPressure','SkinThickness','Insulin','BMI','DiabetesPedigreeFunction']
for column in columns_to_include:
    upper_lim = feature_engg_data[column].mean() + feature_engg_data[column].std() * factor
    lower_lim = feature_engg_data[column].mean() - feature_engg_data[column].std() * factor
    feature_engg_data = feature_engg_data[(feature_engg_data[column] < upper_lim) & (feature_engg_data[column] > lower_lim)]

outlier_data = pd.concat([outlier_data, feature_engg_data]).drop_duplicates(keep=False)
print(feature_engg_data.shape)
print(outlier_data.shape)
```

In the following section, in order to build the model, we need to normalize the data and split it into train, validation and test sets. We keep the outlier data separate, so that we can later see how our model performs on it.

```
def normalize_data(df):
    val = df.values
    min_max_normalizer = preprocessing.MinMaxScaler()
    norm_val = min_max_normalizer.fit_transform(val)
    df2 = pd.DataFrame(norm_val, columns=df.columns)
    return df2

norm_feature_engg_data = normalize_data(feature_engg_data)
norm_outlier_data = normalize_data(outlier_data)
```

In the previous steps we did the fundamental work to understand and prepare the data for further modeling. Let's split the data, and then apply the comparison analysis for result-visualization-based explainability.
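The outlier-removal step above applies Nelson's rule 1 column by column: keep only rows whose values lie strictly within mean ± 3·std. A minimal sketch of that rule on hypothetical toy values (the helper name and the numbers are illustrative, not from the diabetes data):

```python
import pandas as pd

def three_sigma_filter(df, columns, factor=3):
    """Keep only rows whose values lie strictly within mean +/- factor*std,
    applied column by column, mirroring the loop in the cell above."""
    kept = df.copy()
    for column in columns:
        upper_lim = kept[column].mean() + kept[column].std() * factor
        lower_lim = kept[column].mean() - kept[column].std() * factor
        kept = kept[(kept[column] < upper_lim) & (kept[column] > lower_lim)]
    return kept

# Toy data: twenty plausible readings plus one extreme value
toy = pd.DataFrame({'Glucose': [100] * 20 + [500]})
filtered = three_sigma_filter(toy, ['Glucose'])
print(len(filtered))  # the extreme row is dropped -> 20
```

Note that on very small columns a single extreme value inflates the standard deviation enough to hide itself, so the rule is more reliable on larger samples like the ones filtered above.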
### Splitting the data

```
input_data = norm_feature_engg_data.drop(['Outcome'], axis='columns')
targets = norm_feature_engg_data.filter(['Outcome'], axis='columns')
x, x_test, y, y_test = train_test_split(input_data, targets, test_size=0.1, train_size=0.9, random_state=5)
x_train, x_valid, y_train, y_valid = train_test_split(x, y, test_size=0.22, train_size=0.78, random_state=5)
```

### t-SNE based visualization

Now, to compare the classes and the formation of the clusters, we will perform a t-SNE based visualization and observe the goodness of the clusters. If the clusters are not compact and well separated, it is quite possible that any classification algorithm will work poorly because of how the data is distributed. t-Distributed Stochastic Neighbor Embedding (t-SNE) is a dimensionality reduction method, which is often used with clustering. To find out more about this method, please refer to this link: https://towardsdatascience.com/an-introduction-to-t-sne-with-python-example-5a3a293108d1.

```
def visualize_clusters(x, labels, hue="hls"):
    '''
    Visualization of clusters using t-SNE plots
    '''
    tsne_transformed = TSNE(n_components=2, random_state=0).fit_transform(x)
    df_tsne_subset = pd.DataFrame()
    df_tsne_subset['tsne-one'] = tsne_transformed[:, 0]
    df_tsne_subset['tsne-two'] = tsne_transformed[:, 1]
    df_tsne_subset['y'] = labels
    plt.figure(figsize=(6, 4))
    sns.scatterplot(
        x="tsne-one", y="tsne-two",
        hue="y",
        palette=sns.color_palette(hue, df_tsne_subset['y'].nunique()),
        data=df_tsne_subset,
        legend="full",
        alpha=1.0
    )
    plt.show()

# K-means
model = KMeans(n_clusters=2, init='k-means++', random_state=0)
model.fit(x)
km_labels = model.predict(x)

visualize_clusters(x, km_labels)
```

As we can see, overall the clusters have a compact shape and can be separated. If the clusters were not properly formed, and the t-SNE-transformed data points were sparse and spread out, we could have hypothesized that any classification algorithm might also fail.
But here, apart from a few data points, most of the points are part of the two distinguishable clusters that are formed. A more detailed comparison analysis could be done to identify key information about the points that do not fall into the correct clusters, but we will not cover that in this notebook, to keep things simple!

### Comparison Analysis for time series data

```
plt.rcParams["figure.figsize"] = (15, 5)
series = pd.read_csv('Datasets/daily-total-female-births.csv', header=0, index_col=0)
series.head()
series.plot(color='g')

X = series.values
X = X.astype('float32')
size = len(X) - 1
train, test = X[0:size], X[size:]

# Fit a simple ARIMA time series forecast model
model = ARIMA(train, order=(2, 1, 1))
model_fit = model.fit()

# Forecast
forecast = model_fit.predict(start=366, end=466)
for i in range(len(forecast)):
    forecast[i] = random.random() * 10 + forecast[i]

result = model_fit.get_forecast()
con_interval = result.conf_int(0.05)
# Upper bound and lower bound of the confidence interval
forecast_ub = forecast + 0.5 * con_interval[0][1]
forecast_lb = forecast - 0.5 * con_interval[0][0]

plt.plot(series.index, series['Births'], color='g')
plt.plot(list(range(len(series.index), len(series.index) + 101)), forecast, color='b')
plt.fill_between(list(range(len(series.index), len(series.index) + 101)),
                 forecast_lb, forecast_ub, alpha=0.3, color='pink')
plt.xlabel('Time Period')
plt.xticks([0, 50, 120, 200, 300, 360],
           [series.index[0], series.index[50], series.index[120],
            series.index[200], series.index[300], series.index[360]], rotation=45)
plt.ylabel('Births')
plt.show()
```

From the above plot, we can see the confidence band around the predicted values. Although our model isn't very accurate, our focus here is on understanding the importance of the comparison analysis. The confidence band gives us a clear indication of the range of values that the forecast can take.
It helps set the right expectation for the best-case and worst-case scenarios, which is far more informative than a single point estimate. Even if the actual model prediction is incorrect, a confidence interval showing the possible range of values can prevent any shock for end stakeholders and can eventually make the model more trustworthy.

### Final Thoughts

The methods explored in this notebook are quite simple, and helpful for completely black-box models. I strongly recommend trying out more examples and more problems to understand these approaches much better.

### Reference

1. [Kaggle | Pima Indians Diabetes Database](https://www.kaggle.com/uciml/pima-indians-diabetes-database?select=diabetes.csv)
2. [Kaggle | Daily Female Births Dataset](https://www.kaggle.com/dougcresswell/daily-total-female-births-in-california-1959)
3. An Introduction to t-SNE with Python Example, by Andre Violante - https://towardsdatascience.com/an-introduction-to-t-sne-with-python-example-5a3a293108d1
4. Some of the utility functions and code are taken from the GitHub repository of the author, Aditya Bhattacharya - https://github.com/adib0073
# TSG013 - Show file list in Storage Pool (HDFS) ## Steps ### Parameters ``` path = "/" ``` ### Common functions Define helper functions used in this notebook. ``` # Define `run` function for transient fault handling, suggestions on error, and scrolling updates on Windows import sys import os import re import platform import shlex import shutil import datetime from subprocess import Popen, PIPE from IPython.display import Markdown retry_hints = {} # Output in stderr known to be transient, therefore automatically retry error_hints = {} # Output in stderr where a known SOP/TSG exists which will be HINTed for further help install_hint = {} # The SOP to help install the executable if it cannot be found def run(cmd, return_output=False, no_output=False, retry_count=0, base64_decode=False, return_as_json=False): """Run shell command, stream stdout, print stderr and optionally return output NOTES: 1. Commands that need this kind of ' quoting on Windows e.g.: kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='data-pool')].metadata.name} Need to actually pass in as '"': kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='"'data-pool'"')].metadata.name} The ' quote approach, although correct when pasting into Windows cmd, will hang at the line: `iter(p.stdout.readline, b'')` The shlex.split call does the right thing for each platform, just use the '"' pattern for a ' """ MAX_RETRIES = 5 output = "" retry = False # When running `azdata sql query` on Windows, replace any \n in """ strings, with " ", otherwise we see: # # ('HY090', '[HY090] [Microsoft][ODBC Driver Manager] Invalid string or buffer length (0) (SQLExecDirectW)') # if platform.system() == "Windows" and cmd.startswith("azdata sql query"): cmd = cmd.replace("\n", " ") # shlex.split is required on bash and for Windows paths with spaces # cmd_actual = shlex.split(cmd) # Store this (i.e. kubectl, python etc.) 
to support binary context aware error_hints and retries # user_provided_exe_name = cmd_actual[0].lower() # When running python, use the python in the ADS sandbox ({sys.executable}) # if cmd.startswith("python "): cmd_actual[0] = cmd_actual[0].replace("python", sys.executable) # On Mac, when ADS is not launched from terminal, LC_ALL may not be set, which causes pip installs to fail # with: # # UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 4969: ordinal not in range(128) # # Setting it to a default value of "en_US.UTF-8" enables pip install to complete # if platform.system() == "Darwin" and "LC_ALL" not in os.environ: os.environ["LC_ALL"] = "en_US.UTF-8" # When running `kubectl`, if AZDATA_OPENSHIFT is set, use `oc` # if cmd.startswith("kubectl ") and "AZDATA_OPENSHIFT" in os.environ: cmd_actual[0] = cmd_actual[0].replace("kubectl", "oc") # To aid supportability, determine which binary file will actually be executed on the machine # which_binary = None # Special case for CURL on Windows. The version of CURL in Windows System32 does not work to # get JWT tokens, it returns "(56) Failure when receiving data from the peer". If another instance # of CURL exists on the machine use that one. (Unfortunately the curl.exe in System32 is almost # always the first curl.exe in the path, and it can't be uninstalled from System32, so here we # look for the 2nd installation of CURL in the path) if platform.system() == "Windows" and cmd.startswith("curl "): path = os.getenv('PATH') for p in path.split(os.path.pathsep): p = os.path.join(p, "curl.exe") if os.path.exists(p) and os.access(p, os.X_OK): if p.lower().find("system32") == -1: cmd_actual[0] = p which_binary = p break # Find the path based location (shutil.which) of the executable that will be run (and display it to aid supportability), this # seems to be required for .msi installs of azdata.cmd/az.cmd. 
(otherwise Popen returns FileNotFound) # # NOTE: Bash needs cmd to be the list of the space separated values hence shlex.split. # if which_binary == None: which_binary = shutil.which(cmd_actual[0]) # Display an install HINT, so the user can click on a SOP to install the missing binary # if which_binary == None: print(f"The path used to search for '{cmd_actual[0]}' was:") print(sys.path) if user_provided_exe_name in install_hint and install_hint[user_provided_exe_name] is not None: display(Markdown(f'HINT: Use [{install_hint[user_provided_exe_name][0]}]({install_hint[user_provided_exe_name][1]}) to resolve this issue.')) raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)") else: cmd_actual[0] = which_binary start_time = datetime.datetime.now().replace(microsecond=0) print(f"START: {cmd} @ {start_time} ({datetime.datetime.utcnow().replace(microsecond=0)} UTC)") print(f" using: {which_binary} ({platform.system()} {platform.release()} on {platform.machine()})") print(f" cwd: {os.getcwd()}") # Command-line tools such as CURL and AZDATA HDFS commands output # scrolling progress bars, which causes Jupyter to hang forever, to # workaround this, use no_output=True # # Work around a infinite hang when a notebook generates a non-zero return code, break out, and do not wait # wait = True try: if no_output: p = Popen(cmd_actual) else: p = Popen(cmd_actual, stdout=PIPE, stderr=PIPE, bufsize=1) with p.stdout: for line in iter(p.stdout.readline, b''): line = line.decode() if return_output: output = output + line else: if cmd.startswith("azdata notebook run"): # Hyperlink the .ipynb file regex = re.compile(' "(.*)"\: "(.*)"') match = regex.match(line) if match: if match.group(1).find("HTML") != -1: display(Markdown(f' - "{match.group(1)}": "{match.group(2)}"')) else: display(Markdown(f' - "{match.group(1)}": "[{match.group(2)}]({match.group(2)})"')) wait = False break # otherwise infinite hang, have not worked out why yet. 
else: print(line, end='') if wait: p.wait() except FileNotFoundError as e: if install_hint is not None: display(Markdown(f'HINT: Use {install_hint} to resolve this issue.')) raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)") from e exit_code_workaround = 0 # WORKAROUND: azdata hangs on exception from notebook on p.wait() if not no_output: for line in iter(p.stderr.readline, b''): try: line_decoded = line.decode() except UnicodeDecodeError: # NOTE: Sometimes we get characters back that cannot be decoded(), e.g. # # \xa0 # # For example see this in the response from `az group create`: # # ERROR: Get Token request returned http error: 400 and server # response: {"error":"invalid_grant",# "error_description":"AADSTS700082: # The refresh token has expired due to inactivity.\xa0The token was # issued on 2018-10-25T23:35:11.9832872Z # # which generates the exception: # # UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa0 in position 179: invalid start byte # print("WARNING: Unable to decode stderr line, printing raw bytes:") print(line) line_decoded = "" pass else: # azdata emits a single empty line to stderr when doing an hdfs cp, don't # print this empty "ERR:" as it confuses. 
# if line_decoded == "": continue print(f"STDERR: {line_decoded}", end='') if line_decoded.startswith("An exception has occurred") or line_decoded.startswith("ERROR: An error occurred while executing the following cell"): exit_code_workaround = 1 # inject HINTs to next TSG/SOP based on output in stderr # if user_provided_exe_name in error_hints: for error_hint in error_hints[user_provided_exe_name]: if line_decoded.find(error_hint[0]) != -1: display(Markdown(f'HINT: Use [{error_hint[1]}]({error_hint[2]}) to resolve this issue.')) # Verify if a transient error, if so automatically retry (recursive) # if user_provided_exe_name in retry_hints: for retry_hint in retry_hints[user_provided_exe_name]: if line_decoded.find(retry_hint) != -1: if retry_count < MAX_RETRIES: print(f"RETRY: {retry_count} (due to: {retry_hint})") retry_count = retry_count + 1 output = run(cmd, return_output=return_output, retry_count=retry_count) if return_output: if base64_decode: import base64 return base64.b64decode(output).decode('utf-8') else: return output elapsed = datetime.datetime.now().replace(microsecond=0) - start_time # WORKAROUND: We avoid infinite hang above in the `azdata notebook run` failure case, by inferring success (from stdout output), so # don't wait here, if success known above # if wait: if p.returncode != 0: raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(p.returncode)}.\n') else: if exit_code_workaround !=0 : raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(exit_code_workaround)}.\n') print(f'\nSUCCESS: {elapsed}s elapsed.\n') if return_output: if base64_decode: import base64 return base64.b64decode(output).decode('utf-8') else: return output # Hints for tool retry (on transient fault), known errors and install guide # retry_hints = {'azdata': ['Endpoint sql-server-master does not exist', 'Endpoint livy does not exist', 'Failed to get state for cluster', 
'Endpoint webhdfs does not exist', 'Adaptive Server is unavailable or does not exist', 'Error: Address already in use', 'Login timeout expired (0) (SQLDriverConnect)', 'SSPI Provider: No Kerberos credentials available', ], 'kubectl': ['A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond', ], 'python': [ ], } error_hints = {'azdata': [['Please run \'azdata login\' to first authenticate', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['The token is expired', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Reason: Unauthorized', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Max retries exceeded with url: /api/v1/bdc/endpoints', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Look at the controller logs for more details', 'TSG027 - Observe cluster deployment', '../diagnose/tsg027-observe-bdc-create.ipynb'], ['provided port is already allocated', 'TSG062 - Get tail of all previous container logs for pods in BDC namespace', '../log-files/tsg062-tail-bdc-previous-container-logs.ipynb'], ['Create cluster failed since the existing namespace', 'SOP061 - Delete a big data cluster', '../install/sop061-delete-bdc.ipynb'], ['Failed to complete kube config setup', 'TSG067 - Failed to complete kube config setup', '../repair/tsg067-failed-to-complete-kube-config-setup.ipynb'], ['Data source name not found and no default driver specified', 'SOP069 - Install ODBC for SQL Server', '../install/sop069-install-odbc-driver-for-sql-server.ipynb'], ['Can\'t open lib \'ODBC Driver 17 for SQL Server', 'SOP069 - Install ODBC for SQL Server', '../install/sop069-install-odbc-driver-for-sql-server.ipynb'], ['Control plane upgrade failed. 
Failed to upgrade controller.', 'TSG108 - View the controller upgrade config map', '../diagnose/tsg108-controller-failed-to-upgrade.ipynb'],
['NameError: name \'azdata_login_secret_name\' is not defined', 'SOP013 - Create secret for azdata login (inside cluster)', '../common/sop013-create-secret-for-azdata-login.ipynb'],
['ERROR: No credentials were supplied, or the credentials were unavailable or inaccessible.', 'TSG124 - \'No credentials were supplied\' error from azdata login', '../repair/tsg124-no-credentials-were-supplied.ipynb'],
['Please accept the license terms to use this product through', 'TSG126 - azdata fails with \'accept the license terms to use this product\'', '../repair/tsg126-accept-license-terms.ipynb'],
],
'kubectl': [['no such host', 'TSG010 - Get configuration contexts', '../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb'],
['No connection could be made because the target machine actively refused it', 'TSG056 - Kubectl fails with No connection could be made because the target machine actively refused it', '../repair/tsg056-kubectl-no-connection-could-be-made.ipynb'],
],
'python': [['Library not loaded: /usr/local/opt/unixodbc', 'SOP012 - Install unixodbc for Mac', '../install/sop012-brew-install-odbc-for-sql-server.ipynb'],
['WARNING: You are using pip version', 'SOP040 - Upgrade pip in ADS Python sandbox', '../install/sop040-upgrade-pip.ipynb'],
],
}

install_hint = {'azdata': ['SOP063 - Install azdata CLI (using package manager)', '../install/sop063-packman-install-azdata.ipynb'],
'kubectl': ['SOP036 - Install kubectl command line interface', '../install/sop036-install-kubectl.ipynb'],
}

print('Common functions defined successfully.')
```

### Get the Kubernetes namespace for the big data cluster

Get the namespace of the Big Data Cluster using the kubectl command-line interface.

**NOTE:** If there is more than one Big Data Cluster in the target Kubernetes cluster, then either:

- set \[0\] to the correct value for the big data cluster.
- set the environment variable AZDATA_NAMESPACE, before starting Azure Data Studio.

```
# Place Kubernetes namespace name for BDC into 'namespace' variable

if "AZDATA_NAMESPACE" in os.environ:
    namespace = os.environ["AZDATA_NAMESPACE"]
else:
    try:
        namespace = run(f'kubectl get namespace --selector=MSSQL_CLUSTER -o jsonpath={{.items[0].metadata.name}}', return_output=True)
    except:
        from IPython.display import Markdown
        print(f"ERROR: Unable to find a Kubernetes namespace with label 'MSSQL_CLUSTER'. SQL Server Big Data Cluster Kubernetes namespaces contain the label 'MSSQL_CLUSTER'.")
        display(Markdown(f'HINT: Use [TSG081 - Get namespaces (Kubernetes)](../monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb) to resolve this issue.'))
        display(Markdown(f'HINT: Use [TSG010 - Get configuration contexts](../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb) to resolve this issue.'))
        display(Markdown(f'HINT: Use [SOP011 - Set kubernetes configuration context](../common/sop011-set-kubernetes-context.ipynb) to resolve this issue.'))
        raise

print(f'The SQL Server Big Data Cluster Kubernetes namespace is: {namespace}')
```

### Get the controller username and password

Get the controller username and password from the Kubernetes Secret Store and place them in the required AZDATA_USERNAME and AZDATA_PASSWORD environment variables.

```
# Place controller secret in AZDATA_USERNAME/AZDATA_PASSWORD environment variables

import os, base64

os.environ["AZDATA_USERNAME"] = run(f'kubectl get secret/controller-login-secret -n {namespace} -o jsonpath={{.data.username}}', return_output=True, base64_decode=True)
os.environ["AZDATA_PASSWORD"] = run(f'kubectl get secret/controller-login-secret -n {namespace} -o jsonpath={{.data.password}}', return_output=True, base64_decode=True)

print(f"Controller username '{os.environ['AZDATA_USERNAME']}' and password stored in environment variables")
```

### Use azdata to list files

```
run(f'azdata bdc hdfs ls --path {path}')

print("Notebook execution is complete.")
```
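The two `kubectl get secret` calls above pass `base64_decode=True` because Kubernetes stores secret values base64-encoded. As a standalone sketch of that decode step (the secret value here is hypothetical, not a real controller credential):

```python
import base64

def decode_secret_value(encoded: str) -> str:
    # kubectl's jsonpath output for {.data.username} is base64-encoded;
    # this mirrors the base64_decode=True branch of run() above
    return base64.b64decode(encoded).decode("utf-8")

# Hypothetical secret value, encoded the way Kubernetes stores it
encoded = base64.b64encode(b"controller-admin").decode("ascii")
print(decode_secret_value(encoded))  # controller-admin
```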
# Novel Fraud Analysis

We show that the hybrid model with exploration detects novel frauds better (e.g., trades from a new HS6 code or a new importer ID).

```
import numpy as np
import pandas as pd
import glob
import csv
import traceback
import datetime
import os

pd.options.display.max_columns = 50
```

### Basic statistics and novel fraud statistics

Number of test weeks:
* dfm: 196 weeks
* dfn: 257 weeks
* dft: 257 weeks
* dfs: 48 weeks

```
def firstCheck(df):
    """ Sorting and indexing necessary for data preparation """
    df = df.dropna(subset=["illicit"])
    df = df.sort_values("sgd.date")
    df = df.reset_index(drop=True)
    return df

dfn = firstCheck(pd.read_csv('../data/ndata.csv'))
dft = firstCheck(pd.read_csv('../data/tdata.csv'))
dfm = firstCheck(pd.read_csv('../data/mdata.csv'))
dfs = firstCheck(pd.read_csv('../data/synthetic-imports-declarations.csv'))

for df in [dfm, dfn, dft]:
    print(df['importer.id'].nunique(), df['tariff.code'].nunique(), df['country'].nunique())
```

#### Average illicit rates

```
# Malawi
date_begin = '20130101'
test_length = 7
start_day = datetime.date(int(date_begin[:4]), int(date_begin[4:6]), int(date_begin[6:8]))
period = datetime.timedelta(days=test_length)
end_day = start_day + datetime.timedelta(days=test_length)

old_IID = set()
new_proportions = []
avg_illicit_rates_m = []
num_trades_m = []

for week in range(208):
    weekly_trade = dfm[(dfm['sgd.date'] < end_day.strftime('%y-%m-%d')) & (dfm['sgd.date'] >= start_day.strftime('%y-%m-%d'))]
    start_day = end_day
    end_day = start_day + datetime.timedelta(days=test_length)
    avg_illicit_rates_m.append(np.mean(weekly_trade['illicit']))
    num_trades_m.append(len(weekly_trade))

import matplotlib.pyplot as plt
%matplotlib inline

avg_illicit_rates_m = avg_illicit_rates_m[:-13]

plt.style.use('seaborn-whitegrid')
f = plt.figure()
plt.plot(pd.Series(avg_illicit_rates_m).rolling(4).mean(), color='red')
plt.title('Country M', fontsize=25)
plt.xlabel('Year', fontsize=25)
plt.ylabel('Illicit rate', fontsize=25)
plt.xticks(ticks=[0,51,103,155,207], labels=['2013', 14, 15, 16,17], fontsize=25) f.savefig("illicit_rate_m.pdf", bbox_inches='tight') plt.style.use('seaborn-whitegrid') f = plt.figure() plt.plot(pd.Series(num_trades_m).rolling(4).mean(), color='blue') plt.title('Country M', fontsize=25) plt.xlabel('Year', fontsize=25) plt.ylabel('# of weekly trades', fontsize=25) plt.xticks(ticks=[0,51,103,155,207], labels=['2013', 14, 15, 16,17], fontsize=25) f.savefig("num_weekly_trades_m.pdf", bbox_inches='tight') # Nigeria date_begin = '20130101' test_length = 7 start_day = datetime.date(int(date_begin[:4]), int(date_begin[4:6]), int(date_begin[6:8])) period = datetime.timedelta(days=test_length) end_day = start_day + datetime.timedelta(days=test_length) old_IID = set() new_proportions = [] avg_illicit_rates_n = [] num_trades_n = [] for week in range(260): weekly_trade = dfn[(dfn['sgd.date'] < end_day.strftime('%y-%m-%d')) & (dfn['sgd.date'] >= start_day.strftime('%y-%m-%d'))] start_day = end_day end_day = start_day + datetime.timedelta(days=test_length) avg_illicit_rates_n.append(np.mean(weekly_trade['illicit'])) num_trades_n.append(len(weekly_trade)) plt.style.use('seaborn-whitegrid') f = plt.figure() plt.plot(pd.Series(avg_illicit_rates_n).rolling(4).mean(), color='red') plt.title('Country N', fontsize=25) plt.xlabel('Year', fontsize=25) plt.ylabel('Illicit rate', fontsize=25) plt.xticks(ticks=[0,51,103,155,207,259], labels=['2013', 14, 15, 16, 17, 18]) f.savefig("illicit_rate_n.pdf", bbox_inches='tight') plt.style.use('seaborn-whitegrid') f = plt.figure() plt.plot(pd.Series(num_trades_n).rolling(4).mean(), color='blue') plt.title('Country N', fontsize=25) plt.xlabel('Year', fontsize=25) plt.ylabel('# of weekly trades', fontsize=25) plt.xticks(ticks=[0,51,103,155,207,259], labels=['2013', 14, 15, 16, 17, 18]) f.savefig("num_weekly_trades_n.pdf", bbox_inches='tight') # Tunisia date_begin = '20150101' test_length = 7 start_day = datetime.date(int(date_begin[:4]), 
int(date_begin[4:6]), int(date_begin[6:8])) period = datetime.timedelta(days=test_length) end_day = start_day + datetime.timedelta(days=test_length) old_IID = set() new_proportions = [] avg_illicit_rates_t = [] num_trades_t = [] for week in range(260): weekly_trade = dft[(dft['sgd.date'] < end_day.strftime('%y-%m-%d')) & (dft['sgd.date'] >= start_day.strftime('%y-%m-%d'))] start_day = end_day end_day = start_day + datetime.timedelta(days=test_length) avg_illicit_rates_t.append(np.mean(weekly_trade['illicit'])) num_trades_t.append(len(weekly_trade)) plt.style.use('seaborn-whitegrid') f = plt.figure() plt.plot(pd.Series(avg_illicit_rates_t).rolling(4).mean(), color='red') plt.title('Country T', fontsize=25) plt.xlabel('Year', fontsize=25) plt.ylabel('Illicit rate', fontsize=25) plt.xticks(ticks=[0,51,103,155,207,259], labels=['2015', 16, 17, 18, 19, 20]) f.savefig("illicit_rate_t.pdf", bbox_inches='tight') plt.style.use('seaborn-whitegrid') f = plt.figure() plt.plot(pd.Series(num_trades_t).rolling(4).mean(), color='blue') plt.title('Country T', fontsize=25) plt.xlabel('Year', fontsize=25) plt.ylabel('# of weekly trades', fontsize=25) plt.xticks(ticks=[0,51,103,155,207,259], labels=['2015', 16, 17, 18,19, 20]) f.savefig("num_weekly_trades_t.pdf", bbox_inches='tight') results = glob.glob('../results/performances/fld7-result-*') # quick- or www21- or fld- list1, list2 = zip(*sorted(zip([os.stat(result).st_size for result in results], results))) # Retrieving results num_logs = len([i for i in list1 if i > 1000]) count= 0 summary = [] for i in range(1,num_logs+1): rslt = pd.read_csv(list2[-i]) dic = rslt[['runID','data','sampling','subsamplings','numWeek','current_inspection_rate','test_start','test_end']].iloc[len(rslt)-1].to_dict() run_id = round(dic['runID'], 3) data = dic['data'] subsamplings = dic['subsamplings'].replace('/','+') strategy = dic['sampling'] cir = dic['current_inspection_rate'] summary.append(dic) summary = pd.DataFrame(summary) # Index will be used 
later summary[summary.data == 'real-t'] ### Previous code def performanceOnNovel(exp_a): run_id = round(exp_a['runID'], 3) strategy = exp_a['sampling'] subsamplings = exp_a['subsamplings'].replace('/','+') cir = exp_a['current_inspection_rate'] week = exp_a['numWeek'] measure_start = 0 measure_end = week novelty = {} old_IID = set() for week in range(measure_start,measure_end): filename = glob.glob(f'results/query_indices/{run_id}-{strategy}-{subsamplings}-*-scratch-week-{week}.csv')[0] with open(filename, "r") as f: reader = csv.reader(f, delimiter=",") expid = next(reader)[1] dataset = next(reader)[1] episode = next(reader)[1] start_day = next(reader)[1] end_day = next(reader)[1] start_day = datetime.date(int(start_day[:4]), int(start_day[5:7]), int(start_day[8:10])).strftime('%y-%m-%d') end_day = datetime.date(int(end_day[:4]), int(end_day[5:7]), int(end_day[8:10])).strftime('%y-%m-%d') if week == measure_start: if dataset == 'real-m': df = dfm elif dataset == 'synthetic': df = dfs elif dataset == 'real-n': df = dfn elif dataset == 'real-t': df = dft alldata = df[(df['sgd.date'] < end_day) & (df['sgd.date'] >= start_day)].loc[:, ['illicit', 'revenue', 'importer.id']] alldata = alldata[~alldata['importer.id'].isin(old_IID)] if alldata.empty: continue all_indices = [] all_samps = '' while True: try: indices = next(reader) samp = indices[0] indices = indices[1:] indices = list(map(int, indices)) all_indices.extend(indices) all_samps = all_samps + (samp + '-') except StopIteration: break if week == measure_start: novelty[f'{all_samps}-pre'] = [] novelty[f'{all_samps}-rec'] = [] novelty[f'{all_samps}-rev'] = [] chosen = df.iloc[all_indices].loc[:, ['illicit', 'revenue', 'importer.id']] chosen = chosen[~chosen['importer.id'].isin(old_IID)] # Recall and revenue try: pre = sum(chosen['illicit'])/chosen['illicit'].count() rec = sum(chosen['illicit'])/sum(alldata['illicit']) rev = sum(chosen['revenue'])/sum(alldata['revenue']) except: continue 
        novelty[f'{all_samps}-pre'].append(pre)
        novelty[f'{all_samps}-rec'].append(rec)
        novelty[f'{all_samps}-rev'].append(rev)

        old_IID = old_IID.union(set(alldata['importer.id'].values))
        print(f'# indices = {len(all_indices)}, # old_ID: {len(old_IID)}, # new trades: {len(chosen)}')

    return pd.DataFrame(novelty)

exp1, exp2, exp3, exp4, exp5 = 18, 23, 32, 38, 44

rival1 = performanceOnNovel(summary.loc[exp1])
print('!!!!!!!')
rival2 = performanceOnNovel(summary.loc[exp2])
print('!!!!!!!')
rival3 = performanceOnNovel(summary.loc[exp3])
print('!!!!!!!')
rival4 = performanceOnNovel(summary.loc[exp4])
print('!!!!!!!')
rival5 = performanceOnNovel(summary.loc[exp5])

# Compare DATE performances between experiments
plt.figure()
r1 = rival1['DATE-enhanced_bATE--rev'].rolling(window=14).mean()
r2 = rival2['DATE-random--rev'].rolling(window=14).mean()
r3 = rival3['DATE-badge--rev'].rolling(window=14).mean()
r4 = rival4['DATE-bATE--rev'].rolling(window=14).mean()
r5 = rival5['DATE--rev'].rolling(window=14).mean()

plt.plot(r1.index, r1, label=summary.loc[exp1]['data']+'-'+summary.loc[exp1]['subsamplings'])
plt.plot(r2.index, r2, label=summary.loc[exp2]['data']+'-'+summary.loc[exp2]['subsamplings'])
plt.plot(r3.index, r3, label=summary.loc[exp3]['data']+'-'+summary.loc[exp3]['subsamplings'])  # was exp1: copy-paste bug
plt.plot(r4.index, r4, label=summary.loc[exp4]['data']+'-'+summary.loc[exp4]['subsamplings'])  # was exp2
plt.plot(r5.index, r5, label=summary.loc[exp5]['data']+'-'+summary.loc[exp5]['subsamplings'])  # was exp2

plt.title('Compare performance for novel trade patterns')
plt.legend(loc='lower right')
plt.ylabel('rev')  # was `var`, which is undefined here
plt.xlabel('numWeeks')
plt.show()
plt.close()
```
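The per-week ratios computed in `performanceOnNovel` (precision over inspected trades, recall and revenue share over all novel trades that week) can be sketched in isolation; the toy DataFrames below are hypothetical, not real customs data:

```python
import pandas as pd

def novelty_metrics(chosen: pd.DataFrame, alldata: pd.DataFrame):
    # Same ratios as in performanceOnNovel: precision over inspected
    # trades, recall and revenue-recall over all novel trades that week
    pre = chosen['illicit'].sum() / chosen['illicit'].count()
    rec = chosen['illicit'].sum() / alldata['illicit'].sum()
    rev = chosen['revenue'].sum() / alldata['revenue'].sum()
    return pre, rec, rev

# Four candidate trades this week; the first two were inspected
alldata = pd.DataFrame({'illicit': [1, 0, 1, 0], 'revenue': [10.0, 0.0, 5.0, 0.0]})
chosen = alldata.iloc[[0, 1]]
print(novelty_metrics(chosen, alldata))  # (0.5, 0.5, ~0.667)
```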
# Huggingface Sagemaker-sdk - Spot instances example

### Binary Classification with `Trainer` and `imdb` dataset

1. [Introduction](#Introduction)
2. [Development Environment and Permissions](#Development-Environment-and-Permissions)
    1. [Installation](#Installation)
    2. [Development environment](#Development-environment)
    3. [Permissions](#Permissions)
3. [Processing](#Preprocessing)
    1. [Tokenization](#Tokenization)
    2. [Uploading data to sagemaker_session_bucket](#Uploading-data-to-sagemaker_session_bucket)
4. [Fine-tuning & starting Sagemaker Training Job](#Fine-tuning-\&-starting-Sagemaker-Training-Job)
    1. [Creating an Estimator and start a training job](#Creating-an-Estimator-and-start-a-training-job)
    2. [Estimator Parameters](#Estimator-Parameters)
    3. [Download fine-tuned model from s3](#Download-fine-tuned-model-from-s3)
    4. [Attach to old training job to an estimator](#Attach-to-old-training-job-to-an-estimator)
5. [_Coming soon_: Push model to the Hugging Face hub](#Push-model-to-the-Hugging-Face-hub)

# Introduction

Welcome to our end-to-end binary Text-Classification example. In this demo, we will use the Hugging Face `transformers` and `datasets` libraries together with a custom Amazon sagemaker-sdk extension to fine-tune a pre-trained transformer on binary text classification. In particular, the pre-trained model will be fine-tuned using the `imdb` dataset. To get started, we need to set up the environment with a few prerequisite steps for permissions, configurations, and so on. This demo will also show how you can use spot instances and continue training.

![image.png](attachment:image.png)

_**NOTE: You can run this demo in Sagemaker Studio, your local machine, or Sagemaker Notebook Instances.**_

# Development Environment and Permissions

## Installation

_*Note:* we only install the required libraries from Hugging Face and AWS.
You also need PyTorch or TensorFlow, if you haven't installed it already._

```
!pip install "sagemaker>=2.31.0" "transformers==4.4.2" "datasets[s3]==1.5.0" --upgrade
```

## Development environment

**Upgrade ipywidgets for the `datasets` library and restart the kernel; only needed when preprocessing is done in the notebook.**

```
%%capture
import IPython
!conda install -c conda-forge ipywidgets -y
IPython.Application.instance().kernel.do_shutdown(True) # has to restart kernel so changes are used

import sagemaker.huggingface
```

## Permissions

_If you are going to use Sagemaker in a local environment, you need access to an IAM Role with the required permissions for Sagemaker. You can find more about it [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html)._

```
import sagemaker

sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it does not exist
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
    # set to default bucket if a bucket name is not given
    sagemaker_session_bucket = sess.default_bucket()

role = sagemaker.get_execution_role()
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)

print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")
```

# Preprocessing

We are using the `datasets` library to download and preprocess the `imdb` dataset. After preprocessing, the dataset will be uploaded to our `sagemaker_session_bucket` to be used within our training job. The [imdb](http://ai.stanford.edu/~amaas/data/sentiment/) dataset consists of 25000 training and 25000 testing highly polar movie reviews.
## Tokenization ``` from datasets import load_dataset from transformers import AutoTokenizer # tokenizer used in preprocessing tokenizer_name = 'distilbert-base-uncased' # dataset used dataset_name = 'imdb' # s3 key prefix for the data s3_prefix = 'samples/datasets/imdb' # load dataset dataset = load_dataset(dataset_name) # download tokenizer tokenizer = AutoTokenizer.from_pretrained(tokenizer_name) # tokenizer helper function def tokenize(batch): return tokenizer(batch['text'], padding='max_length', truncation=True) # load dataset train_dataset, test_dataset = load_dataset('imdb', split=['train', 'test']) test_dataset = test_dataset.shuffle().select(range(10000)) # smaller the size for test dataset to 10k # sample for a smaller dataset for training #train_dataset = train_dataset.shuffle().select(range(2000)) # smaller the size for test dataset to 10k #test_dataset = test_dataset.shuffle().select(range(150)) # smaller the size for test dataset to 10k # tokenize dataset train_dataset = train_dataset.map(tokenize, batched=True, batch_size=len(train_dataset)) test_dataset = test_dataset.map(tokenize, batched=True, batch_size=len(test_dataset)) # set format for pytorch train_dataset.rename_column_("label", "labels") train_dataset.set_format('torch', columns=['input_ids', 'attention_mask', 'labels']) test_dataset.rename_column_("label", "labels") test_dataset.set_format('torch', columns=['input_ids', 'attention_mask', 'labels']) ``` ## Uploading data to `sagemaker_session_bucket` After we processed the `datasets` we are going to use the new `FileSystem` [integration](https://huggingface.co/docs/datasets/filesystems.html) to upload our dataset to S3. 
```
import botocore
from datasets.filesystems import S3FileSystem

s3 = S3FileSystem()

# save train_dataset to s3
training_input_path = f's3://{sess.default_bucket()}/{s3_prefix}/train'
train_dataset.save_to_disk(training_input_path, fs=s3)

# save test_dataset to s3
test_input_path = f's3://{sess.default_bucket()}/{s3_prefix}/test'
test_dataset.save_to_disk(test_input_path, fs=s3)

training_input_path = f's3://{sess.default_bucket()}/{s3_prefix}/train'
test_input_path = f's3://{sess.default_bucket()}/{s3_prefix}/test'
```

# Fine-tuning & starting Sagemaker Training Job

In order to create a SageMaker training job, we need a `HuggingFace` Estimator. The Estimator handles end-to-end Amazon SageMaker training and deployment tasks. In an Estimator, we define which fine-tuning script should be used as `entry_point`, which `instance_type` should be used, which `hyperparameters` are passed in, and so on.

```python
huggingface_estimator = HuggingFace(entry_point='train.py',
                                    source_dir='./scripts',
                                    base_job_name='huggingface-sdk-extension',
                                    instance_type='ml.p3.2xlarge',
                                    instance_count=1,
                                    transformers_version='4.4',
                                    pytorch_version='1.6',
                                    py_version='py36',
                                    role=role,
                                    hyperparameters = {'epochs': 1,
                                                       'train_batch_size': 32,
                                                       'model_name':'distilbert-base-uncased'
                                                       })
```

When we create a SageMaker training job, SageMaker takes care of starting and managing all the required EC2 instances for us with the `huggingface` container, uploads the provided fine-tuning script `train.py`, and downloads the data from our `sagemaker_session_bucket` into the container at `/opt/ml/input/data`. Then, it starts the training job by running:

```python
/opt/conda/bin/python train.py --epochs 1 --model_name distilbert-base-uncased --train_batch_size 32
```

The `hyperparameters` you define in the `HuggingFace` estimator are passed in as named arguments.
SageMaker provides useful properties about the training environment through various environment variables, including the following:

* `SM_MODEL_DIR`: A string that represents the path where the training job writes the model artifacts to. After training, artifacts in this directory are uploaded to S3 for model hosting.
* `SM_NUM_GPUS`: An integer representing the number of GPUs available to the host.
* `SM_CHANNEL_XXXX`: A string that represents the path to the directory that contains the input data for the specified channel. For example, if you specify two input channels in the HuggingFace estimator's fit call, named `train` and `test`, the environment variables `SM_CHANNEL_TRAIN` and `SM_CHANNEL_TEST` are set.

To run your training job locally, you can define `instance_type='local'` or `instance_type='local-gpu'` for GPU usage. _Note: this does not work within SageMaker Studio._

```
!pygmentize ./scripts/train.py
```

## Creating an Estimator and start a training job

```
from sagemaker.huggingface import HuggingFace

# hyperparameters, which are passed into the training job
hyperparameters={'epochs': 1,
                 'train_batch_size': 32,
                 'model_name':'distilbert-base-uncased',
                 'output_dir':'/opt/ml/checkpoints'
                 }

# s3 uri where our checkpoints will be uploaded during training
job_name = "using-spot"
checkpoint_s3_uri = f's3://{sess.default_bucket()}/{job_name}/checkpoints'

huggingface_estimator = HuggingFace(entry_point='train.py',
                                    source_dir='./scripts',
                                    instance_type='ml.p3.2xlarge',
                                    instance_count=1,
                                    base_job_name=job_name,
                                    checkpoint_s3_uri=checkpoint_s3_uri,
                                    use_spot_instances=True,
                                    max_wait=3600, # This should be equal to or greater than max_run in seconds
                                    max_run=1000, # expected max run in seconds
                                    role=role,
                                    transformers_version='4.4',
                                    pytorch_version='1.6',
                                    py_version='py36',
                                    hyperparameters = hyperparameters)

# starting the train job with our uploaded datasets as input
huggingface_estimator.fit({'train': training_input_path, 'test': test_input_path})

# Training seconds: 874
# Billable seconds: 262
# Managed Spot Training savings: 70.0%
```

## Estimator Parameters

```
# container image used for training job
print(f"container image used for training job: \n{huggingface_estimator.image_uri}\n")

# s3 uri where the trained model is located
print(f"s3 uri where the trained model is located: \n{huggingface_estimator.model_data}\n")

# latest training job name for this estimator
print(f"latest training job name for this estimator: \n{huggingface_estimator.latest_training_job.name}\n")

# access the logs of the training job
huggingface_estimator.sagemaker_session.logs_for_job(huggingface_estimator.latest_training_job.name)
```

## Attach to old training job to an estimator

In SageMaker, you can attach an old training job to an estimator to continue training, get results, etc.

```
from sagemaker.estimator import Estimator

# job which is going to be attached to the estimator
old_training_job_name=''

# attach old training job
huggingface_estimator_loaded = Estimator.attach(old_training_job_name)

# get model output s3 from training job
huggingface_estimator_loaded.model_data
```
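The named-argument and environment-variable conventions described above can be sketched as a minimal `train.py` argument parser. The defaults and local-testing fallbacks here are illustrative, not the exact script in `./scripts`:

```python
import argparse
import os

def parse_args(argv=None):
    # Hyperparameters from the estimator arrive as named CLI arguments;
    # data/model paths arrive via SageMaker's SM_* environment variables
    parser = argparse.ArgumentParser()
    parser.add_argument("--epochs", type=int, default=1)
    parser.add_argument("--train_batch_size", type=int, default=32)
    parser.add_argument("--model_name", type=str, default="distilbert-base-uncased")
    parser.add_argument("--output_dir", type=str, default="/opt/ml/checkpoints")
    parser.add_argument("--model_dir", type=str,
                        default=os.environ.get("SM_MODEL_DIR", "/opt/ml/model"))
    parser.add_argument("--training_dir", type=str,
                        default=os.environ.get("SM_CHANNEL_TRAIN", "/opt/ml/input/data/train"))
    return parser.parse_args(argv)

args = parse_args(["--epochs", "1", "--train_batch_size", "32"])
print(args.epochs, args.train_batch_size, args.model_name)
```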
# Example: CanvasXpress nonlinearfit Chart No. 1 This example page demonstrates how to, using the Python package, create a chart that matches the CanvasXpress online example located at: https://www.canvasxpress.org/examples/nonlinearfit-1.html This example is generated using the reproducible JSON obtained from the above page and the `canvasxpress.util.generator.generate_canvasxpress_code_from_json_file()` function. Everything required for the chart to render is included in the code below. Simply run the code block. ``` from canvasxpress.canvas import CanvasXpress from canvasxpress.js.collection import CXEvents from canvasxpress.render.jupyter import CXNoteBook cx = CanvasXpress( render_to="nonlinearfit1", data={ "y": { "vars": [ "S1", "S2", "S3", "S4", "S5", "S6", "S7", "S8", "S9", "S10", "S11", "S12" ], "smps": [ "Concentration", "V1" ], "data": [ [ 0.0009, 172 ], [ 0.0018, 177 ], [ 0.0037, 160 ], [ 0.0073, 166 ], [ 0.0146, 211 ], [ 0.0293, 248 ], [ 0.0586, 269 ], [ 0.117, 283 ], [ 0.234, 298 ], [ 0.469, 314 ], [ 0.938, 328 ], [ 1.88, 316 ] ] } }, config={ "decorations": { "nlfit": [ { "param": [ "164", "313", 0.031, -1.5, 1.2e-06, 1.9 ], "type": "cst", "label": "Custom Fit" }, { "type": "reg", "param": [ "164", "313", 0.031, 1.5, 1.2e-06, 1.9 ], "label": "Regular Fit" } ] }, "graphType": "Scatter2D", "setMaxY": 350, "setMinY": 100, "showDecorations": True, "theme": "CanvasXpress", "xAxisTransform": "log10", "xAxisTransformTicks": False, "yAxisExact": True }, width=613, height=613, events=CXEvents(), after_render=[], other_init_params={ "version": 35, "events": False, "info": False, "afterRenderInit": False, "noValidate": True } ) display = CXNoteBook(cx) display.render(output_file="nonlinearfit_1.html") ```
#**SVM**

```
import numpy as np
import matplotlib.pyplot as plt
import random
from numpy import linalg as LA
```

---
**Generating random linearly separable data**
---

```
data = [[np.random.rand(), np.random.rand()] for i in range(10)]

for i, point in enumerate(data):
    x, y = point
    if 0.5*x - y + 0.25 > 0:
        data[i].append(-1)
    else:
        data[i].append(1)
```

---
**Visualizing the above data**
---

```
for x, y, l in data:
    if l == 1:
        clr = 'red'
    else:
        clr = 'blue'
    plt.scatter(x, y, c=clr)

plt.xlim(0,1)
plt.ylim(0,1)
```

---
**Train an SVM classifier using gradient descent and return a weight matrix, which is a numpy array of length (N + 1), where N is the dimension of the training samples. You can refer to Fig. 1 in [this](https://www.cs.huji.ac.il/~shais/papers/ShalevSiSrCo10.pdf) paper for the implementation. You can add arguments to svm_function according to your implementation.**
---

```
def svm_function(X, Y, epochs, l_rate):
    # Pegasos-style stochastic subgradient descent; the original signature
    # took (x, y) but the body relied on the globals X and Y
    w = np.zeros(len(X[0]))
    eta = 1
    for epoch in range(1, epochs):
        eta = 1/(l_rate*epoch)
        for i, x in enumerate(X):
            if (Y[i]*np.dot(X[i], w)) < 1:
                w = w + eta * ((X[i] * Y[i]) - (l_rate*w))
            else:
                w = w - eta * (l_rate*w)
    return w
```

---
**Run SVM Classifier**
---

```
data = np.asarray(data)
X_temp = data[:,:2]
Y = data[:,2]
X = np.ones((X_temp.shape[0], X_temp.shape[1]+1))
X[:,:-1] = X_temp
w = svm_function(X, Y, 10000, 0.01)
print(w)
```

# **Visualize the classifier**

---
Write code to draw the line corresponding to the 'w' vector you got as output from svm_function, and the line from which the actual data was generated (0.5*x - y + 0.25).
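Since the trained weight vector has the form (w1, w2, bias), with the bias folded in through the appended column of ones, classification is just the sign of the augmented dot product. A small hypothetical helper illustrates this; the weights below correspond to the data-generating line, not to an actual svm_function run:

```python
import numpy as np

def svm_predict(w, points):
    # Append the bias column of ones, then classify by sign(w . [x, y, 1])
    P = np.hstack([np.asarray(points, dtype=float), np.ones((len(points), 1))])
    return np.sign(P @ w)

# Weights for the generating boundary 0.5*x - y + 0.25 = 0, oriented so
# that points above the line (labeled +1 in the data cell) map to +1
w = np.array([-0.5, 1.0, -0.25])
print(svm_predict(w, [[0.5, 0.9], [0.5, 0.1]]))  # [ 1. -1.]
```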
---

```
for x, y, l in data:
    if l == 1:
        clr = 'red'
    else:
        clr = 'blue'
    plt.scatter(x, y, c=clr)

x1 = np.linspace(-5, 5, 100)
y1 = (-w[0]*x1 - w[2])/w[1]
plt.plot(x1, y1, c='black', label="SVM boundary")

x2 = np.linspace(-5, 5, 100)
y2 = 0.5*x2 + 0.25
plt.plot(x2, y2, c='green', label="Actual")

plt.xlim(0,1)
plt.ylim(0,1)
```

#**Linearly Non-separable Data**

```
from sklearn.datasets import make_circles
from matplotlib import pyplot
from pandas import DataFrame

# generate 2d classification dataset
X, y = make_circles(n_samples=100, noise=0.05)
# scatter plot, dots colored by class value
df = DataFrame(dict(x=X[:,0], y=X[:,1], label=y))
colors = {0:'red', 1:'blue'}
fig, ax = pyplot.subplots()
grouped = df.groupby('label')
for key, group in grouped:
    group.plot(ax=ax, kind='scatter', x='x', y='y', label=key, color=colors[key])
pyplot.show()

X1 = X[:, 0].reshape((-1, 1))  # reshape(-1,1) converts a 1-dim array into 2-dim
X2 = X[:, 1].reshape((-1, 1))
X3 = (X1**2 + X2**2)
X = np.hstack((X, X3))

# visualizing the data in a higher dimension
fig = plt.figure(figsize=(7,7))
axes = fig.add_subplot(111, projection = '3d')
axes.scatter(X1, X2, X1**2 + X2**2, c = y, depthshade = True)
plt.show()
```

---
**Train an SVM classifier on the linearly non-separable data using appropriate features crafted from the input data. For linearly non-separable data, you need to transform the data into a space where it is linearly separable. These features can be exponential, polynomial, trigonometric, or any other function of the actual input features. For example, if your input data is (x1, x2) you can have hand-crafted features such as (sin(x1), cos(x1), cos(x2), x1-x2). Here you need to think about which hand-crafted features are best suited for the data given to you. Write a function to convert the input features to hand-crafted features. Use these features to train an SVM using svm_function.
Note that, if you choose to have L hand-crafted features, the SVM will return an (L+1)-dimensional 'w'.**
---

```
def Gaussian_kernel3d(x, mean1, mean2, mean3, var1, var2, var3):
    num_features = 3
    size = (len(x), num_features)
    F = np.zeros(size)
    for i in range(0, len(x)):
        F[i,0] = np.exp(-(((x[i,0]-mean1[0])**2) + ((x[i,1]-mean1[1])**2))/(var1))
        F[i,1] = np.exp(-(((x[i,0]-mean2[0])**2) + ((x[i,1]-mean2[1])**2))/(var2))
        F[i,2] = np.exp(-(((x[i,0]-mean3[0])**2) + ((x[i,1]-mean3[1])**2))/(var3))
    return F

###################################
X = X
Y = 2*y - 1  # map the {0, 1} labels from make_circles to {-1, +1} for the hinge loss
F = Gaussian_kernel3d(X, np.array([0.0,0.0]), np.array([0.0,0.0]), np.array([0.75,0.75]), 0.7, 1.0, 0.5)

# Run SVM Classifier
w = svm_function(F, Y, 10000, 0.01)
###################################
```

---
**Visualize the data points in the new feature space, "if possible", to see whether they got separated or not.**
---

```
# This import registers the 3D projection
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 unused import

# Fixing random state for reproducibility
np.random.seed(19680801)

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')

Y = Y.reshape((len(F), 1))
F = np.append(F, Y, axis=1)

# Visualizing the above data
for x, y, z, l in F:
    if l == 1:
        clr = 'red'
    else:
        clr = 'blue'
    ax.scatter(x, y, z, c=clr)
```
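As a side note on why a squared-radius feature works here: on concentric-circle data the single hand-crafted feature x1**2 + x2**2 already makes the classes separable with one threshold. A minimal self-contained sketch; the radii, seed, and threshold are my own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
theta = rng.uniform(0.0, 2.0 * np.pi, 100)
# inner disc (class 1) and outer ring (class 0), mimicking make_circles
r = np.concatenate([rng.uniform(0.0, 0.4, 50), rng.uniform(0.8, 1.0, 50)])
y = np.concatenate([np.ones(50), np.zeros(50)])
pts = np.column_stack([r * np.cos(theta), r * np.sin(theta)])

# hand-crafted feature: squared distance from the origin (the X3 column above)
f = (pts ** 2).sum(axis=1)

# in this 1-D feature space a single threshold separates the classes
pred = (f < 0.36).astype(float)
acc = (pred == y).mean()
print(acc)
```

Since the inner points have f < 0.16 and the outer points f > 0.64, any threshold in between separates them perfectly.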
# GPs with boundary conditions

In the paper entitled '' (https://export.arxiv.org/pdf/2002.00818), the author claims that a GP can be constrained to match boundary conditions.

Consider a GP prior with covariance kernel

$$k_F(x,y) = \exp\left(-\frac{1}{2}(x-y)^2\right)$$

Try to match the boundary conditions:

$$f(0) = f'(0) = f(1) = f'(1) = 0$$

The posterior will be a GP with covariance equal to:

$$\exp\left(-\frac{1}{2}(x-y)^2\right) - \frac{\exp\left(-\frac{1}{2}(x^2+y^2)\right)}{e^{-2} - 3e^{-1} + 1} \cdot \left( (xy+1) + (xy-x-y+2)e^{x+y-1} + (-2xy + x+y-1)(e^{x+y-2}+e^{-1}) + (xy-y+1)e^{y-2} + (xy-x+1)e^{x-2} + (y-x-2)e^{y-1} + (x-y-2)e^{x-1}\right)$$

This notebook compares the constrained and unconstrained kernels.

```
import GPy
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

sample_size = 5
X = np.random.uniform(0, 1., (sample_size, 1))
Y = np.sin(X) + np.random.randn(sample_size, 1)*0.1
testX = np.linspace(0, 1, 100).reshape(-1, 1)

def plotSamples(testY, simY, simMse, ax):
    testY = testY.squeeze()
    simY = simY.squeeze()
    simMse = simMse.squeeze()
    ax.plot(testX.squeeze(), testY, lw=0.2, c='k')
    ax.plot(X, Y, 'ok', markersize=5)
    ax.fill_between(testX.squeeze(), simY - 3*simMse**0.5, simY + 3*simMse**0.5, alpha=0.1)
    ax.set_xlabel('Input')
    ax.set_ylabel('Output')
```

## Unconstrained case

```
kU = GPy.kern.RBF(1, variance=1, lengthscale=1.)
mU = GPy.models.GPRegression(X, Y, kU, noise_var=0.1) priorTestY = mU.posterior_samples_f(testX, full_cov=True, size=10) priorSimY, priorSimMse = mU.predict(testX) ``` Plot the kernel function ``` n = 101 xs = np.linspace(0, 1, n)[:,np.newaxis] KU = np.array([kU.K(x[np.newaxis,:], xs)[0] for x in xs]) ph0 = plt.pcolormesh(xs.T, xs, KU) plt.title('Unconstrained RBF') plt.colorbar(ph0) ``` ## Constrained case ``` def K(x, y=None): if y is None: y = x bb = (x*y+1) + (x*y-x-y+2)*np.exp(x+y-1) + (x+y-1-2*x*y)*(np.exp(x+y-2)+np.exp(-1)) + (x*y-y+1)*np.exp(y-2) + (x*y-x+1)*np.exp(x-2) + (y-x-2)*np.exp(y-1) + (x-y-2)*np.exp(x-1) k = np.exp(-0.5*(x-y)**2.0) - np.exp(-0.5*(x**2.0 + y**2.0)) / (np.exp(-2) - 3*np.exp(-1) + 1) * bb return k KC = [[K(x,y)[0] for x in xs] for y in xs] plt.pcolormesh(xs.T, xs, KC) plt.title('Constrained RBF') plt.colorbar() ``` ## Train the unconstrained model ``` mU.optimize() posteriorTestY = mU.posterior_samples_f(testX, full_cov=True, size=10) postSimY, postSimMse = mU.predict(testX) f, axs = plt.subplots(1, 2, sharey=True, figsize=(10,5)) plotSamples(priorTestY, priorSimY, priorSimMse, axs[0]) plotSamples(posteriorTestY, postSimY, postSimMse, axs[1]) sns.despine() ``` # GPy examples ## Combine normal and derivative observations ``` def plot_gp_vs_real(m, x, yreal, size_inputs, title, fixed_input=1, xlim=[0,11], ylim=[-1.5,3]): fig, ax = plt.subplots() ax.set_title(title) plt.plot(x, yreal, "r", label='Real function') rows = slice(0, size_inputs[0]) if fixed_input == 0 else slice(size_inputs[0], size_inputs[0]+size_inputs[1]) m.plot(fixed_inputs=[(1, fixed_input)], which_data_rows=rows, xlim=xlim, ylim=ylim, ax=ax) f = lambda x: np.sin(x)+0.1*(x-2.)**2-0.005*x**3 fd = lambda x: np.cos(x)+0.2*(x-2.)-0.015*x**2 N = 10 # Number of observations Npred = 100 # Number of prediction points sigma = 0.2 # Noise of observations sigma_der = 1e-3 # Noise of derivative observations x = np.array([np.linspace(1,10,N)]).T y = f(x) + 
np.array(sigma*np.random.normal(0,1,(N,1))) # M = 10 # Number of derivative observations # xd = np.array([np.linspace(2,8,M)]).T # yd = fd(xd) + np.array(sigma_der*np.random.normal(0,1,(M,1))) # Specify derivatives at end-points M = 2 xd = np.atleast_2d([0, 11]).T yd = np.atleast_2d([0, 0]).T xpred = np.array([np.linspace(0,11,Npred)]).T ypred_true = f(xpred) ydpred_true = fd(xpred) # squared exponential kernel: try: se = GPy.kern.RBF(input_dim = 1, lengthscale=1.5, variance=0.2) # We need to generate separate kernel for the derivative observations and give the created kernel as an input: se_der = GPy.kern.DiffKern(se, 0) except: se = GPy.kern.RBF(input_dim = 1, lengthscale=1.5, variance=0.2) # We need to generate separate kernel for the derivative observations and give the created kernel as an input: se_der = GPy.kern.DiffKern(se, 0) #Then gauss = GPy.likelihoods.Gaussian(variance=sigma**2) gauss_der = GPy.likelihoods.Gaussian(variance=sigma_der**2) # Then create the model, we give everything in lists, the order of the inputs indicates the order of the outputs # Now we have the regular observations first and derivative observations second, meaning that the kernels and # the likelihoods must follow the same order. 
# Crosscovariances are automatically taken care of
m = GPy.models.MultioutputGP(X_list=[x, xd], Y_list=[y, yd], kernel_list=[se, se_der], likelihood_list = [gauss, gauss_der])
m.optimize(messages=0, ipython_notebook=False)

# Plot the model, the syntax is the same as for multioutput models:
plot_gp_vs_real(m, xpred, ydpred_true, [x.shape[0], xd.shape[0]], title='Latent function derivatives', fixed_input=1, xlim=[0,11], ylim=[-1.5,3])
plot_gp_vs_real(m, xpred, ypred_true, [x.shape[0], xd.shape[0]], title='Latent function', fixed_input=0, xlim=[0,11], ylim=[-1.5,3])

# Making predictions for the values:
mu, var = m.predict_noiseless(Xnew=[xpred, np.empty((0,1))])
```

## Fixed end-points using a Multitask GP with different likelihood functions

```
N = 10       # Number of observations
Npred = 100  # Number of prediction points
sigma = 0.25    # Noise of observations
sigma_0 = 1e-3  # Noise of zero observations

xlow = 0
xhigh = 10
x = np.array([np.linspace(xlow,xhigh,N)]).T
y = f(x) + np.array(sigma*np.random.normal(0,1,(N,1)))

M = 2
dx = 5
x0 = np.atleast_2d([xlow-dx, xhigh+dx]).T
y0 = np.atleast_2d([0, 0]).T

xpred = np.array([np.linspace(xlow-dx,xhigh+dx,Npred)]).T
ypred_true = f(xpred)

# squared exponential kernel:
se = GPy.kern.RBF(input_dim = 1, lengthscale=1.5, variance=0.2)

# Likelihoods for each task
gauss = GPy.likelihoods.Gaussian(variance=sigma**2)
gauss_0 = GPy.likelihoods.Gaussian(variance=sigma_0**2)

# Create the model; we give everything in lists, and the order of the inputs indicates the order of the outputs.
# Now we have the regular observations first and the fixed end-point observations second, meaning that the kernels and
# the likelihoods must follow the same order.
# Crosscovariances are automatically taken care of
m = GPy.models.MultioutputGP(X_list=[x, x0], Y_list=[y, y0], kernel_list=[se, se], likelihood_list = [gauss, gauss_0])
m.optimize(messages=0, ipython_notebook=False)

# Plot
ylims = [-1.5,3]
fig, ax = plt.subplots(figsize=(8,5))
ax.set_title('Latent function with fixed end-points')
ax.plot(xpred, ypred_true, 'k', label='Real function')
ypred_mean, ypred_var = m.predict([xpred])
ypred_std = np.sqrt(ypred_var)
ax.fill_between(xpred.squeeze(), (ypred_mean - 1.96*ypred_std).squeeze(), (ypred_mean + 1.96*ypred_std).squeeze(), color='r', alpha=0.1, label='Confidence')
ax.plot(xpred, ypred_mean, 'r', label='Mean')
ax.plot(x, y, 'kx', label='Data')
ax.set_ylim(ylims)
ax.plot(x0, y0, 'ro', label='Fixed end-points')
ax.legend()
sns.despine()
```

## Fixed end-points using MixedNoise likelihood

```
# Squared exponential kernel:
se = GPy.kern.RBF(input_dim = 1, lengthscale=1.5, variance=0.2)

# MixedNoise likelihood
gauss = GPy.likelihoods.Gaussian(variance=sigma**2)
gauss_0 = GPy.likelihoods.Gaussian(variance=sigma_0**2)
mixed = GPy.likelihoods.MixedNoise([gauss, gauss_0])

# Create the model; the output index in Y_metadata indicates which likelihood each row belongs to.
# Now we have the regular observations first and the fixed end-point observations second, meaning that
# the likelihoods must follow the same order.
# Crosscovariances are automatically taken care of
xc = np.append(x, x0, axis=0)
yc = np.append(y, y0, axis=0)
ids = np.append(np.zeros((N,1), dtype=int), np.ones((M,1), dtype=int), axis=0)
Y_metadata = {'output_index': ids}
m = GPy.core.GP(xc, yc, se, likelihood=mixed, Y_metadata=Y_metadata)
m.optimize(messages=0, ipython_notebook=False)

# Plot
fig, ax = plt.subplots(figsize=(8,5))
ax.set_title('Latent function with fixed end-points')
ax.plot(xpred, ypred_true, 'k', label='Real function')
#m.plot(fixed_inputs=[(1, 0)], which_data_rows=slice(0, x.shape[0]), xlim=[-dx,10+dx], ylim=[-1.5,3], ax=ax)
ypred_mean, ypred_var = m.predict(xpred, Y_metadata={'output_index': np.zeros_like(xpred, dtype=int)})
ypred_std = np.sqrt(ypred_var)
ax.fill_between(xpred.squeeze(), (ypred_mean - 1.96*ypred_std).squeeze(), (ypred_mean + 1.96*ypred_std).squeeze(), color='r', alpha=0.1, label='Confidence')
ax.plot(xpred, ypred_mean, 'r-', label='Mean')
ax.plot(x, y, 'kx', label='Data')
ax.set_ylim(ylims)
ax.plot(x0, y0, 'ro', label='Fixed end-points')
ax.legend()
sns.despine()

m
```
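The fixed end-points trick used above is ordinary GP conditioning on (near-)noiseless observations. A minimal numpy sketch of that math, using the same RBF hyperparameters (lengthscale 1.5, variance 0.2) and a small jitter in place of sigma_0**2; the helper name `rbf` and the grid size are my own:

```python
import numpy as np

def rbf(a, b, lengthscale=1.5, variance=0.2):
    # squared-exponential kernel on 1-D inputs
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

x0 = np.array([0.0, 11.0])            # fixed end-points
y0 = np.zeros(2)                      # observed to be zero
K00 = rbf(x0, x0) + 1e-6 * np.eye(2)  # jitter plays the role of sigma_0**2

xs = np.linspace(0.0, 11.0, 23)
Ks0 = rbf(xs, x0)

# standard GP conditioning: mean = K*0 K00^-1 y0, var = diag(K** - K*0 K00^-1 K0*)
mean = Ks0 @ np.linalg.solve(K00, y0)
var = rbf(xs, xs).diagonal() - np.einsum('ij,ji->i', Ks0, np.linalg.solve(K00, Ks0.T))
print(mean[0], var[0], var[-1])
```

The posterior mean is identically zero and the posterior variance collapses at the conditioned end-points while staying near the prior variance in between.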
# Import Dataset, Split X,Y ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt import pickle trainDF = pd.read_csv('E:/Sem5/ML-Final-Project/Dataset/train.csv', low_memory=False) valDF = pd.read_csv('E:/Sem5/ML-Final-Project/Dataset/val.csv', low_memory=False) testDF = pd.read_csv('E:/Sem5/ML-Final-Project/Dataset/test.csv', low_memory=False) train_X = trainDF.drop(columns=['data_IMDBscore']) train_Y = trainDF['data_IMDBscore'] val_X = valDF.drop(columns=['data_IMDBscore']) val_Y = valDF['data_IMDBscore'] test_X = testDF.drop(columns=['data_IMDBscore']) test_Y = testDF['data_IMDBscore'] ``` # Train Model, Plot Loss Curves ``` from sklearn.linear_model import Ridge from sklearn.metrics import mean_squared_error train_ridge_model = Ridge(alpha=57.0, fit_intercept=True, copy_X=True, random_state=0, solver='lsqr') train_ridge_model.fit(train_X, train_Y) print(train_ridge_model.score(train_X,train_Y)) print(mean_squared_error(train_ridge_model.predict(train_X), train_Y)) print(train_ridge_model.n_iter_) train_score_list=[] train_error_list=[] val_score_list=[] val_error_list=[] for itr in range(1,26): train_model = Ridge(alpha=57.0, fit_intercept=True, copy_X=True, random_state=0, solver='lsqr', max_iter=itr) train_model.fit(train_X, train_Y) train_score_list.append(train_model.score(train_X, train_Y)) train_error_list.append(mean_squared_error(train_model.predict(train_X), train_Y)) val_score_list.append(train_model.score(val_X,val_Y)) val_error_list.append(mean_squared_error(train_model.predict(val_X), val_Y)) plt.style.use('seaborn') plt.scatter([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25],train_score_list) plt.plot([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25],train_score_list, label='Train Score vs Iterations') plt.xlabel('Iterations') plt.ylabel('Train Score') plt.legend() plt.savefig('Graphs/regression_train_vs_score_itr.png') plt.show() plt.style.use('seaborn') 
plt.scatter([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25],train_error_list) plt.plot([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25],train_error_list, label='Train Error vs Iterations') plt.xlabel('Iterations') plt.ylabel('Train Error') plt.legend() plt.savefig('Graphs/regression_train_vs_error_itr.png') plt.show() plt.style.use('seaborn') plt.scatter([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25],val_score_list) plt.plot([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25],val_score_list, label='Val Score vs Iterations') plt.xlabel('Iterations') plt.ylabel('Val Score') plt.legend() plt.savefig('Graphs/regression_val_vs_score_itr.png') plt.show() plt.style.use('seaborn') plt.scatter([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25],val_error_list) plt.plot([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25],val_error_list, label='Val Error vs Iterations') plt.xlabel('Iterations') plt.ylabel('Val Error') plt.legend() plt.savefig('Graphs/regression_val_vs_error_itr.png') plt.show() ```
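The Ridge fit above can also be written in closed form, which makes the effect of `alpha` easy to check on synthetic data. The design matrix, coefficients, and the helper `ridge_fit` below are illustrative, not part of the project dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
true_w = np.array([1.0, -2.0, 0.5, 3.0, 0.0])
y = X @ true_w + rng.normal(scale=0.1, size=100)

def ridge_fit(X, y, alpha):
    # closed form: w = (X^T X + alpha * I)^(-1) X^T y
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

w_ols = ridge_fit(X, y, 0.0)   # ordinary least squares
w_reg = ridge_fit(X, y, 57.0)  # same alpha as the model above
print(np.linalg.norm(w_ols), np.linalg.norm(w_reg))
```

Larger `alpha` shrinks the coefficient vector toward zero, trading variance for bias.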
``` import os import itertools import pathlib import sys import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from torch.utils.data import ConcatDataset PROJECT_DIR = os.path.dirname(os.getcwd()) if PROJECT_DIR not in sys.path: sys.path.insert(0, PROJECT_DIR) from chord_recognition.cache import HDF5Cache from chord_recognition.dataset import prepare_datasource, ChromaDataset, undersample_dataset from chord_recognition.ann_utils import build_ann_df, convert_chord_label pd.set_option('display.max_columns', 50) pd.set_option('display.max_rows', 200) #pd.set_option('display.width', 1000) %matplotlib inline plt.rcParams['figure.figsize'] = (14, 5) sns.set(style="whitegrid") %load_ext autoreload %autoreload 2 # Helpers def plot_dataset(values, labels, title='chords distribution'): targets, counts = np.unique(values, return_counts=True) df = pd.DataFrame(counts, columns=['count']) chord_map = {idx: label for idx, label in enumerate(labels)} labels = [chord_map.get(t, 'unk') for t in targets] df.index = labels g = sns.catplot(data=df, x=df.index, y='count', kind="bar", height=8) g.fig.suptitle(title, fontsize=14) ds = prepare_datasource(('beatles',)) dataset = ChromaDataset( ds, window_size=8192, hop_length=4096, cache=HDF5Cache('chroma_cache.hdf5')) plot_dataset( values=[l for _, l in dataset], labels=dataset.chord_labels, title='beatles chord distribution') sampling_strategy = { 0: 8000, 2: 8000, 4: 8000, 5: 8000, 7: 8000, 9: 8000, 11: 8000, 24: 8000, } _, by = undersample_dataset(dataset, sampling_strategy, random_state=11) plot_dataset( values=by, labels=dataset.chord_labels, title='undersampled beatles chord distribution') ds = prepare_datasource(('robbie_williams',)) dataset = ChromaDataset( ds, window_size=8192, hop_length=4096, cache=HDF5Cache('chroma_cache.hdf5')) sampling_strategy = { 0: 8000, 2: 8000, 5: 8000, 7: 8000, 9: 8000, 24: 5000, } _, ry = undersample_dataset(dataset, sampling_strategy, random_state=11) 
plot_dataset( values=ry, labels=dataset.chord_labels, title='robbie_williams chord distribution') ds = prepare_datasource(('zweieck',)) dataset = ChromaDataset( ds, window_size=8192, hop_length=4096, cache=HDF5Cache('chroma_cache.hdf5')) zy = [yi for _, yi in dataset] plot_dataset( values=zy, labels=dataset.chord_labels, title='zweieck chord distribution') ds = prepare_datasource(('queen',)) dataset = ChromaDataset( ds, window_size=8192, hop_length=4096, cache=HDF5Cache('chroma_cache.hdf5')) sampling_strategy = { 2: 4500, } _, qy = undersample_dataset(dataset, sampling_strategy, random_state=11) plot_dataset( values=qy, labels=dataset.chord_labels, title='queen chord distribution') plot_dataset( values=by+ry+zy+qy, labels=dataset.chord_labels, title='overall') # https://www.music-ir.org/mirex/wiki/2020:Audio_Chord_Estimation maj_chords = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B'] min_chords = [c + 'm' for c in maj_chords] seventh_labels = ['maj7', 'min7', '7'] seventh_chords = set([c + l for l in seventh_labels for c in list(maj_chords)]) inversions_labels = ['/3', 'min/b3', '/5', 'min/5'] inversions_chords = set([c + l for l in inversions_labels for c in list(maj_chords)]) seventh_inversions_labels = ['maj7/3', 'min7/b3', '7/3', 'maj7/5', 'min7/5', '7/5', 'maj7/7', 'min7/b7', '7/b7'] # Define some usefull helpers def build_df(annotations): datasets = [] for ann in annotations: chords_df = build_ann_df(ann) datasets.append(chords_df) chords_df = pd.concat(datasets) chords_df = convert_chord_label(chords_df) return chords_df def audio_stats_df(annotations): index = [ann.split('/')[-1].replace('.lab', '') for ann in annotations] audio_df = pd.DataFrame( columns=list(maj_chords) + list(min_chords) + ['N'], index=index) for i, ann in enumerate(annotations): df = build_ann_df(ann) df = convert_chord_label(df) df = df[df.label.isin(maj_chords + min_chords + ['N'])] audio_df.iloc[i] = df['label'].value_counts() #audio_df = audio_df.replace(np.nan, 
'', regex=True) return audio_df def preprocess_chords(df): chords_df = df.copy() chords_df['vocab'] = chords_df.label.apply(lambda x: mark_vocabulary(x)) return chords_df def mark_vocabulary(label): if label in maj_chords: return 'maj' elif label in min_chords: return 'min' elif label in seventh_chords: return 'seventh' elif label in inversions_chords: return 'inv' elif label == 'N': # exclude `N` from other return 'N' else: return 'other' def chord_class_distribution(df, title='Chord Class distribution'): print(df.groupby('vocab').size() / len(df)) ax = sns.countplot(x="vocab", hue='vocab', data=df, dodge=False).set_title(title) def chord_distribution(df, column='label', minmaj=False, stats=False, title='Chord distribution'): df = df.copy() orig_count = df['label'].value_counts().sum() if minmaj: df = df[df.label.isin(list(maj_chords) + list(min_chords) + ['N'])] sns.countplot(column, data=df.sort_values([column])) if stats: df = pd.DataFrame(df['label'].value_counts()) #df['ratio'] = df.label.apply(lambda x: x / df.label.sum()) df['ratio'] = df.label.apply(lambda x: x / orig_count) print(df) print('Overall ratio: {0:.3f}'.format(df['ratio'].sum())) ds = prepare_datasource(('zweieck',)) ds = [lab for lab,_ in ds] audio_df = audio_stats_df(ds) audio_df ds = prepare_datasource(('queen',)) ds = [lab for lab,_ in ds] audio_df = audio_stats_df(ds) audio_df ds = [lab for lab,_ in prepare_datasource(('robbie_williams',))] audio_df = audio_stats_df(ds) audio_df ``` [Chord and Harmony annotations of the first five albums by Robbie Williams](https://www.researchgate.net/publication/260399240_Chord_and_Harmony_annotations_of_the_first_five_albums_by_Robbie_Williams) by M. 
Zanoni ``` excluded_files = ( '11-Man Machine', '01-Ghosts', '11-A Place To Crash', '08-Heaven From Here', '09-Random Acts Of Kindness', '05-South Of The Border', ) robbie_ds = [ann for ann, _ in prepare_datasource(('robbie_williams',), excluded_files=excluded_files)] robbie_df = build_df(robbie_ds) robbie_df = preprocess_chords(robbie_df) chord_distribution(robbie_df, column='vocab', title='Robbie Williams') chord_distribution(robbie_df, column='label', minmaj=True, title='Robbie Williams') ``` ### Isophonics datasets includes `The Beatles`, `Queen` and `Zweieck` [source](http://www.isophonics.net/datasets) ``` #isophonics_ds = prepare_datasource(('queen', 'beatles', 'zweieck')) excluded_files=( # zweieck '09_-_Mr_Morgan', '01_-_Spiel_Mir_Eine_Alte_Melodie', '11_-_Ich_Kann_Heute_Nicht', # queen '14 Hammer To Fall', '08 Save Me', ) isophonics_ds = [ann for ann, _ in prepare_datasource(('zweieck', 'queen'), excluded_files=excluded_files)] isophonics_df = build_df(isophonics_ds) isophonics_df = preprocess_chords(isophonics_df) #chord_distribution(isophonics_df, column='vocab', title='Isophonics') # Isophonics chords distribution chord_distribution(isophonics_df, minmaj=True) allowed_files = ( '06-Mr_Moonlight', '06-Yellow_Submarine', '03-I_m_Only_Sleeping', '09-Penny_Lane', '12-Wait', '11-Do_You_Want_To_Know_A_Secret', '12-A_Taste_Of_Honey', '04-I_m_Happy_Just_To_Dance_With_You', '03-If_I_Fell', '10-I_m_Looking_Through_You', '09-When_I_m_Sixty-Four', '06-Till_There_Was_You', '05-Octopus_s_Garden', '03-All_My_Loving', '05-And_I_Love_Her', '02-All_I_ve_Got_To_Do', '10-For_No_One', '08-Because', '06-She_s_Leaving_Home', '04-Chains', '10-Things_We_Said_Today', '09-One_After_909', '09-Girl', '14-Run_For_Your_Life', '04-Oh_Darling', '04-Don_t_Bother_Me', '06-I_Want_You_She_s_So_Heavy_', '06-Tell_Me_Why', ) beatles_ds = [ann for ann, _ in prepare_datasource(('beatles',), allowed_files=allowed_files)] beatles_df = build_df(beatles_ds) beatles_df = 
preprocess_chords(beatles_df) isophonics_df = pd.concat([isophonics_df, beatles_df]) # Robbie Williams and Isophonics chords distribution total_df = pd.concat([isophonics_df, robbie_df]) # Print imbalance ratio of a multi-class dataset # It may be usefull for initialization to speed up convergence of a network. chord_distribution(total_df, minmaj=True, stats=True) total_ds = beatles_ds + isophonics_ds + robbie_ds print('total ds len:', len(total_ds)) audio_df = audio_stats_df(total_ds) # ds = prepare_datasource(('beatles',)) # ds = [lab for lab,_ in ds] # audio_df = audio_stats_df(ds) # audio_df.sort_values('F#', ascending=False) audio_df.sort_values('G', ascending=False) # Take a look at a specific audio targets df = build_ann_df("data/beatles/chordlabs/Rubber_Soul/06-The_Word.lab") df = convert_chord_label(df) df = preprocess_chords(df) chord_distribution(df, stats=True, minmaj=True, title='03_-_She') ```
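The vocabulary binning done by `mark_vocabulary` can be illustrated with a small standalone sketch that mirrors its priority order; for brevity, the inversion classes are lumped into 'other' here:

```python
maj_chords = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']
min_chords = [c + 'm' for c in maj_chords]
seventh_chords = {c + s for s in ('maj7', 'min7', '7') for c in maj_chords}

def vocab_class(label):
    # bin a chord label into a vocabulary class, like mark_vocabulary
    if label == 'N':          # keep `N` separate from other
        return 'N'
    if label in maj_chords:
        return 'maj'
    if label in min_chords:
        return 'min'
    if label in seventh_chords:
        return 'seventh'
    return 'other'

print([vocab_class(l) for l in ['C', 'Cm', 'Cmaj7', 'G/5', 'N']])
```

A distribution over these classes is exactly what `chord_class_distribution` reports.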
# Lab 1. Graph Algorithms.

```
import networkx as nx
import pylab
import matplotlib.pyplot as plt
```

Suppose a graph is given by its adjacency sets:

```
pos = {0: {1, 2}, 1: {3, 4}, 2: {1, 4}, 3: {4}, 4: {1, 3, 5}, 5: {0, 2}}
```

Let us build the corresponding [directed graph](https://networkx.github.io/documentation/networkx-1.10/reference/classes.digraph.html):

```
N = len(pos)
G = nx.DiGraph()
a = [(i, j) for i in range(N) for j in pos[i]]  # generate the edge list
G.add_nodes_from(range(N))
G.add_edges_from(a)
nx.draw(G, with_labels=True)
pylab.figure()
plt.show()
```

# 1. Graph traversal algorithms.

In many applications we need to list every vertex of a graph exactly once, starting from some vertex. This is done with depth-first or breadth-first traversals. The main idea of a traversal:

- at each step, consider the next unprocessed vertex;
- mark that vertex in some way;
- before/after processing the vertex, traverse from all of its unvisited neighbours.

A queue (breadth-first search) or a stack (depth-first search) is used to order the vertices.

# 1.1. Breadth-first search.

A program implementing breadth-first search (recording predecessors):

```
def bfs(graph, s, out=0):
    parents = {v: None for v in graph}
    level = {v: None for v in graph}
    level[s] = 0  # level of the start vertex
    queue = [s]  # put the start vertex into the queue
    while queue:  # while it is not empty
        v = queue.pop(0)  # take a vertex out
        for w in graph[v]:  # explore from vertex v
            if level[w] is None:  # visited check
                queue.append(w)  # add the vertex to the queue
                parents[w] = v
                level[w] = level[v] + 1  # compute the vertex level
                if out: print(level[w], level, queue)
    return level, parents
```

And a program that reconstructs the route:

```
def PATH(end, parents):
    path = [end]
    parent = parents[end]
    while not parent is None:
        path.append(parent)
        parent = parents[parent]
    return path[::-1]
```

# 1.2. Depth-first search.
A program implementing depth-first search (recording only levels):

```
def dfs(graph, s, out=0):
    level = {v: None for v in graph}
    level[s] = 0  # level of the start vertex
    queue = [s]  # put the start vertex onto the stack
    while queue:  # while it is not empty
        v = queue.pop(-1)  # take a vertex out
        for w in graph[v]:  # explore from vertex v
            if level[w] is None:  # visited check
                queue.append(w)  # add the vertex to the stack
                level[w] = level[v] + 1  # compute the vertex level
                if out: print(level[w], level, queue)
    return level

dfs(pos, 0, 1)
pos[2]
```

# Example 1.

Let us find a shortest route using breadth-first search:

```
level, parents = bfs(pos, 0, out=0)
level
parents
path = PATH(5, parents)
print(path)
```

Let us visualize this route:

```
red_node = set(path)  # vertices of the route
red_edges = [(path[i], path[i+1]) for i in range(len(path)-1)]  # edges of the route
# split vertices and edges by colour
node_colours = ['g' if not node in red_node else 'red' for node in G.nodes()]
black_edges = [edge for edge in G.edges() if edge not in red_edges]
# draw the graph
#p = nx.spring_layout(G)
p = {0: [0.38144628, -0.66882419], 1: [0.23970166, 0.49135202], 2: [0.41724407, 0.05678197], 3: [-0.55966794, 1.], 4: [-0.44016179, 0.07245783], 5: [-0.03856228, -0.95176763]}
nx.draw_networkx_nodes(G, p, cmap=plt.get_cmap('jet'), node_color=node_colours, node_size=500)
nx.draw_networkx_labels(G, p)
nx.draw_networkx_edges(G, p, edgelist=black_edges, width=2.0, edge_color='k', arrows=True)
nx.draw_networkx_edges(G, p, edgelist=red_edges, width=3.0, edge_color='r', arrows=True)
plt.show()
# vertex coordinates in the figure
p
```

# Exercise 1

Two vertices (`v` and `u`) of a directed graph are called strongly connected if there is a path from `v` to `u` and a path from `u` to `v`. A directed graph is called strongly connected if every two of its vertices are strongly connected.
Write a function that uses a modified depth-first search (Kosaraju's algorithm) to find the strongly connected components. The algorithm:

1. Reverse the arcs of the original directed graph.
2. Run a depth-first search on this reversed graph, remembering the order in which the vertices were finished.
3. Run a depth-first search on the original graph, each time picking the unvisited vertex with the largest number in the vector obtained in step 2.

The trees obtained in step 3 are the strongly connected components.

Use this function to find and draw the strongly connected components of the graph:

```
pos2 = {0: {1, 2}, 1: {3, 4}, 2: {1, 4}, 3: {4}, 4: {1, 3, 5}, 5: {0, 2}, 6: {3, 0, 5}, 7: {2, 1}, 8: {0, 7, 3}, 9: {2, 4, 6, 8}}

N = 10
G = nx.DiGraph()
a = [(i, j) for i in range(N) for j in pos2[i]]  # generate the edge list
G.add_nodes_from(range(N))
G.add_edges_from(a)
nx.draw(G, with_labels=True)
pylab.figure()
plt.show()

def kosaraju(g):
    size = len(g)
    vis = [False] * size
    l = [0] * size
    x = size
    t = [[]] * size
    def visit(g, vis, u, x, l, t):
        if not vis[u]:
            vis[u] = True
            for v in g[u]:
                vis, x, l, t = visit(g, vis, v, x, l, t)
                t[v] = t[v] + [u]
            x -= 1
            l[x] = u
        return vis, x, l, t
    for u in range(len(g)):
        vis, x, l, t = visit(g, vis, u, x, l, t)
    c = [0] * size
    def assign(vis, c, t, u, root):
        if vis[u]:
            vis[u] = False
            c[u] = root
            for v in t[u]:
                vis, c, t = assign(vis, c, t, v, root)
        return vis, c, t
    for u in l:
        vis, c, t = assign(vis, c, t, u, u)
    print('Strongly connected components:')
    dup = {i: 0 for i in range(len(c))}
    for i in c:
        dup[i] += 1
    css = []
    for i in dup.keys():
        if dup[i] > 1:
            for j in range(len(c)):
                if c[j] == i:
                    print(j, ', ', sep='', end='')
                    css.append(j)
            print()
    return css

css = kosaraju(pos2)
N = len(pos2)
G = nx.DiGraph()
a = [(i, j) for i in range(N) for j in pos2[i]]
G.add_nodes_from(range(N))
G.add_edges_from(a)
p = nx.spring_layout(G)
red_node = set(css)
red_edges = [(css[i], css[i+1]) for i in range(len(css)-1)]
node_colours = ['b' if not node 
in red_node else 'red' for node in G.nodes()]
black_edges = [edge for edge in G.edges() if edge not in red_edges]
nx.draw_networkx_nodes(G, p, cmap=plt.get_cmap('jet'), node_color=node_colours, node_size=500)
nx.draw_networkx_labels(G, p)
nx.draw_networkx_edges(G, p, edgelist=black_edges, width=1.0, edge_color='k', arrows=True)
nx.draw_networkx_edges(G, p, edgelist=red_edges, width=1.0, edge_color='k', arrows=True)
pylab.figure()
plt.show()
```

# Example 2. A knight's moves

Create two strings whose combinations give the names of all squares of a chessboard:

```
letters = 'abcdefgh'
numbers = '12345678'
```

Create a dictionary to store the graph as adjacency sets:

```
graph = dict()
graph
```

Fill in the vertex names of the graph:

```
for l in letters:
    for n in numbers:
        graph[l+n] = set()
```

Fill in the adjacency sets:

```
def add_edge(graph, v1, v2):
    graph[v1].add(v2)
    graph[v2].add(v1)

for i in range(8):
    for j in range(8):
        v1 = letters[i]+numbers[j]
        v2 = ''
        if 0<=i+2<8 and 0<=j+1<8:
            v2 = letters[i+2]+numbers[j+1]
            add_edge(graph, v1, v2)
        if 0<=i-2<8 and 0<=j+1<8:
            v2 = letters[i-2]+numbers[j+1]
            add_edge(graph, v1, v2)
        if 0<=i+1<8 and 0<=j+2<8:
            v2 = letters[i+1]+numbers[j+2]
            add_edge(graph, v1, v2)
        if 0<=i-1<8 and 0<=j+2<8:
            v2 = letters[i-1]+numbers[j+2]
            add_edge(graph, v1, v2)
```

Run a breadth-first scan of the graph:

```
start = 'd4'
end = 'f7'
level, parents = bfs(graph, start)
```

And obtain the knight's route:

```
PATH(end, parents)
```

# Exercise 2.

Draw the graph corresponding to the knight's moves on a chessboard, and mark the found route on it.
```
Kpath = PATH(end, parents)

p = {}
for l in range(len(letters)):
    for n in range(len(numbers)):
        p[letters[l]+numbers[n]] = [-1 + l * 0.25, -1 + n * 0.25]

G = nx.DiGraph()
a = [(i, j) for i in graph for j in graph[i]]
G.add_edges_from(a)
#p = nx.spring_layout(G)
red_node = set(Kpath)
red_edges = [(Kpath[i], Kpath[i+1]) for i in range(len(Kpath)-1)]
node_colours = ['w' if not node in red_node else 'r' for node in G.nodes()]
black_edges = [edge for edge in G.edges() if edge not in red_edges]
nx.draw_networkx_nodes(G, p, cmap=plt.get_cmap('jet'), node_color=node_colours, node_size=500)
nx.draw_networkx_labels(G, p)
nx.draw_networkx_edges(G, p, edgelist=red_edges, width=1.0, edge_color='r', arrows=True)
pylab.figure()
plt.show()
```

# Exercise 3. Colourings.

Consider a graph `G(V,E)` with `V` vertices and `E` edges. A colouring of the graph `G` is an assignment of colours to its vertices such that no two adjacent vertices have the same colour. The chromatic number `X(G)` is the smallest number of colours needed to colour the graph. A greedy algorithm for graph colouring is well known.

Greedy sequential colouring algorithm:

Input: graph `G(V,E)`. Output: array `c[v]` of coloured vertices.

1. For every vertex, define the set `A = {1,2,3,...,n}` of all colours.
2. Choose a start vertex (from which the algorithm begins). Colour it with colour `color`. Remove this colour from the colour sets of all vertices adjacent to the start vertex.
3. Choose an uncoloured vertex `v`.
4. Colour the chosen vertex with the smallest possible colour from the set `A`. Remove this colour from the colour sets of all vertices adjacent to `v`.
5. Repeat steps 3 and 4 for all uncoloured vertices of the graph.

Using this algorithm, colour the graph from the knight-move problem.
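The greedy steps above can be sketched directly and checked for properness; the toy graph `g` below is my own example, but the knight graph from Example 2 works the same way:

```python
def greedy_coloring(graph):
    # graph: {vertex: set of neighbours}, assumed symmetric
    colors = {}
    for v in graph:                       # step 3: next uncoloured vertex
        used = {colors[u] for u in graph[v] if u in colors}
        c = 0
        while c in used:                  # step 4: smallest free colour
            c += 1
        colors[v] = c
    return colors

g = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
col = greedy_coloring(g)
# a proper colouring: no edge joins two vertices of the same colour
assert all(col[u] != col[v] for u in g for v in g[u])
print(col)
```

Greedy colouring is not guaranteed to reach the chromatic number, but it never uses more than one plus the maximum degree of the graph.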
```
def ggca(graph):
    colors = {i: None for i in graph.keys()}
    for i in graph.keys():
        minimal_set = []
        for s in graph[i]:
            if colors[s] is not None:
                minimal_set.append(colors[s])
        for c in range(len(graph)):
            if c not in minimal_set:
                colors[i] = c
                break
    return colors

r = ggca(graph)
r

G = nx.DiGraph()
a = [(i, j) for i in graph for j in graph[i]]
G.add_edges_from(a)
#p = nx.spring_layout(G)
node_colours = ['b' if r[node] == 0 else 'r' if r[node] == 1 else 'g' for node in G.nodes()]

nx.draw_networkx_nodes(G, p, cmap=plt.get_cmap('jet'), node_color=node_colours, node_size=500)
nx.draw_networkx_labels(G, p)
pylab.figure()
plt.show()
```

# Homework (core):

# Task 1. Dijkstra's algorithm

Write a function implementing Dijkstra's algorithm.

```
# This is how you can add images
from IPython.display import Image  # import a specific function from the library

Image("Dijkstra.png")  # call the function, passing it the path to the file
                       # (here the file is in the same folder)

import sys

class Graph():
    def __init__(self, vertices):
        self.V = vertices
        self.graph = [[0 for column in range(vertices)] for row in range(vertices)]

    def min_distance(self, dist, sptSet):
        min_dist = sys.maxsize
        min_index = -1
        for v in range(self.V):
            if dist[v] < min_dist and sptSet[v] == False:
                min_dist = dist[v]
                min_index = v
        return min_index

    def dijkstra(self, src):
        dist = [sys.maxsize] * self.V
        dist[src] = 0
        sptSet = [False] * self.V
        for cout in range(self.V):
            u = self.min_distance(dist, sptSet)
            sptSet[u] = True
            for v in range(self.V):
                if self.graph[u][v] > 0 and sptSet[v] == False and dist[v] > dist[u] + self.graph[u][v]:
                    dist[v] = dist[u] + self.graph[u][v]
        self.show(dist)

    def show(self, dist):
        print("Vertex   Distance from source")
        for node in range(self.V):
            print(node, "\t", dist[node])
```

# Task 2. Generate a random weighted graph and find a route of minimum length on it using Dijkstra's algorithm.
```
import random
dir(random)

from datetime import datetime
random.seed(datetime.now())

size = int(random.uniform(4, 17))
# Build the matrix row by row: [[0] * size] * size would make every row
# the same list object, so writing to one row would change all of them.
w = [[0] * size for _ in range(size)]
for i in range(len(w)):
    for j in range(len(w[i])):
        w[i][j] = int(random.uniform(0, 101))

g = Graph(size)
g.graph = w
g.dijkstra(0)
```

# Task 3. Illustrate the work of one of the algorithms (breadth-first or depth-first search, or Dijkstra) by visualizing the operations on the graph at each iteration with the `networkx` and `matplotlib` libraries, as in Example 1.

```
# Let's take breadth-first search `bfs(graph, start)` and the graph given by the adjacency sets `pos`
import time

G = nx.DiGraph()
a = [(i, j) for i in pos for j in pos[i]]
G.add_edges_from(a)
node_colours = ['b' for node in G.nodes()]
edges = [i for i in G.edges()]
p = nx.spring_layout(G)
pylab.figure()
plt.show()

def redraw(prev, curr):
    # `prev` can be vertex 0, so test against None rather than truthiness
    if prev is not None:
        node_colours[prev] = 'b'
    node_colours[curr] = 'r'
    nx.draw_networkx_nodes(G, p, cmap=plt.get_cmap('jet'), node_color=node_colours, node_size=500)
    nx.draw_networkx_labels(G, p)
    nx.draw_networkx_edges(G, p, edgelist=edges, width=1.0, edge_color='k', arrows=True)
    pylab.figure()
    plt.show()

def bfs_vis(graph, s, out=0):
    parents = {v: None for v in graph}
    level = {v: None for v in graph}
    level[s] = 0
    queue = [s]
    prev = None
    while queue:
        v = queue.pop(0)
        redraw(prev, v)
        prev = v
        time.sleep(1)
        for w in graph[v]:
            if level[w] is None:
                queue.append(w)
                parents[w] = v
                level[w] = level[v] + 1
                if out:
                    print(level[w], level, queue)
    return level, parents

time.sleep(1)
bfs_vis(pos, 0)
```

# Task 4. In the library. Use some interesting algorithm from the [library](https://networkx.github.io/documentation/stable/reference/algorithms/index.html).

```
print('The radius of the graph we built above is ', nx.algorithms.distance_measures.radius(G), '.', sep='')
```

# Homework (extra):

# Assignment.
The Ford–Bellman algorithm. Write a function implementing the [Bellman–Ford algorithm](https://ru.wikipedia.org/wiki/%D0%90%D0%BB%D0%B3%D0%BE%D1%80%D0%B8%D1%82%D0%BC_%D0%91%D0%B5%D0%BB%D0%BB%D0%BC%D0%B0%D0%BD%D0%B0_%E2%80%94_%D0%A4%D0%BE%D1%80%D0%B4%D0%B0).

```
import math

def ford_bellman(W, start):
    N = len(W)
    F = [[math.inf] * N for i in range(N)]
    F[0][start] = 0
    for k in range(1, N):
        for i in range(N):
            F[k][i] = F[k - 1][i]
            for j in range(N):
                if F[k - 1][j] + W[j][i] < F[k][i]:
                    F[k][i] = F[k - 1][j] + W[j][i]
    return F

W = [[1, 2, 3],
     [1, 2, 9],
     [9, 5, 1]]
ford_bellman(W, 0)
```

# Assignment. The maze.

Find the way out of a maze using different algorithms and compare them. (Weight the edges in proportion to their length.)

```
# This is how you can add images
from IPython.display import Image  # import a specific function from the library

Image("Лабиринт.png")  # call the function, passing it the path to the file
                       # (here the file is in the same folder)
```

# Assignment. The electric scooter.

Yeremey has an electric scooter and wants to ride from home to his institute using as little energy as possible. The whole city lies on hilly terrain and is divided into squares. For every intersection, its height above sea level in metres is known. When riding from a higher intersection to an adjacent lower one, energy can be accumulated (charging the scooter); when riding the other way, the energy cost equals the height difference between the intersections. Help Yeremey plan a route so that he spends the least possible amount of energy getting from home to the institute, and determine that amount.
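One possible reading of the scooter problem (an assumption, since the statement leaves the battery model open): going uphill costs the height difference, while going downhill is free. Under that assumption every edge weight `max(0, Δh)` is non-negative, so Dijkstra's algorithm applies. A sketch on a hypothetical 3×3 grid of heights:

```python
import heapq

# Toy 3x3 grid of intersection heights (hypothetical data).
H = [[3, 5, 2],
     [4, 1, 6],
     [7, 2, 3]]

def min_energy(H, start, goal):
    # Dijkstra over grid cells; edge weight = max(0, uphill height difference).
    rows, cols = len(H), len(H[0])
    dist = {start: 0}
    pq = [(0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                w = max(0, H[nr][nc] - H[r][c])   # uphill costs, downhill is free
                if d + w < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = d + w
                    heapq.heappush(pq, (d + w, (nr, nc)))
    return None

print(min_energy(H, (0, 0), (2, 2)))   # 3
```

The cheapest route here descends into the low valley at height 1 and climbs out in small steps.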
# The Knapsack with Integer Weights problem

Given N items, each with a fixed weight $(w_\alpha \geq 0)$ and value $(c_\alpha \geq 0)$, the knapsack problem asks for a way to pack some of them that maximizes the total value $\displaystyle C \left(= \sum_{\alpha=1}^{N} c_\alpha x_\alpha \right)$ while keeping the total weight $\displaystyle W \left(= \sum_{\alpha=1}^{N} w_\alpha x_\alpha \right)$ at or below a given limit $W_{limit}$. Here we take the weights to be integers.

# Hamiltonian

The Hamiltonian follows this paper ( https://arxiv.org/abs/1302.5843 ):

$$H = H_A + H_B$$

$$\displaystyle H_A = A \left( 1 - \sum_{i=1}^{W_{limit}} y_i \right)^2 + A \left( \sum_{i=1}^{W_{limit}} i y_i - \sum_\alpha w_\alpha x_\alpha \right)^2$$

$$\displaystyle H_B = -B \sum_\alpha c_\alpha x_\alpha$$

$H_A$ is a constraint term that guarantees $W \leq W_{limit}$, and $H_B$ is the objective term that maximizes the total value (the larger the total value, the smaller $H_B$, and hence the smaller $H$).

## Details of the Hamiltonian

Two kinds of variables appear in the Hamiltonian: $x_\alpha$ and $y_i$.

$x_\alpha$ indicates whether item $\alpha$ goes into the knapsack; it is exactly the solution we want:

$ x_\alpha = \begin{cases} 1 & (\alpha \in knapsack) \\ 0 & (otherwise) \end{cases} $

$y_i$, on the other hand, encodes the total weight of the chosen items. Enforcing $W \leq W_{limit}$ requires the value of $W$, but $W$ is only known once the solution $x_\alpha$ is known (i.e., once the computation has finished). We therefore introduce auxiliary variables $y_i$ that are determined by $x_\alpha$:

$ y_i = \begin{cases} 1 & (i = W) \\ 0 & (i \neq W) \end{cases} $

$y_i$ has $W_{limit}$ components, and only the $W$-th one equals 1. The idea is that once the solution $\{x_\alpha\}$ is fixed, the total weight $W$ is fixed, and so is the index $i$ with $y_i = 1$. The larger $W_{limit}$ is, the larger $y_i$ becomes and the more auxiliary bits are needed.

$H_A$ expresses the constraint using these auxiliary variables. Its first term forces $y_i$ to be 1 at exactly one index $i = W$; the term then attains its minimum value 0. If several $y_i$ are 1, or all of them are 0 (which is what happens when $W \gt W_{limit}$), the term's value increases. In the second term, $\displaystyle \sum_{i=1}^{W_{limit}} i y_i$ and $\displaystyle \sum_\alpha w_\alpha x_\alpha$ both represent the total weight of the selected items. The former may look odd at first, but since $y_i = 1$ only at $i = W$, the sum is $0 + \dots + W + \dots + 0 = W$. The term then attains its minimum value 0. When $W \gt W_{limit}$, the first sum becomes $0 + \dots + 0 = 0$ and the term takes the value $W^2$ (i.e., it increases).

# Building the QUBO matrix

First we rewrite $H_A$ into a form that is easy to turn into code. Constant terms are irrelevant and have been dropped.

$H_A$

$\displaystyle = A \left\{ -2 \left( \sum_{i=1}^{W_{limit}} y_i \right) + \left( \sum_{i=1}^{W_{limit}} y_i \right)^2 + \left( \sum_{i=1}^{W_{limit}} i y_i \right)^2 - 2 \left( \sum_{i=1}^{W_{limit}} i y_i \right) \left( \sum_\alpha w_\alpha x_\alpha \right) + \left( \sum_\alpha w_\alpha x_\alpha \right)^2 \right\}$

$\displaystyle = A \left\{ \left( \sum_{i=1}^{W_{limit}} -2 y_i \right) + \left( \sum_{i=1}^{W_{limit}} y_i^2 \right) + \left( \mathop{\sum\sum}_{i \neq j}^{W_{limit}} 2 y_i y_j \right) + \left( \sum_{i=1}^{W_{limit}} i^2 y_i^2 \right) \right.$
$\displaystyle \quad \left. + \left( \mathop{\sum\sum}_{i \neq j}^{W_{limit}} 2 i j y_i y_j \right) + \left( \sum_{i=1}^{W_{limit}} \sum_\alpha \left( -2 i w_\alpha x_\alpha y_i \right) \right) + \left( \sum_\alpha w_\alpha^2 x_\alpha^2 \right) + \left( \sum_\alpha \sum_\beta 2 w_\alpha w_\beta x_\alpha x_\beta \right) \right\}$

$\displaystyle = A \left\{ \sum_\alpha w_\alpha^2 x_\alpha^2 + \sum_\alpha \sum_\beta 2 w_\alpha w_\beta x_\alpha x_\beta + \sum_\alpha \sum_{i=1}^{W_{limit}} \left( -2 w_\alpha i \right) x_\alpha y_i + \sum_{i=1}^{W_{limit}} \left( i^2 - 1 \right) y_i^2 + \mathop{\sum\sum}_{i \neq j}^{W_{limit}} 2 \left( 1 + ij \right) y_i y_j \right\}$

From here we actually solve the problem using blueqat. If you do not have blueqat installed, you can set it up as follows:

```
pip install blueqat
```

First, import the required libraries.

```
import numpy as np
import blueqat.wq as wq
```

We prepare a simple class for items, a function that computes the QUBO matrix, and functions for computing and displaying the answer.

```
class Item():
    def __init__(self, number, weight, cost):
        self.__number = number
        self.__weight = weight
        self.__cost = cost

    @property
    def weight(self):
        return self.__weight

    @property
    def cost(self):
        return self.__cost

    def __str__(self):
        return f"#{self.__number} (weight : {self.weight}, cost : {self.cost})"


def get_qubo(items, wlimit, A, B):
    # Build the QUBO matrix
    x_size = len(items)
    y_size = wlimit
    size = x_size + y_size
    qubo = np.zeros((size, size))
    for i in range(0, size):
        for j in range(0, size):
            # The lower triangle of the matrix can stay 0
            if i > j:
                continue
            wi = items[i].weight if i < x_size else 0  # the 0 case is never used, so the value itself is meaningless
            wj = items[j].weight if j < x_size else 0  # same as above
            ci = items[i].cost if i < x_size else 0    # same as above
            wsum_i = i - x_size + 1
            wsum_j = j - x_size + 1
            # Diagonal entries
            if i == j:
                if i < x_size:
                    # xi*xi
                    qubo[i][i] = A * wi ** 2 - B * ci
                else:
                    # yi*yi
                    qubo[i][i] = A * (-1 + wsum_i * wsum_j)
            # Off-diagonal entries
            else:  # i < j
                if i < x_size and j < x_size:
                    # xi*xj
                    qubo[i][j] = 2 * A * wi * wj
                elif i < x_size and j >= x_size:
                    # xi*yj
                    qubo[i][j] = -2 * A * wi * wsum_j
                else:
                    # yi*yj
                    qubo[i][j] = 2 * A * (1 + wsum_i * wsum_j)
    return qubo


def show_answer(q, items):
    print(q)
    answers = []
    weight = 0
    cost = 0
    for i in range(len(items)):
        if q[i] > 0:
            answers.append(items[i])
            weight += items[i].weight
            cost += items[i].cost
    for answer in answers:
        print(f"selected : {answer}")
    print(f"total weight : {weight}")
    print(f"total cost : {cost}")
    print("")


def annealing(annealer):
    # Optimal solution found by exhaustive search
    answer_opt = [0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0]
    count = 0
    found = False
    for _ in range(100):
        q = annealer.sa()
        count += 1
        print(f"count : {count}")
        show_answer(q, items)
        if q == answer_opt:
            print(f"found in {count} times")
            found = True
            break
    if found == False:
        print("optimum solution has not been found")
```

We prepare a list of items: two each of light and cheap (#0, #1), light and fairly expensive (#2, #3), middle-of-the-road (#4, #5), heavy and cheap (#6, #7), and heavy and expensive (#8, #9). We expect the cost-effective #2 and #3 to be picked often, and the cost-ineffective #6 and #7 to be picked rarely.

```
items = []
items.append(Item(number=0, weight=1, cost=10))
items.append(Item(number=1, weight=2, cost=15))
items.append(Item(number=2, weight=2, cost=55))
items.append(Item(number=3, weight=3, cost=50))
items.append(Item(number=4, weight=4, cost=40))
items.append(Item(number=5, weight=5, cost=50))
items.append(Item(number=6, weight=7, cost=30))
items.append(Item(number=7, weight=8, cost=35))
items.append(Item(number=8, weight=7, cost=60))
items.append(Item(number=9, weight=8, cost=80))
```

We choose $A$, $B$, and $W_{limit}$ and solve the problem.

```
# Weight limit (problem setting)
wlimit = 10

# Determine the coefficient B from the maximum cost
A = 1
cmax = max(items, key=lambda item: item.cost).cost
B = A / cmax * 0.9

# SA
annealer = wq.Opt()
annealer.qubo = get_qubo(items, wlimit, A, B)
annealing(annealer)
```

If you actually run this program, you will find that even after 100 runs it is rare to obtain a solution that matches perfectly, auxiliary variables included. Looking closely, the returned solutions often exceed the weight limit $W_{limit}$. Without going into detail, this can be somewhat improved by balancing the two terms of $H_A$ (giving each its own coefficient instead of using $A$ for both) so that the first constraint term of $H_A$ becomes relatively stronger.
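With only 10 items there are just $2^{10}$ subsets, so the exhaustive-search optimum that `annealing` compares against can be recomputed directly. A brute-force sketch, restating the item data from the cells above as `(weight, cost)` pairs:

```python
from itertools import combinations

# Item data restated from the cells above: (weight, cost) for items #0..#9.
items = [(1, 10), (2, 15), (2, 55), (3, 50), (4, 40),
         (5, 50), (7, 30), (8, 35), (7, 60), (8, 80)]
wlimit = 10

best_cost, best_set = 0, ()
for r in range(len(items) + 1):
    for subset in combinations(range(len(items)), r):
        weight = sum(items[i][0] for i in subset)
        cost = sum(items[i][1] for i in subset)
        if weight <= wlimit and cost > best_cost:
            best_cost, best_set = cost, subset

print(best_cost, best_set)   # 155 (2, 3, 5)
```

Note that this finds a selection of cost 155 within the weight limit (items #2, #3, #5, weight 10), which suggests the hardcoded `answer_opt` above (items #2, #3, #4, cost 145) may not actually be the true optimum.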
# Import Libraries and Dataset

```
# Importing libraries
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.neighbors import KNeighborsClassifier, LocalOutlierFactor, NeighborhoodComponentsAnalysis
from sklearn.decomposition import PCA

# warning library
import warnings
warnings.filterwarnings("ignore")

# Importing the data set
data = pd.read_csv("cancer.csv")
```

# Descriptive Statistics

```
# Preview data
data.head()

# Dataset dimensions - (rows, columns)
data.shape

# Feature data types
data.info()

# Statistical summary
data.describe().T

# Count of null values
data.isnull().sum()
```

## Observations:

1. There are a total of 569 records and 33 features in the dataset.
2. Each feature can be of integer, float or object datatype.
3. There are zero NaN values in the dataset.
4. In the outcome column, M represents malignant cancer and B represents benign cancer.

# Data Preprocessing

```
data.drop(["Unnamed: 32", "id"], inplace=True, axis=1)
data = data.rename(columns={"diagnosis": "target"})

# Data count plot
sns.countplot(data["target"])
print(data.target.value_counts())

# Encode the target feature as 0 and 1
data["target"] = [1 if i.strip() == "M" else 0 for i in data.target]
```

# Exploratory Data Analysis

```
# Correlation
corr_matrix = data.corr()
sns.clustermap(corr_matrix, annot=True, fmt=".2f")
plt.title("Correlation Between Features")
plt.show()

# Correlation with threshold 0.75
threshold = 0.75
filtre = np.abs(corr_matrix["target"]) > threshold
corr_features = corr_matrix.columns[filtre].tolist()
sns.clustermap(data[corr_features].corr(), annot=True, fmt=".2f")
plt.title("Correlation Between Features with Corr Threshold 0.75")
```

There are some correlated features.
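The threshold filter above keeps only the features whose absolute correlation with `target` exceeds 0.75. The same idea on a tiny synthetic example with plain NumPy (hypothetical data; `np.corrcoef` instead of `DataFrame.corr`):

```python
import numpy as np

# Tiny synthetic stand-in for the cancer data: one informative feature,
# one pure-noise feature, plus the target itself (hypothetical data).
rng = np.random.default_rng(0)
n = 500
signal = rng.normal(size=n)
noise = rng.normal(size=n)
target = signal + 0.1 * rng.normal(size=n)   # strongly tied to `signal`

X = np.column_stack([signal, noise, target])
names = ["signal", "noise", "target"]

corr = np.corrcoef(X, rowvar=False)          # 3x3 correlation matrix
threshold = 0.75
# Keep features whose |correlation with target| exceeds the threshold
keep = [names[i] for i in range(len(names)) if abs(corr[i, -1]) > threshold]
print(keep)   # ['signal', 'target']
```

As in the notebook, the target always passes (it correlates 1.0 with itself), the informative feature passes, and the noise feature is dropped.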
```
# Pair plot
sns.pairplot(data[corr_features], diag_kind="kde", markers="+", hue="target")
plt.show()
```

There is skewness.

# Outlier Detection

```
y = data.target
x = data.drop(["target"], axis=1)
columns = x.columns.tolist()

clf = LocalOutlierFactor()
isOutlier = clf.fit_predict(x)
Xscore = clf.negative_outlier_factor_
outlier_score = pd.DataFrame(Xscore, columns=["score"])

# Threshold for outliers
threshold = -2.5
filt = outlier_score["score"] < threshold
outlier_index = outlier_score[filt].index.tolist()

# Drop outliers
x = x.drop(outlier_index)
y = y.drop(outlier_index).values
```

# Train Test Split

```
test_size = 0.3
X_train, X_test, Y_train, Y_test = train_test_split(x, y, test_size=test_size, random_state=42)
```

# Standardization

```
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

X_train_df = pd.DataFrame(X_train, columns=columns)
X_train_df["target"] = Y_train

# Box plot
data_melted = pd.melt(X_train_df, id_vars="target", var_name="features", value_name="value")
plt.figure(figsize=(12, 6))
sns.boxplot(x="features", y="value", hue="target", data=data_melted)
plt.xticks(rotation=90)
plt.show()
```

# Basic KNN Method

```
knn = KNeighborsClassifier(n_neighbors=2)
knn.fit(X_train, Y_train)
y_pred = knn.predict(X_test)
cm = confusion_matrix(Y_test, y_pred)
acc = accuracy_score(Y_test, y_pred)
print("CM: ", cm)
print("Basic KNN Acc: ", acc)
```

# Choose Best Parameters

```
def KNN_Best_Params(x_train, x_test, y_train, y_test):
    k_range = list(range(1, 31))
    weight_options = ["uniform", "distance"]
    p = [1, 2]
    print()
    param_grid = dict(n_neighbors=k_range, weights=weight_options, p=p)

    knn = KNeighborsClassifier()
    grid = GridSearchCV(knn, param_grid, cv=10, scoring="accuracy")
    grid.fit(x_train, y_train)

    print("Best training score: {} with parameters: {}".format(grid.best_score_, grid.best_params_))
    print()

    knn = KNeighborsClassifier(**grid.best_params_)
    knn.fit(x_train, y_train)

    y_pred_test = knn.predict(x_test)
    y_pred_train = knn.predict(x_train)

    cm_test = confusion_matrix(y_test, y_pred_test)
    cm_train = confusion_matrix(y_train, y_pred_train)

    acc_test = accuracy_score(y_test, y_pred_test)
    acc_train = accuracy_score(y_train, y_pred_train)
    print("Test Score: {}, Train Score: {}".format(acc_test, acc_train))
    print()
    print("CM test: ", cm_test)
    print("CM train: ", cm_train)

    return grid

grid = KNN_Best_Params(X_train, X_test, Y_train, Y_test)
```

# PCA

```
# Since PCA is unsupervised, the scaling is done on all of the data
scaler = StandardScaler()
x_scaled = scaler.fit_transform(x)

pca = PCA(n_components=2)
pca.fit(x_scaled)
X_reduced_pca = pca.transform(x_scaled)

pca_data = pd.DataFrame(X_reduced_pca, columns=["p1", "p2"])
pca_data["target"] = y
plt.subplots(figsize=(10, 10))
sns.scatterplot(x="p1", y="p2", hue="target", data=pca_data)
plt.title("PCA: p1 vs p2")

# Proportion of the variance explained by each component
print(pca.explained_variance_ratio_)
# Total fraction of the variance explained by the two components
print(sum(pca.explained_variance_ratio_))

# KNN with reduced dimensionality
X_train_pca, X_test_pca, Y_train_pca, Y_test_pca = train_test_split(X_reduced_pca, y, test_size=test_size, random_state=42)
grid_pca = KNN_Best_Params(X_train_pca, X_test_pca, Y_train_pca, Y_test_pca)
```

# NCA

```
nca = NeighborhoodComponentsAnalysis(n_components=2, random_state=42)
nca.fit(x_scaled, y)
X_reduced_nca = nca.transform(x_scaled)

nca_data = pd.DataFrame(X_reduced_nca, columns=["p1", "p2"])
nca_data["target"] = y
plt.subplots(figsize=(10, 10))
sns.scatterplot(x="p1", y="p2", hue="target", data=nca_data)
plt.title("NCA: p1 vs p2")

X_train_nca, X_test_nca, Y_train_nca, Y_test_nca = train_test_split(X_reduced_nca, y, test_size=test_size, random_state=42)
grid_nca = KNN_Best_Params(X_train_nca, X_test_nca, Y_train_nca, Y_test_nca)
```
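Both PCA and NCA above learn a 2-dimensional projection before running KNN. What `PCA(n_components=2)` computes can be sketched with plain NumPy (centering plus SVD; sklearn's implementation adds more options, so this is only a conceptual equivalent):

```python
import numpy as np

# Synthetic correlated data standing in for the scaled cancer features.
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 5))

# PCA by hand: center, SVD, project onto the top-2 right singular vectors.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X2 = Xc @ Vt[:2].T                 # same shape as PCA(2).fit_transform would give

explained = (S ** 2) / (S ** 2).sum()   # explained-variance ratio per component
print(X2.shape)                         # (100, 2)
print(explained[:2].sum())              # fraction of variance kept by 2 components
```

This makes explicit what `explained_variance_ratio_` reports in the notebook: the share of total variance carried by each retained component.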
# Lesson 2 Exercise 2: Creating Denormalized Tables

<img src="images/postgresSQLlogo.png" width="250" height="250">

## Walk through the basics of modeling data from normalized form to denormalized form. We will create tables in PostgreSQL, insert rows of data, and do simple JOIN SQL queries to show how these multiple tables can work together.

#### Where you see ##### you will need to fill in code. This exercise will be more challenging than the last. Use the information provided to create the tables and write the insert statements.

#### Remember the examples shown are simple, but imagine these situations at scale with large datasets, many users, and the need for quick response time.

Note: __Do not__ click the blue Preview button in the lower task bar

### Import the library

Note: An error might pop up after this command has executed. If it does, read it carefully before ignoring it.

```
import psycopg2
```

### Create a connection to the database, get a cursor, and set autocommit to true

```
try: conn = psycopg2.connect("host=127.0.0.1 dbname=studentdb user=student password=student") except psycopg2.Error as e: print("Error: Could not make connection to the Postgres database") print(e) try: cur = conn.cursor() except psycopg2.Error as e: print("Error: Could not get cursor to the Database") print(e) conn.set_session(autocommit=True)
```

#### Let's start with our normalized (3NF) database set of tables we had in the last exercise, but we have added a new table `sales`.
`Table Name: transactions2 column 0: transaction Id column 1: Customer Name column 2: Cashier Id column 3: Year ` `Table Name: albums_sold column 0: Album Id column 1: Transaction Id column 3: Album Name` `Table Name: employees column 0: Employee Id column 1: Employee Name ` `Table Name: sales column 0: Transaction Id column 1: Amount Spent ` <img src="images/table16.png" width="450" height="450"> <img src="images/table15.png" width="450" height="450"> <img src="images/table17.png" width="350" height="350"> <img src="images/table18.png" width="350" height="350"> ### TO-DO: Add all Create statements for all Tables and Insert data into the tables ``` # TO-DO: Add all Create statements for all tables try: cur.execute("#####") except psycopg2.Error as e: print("Error: Issue creating table") print (e) try: cur.execute("#####") except psycopg2.Error as e: print("Error: Issue creating table") print (e) try: cur.execute("#####") except psycopg2.Error as e: print("Error: Issue creating table") print (e) try: cur.execute("#####") except psycopg2.Error as e: print("Error: Issue creating table") print (e) # TO-DO: Insert data into the tables try: cur.execute("INSERT INTO ##### (transaction_id, customer_name, cashier_id, year) \ VALUES (%s, %s, %s, %s)", \ (1, "Amanda", 1, 2000)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO ##### (transaction_id, customer_name, cashier_id, year) \ VALUES (%s, %s, %s, %s)", \ (2, "Toby", 1, 2000)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO ##### (transaction_id, customer_name, cashier_id, year) \ VALUES (%s, %s, %s, %s)", \ (3, "Max", 2, 2018)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO ##### (album_id, transaction_id, album_name) \ VALUES (%s, %s, %s)", \ (1, 1, "Rubber Soul")) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO ##### 
(album_id, transaction_id, album_name) \ VALUES (%s, %s, %s)", \ (2, 1, "Let It Be")) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO ##### (album_id, transaction_id, album_name) \ VALUES (%s, %s, %s)", \ (3, 2, "My Generation")) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO ##### (album_id, transaction_id, album_name) \ VALUES (%s, %s, %s)", \ (4, 3, "Meet the Beatles")) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO ##### (album_id, transaction_id, album_name) \ VALUES (%s, %s, %s)", \ (5, 3, "Help!")) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO ##### (employee_id, employee_name) \ VALUES (%s, %s)", \ (1, "Sam")) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO ##### (employee_id, employee_name) \ VALUES (%s, %s)", \ (2, "Bob")) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO ##### (transaction_id, amount_spent) \ VALUES (%s, %s)", \ (1, 40)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO ##### (transaction_id, amount_spent) \ VALUES (%s, %s)", \ (2, 19)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO ##### (transaction_id, amount_spent) \ VALUES (%s, %s)", \ (3, 45)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) ``` #### TO-DO: Confirm using the Select statement the data were added correctly ``` print("Table: #####\n") try: cur.execute("SELECT * FROM #####;") except psycopg2.Error as e: print("Error: select *") print (e) row = cur.fetchone() while row: print(row) row = cur.fetchone() print("\nTable: #####\n") try: cur.execute("SELECT * FROM #####;") except psycopg2.Error as e: print("Error: select *") print (e) row = cur.fetchone() 
while row: print(row) row = cur.fetchone() print("\nTable: #####\n") try: cur.execute("SELECT * FROM #####;") except psycopg2.Error as e: print("Error: select *") print (e) row = cur.fetchone() while row: print(row) row = cur.fetchone() print("\nTable: #####\n") try: cur.execute("SELECT * FROM #####;") except psycopg2.Error as e: print("Error: select *") print (e) row = cur.fetchone() while row: print(row) row = cur.fetchone() ``` ### Let's say you need to do a query that gives: `transaction_id customer_name cashier name year albums sold amount sold` ### TO-DO: Complete the statement below to perform a 3 way `JOIN` on the 4 tables you have created. ``` try: cur.execute("#####") except psycopg2.Error as e: print("Error: select *") print (e) row = cur.fetchone() while row: print(row) row = cur.fetchone() ``` #### Great we were able to get the data we wanted. ### But, we had to perform a 3 way `JOIN` to get there. While it's great we had that flexibility, we need to remember that `JOINS` are slow and if we have a read heavy workload that required low latency queries we want to reduce the number of `JOINS`. Let's think about denormalizing our normalized tables. ### With denormalization you want to think about the queries you are running and how to reduce the number of JOINS even if that means duplicating data. The following are the queries you need to run. #### Query 1 : `select transaction_id, customer_name, amount_spent FROM <min number of tables>` It should generate the amount spent on each transaction #### Query 2: `select cashier_name, SUM(amount_spent) FROM <min number of tables> GROUP BY cashier_name` It should generate the total sales by cashier ### Query 1: `select transaction_id, customer_name, amount_spent FROM <min number of tables>` One way to do this would be to do a JOIN on the `sales` and `transactions2` table but we want to minimize the use of `JOINS`. 
To reduce the number of tables, first add `amount_spent` to the `transactions` table so that you will not need to do a JOIN at all.

`Table Name: transactions column 0: transaction Id column 1: Customer Name column 2: Cashier Id column 3: Year column 4: amount_spent`

<img src="images/table19.png" width="450" height="450">

### TO-DO: Add the tables as part of the denormalization process

```
# TO-DO: Create all tables try: cur.execute("#####") except psycopg2.Error as e: print("Error: Issue creating table") print (e) #Insert data into all tables try: cur.execute("INSERT INTO transactions (#####) \ VALUES (%s, %s, %s, %s, %s)", \ (#####)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO transactions (#####) \ VALUES (%s, %s, %s, %s, %s)", \ (#####)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO transactions (#####) \ VALUES (%s, %s, %s, %s, %s)", \ (#####)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e)
```

### Now you should be able to do a simplified query to get the information you need. No `JOIN` is needed.

```
try: cur.execute("#####") except psycopg2.Error as e: print("Error: select *") print (e) row = cur.fetchone() while row: print(row) row = cur.fetchone()
```

#### Your output for the above cell should be the following: (1, 'Amanda', 40)<br> (2, 'Toby', 19)<br> (3, 'Max', 45)

### Query 2: `select cashier_name, SUM(amount_spent) FROM <min number of tables> GROUP BY cashier_name`

To avoid using any `JOINS`, first create a new table with just the information we need.

`Table Name: cashier_sales col: Transaction Id Col: Cashier Name Col: Cashier Id col: Amount_Spent `

<img src="images/table20.png" width="350" height="350">

### TO-DO: Create a new table with just the information you need.
```
# Create the tables try: cur.execute("#####") except psycopg2.Error as e: print("Error: Issue creating table") print (e) #Insert into all tables try: cur.execute("INSERT INTO ##### (#####) \ VALUES (%s, %s, %s, %s)", \ (##### )) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO ##### (#####) \ VALUES (%s, %s, %s, %s)", \ (##### )) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO ##### (#####) \ VALUES (%s, %s, %s, %s)", \ (#####)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e)
```

### Run the query

```
try: cur.execute("#####") except psycopg2.Error as e: print("Error: select *") print (e) row = cur.fetchone() while row: print(row) row = cur.fetchone()
```

#### Your output for the above cell should be the following: ('Sam', 59)<br> ('Max', 45)

#### We have successfully taken normalized tables and denormalized them in order to speed up performance and allow for simpler queries to be executed.

### Drop the tables

```
try: cur.execute("DROP table #####") except psycopg2.Error as e: print("Error: Dropping table") print (e) try: cur.execute("DROP table #####") except psycopg2.Error as e: print("Error: Dropping table") print (e) try: cur.execute("DROP table #####") except psycopg2.Error as e: print("Error: Dropping table") print (e) try: cur.execute("DROP table #####") except psycopg2.Error as e: print("Error: Dropping table") print (e) try: cur.execute("DROP table #####") except psycopg2.Error as e: print("Error: Dropping table") print (e) try: cur.execute("DROP table #####") except psycopg2.Error as e: print("Error: Dropping table") print (e)
```

### And finally close your cursor and connection.

```
cur.close() conn.close()
```
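For readers without a Postgres instance at hand, the Query 1 denormalization above can be sketched with Python's built-in `sqlite3` (swapped in for psycopg2 purely for illustration); the table layout and rows mirror the exercise data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Denormalized table: amount_spent lives directly on the transaction row,
# so Query 1 needs no JOIN at all.
cur.execute("""CREATE TABLE transactions (
    transaction_id INT, customer_name TEXT,
    cashier_id INT, year INT, amount_spent INT)""")
cur.executemany(
    "INSERT INTO transactions VALUES (?, ?, ?, ?, ?)",
    [(1, "Amanda", 1, 2000, 40), (2, "Toby", 1, 2000, 19), (3, "Max", 2, 2018, 45)])

cur.execute("SELECT transaction_id, customer_name, amount_spent FROM transactions")
rows = cur.fetchall()
print(rows)   # [(1, 'Amanda', 40), (2, 'Toby', 19), (3, 'Max', 45)]
conn.close()
```

The output matches the expected result listed in the exercise, with no JOIN in sight.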
##### Copyright 2019 The TensorFlow Authors. ``` #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # Post-training dynamic range quantization <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/lite/performance/post_training_quant"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/post_training_quant.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/post_training_quant.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/lite/g3doc/performance/post_training_quant.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> ## Overview [TensorFlow Lite](https://www.tensorflow.org/lite/) now supports converting weights to 8 bit precision as part of model conversion from tensorflow graphdefs to TensorFlow Lite's flat buffer format. Dynamic range quantization achieves a 4x reduction in the model size. 
In addition, TFLite supports on the fly quantization and dequantization of activations to allow for: 1. Using quantized kernels for faster implementation when available. 2. Mixing of floating-point kernels with quantized kernels for different parts of the graph. The activations are always stored in floating point. For ops that support quantized kernels, the activations are quantized to 8 bits of precision dynamically prior to processing and are de-quantized to float precision after processing. Depending on the model being converted, this can give a speedup over pure floating point computation. In contrast to [quantization aware training](https://github.com/tensorflow/tensorflow/tree/r1.14/tensorflow/contrib/quantize) , the weights are quantized post training and the activations are quantized dynamically at inference in this method. Therefore, the model weights are not retrained to compensate for quantization induced errors. It is important to check the accuracy of the quantized model to ensure that the degradation is acceptable. This tutorial trains an MNIST model from scratch, checks its accuracy in TensorFlow, and then converts the model into a Tensorflow Lite flatbuffer with dynamic range quantization. Finally, it checks the accuracy of the converted model and compare it to the original float model. ## Build an MNIST model ### Setup ``` import logging logging.getLogger("tensorflow").setLevel(logging.DEBUG) import tensorflow as tf from tensorflow import keras import numpy as np import pathlib ``` ### Train a TensorFlow model ``` # Load MNIST dataset mnist = keras.datasets.mnist (train_images, train_labels), (test_images, test_labels) = mnist.load_data() # Normalize the input image so that each pixel value is between 0 to 1. 
train_images = train_images / 255.0 test_images = test_images / 255.0 # Define the model architecture model = keras.Sequential([ keras.layers.InputLayer(input_shape=(28, 28)), keras.layers.Reshape(target_shape=(28, 28, 1)), keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation=tf.nn.relu), keras.layers.MaxPooling2D(pool_size=(2, 2)), keras.layers.Flatten(), keras.layers.Dense(10) ]) # Train the digit classification model model.compile(optimizer='adam', loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) model.fit( train_images, train_labels, epochs=1, validation_data=(test_images, test_labels) ) ``` Since you trained the model for only a single epoch, it only reaches ~96% accuracy. ### Convert to a TensorFlow Lite model Using the Python [TFLiteConverter](https://www.tensorflow.org/lite/convert/python_api), you can now convert the trained model into a TensorFlow Lite model. Load the model using the `TFLiteConverter`: ``` converter = tf.lite.TFLiteConverter.from_keras_model(model) tflite_model = converter.convert() ``` Write it out to a tflite file: ``` tflite_models_dir = pathlib.Path("/tmp/mnist_tflite_models/") tflite_models_dir.mkdir(exist_ok=True, parents=True) tflite_model_file = tflite_models_dir/"mnist_model.tflite" tflite_model_file.write_bytes(tflite_model) ``` To quantize the model on export, set the `optimizations` flag to optimize for size: ``` converter.optimizations = [tf.lite.Optimize.DEFAULT] tflite_quant_model = converter.convert() tflite_model_quant_file = tflite_models_dir/"mnist_model_quant.tflite" tflite_model_quant_file.write_bytes(tflite_quant_model) ``` Note how the resulting file is approximately `1/4` the size: ``` !ls -lh {tflite_models_dir} ``` ## Run the TFLite models Run the TensorFlow Lite model using the Python TensorFlow Lite Interpreter.
### Load the model into an interpreter ``` interpreter = tf.lite.Interpreter(model_path=str(tflite_model_file)) interpreter.allocate_tensors() interpreter_quant = tf.lite.Interpreter(model_path=str(tflite_model_quant_file)) interpreter_quant.allocate_tensors() ``` ### Test the model on one image ``` test_image = np.expand_dims(test_images[0], axis=0).astype(np.float32) input_index = interpreter.get_input_details()[0]["index"] output_index = interpreter.get_output_details()[0]["index"] interpreter.set_tensor(input_index, test_image) interpreter.invoke() predictions = interpreter.get_tensor(output_index) import matplotlib.pylab as plt plt.imshow(test_images[0]) template = "True:{true}, predicted:{predict}" _ = plt.title(template.format(true= str(test_labels[0]), predict=str(np.argmax(predictions[0])))) plt.grid(False) ``` ### Evaluate the models ``` # A helper function to evaluate the TF Lite model using "test" dataset. def evaluate_model(interpreter): input_index = interpreter.get_input_details()[0]["index"] output_index = interpreter.get_output_details()[0]["index"] # Run predictions on every image in the "test" dataset. prediction_digits = [] for test_image in test_images: # Pre-processing: add batch dimension and convert to float32 to match with # the model's input data format. test_image = np.expand_dims(test_image, axis=0).astype(np.float32) interpreter.set_tensor(input_index, test_image) # Run inference. interpreter.invoke() # Post-processing: remove batch dimension and find the digit with highest # probability. output = interpreter.tensor(output_index) digit = np.argmax(output()[0]) prediction_digits.append(digit) # Compare prediction results with ground truth labels to calculate accuracy. 
accurate_count = 0 for index in range(len(prediction_digits)): if prediction_digits[index] == test_labels[index]: accurate_count += 1 accuracy = accurate_count * 1.0 / len(prediction_digits) return accuracy print(evaluate_model(interpreter)) ``` Repeat the evaluation on the dynamic range quantized model to obtain: ``` print(evaluate_model(interpreter_quant)) ``` In this example, the compressed model shows no difference in accuracy. ## Optimizing an existing model ResNets with pre-activation layers (ResNet-v2) are widely used for vision applications. A pre-trained frozen graph for resnet-v2-101 is available on [TensorFlow Hub](https://tfhub.dev/google/imagenet/resnet_v2_101/classification/4). You can convert the frozen graph to a TensorFlow Lite flatbuffer with quantization by: ``` import tensorflow_hub as hub resnet_v2_101 = tf.keras.Sequential([ keras.layers.InputLayer(input_shape=(224, 224, 3)), hub.KerasLayer("https://tfhub.dev/google/imagenet/resnet_v2_101/classification/4") ]) converter = tf.lite.TFLiteConverter.from_keras_model(resnet_v2_101) # Convert to TF Lite without quantization resnet_tflite_file = tflite_models_dir/"resnet_v2_101.tflite" resnet_tflite_file.write_bytes(converter.convert()) # Convert to TF Lite with quantization converter.optimizations = [tf.lite.Optimize.DEFAULT] resnet_quantized_tflite_file = tflite_models_dir/"resnet_v2_101_quantized.tflite" resnet_quantized_tflite_file.write_bytes(converter.convert()) !ls -lh {tflite_models_dir}/*.tflite ``` The model size reduces from 171 MB to 43 MB. The accuracy of this model on ImageNet can be evaluated using the scripts provided for [TFLite accuracy measurement](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/tools/accuracy/ilsvrc). The optimized model's top-1 accuracy is 76.8, the same as the floating-point model.
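The accuracy bookkeeping in `evaluate_model` above reduces to comparing argmax predictions against the ground-truth labels. A plain-NumPy sketch of that step (toy logits, purely illustrative):

```python
import numpy as np

def accuracy_from_logits(logits, labels):
    """Fraction of rows whose argmax matches the ground-truth label."""
    preds = np.argmax(logits, axis=1)
    return float(np.mean(preds == np.asarray(labels)))

logits = np.array([[0.1, 2.0, 0.3],   # predicts class 1
                   [1.5, 0.2, 0.1],   # predicts class 0
                   [0.0, 0.1, 3.0]])  # predicts class 2
print(accuracy_from_logits(logits, [1, 0, 1]))  # 2 of 3 correct
```

This is the metric worth comparing between the float and quantized interpreters: any quantization-induced degradation shows up directly as a drop in this number.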
``` import pandas as pd import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.metrics.pairwise import cosine_similarity df = pd.read_csv("complete_course_data.csv") df.head() df features = ["course_title","platform","level"] def combine_features(row): return row['course_title']+" "+row['platform']+" "+row['level'] for feature in features: df[feature] = df[feature].fillna('') #filling all NaNs with a blank string df["combined_features"] = df.apply(combine_features,axis=1) #applying combine_features() over each row of the dataframe and storing the combined string in the "combined_features" column df.head() df.iloc[0].combined_features cv = CountVectorizer() #creating a new CountVectorizer() object count_matrix = cv.fit_transform(df["combined_features"]) #feeding the combined strings (course contents) to the CountVectorizer() object cosine_sim = cosine_similarity(count_matrix) #cosine similarity matrix for the count matrix #functions to get a course title from a course index and vice-versa. def get_title_from_index(index): return df[df.index == index]["course_title"].values[0] def get_index_from_title(title): return df[df.course_title == title]["index"].values[0] ``` # Tip: add a title field and a keywords field to the data for improvement # Our next step is to get the title of the course that the user currently likes. Then we will find the index of that title. After that, we will access the row corresponding to this title in the similarity matrix. Thus, we will get the similarity scores of all other titles relative to the current course. Then we will enumerate through all the similarity scores of that course to make a tuple of course index and similarity score. This will convert a row of similarity scores like this- [1 0.5 0.2 0.9] to this- [(0, 1) (1, 0.5) (2, 0.2) (3, 0.9)] . Here, each item is in this form- (course index, similarity score).
``` course_user_likes = "Ultimate Investment Banking Course" # here take the user's input course_index = get_index_from_title(course_user_likes) similar_courses = list(enumerate(cosine_sim[course_index])) #accessing the row corresponding to the given course to find all the similarity scores for that course and then enumerating over it ``` # We will sort the list similar_courses according to similarity scores in descending order. Since the most similar course to a given course is the course itself, we discard the first element after sorting. ``` sorted_similar_courses = sorted(similar_courses,key=lambda x:x[1],reverse=True)[1:] #we will run a loop to print the first 5 entries from the sorted_similar_courses list. i=0 print("Top 5 similar courses to "+course_user_likes+" are:\n") for element in sorted_similar_courses: print(get_title_from_index(element[0])) i=i+1 if i>=5: break ```
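Under the hood, `cosine_similarity` on the count matrix is just a normalized dot product of token counts. A self-contained sketch with made-up course strings (the titles here are hypothetical, for illustration only):

```python
import numpy as np
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two term-count dictionaries."""
    vocab = sorted(set(a) | set(b))
    va = np.array([a.get(t, 0) for t in vocab], dtype=float)
    vb = np.array([b.get(t, 0) for t in vocab], dtype=float)
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

c1 = Counter("python for data science udemy beginner".split())
c2 = Counter("python for machine learning udemy beginner".split())
c3 = Counter("watercolor painting basics skillshare beginner".split())

# Courses sharing more tokens score higher, which is exactly what
# sorting a row of cosine_sim in descending order exploits.
print(cosine(c1, c2) > cosine(c1, c3))  # True
```

This is why combining `course_title`, `platform`, and `level` into one string works: every shared token contributes to the dot product.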
# Importing the libraries ``` %matplotlib inline import IPython.display as ipd import matplotlib.pyplot as plt import numpy as np import pandas as pd from sklearn.linear_model import LogisticRegression from sklearn.metrics import confusion_matrix, roc_curve, auc from sklearn.model_selection import train_test_split, cross_val_score from sklearn.ensemble import RandomForestClassifier from sklearn.svm import SVC from xgboost import XGBClassifier ``` # Loading the data ``` original_data = data = pd.read_csv('../data/heart.csv') data.info() data.head(10) data.tail(10) data.describe() ``` # Splitting the Data ``` y = original_data['target'] X = original_data.drop(columns=['target']) X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, test_size=.2, random_state=50) data = pd.concat([X_train, y_train], axis=1) # for EDA data.head() ``` # EDA ``` data.pivot_table(index='sex', values='target').plot.bar() plt.xticks(rotation='horizontal') plt.title('Heart Disease frequency by Sex') plt.show() sick = data[data['target'] == 1] not_sick = data[data['target'] == 0] sick['age'].plot.hist(color='red', alpha=0.5, density=True) not_sick['age'].plot.hist(color='blue', alpha=0.5, density=True) plt.xlabel('Age') plt.legend(['Heart Disease', 'No Heart Disease']) plt.show() cut_points = [18, 44.5, 54.5, 64.5, 100] labels = ['18-44', '45-54', '55-64', '65+'] data['age categories'] = pd.cut(data['age'], cut_points, labels=labels) data['age categories'].value_counts() data.pivot_table(index='age categories', values='target').plot.bar() plt.xticks(rotation='horizontal') plt.title('Heart Disease frequency by Age') plt.show() data.pivot_table(index='cp', values='target').plot.bar() plt.xticks(rotation='horizontal') plt.title('Heart Disease frequency by Chest Pain') plt.show() sick['trestbps'].plot.hist(color='red', alpha=0.5, density=True) not_sick['trestbps'].plot.hist(color='blue', alpha=0.5, density=True) plt.xlabel('Resting blood pressure') plt.legend(['Heart Disease', 'No 
Heart Disease']) plt.show() cut_points = [0, 120, 129.5, 139.5, 179.5, 220] labels = ['<120', '120-129', '130-139', '140-179', '>180'] data['trestbps categories'] = pd.cut(data['trestbps'], cut_points, labels=labels) data['trestbps categories'].value_counts() data.pivot_table(index='trestbps categories', values='target').plot.bar() plt.xticks(rotation='horizontal') plt.title('Heart Disease frequency by Resting Blood Pressure') plt.show() sick['chol'].plot.hist(color='red', alpha=0.5, density=True) not_sick['chol'].plot.hist(color='blue', alpha=0.5, density=True) plt.xlabel('Cholestrol') plt.legend(['Heart Disease', 'No Heart Disease']) plt.show() cut_points = [0, 99.5, 199.5, 299.5, 600] labels = ['<100', '100-200', '200-300', '>300'] data['chol categories'] = pd.cut(data['chol'], cut_points, labels=labels) data['chol categories'].value_counts() data.pivot_table(index='chol categories', values='target').plot.bar() plt.xticks(rotation='horizontal') plt.title('Heart Disease frequency by Cholestrol') plt.show() data.pivot_table(index='fbs', values='target').plot.bar() plt.xticks(rotation='horizontal') plt.title('Heart Disease frequency by Fasting Blood Sugar') plt.show() data.pivot_table(index='restecg', values='target').plot.bar() plt.xticks(rotation='horizontal') plt.title('Heart Disease frequency by Rest ECG') plt.show() sick['thalach'].plot.hist(color='red', alpha=0.5, density=True) not_sick['thalach'].plot.hist(color='blue', alpha=0.5, density=True) plt.xlabel('Max Heart Rate') plt.legend(['Heart Disease', 'No Heart Disease']) plt.show() data['thalach categories'] = pd.qcut(data['thalach'], 5) data['thalach categories'] = data['thalach categories'].astype(str).apply(lambda x: '%s-%s' % (x.split(',')[0].strip('('), x.split(',')[1].strip('] '))) data['thalach categories'].value_counts() data.pivot_table(index='thalach categories', values='target').plot.bar() plt.title('Heart Disease frequency by Max Heart Rate') plt.show() data.pivot_table(index='exang', 
values='target').plot.bar() plt.xticks(rotation='horizontal') plt.title('Heart Disease frequency by Exercise-induced Angina') plt.show() sick['oldpeak'].plot.hist(color='red', alpha=0.5, density=True) not_sick['oldpeak'].plot.hist(color='blue', alpha=0.5, density=True) plt.xlabel('ST Depression') plt.legend(['Heart Disease', 'No Heart Disease']) plt.show() data['oldpeak categories'] = pd.qcut(data['oldpeak'], 3) data['oldpeak categories'] = data['oldpeak categories'].astype(str).apply(lambda x: '%s-%s' % (x.split(',')[0].strip('('), x.split(',')[1].strip('] '))) data['oldpeak categories'].value_counts() data.pivot_table(index='oldpeak categories', values='target').plot.bar() plt.title('Heart Disease frequency by ST Depression') data.pivot_table(index='slope', values='target').plot.bar() plt.xticks(rotation='horizontal') plt.title('Heart Disease frequency by Slope of the peak exercise ST segment') plt.show() data.pivot_table(index='ca', values='target').plot.bar() plt.xticks(rotation='horizontal') plt.title('Heart Disease frequency by Number of major vessels (0-3) colored by fluoroscopy') plt.show() data.pivot_table(index='thal', values='target').plot.bar() plt.xticks(rotation='horizontal') plt.title('Heart Disease frequency by the Thalassemia disorder') plt.show() ``` # Data preprocessing ``` original_data.head() categorical_cols = ['sex', 'cp', 'fbs', 'restecg', 'exang', 'slope', 'ca', 'thal'] # One Hot Encoding of all Categorical Variables X_train = pd.get_dummies(X_train, columns=categorical_cols, drop_first=True) X_train['cp any'] = (X_train['cp_1'] + X_train['cp_2'] + X_train['cp_3']) # any chest pain = 1 (look at EDA) X_train = X_train.drop(columns=['cp_1', 'cp_2', 'cp_3']) cols = X_train.columns X_test = pd.get_dummies(X_test, columns=categorical_cols, drop_first=True) X_test.columns = [column.rstrip('.0') for column in X_test.columns] X_test['cp any'] = (X_test['cp_1'] + X_test['cp_2'] + X_test['cp_3']) X_test = X_test.drop(columns=['cp_1', 'cp_2',
'cp_3']) X_train.shape, X_test.shape X_train.head() X_test.head() # Add columns of training set that are missing in test set. missing_cols = list(set(X_train.columns) - set(X_test.columns)) for column in missing_cols: X_test[column] = 0 # To maintain same ordering of columns in train and test set. X_test = X_test[cols] X_train.shape[1] == X_test.shape[1] # number of features X_train.columns, X_test.columns ``` # Model Selection ``` rf = RandomForestClassifier(n_estimators=120, max_depth=5, random_state=10) scores = cross_val_score(rf, X_train, y_train, cv=5) print('Mean cross-validation score: ', scores.mean()) lr = LogisticRegression(solver='lbfgs', max_iter=5000) scores = cross_val_score(lr, X_train, y_train, cv=5) print('Mean cross-validation score: ', scores.mean()) xgb = XGBClassifier() scores = cross_val_score(xgb, X_train, y_train, cv=5) print('Mean cross-validation score: ', scores.mean()) svc = SVC(C=1000, gamma='scale', probability=True) scores = cross_val_score(svc, X_train, y_train, cv=5) print('Mean cross-validation score: ', scores.mean()) model = lr # as Logistic Regression gave the best accuracy cross-validation score. 
model.fit(X_train, y_train) pred = model.predict(X_test) pred_prob = model.predict_proba(X_test)[:, 1] ipd.display(pred[:10], pred_prob[:10]) ``` # Evaluating the model ``` print('Confusion Matrix:') cf = confusion_matrix(y_test, model.predict(X_test)) cf tn, fp, fn, tp = cf.flatten() # sklearn's confusion_matrix flattens in the order tn, fp, fn, tp accuracy = (tp + tn) / (tp + tn + fp + fn) print('Accuracy: ', accuracy) precision = tp / (tp + fp) print('Precision: ', precision) rs = tp / (tp + fn) print('Recall Sensitivity: ', rs) spec = tn / (tn + fp) print('Specificity: ', spec) f1 = (2*tp) / (2*tp + fp + fn) print('F1 score: ', f1) fpr, tpr, thresholds = roc_curve(y_test, pred_prob) plt.plot(fpr, tpr) plt.plot([0, 1], [0, 1], ls="--", c='.3') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.0]) plt.title('ROC curve for the heart disease classifier') plt.xlabel('False Positive Rate (1 - Specificity)') plt.ylabel('True Positive Rate (Sensitivity)') plt.grid(True) auc(fpr, tpr) ```
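All of the scalar metrics above derive from the four confusion-matrix counts (note that sklearn's `confusion_matrix(y_true, y_pred).ravel()` returns them in the order `tn, fp, fn, tp`). A quick pure-Python check of the formulas on a toy prediction vector:

```python
def confusion_counts(y_true, y_pred):
    """Return (tn, fp, fn, tp) for binary labels, matching sklearn's ravel() order."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tn, fp, fn, tp

y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
tn, fp, fn, tp = confusion_counts(y_true, y_pred)

accuracy  = (tp + tn) / len(y_true)   # 6 of 8 correct
precision = tp / (tp + fp)            # of predicted positives, how many are real
recall    = tp / (tp + fn)            # of real positives, how many were found
f1        = 2 * tp / (2 * tp + fp + fn)
print(accuracy, precision, recall, f1)  # 0.75 0.75 0.75 0.75
```

Unpacking the four counts in the wrong order leaves accuracy and F1 unchanged (they are symmetric in `tp`/`tn` swaps of this kind) but silently swaps precision with specificity, which is an easy bug to miss.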
# Chapter 4 ## 4.2.1 Mean Squared Error ``` # Mean squared error function import numpy as np # y is the predicted output, t is the ground-truth answer def mean_squared_error(y, t): return 0.5 * np.sum((y-t)**2) # Assume the correct answer is "2" t = [0,0,1,0,0,0,0,0,0,0] # Example 1: "2" has the highest probability (0.6) y = [0.1,0.05,0.6,0.0,0.05,0.1,0.0,0.1,0.0,0.0] print('Example 1 - MSE: ', mean_squared_error(np.array(y), np.array(t))) # Example 2: "7" has the highest probability (0.6) y = [0.1,0.05,0.1,0.0,0.05,0.1,0.0,0.6,0.0,0.0] print('Example 2 - MSE: ', mean_squared_error(np.array(y), np.array(t))) print('As you could imagine, the MSE of example 2 is higher than that of example 1.') ``` ## 4.2.2 Cross-Entropy Error ``` # Define the cross-entropy error function import numpy as np def cross_entropy_error(y,t): delta = 1e-7 return -np.sum(t * np.log(y + delta)) t = [0,0,1,0,0,0,0,0,0,0] # Example 1: "2" has the highest probability (0.6) y = [0.1,0.05,0.6,0.0,0.05,0.1,0.0,0.1,0.0,0.0] print('Example 1 - Cross Entropy Error: ', cross_entropy_error(np.array(y), np.array(t))) # Example 2: "7" has the highest probability (0.6) y = [0.1,0.05,0.1,0.0,0.05,0.1,0.0,0.6,0.0,0.0] print('Example 2 - Cross Entropy Error: ', cross_entropy_error(np.array(y), np.array(t))) ``` ## 4.2.3 Mini-batch Learning ``` import sys, os sys.path.append(os.pardir) import numpy as np from TextbookProgram.mnist import load_mnist (x_train, t_train), (x_test, t_test) = load_mnist(normalize = True, one_hot_label = True) print(x_train.shape) print(t_train.shape) # NumPy's random choice method import numpy as np random_choice_demo = np.random.choice(60000, 10) print(random_choice_demo) # Randomly draw 10 training images with NumPy to build a mini-batch train_size = x_train.shape[0] batch_size = 10 batch_mask = np.random.choice(train_size, batch_size) x_batch = x_train[batch_mask] t_batch = t_train[batch_mask] ``` ## 4.2.4 A Batch-compatible Cross-Entropy Error ## (To be completed) ``` # Define the new cross-entropy error function import numpy as np def cross_entropy_error(y,t): if y.ndim == 1: t = t.reshape(1, t.size) y = y.reshape(1, y.size) batch_size = y.shape[0] return -np.sum(t * np.log(y + 1e-7)) / batch_size ``` ## 4.2.5 Why Set Up a Loss Function?
When training a neural network, recognition accuracy cannot be used as the "metric": if accuracy were the metric, the derivative with respect to the parameters would be almost 0 everywhere. ## 4.3.2 An Example of Numerical Differentiation ``` # Define the numerical differentiation function def numerical_diff(f,x): h = 1e-4 # 10^(-4) = 0.0001 return (f(x+h) - f(x-h)) / (2*h) def function_1(x): return 0.01 * x ** 2 + 0.1 * x import numpy as np import matplotlib.pylab as plt x = np.arange(0.0, 20.0, 0.1) # build an x array from 0 to 20 with step 0.1 y = function_1(x) plt.xlabel('x') plt.ylabel('f(x)') plt.plot(x, y, 'r') plt.show() # Compute the derivative of this function at x=5 and x=10 print('Numerical derivative at x=5:', numerical_diff(function_1, 5)) print('Numerical derivative at x=10:', numerical_diff(function_1, 10)) ``` ## 4.3.3 Partial Derivatives ``` # Define a function of two variables def function_2(x): return x[0]**2 + x[1]**2 # or: return np.sum(x**2) # Take the partial derivatives at x[0]=3, x[1]=4 # Rewrite for the partial derivative with respect to x[0] def function_tmp1(x0): return x0*x0 + 4.0**2.0 print('Partial derivative with respect to x[0]:', numerical_diff(function_tmp1, 3)) # Rewrite for the partial derivative with respect to x[1] def function_tmp2(x1): return 3**2 + x1*x1 print('Partial derivative with respect to x[1]:', numerical_diff(function_tmp2, 4)) ``` ## 4.4 Gradients ``` # Take the partial derivatives of all variables at once import numpy as np def numerical_gradient(f,x): h = 1e-4 # 0.0001 grad = np.zeros_like(x) # create an array with the same shape as x for idx in range(x.size): tmp_val = x[idx] # compute f(x+h) x[idx] = tmp_val + h fxh1 = f(x) # compute f(x-h) x[idx] = tmp_val - h fxh2 = f(x) grad[idx] = (fxh1 - fxh2) / (2*h) x[idx] = tmp_val # restore the original value return grad # Use numerical_gradient to differentiate a multi-variable function print(numerical_gradient(function_2, np.array([3.0, 4.0]))) print(numerical_gradient(function_2, np.array([0.0, 2.0]))) print(numerical_gradient(function_2, np.array([3.0, 0.0]))) # Plot the 2D gradient map of f(x[0],x[1]) = x[0]**2 + x[1]**2 using the script from the book's official repository # coding: utf-8 # cf.http://d.hatena.ne.jp/white_wheels/20100327/p3 import numpy as np import matplotlib.pylab as plt from mpl_toolkits.mplot3d import Axes3D def _numerical_gradient_no_batch(f, x): h = 1e-4 # 0.0001 grad = np.zeros_like(x) for idx in range(x.size): tmp_val = x[idx] x[idx] = float(tmp_val) + h fxh1 = f(x) # f(x+h) x[idx] = tmp_val - h fxh2 = f(x) # f(x-h) grad[idx] = (fxh1 - fxh2) / (2*h) x[idx] = tmp_val # restore the original value return grad def numerical_gradient(f, X): if X.ndim == 1:
return _numerical_gradient_no_batch(f, X) else: grad = np.zeros_like(X) for idx, x in enumerate(X): grad[idx] = _numerical_gradient_no_batch(f, x) return grad def function_2(x): if x.ndim == 1: return np.sum(x**2) else: return np.sum(x**2, axis=1) def tangent_line(f, x): d = numerical_gradient(f, x) print(d) y = f(x) - d*x return lambda t: d*t + y if __name__ == '__main__': x0 = np.arange(-2, 2.5, 0.25) x1 = np.arange(-2, 2.5, 0.25) X, Y = np.meshgrid(x0, x1) X = X.flatten() Y = Y.flatten() grad = numerical_gradient(function_2, np.array([X, Y]).T).T plt.figure() plt.quiver(X, Y, -grad[0], -grad[1], angles="xy",color="#666666") plt.xlim([-2, 2]) plt.ylim([-2, 2]) plt.xlabel('x0') plt.ylabel('x1') plt.grid() plt.draw() plt.show() ``` ## 4.4.1 The Gradient Method ``` # Define the gradient descent function # f: the function to optimize; init_x: the initial value; lr: learning rate; step_num: number of gradient-method steps import numpy as np def numerical_gradient(f,x): h = 1e-4 # 0.0001 grad = np.zeros_like(x) # create an array with the same shape as x for idx in range(x.size): tmp_val = x[idx] # compute f(x+h) x[idx] = tmp_val + h fxh1 = f(x) # compute f(x-h) x[idx] = tmp_val - h fxh2 = f(x) grad[idx] = (fxh1 - fxh2) / (2*h) x[idx] = tmp_val # restore the original value return grad def gradient_descent(f, init_x, lr=0.01, step_num=100): x = init_x for i in range(step_num): grad = numerical_gradient(f,x) x = x - lr * grad return x # Use the gradient method to find the minimum of f(x[0],x[1]) = x[0]**2 + x[1]**2 def function_2(x): return x[0]**2 + x[1]**2 init_x = np.array([-3.0,4.0]) print(gradient_descent(function_2, init_x=init_x, lr=0.1, step_num=100)) # Plot the gradient descent trajectory using the script from the book's official repository # coding: utf-8 import numpy as np import matplotlib.pylab as plt from TextbookProgram.gradient_2d import numerical_gradient def gradient_descent(f, init_x, lr=0.01, step_num=100): x = init_x x_history = [] for i in range(step_num): x_history.append( x.copy() ) grad = numerical_gradient(f, x) x -= lr * grad return x, np.array(x_history) def function_2(x): return x[0]**2 + x[1]**2 init_x = np.array([-3.0, 4.0]) lr = 0.1 step_num = 20 x, x_history = gradient_descent(function_2, init_x,
lr=lr, step_num=step_num) plt.plot( [-5, 5], [0,0], '--b') plt.plot( [0,0], [-5, 5], '--b') plt.plot(x_history[:,0], x_history[:,1], 'o') plt.xlim(-3.5, 3.5) plt.ylim(-4.5, 4.5) plt.xlabel("X0") plt.ylabel("X1") plt.show() # Example with a learning rate that is too large init_x = np.array([-3.0, 4.0]) lr_too_large = gradient_descent(function_2, init_x=init_x, lr=10.0, step_num=100) print(lr_too_large) # Example with a learning rate that is too small init_x = np.array([-3.0, 4.0]) lr_too_small = gradient_descent(function_2, init_x=init_x, lr=1e-10, step_num=100) print(lr_too_small) ``` ## 4.4.2 Gradients of a Neural Network ``` # Build a simple neural network model: SimpleNet import sys, os sys.path.append(os.pardir) # Set import location import numpy as np from TextbookProgram.functions import softmax, cross_entropy_error from TextbookProgram.gradient import numerical_gradient class simpleNet: def __init__(self): self.W = np.random.randn(2,3) # initialize with a standard normal distribution def predict(self, x): return np.dot(x, self.W) def loss(self, x, t): z = self.predict(x) y = softmax(z) loss = cross_entropy_error(y,t) return loss net = simpleNet() print(net.W) # weight parameters # Test the neural network model x = np.array([0.6,0.9]) p = net.predict(x) print('Predicted value of p: ', p) print('Index of the maximum value:', np.argmax(p)) t = np.array([0,0,1]) # ground-truth label print('Loss: ', net.loss(x,t)) # Compute the gradient def f(w): return net.loss(x,t) dW = numerical_gradient(f, net.W) print('Gradients of the weights:', dW) # Use a lambda to define the simple function f = lambda w: net.loss(x,t) dW = numerical_gradient(f, net.W) print('dW: ', dW) ``` ## 4.5.1 A Two-Layer Neural Network Class ``` import sys, os import numpy as np from TextbookProgram.functions import * from TextbookProgram.gradient import numerical_gradient class TwoLayerNet: # __init__ performs initialization; arguments: number of neurons in the input, hidden, and output layers def __init__(self, input_size, hidden_size, output_size, weight_init_std=0.01): # initialize the weights # params: dictionary holding the network's parameters (an instance variable) self.params = {} self.params['W1'] = weight_init_std * np.random.randn(input_size, hidden_size) self.params['b1'] = np.zeros(hidden_size) self.params['W2'] = weight_init_std * np.random.randn(hidden_size, output_size) self.params['b2'] =
np.zeros(output_size) # Prediction; x is the input data def predict(self,x): W1, W2 = self.params['W1'], self.params['W2'] b1, b2 = self.params['b1'], self.params['b2'] a1 = np.dot(x,W1) + b1 z1 = sigmoid(a1) a2 = np.dot(z1,W2) + b2 y = softmax(a2) return y # Compute the loss # x: input data, t: training data (ground truth) def loss(self,x,t): y = self.predict(x) return cross_entropy_error(y,t) # Compute the recognition accuracy def accuracy(self,x,t): y = self.predict(x) y = np.argmax(y, axis=1) t = np.argmax(t, axis=1) accuracy = np.sum(y == t) / float(x.shape[0]) return accuracy # Compute the gradients of the weight parameters # x: input data, t: training data (ground truth) # grads: dictionary holding the gradients (the return value of numerical_gradient) def numerical_gradient(self, x, t): loss_W = lambda W: self.loss(x,t) grads={} grads['W1'] = numerical_gradient(loss_W, self.params['W1']) grads['b1'] = numerical_gradient(loss_W, self.params['b1']) grads['W2'] = numerical_gradient(loss_W, self.params['W2']) grads['b2'] = numerical_gradient(loss_W, self.params['b2']) return grads # Example 1 net = TwoLayerNet(input_size=784, hidden_size=100, output_size=10) print('Shape of W1 is',net.params['W1'].shape) print('Shape of b1 is',net.params['b1'].shape) print('Shape of W2 is',net.params['W2'].shape) print('Shape of b2 is',net.params['b2'].shape) # Inference for example 1 x = np.random.rand(100, 784) y = net.predict(x) # Example 2 x = np.random.rand(100,784) # dummy input data t = np.random.rand(100,10) # dummy ground-truth labels grads = net.numerical_gradient(x,t) # compute the gradients print('Shape of gradient of W1 is ', grads['W1'].shape) print('Shape of gradient of b1 is ', grads['b1'].shape) print('Shape of gradient of W2 is ', grads['W2'].shape) print('Shape of gradient of b2 is ', grads['b2'].shape) ``` ## 4.5.2 Running Mini-batch Learning ### (Needs to be re-tested) ``` import numpy as np from TextbookProgram.mnist import load_mnist from TextbookProgram.two_layer_net import TwoLayerNet (x_train,t_train), (x_test,t_test) = load_mnist(normalize=True, one_hot_label=True) train_loss_list = [] # Set the hyperparameters iters_num = 10000 train_size = x_train.shape[0] batch_size = 100 learning_rate = 0.1 network = TwoLayerNet(input_size=784, hidden_size=50,
output_size=10) for i in range(iters_num): # get a mini-batch batch_mask = np.random.choice(train_size, batch_size) x_batch = x_train[batch_mask] t_batch = t_train[batch_mask] # compute the gradients grad = network.numerical_gradient(x_batch, t_batch) # grad = network.gradient(x_batch, t_batch) # fast version # update the parameters for key in ('W1', 'b1', 'W2', 'b2'): network.params[key] = network.params[key] - learning_rate * grad[key] # record the learning progress loss = network.loss(x_batch, t_batch) train_loss_list.append(loss) ``` ## 4.5.3 Evaluation with Test Data ``` import numpy as np from TextbookProgram.mnist import load_mnist from TextbookProgram.two_layer_net import TwoLayerNet (x_train,t_train), (x_test,t_test) = load_mnist(normalize=True, one_hot_label=True) train_loss_list = [] # newly added part train_acc_list = [] test_acc_list = [] # Set the hyperparameters iters_num = 10000 train_size = x_train.shape[0] batch_size = 100 learning_rate = 0.1 # number of iterations per epoch iter_per_epoch = max(train_size / batch_size, 1) network = TwoLayerNet(input_size=784, hidden_size=50, output_size=10) for i in range(iters_num): # get a mini-batch batch_mask = np.random.choice(train_size, batch_size) x_batch = x_train[batch_mask] t_batch = t_train[batch_mask] # compute the gradients grad = network.numerical_gradient(x_batch, t_batch) # grad = network.gradient(x_batch, t_batch) # fast version # update the parameters for key in ('W1', 'b1', 'W2', 'b2'): network.params[key] = network.params[key] - learning_rate * grad[key] # record the learning progress loss = network.loss(x_batch, t_batch) train_loss_list.append(loss) # newly added part: compute the recognition accuracy once per epoch if i % iter_per_epoch == 0: train_acc = network.accuracy(x_train, t_train) test_acc = network.accuracy(x_test, t_test) train_acc_list.append(train_acc) test_acc_list.append(test_acc) print('train acc, test acc | ' + str(train_acc) + ',' + str(test_acc)) ```
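As a sanity check on the numerical differentiation used throughout this chapter: for f(x) = 0.01x² + 0.1x the analytic derivative is f′(x) = 0.02x + 0.1, and the central-difference estimate should match it closely (for a quadratic, the central difference is mathematically exact, so only floating-point error remains):

```python
def numerical_diff(f, x):
    # central difference, O(h^2) accurate
    h = 1e-4
    return (f(x + h) - f(x - h)) / (2 * h)

def function_1(x):
    return 0.01 * x ** 2 + 0.1 * x

def analytic_diff(x):
    return 0.02 * x + 0.1

for x in (5.0, 10.0):
    num, ana = numerical_diff(function_1, x), analytic_diff(x)
    print(x, num, ana, abs(num - ana) < 1e-8)
```

This kind of gradient check is also the standard way to validate `numerical_gradient` against backpropagation once the "fast version" (`network.gradient`) is implemented.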
# Image loading and generation notebook ## Notebook setup ``` # noqa import os COLAB = 'DATALAB_DEBUG' in os.environ if COLAB: !apt-get update !apt-get install git !git clone https://gist.github.com/oskopek/e27ca34cb2b813cae614520e8374e741 bstrap import bstrap.bootstrap as bootstrap else: wd = %pwd if wd.endswith('notebooks'): print('Current directory:', wd) %cd .. %pwd import resources.our_colab_utils.bootstrap as bootstrap bootstrap.bootstrap(branch='master', packages='dotmap==1.2.20 keras==2.1.4 pydicom==1.0.2 Pillow==5.0.0') if COLAB: !rm -rf bstrap ``` ## Actual notebook ``` # noqa import csv import os from dotmap import DotMap import matplotlib import matplotlib.pyplot as plt import numpy as np from PIL import Image import pydicom import skimage.transform import tensorflow as tf import resources.data.loader as loader import resources.image_utils as imutils import resources.synthetic_data as synth_data %load_ext autoreload %autoreload 2 %matplotlib inline matplotlib.rcParams['figure.figsize'] = (10, 10) plt.rcParams['image.cmap'] = 'gray' # 'viridis', 'gray' ``` ## Data setup ``` breast_prefix = os.path.abspath('/home/oskopek/local/Breasts') loader.init(breast_prefix) ``` ## Define custom conversion and plotting ``` # def convert(img, img_meta): # img = imutils.standardize(img, img_meta) # img = imutils.downsample(img) # img_norm = imutils.normalize_gaussian(img) # return img, img_norm def show_img(img): f = plt.figure(figsize=(16, 8)) ax = f.add_subplot(1, 2, 1) ax2 = f.add_subplot(1, 2, 2) ax.imshow(img) ax2.hist(np.ravel(img)) plt.show() ``` ## inBreast test ``` # images, patients = loader.load_inbreast() # filter_id = 'cc9e66c5b31baab8' # for pid, p in patients.items(): # if not pid.startswith(filter_id): # continue # print("PatientID:", pid, "#images:", len(p.image_metadata)) # for i, img_meta in enumerate(p.image_metadata.values()): # print(i + 1, "\t", "Laterality:", img_meta.laterality, "View:", img_meta.view, "BiRads:", img_meta.birads, # "Cancer:",
img_meta.cancer) # img = imutils.load_image(img_meta.image_path) # img_small, img_small_gaussian = convert(img, img_meta) # show_img(img_small_gaussian) ``` ## bcdr test ``` # images, patients = loader.load_bcdr('BCDR-D01') # 'BCDR-D02', 'BCDR-DN01' # filter_id = 3 # for pid, p in patients.items(): # if pid != filter_id: # continue # print("PatientID:", pid, "#images:", len(p.image_metadata)) # for i, img_meta in enumerate(p.image_metadata.values()): # print(i + 1, "\t", "Laterality:", img_meta.laterality, "View:", img_meta.view, "Age:", img_meta.age, "Cancer:", # img_meta.cancer) # img = imutils.load_image(img_meta.image_path) # img_small, img_small_gaussian = convert(img, img_meta) # show_img(img_small_gaussian) ``` ## Dataset summary stats ``` # def print_info(patients): # print("Patients:", len(patients)) # print("Images:", sum([len(p.image_metadata.values()) for p in patients.values()])) # def f(p): # return [1 if i.cancer else 0 for i in p.image_metadata.values()] # cancer = [sum(f(p)) for p in patients.values()] # print("Cancer:", sum(cancer)) # pcancer = 0 # pocancer = 0 # phealthy = 0 # pohealthy = 0 # for pid, p in patients.items(): # hcnt = 0 # ccnt = 0 # for img_meta in p.image_metadata.values(): # if img_meta.cancer: # ccnt += 1 # else: # hcnt += 1 # if hcnt > 0: # phealthy += 1 # if ccnt > 0: # pcancer += 1 # if hcnt == 0: # assert ccnt > 0 # pocancer += 1 # if ccnt == 0: # assert hcnt > 0 # pohealthy += 1 # print('pCancer:', pcancer, 'pHealthy:', phealthy, 'pOnlyCancer:', pocancer, 'pOnlyHealthy:', pohealthy, 'pTotal:', # len(patients)) # print("Inbreast") # _, patients_inb = loader.load_inbreast() # print_info(patients_inb) # print() # print("BCDR-D01") # _, patients_d01 = loader.load_bcdr('BCDR-D01') # print_info(patients_d01) # print() # print("BCDR-D02") # _, patients_d02 = loader.load_bcdr('BCDR-D02') # print_info(patients_d02) # print() # print("BCDR-DN01") # _, patients_dn01 = loader.load_bcdr('BCDR-DN01') # print_info(patients_dn01) # print() 
# size = (800, 800)
# print("Gigabytes for all images in total with size {}: {}".format(size, (410 + 260 + 704 + 200) *
#       (size[0] * size[1]) * 8 / 1024**3))
```

## Convert datasets to trainable CycleGan Images

* Add labels to BCDR images
* Method to filter out CC view and resize and split according to label bcdr
* Add labels to inBreast images
* Method to filter out CC view and resize and split according to label inBreast
* Merge them and copy the images to 2 folders based on label

```
def filter_view(images):
    res_healthy = []
    res_cancer = []
    for i, image in images.items():
        if image.view == 'CC':
            if image.cancer:
                res_cancer.append(image)
            else:
                res_healthy.append(image)
    return res_healthy, res_cancer

print("Inbreast")
images_inb, _ = loader.load_inbreast()
inb_healthy, inb_cancer = filter_view(images_inb)
print("Healthy:", len(inb_healthy), "Cancer:", len(inb_cancer))
print()

print("BCDR-D01")
images_d01, _ = loader.load_bcdr('BCDR-D01')
d01_healthy, d01_cancer = filter_view(images_d01)
print("Healthy:", len(d01_healthy), "Cancer:", len(d01_cancer))
print()

print("BCDR-D02")
images_d02, _ = loader.load_bcdr('BCDR-D02')
d02_healthy, d02_cancer = filter_view(images_d02)
print("Healthy:", len(d02_healthy), "Cancer:", len(d02_cancer))
print()

print("Overall")
healthy = inb_healthy + d01_healthy + d02_healthy
cancer = inb_cancer + d01_cancer + d02_cancer
print("Healthy:", len(healthy), "Cancer:", len(cancer))

from multiprocessing import Pool as ThreadPool
from itertools import repeat
import functools

import imgaug
from imgaug import augmenters as iaa
from imgaug import parameters as iap

SEED = 42
imgaug.seed(SEED)
aug = iaa.Sequential([iaa.Affine(rotate=(-4, 4)),
                      iaa.Affine(scale={"x": (0.98, 1.13), "y": (0.98, 1.13)})])
aug_img = iaa.ContrastNormalization((0.08, 1.2), per_channel=False)

def transform_img(img, mask, img_meta, augment=False, size=(256, 256)):
    img = imutils.standardize(img, img_meta)
    mask = imutils.standardize(mask, img_meta)
    img = imutils.downsample(img, size=size)
    mask = imutils.downsample(mask, size=size)
    if augment:
        aug_det = aug.to_deterministic()
        img = imutils.normalize_gaussian(img)
        img = imutils.normalize(img, new_min=0, new_max=255)
        mask = imutils.normalize_gaussian(mask)
        mask = imutils.normalize(mask, new_min=0, new_max=255)
        img, mask = aug_det.augment_images([img, mask])
        img = aug_img.augment_image(img)
    img = imutils.normalize_gaussian(img)
    img = imutils.normalize(img, new_min=0, new_max=255)
    mask = imutils.normalize_gaussian(mask)
    mask = imutils.normalize(mask, new_min=0, new_max=255)
    return img, mask

def to_feature(img):
    img_to_bytes = tf.compat.as_bytes(img.astype(np.float32).tostring())
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[img_to_bytes]))

def to_example(img, mask, label):
    assert img.shape == mask.shape
    assert isinstance(label, int)
    height, width = img.shape[0], img.shape[1]  # fixed: min_h/min_w were undefined
    feature = {
        'image': to_feature(img),
        'mask': to_feature(mask),
        'width': tf.train.Feature(int64_list=tf.train.Int64List(value=[width])),
        'height': tf.train.Feature(int64_list=tf.train.Int64List(value=[height])),
        'label': tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
    }
    # Create an example protocol buffer
    example = tf.train.Example(features=tf.train.Features(feature=feature))
    return example

def f(fname_base, run_id, augment, label, size, inp):
    lst, thread_id = inp
    fname = "{}.r_{}.t_{}.tfrecord".format(fname_base, run_id, thread_id)
    imgaug.seed(SEED * run_id)
    options = tf.python_io.TFRecordOptions(compression_type=tf.python_io.TFRecordCompressionType.GZIP)
    with tf.python_io.TFRecordWriter(fname, options=options) as writer:
        for i, img_meta in lst:
            try:
                img = imutils.load_image(img_meta.image_path)
            except:
                print("Failed to load image", img_meta.image_path)
                continue
            try:
                mask = imutils.load_image(img_meta.mask_path)
            except:
                print("Failed to load mask", img_meta.mask_path)
                continue
            img, mask = transform_img(img, mask, img_meta, augment=augment, size=size)
            example = to_example(img, mask, label)
            writer.write(example.SerializeToString())  # fixed: the example was built but never written

def transform(lst, folder, run_id, augment, label, size):
    THREADS = 8
    batch_size = len(lst) // THREADS + 1
    lst = list(enumerate(lst))
    chunks = [lst[i:i + batch_size] for i in range(0, len(lst), batch_size)]
    fname_base = os.path.join(folder, "label_{}.augment_{}".format(label, augment))
    # fixed: bind the constant arguments and map the worker over (chunk, thread_id) pairs
    fn = functools.partial(f, fname_base, run_id, augment, label, size)
    print("Transforming (run_id={})".format(run_id))
    pool = ThreadPool(THREADS)
    results = pool.map(fn, list(zip(chunks, range(len(chunks)))))
    print("Transformed (run_id={})".format(run_id))

size = (256, 256)
transformed = os.path.join(breast_prefix, "small_all_{}x{}".format(size[0], size[1]))

# cancer and healthy should be lists of image_meta + fix masks
# add to image_meta which dataset it is
# shuffle image metas
# split image metas or tfrecords
augment_epochs = 1
for run_id in range(augment_epochs):
    transform(cancer, transformed, run_id, run_id != 0, label=1, size=size)
for run_id in range(augment_epochs):
    transform(healthy, transformed, run_id, run_id != 0, label=0, size=size)

## TODO(oskopek): CBIS
```

### Merge shards

```
def merge_shards(in_files, fname):
    options = tf.python_io.TFRecordOptions(compression_type=tf.python_io.TFRecordCompressionType.GZIP)
    # fixed: open the writer once, otherwise each input file overwrites the output
    with tf.python_io.TFRecordWriter(fname, options=options) as writer:
        for in_file in in_files:
            for record in tf.python_io.tf_record_iterator(in_file, options=options):
                writer.write(record)

healthy = []
cancer = []
for file in os.listdir(transformed):
    path = os.path.join(transformed, file)
    if not file.endswith('tfrecord'):
        continue
    # fixed: shards are written with a 'label_{0,1}' prefix, not 'label={0,1}'
    if file.startswith('label_1'):
        cancer.append(path)
    elif file.startswith('label_0'):
        healthy.append(path)
    else:
        raise ValueError('Invalid file: ' + str(path))

transformed_all = os.path.join(breast_prefix, "small_all_{}x{}_final".format(size[0], size[1]))
merge_shards(healthy, os.path.join(transformed_all, 'healthy.tfrecord'))
merge_shards(cancer, os.path.join(transformed_all, 'cancer.tfrecord'))
```

## Synthetic data

```
data_gen = synth_data.generate_synth(size=(256, 256), max_thresh=2.5)
for i in range(5):
    img, mask, img_meta = next(data_gen)
    # Go from img to img+mask in the GAN
    show_img(img + mask)
    show_img(mask)
```
# Learning Rate and Convergence of the stochastic gradient descent algorithm (SGD) for a simple linear model

This exercise is based on the TensorFlow [playground](https://playground.tensorflow.org) program (developed by Google to teach machine learning principles). You'll experiment with the learning rate by performing two tasks. The interface for this exercise provides three buttons:

| Button | What it Does |
|:--------------------------|:---------------|
| Reset (loopy arrow icon) | Resets `Iterations` to 0. Resets any weights that the model had already learned.|
| NextStep (triangle+bar icon)| Advances one epoch in the SGD procedure. With each epoch, the model changes—sometimes subtly and sometimes dramatically.|
| `REGENERATE` | Generates a new data set. Does not reset Iterations. |

### Task 1

Run this [tensorflow playground program](https://playground.tensorflow.org/#activation=linear&batchSize=1&dataset=gauss&regDataset=reg-plane&learningRate=3&regularizationRate=0&noise=50&networkShape=&seed=0.17158&showTestData=false&discretize=false&percTrainData=50&x=true&y=true&xTimesY=false&xSquared=false&ySquared=false&cosX=false&sinX=false&cosY=false&sinY=false&collectStats=false&problem=classification&initZero=false&hideText=false&playButton_hide=true&regularizationRate_hide=true&percTrainData_hide=true&numHiddenLayers_hide=true&noise_hide=true&problem_hide=true&regularization_hide=true&dataset_hide=true&activation_hide=true) (consider doing this Task in a separate tab) and notice the **Learning rate** menu at the top-right of Playground. The learning rate is set to $3$, which is very high.

**Exercise:** Observe how that high learning rate affects your model by clicking the `NextStep` button $10$ or $20$ times. After each early iteration/epoch, notice how the model visualization changes dramatically. You might even see some instability after the model appears to have converged. Also notice the lines running from $X_1$ and $X_2$ to the model visualization.
The thickness of these lines indicates the weights of those features in the model; a thicker line indicates a larger weight.

### Task 2

Run the same [tensorflow playground program](https://playground.tensorflow.org/#activation=linear&batchSize=1&dataset=gauss&regDataset=reg-plane&learningRate=3&regularizationRate=0&noise=50&networkShape=&seed=0.17158&showTestData=false&discretize=false&percTrainData=50&x=true&y=true&xTimesY=false&xSquared=false&ySquared=false&cosX=false&sinX=false&cosY=false&sinY=false&collectStats=false&problem=classification&initZero=false&hideText=false&playButton_hide=true&regularizationRate_hide=true&percTrainData_hide=true&numHiddenLayers_hide=true&noise_hide=true&problem_hide=true&regularization_hide=true&dataset_hide=true&activation_hide=true) (in a separate tab) and do the following:

- Press the Reset button.
- Lower the Learning rate.
- Press the `NextStep` button a number of times.

**Questions:**

- How did the lower learning rate impact convergence? Examine both the number of steps needed for the model to converge, and also how smoothly and steadily the model converges.
- Experiment with even lower values of learning rate. Can you find a learning rate so low that it is too slow to be useful?

*Note*: Due to the non-deterministic nature of Playground exercises, your answers are specific to your data sets. But in many cases, you will observe the same behavior and similar values for the _interesting_ learning rates.
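The effect explored in both tasks can be reproduced with a few lines of plain gradient descent on a one-dimensional quadratic loss (a sketch for intuition only; the loss function and the rates below are illustrative, not what Playground actually optimizes):

```python
# Gradient descent on loss(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
# The minimizer is w = 3; the learning rate controls how the iterates approach it.
def descend(lr, steps=20, w=0.0):
    for _ in range(steps):
        w -= lr * 2.0 * (w - 3.0)
    return w

print(descend(lr=0.01))  # too low: barely moves toward 3 after 20 steps
print(descend(lr=0.4))   # reasonable: converges very close to 3
print(descend(lr=1.1))   # too high: overshoots the minimum and diverges
```

For this quadratic the error shrinks (or grows) geometrically by a factor of $|1 - 2 \cdot lr|$ per step, which is exactly the smooth-convergence versus oscillation-and-divergence behavior Playground visualizes.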
##### Copyright 2020 The TensorFlow Authors.

```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```

<table class="tfo-notebook-buttons" align="left">
  <td>
    <a target="_blank" href="https://www.tensorflow.org/io/tutorials/genome"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
  </td>
  <td>
    <a target="_blank" href="https://colab.research.google.com/github/tensorflow/io/blob/master/docs/tutorials/genome.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
  </td>
  <td>
    <a target="_blank" href="https://github.com/tensorflow/io/blob/master/docs/tutorials/genome.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
  </td>
  <td>
    <a href="https://storage.googleapis.com/tensorflow_docs/io/docs/tutorials/genome.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
  </td>
</table>

## Overview

This tutorial demonstrates the `tfio.genome` package that provides commonly used genomics IO functionality--namely reading several genomics file formats and also providing some common operations for preparing the data (for example--one hot encoding or parsing Phred quality into probabilities). This package uses the [Google Nucleus](https://github.com/google/nucleus) library to provide some of the core functionality.
## Setup

```
try:
  %tensorflow_version 2.x
except Exception:
  pass
!pip install tensorflow-io

import tensorflow_io as tfio
import tensorflow as tf
```

## FASTQ Data

FASTQ is a common genomics file format that stores both sequence information and base quality information.

First, let's download a sample `fastq` file.

```
# Download some sample data:
!curl -OL https://raw.githubusercontent.com/tensorflow/io/master/tests/test_genome/test.fastq
```

### Read FASTQ Data

Now, let's use `tfio.genome.read_fastq` to read this file (note: a `tf.data` API is coming soon).

```
fastq_data = tfio.genome.read_fastq(filename="test.fastq")
print(fastq_data.sequences)
print(fastq_data.raw_quality)
```

As you can see, the returned `fastq_data` has `fastq_data.sequences`, a string tensor of all sequences in the fastq file (each of which can be a different size), along with `fastq_data.raw_quality`, which contains Phred-encoded information about the quality of each base read in the sequence.

### Quality

You can use a helper op to convert this quality information into probabilities if you are interested.

```
quality = tfio.genome.phred_sequences_to_probability(fastq_data.raw_quality)
print(quality.shape)
print(quality.row_lengths().numpy())
print(quality)
```

### One hot encodings

You may also want to encode the genome sequence data (which consists of `A` `T` `C` `G` bases) using a one hot encoder. There's a built-in operation that can help with this.

```
one_hot = tfio.genome.sequences_to_onehot(fastq_data.sequences)
print(one_hot)
print(one_hot.shape)
print(tfio.genome.sequences_to_onehot.__doc__)
```
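Independently of `tfio`, the Phred transform behind the quality helper above is simple arithmetic: a Phred+33 ASCII character `c` encodes quality $Q = \mathrm{ord}(c) - 33$, and the base-call error probability is $10^{-Q/10}$. A minimal pure-Python sketch (whether a given library reports this error probability or its complement varies, so check the docs of the op you use):

```python
def phred_to_error_prob(quality_string, offset=33):
    """Convert a Phred+33 quality string to per-base error probabilities."""
    return [10 ** (-(ord(c) - offset) / 10) for c in quality_string]

# '!' encodes Q0 (error probability 1.0); 'I' encodes Q40 (error probability 1e-4).
print(phred_to_error_prob("!I"))
```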
# Train faster, more flexible models with Amazon SageMaker Linear Learner Today Amazon SageMaker is launching several additional features to the built-in linear learner algorithm. Amazon SageMaker algorithms are designed to scale effortlessly to massive datasets and take advantage of the latest hardware optimizations for unparalleled speed. The Amazon SageMaker linear learner algorithm encompasses both linear regression and binary classification algorithms. These algorithms are used extensively in banking, fraud/risk management, insurance, and healthcare. The new features of linear learner are designed to speed up training and help you customize models for different use cases. Examples include classification with unbalanced classes, where one of your outcomes happens far less frequently than another. Or specialized loss functions for regression, where it’s more important to penalize certain model errors more than others. In this blog post we'll cover three things: 1. Early stopping and saving the best model 1. New ways to customize linear learner models, including: * Hinge loss (support vector machines) * Quantile loss * Huber loss * Epsilon-insensitive loss * Class weights options 1. Then we'll walk you through a hands-on example of using class weights to boost performance in binary classification ## Early Stopping Linear learner trains models using Stochastic Gradient Descent (SGD) or variants of SGD like Adam. Training requires multiple passes over the data, called *epochs*, in which the data are loaded into memory in chunks called *batches*, sometimes called *minibatches*. How do we know how many epochs to run? Ideally, we'd like to continue training until convergence - that is, until we no longer see any additional benefits. Running additional epochs after the model has converged is a waste of time and money, but guessing the right number of epochs is difficult to do before submitting a training job. 
If we train for too few epochs, our model will be less accurate than it should be, but if we train for too many epochs, we'll waste resources and potentially harm model accuracy by overfitting. To remove the guesswork and optimize model training, linear learner has added two new features: automatic early stopping and saving the best model. Early stopping works in two basic regimes: with or without a validation set. Often we split our data into training, validation, and testing data sets. Training is for optimizing the loss, validation is for tuning hyperparameters, and testing is for producing an honest estimate of how the model will perform on unseen data in the future. If you provide linear learner with a validation data set, training will stop early when validation loss stops improving. If no validation set is available, training will stop early when training loss stops improving. #### Early Stopping with a validation data set One big benefit of having a validation data set is that we can tell if and when we start overfitting to the training data. Overfitting is when the model gives predictions that are too closely tailored to the training data, so that generalization performance (performance on future unseen data) will be poor. The following plot on the right shows a typical progression during training with a validation data set. Until epoch 5, the model has been learning from the training set and doing better and better on the validation set. But in epochs 7-10, we see that the model has begun to overfit on the training set, which shows up as worse performance on the validation set. Regardless of whether the model continues to improve (overfit) on the training data, we want to stop training after the model starts to overfit. And we want to restore the best model from just before the overfitting started. These two features are now turned on by default in linear learner. The default parameter values for early stopping are shown in the following code. 
To tweak the behavior of early stopping, try changing the values. To turn off early stopping entirely, choose a patience value larger than the number of epochs you want to run.

    early_stopping_patience=3,
    early_stopping_tolerance=0.001,

The parameter early_stopping_patience defines how many epochs to wait before ending training if no improvement is made. It's useful to have a little patience when deciding to stop early, since the training curve can be bumpy. Performance may get worse for one or two epochs before continuing to improve. By default, linear learner will stop early if performance has degraded for three epochs in a row.

The parameter early_stopping_tolerance defines the size of an improvement that's considered significant. If the ratio of the improvement in loss divided by the previous best loss is smaller than this value, early stopping will consider the improvement to be zero.

#### Early stopping without a validation data set

When training with a training set only, we have no way to detect overfitting. But we still want to stop training once the model has converged and improvement has levelled off. In the left panel of the following figure, that happens around epoch 25.

<img src="images/early_stop.png">

#### Early stopping and calibration

You may already be familiar with the linear learner automated threshold tuning for binary classification models. Threshold tuning and early stopping work together seamlessly by default in linear learner. When a binary classification model outputs a probability (e.g., logistic regression) or a raw score (SVM), we convert that to a binary prediction by applying a threshold, for example:

    predicted_label = 1 if raw_prediction > 0.5 else 0

We might want to tune the threshold (0.5 in the example) based on the metric we care about most, such as accuracy or recall. Linear learner does this tuning automatically using the 'binary_classifier_model_selection_criteria' parameter.
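The patience and tolerance logic described above can be sketched in a few lines (a toy re-implementation of the described behavior for intuition, not linear learner's internal code; it assumes positive losses):

```python
def train_with_early_stopping(losses, patience=3, tolerance=0.001):
    """Return the epoch at which training stops, given per-epoch losses.

    An epoch counts as an improvement only if the relative decrease versus
    the best loss so far exceeds `tolerance`; after `patience` consecutive
    non-improving epochs, training stops (and the best model is restored).
    """
    best = float("inf")
    bad_epochs = 0
    for epoch, loss in enumerate(losses):
        if best == float("inf") or (best - loss) / best > tolerance:
            best = loss
            bad_epochs = 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                return epoch  # stop here; keep the model from the `best` epoch
    return len(losses) - 1

# Loss improves, then plateaus and degrades: training stops 3 epochs after the best.
print(train_with_early_stopping([1.0, 0.5, 0.4, 0.39, 0.395, 0.41, 0.42]))
```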
When threshold tuning and early stopping are both turned on (the default), then training stops early based on the metric you request. For example, if you provide a validation data set and request a logistic regression model with threshold tuning based on accuracy, then training will stop when the model with auto-thresholding reaches optimal performance on the validation data. If there is no validation set and auto-thresholding is turned off, then training will stop when the best value of the loss function on the training data is reached. ## New loss functions The loss function is our definition of the cost of making an error in prediction. When we train a model, we push the model weights in the direction that minimizes loss, given the known labels in the training set. The most common and well-known loss function is squared loss, which is minimized when we train a standard linear regression model. Another common loss function is the one used in logistic regression, variously known as logistic loss, cross-entropy loss, or binomial likelihood. Ideally, the loss function we train on should be a close match to the business problem we're trying to solve. Having the flexibility to choose different loss functions at training time allows us to customize models to different use cases. In this section, we'll discuss when to use which loss function, and introduce several new loss functions that have been added to linear learner. 
<img src="images/loss_functions.png">

### Squared loss

    predictor_type='regressor',
    loss='squared_loss',

$$\text{argmin}_{w_0, \mathbf{w}} \sum_{i=1}^{N} (w_0 + \mathbf{x_i}^\intercal \mathbf{w} - y_i)^2$$

We'll use the following notation in all of the loss functions we discuss:

$w_0$ is the bias that the model learns

$\mathbf{w}$ is the vector of feature weights that the model learns

$y_i$ and $\mathbf{x_i}$ are the label and feature vector, respectively, from example $i$ of the training data

$N$ is the total number of training examples

Squared loss is a first choice for most regression problems. It has the nice property of producing an estimate of the mean of the label given the features. As seen in the plot above, squared loss implies that we pay a very high cost for very wrong predictions. This can cause problems if our training data include some extreme outliers. A model trained on squared loss will be very sensitive to outliers. Squared loss is sometimes known as mean squared error (MSE), ordinary least squares (OLS), or $\text{L}_2$ loss. Read more about [squared loss](https://en.wikipedia.org/wiki/Least_squares) on wikipedia.

### Absolute loss

    predictor_type='regressor',
    loss='absolute_loss',

$$\text{argmin}_{w_0, \mathbf{w}} \sum_{i=1}^{N} |w_0 + \mathbf{x_i}^\intercal \mathbf{w} - y_i|$$

Absolute loss is less common than squared loss, but can be very useful. The main difference between the two is that training a model on absolute loss produces estimates of the median of the label given the features. Squared loss estimates the mean, and absolute loss estimates the median. Whether you want to estimate the mean or median will depend on your use case. Let's look at a few examples:

* If an error of -2 costs you \$2 and an error of +50 costs you \$50, then absolute loss models your costs better than squared loss.
* If an error of -2 costs you \$2, while an error of +50 is simply unacceptably large, then it's important that your errors are generally small, and so squared loss is probably the right fit.
* If it's important that your predictions are too high as often as they're too low, then you want to estimate the median with absolute loss.
* If outliers in your training data are having too much influence on the model, try switching from squared to absolute loss. Large errors get a large amount of attention from absolute loss, but with squared loss, large errors get squared and become huge errors attracting a huge amount of attention. If the error is due to an outlier, it might not deserve a huge amount of attention.

Absolute loss is sometimes also known as $\text{L}_1$ loss or least absolute error. Read more about [absolute loss](https://en.wikipedia.org/wiki/Least_absolute_deviations) on wikipedia.

### Quantile loss

    predictor_type='regressor',
    loss='quantile_loss',
    quantile=0.9,

$$ \text{argmin}_{w_0, \mathbf{w}} \sum_{i=1}^N q(y_i - w_0 - \mathbf{x_i}^\intercal \mathbf{w})^\text{+} + (1-q)(w_0 + \mathbf{x_i}^\intercal \mathbf{w} - y_i)^\text{+} $$

$$ \text{where the parameter } q \text{ is the quantile you want to predict}$$

Quantile loss lets us predict an upper or lower bound for the label, given the features. To make predictions that are larger than the true label 90% of the time, train quantile loss with the 0.9 quantile. An example would be predicting electricity demand, where we want to build to near peak demand, since building to the average would result in brown-outs and upset customers. Read more about [quantile loss](https://en.wikipedia.org/wiki/Quantile_regression) on wikipedia.

### Huber loss

    predictor_type='regressor',
    loss='huber_loss',
    huber_delta=0.5,

$$ \text{Let the error be } e_i = w_0 + \mathbf{x_i}^\intercal \mathbf{w} - y_i \text{. Then Huber loss solves:}$$

$$ \text{argmin}_{w_0, \mathbf{w}} \sum_{i=1}^N I(|e_i| < \delta) \frac{e_i^2}{2} + I(|e_i| \geq \delta) \left( |e_i|\delta - \frac{\delta^2}{2} \right) $$

$$ \text{where } I(a) = 1 \text{ if } a \text{ is true, else } 0 $$

Huber loss is an interesting hybrid of $\text{L}_1$ and $\text{L}_2$ losses. Huber loss counts small errors on a squared scale and large errors on an absolute scale. In the plot above, we see that Huber loss looks like squared loss when the error is near 0 and absolute loss beyond that. Huber loss is useful when we want to train with squared loss, but want to avoid squared loss's sensitivity to outliers. Huber loss gives less importance to outliers by not squaring the larger errors. Read more about [Huber loss](https://en.wikipedia.org/wiki/Huber_loss) on wikipedia.

### Epsilon-insensitive loss

    predictor_type='regressor',
    loss='eps_insensitive_squared_loss',
    loss_insensitivity=0.25,

For epsilon-insensitive squared loss, we minimize

$$ \text{argmin}_{w_0, \mathbf{w}} \sum_{i=1}^N max(0, (w_0 + \mathbf{x_i}^\intercal \mathbf{w} - y_i)^2 - \epsilon^2) $$

And for epsilon-insensitive absolute loss, we minimize

$$ \text{argmin}_{w_0, \mathbf{w}} \sum_{i=1}^N max(0, |w_0 + \mathbf{x_i}^\intercal \mathbf{w} - y_i| - \epsilon) $$

Epsilon-insensitive loss is useful when errors don't matter to you as long as they're below some threshold. Set the threshold that makes sense for your use case as epsilon. Epsilon-insensitive loss allows the model to pay no cost for making errors smaller than epsilon.

### Logistic regression

    predictor_type='binary_classifier',
    loss='logistic',
    binary_classifier_model_selection_criteria='recall_at_target_precision',
    target_precision=0.9,

Each of the losses we've discussed so far is for regression problems, where the labels are floating point numbers. The last two losses we'll cover, logistic regression and support vector machines, are for binary classification problems, where the labels are one of two classes.
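As a quick reference before turning to classification, each regression loss above reduces to a short function of the signed error $e = w_0 + \mathbf{x}^\intercal \mathbf{w} - y$ (a plain-Python sketch; the function names and default parameters are illustrative, not linear learner's API):

```python
def squared(e):
    return e * e

def absolute(e):
    return abs(e)

def quantile(e, q=0.9):
    # Pinball loss: q * (under-prediction) + (1 - q) * (over-prediction),
    # matching q*(y - pred)^+ + (1 - q)*(pred - y)^+ with e = pred - y.
    return q * max(0.0, -e) + (1 - q) * max(0.0, e)

def huber(e, delta=0.5):
    # Squared near zero, absolute beyond delta; continuous at |e| = delta.
    if abs(e) < delta:
        return e * e / 2
    return abs(e) * delta - delta * delta / 2

def eps_insensitive_abs(e, eps=0.25):
    # No cost for errors smaller than eps.
    return max(0.0, abs(e) - eps)

# A large error is penalized far more by squared loss than by Huber:
print(squared(10.0), huber(10.0))  # 100.0 4.875
```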
Linear learner expects the class labels to be 0 or 1. This may require some preprocessing, for example if your labels are coded as -1 and +1, or as blue and yellow. Logistic regression produces a predicted probability for each data point:

$$ p_i = \sigma(w_0 + \mathbf{x_i}^\intercal \mathbf{w}) $$

The loss function minimized in training a logistic regression model is the negative log likelihood of a binomial distribution. It assigns the highest cost to predictions that are confident and wrong, for example a prediction of 0.99 when the true label was 0, or a prediction of 0.002 when the true label was positive. The loss function is:

$$ \text{argmin}_{w_0, \mathbf{w}} \sum_{i=1}^N -y_i \text{log}(p_i) - (1 - y_i) \text{log}(1 - p_i) $$

$$ \text{where } \sigma(x) = \frac{\text{exp}(x)}{1 + \text{exp}(x)} $$

Read more about [logistic regression](https://en.wikipedia.org/wiki/Logistic_regression) on wikipedia.

### Hinge loss (support vector machine)

    predictor_type='binary_classifier',
    loss='hinge_loss',
    margin=1.0,
    binary_classifier_model_selection_criteria='recall_at_target_precision',
    target_precision=0.9,

Another popular option for binary classification problems is the hinge loss, also known as a Support Vector Machine (SVM) or Support Vector Classifier (SVC) with a linear kernel. It places a high cost on any points that are misclassified or nearly misclassified. To tune the meaning of "nearly", adjust the margin parameter.

It's difficult to say in advance whether logistic regression or SVM will be the right model for a binary classification problem, though logistic regression is generally a more popular choice than SVM. If it's important to provide probabilities of the predicted class labels, then logistic regression will be the right choice. If all that matters is better accuracy, precision, or recall, then either model may be appropriate. One advantage of logistic regression is that it produces the probability of an example having a positive label.
That can be useful, for example in an ad serving system where the predicted click probability is used as an input to a bidding mechanism. Hinge loss does not produce class probabilities. Whichever model you choose, you're likely to benefit from linear learner's options for tuning the threshold that separates positive from negative predictions. The hinge loss with margin $m$ is:

$$\text{argmin}_{w_0, \mathbf{w}} \sum_{i=1}^{N} y_i(\frac{m+1}{2} - w_0 - \mathbf{x_i}^\text{T}\mathbf{w})^\text{+} + (1-y_i)(\frac{m-1}{2} + w_0 + \mathbf{x_i}^\text{T}\mathbf{w})^\text{+}$$

$$\text{where } a^\text{+} = \text{max}(0, a)$$

Note that the hinge loss we use is a reparameterization of the usual hinge loss: typically hinge loss expects the binary label to be in {-1, 1}, whereas ours expects the binary labels to be in {0, 1}. This reparameterization allows LinearLearner to accept the same data format for binary classification regardless of the training loss. Read more about [hinge loss](https://en.wikipedia.org/wiki/Hinge_loss) on wikipedia.

## Class weights

In some binary classification problems, we may find that our training data is highly unbalanced. For example, in credit card fraud detection, we're likely to have many more examples of non-fraudulent transactions than fraudulent ones. In these cases, balancing the class weights may improve model performance. Suppose we have 98% negative and 2% positive examples. To balance the total weight of each class, we can set the positive class weight to be 49. Now the average weight from the negative class is 0.98 $\cdot$ 1 = 0.98, and the average weight from the positive class is 0.02 $\cdot$ 49 = 0.98. The negative class weight multiplier is always 1. To incorporate the positive class weight in training, we multiply the loss by the positive weight whenever we see a positive class label.
For logistic regression, the weighted loss is:

$$ \text{argmin}_{w_0, \mathbf{w}} \sum_{i=1}^N -p \, y_i \text{log}(\sigma(w_0 + \mathbf{x_i}^\intercal \mathbf{w})) - (1 - y_i) \text{log}(1 - \sigma(w_0 + \mathbf{x_i}^\intercal \mathbf{w})) $$

$$ \text{where } p \text{ is the weight for the positive class.} $$

The only difference between the weighted and unweighted logistic regression loss functions is the presence of the class weight, $p$, on the left-hand term in the loss. Class weights in the hinge loss (SVM) classifier are applied in the same way. To apply class weights when training a model with linear learner, supply the weight for the positive class as a training parameter:

    positive_example_weight_mult=200,

Or to ask linear learner to calculate the positive class weight for you:

    positive_example_weight_mult='balanced',

## Hands-on example: Detecting credit card fraud

In this section, we'll look at a credit card fraud detection dataset. The data set (Dal Pozzolo et al. 2015) was downloaded from [Kaggle](https://www.kaggle.com/mlg-ulb/creditcardfraud/data). We have features and labels for over a quarter million credit card transactions, each of which is labeled as fraudulent or not fraudulent. We'd like to train a model based on the features of these transactions so that we can predict risky or fraudulent transactions in the future. This is a binary classification problem.

We'll walk through training linear learner with various settings and deploying an inference endpoint. We'll evaluate the quality of our models by hitting that endpoint with observations from the test set. We can take the real-time predictions returned by the endpoint and evaluate them against the ground-truth labels in our test set.

Next, we'll apply the linear learner threshold tuning functionality to get better precision without sacrificing recall. Then, we'll push the precision even higher using linear learner's new class weights feature.
Because fraud can be extremely costly, we would prefer to have high recall, even if this means more false positives. This is especially true if we are building a first line of defense, flagging potentially fraudulent transactions for further review before taking actions that affect customers.

First we'll do some preprocessing on this data set: we'll shuffle the examples and split them into train and test sets. To run this notebook under your own AWS account, you'll need to change the Amazon S3 locations. First download the raw data from [Kaggle](https://www.kaggle.com/mlg-ulb/creditcardfraud/data) and upload it to your SageMaker notebook instance (or wherever you're running this notebook). Only 0.17% of the data have positive labels, making this a challenging classification problem.

```
import boto3
import io
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import sagemaker
import sagemaker.amazon.common as smac
from sagemaker import get_execution_role
from sagemaker.predictor import csv_serializer, json_deserializer

# Set data locations
bucket = '<your_s3_bucket_here>'  # replace this with your own bucket
prefix = 'sagemaker/DEMO-linear-learner-loss-weights'  # replace this with your own prefix
s3_train_key = '{}/train/recordio-pb-data'.format(prefix)
s3_train_path = os.path.join('s3://', bucket, s3_train_key)
local_raw_data = 'creditcard.csv.zip'
role = get_execution_role()

# Confirm access to s3 bucket
for obj in boto3.resource('s3').Bucket(bucket).objects.all():
    print(obj.key)

# Read the data, shuffle, and split into train and test sets,
# separating the labels (last column) from the features
raw_data = pd.read_csv(local_raw_data).as_matrix()
np.random.seed(0)
np.random.shuffle(raw_data)
train_size = int(raw_data.shape[0] * 0.7)
train_features = raw_data[:train_size, :-1]
train_labels = raw_data[:train_size, -1]
test_features = raw_data[train_size:, :-1]
test_labels = raw_data[train_size:, -1]

# Convert the processed training data to protobuf and write to S3 for linear learner
vectors = np.array([t.tolist() for t in train_features]).astype('float32')
labels = np.array([t.tolist() for t in train_labels]).astype('float32')
buf = io.BytesIO()
smac.write_numpy_to_dense_tensor(buf, vectors, labels)
buf.seek(0)
boto3.resource('s3').Bucket(bucket).Object(s3_train_key).upload_fileobj(buf)
```

We'll wrap the model training setup in a convenience function that takes in the S3 location of the training data, the model hyperparameters that define our training job, and the S3 output path for model artifacts. Inside the function, we'll hardcode the algorithm container, the number and type of EC2 instances to train on, and the input and output data formats.

```
from sagemaker.amazon.amazon_estimator import get_image_uri

def predictor_from_hyperparams(s3_train_data, hyperparams, output_path):
    """
    Create an Estimator from the given hyperparams, fit to training data,
    and return a deployed predictor
    """
    # specify algorithm containers and instantiate an Estimator with given hyperparams
    container = get_image_uri(boto3.Session().region_name, 'linear-learner')
    linear = sagemaker.estimator.Estimator(container,
                                           role,
                                           train_instance_count=1,
                                           train_instance_type='ml.m4.xlarge',
                                           output_path=output_path,
                                           sagemaker_session=sagemaker.Session())
    linear.set_hyperparameters(**hyperparams)
    # train model
    linear.fit({'train': s3_train_data})
    # deploy a predictor
    linear_predictor = linear.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
    linear_predictor.content_type = 'text/csv'
    linear_predictor.serializer = csv_serializer
    linear_predictor.deserializer = json_deserializer
    return linear_predictor
```

And add another convenience function for setting up a hosting endpoint, making predictions, and evaluating the model. To make predictions, we need to set up a model hosting endpoint. Then we feed test features to the endpoint and receive predicted test labels.
To evaluate the models we create in this exercise, we'll capture predicted test labels and compare them to actuals using some common binary classification metrics. ``` def evaluate(linear_predictor, test_features, test_labels, model_name, verbose=True): """ Evaluate a model on a test set given the prediction endpoint. Return binary classification metrics. """ # split the test data set into 100 batches and evaluate using prediction endpoint prediction_batches = [linear_predictor.predict(batch)['predictions'] for batch in np.array_split(test_features, 100)] # parse raw predictions json to extract predicted label test_preds = np.concatenate([np.array([x['predicted_label'] for x in batch]) for batch in prediction_batches]) # calculate true positives, false positives, true negatives, false negatives tp = np.logical_and(test_labels, test_preds).sum() fp = np.logical_and(1-test_labels, test_preds).sum() tn = np.logical_and(1-test_labels, 1-test_preds).sum() fn = np.logical_and(test_labels, 1-test_preds).sum() # calculate binary classification metrics recall = tp / (tp + fn) precision = tp / (tp + fp) accuracy = (tp + tn) / (tp + fp + tn + fn) f1 = 2 * precision * recall / (precision + recall) if verbose: print(pd.crosstab(test_labels, test_preds, rownames=['actuals'], colnames=['predictions'])) print("\n{:<11} {:.3f}".format('Recall:', recall)) print("{:<11} {:.3f}".format('Precision:', precision)) print("{:<11} {:.3f}".format('Accuracy:', accuracy)) print("{:<11} {:.3f}".format('F1:', f1)) return {'TP': tp, 'FP': fp, 'FN': fn, 'TN': tn, 'Precision': precision, 'Recall': recall, 'Accuracy': accuracy, 'F1': f1, 'Model': model_name} ``` And finally we'll add a convenience function to delete prediction endpoints after we're done with them: ``` def delete_endpoint(predictor): try: boto3.client('sagemaker').delete_endpoint(EndpointName=predictor.endpoint) print('Deleted {}'.format(predictor.endpoint)) except: print('Already deleted: {}'.format(predictor.endpoint)) ``` Let's
begin by training a binary classifier model with the linear learner default settings. Note that we're setting the number of epochs to 40, which is much higher than the default of 10 epochs. With early stopping, we don't have to worry about setting the number of epochs too high. Linear learner will stop training automatically after the model has converged. ``` # Training a binary classifier with default settings: logistic regression defaults_hyperparams = { 'feature_dim': 30, 'predictor_type': 'binary_classifier', 'epochs': 40 } defaults_output_path = 's3://{}/{}/defaults/output'.format(bucket, prefix) defaults_predictor = predictor_from_hyperparams(s3_train_path, defaults_hyperparams, defaults_output_path) ``` And now we'll produce a model with a threshold tuned for the best possible precision with recall fixed at 90%: ``` # Training a binary classifier with automated threshold tuning autothresh_hyperparams = { 'feature_dim': 30, 'predictor_type': 'binary_classifier', 'binary_classifier_model_selection_criteria': 'precision_at_target_recall', 'target_recall': 0.9, 'epochs': 40 } autothresh_output_path = 's3://{}/{}/autothresh/output'.format(bucket, prefix) autothresh_predictor = predictor_from_hyperparams(s3_train_path, autothresh_hyperparams, autothresh_output_path) ``` ### Improving recall with class weights Now we'll improve on these results using a new feature added to linear learner: class weights for binary classification. 
We introduced this feature in the *Class Weights* section, and now we'll look into its application to the credit card fraud dataset by training a new model with balanced class weights: ``` # Training a binary classifier with class weights and automated threshold tuning class_weights_hyperparams = { 'feature_dim': 30, 'predictor_type': 'binary_classifier', 'binary_classifier_model_selection_criteria': 'precision_at_target_recall', 'target_recall': 0.9, 'positive_example_weight_mult': 'balanced', 'epochs': 40 } class_weights_output_path = 's3://{}/{}/class_weights/output'.format(bucket, prefix) class_weights_predictor = predictor_from_hyperparams(s3_train_path, class_weights_hyperparams, class_weights_output_path) ``` The first models we trained used the default loss function for binary classification, logistic loss. Now let's train a model with hinge loss. This is also called a support vector machine (SVM) classifier with a linear kernel. Threshold tuning is supported for all binary classifier models in linear learner.
``` # Training a binary classifier with hinge loss and automated threshold tuning svm_hyperparams = { 'feature_dim': 30, 'predictor_type': 'binary_classifier', 'loss': 'hinge_loss', 'binary_classifier_model_selection_criteria': 'precision_at_target_recall', 'target_recall': 0.9, 'epochs': 40 } svm_output_path = 's3://{}/{}/svm/output'.format(bucket, prefix) svm_predictor = predictor_from_hyperparams(s3_train_path, svm_hyperparams, svm_output_path) ``` And finally, let's see what happens with balancing the class weights for the SVM model: ``` # Training a binary classifier with hinge loss, balanced class weights, and automated threshold tuning svm_balanced_hyperparams = { 'feature_dim': 30, 'predictor_type': 'binary_classifier', 'loss': 'hinge_loss', 'binary_classifier_model_selection_criteria': 'precision_at_target_recall', 'target_recall': 0.9, 'positive_example_weight_mult': 'balanced', 'epochs': 40 } svm_balanced_output_path = 's3://{}/{}/svm_balanced/output'.format(bucket, prefix) svm_balanced_predictor = predictor_from_hyperparams(s3_train_path, svm_balanced_hyperparams, svm_balanced_output_path) ``` Now we'll make use of the prediction endpoints we've set up for each model by sending them features from the test set and evaluating their predictions with standard binary classification metrics. ``` # Evaluate the trained models predictors = {'Logistic': defaults_predictor, 'Logistic with auto threshold': autothresh_predictor, 'Logistic with class weights': class_weights_predictor, 'Hinge with auto threshold': svm_predictor, 'Hinge with class weights': svm_balanced_predictor} metrics = {key: evaluate(predictor, test_features, test_labels, key, False) for key, predictor in predictors.items()} pd.set_option('display.float_format', lambda x: '%.3f' % x) display(pd.DataFrame(list(metrics.values())).loc[:, ['Model', 'Recall', 'Precision', 'Accuracy', 'F1']]) ``` The results are in!
With threshold tuning, we can accurately predict 85-90% of the fraudulent transactions in the test set (due to randomness in training, recall will vary between 0.85-0.9 across multiple runs). But in addition to those true positives, we'll have a high number of false positives: 90-95% of the transactions we predict to be fraudulent are in fact not fraudulent (precision varies between 0.05-0.1). This model would work well as a first line of defense, flagging potentially fraudulent transactions for further review. If we instead want a model that gives very few false alarms, at the cost of catching far fewer of the fraudulent transactions, then we should optimize for higher precision: `binary_classifier_model_selection_criteria='recall_at_target_precision', target_precision=0.9,` And what about the results of using our new feature, class weights for binary classification? Training with class weights has made a huge improvement to this model's performance! The precision is roughly doubled, while recall is still held constant at 85-90%. Balancing class weights improved the performance of our SVM predictor, but it still does not match the corresponding logistic regression model for this dataset. Comparing all of the models we've fit so far, logistic regression with class weights and tuned thresholds did the best. #### Note on target vs. observed recall It's worth taking some time to look more closely at these results. If we asked linear learner for a model calibrated to a target recall of 0.9, then why didn't we get exactly 90% recall on the test set? The reason is the difference between training, validation, and testing. Linear learner calibrates thresholds for binary classification on the validation data set when one is provided, or else on the training set. Since we did not provide a validation data set, the thresholds were calculated on the training data.
Since the training, validation, and test data sets don't match exactly, the target recall we request is only an approximation. In this case, the threshold that produced 90% recall on the training data happened to produce only 85-90% recall on the test data (due to some randomness in training, the results will vary from one run to the next). The variation of recall in the test set versus the training set depends on the number of positive points. In this example, although we have over 280,000 examples in the entire dataset, we only have 337 positive examples, hence the large difference. The accuracy of this approximation can be improved by providing a large validation data set to get a more accurate threshold, and then evaluating on a large test set to get a more accurate benchmark of the model and its threshold. For even more fine-grained control, we can set the number of calibration samples to a higher number. Its default value is already quite high at 10 million samples: `num_calibration_samples=10000000,` #### Clean Up Finally we'll clean up by deleting the prediction endpoints we set up: ``` for predictor in [defaults_predictor, autothresh_predictor, class_weights_predictor, svm_predictor, svm_balanced_predictor]: delete_endpoint(predictor) ``` We've just shown how to use linear learner's new early stopping feature, new loss functions, and new class weights feature to improve credit card fraud prediction. Class weights can help you optimize recall or precision for all types of fraud detection, as well as other classification problems with rare events, like ad click prediction or mechanical failure prediction. Try using class weights in your binary classification problem, or try one of the new loss functions for your regression problems: use quantile prediction to put confidence intervals around your predictions by learning 5% and 95% quantiles.
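As a hedged sketch of that quantile suggestion: with linear learner it would mean training two regressors with the quantile loss, one per quantile. The `'quantile_loss'` and `'quantile'` hyperparameter names follow the linear learner documentation; `s3_regression_train_path` and `quantile_output_path` are hypothetical placeholders for a regression dataset and output location, and the code reuses `predictor_from_hyperparams()` from above.

```python
# Sketch only: quantile regression with linear learner.
# The S3 paths are hypothetical placeholders for a regression dataset.
q05_hyperparams = {
    'feature_dim': 30,
    'predictor_type': 'regressor',
    'loss': 'quantile_loss',
    'quantile': 0.05,  # learn the 5% quantile
    'epochs': 40,
}
# Same settings, but targeting the 95% quantile
q95_hyperparams = dict(q05_hyperparams, quantile=0.95)

# q05_predictor = predictor_from_hyperparams(s3_regression_train_path, q05_hyperparams, quantile_output_path)
# q95_predictor = predictor_from_hyperparams(s3_regression_train_path, q95_hyperparams, quantile_output_path)
```

Predictions from the two models then bracket each test point with a 90% interval.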
For more information about new loss functions and class weights, see the linear learner [documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/linear-learner.html). ##### References Andrea Dal Pozzolo, Olivier Caelen, Reid A. Johnson and Gianluca Bontempi. Calibrating Probability with Undersampling for Unbalanced Classification. In Symposium on Computational Intelligence and Data Mining (CIDM), IEEE, 2015. See link to full license text on [Kaggle](https://www.kaggle.com/mlg-ulb/creditcardfraud).
<a href="https://colab.research.google.com/github/Shantanu9326/Banking-Marketing-Campaign-with-Spark/blob/master/Banking_Marketing_Campaign_with_pySpark.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # SIT742: Modern Data Science **(Assessment Task 02: Bank Marketing Data Analytics)** --- - Materials in this module include resources collected from various open-source online repositories. - You are free to use, change and distribute this package. Prepared by **SIT742 Teaching Team** --- **Project Group Information:** - Names: KISAN KUMAR BEHERA, DISHA NAYAK, SHANTANU GUPTA - Student IDs: 218465331, 218017796, 218200234 - Emails: kbehera@deakin.edu.au , knayak@deakin.edu.au, guptasha@deakin.edu.au --- # Import Spark ``` #INSTALLING SPARK AND PIP PACKAGES !pip install wget !apt-get install openjdk-8-jdk-headless -qq > /dev/null !wget -q https://archive.apache.org/dist/spark/spark-2.4.0/spark-2.4.0-bin-hadoop2.7.tgz !tar xf spark-2.4.0-bin-hadoop2.7.tgz !pip install -q findspark import os os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64" os.environ["SPARK_HOME"] = "/content/spark-2.4.0-bin-hadoop2.7" #IMPORTING SPARK SESSION import findspark findspark.init() from pyspark.sql import SparkSession from pyspark.sql import SQLContext ``` # Read & Check data ``` import wget link_to_data = 'https://github.com/tulip-lab/sit742/raw/master/Assessment/2019/data/bank.csv' DataSet = wget.download(link_to_data) !ls spark = SparkSession.builder.appName('ml-bank').getOrCreate() #IMPORTING BANK DATASET AS A SPARK DATA FRAME df = spark.read.csv('bank.csv', header = True, inferSchema = True) #SUMMARISING EACH COLUMN VALUES df.summary().show() # CHECKING DATA DISTRIBUTION df.printSchema() df.show(5) ``` # Select features ``` #Select features ('age', 'job', 'marital', 'education', 'default', 'balance', 'housing', 'loan', 'campaign', 'pdays', 'previous', 'poutcome', 'deposit') as df2 df2=df.select('age', 
'job', 'marital', 'education', 'default', 'balance', 'housing', 'loan', 'campaign', 'pdays', 'previous', 'poutcome', 'deposit') df2.show(5) cols=df2.columns from pyspark.sql import SparkSession spark = SparkSession.builder.appName('SQL').getOrCreate() #REGISTERING DF2 AS SQL TABLE NAMED "BANK" df2.registerTempTable("bank") #FILTERING UNKNOWN VALUES FROM ALL THE COLUMNS USING "AND"/"OR" LOGIC IN SPARK SQL sqlfilter=spark.sql("SELECT * FROM bank WHERE job!='unknown' AND education!='unknown' AND marital!='unknown' AND loan!='unknown' AND (poutcome == 'failure' OR poutcome == 'success')") #STORING IN NEW VARIABLE TO AVOID 'NONETYPE ERROR' df2=sqlfilter #DISPLAYING AND SUMMARIZING NEW DATA FRAME df2.show() df2.summary().show() from pyspark.ml.feature import OneHotEncoderEstimator, StringIndexer, VectorAssembler #SELECTING CATEGORICAL COLUMNS ONLY categoricalColumns = ['job', 'marital', 'education', 'default', 'housing', 'loan', 'poutcome'] #CREATING AN EMPTY LIST FOR PIPELINE AND ASSEMBLER stages = [] #APPLYING FOR LOOP TO INDEX AND ENCODE ALL THE SELECTED COLUMNS #APPLYING STRING INDEXER TO ALL THE CATEGORICAL COLUMNS AND STORING IT IN A NEW COLUMN WITH +INDEXED #APPLYING ONE HOT ENCODER TO ALL THE INDEXED COLUMNS AND STORING IT IN A NEW COLUMN WITH +ENCODED for categoricalCol in categoricalColumns: stringIndexer = StringIndexer(inputCol = categoricalCol, outputCol = categoricalCol + '_indexed') encoder = OneHotEncoderEstimator(inputCols=[stringIndexer.getOutputCol()], outputCols=[categoricalCol + "_encoded"]) stages += [stringIndexer, encoder] #INDEXING PREDICTOR COLUMN 'DEPOSIT' AS LABEL AND FEATURES label_stringIdx = StringIndexer(inputCol = 'deposit', outputCol = 'label') #CREATING STAGES FOR BOTH NUMERICAL AND CATEGORICAL COLUMNS stages += [label_stringIdx] numericCols = ['age', 'balance', 'campaign', 'pdays', 'previous'] #ADDING BOTH TO ASSEMBLER assemblerInputs = [c + "_encoded" for c in categoricalColumns] + numericCols #VECTORIZING TO CREATE A NEW
FEATURES COLUMN WITH INDEXED AND ENCODED VALUES assembler = VectorAssembler(inputCols=assemblerInputs, outputCol="features") stages += [assembler] from pyspark.ml import Pipeline #COMBINING ALL THE STAGES INTO ONE, FITTING DF2 AND TRANSFORMING IT pipeline = Pipeline(stages = stages) pipelineModel = pipeline.fit(df2) df2 = pipelineModel.transform(df2) #STORING IN NEW VARIABLE TO AVOID 'NONETYPE ERROR' df3=df2 df3.show(5) #ADDING ALL THE ORIGINAL COLUMNS TO THE NEW DATA FRAME selectedCols = ['label', 'features'] + cols df3 = df3.select(selectedCols) #DATA DISTRIBUTION OF NEW DATA FRAME df3.printSchema() df3.show(5) ``` ## Normalisation ``` #SELECTING ONLY THE ENCODED COLUMNS TO NORMALIZE from pyspark.ml.feature import MinMaxScaler norm_vars=['features','job_encoded','marital_encoded','loan_encoded','default_encoded','education_encoded','housing_encoded','poutcome_encoded'] #USING MIN-MAX SCALER FUNCTION TO SCALE IT DOWN BETWEEN 0 AND 1 scaler = [MinMaxScaler(inputCol=scale_features ,outputCol=scale_features+ "_SCALED") for scale_features in norm_vars] #PIPELINING FOR ALL THE COLUMNS AND FITTING IT AGAIN TO DF2 pipeline = Pipeline(stages=scaler) scalerModel = pipeline.fit(df2) scaledData = scalerModel.transform(df2) #DISPLAYING ALL THE NORMALIZED VALUES scaledData.show(5) #SELECTING ONLY THE REQUIRED COLUMNS FOR FURTHER SUPERVISED AND UNSUPERVISED LEARNING df4=scaledData.select('deposit','label','features','job_encoded_SCALED','marital_encoded_SCALED','loan_encoded_SCALED','default_encoded_SCALED','education_encoded_SCALED','housing_encoded_SCALED','poutcome_encoded_SCALED','features_SCALED') df4.show(5) df2.take(1) ``` # Unsupervised learning ## K-means ``` # Perform unsupervised learning on df2 with k-means # You can use whole df2 as both training and testing data, # Evaluate the clustering result using Accuracy.
from pyspark.ml.clustering import KMeans from pyspark.ml.evaluation import ClusteringEvaluator import matplotlib.pyplot as plt from pyspark.ml.evaluation import MulticlassClassificationEvaluator from pyspark.ml.evaluation import BinaryClassificationEvaluator %matplotlib inline from pyspark.ml import Pipeline #PERFORMING KMEANS CLUSTERING ON DF2 DATA FRAME #WHOLE DATA IS USED AS TRAINING AND TESTING DATA AS IT IS UNSUPERVISED kmeans = KMeans().setK(2).setSeed(742).setFeaturesCol("features") model = kmeans.fit(df4) predictions = model.transform(df4) predictions.select('label', 'prediction').show(10) #APPLYING KMEANS USING ELBOW METHOD TO GET THE OPTIMAL VALUE OF K TO HELP IN ANALYSIS IN REPORT kmeans3=KMeans(featuresCol="features",k=3) kmeans2=KMeans(featuresCol="features",k=2) model_k3 = kmeans3.fit(df4) model_k2 = kmeans2.fit(df4) wssse_k3 = model_k3.computeCost(df4) wssse_k2 = model_k2.computeCost(df4) print("With K=3") print("Within Set Sum of Squared Errors = " + str(wssse_k3)) print('--'*30) print("With K=2") print("Within Set Sum of Squared Errors = " + str(wssse_k2)) #APPLYING FOR LOOP FOR REST OF THE VALUES OF K for k in range(2,9): kmeans = KMeans(featuresCol='features_SCALED',k=k) models = kmeans.fit(df4) wssse = models.computeCost(df4) print("With K={}".format(k)) print("Within Set Sum of Squared Errors = " + str(wssse)) print('--'*30) #FINDING CLUSTER CENTRES FOR THE FIRST KMEANS WE HAD APPLIED TO DF2 centers = model.clusterCenters() print("Cluster Centers: ") for center in centers: print(center) #CONVERTING THE PREDICTION AND LABEL DATA FRAME TO PANDAS FOR ACCURACY import numpy as np from sklearn.metrics import accuracy_score #USING TOPANDAS FUNCTION FOR CONVERSION preds_pd = predictions.toPandas() #USING ACCURACY FUNCTION TO FIND ACCURACY accuracy_score(preds_pd.label, preds_pd.prediction) ``` ## Principal Component Analysis (PCA) ``` #IMPORTING PACKAGES FOR PCA from
pyspark.ml.feature import PCA from pyspark.mllib.linalg.distributed import RowMatrix from sklearn.feature_extraction import DictVectorizer from pyspark.ml.feature import VectorAssembler from pyspark.mllib.linalg import Vectors from pyspark.ml import Pipeline import numpy as np #APPLYING PCA FUNCTION TO NORMALIZED FEATURE COLUMN ONLY pca = PCA(k=2, inputCol='features_SCALED', outputCol='pcaFeature') #FITTING AND TRANSFORMING THE DATA model = pca.fit(df4) model.transform(df4).show(5) #DISPLAYING ONLY PREDICTOR VARIABLE AND ITS PCA FEATURES result = model.transform(df4).select("deposit","pcaFeature") result.show(truncate=False) #IMPORTING PACKAGES FOR SCATTER PLOT from pyspark.ml.linalg import Vectors import numpy as np import matplotlib.pyplot as plt import seaborn as sns; sns.set() #SEPARATING THE PCA FEATURES AS PER DEPOSIT RESPONSE #COLLECT FUNCTION TO CONVERT THE SPARK DATAFRAME TO A LIST OF ROWS x=result.select('pcaFeature').where(result.deposit=='yes').collect() y=result.select('pcaFeature').where(result.deposit=='no').collect() #CREATING EMPTY LIST FOR APPENDING ycomp1=[] ncomp1=[] ycomp2=[] ncomp2=[] #FOR LOOP FOR FIRST ARRAY for row in x: array=row['pcaFeature'].toArray() ycomp1.append(array[0]) ncomp1.append(array[1]) #FOR LOOP FOR SECOND ARRAY for row in y: array=row['pcaFeature'].toArray() ycomp2.append(array[0]) ncomp2.append(array[1]) #USING SEABORN PACKAGE FOR PLOTTING sns.scatterplot(ycomp1,ncomp1) sns.scatterplot(ycomp2,ncomp2) plt.xlabel("PC1") plt.ylabel("PC2") plt.legend(["yes",'no']) ``` # Supervised learning ``` #SPLITTING NORMALIZED DATA FRAME INTO 70% AND 30% RATIO train, test = df4.randomSplit([0.7, 0.3], seed = 742) print("Training Dataset Count: " + str(train.count())) print("Test Dataset Count: " + str(test.count())) ``` ## Logistic Regression Model ``` #IMPORTING PACKAGES FOR LOGISTIC REGRESSION from pyspark.ml.classification import LogisticRegression from pyspark.ml.evaluation import
MulticlassClassificationEvaluator from pyspark.ml.evaluation import BinaryClassificationEvaluator #FITTING A LOGISTIC REGRESSION MODEL TO TRAIN DATA USING NORMALIZED FEATURES lr = LogisticRegression(featuresCol = 'features_SCALED', labelCol = 'label', maxIter=10) lrModel = lr.fit(train) #SORTING AND PLOTTING ALL THE 23 COEFFICIENTS OF THE MODEL import matplotlib.pyplot as plt import numpy as np beta = np.sort(lrModel.coefficients) #PLOTTING AND LABELLING plt.plot(beta) plt.ylabel('Beta Coefficients') plt.show() #PRINTING COEFFICIENTS AND INTERCEPT FOR THE MODEL print(beta) print("Coefficients: " + str(lrModel.coefficients)) print("Intercept: " + str(lrModel.intercept)) #CALCULATING ROC AND PLOTTING IT #USING SUMMARY FUNCTION TO GET ALL THE PARAMETERS trainingSummary = lrModel.summary roc = trainingSummary.roc.toPandas() plt.plot(roc['FPR'],roc['TPR']) plt.ylabel('True Positive Rate') plt.xlabel('False Positive Rate') plt.title('ROC Curve') plt.show() print('Training set areaUnderROC: ' + str(trainingSummary.areaUnderROC)) ``` The larger the area under the ROC curve, the better the model. ``` #PLOTTING PRECISION VS RECALL GRAPH pr = trainingSummary.pr.toPandas() plt.plot(pr['recall'],pr['precision']) plt.ylabel('Precision') plt.xlabel('Recall') plt.show() #CALCULATING PREDICTION AND PROBABILITY FOR ALL THE FEATURES predictionsLR = lrModel.transform(test) predictionsLR.select( 'features','label', 'rawPrediction', 'prediction', 'probability').show(10) #USING BINARYCLASS EVALUATOR FOR TEST AREA UNDER ROC CALCULATION #DEFAULT METRIC FOR BINARY CLASS IS AREA UNDER ROC from pyspark.ml.evaluation import BinaryClassificationEvaluator evaluator = BinaryClassificationEvaluator() print('Test Area Under ROC', evaluator.evaluate(predictionsLR)) #PRINTING ONLY LABEL AND PREDICTION FOR ACCURACY CALCULATION predictionsLR.select("label","prediction").show(5) #MULTICLASS EVALUATOR FOR ACCURACY USING PREDICTION AND LABEL COLUMNS from pyspark.ml.evaluation import MulticlassClassificationEvaluator my_eval1 =
MulticlassClassificationEvaluator(predictionCol='prediction',labelCol='label', metricName='accuracy') acc = my_eval1.evaluate(predictionsLR) print("accuracy=%g" %(acc)) #CALCULATING CONFUSION MATRIX OF THE LR MODEL USING MULTICLASS METRICS from pyspark.mllib.evaluation import MulticlassMetrics predandlabel=predictionsLR.select( 'label', 'prediction').rdd metrics = MulticlassMetrics(predandlabel) print(metrics.confusionMatrix()) #METRICS FOR PRECISION, RECALL AND F1SCORE cm=metrics.confusionMatrix().toArray() precision=(cm[0][0])/(cm[0][0]+cm[1][0]) recall=(cm[0][0])/(cm[0][0]+cm[0][1]) f1score =((2*precision*recall )/ (precision + recall)) print("Logistic regression:precision,recall,f1score",precision,recall,f1score) #PLOTTING HEATMAP OF ALL THE METRICS PARAMETERS USING SEABORN PACKAGE import seaborn as sns sns.heatmap(cm, annot=True) ``` ## Decision Tree Model ``` #IMPORTING PACKAGE FOR DECISION TREE MODEL #FITTING TRAIN AND TEST DATA USING FEATURES AND LABEL COLUMNS from pyspark.ml.classification import DecisionTreeClassifier dt = DecisionTreeClassifier(featuresCol = 'features', labelCol = 'label', maxDepth = 3) dtModel = dt.fit(train) predictions1 = dtModel.transform(test) predictions1.select( 'label', 'rawPrediction', 'prediction', 'probability').show(10) #CALCULATING AREA UNDER ROC evaluator = BinaryClassificationEvaluator() print("Test Area Under ROC: " + str(evaluator.evaluate(predictions1, {evaluator.metricName: "areaUnderROC"}))) #CALCULATING ACCURACY USING MULTICLASS EVALUATOR from pyspark.ml.evaluation import MulticlassClassificationEvaluator my_eval2 = MulticlassClassificationEvaluator(predictionCol='prediction',labelCol='label', metricName='accuracy') acc = my_eval2.evaluate(predictions1) print("accuracy=%g" %(acc)) #PRINTING CONFUSION MATRIX FOR DECISION TREE MODEL from pyspark.mllib.evaluation import MulticlassMetrics predandlabel=predictions1.select( 'label', 'prediction').rdd metrics1 = MulticlassMetrics(predandlabel) 
print(metrics1.confusionMatrix()) #METRICS FUNCTION TO EVALUATE RECALL, PRECISION AND F1SCORE cm=metrics1.confusionMatrix().toArray() precision=(cm[0][0])/(cm[0][0]+cm[1][0]) recall=(cm[0][0])/(cm[0][0]+cm[0][1]) f1score =((2*precision*recall )/ (precision + recall)) print("Decision Tree:precision,recall,f1score",precision,recall,f1score) #PRINTING ALL THE IMPORTANT FEATURES dtModel.featureImportances #PLOTTING THE CONFUSION MATRIX AS A SEABORN HEATMAP import seaborn as sns sns.heatmap(cm, annot=True) ``` ## Naive Bayes Model ``` #IMPORTING NAIVE BAYES PACKAGE from pyspark.ml.classification import NaiveBayes #SELECTING NORMALIZED COLUMNS FOR MODELLING nbdf=predictions.select('label','job_encoded_SCALED','marital_encoded_SCALED','loan_encoded_SCALED','default_encoded_SCALED','education_encoded_SCALED','housing_encoded_SCALED','poutcome_encoded_SCALED','features_SCALED',"prediction") #RENAMING SCALED FEATURES COLUMN TO 'FEATURES' FOR NAIVE BAYES MODEL nbdf = nbdf.selectExpr("label as label","features_SCALED as features") nbdf.show(3) #SPLITTING THE LABEL AND FEATURES DATA AS TRAIN AND TEST DATA trainbn, testnb = nbdf.randomSplit([0.7, 0.3], seed = 742) #APPLYING NAIVE BAYES AND FITTING IT TO THE TRAINING DATA #USING MULTINOMIAL METHOD BECAUSE BERNOULLI REQUIRES ONLY BINARY INPUT nb1=NaiveBayes(modelType="multinomial") nbmodel=nb1.fit(trainbn) nb_predictions=nbmodel.transform(testnb) nb_evaluator=MulticlassClassificationEvaluator(labelCol="label",predictionCol="prediction",metricName="accuracy") #EVALUATING PREDICTION AND PROBABILITY VALUES nb_accuracy=nb_evaluator.evaluate(nb_predictions) nb_predictions.select( 'label', 'rawPrediction', 'prediction', 'probability').show(10) print("Test Area Under ROC: " + str(evaluator.evaluate(nb_predictions, {evaluator.metricName: "areaUnderROC"}))) #CALCULATING ACCURACY print("accuracy=%g" %(nb_accuracy)) #PRINTING CONFUSION MATRIX FOR NAIVE BAYES MODEL from pyspark.mllib.evaluation import MulticlassMetrics predandlabel=nb_predictions.select( 'label',
'prediction').rdd metrics2 = MulticlassMetrics(predandlabel) print(metrics2.confusionMatrix()) #CALCULATING PRECISION, RECALL AND F1SCORE cm2=metrics2.confusionMatrix().toArray() precision=(cm2[0][0])/(cm2[0][0]+cm2[1][0]) recall=(cm2[0][0])/(cm2[0][0]+cm2[0][1]) f1score =((2*precision*recall )/ (precision + recall)) print("NAIVE BAYES:precision,recall,f1score",precision,recall,f1score) #PLOTTING THE CONFUSION MATRIX AS A SEABORN HEATMAP import seaborn as sns sns.heatmap(cm2, annot=True) ```
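The precision/recall/F1 arithmetic above is written out by hand three times, once each for the logistic regression, decision tree, and naive Bayes models. It could be factored into one small helper. A minimal NumPy-only sketch, assuming the rows-are-actuals / columns-are-predictions layout returned by `MulticlassMetrics.confusionMatrix()`, with class 0 treated as the positive class as in the cells above:

```python
import numpy as np

def metrics_from_confusion(cm, positive=0):
    """Precision, recall and F1 for one class of a 2x2 confusion matrix
    (rows = actual labels, columns = predicted labels)."""
    cm = np.asarray(cm, dtype=float)
    p = positive
    tp = cm[p, p]
    fn = cm[p].sum() - tp        # actual positives predicted as the other class
    fp = cm[:, p].sum() - tp     # other-class examples predicted as positive
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Toy example: 50 true positives, 10 false negatives, 5 false positives
precision, recall, f1 = metrics_from_confusion([[50, 10], [5, 35]])
```

Each hand-written block would then reduce to a single call such as `metrics_from_confusion(metrics1.confusionMatrix().toArray())`.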
``` import re import os import random import itertools import numpy as np import pandas as pd import seaborn as sns import tensorflow as tf from sklearn import metrics from tensorflow import keras from sklearn.ensemble import RandomForestClassifier from tensorflow.keras import backend as K import matplotlib.pyplot as plt import xgboost as xgb from sklearn.utils import shuffle from tensorflow.keras.models import Sequential from tensorflow.keras.preprocessing import sequence from sklearn.model_selection import train_test_split from sklearn.metrics import classification_report,confusion_matrix from tensorflow.keras.layers import LSTM, Dense, Dropout, Embedding, Conv2D, MaxPooling2D, Flatten from tensorflow.keras.callbacks import EarlyStopping,ModelCheckpoint,ReduceLROnPlateau from pyspark.ml.feature import Tokenizer, RegexTokenizer from pyspark.ml.classification import LinearSVC from pyspark.sql.functions import col, udf from pyspark.sql.types import IntegerType from pyspark.ml.feature import NGram,HashingTF, IDF from sklearn.feature_extraction.text import TfidfVectorizer from pyspark.ml.feature import StandardScaler from pyspark.sql.functions import lit from pyspark.mllib.feature import StandardScaler, StandardScalerModel from pyspark.mllib.linalg import Vectors from pyspark.mllib.util import MLUtils from pyspark.ml.classification import LogisticRegression, OneVsRest from pyspark.ml import Pipeline from pyspark.sql import Row from sklearn.feature_extraction.text import TfidfVectorizer from pyspark.ml.feature import RegexTokenizer, StopWordsRemover, CountVectorizer from sklearn.linear_model import LogisticRegression SIZE = 100 BATCH_SIZE = 16 EPOCHS = 100 SEED = 0 os.environ['PYTHONHASHSEED']=str(SEED) random.seed(SEED) np.random.seed(SEED) df=pd.read_csv("/Users/abhinavshinow/Documents/GitHub/Mal_URL/Data/mal_2.csv") df2=pd.read_csv("/Users/abhinavshinow/Documents/GitHub/Mal_URL/Data/mal_3.csv") df.drop(df.columns[df.columns.str.contains('unnamed',case = False)],axis 
= 1, inplace = True) df.drop('label',axis = 1, inplace = True) df=df.rename(columns={'result': 'type'}) df2=df2.rename(columns={'label': 'type'}) df2['type']=df2['type'].replace({'bad':1,'good':0}) df.head() df2.head() def getTokens(input): tokensBySlash = str(input.encode('utf-8')).split('/') allTokens=[] for i in tokensBySlash: tokens = str(i).split('-') tokensByDot = [] for j in range(0,len(tokens)): tempTokens = str(tokens[j]).split('.') tokensByDot = tokensByDot + tempTokens allTokens = allTokens + tokens + tokensByDot allTokens = list(set(allTokens)) if 'com' in allTokens: allTokens.remove('com') return allTokens #Model--1 data1 = np.array(df) y1=[d[1] for d in data1] url1=[d[0] for d in data1] vectorised_url1=TfidfVectorizer() x1=vectorised_url1.fit_transform(url1) x_train1, x_test1, y_train1, y_test1 = train_test_split(x1,y1,test_size=0.2,shuffle=True,stratify=y1) #Model--2 data2 = np.array(df2) y2=[d[1] for d in data2] url2=[d[0] for d in data2] vectorised_url2=TfidfVectorizer() x2=vectorised_url2.fit_transform(url2) x_train2, x_test2, y_train2, y_test2 = train_test_split(x2,y2,test_size=0.2,shuffle=True,stratify=y2) #Logistic Regression model_lg1 = LogisticRegression(solver='lbfgs', max_iter=10000) model_lg2 = LogisticRegression(solver='lbfgs', max_iter=10000) #XGBoost model_xg1 = xgb.XGBClassifier(n_jobs = 8) model_xg2 = xgb.XGBClassifier(n_jobs = 8) #Random Forest model_rf1 = RandomForestClassifier(n_estimators=100) model_rf2 = RandomForestClassifier(n_estimators=100) # model_dc1=DecisionTreeClassifier() # model_dc2=DecisionTreeClassifier() model_lg1.fit(x_train1,y_train1) model_lg1.score(x_test1,y_test1) model_lg2.fit(x_train2,y_train2) model_lg2.score(x_test2,y_test2) model_xg1.fit(x_train1,y_train1) model_xg1.score(x_test1,y_test1) model_xg2.fit(x_train2,y_train2) model_xg2.score(x_test2,y_test2) model_rf1.fit(x_train1,y_train1) model_rf1.score(x_test1,y_test1) model_rf2.fit(x_train2,y_train2) model_rf2.score(x_test2,y_test2) pred_lg1 =
model_lg1.predict(x_test1) pred_lg2 = model_lg2.predict(x_test2) pred_xg1 = model_xg1.predict(x_test1) pred_xg2 = model_xg2.predict(x_test2) pred_rf1 = model_rf1.predict(x_test1) pred_rf2 = model_rf2.predict(x_test2) print(classification_report(y_test1,pred_lg1)) print(classification_report(y_test2,pred_lg2)) print(classification_report(y_test1,pred_xg1)) print(classification_report(y_test2,pred_xg2)) print(classification_report(y_test1,pred_rf1)) print(classification_report(y_test2,pred_rf2)) ```
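One thing worth noting: `getTokens` is defined above but never passed to either `TfidfVectorizer`, so both vectorizers fall back to scikit-learn's default token pattern. If URL-aware tokenization was the intent, it can be wired in through the `tokenizer` argument. A sketch, where `url_tokens` is a compact, hypothetical rewrite of `getTokens` rather than the notebook's own code:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

def url_tokens(url):
    """Split a URL on '/', '-' and '.' and drop the common 'com' token
    (a compact version of the getTokens idea above)."""
    tokens = []
    for part in str(url).split('/'):
        for piece in part.split('-'):
            tokens.extend(piece.split('.'))
    tokens = sorted(set(tokens))
    if 'com' in tokens:
        tokens.remove('com')
    return tokens

# Any callable can replace the default token pattern
vectoriser = TfidfVectorizer(tokenizer=url_tokens)
# x1 = vectoriser.fit_transform(url1)  # drop-in replacement for the default vectorizer
```

Whether the custom tokenizer helps is an empirical question; it would be worth comparing classification reports with and without it.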
# Getting Started with SYMPAIS [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ethanluoyc/sympais/blob/master/notebooks/getting_started.ipynb) ## Setup ``` try: import google.colab IN_COLAB = True except: IN_COLAB = False ``` ### Install SYMPAIS ``` # (TODO(yl): Simplify when we make this public) GIT_TOKEN = "" if IN_COLAB: !pip install -U pip setuptools wheel if GIT_TOKEN: !pip install git+https://{GIT_TOKEN}@github.com/ethanluoyc/sympais.git#egg=sympais else: !pip install git+https://github.com/ethanluoyc/sympais.git#egg=sympais ``` ### Download and install pre-built RealPaver v0.4 ``` if IN_COLAB: !curl -L "https://drive.google.com/uc?export=download&id=1_Im0Ot5TjkzaWfid657AV_gyMpnPuVRa" -o realpaver !chmod u+x realpaver !cp realpaver /usr/local/bin import jax import jax.numpy as jnp from sympais import tasks from sympais import methods from sympais.methods import run_sympais, run_dmc import seaborn as sns import matplotlib.pyplot as plt import matplotlib import numpy as onp import math %load_ext autoreload %autoreload 2 %matplotlib inline ``` ## Load a task ``` task = tasks.Sphere(nd=3) task.profile task.constraints task.domains ``` ## Run DMC baseline ``` dmc_output = run_dmc(task, seed=0, num_samples=int(1e8)) print(dmc_output) ``` ## Run SYMPAIS ``` sympais_output = run_sympais( task, key=jax.random.PRNGKey(0), num_samples=int(1e6), num_proposals=100, tune=False, init='realpaver', num_warmup_steps=500, window_size=100 ) print(sympais_output) ``` ## Create your own problem In this section, we will show how to implement a new probabilistic analysis task similar to the sphere task above. A probabilistic analysis `Task` consists of an input `Profile` $p(\mathbf{x})$ and a list of constraints `cs`. A user creates a new `Task` either by calling the super class constructor or by subclassing the base class.
Consider a two-dimensional problem where we would like to know the probability that the inputs $x \in [-10, 10]$ and $y \in [-10, 10]$ are jointly in the interior of a two-dimensional _cube_. The set of constraints is

$$
\begin{align}
x + y &\leq 1.0, \\
x + y &\geq -1.0, \\
y - x &\geq -1.0, \\
y - x &\leq 1.0.
\end{align}
$$

First, let's import the related modules used for defining the tasks:

```
import sympy

from sympais import tasks
from sympais import profiles
from sympais import distributions as dist
```

### Independent profile

We will first show how to define a task when the input variables are _independent_. We use `Profile` for defining the input distribution and SymPy expressions for defining the constraints.

The `Profile` uses the following interface. To create a customized profile, the user needs to implement the `profile.log_prob` and `profile.sample` functions. Note that unlike numpyro distributions, the samples are represented as a dictionary from variable names to their values. This is so that it is easier to integrate with a symbolic execution engine.

```
help(profiles.Profile)
```

When the input random variables are independent, we provide a convenience `IndependentProfile` class which allows you to specify the per-component distributions. `IndependentProfile` implements `sample` and `log_prob` by dispatching to the individual components and then aggregating the results.

We are now ready to define a task for the `cube` problem. The code is shown below.
```
class IndependentCubeTask(tasks.Task):

  def __init__(self):
    profile = profiles.IndependentProfile({
        "x": dist.Normal(loc=-2, scale=1),
        "y": dist.Normal(loc=-2, scale=1)
    })
    domains = {"x": (-10., 10.), "y": (-10., 10.)}
    b = 1.0
    x = sympy.Symbol("x")
    y = sympy.Symbol("y")
    c1 = x + y <= b  # type: sympy.Expr
    c2 = x + y >= -b  # type: sympy.Expr
    c3 = y - x >= -b  # type: sympy.Expr
    c4 = y - x <= b  # type: sympy.Expr
    super().__init__(profile, [c1, c2, c3, c4], domains)
```

Let us create some helper functions for visualizing the profile and the constraints.

```
b = 1.

def f1(x): return b - x
def f2(x): return -b - x
def f3(x): return -b + x
def f4(x): return b + x

x = sympy.Symbol('x')
x1, = sympy.solve(f1(x) - f3(x))
x2, = sympy.solve(f1(x) - f4(x))
x3, = sympy.solve(f2(x) - f3(x))
x4, = sympy.solve(f2(x) - f4(x))

y1 = f1(x1)
y2 = f1(x2)
y3 = f2(x3)
y4 = f2(x4)

N = 200
X, Y = jnp.meshgrid(jnp.linspace(-4, 4, N), jnp.linspace(-4, 4, N))
xr = jnp.linspace(-3, 3, 100)

def plot_constraints(ax):
  ax.plot(x1, y1, 'k', markersize=5)
  ax.plot(x2, y2, 'k', markersize=5)
  ax.plot(x3, y3, 'k', markersize=5)
  ax.plot(x4, y4, 'k', markersize=5)
  ax.fill([x1, x2, x4, x3], [y1, y2, y4, y3], 'gray', alpha=0.5);
  y1r = f1(xr)
  y2r = f2(xr)
  y3r = f3(xr)
  y4r = f4(xr)
  ax.plot(xr, y1r, 'w--')
  ax.plot(xr, y2r, 'w--')
  ax.plot(xr, y3r, 'w--')
  ax.plot(xr, y4r, 'w--')

cube_task = IndependentCubeTask()
logp = cube_task.profile.log_prob(
    {'x': X.reshape(-1), "y": Y.reshape(-1)}).reshape((N, N))

fig, ax = plt.subplots(1, 1, figsize=(3, 3))
ax.contourf(X, Y, logp, levels=20, cmap='Blues_r')
plot_constraints(ax)
ax.set(xlim=(-3, 2), ylim=(-3, 2), xlabel='$x$', ylabel='$y$');
```

### Correlated profile

In the general case, the inputs may be correlated. In this case, the user needs to provide a custom implementation of `Profile`. We will show how to do this for the case where $x$ and $y$ are jointly Gaussian.
```
from numpyro import distributions as numpyro_dist

class CorrelatedProfile(profiles.Profile):

  def __init__(self):
    self._dist = numpyro_dist.MultivariateNormal(
        loc=jnp.array([-2, -2]),
        covariance_matrix=jnp.array([[1.0, 0.8], [0.8, 1.5]])
    )

  def sample(self, rng, sample_shape=()):
    samples = self._dist.sample(rng, sample_shape=sample_shape)
    # We need the [..., ] to maintain batch dimensions.
    return {'x': samples[..., 0], 'y': samples[..., 1]}

  def log_prob(self, samples):
    samples = jnp.stack([samples['x'], samples['y']], -1)
    return self._dist.log_prob(samples)

class CorrelatedCubeTask(tasks.Task):

  def __init__(self):
    b = 1.0
    x = sympy.Symbol("x")
    y = sympy.Symbol("y")
    c1 = x + y <= b  # type: sympy.Expr
    c2 = x + y >= -b  # type: sympy.Expr
    c3 = y - x >= -b  # type: sympy.Expr
    c4 = y - x <= b  # type: sympy.Expr
    profile = CorrelatedProfile()
    domains = {"x": (-10., 10.), "y": (-10., 10.)}
    super().__init__(profile, [c1, c2, c3, c4], domains)
```

All of the benchmarks are defined similarly to the examples shown above. If you are interested, check out the source code in src/sympais/tasks for more examples.

```
correlated_cube_task = CorrelatedCubeTask()
logp = correlated_cube_task.profile.log_prob(
    {'x': X.reshape(-1), "y": Y.reshape(-1)}).reshape((N, N))

fig, ax = plt.subplots(1, 1, figsize=(3, 3))
ax.contourf(X, Y, logp, levels=20, cmap='Blues_r')
plot_constraints(ax)
ax.set(xlim=(-3, 2), ylim=(-3, 2), xlabel='$x$', ylabel='$y$');
```

### Run samplers

Now that we have our new task definitions, let's run DMC and SYMPAIS on these tasks.

```
dmc_output = run_dmc(correlated_cube_task, seed=0, num_samples=int(1e8), batch_size=int(1e6))
print(dmc_output)

sympais_output = run_sympais(
    correlated_cube_task,
    key=jax.random.PRNGKey(0),
    num_samples=int(1e6),
    num_proposals=100,
    tune=False,
    init='realpaver',
    num_warmup_steps=500,
    window_size=100
)
print(sympais_output)
```
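It is also worth sanity-checking a task with plain direct Monte Carlo outside the library. A minimal sketch (independent of the SYMPAIS API) that estimates the satisfaction probability of the independent cube task, where $x, y \sim \mathcal{N}(-2, 1)$ i.i.d. and the four constraints reduce to $|x+y| \le 1$ and $|y-x| \le 1$. Since $u = x+y$ and $v = y-x$ are uncorrelated (hence independent) Gaussians here, the probability also factorizes in closed form, which gives us something to check the estimate against:

```python
import math
import random

random.seed(0)

# Direct Monte Carlo: sample from the input profile, count constraint hits.
n = 200_000
hits = 0
for _ in range(n):
    x = random.gauss(-2.0, 1.0)
    y = random.gauss(-2.0, 1.0)
    if abs(x + y) <= 1.0 and abs(y - x) <= 1.0:
        hits += 1
p_mc = hits / n

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# u = x + y ~ N(-4, sqrt(2)) and v = y - x ~ N(0, sqrt(2)) are independent,
# so P(|u| <= 1, |v| <= 1) factorizes into two one-dimensional probabilities.
s = math.sqrt(2.0)
p_exact = ((norm_cdf((1.0 + 4.0) / s) - norm_cdf((-1.0 + 4.0) / s))
           * (norm_cdf(1.0 / s) - norm_cdf(-1.0 / s)))
```

The event is rare (under one percent by the closed form above), which is exactly the regime where direct Monte Carlo needs many samples and SYMPAIS's interval-guided proposals pay off.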
Copyright (c) Microsoft Corporation. All rights reserved.

Licensed under the MIT License.

### Agent Testing - Single Job Set

In this notebook we test the performance of the agent trained with a single job set. We can then compare its performance to the random and shortest-job-first agents in the exploration notebook. Notice that in this case we are using the same job set for all agents.

Then we show the performance of the agent for an unseen job set and notice that it performs poorly, almost like a random agent.

This lab was tested with Ray version 0.8.5. Please make sure you have this version installed in your Compute Instance.

```
!pip install ray[rllib]==0.8.5
```

Import the necessary packages.

```
import sys, os
sys.path.insert(0, os.path.join(os.getcwd(), '../agent_training/training_scripts/environment'))
os.environ.setdefault('PYTHONPATH', os.path.join(os.getcwd(), '../agent_training/training_scripts/environment'))

import ray
import ray.rllib.agents.pg as pg
from ray.rllib.models.torch.torch_modelv2 import TorchModelV2
from ray.rllib.models import ModelCatalog
from ray.rllib.utils.annotations import override
from ray.tune.registry import register_env

import gym
from gym import spaces
from environment import Parameters, Env

import torch
import torch.nn as nn
import numpy as np
```

Here we define the RL environment class according to the Gym specification, in the same way as in the agent training script. The difference is that here we add two new methods, *observe* and *plot_state_img*, allowing us to visualize the states of the environment as the agent acts. Details about how to work with custom environments in RLLib can be found [here](https://docs.ray.io/en/master/rllib-env.html#configuring-environments).

We also introduce a new parameter to the environment constructor, *unseen*, which is a flag telling the environment to use unseen job sets, meaning job sets different from the ones used for training.
```
class CustomEnv(gym.Env):

    def __init__(self, env_config):
        simu_len = env_config['simu_len']
        num_ex = env_config['num_ex']
        unseen = env_config['unseen']
        pa = Parameters()
        pa.simu_len = simu_len
        pa.num_ex = num_ex
        pa.unseen = unseen
        pa.compute_dependent_parameters()
        self.env = Env(pa, render=False, repre='image')
        self.action_space = spaces.Discrete(n=pa.num_nw + 1)
        self.observation_space = spaces.Box(low=0, high=1, shape=self.env.observe().shape, dtype=np.float)

    def reset(self):
        self.env.reset()
        obs = self.env.observe()
        return obs

    def step(self, action):
        next_obs, reward, done, info = self.env.step(action)
        info = {}
        return next_obs, reward, done, info

    def observe(self):
        return self.env.observe()

    def plot_state_img(self):
        return self.env.plot_state_img()
```

Define the RL environment constructor and register it for use in RLLib.

```
def env_creator(env_config):
    return CustomEnv(env_config)

register_env('CustomEnv', env_creator)
```

Here we define the custom model for the agent policy. RLLib supports both TensorFlow and PyTorch, and here we are using the PyTorch interfaces. The policy model is a simple 2-layer feedforward neural network that maps the environment observation array into one of 6 possible actions. It also defines a value function network as a branch of the policy network, to output a single scalar value representing the expected sum of rewards. This value can be used as the baseline for the policy gradient algorithm. More details about how to work with custom policy models with PyTorch in RLLib can be found [here](https://docs.ray.io/en/master/rllib-models.html#pytorch-models).
```
class CustomModel(TorchModelV2, nn.Module):

    def __init__(self, obs_space, action_space, num_outputs, model_config, name):
        TorchModelV2.__init__(self, obs_space, action_space, num_outputs, model_config, name)
        nn.Module.__init__(self)
        self.hidden_layers = nn.Sequential(nn.Linear(20*124, 32), nn.ReLU(),
                                           nn.Linear(32, 16), nn.ReLU())
        self.logits = nn.Sequential(nn.Linear(16, 6))
        self.value_branch = nn.Sequential(nn.Linear(16, 1))

    @override(TorchModelV2)
    def forward(self, input_dict, state, seq_lens):
        obs = input_dict['obs'].float()
        obs = obs.view(obs.shape[0], 1, obs.shape[1], obs.shape[2])
        obs = obs.view(obs.shape[0], obs.shape[1] * obs.shape[2] * obs.shape[3])
        self.features = self.hidden_layers(obs)
        actions = self.logits(self.features)
        return actions, state

    @override(TorchModelV2)
    def value_function(self):
        return self.value_branch(self.features).squeeze(1)
```

Now we register the custom policy model for use in RLLib.

```
ModelCatalog.register_custom_model('CustomModel', CustomModel)
```

Here we create a copy of the default Policy Gradient configuration in RLLib and set the relevant parameters for testing a trained agent. In this case we only need the parameters related to the custom model and to our environment.

```
config = pg.DEFAULT_CONFIG
my_config = config.copy()

my_params = {
    'use_pytorch' : True,
    'model': {'custom_model': 'CustomModel'},
    'env': 'CustomEnv',
    'env_config': {'simu_len': 50, 'num_ex': 1, 'unseen': False}
}

for key, value in my_params.items():
    my_config[key] = value
```

Initialize the Ray backend. Here we run Ray locally.

```
ray.init()
```

Instantiate the policy gradient trainer object from RLLib.

```
trainer = pg.PGTrainer(config=my_config)
```

We can verify the policy model architecture by getting a reference to the policy object from the trainer and a reference to the model object from the policy.
```
policy = trainer.get_policy()
model = policy.model
print(model.parameters)
```

Here we load the model checkpoint, corresponding to the single job set training, into the trainer.

```
checkpoint_path = '../model_checkpoints/single_jobset/checkpoint-300'
trainer.restore(checkpoint_path=checkpoint_path)
```

We then perform a rollout of the trained policy in the RL environment, using the same single job set used for training.

```
import numpy as np
from IPython import display
import matplotlib.pyplot as plt
import time
from random import randint

env = CustomEnv(env_config = my_params['env_config'])

img = env.plot_state_img()
plt.figure(figsize = (16,16))
plt.grid(color='w', linestyle='-', linewidth=0.5)
plt.text(2, -2, "RESOURCES")
plt.text(-4, 10, "CPU")
plt.text(-4, 30, "MEM")
plt.text(14, -2, "JOB QUEUE #1")
plt.text(26, -2, "JOB QUEUE #2")
plt.text(38, -2, "JOB QUEUE #3")
plt.text(50, -2, "JOB QUEUE #4")
plt.text(62, -2, "JOB QUEUE #5")
plt.text(76, 20, "BACKLOG")
plt.imshow(img, vmax=1, cmap='CMRmap')
ax = plt.gca()
ax.set_xticks(np.arange(-.5, 100, 1))
ax.set_xticklabels([])
ax.set_yticks(np.arange(-.5, 100, 1))
ax.set_yticklabels([])
ax.tick_params(axis=u'both', which=u'both', length=0)
image = plt.imshow(img, vmax=1, cmap='CMRmap')
display.display(plt.gcf())

actions = []
rewards = []
done = False
s = 0
txt1 = plt.text(0, 45, '')
txt2 = plt.text(0, 47, '')
obs = env.observe()

while not done:
    a = trainer.compute_action(obs)
    actions.append(a)
    obs, reward, done, info = env.step(a)
    rewards.append(reward)
    s += 1
    txt1.remove()
    txt2.remove()
    txt1 = plt.text(0, 44, 'STEPS: ' + str(s), fontsize=14)
    txt2 = plt.text(0, 46, 'TOTAL AVERAGE JOB SLOWDOWN: ' + str(round(-sum(rewards))), fontsize=14)
    img = env.plot_state_img()
    image.set_data(img)
    display.display(plt.gcf())
    display.clear_output(wait=True)
```

And finally we perform another rollout of the trained policy, but now using an unseen job set, meaning a job set different from the one used for training.
We notice here that the agent is not able to generalize well and its performance is similar to the performance of a random policy. This will be mitigated by training the agent with multiple distinct job sets.

```
import numpy as np
from IPython import display
import matplotlib.pyplot as plt
import time
from random import randint

env_config = my_params['env_config']
env_config['unseen'] = True
env = CustomEnv(env_config=env_config)

img = env.plot_state_img()
plt.figure(figsize = (16,16))
plt.grid(color='w', linestyle='-', linewidth=0.5)
plt.text(2, -2, "RESOURCES")
plt.text(-4, 10, "CPU")
plt.text(-4, 30, "MEM")
plt.text(14, -2, "JOB QUEUE #1")
plt.text(26, -2, "JOB QUEUE #2")
plt.text(38, -2, "JOB QUEUE #3")
plt.text(50, -2, "JOB QUEUE #4")
plt.text(62, -2, "JOB QUEUE #5")
plt.text(76, 20, "BACKLOG")
plt.imshow(img, vmax=1, cmap='CMRmap')
ax = plt.gca()
ax.set_xticks(np.arange(-.5, 100, 1))
ax.set_xticklabels([])
ax.set_yticks(np.arange(-.5, 100, 1))
ax.set_yticklabels([])
ax.tick_params(axis=u'both', which=u'both', length=0)
image = plt.imshow(img, vmax=1, cmap='CMRmap')
display.display(plt.gcf())

actions = []
rewards = []
done = False
s = 0
txt1 = plt.text(0, 45, '')
txt2 = plt.text(0, 47, '')
obs = env.observe()

while not done:
    a = trainer.compute_action(obs)
    actions.append(a)
    obs, reward, done, info = env.step(a)
    rewards.append(reward)
    s += 1
    txt1.remove()
    txt2.remove()
    txt1 = plt.text(0, 44, 'STEPS: ' + str(s), fontsize=14)
    txt2 = plt.text(0, 46, 'TOTAL AVERAGE JOB SLOWDOWN: ' + str(round(-sum(rewards))), fontsize=14)
    img = env.plot_state_img()
    image.set_data(img)
    display.display(plt.gcf())
    display.clear_output(wait=True)
```

Shutdown the Ray backend.

```
ray.shutdown()
```
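The `value_branch` in the custom model earlier in this notebook exists to serve as a baseline for the policy gradient. A minimal, framework-free sketch (with hypothetical numbers, not outputs of this notebook) of the two quantities involved: the discounted return-to-go at each step of a rollout, and the advantage obtained by subtracting a baseline from it, which lowers the variance of the gradient estimate without changing its expectation:

```python
def discounted_returns(rewards, gamma):
    """Return-to-go G_t = r_t + gamma * G_{t+1}, computed right to left."""
    returns = [0.0] * len(rewards)
    g = 0.0
    for t in reversed(range(len(rewards))):
        g = rewards[t] + gamma * g
        returns[t] = g
    return returns

# Hypothetical per-step rewards from a short rollout.
rewards = [1.0, 1.0, 1.0]
returns = discounted_returns(rewards, gamma=0.5)  # [1.75, 1.5, 1.0]

# Hypothetical value-branch predictions b(s_t) for the same states.
baselines = [1.6, 1.4, 0.9]
advantages = [g - b for g, b in zip(returns, baselines)]
```

In the policy gradient update, each log-probability term is scaled by the advantage rather than the raw return; a well-trained value branch makes these advantages small and centered, which is why it reduces variance.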
# Create a Learner for inference

```
from fastai.gen_doc.nbdoc import *
```

In this tutorial, we'll see how the same API allows you to create an empty [`DataBunch`](/basic_data.html#DataBunch) for a [`Learner`](/basic_train.html#Learner) at inference time (once you have trained your model) and how to call the `predict` method to get the predictions on a single item.

```
jekyll_note("""As usual, this page is generated from a notebook that you can find in the <code>docs_src</code> folder of the <a href="https://github.com/fastai/fastai">fastai repo</a>. We use the saved models from <a href="/tutorial.data.html">this tutorial</a> to have this notebook run quickly.""")
```

## Vision

To quickly get access to all the vision functionality inside fastai, we use the usual import statements.

```
from fastai.vision import *
```

### A classification problem

Let's begin with our sample of the MNIST dataset.

```
mnist = untar_data(URLs.MNIST_TINY)
tfms = get_transforms(do_flip=False)
```

It's set up with an imagenet structure, so we use that to split our training and validation sets, then to label the data.

```
data = (ImageItemList.from_folder(mnist)
        .split_by_folder()
        .label_from_folder()
        .add_test_folder('test')
        .transform(tfms, size=32)
        .databunch()
        .normalize(imagenet_stats))
```

Now that our data has been properly set up, we can train a model. We already did in the [look at your data tutorial](/tutorial.data.html) so we'll just load our saved results here.

```
learn = create_cnn(data, models.resnet18).load('mini_train')
```

Once everything is ready for inference, we just have to call `learn.export` to save all the information of our [`Learner`](/basic_train.html#Learner) object for inference: the stuff we need in the [`DataBunch`](/basic_data.html#DataBunch) (transforms, classes, normalization...), the model with its weights and all the callbacks our [`Learner`](/basic_train.html#Learner) was using. Everything will be in a file named `export.pkl` in the folder `learn.path`.
If you deploy your model on a different machine, this is the file you'll need to copy.

```
learn.export()
```

To create the [`Learner`](/basic_train.html#Learner) for inference, you'll need to use the [`load_learner`](/basic_train.html#load_learner) function. Note that you don't have to specify anything: it remembers the classes, the transforms you used or the normalization in the data, the model, its weights... The only argument needed is the folder where the 'export.pkl' file is.

```
learn = load_learner(mnist)
```

You can now get the predictions on any image via `learn.predict`.

```
img = data.train_ds[0][0]
learn.predict(img)
```

It returns a tuple of three things: the object predicted (with the class in this instance), the underlying data (here the corresponding index) and the raw probabilities.

You can also do inference on a larger set of data by adding a *test set*. This is done by passing an [`ItemList`](/data_block.html#ItemList) to [`load_learner`](/basic_train.html#load_learner).

```
learn = load_learner(mnist, test=ImageItemList.from_folder(mnist/'test'))
preds,y = learn.get_preds(ds_type=DatasetType.Test)
preds[:5]
```

### A multi-label problem

Now let's try these on the planet dataset, which is a little bit different in the sense that each image can have multiple tags (and not just one label).

```
planet = untar_data(URLs.PLANET_TINY)
planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.)
```

Here each image is labelled in a file named `labels.csv`. We have to add [`train`](/train.html#train) as a prefix to the filenames, `.jpg` as a suffix and indicate that the labels are separated by spaces.

```
data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg')
        .random_split_by_pct()
        .label_from_df(label_delim=' ')
        .transform(planet_tfms, size=128)
        .databunch()
        .normalize(imagenet_stats))
```

Again, we load the model we saved in the [look at your data tutorial](/tutorial.data.html).
```
learn = create_cnn(data, models.resnet18).load('mini_train')
```

Then we can export it before loading it for inference.

```
learn.export()
learn = load_learner(planet)
```

And we get the predictions on any image via `learn.predict`.

```
img = data.train_ds[0][0]
learn.predict(img)
```

Here we can specify a particular threshold to consider the predictions to be correct or not. The default is `0.5`, but we can change it.

```
learn.predict(img, thresh=0.3)
```

### A regression example

For the next example, we are going to use the [BIWI head pose](https://data.vision.ee.ethz.ch/cvl/gfanelli/head_pose/head_forest.html#db) dataset. On pictures of persons, we have to find the center of their face. For the fastai docs, we have built a small subsample of the dataset (200 images) and prepared a dictionary mapping each filename to its center.

```
biwi = untar_data(URLs.BIWI_SAMPLE)
fn2ctr = pickle.load(open(biwi/'centers.pkl', 'rb'))
```

To grab our data, we use this dictionary to label our items. We also use the [`PointsItemList`](/vision.data.html#PointsItemList) class to have the targets be of type [`ImagePoints`](/vision.image.html#ImagePoints) (which will make sure the data augmentation is properly applied to them). When calling [`transform`](/tabular.transform.html#tabular.transform) we make sure to set `tfm_y=True`.

```
data = (PointsItemList.from_folder(biwi)
        .random_split_by_pct(seed=42)
        .label_from_func(lambda o:fn2ctr[o.name])
        .transform(get_transforms(), tfm_y=True, size=(120,160))
        .databunch()
        .normalize(imagenet_stats))
```

As before, the road to inference is pretty straightforward: load the model we trained before, export the [`Learner`](/basic_train.html#Learner) then load it for production.

```
learn = create_cnn(data, models.resnet18, lin_ftrs=[100], ps=0.05).load('mini_train');
learn.export()
learn = load_learner(biwi)
```

And now we can make a prediction on an image.
```
img = data.valid_ds[0][0]
learn.predict(img)
```

To visualize the predictions, we can use the [`Image.show`](/vision.image.html#Image.show) method.

```
img.show(y=learn.predict(img)[0])
```

### A segmentation example

Now we are going to look at the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/) (at least a small sample of it), where we have to predict the class of each pixel in an image. Each image in the 'images' subfolder has an equivalent in 'labels' that is its segmentation mask.

```
camvid = untar_data(URLs.CAMVID_TINY)
path_lbl = camvid/'labels'
path_img = camvid/'images'
```

We read the classes in 'codes.txt' and the function maps each image filename to its corresponding mask filename.

```
codes = np.loadtxt(camvid/'codes.txt', dtype=str)
get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}'
```

The data block API allows us to quickly get everything in a [`DataBunch`](/basic_data.html#DataBunch) and then we can have a look with `show_batch`.

```
data = (SegmentationItemList.from_folder(path_img)
        .random_split_by_pct()
        .label_from_func(get_y_fn, classes=codes)
        .transform(get_transforms(), tfm_y=True, size=128)
        .databunch(bs=16, path=camvid)
        .normalize(imagenet_stats))
```

As before, we load our model, export the [`Learner`](/basic_train.html#Learner) then create a new one with [`load_learner`](/basic_train.html#load_learner).

```
learn = unet_learner(data, models.resnet18).load('mini_train');
learn.export()
learn = load_learner(camvid)
```

And now we can make a prediction on an image.

```
img = data.train_ds[0][0]
learn.predict(img);
```

To visualize the predictions, we can use the [`Image.show`](/vision.image.html#Image.show) method.

```
img.show(y=learn.predict(img)[0])
```

## Text

The next application is text, so let's start by importing everything we'll need.

```
from fastai.text import *
```

### Language modelling

First let's look at how to get a language model ready for inference.
Since we'll load the model trained in the [visualize data tutorial](/tutorial.data.html), we load the vocabulary used there.

```
imdb = untar_data(URLs.IMDB_SAMPLE)
vocab = Vocab(pickle.load(open(imdb/'tmp'/'itos.pkl', 'rb')))
data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text', vocab=vocab)
           .random_split_by_pct()
           .label_for_lm()
           .databunch())
```

Like in vision, we just have to type `learn.export()` after loading our pretrained model to save all the information inside the [`Learner`](/basic_train.html#Learner) we'll need. In this case, this includes all the vocabulary we created. The only difference is that we will specify a filename, since we have several models in the same path (language model and classifier).

```
learn = language_model_learner(data_lm, AWD_LSTM, pretrained=False).load('mini_train_lm', with_opt=False);
learn.export(fname = 'export_lm.pkl')
```

Now let's define our inference learner.

```
learn = load_learner(imdb, fname = 'export_lm.pkl')
```

Then we can predict with the usual method; here we can specify how many words we want the model to predict.

```
learn.predict('This is a simple test of', n_words=20)
```

You can also use beam search to generate text.

```
learn.beam_search('This is a simple test of', n_words=20, beam_sz=200)
```

### Classification

Now let's see a classification example. We have to use the same vocabulary as for the language model if we want to be able to use the encoder we saved.

```
data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text', vocab=vocab)
             .split_from_df(col='is_valid')
             .label_from_df(cols='label')
             .databunch(bs=42))
```

Again we export the [`Learner`](/basic_train.html#Learner) where we load our pretrained model.

```
learn = text_classifier_learner(data_clas, AWD_LSTM, pretrained=False).load('mini_train_clas', with_opt=False);
learn.export(fname = 'export_clas.pkl')
```

Now let's use [`load_learner`](/basic_train.html#load_learner).
```
learn = load_learner(imdb, fname = 'export_clas.pkl')
```

Then we can predict with the usual method.

```
learn.predict('I really loved that movie!')
```

## Tabular

The last application brings us to tabular data. First let's import everything we'll need.

```
from fastai.tabular import *
```

We'll use a sample of the [adult dataset](https://archive.ics.uci.edu/ml/datasets/adult) here. Once we read the csv file, we'll need to specify the dependent variable, the categorical variables, the continuous variables and the processors we want to use.

```
adult = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(adult/'adult.csv')
dep_var = 'salary'
cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country']
cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain']
procs = [FillMissing, Categorify, Normalize]
```

Then we can use the data block API to grab everything together.

```
data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs)
        .split_by_idx(valid_idx=range(800,1000))
        .label_from_df(cols=dep_var)
        .databunch())
```

We define a [`Learner`](/basic_train.html#Learner) object that we fit and then save the model.

```
learn = tabular_learner(data, layers=[200,100], metrics=accuracy)
learn.fit(1, 1e-2)
learn.save('mini_train')
```

As in the other applications, we just have to type `learn.export()` to save everything we'll need for inference (here it includes the inner state of each processor).

```
learn.export()
```

Then we create a [`Learner`](/basic_train.html#Learner) for inference like before.

```
learn = load_learner(adult)
```

And we can predict on a row of dataframe that has the right `cat_names` and `cont_names`.

```
learn.predict(df.iloc[0])
```
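`learn.export()` works because each processor's fitted state (for example, the per-column statistics that `Normalize` computed on the training set) can be serialized and restored, so inference applies exactly the training-time preprocessing. A library-agnostic sketch of that round trip, with a hypothetical one-column normalizer standing in for a fastai processor:

```python
import pickle

class ColumnNormalizer:
    """Hypothetical processor: its state is fit on training data only."""
    def fit(self, values):
        self.mean = sum(values) / len(values)
        self.std = (sum((v - self.mean) ** 2 for v in values) / len(values)) ** 0.5
        return self

    def transform(self, value):
        # Must reuse the *training* mean/std at inference time.
        return (value - self.mean) / self.std

# "Training" side: fit, then serialize the processor state (the kind of thing export() bundles).
proc = ColumnNormalizer().fit([10.0, 20.0, 30.0])
blob = pickle.dumps(proc)

# "Inference" side: restore and transform a new value with the stored statistics.
restored = pickle.loads(blob)
z = restored.transform(20.0)  # 0.0, since 20.0 equals the training mean
```

Recomputing the statistics on inference data instead would silently shift every input, which is why the fitted state travels with the model.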
```
import numpy as np
import numpy.random as npr
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.decomposition import PCA
import statsmodels.api as sm
from numpy.linalg import cond

N = 2000
D = 5  # number of features
mean = np.zeros(D)
corr = 0.9
y_noise = 0.1

# designate the core features
num_corefea = np.int(D/2)
true_cause = np.arange(num_corefea).astype(int)
```

## generate simulated datasets with core and spurious features

The outcome model is the same in training and testing; the outcome only depends on the core features. In the training set, the covariates have high correlation. In the test set, the covariates have low correlation.

```
# simulate strongly correlated features for training
train_cov = np.ones((D, D)) * corr + np.eye(D) * (1 - corr)
train_x_true = npr.multivariate_normal(mean, train_cov, size=N)
# create both positively and negatively correlated covariates
train_x_true = train_x_true * np.concatenate([-1 * np.ones(D//2), np.ones(D - D//2)])
# train_x_true = np.exp(npr.multivariate_normal(mean, train_cov, size=N))  # exponential of gaussian; no need to be gaussian

# simulate weakly correlated features for testing
test_cov = np.ones((D, D)) * (1 - corr) + np.eye(D) * corr
test_x_true = npr.multivariate_normal(mean, test_cov, size=N)
# test_x_true = np.exp(npr.multivariate_normal(mean, test_cov, size=N))  # exponential of gaussian; no need to be gaussian

# add observation noise to the x
# spurious correlation more often occurs when the signal to noise ratio is lower
x_noise = np.array(list(np.ones(num_corefea)*0.4) + list(np.ones(D-num_corefea)*0.3))
train_x = train_x_true + x_noise * npr.normal(size=[N,D])
test_x = test_x_true + x_noise * npr.normal(size=[N,D])

print("\ntrain X correlation\n", np.corrcoef(train_x.T))
print("\ntest X correlation\n", np.corrcoef(test_x.T))

# generate outcome
# toy model y = x + noise
truecoeff = npr.uniform(size=num_corefea) * 10
train_y = train_x_true[:,true_cause].dot(truecoeff) + y_noise * npr.normal(size=N)
test_y = test_x_true[:,true_cause].dot(truecoeff) + y_noise * npr.normal(size=N)
```

# baseline naive regression on all features

```
# regularization parameter for ridge regression
alpha = 10

def fitcoef(cov_train, train_y, cov_test=None, test_y=None):
    # linearReg
    print("linearReg")
    reg = LinearRegression()
    reg.fit(cov_train, train_y)
    print("coef", reg.coef_, "intercept", reg.intercept_)
    print("train accuracy", reg.score(cov_train, train_y))
    if cov_test is not None:
        print("test accuracy", reg.score(cov_test, test_y))

    # # linearReg with statsmodels
    # print("linearReg with statsmodels")
    # model = sm.OLS(train_y, sm.add_constant(cov_train, prepend=False))
    # result = model.fit()
    # print(result.summary())

    # ridgeReg
    print("ridgeReg")
    reg = Ridge(alpha=alpha)
    reg.fit(cov_train, train_y)
    print("coef", reg.coef_, "intercept", reg.intercept_)
    print("train accuracy", reg.score(cov_train, train_y))
    if cov_test is not None:
        print("test accuracy", reg.score(cov_test, test_y))
```

all features have coefficients different from zero; test accuracy degrades much from training accuracy.
```
print("\n###########################\nall features")
cov_train = np.column_stack([train_x])
cov_test = np.column_stack([test_x])
fitcoef(cov_train, train_y, cov_test, test_y)
```

next consider the oracle: regression only on the core features

```
print("\n###########################\ncore features only")
cov_train = np.column_stack([train_x[:,true_cause]])
cov_test = np.column_stack([test_x[:,true_cause]])
fitcoef(cov_train, train_y, cov_test, test_y)
```

## causal-rep

now try adjusting for the pca factor, then learn the feature coefficients, construct a prediction function using the learned feature mapping, and predict on the test set

```
# fit pca to the highly correlated training dataset
pca = PCA(n_components=1)
pca.fit(train_x)
pca.transform(train_x)

# consider a subset of features, e.g. all but the last
# (alternatively one can consider a different subset;
# we cannot consider all features due to collinearity issues
# (a.k.a. violation of overlap))
print("\n###########################\ncore + spurious 1 + pca")
candidate_trainfea = train_x[:,:-1]
candidate_testfea = test_x[:,:-1]
adjust_trainC = pca.transform(train_x)
cov_train = np.column_stack([candidate_trainfea, adjust_trainC])

print("linearReg")
feareg = LinearRegression()
feareg.fit(cov_train, train_y)
print("coef", feareg.coef_, "intercept", feareg.intercept_)
print("train accuracy", feareg.score(cov_train, train_y))

# cond(candidate_trainfea.dot(candidate_trainfea.T))
```

above, after adjusting for the pca factor, the spurious features return close-to-zero coefficients

```
# construct a prediction model using the learned
# combination of candidate features
learned_fea_train = candidate_trainfea.dot(feareg.coef_[:candidate_trainfea.shape[1]])[:,np.newaxis]

predreg = LinearRegression()
predreg.fit(learned_fea_train, train_y)
print("trainfea_coef", predreg.coef_, "intercept", predreg.intercept_)
print("trainfea accuracy", predreg.score(learned_fea_train, train_y))

# apply the prediction model on the test data
learned_fea_test = candidate_testfea.dot(feareg.coef_[:candidate_trainfea.shape[1]])[:,np.newaxis]
print("testfea accuracy", predreg.score(learned_fea_test, test_y))
```

above, the test accuracy no longer degrades much from the training accuracy. also note that the test accuracy is very close to the oracle accuracy.
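`fitcoef` above contrasts `LinearRegression` with `Ridge(alpha=10)` on correlated designs. A small self-contained sketch (synthetic data, not the variables defined above) of why ridge behaves differently there, using the closed-form solution $w = (X^\top X + \alpha I)^{-1} X^\top y$ without an intercept: on a nearly collinear design, ridge returns a smaller-norm coefficient vector than least squares while keeping the coefficient sum near the truth.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two nearly collinear columns; the outcome depends on their sum.
n = 500
z = rng.normal(size=n)
X = np.column_stack([z + 0.05 * rng.normal(size=n),
                     z + 0.05 * rng.normal(size=n)])
y = X @ np.array([1.0, 1.0]) + 0.1 * rng.normal(size=n)

def ridge_closed_form(X, y, alpha):
    # w = (X'X + alpha * I)^{-1} X'y; alpha = 0 recovers ordinary least squares.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

w_ols = ridge_closed_form(X, y, alpha=0.0)
w_ridge = ridge_closed_form(X, y, alpha=10.0)  # the alpha used in fitcoef
```

for any alpha > 0 the penalized solution has no larger Euclidean norm than the OLS solution; with high collinearity the individual OLS coefficients are unstable even though their sum is well determined, which is the same pathology the overlap comment above is guarding against.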