# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: 'Python 3.9.6 64-bit (''base'': conda)'
#     name: python3
# ---

# # Chapter 1: Introduction to Data
#
# #### Walkthrough of the chapter's *Guided Practice* and Exercises.

# ### Guided Practice 1.1
#
# The proportion of patients in the treatment group who had a stroke by the end of their first year is
#
# \begin{equation}
# \frac{45}{45 + 179} = \frac{45}{224} \approx 0.20 = 20\%
# \end{equation}

# ### Exercise 1.1 - Migraine and acupuncture, Part I
#
# * (a) Around 23.26% of the patients in the treatment group, who received acupuncture, were pain free after 24 hours.
# * (b) In the control group, around 4.34% were pain free after 24 hours.
# * (c) The treatment group has the highest percentage of pain-free patients.
# * (d) The sample might not be representative of the whole population of people who suffer from migraines. Bad sampling can be an issue, but it might not be the only one.

# ### Exercise 1.2 - Sinusitis and antibiotics, Part I
#
# * (a) Around 77.65% of patients in the treatment group reported improvements in symptoms.
# * (b) Around 80.25% of patients in the control group reported improvements in symptoms.
# * (c) The control group has a slightly greater percentage.
# * (d) In this sample the control group shows a slightly higher percentage, but the difference is so small that it could come from the random fluctuations that are normal in these kinds of studies. From this sample alone we can't draw any real conclusion.

# ### Guided Practice 1.2
#
# The grade of the first loan (as shown in the book) is __A__. The home ownership status is __rent__.
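As a quick sanity check, the proportion from Guided Practice 1.1 can be computed directly in Python (the counts 45 and 179 are the treatment-group figures quoted above):

```python
# Treatment-group counts from the stent study quoted above:
# 45 patients had a stroke within the first year, 179 did not.
stroke, no_stroke = 45, 179

proportion = stroke / (stroke + no_stroke)
print(f"{proportion:.4f}")  # → 0.2009, i.e. roughly 20%
```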
# ### Guided Practice 1.3
#
# A feasible organization of grades could be the following:
#
# | Name | Description |
# |-----------------|---------------------------------------------|
# | `student_name` | The student's name |
# | `homework_type` | The type (can be assignment, quiz, or exam) |
# | `class` | The class the grade refers to |
# | `grade` | The actual grade |
#
# It is not exhaustive, but it gets the job done.

# ### Guided Practice 1.4
#
# We can set up a data matrix such as:
#
# | Name | Description |
# |-------------------------------|------------------------------------------|
# | `county` | The county name |
# | `state` | The state in which it is located |
# | `population_in_2017` | The county population in 2017 |
# | `population_change_2010_2017` | The population change from 2010 to 2017 |
# | `poverty` | Poverty index |
# | `etc...` | The additional six characteristics |

# ### Guided Practice 1.6
#
# The variable `group` is categorical, while the variable `num_migraines` is discrete.

# ### Guided Practice 1.7
#
# In order to come up with questions, we need to see the data matrix for the `loan50` dataset. To do so we import it with `pandas`. Let's start by importing all the relevant data analysis libraries.

# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pathlib import Path
# -

# Now let's read the CSV file which contains our dataset.

# +
datasets_folder = Path("../datasets/")
loan50_file = datasets_folder / "loan50.csv"
loan50_df = pd.read_csv(loan50_file)
# -

# Let's get an idea of the data by showing the first 10 rows.

display(loan50_df.head(10))

# The questions that I would ask are:
#
# * Is there an association between `annual_income` and/or `total_income` and `homeownership`?
# * How does `loan_amount` affect `interest_rate`?
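To start probing these two questions, a grouped summary and a correlation are natural first steps. The snippet below is only a sketch: it builds a tiny made-up stand-in DataFrame with the relevant columns (`annual_income`, `homeownership`, `loan_amount`, `interest_rate`) so that it runs on its own; in the notebook you would apply the same calls to `loan50_df` directly.

```python
import pandas as pd

# Tiny made-up stand-in for loan50_df so the sketch is self-contained;
# in the notebook you would use the DataFrame loaded from loan50.csv.
df = pd.DataFrame({
    "annual_income": [35000, 90000, 42000, 120000],
    "homeownership": ["rent", "mortgage", "rent", "own"],
    "loan_amount": [5000, 20000, 8000, 30000],
    "interest_rate": [12.5, 7.1, 10.9, 6.2],
})

# Question 1: does income differ across homeownership categories?
print(df.groupby("homeownership")["annual_income"].mean())

# Question 2: how does loan amount relate to interest rate?
print(df["loan_amount"].corr(df["interest_rate"]))
```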
# The first question comes from the personal observation that those on a low income (either annual or total) usually rent rather than own a house (unless, of course, someone inherited a family member's house).
#
# The second question comes from the intuition that the amount of the loan somehow influences the interest rate.

# ### Exercise 1.3 - Air pollution and birth outcomes, study components.
#
# * (a) The research question of the study could be: do certain levels of air pollutants cause preterm births?
# * (b) The subjects were the __143,196 births__ between the years 1989 and 1993.
# * (c) The continuous explanatory variables in the study are the levels of CO, nitrogen dioxide, ozone, and PM10 the subjects were exposed to, calculated over gestation. There is also a discrete explanatory variable, the year the observation was collected. The response variable is whether or not a preterm birth happened, which is definitely a categorical variable. If we were instead predicting, say, how many weeks early the birth occurred, that variable would be ordinal.

# ### Exercise 1.4 - Buteyko method, study components.
#
# * (a) The research question is whether the Buteyko method reduces asthma symptoms / improves quality of life.
# * (b) The subjects were 600 asthma patients.
# * (c) Here we have multiple response variables, since we are testing the effectiveness of the method on multiple outcomes, each rated on a scale from 1 to 10: this makes the response variables ordinal categorical. The explanatory variable is the categorical variable telling us whether or not the patient practiced the method.

# ### Exercise 1.5 - Cheaters, study components.
#
# * (a) The research question, as always, can be deduced from the purpose of the study: what is the relationship between age, whether explicit instructions were given, and the person's honesty?
# * (b) The subjects of the experiment are 160 children aged 5 to 15.
# * (c) The recorded variables are age (discrete numerical), sex (nominal categorical), and whether the child is an only child or not (nominal categorical). An additional recorded variable is the outcome of the fair coin toss, which is also categorical.

# ### Exercise 1.6 - Stealers, study components.
#
# * (a) The main research question is: given an individual's socio-economic class, how likely are they to behave unethically?
# * (b) The subjects are 129 undergraduates from Berkeley.
# * (c) The explanatory variables recorded are the three ordinal categorical variables stating their money, education, and job tiers/profiles. The response variable is the number of candies they took after the survey (a discrete numerical variable).

# ### Exercise 1.7 - Migraine and acupuncture, Part II.
#
# The explanatory variable recorded is the kind of acupuncture people received (migraine-specific acupuncture in the treatment group, a placebo version in the control group). The response variable is the indicator variable stating whether or not they were pain-free 24 hours after the treatment.

# ### Exercise 1.8 - Sinusitis and antibiotics, Part II.
#
# The explanatory variable is the kind of treatment the patients received. The response variable is whether or not they had improvements in symptoms.

# ### Exercise 1.9 - Fisher's irises.
#
# * (a) There are $50 \times 3 = 150$ cases (observations).
# * (b) The numerical variables included are sepal length, sepal width, petal length, and petal width, which are continuous numerical.
# * (c) The only categorical variable is the species of iris flower, which can take one of the following three levels: _setosa_, _versicolor_, and _virginica_.

# ### Exercise 1.10 - Smoking habits of UK residents.
#
# * (a) Each row represents an observation: the personal details and smoking habits of one UK resident taking part in the study.
# * (b) The survey included 1691 participants.
# * (c) The variables are: _sex_ (nominal categorical), _age_ (discrete numerical), _marital_ (nominal categorical), _grossIncome_ (ordinal categorical), _smoke_ (nominal categorical), _amtWeekends_ (discrete numerical), and _amtWeekdays_ (discrete numerical).

# ### Exercise 1.11 - US Airports.
#
# * (a) The variables used are _latitude_ and _longitude_ (continuous numerical), an indicator variable for whether the airport is for private or public use (nominal categorical), and an indicator variable for the ownership (private or public) of the airport itself (nominal categorical).
# * (b) Answered above.

# ### Exercise 1.12 - UN Votes.
#
# * (a) The variables included are: _issue_ (nominal categorical), _% of yes_ (continuous numerical), _country_ (nominal categorical), and _year_ (discrete numerical).
# * (b) Answered above.

# ### Guided Practice 1.9
#
# In (2) the target population is all Duke undergraduates of the past 5 years. An individual case can contain all of a student's information together with the number of years it took them to complete their degree. In (3) the target population is all people with severe heart disease, and an individual case might contain all of a person's information, including the severity of their heart disease.

# ### Guided Practice 1.11
#
# Online reviews are an example of a convenience sample: they tend to be written by people who care about evaluating their experience, and they usually come either from people with complaints or from people delighted with the product. The average, indifferent user, who may be more frequent than we might think, may not be included.

# ### Guided Practice 1.12
#
# The answer is no: we are just observing an apparent correlation, not running an experiment in which we separate participants into groups and, most importantly, select participants carefully.
# ### Guided Practice 1.13
#
# One confounding variable can be _salary_: those who have higher salaries tend to own a house, and those who buy a house usually do not buy one in a multi-unit structure.

# ### Exercise 1.13 - Air pollution and birth outcomes, scope of inference.
#
# * (a) Since the study explicitly analyzes the relationship between air pollutants and preterm births in Southern California, I would argue that the population of interest is preterm births in Southern California. The sample consists of the 143,196 births between 1989 and 1993.
# * (b) The study can probably be generalized to the population, but one thing that might help is sampling from other years and checking whether we always get the same results.

# ### Exercise 1.14 - Cheaters, scope of inference.
#
# * (a) The population of interest is all children aged 5 to 15.
# * (b) Unlike in the previous exercise, here we have a smaller sample (160 children). However, we saw certain differences among the groups, as well as among the children's characteristics. We can therefore generalize cautiously, but to be safer, more samples should be taken.

# ### Exercise 1.15 - Buteyko method, scope of inference.
#
# * (a) The population of interest is asthma patients (potentially all of them).
# * (b) One experiment will never be enough to safely generalize. Usually more samples are needed on which the same experiment can be run, and once we see consistent results we can generalize. Therefore, I am not sure we can generalize this finding, especially in a medical context.

# ### Exercise 1.16 - Stealers, scope of inference.
#
# * (a) The population of interest is basically the world population, since we are studying the relationship between the socio-economic class an individual belongs to and unethical behavior.
# * (b) It cannot be generalized, first of all because we are using a convenience sample recruited in one place (Berkeley).
# Berkeley students are not representative of the world population. Another flaw is the way the study was designed: a causal relationship cannot be established.

# ### Exercise 1.17 - Relaxing after work.
#
# * (a) An observation.
# * (b) A variable.
# * (c) A sample statistic.
# * (d) A population parameter.

# ### Exercise 1.18 - Cats on YouTube.
#
# * (a) A population parameter.
# * (b) A sample statistic.
# * (c) An observation.
# * (d) A variable.

# ### Exercise 1.19 - Course satisfaction across sections.
#
# * (a) This seems to be an observational study. Depending on whether the surveys take place multiple times throughout a single section or across all sections, the study can be prospective or retrospective.
# * (b) To evaluate a particular section, the professor can sample students who were previously in that section but are currently in other ones: this way students never evaluate their current section, making the judgement unbiased. This gives a fully retrospective study, since the section being evaluated has already happened. The sampling strategy can be cluster sampling, or multistage sampling if we also sample within the clusters.

# ### Exercise 1.20 - Housing proposal across dorms.
#
# * (a) This seems to be a retrospective study, since data is collected at a point when students have probably inhabited the dorms for a while.
# * (b) They should definitely use stratified sampling to get a fair survey.

# ### Exercise 1.21 - Internet use and life expectancy.
#
# * (a) Apparently, the more the internet is used around the world, the longer people live.
# * (b) This is an observational study.
# * (c) The percentage of internet users has increased over time, and over the same time we have developed better medicine. So a potential confounding variable can be related to medicine and quality of life.

# ### Exercise 1.22 - Stressed out, Part I.
#
# * (a) This is an observational study.
# * (b) There may well be truth in the claim that increased stress favors muscle cramps, though such a causal relationship is hard to prove without biological evidence. Furthermore, we have seen that stress makes certain behaviors arise.
# * (c) Drinking coffee and sleeping less might be confounding variables, since they may biologically promote cramps.

# ### Exercise 1.23 - Evaluate sampling methods.
#
# Point (a) is the least reasonable. Point (b) can be good, but the field of study might induce bias, since some university courses are attended mostly by a certain socio-economic class. Point (c) is reasonable, since we are sampling according to age.

# ### Exercise 1.24 - Random digit dialing.
#
# A possible reason is an increase in randomness. Using a phone book we might get people from the same area, and we might need more samples to truly achieve randomness.

# ### Exercise 1.25 - Haters are gonna hate, study confirms.
#
# * (a) The cases are the 200 randomly sampled individuals.
# * (b) The response variable is the reaction towards the oven.
# * (c) The explanatory variable is the subjects' score on the dispositional attitude measurement.
# * (d) This study makes use of random sampling.
# * (e) This is an observational study: we are just observing how, given their first reactions, individuals react to the oven. We are not making any intervention, and we are not controlling variables or separating individuals.
# * (f) We can't generalize, as there can be confounding variables, such as mood, that this study leaves out.
# * (g) A group of 200 is not representative of the whole population.

# ### Exercise 1.26 - Family size.
#
# Elementary school kids still live with their parents, so we will definitely have a bias that makes the family size look bigger, since kids usually live in larger families. The value will definitely be overestimated.

# ### Exercise 1.27 - Sampling strategies.
#
# * (a) Random sampling.
# We can expect a mean which is not representative of the population mean.
# * (b) Giving the survey only to his friends is not a proper sample, and it will induce a great bias, since the friends might have similar habits/patterns of social network usage.
# * (c) This is a convenience sample: only Facebook users will be able to participate. This might also bias the result, since Facebook is a social network, and those who do not use it will not be included.
# * (d) This is multistage sampling. It is the least biased method presented.

# ### Exercise 1.28 - Reading the paper.
#
# * (a) We can't conclude that smoking causes dementia: a 25% increase alone is not conclusive, and other factors should be taken into account.
# * (b) The statement is not justified, since sleeping disorders can be caused by the same things that cause behavioral disorders, including stress and psychological issues. These are indeed confounding variables.

# ### Guided Practice 1.16
#
# It is an experiment, but it is definitely not blinded, since patients know they got the stents.

# ### Guided Practice 1.17
#
# Stents are invasive. You cannot just replicate them with a placebo: the risks would outweigh the benefits.
#
# __NOTE__: see __Sham Surgery__.

# ### Exercise 1.29 - Light and exam performance.
#
# * (a) The response variable is the exam performance.
# * (b) The explanatory variable is the type of lighting, which can take three levels (fluorescent overhead lighting, yellow overhead lighting, and no overhead lighting, i.e., desk lamps).
# * (c) Gender is the blocking variable: lighting might have a different effect on each gender, so we make two blocks (males and females) from which we draw our random samples.

# ### Exercise 1.30 - Vitamin supplements.
#
# * (a) It is an experiment, since we are randomizing the participants into the four treatment groups and testing an actual hypothesis (vitamin C reduces the duration of cold symptoms).
# * (b) The explanatory variable is the kind of pill the patients are prescribed (placebo pill, 1g pill, 3g pill, or 3g pill with additives).
# * (c) The patients were blinded.
# * (d) The study is double-blind, since the researchers do not interact with patients. The nurses do, but the nurses play no role in the research.
# * (e) It may definitely introduce a confounding variable: if people are not taking the pill, any potential effect of the pill on them will not occur, so we may reach wrong conclusions. One option is to exclude those who do not take the pill; another is to have the pill administered by a nurse (who does not need to be blinded, since they play no significant role in the research).

# ### Exercise 1.31 - Light, noise, and exam performance.
#
# * (a) This is an experiment.
# * (b) In this study we have the light factor (as in Exercise 1.29) and the noise factor, which has three levels: no noise, construction noise, and human chatter noise.
# * (c) The sex variable is a blocking variable, since males and females might experience different effects from light and noise.

# ### Exercise 1.32 - Music and learning.
#
# I would randomly select students at the elementary, high school, and university/college levels. From the previous exercises we know sex can be a blocking variable, so we make those two initial blocks. Another blocking variable might be performance level, which can be categorized into discrete levels: those with higher grades might have better attention. If we include people with learning disorders, we have to make the appropriate blocks. We also want a stratified sample, since we want to include students from different university courses. The sample size can vary from 500 to 1000, but given the broad scope of the study we might even need to recruit more participants. Of course, I would not tell them the purpose of the study, so they would be blind.
# Music would be played in a way they might not notice (for instance, having them study in a coffee shop with background music with/without lyrics). They have to be unaware that they are being studied to understand how music impacts their learning. I would then let researchers who are unaware of the treatment groups assess the individual students' knowledge.

# ### Exercise 1.33 - Soda preference.
#
# In this case, the sample size can be one third or one fourth of the class. Gender is a blocking variable, so we have to include males and females equally. For the experiment to have more benefits than risks, we might exclude students with diabetes. Whether or not the coke has sugar is not shown on the can/glass. Instead, the glasses will carry two letters, __A__ and __B__. Letter __A__ can represent the diet coke while __B__ represents the standard coke, or vice versa, i.e., a letter is not tied to one type of coke. Through a randomized Python script I will bind each student to an __A/B__ combination, chosen independently at random for each of them.

# +
import random


def generate_combination():
    """Randomly assign diet/standard coke to the two glass letters."""
    soda_type = ["diet", "standard"]
    glass_letter = ["A", "B"]
    random.shuffle(soda_type)
    return dict(zip(glass_letter, soda_type))


def bind_participants_to_combinations(participants):
    """Map each participant to their own random A/B combination."""
    return {participant: generate_combination() for participant in participants}
# -

# This way I can know the combinations without influencing anything. I will hand out the two glasses and record the preferences, and then make a chart to visually describe my findings.

# ### Exercise 1.34 - Exercise and mental health.
#
# * (a) This is an experiment, since we are assigning the treatment and control groups, which will or will not receive the prescription to exercise.
# * (b) The treatment group is the one that is told to exercise, while the control group is the one that is not.
# * (c) There is no blocking.
# * (d) There is no blinding mechanism: those who are told not to exercise do not receive any placebo that could keep them unaware of belonging to the control group.
# * (e) From one experiment it is impossible to fully and confidently generalize to the whole population, but we can certainly keep running more and more well-designed experiments. We can infer that there might be a causal relationship, but full confidence can only come from further experiments that are blind or, even better, double-blind. That way we don't get any emotional or personal bias.
# * (f) If this experiment can be made blind or double-blind, then I would definitely fund it. Having a second researcher do the actual assessment, better still one who is skeptical about the study, would make it a study worth funding.

# ### Exercise 1.35 - Pet names.
#
# * (a) This is an observational study.
# * (b) The most common dog name is Lucy. The most common cat name is Luna.
# * (c) All the names below the line are more common for cats than for dogs. These names are: Sophie, Oliver, and Lily.
# * (d) The line shows a positive relationship: dogs and cats tend to share the same names, with some names being more common than others in both categories.

# ### Exercise 1.36 - Stressed out, Part II.
#
# * (a) This is an experiment.
# * (b) Since people volunteered, they might not be representative, as they might already have an inner bias.

# ### Exercise 1.37 - Chia seeds and weight loss.
#
# * (a) This is an experiment.
# * (b) The experimental group is made up of those receiving chia seeds, while the control group receives a placebo.
# * (c) The variable gender is the blocking variable in this study.
# * (d) Blinding has been employed, since patients don't know what they are taking.
# * (e) The sample size is very small. It is possible that there is a causal relationship, but from such a sample it is difficult to generalize to the whole population.

# ### Exercise 1.38 - City council survey.
#
# * (a) Random sampling. Randomness is always positive, but we can leave out some neighborhoods and areas.
# * (b) Since we are dividing the city into its neighborhoods and sampling within each of them, we are employing stratified sampling. This way we get a dataset which is representative of the whole city, even though we might not include all the potentially representative cases of a neighborhood, and it can be laborious to sample from every neighborhood.
# * (c) This is cluster sampling: we pick random neighborhoods (clusters) and survey within them, which can be convenient. However, the sample might not represent the whole population, since three neighborhoods are a very small number.
# * (d) This sampling is similar to the above, but we use more clusters and randomly sample within them (multistage sampling). This can yield a fairly representative sample, but we may still miss 12 neighborhoods which could be particularly interesting for our survey.
# * (e) This is probably the worst way of sampling, since we will definitely get a biased response.

# ### Exercise 1.39 - Flawed reasoning.
#
# * (a) The fact that kids hand the surveys to parents and back to teachers might push parents to be optimistic even if they are experiencing actual difficulties. Handing the surveys over via email might be a better way.
# * (b) There can be congenital respiratory problems in the parents which can be passed on to the kids. We also have a potential confounder, namely a pollution index for the place each woman lives, which has to be taken into account to perform a correct and representative sampling. Furthermore, these kinds of studies need to be longitudinal, i.e., repeated over time.
# * (c) First of all, this is an observational study, which is suited to raising a hypothesis, not to concluding a fact. Those who run tend to be healthier and fitter than those who do not, so the study is already biased.

# ### Exercise 1.40 - Income and education in US counties.
#
# * (a) The explanatory variable is the percent with a Bachelor's degree. The response variable is the per capita income.
# * (b) The greater the percent of Bachelor's degree holders, the higher the per capita income. However, for a small number of counties we still see a high income even with a low percentage of Bachelor's holders.
# * (c) The relationship is positive, which suggests that having a Bachelor's degree increases the salary, which would raise both the percentage of Bachelor's holders and the per capita income. However, since this is observational data, we cannot actually conclude causation.

# ### Exercise 1.41 - Eat better, feel better?
#
# * (a) This is an experiment.
# * (b) The explanatory variable is the dietary regimen imposed on each of the three groups. The response variable is psychological well-being (though which particular indicator is used is not specified).
# * (c) The study cannot be generalized, as it only employs students, who are likely to be healthy, in a country with a high health index. There could be confounding variables. Also, the subjects are volunteers, which might bias the response.
# * (d) As above, it is difficult to tell whether this study can be generalized.
# * (e) We can only say that providing fruits and vegetables might have positive mental and physical health outcomes. We have to phrase it as a possibility, and we should leave out the time frame, since it might not be precise.

# ### Exercise 1.42 - Screens, teens, and psychological well-being.
#
# * (a) This is an observational (prospective) study.
# * (b) The most obvious explanatory variable is screen time.
# We also record the child's sex and age, and the mother's education, ethnicity, psychological distress, and employment.
# * (c) The response variable is the psychological well-being index.
# * (d) The study has a very big sample size, so if the data is truthful, we can generalize the findings.
# * (e) Since little evidence was found, we cannot claim a causal relationship.

# ### Exercise 1.43 - Stanford Open Policing.
#
# * (a) At each stop, the driver's race was recorded, along with whether or not the driver was searched, whether or not the driver was arrested, and the geographical information.
# * (b) County, state, and driver's race are nominal categorical. The number of stops per year is discrete numerical, while the percentages of cars searched and drivers arrested are continuous numerical.
# * (c) In this case, the average search rate is the response variable (summarizing the search rate for each race), and the race, % of searched cars, county, and state are the explanatory variables.

# ### Exercise 1.44 - Space launches.
#
# * (a) It was recorded whether the company performing the launch is public or private, whether the launch was successful or not, and in which year the launch happened.
# * (b) Year is discrete numerical, and the other two indicator variables are categorical.
# * (c) As in the previous exercise, the success rate is a statistic we can compute as the average over successful and failed launches. It becomes the response variable, while the type of company, the year, and the outcome of the launch become the explanatory variables.

# The main points where I made errors can be summarized by the following explanatory picture from the course slides ([link](https://www.openintro.org/go?id=slide_stat_latex_intro_to_data&referrer=/book/os/index.php), page 85).
#
# ![Study Matrix](assets/study_matrix.png)
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Introduction to Data Science – Relational Databases
# *COMP 5360 / MATH 4100, University of Utah, http://datasciencecourse.net/*
#
# Up to now, we've mainly used flat tables to store and process data. Most structured data in the real world, however, is stored in databases, and specifically in [relational databases](https://en.wikipedia.org/wiki/Relational_database). Other database types target specific use cases, such as performance (NoSQL), suitability for graphs (graph databases, e.g., Neo4j), or compatibility with in-memory OO data structures (object-oriented databases). Relational databases, and their implementations – Relational Database Management Systems (RDBMS) – are still the dominant way to store enterprise data, though.
#
# Unlike with APIs or web scraping, it is unlikely that you will be able to access a public database directly. Databases are powerful, but can also be somewhat difficult to use. It's more likely that an API you interact with is powered by a relational database behind the scenes, hiding the complexity.
#
# Also, while we've mainly considered reading data, databases are also meant for writing, which increases the potential for abuse. So the most likely scenario for having read/write access to a database is that the database is managed by you or your organization.

# What's the purpose of a relational database? It separates data into multiple tables to avoid redundancy.
# Here is a simple example of an online transaction described as a flat table:
#
# | Product Name | Price | Product Description | Customer Name | Customer Address |
# | - | - | - | - | - |
# | MacBook | 2700 | MacBook Pro 15' | <NAME> | San Francisco |
# | Dongle HDMI | 40 | USB-C to HDMI | <NAME> | San Francisco |
# | Dongle USB-B | 40 | USB-C to USB-B | <NAME> | San Francisco |
#
# The problem here is that we store the name of the customer multiple times, and, if multiple customers purchase the same item, also all the information about the products. Obviously, that's neither great from a storage efficiency perspective nor from an update perspective. If <NAME> wants to update his address, for example, then we'd have to update all three rows.

# A different approach is to separate this table into multiple meaningful tables. We'll introduce a Customers, a Products, and a Transactions table, and we'll introduce keys for each row in these tables:
#
# **Customers Table**:
#
# | CID | Customer Name | Customer Address |
# | - | - | - |
# | 1 | <NAME> | San Francisco |
#
# **Products Table**:
#
# | PID | Product Name | Price | Product Description |
# | - | - | - | - |
# | 1 | MacBook | 2700 | MacBook Pro 15' |
# | 2 | Dongle HDMI | 40 | USB-C to HDMI |
# | 3 | Dongle USB-B | 40 | USB-C to USB-B |
#
# CID and PID are called **primary keys**, as they uniquely identify a row in their table.
#
# The Transactions table now refers to the products and the customers only by their keys, which are called **foreign keys** in this context, as they are the primary keys of a "foreign" table.
#
# **Transactions Table**:
#
# | TID | PID | CID |
# | - | - | - |
# | 1 | 1 | 1 |
# | 2 | 3 | 1 |
# | 3 | 2 | 1 |
#
# Now, if <NAME> wants to update his address, or if the seller wants to update a product description, we can do this in a single place.
#
# Of course, if we want a record of all transactions including price and products in a flat table for data analysis, we have to do a little more work. In this lecture we'll learn how to do that and more.

# ## Relational Database Management System (RDBMS) Transactions
#
# RDBMS are designed to support **read, write, and update** transactions, in addition to operations that **create tables**, etc. In practice, for a database to remain consistent, these transactions have to guarantee [certain properties](https://en.wikipedia.org/wiki/ACID). We'll largely ignore updates, creating tables, etc. – these are operations that you'll likely need if you build an application on top of a database. For the purpose of our data science class, we'll stick to reading/querying for data, as we want to learn **how to get data out of a database**. You can learn more about databases in CS 6530 or the undergrad version CS 5530.

# ## SQL - The Structured Query Language
#
#
# [SQL](https://en.wikipedia.org/wiki/SQL) is a domain specific language to execute the transactions described above. We'll look mainly at queries to read and aggregate data.
#
# Here is a very simple query:
#
# ```SQL
# SELECT * FROM products
# ```
#
# This statement selects and retrieves all rows (`*`) from the table `products`.
#
# We can restrict which rows to retrieve with a `WHERE` clause:
#
# ```SQL
# SELECT * FROM products WHERE Price > 100
# ```
#
# This retrieves all the rows where the price is higher than 100.
#
# We'll go through more specific SQL statements on real examples. You can learn more about SQL [online](https://www.codecademy.com/learn/learn-sql).
#
# ## SQLite
#
# Most database management systems are implemented as client-server systems, i.e., the database is hosted on a dedicated server where multiple users can read and write. Examples are [PostgreSQL](https://www.postgresql.org/), [MySQL](https://www.mysql.com/) or [Oracle](https://www.oracle.com/database/index.html).
# These database management systems are fast and can handle multiple users without causing conflicts. However, they require a separate server and installation, so we'll use [SQLite](https://sqlite.org/), a database that works for individual users and stores the whole database in a single file. SQLite is widely used for single-user cases and will serve our purposes.
#
# While these different databases have different features and performance characteristics, they all support SQL, so the skills you will learn here transfer widely.
#
# We'll also use the [sqlite python interface](https://docs.python.org/3/library/sqlite3.html).

# ## Sample Database
#
# This tutorial uses the SQLite sample database found [here](http://www.sqlitetutorial.net/sqlite-sample-database/).
#
# Here is a chart of the database schema:
#
# ![](database_schema.png)

# * The `employees` table stores employee data such as employee id, last name, first name, etc. It also has a field named `ReportsTo` to specify who reports to whom.
# * The `customers` table stores customer data.
# * The `invoices` & `invoice_items` tables: these two tables store invoice data. The invoices table stores invoice header data and the invoice_items table stores the invoice line items data.
# * The `artists` table stores artist data. It is a simple table that contains only artist id and name.
# * The `albums` table stores data about a list of tracks. Each album belongs to one artist, but one artist may have multiple albums.
# * The `media_types` table stores media types such as MPEG audio file, AAC audio file, etc.
# * The `genres` table stores music types such as rock, jazz, metal, etc.
# * The `tracks` table stores the data of songs. Each track belongs to one album.
# * The `playlists` & `playlist_track` tables: the playlists table stores data about playlists. Each playlist contains a list of tracks, and each track may belong to multiple playlists. The relationship between the playlists table and the tracks table is many-to-many.
# The playlist_track table is used to reflect this relationship.
#
# The diagram above highlights the **primary keys** in each table and the relationships between tables, expressed via **foreign keys**. Note that `employees` has a self-reference to capture the reports-to relationship.

# ## Querying the Table

# +
import pandas as pd
import sqlite3 as sq

# we connect to the database, which - in the case of sqlite - is a local file
conn = sq.connect("./chinook.db")

# we retrieve the "cursor" on the database
c = conn.cursor()

# now we can execute a SQL statement
c.execute("SELECT * FROM albums")

# and print the first line from the result
print(c.fetchone())
# -

# Here, we used a cursor for accessing the data. To retrieve data after executing a SELECT statement, you can either treat the cursor as an iterator, call the cursor’s `fetchone()` method to retrieve a single matching row, or call `fetchall()` to get a list of the matching rows.
#
# This is how you use it as an iterator:

for row in c:
    print(row)

# And here we retrieve all the data at once, as an array of tuples.

c.execute("SELECT * FROM albums")
c.fetchall()

# ## SQL and Pandas

# An alternative approach that fits well into our previous workflows is to use pandas to store the table directly, using the [`read_sql()`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql.html) function:

# this is similar to the read_csv() function
pd.read_sql("""SELECT * FROM albums""", conn).head()

# This, of course, works only for reading data, but we'll use it for now.

# ## Selective Results

# When we **specify individual columns** instead of using the `*`, we get only those columns:

pd.read_sql("""SELECT Title FROM albums""", conn).head()

# Here, we've used `head()` to display a short list, but we can also do that in the SQL query with the `LIMIT` keyword.
# Of course, in this case, the data will actually be limited:

pd.read_sql("""SELECT Title FROM albums LIMIT 3""", conn).head()

# We can run more **restrictive queries** with the `WHERE` keyword:

# note that the triple quotes allow us to break lines
pd.read_sql("""SELECT * FROM albums
               WHERE Title = 'Let There Be Rock'""", conn)

# Legal operations are:
#
# * `=` equals
# * `<>` not equals
# * Numerical comparisons: `>, >=, <, <=`
# * `IN` allows you to pass a list of options.
# * `BETWEEN` allows you to define a range.
# * `LIKE` can be used for pattern matching, similar to regular expressions.
# * `IS NULL` or `IS NOT NULL` test for empty values.

# The `<>` (not equal) operator:

pd.read_sql("""SELECT * FROM albums
               WHERE Title <> 'Let There Be Rock'""", conn).head(10)

pd.read_sql("""SELECT * FROM invoices
               WHERE Total > 10""", conn)

# The `BETWEEN` keyword:

pd.read_sql("""SELECT * FROM invoices
               WHERE Total BETWEEN 15 AND 20""", conn)

# The `IN` keyword:

pd.read_sql("""SELECT * FROM invoices
               WHERE BillingCity IN ('Chicago', 'London', 'Berlin')""", conn)

# Expressions with `LIKE` (`%` matches any sequence of characters):

# starts with a C
pd.read_sql("""SELECT * FROM invoices
               WHERE BillingCity LIKE 'C%'""", conn)

# ends with rt
pd.read_sql("""SELECT * FROM invoices
               WHERE BillingCity LIKE '%rt'""", conn)

# We can use `ORDER BY` to sort the output.

pd.read_sql("""SELECT * FROM albums ORDER BY Title""", conn)

# ### Exercise 1: Simple Queries
#
# 1. List all the rows in the genres table.
# 2. List only the genre names (nothing else) in the table.
# 3. List the genre names ordered by name.
# 4. List the genre entries with IDs between 13 and 17.
# 5. List the genre entries that start with an S.
# 6. List the GenreIds of Rock, Jazz, and Reggae (in one query).

# ## Referencing and Renaming

# When we write queries involving multiple tables, we have to make it clear which table we mean in an expression. In the following we are using the fully qualified name `tracks.Name` to refer to the name column in the tracks table.
# Additionally, we can give a temporary name to a column, so that further expressions are shorter to write. Here we rename `tracks.Name` to `TrackName`.

pd.read_sql("""SELECT tracks.Name as TrackName FROM tracks;""", conn).head()

# ## Joining Tables
#
# There are many ways to aggregate and group by with SQL, and we won't be able to cover them in detail. We'll use a simple example of an `INNER JOIN` to resolve a foreign key relationship here.
#
# We want to create a table that contains one row for each track, and for each track also contains the album title. Let's take a look at the tracks table:

pd.read_sql("""SELECT * FROM tracks;""", conn).head()

# And the albums table:

pd.read_sql("""SELECT * FROM albums;""", conn).head(5)

# We can see that AlbumID appears in both tables. This is the relationship we care about:
#
# ![](albums_tracks_tables.jpg)
#
# An INNER JOIN simply merges tables based on a primary key / foreign key relationship.
#
# Here is an example:

pd.read_sql("""SELECT
                trackid,
                tracks.name as Track,
                albums.title as Album
               FROM
                tracks
               INNER JOIN albums ON albums.albumid = tracks.albumid""", conn).head(30)

# Let's take this statement apart:
#
# ```SQL
# SELECT
#  trackid,
#  tracks.name as Track,
#  albums.title as Album
# ```
# Here, we choose what to select, and we also rename the columns with the `as` statement, so it's clear what we mean.
#
# ```SQL
# FROM
#  tracks
# INNER JOIN albums ON albums.albumid = tracks.albumid;
# ```
#
# This is where the magic happens: we say that the tracks and albums tables should be joined on the `albums.albumid = tracks.albumid` relationship.

# Using `INNER JOIN`, you can construct flat tables out of databases that you can then use in the data science process. SQL is of course much more powerful: you can do mathematical operations, group-by/aggregates, etc. We can do most of this on a dataframe too, but a join like this would be rather painful without the `INNER JOIN` SQL statement.
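# For comparison, pandas can express the same inner join with `merge`. Here is a sketch on tiny made-up stand-ins for the tracks and albums tables (not the chinook data):

```python
import pandas as pd

# made-up stand-ins for the tracks and albums tables
tracks_df = pd.DataFrame({"TrackId": [1, 2, 3],
                          "Name": ["Song A", "Song B", "Song C"],
                          "AlbumId": [10, 10, 11]})
albums_df = pd.DataFrame({"AlbumId": [10, 11],
                          "Title": ["First Album", "Second Album"]})

# how="inner" keeps only rows whose AlbumId appears in both tables,
# just like INNER JOIN ... ON albums.albumid = tracks.albumid
joined = (tracks_df.merge(albums_df, on="AlbumId", how="inner")
                   .rename(columns={"Name": "Track", "Title": "Album"}))
```

# The rename mirrors the `as Track` / `as Album` aliases in the SQL version.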
#
# Next, we want to also add the artist name. We can just append another `INNER JOIN` statement.

pd.read_sql("""SELECT
                tracks.name as Track,
                albums.title as Album,
                artists.name as Artist
               FROM
                tracks
               INNER JOIN albums ON albums.albumid = tracks.albumid
               INNER JOIN artists ON artists.artistid = albums.artistid""", conn).head(40)

# Now we can combine that with a `WHERE` condition to get a more specific result. Here we're looking at all the Rolling Stones albums in this dataset.

pd.read_sql("""SELECT
                tracks.name as Track,
                albums.title as Album,
                artists.name as Artist
               FROM
                tracks
               INNER JOIN albums ON albums.albumid = tracks.albumid
               INNER JOIN artists ON artists.artistid = albums.artistid
               WHERE artists.name = 'The Rolling Stones'""", conn).head(40)

# ## Group By
#
# We've seen group by before for dataframes, and we could just use pandas' group by since we're using a dataframe anyway. However, it's useful to understand how group by works in SQL, as it can be much more efficient to get the right data out in the first place.
#
# Here is a statement that gives us the number of tracks on an album. We use the `COUNT` aggregation function to get the number of tracks.

pd.read_sql("""SELECT
                albumid,
                COUNT(trackid) as '# Tracks'
               FROM
                tracks
               GROUP BY albumid;""", conn).head()

# Of course, album IDs aren't readable, so we want to combine them with album titles. We first do the `INNER JOIN` and then the `GROUP BY`.

pd.read_sql("""SELECT
                tracks.albumid,
                albums.title as Album,
                COUNT(tracks.trackid)
               FROM
                tracks
               INNER JOIN albums ON albums.albumid = tracks.albumid
               GROUP BY tracks.albumid""", conn).head(30)

# We can combine this with filters, using the `HAVING` keyword, which works like the `WHERE` keyword but for aggregate functions. Here we use `HAVING COUNT`.
pd.read_sql("""SELECT tracks.albumid, albums.title as Album, COUNT(tracks.trackid) FROM tracks INNER JOIN albums ON albums.albumid = tracks.albumid GROUP BY tracks.albumid HAVING COUNT(tracks.trackid) > 15;""", conn).head(30) # Of course, we can also use a varaible for the aggregated value: pd.read_sql("""SELECT tracks.albumid, albums.title as album, COUNT(tracks.trackid) as trackcount FROM tracks INNER JOIN albums ON albums.albumid = tracks.albumid GROUP BY tracks.albumid HAVING trackcount > 15;""", conn).head(10) # Here are several other aggregation functions, `AVG`, `MAX` and `SUM`. We also do some simple math to convert miliseconds into minutes: pd.read_sql("""SELECT tracks.albumid, albums.title as Album, COUNT(tracks.trackid), AVG(tracks.Milliseconds)/1000/60 as 'Average(minutes)', MAX(tracks.Milliseconds)/1000/60 as 'Max(minutes)', SUM(tracks.Milliseconds)/1000/60 as 'Sum(minutes)' FROM tracks INNER JOIN albums ON albums.albumid = tracks.albumid GROUP BY tracks.albumid ORDER BY AVG(tracks.Milliseconds) DESC""", conn).head(30) # ## Exercise 2: Joining # # 1. Create a table that contains track names, genre name and genre ID for each track. Hint: the table is sorted by genres, look at the tail of the dataframe to make sure it works correctly. # 2. Create a table that contains the counts of tracks in a genre by using the GenreID. # 3. Create a table that contains the genre name and the count of tracks in that genre. # 4. Sort the previous table by the count. Which are the biggest genres? Hint: the DESC keyword can be added at the end of the sorting expression. # ## Security # We can use python variables to specify the columns. However, you typically shouldn't trust the content of the variables, especially in a user facing system. Imagine, you read in which attribute to query for from a website field where a user can specify a name. The user could use this to attack your SQL server using [SQL Injection](https://en.wikipedia.org/wiki/SQL_injection). 
# For example, take this statement:

title = "Use Your Illusion I"

# executescript runs multiple SQL statements.
# This isn't very helpful for querying, but can be great for other operations.
c.executescript("SELECT * FROM albums WHERE Title = '" + title + "'")

# Let's look at the tables in the database:

pd.read_sql("""SELECT name FROM sqlite_master
               WHERE type='table'
               ORDER BY name;""", conn)

# Now, if we were to read `title` from a user, an adversarial user could write:
#
# ```SQL
# "a';DROP TABLE invoice_items;"
# ```
#
# This completes the first query (looking for an album with the title "a"), then concludes the SQL command with `;`, and then executes the next SQL command, `DROP TABLE invoice_items`, which – you guessed it – deletes the table invoice_items.

title = "a';DROP TABLE invoice_items;"
print("""SELECT * FROM albums WHERE Title = '""" + title + "'")
c.executescript("""SELECT * FROM albums WHERE Title = '""" + title + "'")

# This throws an error, but the table is gone:

pd.read_sql("""SELECT name FROM sqlite_master
               WHERE type='table'
               ORDER BY name;""", conn)

# Instead, you should use a ? for substitution and pass in a tuple. Also, avoid using `executescript`. This will make sure that no additional statements are executed.

title = ("a';DROP TABLE playlists;",)
# safe: the whole string is bound as a value to match, not parsed as SQL
c.execute('SELECT * FROM albums WHERE title=?', title)

# The `playlists` table is still alive and well:

pd.read_sql("""SELECT name FROM sqlite_master
               WHERE type='table'
               ORDER BY name;""", conn)

# Here is a working example with safe code:

title = ("Use Your Illusion I",)
# this returns the matching row
c.execute('SELECT * FROM albums WHERE title=?', title)
c.fetchone()

# Of course that's not all there is to say about making your code secure, but it's a start!

# ![](exploits_of_a_mom.png)
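# As a self-contained check (using a throwaway in-memory database rather than chinook.db), we can verify that a parameterized query treats the whole malicious string as a plain value:

```python
import sqlite3

demo = sqlite3.connect(":memory:")
demo.execute("CREATE TABLE albums (AlbumId INTEGER PRIMARY KEY, Title TEXT)")
demo.execute("INSERT INTO albums (Title) VALUES ('Use Your Illusion I')")

# with ? substitution the string is bound as data, never parsed as SQL
evil = "a';DROP TABLE albums;"
rows = demo.execute("SELECT * FROM albums WHERE Title = ?", (evil,)).fetchall()

# no rows match, and - more importantly - the table was not dropped
tables = demo.execute("SELECT name FROM sqlite_master WHERE type='table'").fetchall()
```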
24-Databases/24-databases.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 2
#     language: python
#     name: python2
# ---

import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np

# plt.style.use('helvet2')

df = pd.read_csv('./Support_Files/df_35158.csv', index_col=0)
df.head()

# +
plt.scatter(df['time'], df['h98y2'])
plt.plot(df['time'], df['h98y2'], lw=0.4, color='k', ls='--')
plt.show()

# +
plt.scatter(df['nev'], df['h98y2'])
plt.plot(df['nev'], df['h98y2'], lw=0.4, color='k', ls='--')
plt.show()

# +
fig, ax = plt.subplots(nrows=1, ncols=3, sharey=True, figsize=(6, 3), dpi=150)

nev_exponent = 20
ax[0].set_ylabel(r'$\mathrm{n_{e,v}[10^{%2.0d}m^{-3}]}$' % (nev_exponent))
nev_yval = df['nev']*np.power(10.0, -nev_exponent)

intervals = np.array([[1.0, 2.0], [3.0, 4.0]])
clrs_cycle = ['C0', 'C1', 'C2', 'C4', 'C5', 'C6']

ax[0].scatter(df['h98y2'], nev_yval, c='k')
ax[1].scatter(df['taue']*1e3, nev_yval, c='k')
ax[2].scatter(df['beta'], nev_yval, c='k')

# highlight the points that fall into each time interval
for i, interv in enumerate(intervals):
    msk = (df['time'] >= interv[0]) & (df['time'] < interv[1])
    ax[0].scatter(df['h98y2'][msk], nev_yval[msk], c=clrs_cycle[i])
    ax[1].scatter(df['taue'][msk]*1e3, nev_yval[msk], c=clrs_cycle[i])
    ax[2].scatter(df['beta'][msk], nev_yval[msk], c=clrs_cycle[i])

ax[0].plot(df['h98y2'], nev_yval, lw=0.4, color='k', ls='--')
ax[1].plot(df['taue']*1e3, nev_yval, lw=0.4, color='k', ls='--')
ax[2].plot(df['beta'], nev_yval, lw=0.4, color='k', ls='--')

ax[0].set_xlabel(r'$\mathrm{H_{98,y2}}$')
ax[1].set_xlabel(r'$\mathrm{\tau\,[ms]}$')
ax[2].set_xlabel(r'$\mathrm{\beta_{N}}$')
plt.tight_layout()
plt.show()
# -

def triple_plot_df(df, intervals=[[1.0, 2.0]], nev_exponent=20):
    fig, ax = plt.subplots(nrows=1, ncols=3, sharey=True, figsize=(6, 3), dpi=150)
    ax[0].set_ylabel(r'$\mathrm{n_{e,v}[10^{%2.0d}m^{-3}]}$' % (nev_exponent))
    # scale the density by the requested exponent
    nev_yval = df['nev']*np.power(10.0, -nev_exponent)
    # intervals come from the function argument
    intervals = np.asarray(intervals)
    clrs_cycle = ['C0', 'C1', 'C2', 'C4', 'C5', 'C6']

    ax[0].scatter(df['h98y2'], nev_yval, c='k')
    ax[1].scatter(df['taue']*1e3, nev_yval, c='k')
    ax[2].scatter(df['beta'], nev_yval, c='k')

    # highlight the points that fall into each time interval
    for i, interv in enumerate(intervals):
        msk = (df['time'] >= interv[0]) & (df['time'] < interv[1])
        ax[0].scatter(df['h98y2'][msk], nev_yval[msk], c=clrs_cycle[i])
        ax[1].scatter(df['taue'][msk]*1e3, nev_yval[msk], c=clrs_cycle[i])
        ax[2].scatter(df['beta'][msk], nev_yval[msk], c=clrs_cycle[i])

    ax[0].plot(df['h98y2'], nev_yval, lw=0.4, color='k', ls='--')
    ax[1].plot(df['taue']*1e3, nev_yval, lw=0.4, color='k', ls='--')
    ax[2].plot(df['beta'], nev_yval, lw=0.4, color='k', ls='--')

    ax[0].set_xlabel(r'$\mathrm{H_{98,y2}}$')
    ax[1].set_xlabel(r'$\mathrm{\tau\,[ms]}$')
    ax[2].set_xlabel(r'$\mathrm{\beta_{N}}$')
    plt.tight_layout()
    plt.show()

rdf = df[['h98y2', 'taue', 'beta', 'nev']]
# note: newer seaborn versions renamed pairplot's `size` argument to `height`
sns_plot = sns.pairplot(rdf, hue='nev', size=2.5)
devel_plot_df.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
# ---

# # NbDocs. Build docs from jupyter notebooks.

# Write pages as Jupyter notebooks, convert them to Markdown, and build the docs with MkDocs.
#
nbs/README.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Machine Learning and Mathematical Methods
#
# <table>
#   <thead>
#     <tr>
#       <th>Type of ML Problem</th>
#       <th>Description Example</th>
#     </tr>
#   </thead>
#   <tbody>
#     <tr>
#       <td>Classification</td>
#       <td>Pick one of the N labels, samples, or nodes</td>
#     </tr>
#     <tr>
#       <td>(Linear) Regression</td>
#       <td>
#         Predict numerical or time based values<br />
#         Click-through rate
#       </td>
#     </tr>
#     <tr>
#       <td>Clustering</td><td>Group similar examples (relevance)</td>
#     </tr>
#     <tr>
#       <td>Association rule learning</td><td>Infer likely association patterns in data</td>
#     </tr>
#     <tr>
#       <td>Structured output</td>
#       <td>
#         Natural language processing<br />
#         image recognition
#       </td>
#     </tr>
#   </tbody>
# </table>
#
# <p>In this lesson we will focus on Association rule learning (<b>Affinity Analysis</b>) and Structured output (<b>bag of words</b>)</p>

# ### Affinity Analysis - to determine hobby recommendations
#
# Affinity Analysis is data wrangling used to determine the similarities between objects (samples). It's used in:
# * Sales, advertising, or similar recommendations (you liked "Movie", you might also like "other movie")
# * Mapping Familial Human Genes (i.e. the "do you share ancestors?" people)
# * Social web-maps that associate friend "likes" and "sharing" (guess which we are doing)
#
# It works by finding associations among samples, that is, "finding combinations of items, objects, or anything else which happen frequently together". Then it builds rules which use these to determine the likelihood (probability) of items being related... and then we build another graph (no kidding; well, kind of, depending on the application).
#
# We are going to use this with our data, but typically this would be run on hundreds to millions of transactions to ensure statistical significance.
#
# <small>If you're really interested in how these work (on a macro level): <a href="https://medium.com/@smirnov.am/e-commerce-recommendation-systems-basket-analysis-518009d46b79">Smirnov has a great blog about it</a></small>

# ### We start by building our data into a set of arrays (real arrays)
#
# So we will be using numpy (a great library that allows for vector, array, set, and other numeric operations on matrices/arrays with Python) - the arrays it uses are grids of values (all the same type), indexed by a tuple of nonnegative integers.
#
# The dataset can be thought of as each hobby (or hobby type) as a column with a -1 (dislike), 0 (neutral), or 1 (liked) based on friends - we will assume all were strong relationships at this point. Weighting will be added later. So think of it like the table below (in the actual data we drop the person column, since we don't care who the person is here):
#
# <table>
#   <thead>
#     <tr>
#       <th>Person</th>
#       <th>Football</th>
#       <th>Reading</th>
#       <th>Chess</th>
#       <th>Sketching</th>
#       <th>video games</th>
#     </tr>
#   </thead>
#   <tbody>
#     <tr>
#       <th>Josiah</th>
#       <td>1</td><td>1</td><td>1</td><td>-1</td><td>1</td>
#     </tr>
#     <tr>
#       <th>Jill</th>
#       <td>1</td><td>0</td><td>0</td><td>1</td><td>-1</td>
#     </tr>
#     <tr>
#       <th>Mark</th>
#       <td>-1</td><td>1</td><td>1</td><td>1</td><td>-1</td>
#     </tr>
#   </tbody>
# </table>
# <p>Now let's look for our <b>premise</b> (the thing to find out): <i>A person that likes Football will also like Video Games</i></p>

# +
from numpy import array  # faster to load and all we need here

# We are going to see if a person that likes Football also likes Video Games (could do the reverse too)
# Start by building our data (fyi, capital X as in quantity; these will be available in other cells)
X = array([
    [1, 1, 1, -1, 1],
    [1, 0, 0, 1, -1],
    [-1, 1, 1, 1, -1]
])
features = ["football", "reading", "chess", "sketching", "video games"]
n_features = len(features)  # for iterating over features

# +
football_fans = 0
# Even though it is a numpy array we can still use it like an iterator
for sample in X:
    if sample[0] == 1:  # person really likes football
        football_fans += 1
print("{} people love Football".format(football_fans))
# So we can already work out by hand that the premise holds for 1 of the 2 football fans (50%)
# -

# ### Let's build some rule sets

# <p>The simplest measurements of rules are <b>support</b> and <b>confidence</b>.<br />
# <br /><b>Support</b> = Number of times the rule occurs (frequency count)
# <br /><b>Confidence</b> = Percentage of times the rule applies when our premise applies<br /><br />
# We will use dictionaries (defaultdicts supply a default value) to compute these. We will count the number of valid results and a simple frequency of our premises. To test multiple premises we will make a large loop over them. By the end they will have:
# <ul><li>A tuple as the key - e.g. (0, 4) for Football vs. Video Games</li><li>The count of valid/invalid/total occurrences (based on the dict)</li></ul></p>
#
# #### Why must we test multiple premises? Because this is ML, it's analytics - it is not based on a human querying, but on statistical calculation
#
# <sub><i>Those who have done Python may see areas where comprehensions, enumerators, generators, and caches could speed this up - if so, great!
# but let's start simple.</i></sub>
#
#
# <sub>We call these simple rule sets, but they are the same ones used for much more complex data: <a href="https://charleshsliao.wordpress.com/2017/06/10/movie-recommender-affinity-analysis-of-apriori-in-python/">see lines 59, 109, and 110</a></sub>

# +
from collections import defaultdict

valid_rules = defaultdict(int)     # count of completed rules
num_occurances = defaultdict(int)  # count of any premise
# -

for sample in X:
    for premise in range(n_features):
        if sample[premise] == 1:  # we are only looking at likes right now
            num_occurances[premise] += 1  # that's one more like
            for conclusion in range(n_features):
                if premise == conclusion:
                    continue  # i.e. if we are looking at the same idx, move to the next
                if sample[conclusion] == 1:
                    valid_rules[(premise, conclusion)] += 1  # conclusion shows "Like" (1), so the rule held

# ### Now we determine the confidence of our rules
#
# Make a copy of our collection of valid rules and counts (the valid_rules dict). Then loop over the set and divide the frequency of valid occurrences by the total frequency... if this reminds you of one item in your ATM project - well... it should.
support = dict(valid_rules)  # copy: (premise, conclusion) tuple as the key, count of matching 1s (likes) as the value

confidence = defaultdict(float)
for (premise, conclusion) in valid_rules.keys():
    rule = (premise, conclusion)
    confidence[rule] = valid_rules[rule] / num_occurances[premise]
    # tuple of indexes as key: # of valid occurrences / total occurrences as the value

# ### Then it's just time to print out the results (let's say the top 7)

# +
from operator import itemgetter  # needed for sorting by the dict values

# Let's find the top 7 rules (by occurrence, not confidence)
sorted_support = sorted(support.items(),
                        key=itemgetter(1),  # sort in the order of the values of the dictionary
                        reverse=True)       # descending

sorted_confidence = sorted(confidence.items(),
                           key=itemgetter(1),
                           reverse=True)
# Now these lists are in the same order

# Now just print out the top 7
for i in range(7):
    print("Associated Rule {}".format(i + 1))
    premise, conclusion = sorted_support[i][0]
    print_rule(premise, conclusion, support, confidence, features)
# -

### The function would usually go at the top, but in a notebook we can just run this cell before the earlier one; kept here to show the progression
def print_rule(premise, conclusion, support, confidence, features):
    premise_name = features[premise]  # so idx 0 = football, 1 = ...
    conclusion_name = features[conclusion]
    print("rule: if someone likes {} they will also like {}".format(premise_name, conclusion_name))
    print("confidence: {0:.3f} : idx {1} vs. idx {2}".format(
        confidence[(premise, conclusion)], premise, conclusion))
    print("support:{}".format(support[(premise, conclusion)]))

# ## Prints

Associated Rule 1
rule: if someone likes reading they will also like chess
confidence: 1.000 : idx 1 vs. idx 2
support:2
Associated Rule 2
rule: if someone likes chess they will also like reading
confidence: 1.000 : idx 2 vs. idx 1
support:2
Associated Rule 3
rule: if someone likes football they will also like reading
confidence: 0.500 : idx 0 vs. idx 1
support:1
Associated Rule 4
rule: if someone likes football they will also like chess
confidence: 0.500 : idx 0 vs. idx 2
support:1
Associated Rule 5
rule: if someone likes football they will also like video games
confidence: 0.500 : idx 0 vs. idx 4
support:1
Associated Rule 6
rule: if someone likes reading they will also like football
confidence: 0.500 : idx 1 vs. idx 0
support:1
Associated Rule 7
rule: if someone likes reading they will also like video games
confidence: 0.500 : idx 1 vs. idx 4
support:1
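# The whole rule-mining loop above can be condensed into one small standalone function; on the same sample data it reproduces, e.g., the reading -> chess rule (support 2, confidence 1.0):

```python
from collections import defaultdict

def mine_rules(X):
    """Return {(premise, conclusion): (support, confidence)} for 'like' values (1)."""
    valid = defaultdict(int)   # times premise and conclusion were both liked
    occur = defaultdict(int)   # times the premise was liked at all
    n = len(X[0])
    for sample in X:
        for p in range(n):
            if sample[p] != 1:
                continue
            occur[p] += 1
            for c in range(n):
                if c != p and sample[c] == 1:
                    valid[(p, c)] += 1
    return {rule: (cnt, cnt / occur[rule[0]]) for rule, cnt in valid.items()}

# plain lists work just as well as a numpy array here
X_demo = [[1, 1, 1, -1, 1],
          [1, 0, 0, 1, -1],
          [-1, 1, 1, 1, -1]]
rules = mine_rules(X_demo)
```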
DataEngineering/AffinityAnalysis.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Introduction to Sets - Lab
#
# ## Introduction
#
# Probability theory is all around. A common example is in the game of poker or related card games, where players try to calculate the probability of winning a round given the cards they have in their hands. Also, in a business context, probabilities play an important role. Operating in a volatile economy, companies need to take uncertainty into account, and this is exactly where probability theory comes in.
#
# As mentioned in the previous lesson, a good understanding of probability starts with an understanding of sets and set operations. That's exactly what you'll learn in this lab!
#
# ## Objectives
#
# You will be able to:
#
# * Use Python to perform set operations
# * Use Python to demonstrate the inclusion/exclusion principle
#
#
# ## Exploring Set Operations Using a Venn Diagram
#
# Let's start with a pretty conceptual example. Let's consider the following sets:
#
# - $\Omega$ = positive integers between [1, 12]
# - $A$ = even numbers between [1, 10]
# - $B = \{3,8,11,12\}$
# - $C = \{2,3,6,8,9,11\}$
#
#
# #### a. Illustrate all the sets in a Venn Diagram like the one below. The rectangular shape represents the universal set.
#
# <img src="./images/venn_diagr.png" width="600">
#
# #### b. Using your Venn Diagram, list the elements in each of the following sets:
#
# - $ A \cap B$
# - $ A \cup C$
# - $A^c$
# - The absolute complement of B
# - $(A \cup B)^c$
# - $B \cap C'$
# - $A\backslash B$
# - $C \backslash (B \backslash A)$
# - $(C \cap A) \cup (C \backslash B)$
#
#
#
# #### c. For the remainder of this exercise, let's create sets A, B and C and universal set U in Python and test out the results you came up with. Sets are easy to create in Python.
For a guide to the syntax, follow some of the documentation [here](https://www.w3schools.com/python/python_sets.asp) # Create set A A = None 'Type A: {}, A: {}'.format(type(A), A) # "Type A: <class 'set'>, A: {2, 4, 6, 8, 10}" # Create set B B = None 'Type B: {}, B: {}'.format(type(B), B) # "Type B: <class 'set'>, B: {8, 11, 3, 12}" # Create set C C = None 'Type C: {}, C: {}'.format(type(C), C) # "Type C: <class 'set'>, C: {2, 3, 6, 8, 9, 11}" # Create universal set U U = None 'Type U: {}, U: {}'.format(type(U), U) # "Type U: <class 'set'>, U: {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}" # Now, verify your answers in section 1 by using the correct methods in Python. To provide a little bit of help, you can find a table with common operations on sets below. # # | Method | Equivalent | Result | # | ------ | ------ | ------ | # | s.issubset(t) | s <= t | test whether every element in s is in t # | s.issuperset(t) | s >= t | test whether every element in t is in s # | s.union(t) | s $\mid$ t | new set with elements from both s and t # | s.intersection(t) | s & t | new set with elements common to s and t # | s.difference(t) | s - t | new set with elements in s but not in t # | s.symmetric_difference(t) | s ^ t | new set with elements in either s or t but not both # #### 1. $ A \cap B$ A_inters_B = None A_inters_B # {8} # #### 2. $ A \cup C $ A_union_C = None A_union_C # {2, 3, 4, 6, 8, 9, 10, 11} # #### 3. $A^c$ (you'll have to be a little creative here!) A_comp = None A_comp # {1, 3, 5, 7, 9, 11, 12} # #### 4. $(A \cup B)^c $ A_union_B_comp = None A_union_B_comp # {1, 5, 7, 9} # #### 5. $B \cap C' $ B_inters_C_comp = None B_inters_C_comp # {12} # #### 6. $A\backslash B$ compl_of_B = None compl_of_B # {2, 4, 6, 10} # #### 7. $C \backslash (B \backslash A) $ C_compl_B_compl_A = None C_compl_B_compl_A # {2, 6, 8, 9} # #### 8. 
$(C \cap A) \cup (C \backslash B)$ C_inters_A_union_C_min_B= None C_inters_A_union_C_min_B # {2, 6, 8, 9} # ## The Inclusion Exclusion Principle # Use A, B and C from exercise one to verify the inclusion exclusion principle in Python. # You can use the sets A, B and C as used in the previous exercise. # # Recall from the previous lesson that: # # $$\mid A \cup B\cup C\mid = \mid A \mid + \mid B \mid + \mid C \mid - \mid A \cap B \mid -\mid A \cap C \mid - \mid B \cap C \mid + \mid A \cap B \cap C \mid $$ # # Combining these main commands: # # | Method | Equivalent | Result | # | ------ | ------ | ------ | # | a.union(b) | A $\mid$ B | new set with elements from both a and b # | a.intersection(b) | A & B | new set with elements common to a and b # # along with the `len(x)` function to get to the cardinality of a given x ("|x|"). # # What you'll do is translate the left hand side of the equation for the inclusion principle in the object `left_hand_eq`, and the right hand side in the object `right_hand_eq` and see if the results are the same. # left_hand_eq = None print(left_hand_eq) # 9 elements in the set right_hand_eq = None print(right_hand_eq) # 9 elements in the set None # Use a comparison operator to compare `left_hand_eq` and `right_hand_eq`. Needs to say "True". # ## Set Operations in Python # # Mary is preparing for a road trip from her hometown, Boston, to Chicago. She has quite a few pets, yet luckily, so do her friends. They try to make sure that they take care of each other's pets while someone is away on a trip. 
A month ago, each respective person's pet collection was given by the following three sets: Nina = set(["Cat","Dog","Rabbit","Donkey","Parrot", "Goldfish"]) Mary = set(["Dog","Chinchilla","Horse", "Chicken"]) Eve = set(["Rabbit", "Turtle", "Goldfish"]) # In this exercise, you'll be able to use the following operations: # # |Operation | Equivalent | Result| # | ------ | ------ | ------ | # |s.update(t) | $s \mid= t$ |return set s with elements added from t| # |s.intersection_update(t) | s &= t | return set s keeping only elements also found in t| # |s.difference_update(t) | s -= t |return set s after removing elements found in t| # |s.symmetric_difference_update(t) | s ^= t |return set s with elements from s or t but not both| # |s.add(x) | | add element x to set s| # |s.remove(x) | | remove x from set s| # |s.discard(x) | | removes x from set s if present| # |s.pop() | | remove and return an arbitrary element from s| # |s.clear() | |remove all elements from set s| # # Sadly, Eve's turtle passed away last week. Let's update her pet list accordingly. None Eve # should be {'Rabbit', 'Goldfish'} # This time around, Nina promised to take care of Mary's pets while she's away. But she also wants to make sure her pets are well taken care of. As Nina is already spending a considerable amount of time taking care of her own pets, adding a few more won't make that much of a difference. Nina does want to update her list while Mary is away. None Nina # {'Chicken', 'Horse', 'Chinchilla', 'Parrot', 'Rabbit', 'Donkey', 'Dog', 'Cat', 'Goldfish'} # Mary, on the other hand, wants to clear her list altogether while away: None Mary # set() # Look at how many species Nina is taking care of right now. None n_species_Nina # 9 # Taking care of this many pets is weighing heavily on Nina. She remembered Eve had a smaller collection of pets lately, and that's why she asks Eve to take care of the common species. This way, the extra pets are not a huge effort on Eve's behalf.
Let's update Nina's pet collection. None Nina # 7 # Taking care of 7 species is something Nina feels comfortable doing! # ## Writing Down the Elements in a Set # # # Mary dropped off her pets at Nina's house and finally made her way to the highway. Awesome, her vacation has begun! # She's approaching an exit. At the end of this particular highway exit, cars can either turn left (L), go straight (S) or turn right (R). It's pretty busy and there are two cars driving close to her. What you'll do now is create several sets. You won't be using Python here, it's sufficient to write the sets down on paper. A good notion of sets and subsets will help you calculate probabilities in the next lab! # # Note: each set of actions is what _all three cars_ are doing at any given time. # # a. Create a set $A$ of all possible outcomes assuming that all three cars drive in the same direction. # # b. Create a set $B$ of all possible outcomes assuming that all three cars drive in a different direction. # # c. Create a set $C$ of all possible outcomes assuming that exactly 2 cars turn right. # # d. Create a set $D$ of all possible outcomes assuming that exactly 2 cars drive in the same direction. # # # e. Write down the interpretation and give all possible outcomes for the sets denoted by: # - I. $D'$ # - II. $C \cap D$ # - III. $C \cup D$. # ## Optional Exercise: European Countries # # Use set operations to determine which European countries are not in the European Union. You just might have to clean the data first with pandas.
# + import pandas as pd #Load Europe and EU europe = pd.read_excel('Europe_and_EU.xlsx', sheet_name = 'Europe') eu = pd.read_excel('Europe_and_EU.xlsx', sheet_name = 'EU') #Use pandas to remove any whitespace from names # - europe.head(3) #preview dataframe eu.head(3) # + # Your code comes here # - # ## Summary # In this lab, you practiced your knowledge on sets, such as common set operations, the use of Venn Diagrams, the inclusion exclusion principle, and how to use sets in Python!
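For reference, the inclusion-exclusion check from the earlier section can be sketched as follows. The set values are taken from the expected outputs shown in the exercise comments, so this is one possible solution, not the only way to fill in the blanks:

```python
# Sets from section 1, as given by the expected outputs in the comments above
A = {2, 4, 6, 8, 10}
B = {3, 8, 11, 12}
C = {2, 3, 6, 8, 9, 11}

# Left-hand side: |A ∪ B ∪ C|
left_hand_eq = len(A | B | C)

# Right-hand side: |A| + |B| + |C| - |A ∩ B| - |A ∩ C| - |B ∩ C| + |A ∩ B ∩ C|
right_hand_eq = (len(A) + len(B) + len(C)
                 - len(A & B) - len(A & C) - len(B & C)
                 + len(A & B & C))

print(left_hand_eq)   # 9
print(right_hand_eq)  # 9
print(left_hand_eq == right_hand_eq)  # True
```

Note how the operator forms (`|`, `&`) from the table above keep the right-hand side readable.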
source_files/index.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Linear Algebra Project - Compression Using SVD and Image Recognition # **Objective:** # # Compress images with the SVD factorization method (Singular Value Decomposition), allowing the desired compression level to be chosen, and then analyze the influence of this method on digit recognition with the "Written Numbers MNIST" database. # # --- # # **Motivation:** # # Image compression aims to reduce the amount of redundant data, keeping only the important information that helps identify the digits. # Image compression falls into two classes: lossless compression and lossy compression. # The SVD method belongs to the second category, which covers problems such as: # # # * Data transmission speed over networks # * Shrinking images whose quality exceeds what the human eye can detect (JPEG and GIF formats) # * Satellite image compression # * Noise removal # # import gzip import numpy as np import matplotlib.pyplot as plt import timeit # # Accessing the Training Set: # ### Accessing the Training Set Labels: # + file_train_labels = gzip.open('train-labels-idx1-ubyte.gz','r') file_train_labels.read(8) buf_train_labels = file_train_labels.read(12000) data_train_labels = np.frombuffer(buf_train_labels, dtype=np.uint8).astype(np.int32) train_labels = data_train_labels # Print the Training Set labels print(train_labels[:10]) # - # ### Accessing the Training Set Images: # + file_train_images = gzip.open('train-images-idx3-ubyte.gz','r') image_size = 28 num_images_train = 12000 file_train_images.read(16) buf_train_images = file_train_images.read(image_size * image_size * 
num_images_train) data_train_images = np.frombuffer(buf_train_images, dtype=np.uint8).astype(np.float32) data_train_images = data_train_images.reshape(num_images_train, image_size, image_size, 1) train_images = data_train_images # Plot the Training Set images for i in range(0, 3): plt.imshow(np.asarray(data_train_images[i]).squeeze(), cmap='gray') plt.show() print("Image {}".format(i+1)) print() # - # # Accessing the Test Set: # ### Accessing the Test Set Labels: # + file_test_labels = gzip.open('t10k-labels-idx1-ubyte.gz','r') file_test_labels.read(8) buf_test_labels = file_test_labels.read(2000) data_test_labels = np.frombuffer(buf_test_labels, dtype=np.uint8).astype(np.int32) test_labels = data_test_labels # Print the Test Set labels print(test_labels[:10]) # - # ### Accessing the Test Set Images: # + file_test_images = gzip.open('t10k-images-idx3-ubyte.gz','r') image_size = 28 num_images_test = 2000 file_test_images.read(16) buf_test_images = file_test_images.read(image_size * image_size * num_images_test) data_test_images = np.frombuffer(buf_test_images, dtype=np.uint8).astype(np.float32) data_test_images = data_test_images.reshape(num_images_test, image_size, image_size, 1) test_images = data_test_images for i in range(0, 3): plt.imshow(np.asarray(data_test_images[i]).squeeze(), cmap='gray') plt.show() print("Image {}".format(i+1)) print() # - # # Image Processing: # ### Processing the Training Set images: # Creating the **non-centered** data matrix (**image matrix**): X_train = np.asarray(train_images).squeeze().reshape(num_images_train, 784) print(X_train.shape) # Computing the mean and centering the data: # + train_mean = np.mean(X_train, axis = 0) X_train = (X_train - train_mean) print(X_train.shape) # - # Computing the Singular Value Decomposition - SVD: # + start_time = timeit.default_timer() U_train, S_train, Vt_train = np.linalg.svd(X_train, full_matrices=False) V_train = Vt_train.T elapsed = 
timeit.default_timer() - start_time print('U shape =', np.shape(U_train), 'S length =', np.shape(S_train), 'Vt shape =', np.shape(Vt_train)) print('Processing time: {:6.2f} s'.format(elapsed)) # - # Setting the number of singular values used to rebuild the image: autovalor_num = 50 # 50 is the number of singular values that gives the highest accuracy # Resizing the images and projecting them onto the plane of the singular vectors: Y_train = np.dot(U_train[:,:autovalor_num], np.diag(S_train)[:autovalor_num, :autovalor_num]) X_train = np.dot(Y_train, Vt_train[:autovalor_num, :]) print(X_train.shape) # Adding the mean back: X_train = (X_train + train_mean) print(X_train.shape) # Back to image format: train_image_plot = X_train[0] # image[0], i.e. the first image of the Training Set, was chosen as the example print(train_image_plot.shape) train_image_plot = train_image_plot.reshape((28, 28)) print(train_image_plot.shape) # Plot of the final image (example): train_image_plot = np.asarray(train_image_plot) plt.imshow(train_image_plot, cmap='gray') plt.show() # ### Processing the Test Set images: # Creating the **non-centered** data matrix: X_test = np.asarray(test_images).squeeze().reshape(num_images_test, 784) print(X_test.shape) # # Results: # ### Recognition (Using Euclidean Distance): # + start_time = timeit.default_timer() successes = 0 for i in range(0, num_images_test): image_test_recognition = X_test[i].reshape((784, 1)) image_train_recognition = X_train[0].reshape((784, 1)) aux_image = image_test_recognition - image_train_recognition min_distance = np.linalg.norm(aux_image) index = 0 for j in range(0, num_images_train): image_train_recognition = X_train[j].reshape((784, 1)) aux_image = image_test_recognition - image_train_recognition if np.linalg.norm(aux_image) < min_distance: min_distance = np.linalg.norm(aux_image) index = j if test_labels[i] == train_labels[index]: successes += 1 min_distance = 0 print("Accuracy = {:.2f}% | {} successes | 
{} singular values".format((successes/num_images_test)*100, successes, autovalor_num)) elapsed = timeit.default_timer() - start_time print('Processing time: {:6.2f} s'.format(elapsed)) # - # # Plots: # ### Accuracy vs. Number of Singular Values: # + start_time = timeit.default_timer() list_accuracy = [] for num_autovalor in range(10, 784, 10): # Building the final matrix Y_train = np.dot(U_train[:,:num_autovalor], np.diag(S_train)[:num_autovalor, :num_autovalor]) X_train = np.dot(Y_train, Vt_train[:num_autovalor, :]) # Adding the mean back X_train = (X_train + train_mean) successes = 0 for i in range(0, num_images_test): image_test_recognition = X_test[i].reshape((784, 1)) image_train_recognition = X_train[0].reshape((784, 1)) aux_image = image_test_recognition - image_train_recognition min_distance = np.linalg.norm(aux_image) index = 0 for j in range(0, num_images_train): image_train_recognition = X_train[j].reshape((784, 1)) aux_image = image_test_recognition - image_train_recognition if np.linalg.norm(aux_image) < min_distance: min_distance = np.linalg.norm(aux_image) index = j if test_labels[i] == train_labels[index]: successes += 1 min_distance = 0 accuracy = (successes/num_images_test)*100 list_accuracy.append(accuracy) print("Accuracy = {:.2f}% | {} successes | {} singular values".format(accuracy, successes, num_autovalor)) elapsed = timeit.default_timer() - start_time print('\nProcessing time: {:6.2f} s'.format(elapsed)) # - plt.plot(np.arange(10, 784, 10), list_accuracy) plt.xlabel('Number of Singular Values') plt.ylabel('Accuracy %') plt.axis([0, 800, 80, 100]) plt.show() # Zooming in: plt.plot(np.arange(10, 784, 10), list_accuracy) plt.xlabel('Number of Singular Values') plt.ylabel('Accuracy %') plt.axis([0, 500, 85, 95]) plt.show() # ### Accuracy vs. Cumulative Variability: # ### Cumulative Variability vs. Number of Singular Values: # + total_var_train = np.sum(S_train**2) y_plot = 
np.cumsum(S_train**2) / total_var_train x_plot = np.array(range(len(S_train))) plt.plot(x_plot, y_plot*100) plt.xlabel('Number of Singular Values') # the x-axis here is the number of singular values, not the accuracy plt.ylabel('Cumulative Variability %') plt.show() # -
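The rank-k reconstruction used throughout this notebook can be sanity-checked on a small synthetic matrix; a minimal sketch in plain NumPy, independent of the MNIST data:

```python
import numpy as np

# Small synthetic data matrix (8 samples x 6 features)
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 6))

# Full SVD, then a rank-k approximation X_k = U[:, :k] @ diag(S[:k]) @ Vt[:k, :]
U, S, Vt = np.linalg.svd(X, full_matrices=False)
k = 3
X_k = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]

# By the Eckart-Young theorem, the spectral-norm error of the best rank-k
# approximation equals the first discarded singular value S[k]
err = np.linalg.norm(X - X_k, ord=2)
print(np.isclose(err, S[k]))  # True

# Cumulative "variability" captured by the first k singular values,
# as plotted at the end of the notebook
explained = np.cumsum(S**2) / np.sum(S**2)
print(explained[k - 1])
```

This is why the cumulative-variability curve is a useful guide for choosing `autovalor_num`: it tells you how much of the data's energy the kept singular values retain.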
.ipynb_checkpoints/SVD_NOVO-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # %matplotlib inline import warnings warnings.filterwarnings('ignore') import kornia import torch from PIL import Image import matplotlib.pyplot as plt import numpy as np from kornia.feature import * from kornia.geometry import * def visualize_LAF(img, LAF, img_idx = 0): x, y = kornia.feature.laf.get_laf_pts_to_draw(LAF, img_idx) plt.figure() plt.imshow(kornia.utils.tensor_to_image(img[img_idx])) plt.plot(x, y, 'r') plt.show() return img = Image.open('img/siemens.png') # Image with synthetic pattern from SFOP paper: # http://www.ipb.uni-bonn.de/data-software/sfop-keypoint-detector/ timg = kornia.utils.image_to_tensor(np.array(img)).float() / 255. / 255. #Yes, it is not a typo. #This specific image somehow has [0, 255**2] range plt.imshow(kornia.utils.tensor_to_image(timg[0])) # - # Kornia has a module ScaleSpaceDetector for local feature extraction. # It consists of several modules, each of which is tunable and differentiable: # # 1. Scale pyramid # 2. Response (aka "cornerness") # 3. Soft non-maximum suppression # 4. Affine shape detector # 5. Orientation detector # # The output is two tensors: one with responses and one with local affine frames (LAFs). 
You can feed LAFs to the extract_patches function to then describe the corresponding patches # + # Let's detect Harris corners n_feats = 50 mr_size = 3.0 nms = kornia.geometry.ConvSoftArgmax3d(kernel_size=(3,3,3), # nms window size (scale, height, width) stride=(1,1,1), # stride (scale, height, width) padding=(0, 1, 1)) # padding (scale, height, width) harris = kornia.feature.responses.CornerHarris(0.04) harris_local_detector = ScaleSpaceDetector(n_feats, resp_module=harris, nms_module=nms, mr_size=mr_size) print (harris_local_detector) # - lafs, resps = harris_local_detector(timg) visualize_LAF(timg,lafs) # + # Now Shi-Tomasi gftt_local_detector = ScaleSpaceDetector(n_feats, resp_module=kornia.feature.CornerGFTT(), nms_module=nms, mr_size=mr_size) lafs, resps = gftt_local_detector(timg) visualize_LAF(timg,lafs) # + # And Hessian blobs hessian_local_detector = ScaleSpaceDetector(n_feats, resp_module=kornia.feature.BlobHessian(), nms_module=nms, mr_size=mr_size) lafs, resps = hessian_local_detector(timg) visualize_LAF(timg,lafs) # + # What about Harris-affine features? 
harris_affine_local_detector = ScaleSpaceDetector(n_feats, resp_module=kornia.feature.CornerHarris(0.04), nms_module=nms, mr_size=mr_size, aff_module=kornia.feature.LAFAffineShapeEstimator(patch_size=19)) lafs, resps = harris_affine_local_detector(timg) visualize_LAF(timg,lafs) # + # Now let's also detect feature orientation harris_affine_local_detector = ScaleSpaceDetector(n_feats, resp_module=kornia.feature.CornerHarris(0.04), nms_module=nms, mr_size=mr_size, aff_module=kornia.feature.LAFAffineShapeEstimator(patch_size=19), ori_module=kornia.feature.LAFOrienter(patch_size=19)) lafs, resps = harris_affine_local_detector(timg) visualize_LAF(timg,lafs) # + # Let's describe patches with the SIFT descriptor descriptor = kornia.SIFTDescriptor(32) patches = kornia.feature.extract_patches_from_pyramid(timg, lafs) B, N, CH, H, W = patches.size() # The descriptor accepts a standard tensor [B, CH, H, W], while patches have shape [B, N, CH, H, W] # So we need to reshape a bit :) descs = descriptor(patches.view(B * N, CH, H, W)).view(B, N, -1) print (descs.shape) print (descs[0, 0]) # -
examples/feature_detection/local_feature_detection_example.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # ## Configuration # _Initial steps to get the notebook ready to play nice with our repository. Do not delete this section._ # Code formatting with [black](https://pypi.org/project/nb-black/). # %load_ext lab_black import os import pathlib this_dir = pathlib.Path(os.path.abspath("")) data_dir = this_dir / "data" import pytz import glob import requests import pandas as pd import json from datetime import datetime, date from bs4 import BeautifulSoup import regex as re # ## Download # Retrieve the page url = "https://utility.arcgis.com/usrsvcs/servers/9ccc4670c77442f7b12b198a904f4a51/rest/services/HHS/Covid/MapServer/0/query?f=json&returnGeometry=false&outFields=*&where=1=1" r = requests.get(url) data = r.json() # ## Parse dict_list = [] for item in data["features"]: d = dict( county="Marin", area=item["attributes"]["Name"], confirmed_cases=item["attributes"]["CumulativePositives"], ) dict_list.append(d) df = pd.DataFrame(dict_list) # Get timestamp headers = {"User-Agent": "Mozilla/5.0"} url = "https://coronavirus.marinhhs.org/surveillance" page = requests.get(url, headers=headers) soup = BeautifulSoup(page.content, "html.parser") last_updated_sentence = soup.find("div", {"class": "last-updated"}).text last_updated_sentence date = re.search("[0-9]{2}.[0-9]{2}.2[0-9]{1}", last_updated_sentence).group() df["county_date"] = pd.to_datetime(date).date() # ## Vet # Ensure we're getting all 54 areas of Marin County try: assert not len(df) > 54 except AssertionError: raise AssertionError("Marin County's scraper has more rows than before") try: assert not len(df) < 54 except AssertionError: raise AssertionError("Marin's scraper is missing rows") # ## Export # Set date tz = pytz.timezone("America/Los_Angeles") today = datetime.now(tz).date() slug = 
"marin" df.to_csv(data_dir / slug / f"{today}.csv", index=False) # ## Combine csv_list = [ i for i in glob.glob(str(data_dir / slug / "*.csv")) if not str(i).endswith("timeseries.csv") ] df_list = [] for csv in csv_list: if "manual" in csv: df = pd.read_csv(csv, parse_dates=["date"]) else: file_date = csv.split("/")[-1].replace(".csv", "") df = pd.read_csv(csv, parse_dates=["county_date"]) df["date"] = file_date df_list.append(df) df = pd.concat(df_list).sort_values(["date", "area"]) df.to_csv(data_dir / slug / "timeseries.csv", index=False)
places/marin.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.8.10 64-bit # name: python3 # --- from sklearn.feature_extraction.text import CountVectorizer from sklearn.model_selection import cross_val_predict from sklearn.naive_bayes import MultinomialNB from nltk.tokenize import TweetTokenizer from nltk.stem import WordNetLemmatizer from sklearn.pipeline import Pipeline import matplotlib.pyplot as plt from nltk import word_tokenize from sklearn import metrics from sklearn import svm import pandas as pd import nltk import re data = pd.read_csv('Tweets_Mg.csv', encoding='utf-8') data.head() data.columns data.Classificacao.value_counts() data.Classificacao.value_counts().plot(kind='bar') data.count() data.drop_duplicates(['Text'], inplace=True) data.Text.count() tweets = data['Text'] classes = data['Classificacao'] # + # Stopwords: de, a, para, com, ... nltk.download('stopwords') # Word stems: pedreiro --> pedra, pedreira --> pedra nltk.download('rslp') nltk.download('punkt') nltk.download('wordnet') # - def RemoveStopWords(instancia): stopwords = set(nltk.corpus.stopwords.words('portuguese')) palavras = [i for i in instancia.split() if not i in stopwords] return (" ".join(palavras)) def Stemming(instancia): stemmer = nltk.stem.RSLPStemmer() palavras = [] for word in instancia.split(): palavras.append(stemmer.stem(word)) return (" ".join(palavras)) def Limpeza_dados(instancia): instancia = re.sub(r"http\S+", "", instancia).lower().replace('.','').replace(';','').replace('-','').replace(':','').replace(')','') return (instancia) # + wordnet_lemmatizer = WordNetLemmatizer() def Lemmatization(instancia): palavras = [] for word in instancia.split(): palavras.append(wordnet_lemmatizer.lemmatize(word)) return (" ".join(palavras)) # - RemoveStopWords('Eu não gosto do partido, e também não votaria novamente nesse governante!') Stemming('Eu não gosto do partido, e 
também não votaria novamente nesse governante!') Limpeza_dados('Assita aqui o video do Governador falando sobre a CEMIG https://www.uol.com.br :) ;)') Lemmatization('Os carros são bonitos') # + def Preprocessing(instancia): stemmer = nltk.stem.RSLPStemmer() instancia = re.sub(r"http\S+", "", instancia).lower().replace('.','').replace(';','').replace('-','').replace(':','').replace(')','') stopwords = set(nltk.corpus.stopwords.words('portuguese')) palavras = [stemmer.stem(i) for i in instancia.split() if not i in stopwords] return (" ".join(palavras)) tweets = [Preprocessing(i) for i in tweets] # - Preprocessing('Eu não gosto do partido, e também não votaria novamente nesse governante. Assita o video aqui https:// :)') tweets[:50] df = pd.read_csv('Tweets_Mg.csv', encoding='utf-8') df.drop_duplicates(['Text'], inplace=True) tweets = df['Text'] classes = df['Classificacao'] frase = 'A live do @blogminerando é show! :) :-) ;) =D ⛪ ' word_tokenize(frase) tweet_tokenizer = TweetTokenizer() tweet_tokenizer.tokenize(frase) vectorizer = CountVectorizer(analyzer='word', tokenizer=tweet_tokenizer.tokenize) fre_tweets = vectorizer.fit_transform(tweets) type(fre_tweets) fre_tweets.shape modelo = MultinomialNB() modelo.fit(fre_tweets, classes) fre_tweets.A testes = ['Esse governo está no início, vamos ver o que vai dar', 'Estou muito feliz com o governo de Minas esse ano', 'O estado de Minas Gerais decretou calamidade financeira!!!', 'A segurança desse país está deixando a desejar', 'O governador de Minas é mais uma vez do PT', 'O governador de Minas é muito bom'] freq_testes = vectorizer.transform(testes) for t, c in zip (testes,modelo.predict(freq_testes)): print(t +", "+ c) print(modelo.classes_) modelo.predict_proba(freq_testes).round(2) def marque_negacao(texto): negacoes = ['não', 'not'] negacoes_detectada = False resultado = [] palavras = texto.split() for word in palavras: word = word.lower() if negacoes_detectada == True: word = word + '_NEG' if word in negacoes: 
negacoes_detectada = True resultado.append(word) return (" ".join(resultado)) marque_negacao('Eu gosto do partido, votaria novamente nesse governante!') marque_negacao('Eu Não gosto do partido e também não votaria novamente nesse governante!') pipeline_simples = Pipeline( [ ('counts', CountVectorizer()), ('classifier', MultinomialNB()) ] ) pipeline_negacoes = Pipeline( [ ('counts', CountVectorizer(tokenizer=lambda texto: marque_negacao(texto))), ('classifier', MultinomialNB()) ] ) pipeline_simples.fit(tweets, classes) pipeline_simples.steps pipeline_negacoes.fit(tweets, classes) pipeline_negacoes.steps pipeline_simples_svm = Pipeline( [ ('counts', CountVectorizer()), ('classifier', svm.SVC(kernel='linear')) ] ) pipeline_negacoes_svm = Pipeline( [ ('counts', CountVectorizer(tokenizer=lambda texto: marque_negacao(texto))), ('classifier', svm.SVC(kernel='linear')) ] ) resultados = cross_val_predict(pipeline_simples, tweets, classes, cv=10) metrics.accuracy_score(classes, resultados) sentimento=['Positivo','Negativo','Neutro'] print (metrics.classification_report(classes,resultados,sentimento)) print (pd.crosstab(classes, resultados, rownames=['Real'], colnames=['Predito'], margins=True)) def Metricas(modelo, tweets, classes): resultados = cross_val_predict(modelo, tweets, classes, cv=10) return 'Acurácia do modelo: {}'.format(metrics.accuracy_score(classes,resultados)) Metricas(pipeline_simples,tweets,classes) Metricas(pipeline_negacoes,tweets,classes) Metricas(pipeline_simples_svm,tweets,classes) Metricas(pipeline_negacoes_svm,tweets,classes)
NLP/sentiment_analysis.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # Evaluate two model results based on new metrics definition import numpy as np import os import matplotlib.pyplot as plt get_ipython().magic(u'matplotlib inline') # import matplotlib as mpl # mpl.rcParams['figure.dpi'] = 300 from sklearn.metrics import r2_score, mean_squared_error from functions.functions import load_data_forGridSearch, load_object from matplotlib.ticker import FormatStrFormatter def evaluate_generic_metrics(labels, predictions): # label_norm = np.sqrt(np.sum(labels**2, axis=1)) # prediction_norm = np.sqrt(np.sum(predictions**2, axis=1)) label_norm = [np.linalg.norm(y) for y in labels] prediction_norm = [np.linalg.norm(y) for y in predictions] # R^2 r2_c = r2_score(y_true=labels, y_pred=predictions, multioutput='raw_values') r2 = r2_score(y_true=labels, y_pred=predictions) r2_norm = r2_score(y_true=label_norm, y_pred=prediction_norm) # Root mean squared error rmse_c = np.sqrt(mean_squared_error(y_true=labels, y_pred=predictions, multioutput='raw_values')) rmse = np.sqrt(mean_squared_error(y_true=labels, y_pred=predictions)) rmse_norm = np.sqrt(mean_squared_error(y_true=label_norm, y_pred=prediction_norm)) return {"R2_x": r2_c[0], "R2_y": r2_c[1], "R2_z": r2_c[2], "R2": r2, "R2_norm": r2_norm, "RMSE_x_mT": rmse_c[0]*1000, "RMSE_y_mT": rmse_c[1]*1000, "RMSE_z_mT": rmse_c[2]*1000, "RMSE_mT": rmse*1000, "RMSE_norm_mT": rmse_norm*1000} mlp_results_folder = "../Models/Small_training_set/ANN" rf_results_folder = "../Models/Small_training_set/RF" # load testing data X_test, y_test = load_data_forGridSearch("../Data", "test") # + # load predictions run = 6 p_list = np.arange(0.1, 1.0, 0.1) p_list = p_list.round(decimals=1) ANN_R2_norm_results = [] ANN_RMSE_norm_results = [] RF_R2_norm_results = [] RF_RMSE_norm_results = [] for p in p_list: 
ann_predictions = np.load(os.path.join(mlp_results_folder, "predictions_ANN{}_{}.npy".format(run, p))) mlp_results = evaluate_generic_metrics(y_test, ann_predictions) ANN_R2_norm_results.append(mlp_results["R2_norm"]) ANN_RMSE_norm_results.append(mlp_results["RMSE_norm_mT"]) rf_predictions = np.load(os.path.join(rf_results_folder, "predictions_RF{}_{}.npy".format(run, p))) rf_results = evaluate_generic_metrics(y_test, rf_predictions) RF_R2_norm_results.append(rf_results["R2_norm"]) RF_RMSE_norm_results.append(rf_results["RMSE_norm_mT"]) # + # used this one in paper, just to keep consistent fig, axs = plt.subplots(2, 1, figsize=(8, 10)) x_ticks_list = ['new', '10%', '20%', '30%', '40%', '50%', '60%', '70%', '80%', '90%'] axs[0].plot(p_list, ANN_R2_norm_results, linestyle = "-", marker='D', color = 'k', label = "ANN") axs[0].plot(p_list, RF_R2_norm_results, linestyle = "-", marker='s', label = "RF", color='g') axs[0].set_ylabel(r"$R_{norm}^2$", size=16) axs[0].yaxis.set_major_formatter(FormatStrFormatter('%.2f')) axs[0].legend(loc="lower right", prop={'size': 14}) axs[0].set_xticklabels(x_ticks_list) # axes.set_xticklabels(labels, fontdict=None, minor=False) axs[0].tick_params(axis="x", labelsize=12) axs[0].tick_params(axis="y", labelsize=12) axs[1].plot(p_list, ANN_RMSE_norm_results, linestyle = "-", marker='D', color = 'k', label = "ANN") axs[1].plot(p_list, RF_RMSE_norm_results, linestyle = "-", marker='s', label = "RF", color='g') axs[1].set_ylabel(r"$RMSE_{norm} (mT)$", size=16) axs[1].yaxis.set_major_formatter(FormatStrFormatter('%.2f')) axs[1].legend(loc="upper right", prop={'size': 14}) axs[1].set_xticklabels(x_ticks_list) axs[1].tick_params(axis="x", labelsize=12) axs[1].tick_params(axis="y", labelsize=12) axs[1].set_xlabel("\n Percentage of Training Data", size=16) plt.show() # save figure # fig.savefig("../Figures/less_training_single_trial.png", dpi=300) # used in the paper
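The norm-based metrics that `evaluate_generic_metrics` reports can be reproduced on toy vectors without scikit-learn; a small sketch using the same definitions (R² and RMSE on the per-sample field norms, with fields assumed to be in tesla so the RMSE is scaled to mT). The toy arrays are made up for illustration:

```python
import numpy as np

# Toy 3-component "field" labels and predictions (4 samples)
labels = np.array([[0.1, 0.2, 0.3],
                   [0.0, 0.1, 0.2],
                   [0.3, 0.1, 0.0],
                   [0.2, 0.2, 0.2]])
predictions = labels + 0.01  # small constant offset per component

# Per-sample Euclidean norms, as in evaluate_generic_metrics
label_norm = np.linalg.norm(labels, axis=1)
prediction_norm = np.linalg.norm(predictions, axis=1)

# RMSE on the norms, converted from T to mT
rmse_norm_mT = np.sqrt(np.mean((label_norm - prediction_norm) ** 2)) * 1000

# R^2 on the norms: 1 - SS_res / SS_tot
ss_res = np.sum((label_norm - prediction_norm) ** 2)
ss_tot = np.sum((label_norm - label_norm.mean()) ** 2)
r2_norm = 1 - ss_res / ss_tot

print(rmse_norm_mT)
print(r2_norm)
```

This makes explicit what the two curves in the figure compare: how well each model reproduces the magnitude of the field, rather than its individual components.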
Code/Misc/10_Small_Training_Set.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (Anaconda) # language: python # name: anaconda3 # --- # ## **Itertools** import itertools as it import operator # #### accumulate(_p_**[**,_func_**]**) print(list(it.accumulate([1,2,3,4]))) print(list(it.accumulate([1,2,3,4],min))) print(list(it.accumulate([1,2,3,4],max))) print(list(it.accumulate([1,2,3,4],operator.mul))) print(list(it.accumulate([1,2,3,4],lambda x,y: x**2-y))) print(list(it.accumulate(['A','2','3','4']))) # #### chain(_iterables_) print(list(it.chain([1,2,3,4],range(5),['a','b','c'],'qwerty'))) # #### chain.from_iterable(_iterable_) print(list(it.chain.from_iterable(['abcd','1234']))) # #### combinations(_iterable_,_r_) print(list(it.combinations('abcde',2))) print(list(it.combinations('abcde',3))) print(list(it.combinations([1,2,3,4],2))) print(list(it.combinations(range(5),2))) # #### combinations_with_replacement(_iterable_,_r_) print(list(it.combinations_with_replacement('abcde',2))) print(list(it.combinations_with_replacement(range(5),2))) # #### compress(_data_,_selector_) print(list(it.compress(range(5),[True,False,True,True]))) print(list(it.compress(range(5),[0,1,0]))) print(list(it.compress(range(5),[2,1,0]))) print(list(it.compress(range(5),['a','b','c']))) # #### count(_start=0_,_step=1_) print(list(zip('01234',it.count(2,1)))) print(list(zip('01234',it.count()))) print(list(zip('01234',it.count(0,5)))) print(list(zip('01234',it.count(0)))) # #### cycle(_iterable_) print(list(zip('01234',it.cycle('ABCDEFGHI')))) print(list(zip(range(10),it.cycle('ABCD')))) # #### dropwhile(_predicate_,_iterable_) print(list(it.dropwhile(lambda x:x<5,range(10)))) print(list(it.dropwhile(lambda x:x<5,[0,1,2,3,4,5,6,7,8,6,5,4,3,2,1]))) # #### filterfalse(_predicate_,_iterable_) print(list(it.filterfalse(lambda x:x%2,range(11)))) # #### groupby(_iterable_,_key=None_) print([k for k,g in 
it.groupby('AAAAAABBBBBBAAbBBCCCCCBBBBAAACCBB')]) print([list(g) for k,g in it.groupby('AAAAAABBBBBBAAbBBCCCCCBBBBAAACCBB')]) # ### islice(_iterable_,_stop_) || islice(_iterable_,_start_,_stop_**[**,_step_**]**) print(list(it.islice('ABCDEFGHI',2))) print(list(it.islice('ABCDEFGHI',2,4))) print(list(it.islice('ABCDEFGHI',2,None,3))) # #### permutations(_iterable_,_r=None_) print(list(it.permutations('ABCDEFG',2))) print(list(it.permutations('ABCD',3))) print(list(it.permutations('ABC'))) # #### product(_*iterables_,_repeat=1_) print(list(it.product(range(3),range(2)))) print(list(it.product(range(3),range(2),repeat=2))) print(list(it.product(range(2),repeat=3))) # #### repeat(_object_**[**,_times_**]**) print(list(map(pow,range(10),it.repeat(2)))) # #### starmap(_function_,_iterable_) print(list(it.starmap(pow,[(0,1),(1,2),(3,4)]))) # #### takewhile(_predicate_,_iterable_) print(list(it.takewhile(lambda x:x<5,range(10)))) print(list(it.takewhile(lambda x:x<5,[1,5,2,3,4,5,6,5,4,3,2]))) # #### tee(_iterable_,_n=2_) print([i for i in it.tee(range(10))]) print([list(i) for i in it.tee(range(10),3)]) # #### zip\_longest(_*iterables_,_fillvalue=None_) print(list(it.zip_longest(range(3),range(5)))) print(list(it.zip_longest(range(3),'range(5)',fillvalue='Sas')))
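As a small combined exercise, `groupby`, `chain.from_iterable` and `repeat` together give a compact run-length encoder. This is not part of the notebook above, just a sketch showing how the pieces compose:

```python
import itertools as it

def run_length_encode(s):
    """Collapse runs of equal items into (item, count) pairs using groupby."""
    return [(k, len(list(g))) for k, g in it.groupby(s)]

def run_length_decode(pairs):
    """Invert the encoding with chain.from_iterable and repeat."""
    return ''.join(it.chain.from_iterable(it.repeat(k, n) for k, n in pairs))

encoded = run_length_encode('AAAABBBCCD')
print(encoded)  # [('A', 4), ('B', 3), ('C', 2), ('D', 1)]
print(run_length_decode(encoded))  # AAAABBBCCD
```

Note that each `groupby` group is an iterator that is invalidated when the outer iteration advances, which is why the count is taken with `len(list(g))` inside the comprehension.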
Standard_Library/itertools.ipynb
# -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.5.3 # language: julia # name: julia-1.5 # --- ## Preloads using Statistics using FFTW using Plots using BenchmarkTools using Profile using LinearAlgebra using Measures using HDF5 push!(LOAD_PATH, pwd()) using DHC_2DUtils using MLDatasets using Images theme(:juno) using DSP using Interpolations using Distributed using ProgressMeter addprocs(30); using Random Random.seed!(100); test_angles = 2π*rand(10000); train_angles = 2π*rand(60000); # + train_x, train_y = MNIST.traindata() test_x, test_y = MNIST.testdata() lst_train = Array{Any}(undef, 0) for i = 1:60000 push!(lst_train,train_x[:,:,i]) end lst_test = Array{Any}(undef, 0) for i = 1:10000 push!(lst_test,test_x[:,:,i]) end # + @everywhere begin using Statistics using BenchmarkTools using LinearAlgebra using Distributed using ProgressMeter push!(LOAD_PATH, pwd()) using DHC_2DUtils using FFTW using MLDatasets using Images using Interpolations # filter bank filter_hash = fink_filter_hash(1,8,nx=256,wd=2) function mnist_pad(im; θ=0.0) impad = zeros(Float64,128,128) impad[78:-1:51,51:78] = im' imbig = imresize(impad,(256,256),Lanczos4OpenCV()) if θ != 0.0 imrot = imrotate(imbig, θ, axes(imbig), Lanczos4OpenCV()) imrot[findall(imrot .!= imrot)] .= 0.0 return imrot end return imbig end function mnist_DHC(params) θ, x = params image = mnist_pad(x[:,:], θ=θ) WST = DHC_compute(image, filter_hash, filter_hash) return WST end end mnist_DHC_out = @showprogress pmap(mnist_DHC, zip(train_angles,lst_train)) mnist_DHC_out = hcat(mnist_DHC_out...) 
h5write("mnist_DHC_train_RR_wd2.h5", "main/data", mnist_DHC_out)
h5write("mnist_DHC_train_RR_wd2.h5", "main/angles", train_angles)

# +
## The filter bank and helper functions defined in the @everywhere block above
## persist on all workers, so the test set can be processed without redefining them.
mnist_DHC_out = @showprogress pmap(mnist_DHC, zip(test_angles, lst_test))
mnist_DHC_out = hcat(mnist_DHC_out...)

h5write("mnist_DHC_test_RR_wd2.h5", "main/data", mnist_DHC_out)
h5write("mnist_DHC_test_RR_wd2.h5", "main/angles", test_angles)

# +
## Repeat the test-set scattering computation for several independent random
## draws of rotation angles; each seed writes its own HDF5 file (suffixes _0 .. _6).
for (i, seed) in enumerate([56, 5968, 4729, 154, 985, 46, 22])
    Random.seed!(seed)
    test_angles = 2π*rand(10000)
    train_angles = 2π*rand(60000)   # drawn to keep the RNG stream identical; unused below
    mnist_DHC_out = @showprogress pmap(mnist_DHC, zip(test_angles, lst_test))
    mnist_DHC_out = hcat(mnist_DHC_out...)
    h5write("mnist_DHC_test_RR_wd2_$(i-1).h5", "main/data", mnist_DHC_out)
    h5write("mnist_DHC_test_RR_wd2_$(i-1).h5", "main/angles", test_angles)
end
Daily/2021_04_04RCjl.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import gensim from gensim.models import Word2Vec,KeyedVectors import tensorflow as tf from tensorflow.contrib.tensorboard.plugins import projector # - # <h3>Load Saved Model</h3> # + FOLDER_PATH = "E:/TEMP/GGL_W2V" # Load Google's pre-trained Word2Vec model. model = KeyedVectors.load_word2vec_format(FOLDER_PATH+'/GoogleNews-vectors-negative300.bin', binary=True) # - print("Vocabulary Size: {0}".format(len(model.vocab))) #3,000,000 # + VOCAB_SIZE = len(model.vocab)-1 EMBEDDING_DIM = model["is"].shape[0]#model.trainables.layer1_size w2v = np.zeros((VOCAB_SIZE, EMBEDDING_DIM)) # - # <h3>Prepare Vocab File</h3> #save .tsv with vocabulary tsv_file_path = FOLDER_PATH+"/tensorboard/metadata.tsv" with open(tsv_file_path,'w+', encoding='utf-8') as file_metadata: for i,word in enumerate(model.index2word[:VOCAB_SIZE]): w2v[i] = model[word] file_metadata.write(word+'\n') # <h3>Tensorflow Projector</h3> # + TENSORBOARD_FILES_PATH = FOLDER_PATH+"/tensorboard" #Tensorflow Graph Variables X_init = tf.placeholder(tf.float32, shape=(VOCAB_SIZE, EMBEDDING_DIM), name="embedding") X = tf.Variable(X_init) #Initialize Variables init = tf.global_variables_initializer() sess = tf.Session() #sess.run(tf.initialize_all_variables(), feed_dict={X_init: w2v}) sess.run(init, feed_dict={X_init: w2v}) saver = tf.train.Saver() writer = tf.summary.FileWriter(TENSORBOARD_FILES_PATH, sess.graph) #projector config = projector.ProjectorConfig() embed = config.embeddings.add() embed.metadata_path = tsv_file_path # + projector.visualize_embeddings(writer,config) saver.save(sess, TENSORBOARD_FILES_PATH+'/model.ckpt', global_step = VOCAB_SIZE) #run python -m tensorboard.main --logdir=C:\.... 
# -
sess.close()

w2v.shape

# <h3>Scrap Code</h3>

# +
TENSORBOARD_FILES_PATH = FOLDER_PATH+"/tensorboard"
sess = tf.Session()
X_init = tf.placeholder(tf.float32, shape=(VOCAB_SIZE, EMBEDDING_DIM))
X = tf.Variable(X_init)
# The rest of the setup...
# tf.initialize_all_variables() is deprecated; use tf.global_variables_initializer()
sess.run(tf.global_variables_initializer(), feed_dict={X_init: w2v})

# +
saver = tf.train.Saver()
writer = tf.summary.FileWriter(TENSORBOARD_FILES_PATH, sess.graph)

# projector
config = projector.ProjectorConfig()
embed = config.embeddings.add()
embed.metadata_path = tsv_file_path

# -
projector.visualize_embeddings(writer, config)
saver.save(sess, TENSORBOARD_FILES_PATH+'/model.ckpt', global_step=VOCAB_SIZE)

sess.close()

# +
# use a forward slash here: "\t" in "...\tensorboard" is a tab escape, not a path separator
TENSORBOARD_FILES_PATH = FOLDER_PATH+"/tensorboard"
sess = tf.InteractiveSession()
with tf.device("/cpu:0"):
    embedding = tf.Variable(w2v, trainable=False, name="embedding")
tf.global_variables_initializer().run()

saver = tf.train.Saver()
writer = tf.summary.FileWriter(TENSORBOARD_FILES_PATH, sess.graph)

# projector
config = projector.ProjectorConfig()
embed = config.embeddings.add()
embed.metadata_path = tsv_file_path
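The exported matrix can be sanity-checked without launching TensorBoard. Below is a purely illustrative sketch: the `nearest` helper and the tiny random `w2v`/`vocab` stand-ins are our own (not gensim or TensorBoard API), but the cosine-similarity ranking is exactly what the projector displays for a selected point:

```python
import numpy as np

def nearest(w2v, vocab, word, k=3):
    # Normalize rows, then rank every word by cosine similarity to `word`.
    unit = w2v / np.linalg.norm(w2v, axis=1, keepdims=True)
    q = unit[vocab.index(word)]
    sims = unit @ q                      # cosine similarity against all rows
    order = np.argsort(-sims)            # descending similarity
    return [vocab[i] for i in order if vocab[i] != word][:k]

# Toy stand-in for the real 3,000,000 x 300 matrix loaded above.
vocab = ["king", "queen", "man", "woman", "apple"]
rng = np.random.default_rng(0)
w2v = rng.normal(size=(len(vocab), 8))
w2v[1] = w2v[0] + 0.01 * rng.normal(size=8)  # place "queen" almost on top of "king"

print(nearest(w2v, vocab, "king"))  # "queen" ranks first
```

With the real GoogleNews matrix the same function is slow but works unchanged; the projector's built-in nearest-neighbor panel does this ranking for you.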
WordEmbeddings/TF_VisualizeTrainedModel_Google_Newsletter.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: 'Python 3.7.9 64-bit (''bachi'': conda)'
#     name: python379jvsc74a57bd09ebc6147973a05ce4bbd06f13fbfeff7bb540ef572b036e6c34da4f1ef7b2b65
# ---

# + [markdown] id="Jeo3-FO-6fJm"
# # Policy Gradient Reinforcement Learning

# + [markdown] id="GNsuE0M_7aAN"
# ### **Team Member:** 108024507 張文騰 / 108024512 吳紹丞 / 108024519 劉怡禎 / 109062659 蘇瑞揚

# + [markdown] id="3c_HHsUL70tk"
# Below we divide our training code into three parts: PPO, A3C, and PG. We have also tried frame-based RL built on PPO.

# + [markdown] id="mXC676Wh7c06"
# # **Content**
# #### 0. **Game Statistics**
# - EDA result
# #### 1. **PPO**
# - State-Based
# #### 2. **A3C**
# - State-Based
# #### 3. **PG**
# - State-Based
# #### 4. **Conclusion**

# + [markdown] id="vIgSeuma8Pbh"
# # Set up Environment

# + id="ecNjvmEJ6fJo" outputId="f502fd2f-7901-41cf-ca0c-5fc0e541c93b"
import sys
sys.version

# + id="ik2Az0d_6fJp" outputId="01d7bc5c-674e-4f53-a308-6d4c301d2361"
import tensorflow as tf
import numpy as np
import os
print(os.getcwd())

# + id="pLE8NcBU6fJp"
# limit the usage of GPU memory
os.environ['CUDA_VISIBLE_DEVICES'] = "1"
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        tf.config.experimental.set_virtual_device_configuration(
            gpus[0],
            [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=500)])
    except RuntimeError as e:
        print(e)

# + id="xjxJufGP6fJq" outputId="cb1d6992-0d16-413b-e87c-9b5f82209c52"
os.environ["SDL_VIDEODRIVER"] = "dummy"  # prevent a pop-up game window from appearing

from ple.games.flappybird import FlappyBird
from ple import PLE

game = FlappyBird()
env = PLE(game, fps=30, display_screen=False)  # environment interface to the game
env.reset_game()

# + [markdown] id="OZ0o7HMMJdLF"
# # 0. Game Statistics

# + [markdown] id="_ha9dYcI6fJx"
# #### Game EDA

# + id="nI3sYD516fJy"
import matplotlib.pyplot as plt
from scipy import ndimage

env.reset_game()
T = 0
rewards = []
frames = []
states = []

# + id="RtwmhHhR6fJy"
# to pass the first pipe:
# action 0 -> 38 times -> 7
# action 1 -> 22 times -> go
# NOTE: TA_state() is the state helper defined in the PPO section below; run that cell first.
rewards.append(env.act(env.getActionSet()[0]))
states.append(TA_state())
print(game.getGameState())
frames.append(env.getScreenRGB())
plt.imshow(ndimage.rotate(frames[T], 270))
print("\nT: {} REWARD: {}".format(T, rewards[T]))
T += 1

rewards.append(env.act(env.getActionSet()[1]))
states.append(TA_state())
print(game.getGameState())
frames.append(env.getScreenRGB())
plt.imshow(ndimage.rotate(frames[T], 270))
print("\nT: {} REWARD: {}".format(T, rewards[T]))
T += 1

# + [markdown] id="41xI1TZr6fJy"
# 1. T = 61 -> REWARD = -5
# 2. T = 97 -> REWARD = -5
# 3. T = 18 -> earliest death
# 4. if player_y > 390 -> die
# 5. if player_y < -5 -> die
# 6. next_pipe_dist_to_player is the distance between the player and the exit point of the pipe, so a bird that hits the pipe head-on is still 61 units away from the exit point.
#
# **Conclusion:**
# - The reward function should weight the region where the bird passes through the pipe more heavily, i.e. 0 <= next_pipe_dist_to_player <= 61.

# + [markdown] id="p_2MgaBrJj2x"
# # 1.
PPO - state-based # + [markdown] id="VZtzkpwF6fJq" # ### Define Make Movie Function # + id="AMXgr-_96fJq" import moviepy.editor as mpy def make_anim(images, fps=60, true_image=False): duration = len(images) / fps def make_frame(t): try: x = images[int(len(images) / duration * t)] except: x = images[-1] if true_image: return x.astype(np.uint8) else: return ((x + 1) / 2 * 255).astype(np.uint8) clip = mpy.VideoClip(make_frame, duration=duration) clip.fps = fps return clip # + [markdown] id="q_iyPlB46fJr" # ### Define Actor # + id="sR1_OwpV6fJr" outputId="81c1396f-6564-4c22-a348-c7e0878d9bfa" Actor = tf.keras.Sequential() # Actor.add(tf.keras.layers.Dense(32, input_dim = 8, activation='relu',kernel_initializer = 'random_uniform', bias_initializer = 'random_uniform')) Actor.add(tf.keras.layers.Dense(32, input_dim = 8, kernel_initializer = 'random_uniform', bias_initializer = 'random_uniform')) Actor.add(tf.keras.layers.LeakyReLU(alpha=0.3)) Actor.add(tf.keras.layers.Dense(32, kernel_initializer = 'random_uniform', bias_initializer = 'random_uniform')) Actor.add(tf.keras.layers.LeakyReLU(alpha=0.3)) Actor.add(tf.keras.layers.Dense(64, kernel_initializer = 'random_uniform', bias_initializer = 'random_uniform')) Actor.add(tf.keras.layers.LeakyReLU(alpha=0.3)) Actor.add(tf.keras.layers.Dense(2,kernel_initializer = 'random_uniform', bias_initializer = 'random_uniform')) Actor.add(tf.keras.layers.Softmax()) Actor.build() Actor_opt = tf.keras.optimizers.Adam(learning_rate=1e-3) print(Actor.summary()) # + [markdown] id="M107PKOf6fJs" # ### TA_state # + id="1Ia9Ia-t6fJs" import copy def TA_state(): state = copy.deepcopy(game.getGameState()) state['next_next_pipe_bottom_y'] -= state['player_y'] state['next_next_pipe_top_y'] -= state['player_y'] state['next_pipe_bottom_y'] -= state['player_y'] state['next_pipe_top_y'] -= state['player_y'] relative_state = list(state.values()) # # return the state in tensor type, with batch dimension # relative_state = 
tf.convert_to_tensor(relative_state, dtype=tf.float32) # relative_state = tf.expand_dims(relative_state, axis=0) return relative_state # + [markdown] id="05hwVC_x6fJt" # ### Define Forward Advantage Function (Not Efficient) # + id="CQamC1SB6fJt" def Advantage_func(rewards, time, gamma): dis_rewards = [] for t in range(1,time+1,1): t_prime = t dis_reward = 0 count = t - 1 for i in range(count,len(rewards),1): dis_reward += rewards[i] * (gamma ** (t_prime-t)) t_prime += 1 dis_rewards.append(dis_reward) # naive baseline # baseline = np.mean(dis_rewards) # Advantage = dis_rewards-baseline Advantage = dis_rewards return (Advantage) # + [markdown] id="AamX9Jwa6fJt" # ### Define Backward Advantage Function (Efficient) # + id="LgL_lka-6fJu" def Advantage_func_fromback(rewards, time, gamma): dis_rewards = [] dis_reward = 0 count = 0 for t in range(time,0,-1): dis_reward = dis_reward * gamma + rewards[t-1] dis_rewards.append(dis_reward) count += 1 # naive baseline # baseline = np.mean(dis_rewards) # Advantage = dis_rewards-baseline Advantage = list(reversed(dis_rewards)) return (Advantage) # + [markdown] id="gq5SUb-KA8t1" # #### Check the result of two advantage functions (should be the same) # + id="sQ8eBUdC6fJu" outputId="d97dad7c-06cc-475f-e216-c9ec6a59b275" r = np.random.normal(size = 1000, loc = 0, scale = 1) print(r[:5]) print(Advantage_func(r, len(r), 0.9)[:5]) # as size goes large, it'll take too much time print(Advantage_func_fromback(r, len(r), 0.9)[:5]) # + [markdown] id="gpErlGvH6fJv" # ### Define Loss Function (-Objective Function) with For Loop (Not Efficient) # + id="y1IUJclf6fJv" def J_func(probs, old_probs, adv, epsilon): J = [] for up, op, a in zip(probs, old_probs, adv): p_ratio = up/op s1 = (p_ratio * a) s2 = (tf.clip_by_value(p_ratio, 1-epsilon, 1+epsilon) * a) J.append(-tf.math.minimum(s1,s2)) return(J) # + id="4taXB22C6fJw" outputId="64773497-617d-4616-819e-435d3fb93988" r = [40,30,20,10] adv = Advantage_func_fromback(r, len(r), 0.9) print(adv) probs = 
[0.8,0.7,0.6,0.5] old_probs = [0.7,0.6,0.6,0.7] J_func(probs, old_probs, adv, 0.05) # + [markdown] id="9TVuVu1-6fJw" # ### Define Loss Function (-Objective Function) with TF API (Efficient) # + id="ewUifGVm6fJw" def J_func_tf(probs, old_probs, adv, epsilon): p_ratio = tf.divide(probs,old_probs) s1 = tf.multiply(p_ratio,tf.cast(adv, dtype = tf.float32)) s2 = tf.multiply(tf.clip_by_value(p_ratio, 1-epsilon, 1+epsilon), tf.cast(adv, dtype = tf.float32)) J = -tf.math.minimum(s1,s2) return(J) # + id="0TBKxxqp6fJw" outputId="e8c53991-98fe-46a3-8a02-1e763f9827b7" r = [40,30,20,10] adv = Advantage_func_fromback(r, len(r), 0.9) print(adv) probs = tf.constant([0.8,0.7,0.6,0.5]) old_probs = tf.constant([0.7,0.6,0.6,0.7]) adv = tf.constant(adv) J_func_tf(probs, old_probs, adv, 0.05) # + [markdown] id="XhdOpRbL6fJx" # ### Define Learning Function # + id="_X0W2cvi6fJx" EPSILON = 0.1 def train_step(states, actions, adv, probs, ep): with tf.GradientTape() as tape: prs = [] pr = Agent(states) # print(pr) for idx, a in enumerate(actions): prs.append(pr[idx][a]) prs = tf.stack(prs, axis = 0) probs = tf.constant(probs) adv = tf.stack(adv, axis = 0) EXP_J = J_func_tf(prs, probs, adv, EPSILON) actor_loss = (tf.math.reduce_mean(EXP_J)) # print("P: ", prs,'\n') # print("loss: ", actor_loss,'\n') grads = tape.gradient(actor_loss, Agent.trainable_variables) Actor_opt.apply_gradients(zip(grads, Agent.trainable_variables)) return actor_loss # + [markdown] id="sO01NKZC6fJz" # #### Define Reward Function - Add some rewards and penalties thru the road to pipe # - When 141 >= next_distance_to_pipe > 61, if bird is higher or lower than the top or bottom of the pipe, penalty = -0.5; if between, reward = 1.5 # - When 61 >= next_distance_to_pipe > 0, if bird is higher or lower than the top or bottom of the pipe, penalty = 0; if between, reward = 2.5 # - Note that the original reward and penalty in game are 1 and -5 when bird flies through the pipe or die # # We add additional rewards criteria (first 
2 bullet points) onto the original rewards in PPO. # + id="YnJkYHJF6fJz" REWARD_DIST = 20 def Reward_func(states, rewards): Rewards_adj = [] for i in range(1, len(states), 1): next_state = states[i][0] if next_state[2] > 61 and next_state[2] <= 61 + 4 * REWARD_DIST: if next_state[0] <= next_state[3]+next_state[0]: re = -0.5 elif next_state[0] >= next_state[4]+next_state[0]: re = -0.5 else: re = 1.5 # randomly assign a value # the minimum of next_pipe_distance = 1 elif next_state[2] <= 61 and next_state[2] > 0: if next_state[0] <= next_state[3]+next_state[0]: re = 0 elif next_state[0] >= next_state[4]+next_state[0]: re = 0 else: re = 2.5 # randomly assign a value else: re = 0 re = tf.dtypes.cast(re, tf.float32) Rewards_adj.append(re) Rewards_all = tf.constant(rewards) + Rewards_adj return Rewards_all # + [markdown] id="_3SgCtG06fJz" # #### Start Training # + id="1ehsDWmK6fJ0" # model_path = "./model_rewardsum442/" # Agent = tf.keras.models.load_model(model_path, compile=False) # + id="ev7oabp_6fJ0" outputId="0d5c155f-ab29-4cc3-d1c8-a2e11e497d78" tags=[] # tf.random.set_seed(1) Agent = Actor NUM_EPISODE = 50000 GAMMA = 0.95 EXPLORE_RATIO_STAGE1 = 0.85 EXPLORE_RATIO_STAGE2 = 0.9 EXPLORE_LIMIT_STAGE1 = 5000 EXPLORE_LIMIT_STAGE2 = 8000 EXPLORE_LIMIT_CEILING = 10000 START_PPO_UPDATE = 60 EPOCHS = 10 best = 200 Cum_reward = [] Ts = [] iter_num = 0 for episode in range(0, NUM_EPISODE + 1, 1): # Reset the environment env.reset_game() frames = [env.getScreenRGB()] cum_reward = 0 # all_aloss = [] # all_closs = [] rewards = [] states = [] actions = [] old_probs = [] # values = [] # feed current state and select an action state = tf.constant(np.array(TA_state()).reshape(1,8)) states.append(state) T = 0 print("EPISODE: {}".format(episode)) while not env.game_over(): # feed current state and select an action Stochastic = Agent(state)[0].numpy() # Exploration if episode < EXPLORE_LIMIT_STAGE1: if Stochastic[0] > EXPLORE_RATIO_STAGE1: Stochastic[0] = EXPLORE_RATIO_STAGE1 
                Stochastic[1] = 1 - EXPLORE_RATIO_STAGE1
            elif Stochastic[0] < 1-EXPLORE_RATIO_STAGE1:
                Stochastic[0] = 1 - EXPLORE_RATIO_STAGE1
                Stochastic[1] = EXPLORE_RATIO_STAGE1
        # use `and`, not bitwise `&`: `&` binds tighter than the comparisons
        if episode >= EXPLORE_LIMIT_STAGE2 and episode < EXPLORE_LIMIT_CEILING:
            if Stochastic[0] > EXPLORE_RATIO_STAGE2:
                Stochastic[0] = EXPLORE_RATIO_STAGE2
                Stochastic[1] = 1 - EXPLORE_RATIO_STAGE2
            elif Stochastic[0] < 1-EXPLORE_RATIO_STAGE2:
                Stochastic[0] = 1 - EXPLORE_RATIO_STAGE2
                Stochastic[1] = EXPLORE_RATIO_STAGE2

        action = np.random.choice(2, p=Stochastic)
        prob = Stochastic[action]
        # value = Agent.critic(state).numpy()

        # execute the action and get reward
        reward = env.act(env.getActionSet()[action])
        frames.append(env.getScreenRGB())

        # collect trajectory
        actions.append(action)
        rewards.append(reward)
        old_probs.append(prob)
        # values.append(value)
        state = np.array(TA_state()).reshape(1,8)
        states.append(state)

        if T>500 and T%100 == 0:
            print("T_IN_TRAJECTORY: {}".format(T))
        T += 1

    if T>500:
        print("MAX_T_BEFORE_PPO_STAGE2: {}".format(T))
    # value = Agent.critic(state).numpy()
    # values.append(value)
    # print(states)

    Rewards = Reward_func(states, rewards)
    cum_reward = np.sum(Rewards)
    states = tf.constant(np.array(states[:-1]).reshape(len(states[:-1]),8))  # [[[],[]]]
    actions = np.array(actions, dtype=np.int32)
    Cum_reward.append(np.round(cum_reward,3))
    Ts.append(T)
    print(Rewards)

    # CALCULATE ADVANTAGE BASED ON THE NEW REWARDS
    adv = Advantage_func_fromback(Rewards, len(Rewards), GAMMA)

    # print("EPISODE: ",episode,'\n')
    # print("STATES: ",states,'\n')
    # print("PROBS: ",old_probs,'\n')
    # print("ACTIONS: ",actions,'\n')
    # print("REWARDS: ",rewards,'\n')
    # print("NEW REWARDS: ",Rewards,'\n')
    # print("ADVANTAGE: ", adv,'\n')
    # print("CUM REWARDS: ", cum_reward,'\n')

    if T <= START_PPO_UPDATE:
        actor_loss = train_step(states, actions, adv, old_probs, episode)
        print("[{}] epochs: {}".format(episode, 0))
        print("LOSS: {}".format(actor_loss))
    else:
        for epochs in range(EPOCHS):
            actor_loss = train_step(states, actions, adv,
old_probs, episode) print("[{}] epochs: {}".format(episode, epochs)) print("LOSS: {}".format(actor_loss)) START_PPO_UPDATE = T print( "time_live: {}\ncumulated reward: {}\navg_time_live: {}\navg_cum_reward: {}\nmax_time_live: {}\nmax_cum_reward: {}\n". format(T, np.round(cum_reward,3), np.round(np.mean(Ts)), np.round(np.mean(Cum_reward),3), np.max(Ts), np.round(np.max(Cum_reward),3))) if (T>best): print('\n') # Agent.save('test{}.h5'.format(T)) tf.keras.models.save_model(Agent, filepath='./model_rewardsum{}/'.format(T)) print('\n') clip = make_anim(frames, fps=60, true_image=True).rotate(-90) clip.write_videofile("/home/ingmember03/DL2020/DL2020_07/comp4/PPO_rewardsum_demo-{}.webm".format(T), fps=60) # display(clip.ipython_display(fps=60, autoplay=1, loop=1, maxduration=120)) best = T # pipe1 -> 62 - 78 # pipe2 -> 98 - 114 # pipe3 -> 134 - 150 # + [markdown] id="xs4c2fz4MZXd" # # 2. A3C - state-based # + [markdown] id="anuwV0UT7Ie9" # ### Parameters # Here we define parameters used in A3C. Also, we change the original rewards in the game. 
# + id="av2QJOa36ohm" args = { 'gamma' : 0.9, 'update_interval':300, 'actor_lr':0.001, 'critic_lr':0.001, 'entropy_beta':0.05, 'reward_no_die':0.01, 'reward_die':-5, 'reward_through':1.5 } CUR_EPISODE = 0 # + [markdown] id="0NkBLOeP78jV" # ### Define Actor and Critic model # + id="L1zWfIRl731L" class Actor: def __init__(self, state_dim, action_dim): self.state_dim = state_dim self.action_dim = action_dim self.model = self.create_model() self.opt = tf.keras.optimizers.Adam(args['actor_lr']) self.entropy_beta = args['entropy_beta'] def create_model(self): return tf.keras.Sequential([ Input((self.state_dim,)), Dense(256, activation='relu'), Dense(128, activation='relu'), Dense(64, activation='relu'), Dense(self.action_dim, activation='softmax') ]) def compute_loss(self, actions, logits, advantages): ce_loss = tf.keras.losses.SparseCategoricalCrossentropy( from_logits=True) entropy_loss = tf.keras.losses.CategoricalCrossentropy( from_logits=True) actions = tf.cast(actions, tf.int32) policy_loss = ce_loss( actions, logits, sample_weight=tf.stop_gradient(advantages)) entropy = entropy_loss(logits, logits) return policy_loss - self.entropy_beta * entropy def train(self, states, actions, advantages): with tf.GradientTape() as tape: logits = self.model(states, training=True) loss = self.compute_loss( actions, logits, advantages) grads = tape.gradient(loss, self.model.trainable_variables) self.opt.apply_gradients(zip(grads, self.model.trainable_variables)) return loss class Critic: def __init__(self, state_dim): self.state_dim = state_dim self.model = self.create_model() self.opt = tf.keras.optimizers.Adam(args['critic_lr']) def create_model(self): return tf.keras.Sequential([ Input((self.state_dim,)), Dense(256, activation='relu'), Dense(128, activation='relu'), Dense(64, activation='relu'), Dense(1, activation='linear') ]) def compute_loss(self, v_pred, td_targets): mse = tf.keras.losses.MeanSquaredError() return mse(td_targets, v_pred) def train(self, states, td_targets): 
        with tf.GradientTape() as tape:
            v_pred = self.model(states, training=True)
            assert v_pred.shape == td_targets.shape
            loss = self.compute_loss(v_pred, tf.stop_gradient(td_targets))
        grads = tape.gradient(loss, self.model.trainable_variables)
        self.opt.apply_gradients(zip(grads, self.model.trainable_variables))
        return loss

# + [markdown] id="k5lrX6fy8KNB"
# ### Define Global Agent
# In A3C, we define a global model that collects gradients from games played on all CPU threads and then updates the global model parameters.

# + id="_h6Ik2vF8Yqi"
class Agent:
    def __init__(self, env_name):
        # make_new() and TA_state(env) are helper functions defined elsewhere in the project
        env = make_new()
        self.env_name = env_name
        self.state_dim = TA_state(env).shape[1]
        self.action_dim = len(env.getActionSet())

        self.global_actor = Actor(self.state_dim, self.action_dim)
        self.global_critic = Critic(self.state_dim)
        self.num_workers = cpu_count()
        #self.num_workers = 1

    def train(self, max_episodes=1000000):
        workers = []
        for i in range(self.num_workers):
            env = make_new()
            workers.append(WorkerAgent(
                env, self.global_actor, self.global_critic, max_episodes))
        for worker in workers:
            worker.start()
        for worker in workers:
            worker.join()

# + [markdown] id="h2zCua-g9UHN"
# ### Define Local Agent
# This agent plays the game in its own thread. Note that we update the global parameters once after collecting 300 transitions (`update_interval`).
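The worker's rollout loop calls a `reward_trans` helper whose definition does not appear in the cells shown here. The sketch below is an assumed reconstruction that maps PLE's raw rewards (−5 on death, +1 per pipe passed, 0 otherwise) onto the shaped values configured in `args`; treat the thresholds and default values as our guesses, not the authors' code:

```python
# Assumed reconstruction of the missing `reward_trans` helper.
# Defaults mirror the values in `args`:
#   reward_die=-5, reward_through=1.5, reward_no_die=0.01
def reward_trans(raw_reward, reward_die=-5, reward_through=1.5, reward_no_die=0.01):
    if raw_reward < 0:        # bird died this frame
        return reward_die
    if raw_reward >= 1:       # bird passed through a pipe
        return reward_through
    return reward_no_die      # bird survived another frame

print(reward_trans(0))  # 0.01
```

The small positive per-frame reward (`reward_no_die`) is what gives the agent a dense survival signal between pipes.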
# + id="okzJRQHp90Px" class WorkerAgent(Thread): def __init__(self, env, global_actor, global_critic, max_episodes): Thread.__init__(self) self.lock = Lock() self.env = env self.state_dim = TA_state(self.env).shape[1] self.action_dim = len(self.env.getActionSet()) self.max_episodes = max_episodes self.global_actor = global_actor self.global_critic = global_critic self.actor = Actor(self.state_dim, self.action_dim) self.critic = Critic(self.state_dim) self.actor.model.set_weights(self.global_actor.model.get_weights()) self.critic.model.set_weights(self.global_critic.model.get_weights()) def n_step_td_target(self, rewards, next_v_value, done): td_targets = np.zeros_like(rewards) cumulative = 0 if not done: cumulative = next_v_value for k in reversed(range(0, len(rewards))): cumulative = args['gamma'] * cumulative + rewards[k] td_targets[k] = cumulative return td_targets def advatnage(self, td_targets, baselines): return td_targets - baselines def list_to_batch(self, list): batch = list[0] for elem in list[1:]: batch = np.append(batch, elem, axis=0) return batch def train(self): global CUR_EPISODE while self.max_episodes >= CUR_EPISODE: state_batch = [] action_batch = [] reward_batch = [] episode_reward, done = 0, False self.env.reset_game() state = TA_state(self.env) total_loss =0 while not done: probs = self.actor.model.predict(state) action = np.random.choice(self.action_dim, p=probs[0]) reward = reward_trans(self.env.act(self.env.getActionSet()[action])) next_state = TA_state(self.env) done = self.env.game_over() action = np.reshape(action, [1, 1]) reward = np.reshape(reward, [1, 1]) state_batch.append(state) action_batch.append(action) reward_batch.append(reward) if len(state_batch) >= args['update_interval'] or done: states = self.list_to_batch(state_batch) actions = self.list_to_batch(action_batch) rewards = self.list_to_batch(reward_batch) next_v_value = self.critic.model.predict(next_state) td_targets = self.n_step_td_target( rewards, next_v_value, done) 
advantages = td_targets - self.critic.model.predict(states) with self.lock: actor_loss = self.global_actor.train( states, actions, advantages) critic_loss = self.global_critic.train( states, td_targets) self.actor.model.set_weights( self.global_actor.model.get_weights()) self.critic.model.set_weights( self.global_critic.model.get_weights()) total_loss+=actor_loss total_loss+=critic_loss state_batch = [] action_batch = [] reward_batch = [] td_target_batch = [] advatnage_batch = [] episode_reward += reward[0][0] state = next_state if CUR_EPISODE % 100 == 0: print('EP{} EpisodeReward={} TotalLoss={}\n'.format(CUR_EPISODE, episode_reward,total_loss)) wandb.log({'Reward': episode_reward,'Total Loss':total_loss}) CUR_EPISODE += 1 # + [markdown] id="CIhS5ZP5-oEz" # ### Start training # + id="juRsBNuk-szi" agent = Agent("flappy_bird_A3C") agent.train(100000) # + [markdown] id="ue1cE34DMaGj" # # 3. PG - state-based # + [markdown] id="smGtlfKVd379" # ### Training # + id="Zc_JClOUd3m1" import numpy as np from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Reshape, Flatten from tensorflow.keras.optimizers import Adam from IPython.display import Image, display import moviepy.editor as mpy import tensorflow as tf import os os.environ["SDL_VIDEODRIVER"] = "dummy" import copy from ple.games.flappybird import FlappyBird from ple import PLE multiple_return_values = False gpu_number = 0 seed = 2021 gpus = tf.config.experimental.list_physical_devices('GPU') if gpus: try: tf.config.experimental.set_visible_devices(gpus[0], 'GPU') # Currently, memory growth needs to be the same across GPUs for gpu in gpus: tf.config.experimental.set_memory_growth(gpu, True) logical_gpus = tf.config.experimental.list_logical_devices('GPU') print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs") except RuntimeError as e: # Memory growth must be set before GPUs have been initialized print(e) game = FlappyBird() env = PLE(game, fps=30, 
          display_screen=False, rng=seed)  # game environment interface
env.reset_game()

def TA_state():
    state = copy.deepcopy(game.getGameState())
    # express the pipe positions relative to the bird's height
    state['next_next_pipe_bottom_y'] -= state['player_y']
    state['next_next_pipe_top_y'] -= state['player_y']
    state['next_pipe_bottom_y'] -= state['player_y']
    state['next_pipe_top_y'] -= state['player_y']
    relative_state = list(state.values())
    # return the state in tensor type, with batch dimension
    relative_state = tf.convert_to_tensor(relative_state, dtype=tf.float32)
    relative_state = tf.expand_dims(relative_state, axis=0)
    return relative_state

def MY_reward(n, p):
    # reward progress of the bird's height toward the midpoint of the next
    # pipe gap: compare the (negated) distance to the gap centre in the new
    # state n against the previous state p
    a = n['next_pipe_bottom_y']
    b = n['next_pipe_top_y']
    re_n = (a + b) / 2
    re_n -= n['player_y']

    a = p['next_pipe_bottom_y']
    b = p['next_pipe_top_y']
    re_p = (a + b) / 2
    re_p -= p['player_y']

    re_n = -(np.absolute(re_n))
    re_p = -(np.absolute(re_p))
    return (re_n - re_p) / 16

model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(32, input_dim=8, activation='relu', kernel_initializer='random_normal'))
model.add(tf.keras.layers.Dense(32, activation='relu', kernel_initializer='random_normal'))
model.add(tf.keras.layers.Dense(32, activation='relu', kernel_initializer='random_normal'))
model.add(tf.keras.layers.Dense(2, activation="softmax"))
model.build()
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
# the network ends in a softmax, so it outputs probabilities, not logits
compute_loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False)
print(model.summary())

def discount_rewards(r, gamma=0.5):
    discounted_r = np.zeros_like(r)
    running_add = 0
    for t in reversed(range(0, r.size)):
        running_add = running_add * gamma + r[t]
        discounted_r[t] = running_add
    return discounted_r

class GradUpdate:
    def __init__(self, model):
        self.Buffer = model.trainable_variables
        self.zero()

    def zero(self):
        for ix, grad in enumerate(self.Buffer):
            self.Buffer[ix] = grad * 0

    def update(self, ep_memory):
        # accumulate each step's gradients, weighted by its discounted return
        for grads, r in ep_memory:
            for ix, grad in enumerate(grads):
                self.Buffer[ix] += grad * r

def get_action(model, s):
    s = s.reshape([1, 8])  # FlappyBird's state vector has 8 features
    logits = model(s)
    a_dist = logits.numpy()
    # Choose a random action with p = action distribution
    a = np.random.choice(a_dist[0], p=a_dist[0])
    a = np.argmax(a_dist == a)
    return logits, a

episodes = 2000
scores = []
update_every = 5
gradBuffer = GradUpdate(model)
h = 0  # longest episode seen so far (used to checkpoint the model)

for e in range(episodes):
    env.reset_game()
    frames = [env.getScreenRGB()]
    ep_memory = []
    ep_score = 0
    done = False
    t = 0
    s_n = game.getGameState()
    while not env.game_over():
        with tf.GradientTape() as tape:
            # forward pass
            state = TA_state()
            logits = model(state)
            a_dist = logits.numpy()
            # Choose a random action with p = action distribution
            action = np.random.choice(a_dist[0], p=a_dist[0])
            action = np.argmax(a_dist == action)
            loss = compute_loss([action], logits)

        # take the chosen action
        reward = env.act(env.getActionSet()[action])
        frames.append(env.getScreenRGB())
        ep_score += reward
        s_p = s_n
        s_n = game.getGameState()
        reward += MY_reward(s_n, s_p)

        grads = tape.gradient(loss, model.trainable_variables)
        ep_memory.append([grads, reward])
        t += 1
        if t > h:
            model.save('test{}.h5'.format(t))
            h = t

    # record the score once per episode, so the running mean below is over episodes
    scores.append(ep_score)

    # Discount the rewards (rows of ep_memory are [grads, reward], so build
    # an object array)
    ep_memory = np.array(ep_memory, dtype=object)
    ep_memory[:, 1] = discount_rewards(ep_memory[:, 1])
    gradBuffer.update(ep_memory)

    if e % update_every == 0:
        optimizer.apply_gradients(zip(gradBuffer.Buffer, model.trainable_variables))
        gradBuffer.zero()

    if e % 100 == 0:
        print("Episode {} Score {}".format(e, np.mean(scores[-100:])))
        print(t)

# + [markdown] id="cBVoC3uwEQvL"
# # 4. Conclusion

# + [markdown] id="3mwQCXa9EYx-"
# - After trying the three models above, we find that although the original policy gradient is quite simple, its performance is still good compared to the others.
# - While trying PPO (off-policy), we found that it is not necessary to update more than once per trajectory; we can instead focus on collecting longer trajectories and still make many updates over the course of training.
#   In addition, during the early episodes it is also not necessary to launch several updates per trajectory, since collecting a trajectory takes little time at that early stage.
# - We include state information to define two different reward functions:
#     1. based on the bird's height relative to the midpoint of the pipe gap, which gives a continuous reward function.
#     2. based on the bird's distance to the pipe: for example, when the bird is within about 20 time steps of the pipe, we penalize it if it flies above the pipe's top or below its bottom, and reward it otherwise.
# - With a well-shaped reward function like these, we suggest that the baseline in the advantage function can be removed, since the rewards already take both positive and negative values.
# - While building the model, we found that the leaky ReLU activation function works better than plain ReLU.
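# The second reward idea above can also be sketched as code. This is an
# illustrative reconstruction, not the exact function used in our experiments:
# the `near_dist` threshold stands in for "about 20 time steps away", and the
# state keys follow PLE FlappyBird's `game.getGameState()`.

```python
# Sketch of reward idea 2 (names and thresholds are assumptions): penalize
# the bird when the upcoming pipe is close and the bird sits outside the
# pipe gap; reward it when it is inside the gap.
def distance_gated_reward(state, near_dist=80):
    """state: dict like PLE FlappyBird's game.getGameState().
    near_dist: horizontal distance (pixels) treated as "close to the pipe";
    roughly 20 time steps at ~4 px per frame (an assumption)."""
    if state['next_pipe_dist_to_player'] > near_dist:
        return 0.0  # pipe still far away: no shaping signal
    # screen y grows downward, so the gap is top_y < player_y < bottom_y
    in_gap = state['next_pipe_top_y'] < state['player_y'] < state['next_pipe_bottom_y']
    return 1.0 if in_gap else -1.0
```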
Flappy Bird Deep Reinforcement Learning.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Synchronous & Asynchronous
# - Synchronous: code runs sequentially, one step after another
# - Asynchronous: work runs concurrently

# #### Thread
# When a program takes a long time to run, there are two kinds of bottlenecks.
# 1. I/O bound: reading and writing files, network latency, and so on
# 2. CPU bound: heavy computation
#
# - I/O-bound problems can be mitigated to some extent with threads.
# - CPU-bound problems have to be solved with more efficient code or better algorithms.
#
# - Synchronous (sync)
#     - The next task starts only after the current one finishes.
# - Asynchronous (async)
#     - Tasks run independently of each other.

import threading

ls1, ls2 = [], []

# +
def th_func1(number):
    for idx in range(number):
        ls1.append(idx)
        if idx % 100 == 0:
            print("ls1 : ", idx)

def th_func2(number):
    for idx in range(number):
        ls2.append(idx)
        if idx % 100 == 0:
            print("ls2 : ", idx)
# -

th1 = threading.Thread(target=th_func1, args=(12000,))  # the trailing comma in args is required!
th2 = threading.Thread(target=th_func2, args=(10000,))

th1.start()
th2.start()

len(ls1), len(ls2)

# #### Multi-Thread
# - Runs several tasks in interleaved time slices.
# - task1 (1s) -> task2 (1s) -> task3 (1s) -> task1 (1s) ... and so on.
# - It is asynchronous, so you do not know in advance when each task will finish.
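# The cell above returns immediately after `start()`, so `len(ls1), len(ls2)`
# may be evaluated while the workers are still running. A minimal sketch of
# making the outcome deterministic with `join()`:

```python
# join() blocks until a thread finishes, so after joining both workers
# the final list length is guaranteed.
import threading

results = []

def worker(n):
    for i in range(n):
        results.append(i)

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait for each worker to finish

print(len(results))  # 2000
```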
python/01_python_syntax/09_Thread.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %pylab inline from nilearn.plotting import plot_anat plot_anat('../../wacv_Journal_material/Data/Data_Origin/IBSR_18/IBSR_V2.0_nifti_stripped/IBSR_nifti_stripped/IBSR_03/IBSR_03_ana.nii.gz', display_mode='ortho', dim=-1, draw_cross=False, annotate=False) def plot(path): plot_anat(path, display_mode='ortho', dim=-1, draw_cross=False, annotate=False) plot('../../wacv-Journal-material/Data/Data_Origin/IBSR_18/IBSR_V2.0_nifti_stripped/IBSR_nifti_stripped/IBSR_01/IBSR_01_ana.nii') plot('../../wacv-Journal-material/Data/Data_Origin/IBSR_18/IBSR_V2.0_nifti_stripped/IBSR_nifti_stripped/IBSR_01/IBSR_01_ana 2.nii') plot('../../wacv-Journal-material/Data/Data_Origin/IBSR_18/IBSR_V2.0_nifti_stripped/IBSR_nifti_stripped/IBSR_01/IBSR_01_ana.nii.gz') plot('../../wacv-Journal-material/Data/Data_Origin/IBSR_18/IBSR_V2.0_nifti_stripped/IBSR_nifti_stripped/IBSR_01/IBSR_01_ana_brainmask.nii') plot('../../wacv-Journal-material/Data/Data_Origin/IBSR_18/IBSR_V2.0_nifti_stripped/IBSR_nifti_stripped/IBSR_01/IBSR_01_ana_strip.nii') plot('../../wacv-Journal-material/Data/Data_Origin/IBSR_18/IBSR_V2.0_nifti_stripped/IBSR_nifti_stripped/IBSR_01/IBSR_01_seg_ana.nii')
notebooks/Nipype-notebook.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

import transportation_tutorials as tt
from matplotlib import pyplot as plt
import pandas as pd
import numpy as np

# # Histograms and Frequency Plots
#
# In this section, we will review the creation of histograms and
# frequency plots for data. These two kinds of plots are very
# similar: histograms plot the distribution of continuous data
# by grouping similar values together in (typically homogeneously
# sized) bins, while frequency plots are a similar visualization
# for discrete (categorical) data that is by its nature already
# "binned".

# ## Computing Frequency Data

# We'll create some frequency plots of trip mode using the
# example `trips` data.

trips = pd.read_csv(tt.data('SERPM8-BASE2015-TRIPS'))
trips.info()

# The pandas.Series object has a `value_counts` method that
# counts the occurrences of each unique value in the series.
# It returns a new Series, with the original values as the
# index. By default the result is sorted in decreasing order
# of frequency, but by setting `sort` to False we can get
# the results in order by the original values, which in this
# case will make a more readable figure (as similar modes have
# adjacent code numbers).

trip_mode_counts = trips.trip_mode.value_counts(sort=False)
trip_mode_counts

# There are 20 distinct trip modes recognized in the SERPM
# trip file. They are identified by code numbers, but we
# can create a python dictionary that maps the code numbers
# to slightly more human-friendly names that will be used in our
# figures.
trip_mode_dictionary = { 1: "DRIVEALONEFREE", 2: "DRIVEALONEPAY", 3: "SHARED2GP", 4: "SHARED2PAY", 5: "SHARED3GP", 6: "SHARED3PAY", 7: "TNCALONE", 8: "TNCSHARED", 9: "WALK", 10: "BIKE", 11: "WALK_MIX", 12: "WALK_PRMW", 13: "WALK_PRMD", 14: "PNR_MIX", 15: "PNR_PRMW", 16: "PNR_PRMD", 17: "KNR_MIX", 18: "KNR_PRMW", 19: "KNR_PRMD", 20: "SCHBUS", } # We can apply this dictionary to the `trip_mode_counts` index using the # `map` method. trip_mode_counts.index = trip_mode_counts.index.map(trip_mode_dictionary) trip_mode_counts # ## Plotting Frequency Data # # Now we have a Series that contains the data we want to use. # We can plot this data using the `plot` method. This convenient # method makes it easy to generate a quick visualization of the # data. trip_mode_counts.plot(kind='bar'); # The `plot` method takes a `kind` argument to define # the kind of plot to generate -- a variety of kinds # are available, including the 'bar' chart we see above, # or a horizontally oriented version in a 'barh' chart: trip_mode_counts.plot(kind='barh'); # or a 'pie' chart: trip_mode_counts.plot(kind='pie'); # You might notice the plotting function isn't necessarily # smart about things like overlapping labels. For a quick # visualization created to help you understand your own data, # this might not be a big deal, but for generating figures # that will be inserted into reports or shared with others, # you'll want to manage these kinds of problems by manipulating # the visualization data before plotting it. # ### Customizing Plots # # If the default output isn't exactly what you want, it # is possible to customize the output, both by giving additional # arguments to the `plot` method, and by manipulating the # resulting chart (actually a `Axes` object) before # rendering the results. # # The arguments that the `plot` method will accept vary # depending on the plot kind, to customize # the appearance of the result. 
# For example, the `barh`
# plot can accept arguments such as `color` and `figsize`; here
# we use `color` to make a figure with red bars:

trip_mode_counts.plot(kind='barh', color='red');

# But using the `color` argument on a pie chart doesn't make
# sense and results in an error:

try:
    trip_mode_counts.plot(kind='pie', color='red')
except TypeError as err:
    print(err)

# A more powerful set of figure customization tools is
# available by manipulating the return value of the `plot`
# method, which is a matplotlib `Axes` object.
# This object can be further modified or customized to create a well
# crafted output figure. For example, we can change the
# colors, add some axis labels, and format the tick marks like this:

ax = trip_mode_counts.plot(kind='bar')
ax.set_title("Trip Mode Frequency", fontweight='bold')
ax.set_xlabel("Trip Mode")
ax.set_ylabel("Number of Trips");
ax.set_yticklabels([f"{i:,.0f}" for i in ax.get_yticks()])
ax.set_yticks([5000,15000,25000,35000], minor=True);

# This figure has a lot of detail that is probably unnecessary
# for our presentation. In particular, there are a lot of different
# variations of transit modes, but most have a trivial share of
# trips (at least within the Jupiter study area). It might be better
# to aggregate all the transit trips into a single bucket for this
# figure. We could do so in a variety of ways -- we could manipulate
# the final counts to sum up all the transit parts, or we could manipulate
# the original source data before we tally the numbers. Let's do the
# latter: we'll add a new code (21) to the dictionary for general transit,
# map all the transit modes to it, and recompute the tally.
tm = {11,12,13,14,15,16,17,18,19} trip_mode_dictionary[21] = 'TRANSIT' trip_mode_counts = trips.trip_mode.map(lambda x: 21 if x in tm else x).value_counts(sort=False) trip_mode_counts.index = trip_mode_counts.index.map(trip_mode_dictionary) trip_mode_counts ax = trip_mode_counts.plot(kind='bar', color='green') ax.set_title("Trip Mode Frequency") ax.set_xlabel("Trip Mode") ax.set_ylabel("Number of Trips"); # ## Plotting Histogram Data # # We'll create some histograms of household income using data from the # example `households` data. hh = pd.read_csv(tt.data('SERPM8-BASE2015-HOUSEHOLDS'), index_col=0) hh.set_index('hh_id', inplace=True) hh.info() # As with plotting general figures, pandas includes a pre-made method for # making simple histograms, using the `hist` method on a pandas.Series. hh.income.hist(); # Also like the `plot` method, the `hist` method includes # a limited ability to customize the output by passing # certain arguments. For example, we can increase the # number of bins from the default 10 to a more interesting # 50, get rid of the grid lines, and change the color to # red: hh.income.hist(bins=50, grid=False, color='red'); # It's also possible to give explicit boundaries to the bins # argument of `hist` by using a list or array, but this is # not generally useful as this function doesn't properly # normalize the result, giving a histogram that is at best # misleading: bins = np.array([0,10,20,40,60,70,80,90,100,125,150,200,1000]) * 1000 hh.income.hist(bins=bins); # Instead, if non-uniform bin sizes are desired, it's better # to use the `hist` function from a matplotlib `Axes`, which allows for # correct normalization for density: fig, ax = plt.subplots() ax.hist(hh.income, bins=bins, density=True); # You'll note that the y-axis on this figure has changed scale, as # instead of giving the count of observations in each bin it now # gives the density (i.e., expected number of households within each # one-dollar-width interval, averaged across all 
# such intervals in each bin).
# The resulting figure looks much more similar to the red histogram
# shown above.

# Also like `plot`, the return value of the `hist`
# method is a matplotlib `Axes` object,
# which can be further modified or customized to create a well
# crafted output figure. For example, we can change the
# plot limits on the x axis so we don't give so much area
# to plot a long tail, add some axis labels, and format the
# tick marks like this:

ax = hh.income.hist(grid=False, bins=80)
ax.set_xlim(0,350_000)
ax.set_title("Household Income Histogram");
ax.set_xticklabels([f"${i/1000:.0f}K" for i in ax.get_xticks()]);
ax.set_ylabel("Frequency");
ax.set_yticks([]);
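# As a quick sanity check of the density normalization described above (run
# here on synthetic data, not the SERPM households file): with `density=True`,
# each bar's height times its bin width sums to exactly 1, even when the bins
# have non-uniform widths.

```python
import numpy as np

rng = np.random.default_rng(0)
income = rng.gamma(shape=2.0, scale=40_000, size=10_000)  # synthetic incomes
bins = np.array([0, 10, 20, 40, 60, 70, 80, 90, 100, 125, 150, 200, 1000]) * 1000

density, edges = np.histogram(income, bins=bins, density=True)
area = np.sum(density * np.diff(edges))  # integral of the histogram
print(round(area, 6))  # 1.0
```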
course-content/visualization/histograms.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from pycalphad import equilibrium, Database import pycalphad.variables as v import timeit import numpy as np db_alfe = Database('/home/rotis/git/pycalphad/research/alfe_sei.TDB') my_phases_alfe = ['LIQUID', 'HCP_A3', 'AL5FE2', 'AL2FE', 'AL5FE4', 'FCC_A1', 'B2_BCC', 'AL13FE4'] results = [] for nprocs in [1, 2, 4, 8, 16]: ct = timeit.timeit("equilibrium(db_alfe, ['AL', 'FE', 'VA'], my_phases_alfe, {v.T: (300,2000,100), v.P: 101325,v.X('AL'): (0,1,0.02)}, verbose=False, pbar=False, nprocs=nprocs)", number=1, globals=globals()) results.append([nprocs, ct]) results = np.array(results) # %matplotlib inline import matplotlib.pyplot as plt fig = plt.figure(figsize=(9,6)) fig.gca().plot(results[:,0], results[:,1], marker='.') fig.gca().set_xlabel('Number of Subprocesses', fontsize=18) fig.gca().set_ylabel('Calculation Time (seconds)', fontsize=18) fig.gca().set_xlim((1,None)) fig.gca().set_title('Al-Fe Phase Diagram Calculation (Quad-core CPU)', fontsize=18) fig.gca().tick_params(labelsize=18) plt.show() from pycalphad.core.equilibrium import worker_process from cloudpickle import dumps, loads proc = dumps(db_alfe) func = loads(proc) func == db_alfe
DebugMultiprocessing.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # FAQs for Regression, MAP and MLE

# * So far we have focused on regression. We began with the polynomial regression example where we have training data $\mathbf{X}$ and associated training labels $\mathbf{t}$ and we use these to estimate weights, $\mathbf{w}$, to fit a polynomial curve through the data:
# \begin{equation}
# y(x, \mathbf{w}) = \sum_{j=0}^M w_j x^j
# \end{equation}
#
# * We derived how to estimate the weights using both maximum likelihood estimation (MLE) and maximum a-posteriori estimation (MAP).
#
# * Then, last class we said that we can generalize this further using basis functions (instead of only raising $x$ to the $j$th power):
# \begin{equation}
# y(x, \mathbf{w}) = \sum_{j=0}^M w_j \phi_j(x)
# \end{equation}
# where $\phi_j(\cdot)$ is any basis function you choose to apply to the data.
#
#
# * *Why is regression useful?*
#     * Regression is a common type of machine learning problem where we want to map inputs to a value (instead of a class label). For example, the example we used in our first class was mapping silhouettes of individuals to their age. So regression is an important technique whenever you want to map from a data set to another value of interest. *Can you think of other examples of regression problems?*
#
#
# * *Why would I want to use other basis functions?*
#     * We began with the polynomial curve fitting example so that we would have a concrete example to work through, but polynomial curve fitting is not the best approach for every problem. You can think of the basis functions as methods for extracting useful features from your data. For example, if it is more useful to compute distances between data points (instead of raising each data point to various powers), then you should do that instead!
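# To make the basis-function answer concrete, here is a small sketch (the
# centers, width, and data below are illustrative choices, not values from
# class): the same least-squares machinery accepts a design matrix $\Phi$
# whether its columns are polynomial features or Gaussian bumps.

```python
import numpy as np

def poly_design(x, M):
    # columns are x**0, x**1, ..., x**M
    return np.vander(x, M + 1, increasing=True)

def gaussian_design(x, centers, s=0.3):
    # columns are phi_j(x) = exp(-(x - mu_j)**2 / (2 s**2))
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * s ** 2))

x = np.linspace(0, 1, 20)
t = np.sin(2 * np.pi * x)

for Phi in (poly_design(x, 3), gaussian_design(x, np.linspace(0, 1, 4))):
    w, *_ = np.linalg.lstsq(Phi, t, rcond=None)  # MLE weights (least squares)
    print(Phi.shape, w.shape)
```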
#
#
# * *Why did we go through all the math derivations? You could've just provided the MLE and MAP solutions to us, since that is all we need in practice to code this up.*
#     * In practice, you may have unique requirements for a particular problem and will need to decide upon and set up a different data likelihood and prior for that problem. For example, we assumed Gaussian noise for our regression example with a Gaussian zero-mean prior on the weights. You may have an application in which you know the noise is Gamma distributed and have other requirements for the weights that you want to incorporate into the prior. Knowing the process used to derive the estimate for the weights in this case is a helpful guide for deriving your solution. (Also, on a practical note for the course, stepping through the math served as a quick review of various linear algebra, calculus and statistics topics that will be useful throughout the course.)
#
#
# * *What is overfitting and why is it bad?*
#     * The goal of a supervised machine learning algorithm is to learn a mapping from inputs to desired outputs from training data. When you overfit, you memorize your training data such that you can recreate the samples perfectly. This often comes about when you have a model that is more complex than your underlying true model and/or you do not have enough data to support such a complex model. However, you do this at the cost of generalization: when you overfit, you do very well on training data but poorly on test (or unseen) data. So, to have a useful trained machine learning model, you need to avoid overfitting. You can avoid overfitting in a number of ways; the methods we discussed in class are using *enough* data and regularization. Overfitting is related to the "bias-variance trade-off" (discussed in section 3.2 of the reading). There is a trade-off between bias and variance.
#       Complex models have low bias and high variance (which is another way of saying that they fit the training data very well but may oscillate widely between training data points), whereas rigid (not-complex-enough) models have high bias and low variance (they do not oscillate widely but may not fit the training data very well either).
#
#
# * *What is the goal of MLE and MAP?*
#     * MLE and MAP are general approaches for estimating parameter values. For example, you may have data from some unknown distribution that you would like to model as well as you can with a Gaussian distribution. You can use MLE or MAP to estimate the Gaussian parameters that fit the data, giving your estimate of the true (but unknown) distribution.
#
#
# * *Why would you use MAP over MLE (or vice versa)?*
#     * As we saw in class, MAP is a way to add other terms that trade off against the data likelihood during optimization. It is a mechanism for incorporating our "prior belief" about the parameters. In our example in class, we used the MAP solution for the weights in regression to help prevent overfitting by imposing the assumption that the weights should be small in magnitude. When you have enough data, the MAP and MLE solutions converge to the same solution. The amount of data you need for this to occur varies based on how strongly you impose the prior (which is done through the variance of the prior distribution).

# # Probabilistic Generative Models
#
# * So far we have focused on regression. Today we will begin to discuss classification.
# * Suppose we have training data from two classes, $C_1$ and $C_2$, and we would like to train a classifier that labels incoming test points as belonging to class 1 or class 2.
# * There are *many* classifiers in the machine learning literature. We will cover a few in this class. Today we will focus on probabilistic generative approaches for classification.
# * A *generative* approach for classification is one in which we estimate the parameters of the distributions that generated the data for each class. Then, when we have a test point, we can compute the posterior probability of that point belonging to each class and assign the point to the class with the highest posterior probability.

# +
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import multivariate_normal
# %matplotlib inline

mean1 = [-1.5, -1]
mean2 = [1, 1]
cov1 = [[1,0], [0,2]]
cov2 = [[2,.1],[.1,.2]]
N1 = 250
N2 = 100

def generateData(mean1, mean2, cov1, cov2, N1=100, N2=100):
    # We are generating data from two Gaussians to represent two classes.
    # In practice, we would not do this - we would just have data from the problem we are trying to solve.
    class1X = np.random.multivariate_normal(mean1, cov1, N1)
    class2X = np.random.multivariate_normal(mean2, cov2, N2)

    fig = plt.figure()
    ax = fig.add_subplot(1, 1, 1)
    ax.scatter(class1X[:,0], class1X[:,1], c='r')
    ax.scatter(class2X[:,0], class2X[:,1])
    plt.show()
    return class1X, class2X

class1X, class2X = generateData(mean1, mean2, cov1, cov2, N1, N2)
# -

# In the data we generated above, we have a "red" class and a "blue" class. When we are given a test sample, we will want to assign it the label of either red or blue.
#
# We can compute the posterior probability for class $C_1$ as follows:
#
# \begin{eqnarray}
# p(C_1 | x) &=& \frac{p(x|C_1)p(C_1)}{p(x)}\\
# &=& \frac{p(x|C_1)p(C_1)}{p(x|C_1)p(C_1) + p(x|C_2)p(C_2)}\\
# \end{eqnarray}
#
# We can similarly compute the posterior probability for class $C_2$:
#
# \begin{eqnarray}
# p(C_2 | x) &=& \frac{p(x|C_2)p(C_2)}{p(x|C_1)p(C_1) + p(x|C_2)p(C_2)}\\
# \end{eqnarray}
#
# Note that $p(C_1|x) + p(C_2|x) = 1$.
#
# So, to train the classifier, we need to determine the parametric forms and estimate the parameters of $p(x|C_1)$, $p(x|C_2)$, $p(C_1)$ and $p(C_2)$.
#
# For example, we can assume that the data from both $C_1$ and $C_2$ are distributed according to Gaussian distributions. In this case,
# \begin{eqnarray}
# p(\mathbf{x}|C_k) = \frac{1}{(2\pi)^{D/2}}\frac{1}{|\Sigma_k|^{1/2}}\exp\left\{ - \frac{1}{2} (\mathbf{x}-\mu_k)^T\Sigma_k^{-1}(\mathbf{x}-\mu_k)\right\}
# \end{eqnarray}
#
# Given the assumption of the Gaussian form, how would you estimate the parameters of $p(x|C_1)$ and $p(x|C_2)$? *You can use the maximum likelihood estimates for the mean and covariance!*
#
# The MLE estimate for the mean of class $C_k$ is:
# \begin{eqnarray}
# \mu_{k,MLE} = \frac{1}{N_k} \sum_{n \in C_k} \mathbf{x}_n
# \end{eqnarray}
# where $N_k$ is the number of training data points that belong to class $C_k$.
#
# The MLE estimate for the covariance of class $C_k$ is:
# \begin{eqnarray}
# \Sigma_k = \frac{1}{N_k} \sum_{n \in C_k} (\mathbf{x}_n - \mu_{k,MLE})(\mathbf{x}_n - \mu_{k,MLE})^T
# \end{eqnarray}
#
# We can determine the values of $p(C_1)$ and $p(C_2)$ from the number of data points in each class:
# \begin{eqnarray}
# p(C_k) = \frac{N_k}{N}
# \end{eqnarray}
# where $N$ is the total number of data points.
#
#

# +
# Estimate the mean and covariance for each class from the training data
mu1 = np.mean(class1X, axis=0)
print(mu1)
cov1 = np.cov(class1X.T)
print(cov1)

mu2 = np.mean(class2X, axis=0)
print(mu2)
cov2 = np.cov(class2X.T)
print(cov2)

# Estimate the prior for each class
pC1 = class1X.shape[0]/(class1X.shape[0] + class2X.shape[0])
print(pC1)
pC2 = class2X.shape[0]/(class1X.shape[0] + class2X.shape[0])
print(pC2)

# +
# We now have all parameters needed and can compute values for test samples
from scipy.stats import multivariate_normal

x = np.linspace(-5, 4, 100)
y = np.linspace(-6, 6, 100)
xm, ym = np.meshgrid(x, y)
X = np.dstack([xm, ym])

# look at the pdf for class 1
y1 = multivariate_normal.pdf(X, mean=mu1, cov=cov1);
plt.imshow(y1)
# -

# look at the pdf for class 2
y2 = multivariate_normal.pdf(X, mean=mu2, cov=cov2);
plt.imshow(y2)

# Look at the posterior for class 1
pos1 = (y1*pC1)/(y1*pC1 + y2*pC2);
plt.imshow(pos1)

# Look at the posterior for class 2
pos2 = (y2*pC2)/(y1*pC1 + y2*pC2);
plt.imshow(pos2)

# Look at the decision boundary
plt.imshow(pos1 > pos2)

# *How did we come up with using the MLE solution for the mean and variance? How did we determine how to compute $p(C_1)$ and $p(C_2)$?*
#
# * We can define a likelihood for this problem and maximize it!
#
# \begin{eqnarray}
# p(\mathbf{t}, \mathbf{X}|\pi, \mu_1, \mu_2, \Sigma_1, \Sigma_2) = \prod_{n=1}^N \left[\pi N(x_n|\mu_1, \Sigma_1)\right]^{t_n}\left[(1-\pi)N(x_n|\mu_2, \Sigma_2) \right]^{1-t_n}
# \end{eqnarray}
#
# * *How would we maximize this?* As usual, we would use our "trick" and take the log of the likelihood function. Then, we would take the derivative with respect to each parameter we are interested in, set the derivative to zero, and solve for the parameter of interest.

# ## Reading Assignment: Read Section 4.2 and Section 2.5.2
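# As a worked instance of the maximization described above, consider the class prior $\pi$. Taking the log of the likelihood, the terms involving $\pi$ are $\sum_n \left[ t_n \ln \pi + (1-t_n)\ln(1-\pi) \right]$. Differentiating with respect to $\pi$ and setting the result to zero:
# \begin{eqnarray}
# \frac{\partial}{\partial \pi} \ln p(\mathbf{t}, \mathbf{X}|\pi, \mu_1, \mu_2, \Sigma_1, \Sigma_2) &=& \sum_{n=1}^N \left[ \frac{t_n}{\pi} - \frac{1-t_n}{1-\pi} \right] = 0 \\
# \Rightarrow \pi &=& \frac{1}{N}\sum_{n=1}^N t_n = \frac{N_1}{N}
# \end{eqnarray}
# which is exactly the $p(C_1) = N_1/N$ estimate computed in the code above.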
Lecture06_ProbGenModels/Lecture 06 - Probabilistic Generative Models.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Factiva Sample Notebooks - Intro
#
# To help speed up solution development when connecting to Factiva Analytics services, this set of notebooks covers the main concepts and implements quick samples of most available operations. These notebooks rely on Factiva RDL (Rapid Development Library), a set of packages that makes interacting with the Factiva APIs easier and more intuitive.
#
# ## Contents
#
# All notebooks in this repository use RDL packages. For more details about all available operations, please refer to the official documentation.
#
# - **[factiva-news](https://factiva-news-python.readthedocs.io/)** - Package reference
#
# These packages enable multiple operations offered under the [Dow Jones Developer Platform](https://developer.dowjones.com). The following sample notebooks will help you get started.
# # * Setup # * [Intro](0.0_intro.ipynb) (This file) # * [Installation](0.1_installation.ipynb) # * [Configuration](0.2_configuration.ipynb) # * Quickstart # * [User Statistics](1.1_user_statistics.ipynb) # * [Taxonomies (DJID)](1.2_taxonomies_djid.ipynb) # * [Company Identifiers](1.3_company_identifiers.ipynb) # * [Snapshot Explain](1.4_snapshot_explain.ipynb) # * [Snapshot Analytics](1.5_snapshot_analytics.ipynb) # * [Snapshot Extraction](1.6_snapshot_extraction.ipynb) # * [Snapshot Update](1.7_snapshot_update.ipynb) # * [Snapshot Download](1.8_snapshot_download.ipynb) # * [Working with Snapshot Files](1.8_snapshot_files.ipynb) # * Stream Management # * Advanced # * Complex and Large Queries # * Consume Stream Content # # ## Contributions # # If you see an error or would like to contribute with code snippets or a good practice, feel free to create a fork and open a pull request. # # As an alternative, leave your contribution in the issues section. # # Thanks!
0.0_intro.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Process differential expression datasets of oligodendrocyte differentiation # + import pandas import numpy # %matplotlib inline # - columns = ['experiment', 'entrez_gene_id', 'gene_symbol', 'L2FC', 'p_value'] # ## Process Dugas dataset # # Differentially expressed genes in rat oligodendrocyte differentiation from [Dugas et al 2006](https://doi.org/10.1523/jneurosci.2572-06.2006), which I previously mapped to human. url = 'https://gist.githubusercontent.com/dhimmel/45bcff9500cd99f85200/raw/fa13c2c96c59a53b5afe9ed02f8deef72813555d/OPC-differentiation-DEGs.tsv' dugas_df = pandas.read_table(url) dugas_df['L2FC'] = numpy.sign(dugas_df.fold_change) * numpy.log2(dugas_df.fold_change.abs()) dugas_df = dugas_df.rename(columns = {'hgnc_symbol_manual' : 'gene_symbol'}) dugas_df['experiment'] = 'in_vitro_rat_OPC_diff' dugas_df = dugas_df.drop_duplicates('gene_symbol') dugas_df = dugas_df[[x for x in columns if x in dugas_df.columns]] dugas_df = dugas_df.dropna() dugas_df.head(2) len(dugas_df) dugas_df.L2FC.hist(); # ## Process Antel dataset # # Here I assume that `logFC` refers to log2 fold change, which may not be correct antel_df = pandas.read_excel('download/Jack_Antel (A+_A-)_forDaniel.xlsx', skiprows=1) antel_df = antel_df.rename(columns={'Entrez': 'entrez_gene_id', 'Symbol': 'gene_symbol', 'logFC': 'L2FC', 'adj.P.Val': 'p_value'}) antel_df = antel_df.sort_values('p_value').drop_duplicates('entrez_gene_id') antel_df['experiment'] = 'human_OPC_diff' antel_df = antel_df[columns] antel_df = antel_df.dropna() antel_df.head(2) len(antel_df) antel_df.L2FC.hist(); # ## Process iPS dataset ips_df = pandas.read_excel('download/O4 Oligos vs NSCs(Log_FC)_for_Daniel.xlsx') ips_df = ips_df.query("Group == 'Coding'") ips_df = ips_df.rename(columns={'Gene Symbol': 
'gene_symbol', 'Log_FC (O4H vs. NSCs)': 'L2FC', 'FDR p-value (O4H vs. NSCs)': 'p_value'}) ips_df['experiment'] = 'iPS_OPC_diff' ips_df = ips_df[[x for x in columns if x in ips_df.columns]] ips_df = ips_df.sort_values('p_value').drop_duplicates('gene_symbol') ips_df = ips_df.dropna() ips_df = ips_df.query("p_value <= 0.05") ips_df = ips_df[ips_df.L2FC.abs() >= 4] ips_df.head(2) ips_df.L2FC.hist(); # ## Combine datasets opc_diff_df = pandas.concat([dugas_df, ips_df, antel_df]) opc_diff_df.experiment.value_counts() opc_diff_df.to_csv('data/OPC-differentiation-diffex-genes.tsv', sep='\t', index=False, float_format='%.3g')
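# The Dugas processing above encodes down-regulation as negative fold changes,
# so the transform `sign(fc) * log2(|fc|)` maps a 2-fold increase to +1 and a
# 2-fold decrease (stored as -2) to -1. A quick check with made-up values:

```python
import numpy as np

fold_change = np.array([2.0, -2.0, 8.0, -4.0])  # illustrative values only
l2fc = np.sign(fold_change) * np.log2(np.abs(fold_change))
print(l2fc)  # [ 1. -1.  3. -2.]
```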
diffex.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.3 64-bit (''lpthw'': venv)' # language: python # name: python3 # ---

a = 'Password does not meet the requirements: shorter than 8 characters, no uppercase letter, no digit, no punctuation,'

a

import string

string.ascii_lowercase

string.ascii_uppercase

0b01001

bin(0b01000 & 0b00001)

bin(0b01000 | 0b00001)

'ttgg'

def generate_password():
    pass

# Note: this prints the function object itself; call generate_password() to run it
print(generate_password)

type(None)

test = None

print(test)

import random

random.seed(3)

random.randint(1, 200)

''.join(random.sample('fhjiweoafyhiuewafyhru', k=9))

f"sds{a}deferg"

a = 'sdefef'

string.printable

string.printable[:-6]

a = (1, 2, 3, 4, 5)

# random.shuffle() works in place and needs a mutable sequence, so shuffle a list copy
a_shuffled = list(a)
random.shuffle(a_shuffled)

random.sample(a, k=len(a))

''.join(random.sample('fdfgrgrfrfg', len('fdfgrgrfrfg')))

random.choice('1234567890')
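# The `generate_password()` stub above is left as `pass`, but the pieces explored in these cells (`string` constants, `random.sample`, `random.choice`, `random.shuffle`) are enough to sketch one possible implementation. The rule set (length at least 8, at least one uppercase letter, digit and punctuation mark) is my reading of the requirement string at the top of the notebook, not code from it:

```python
import random
import string

def generate_password(length=12):
    """Sketch: build a password guaranteed to contain at least one
    lowercase letter, uppercase letter, digit and punctuation mark."""
    if length < 8:
        raise ValueError("password must be at least 8 characters long")
    # one guaranteed character from each required class
    chars = [
        random.choice(string.ascii_lowercase),
        random.choice(string.ascii_uppercase),
        random.choice(string.digits),
        random.choice(string.punctuation),
    ]
    # fill the rest from the printable pool minus whitespace
    pool = string.ascii_letters + string.digits + string.punctuation
    chars += [random.choice(pool) for _ in range(length - len(chars))]
    # shuffle so the guaranteed characters are not always at the front
    random.shuffle(chars)
    return ''.join(chars)

print(generate_password())
```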
2021Practical_Python_Programming/pass_gen.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # ---

# <h1 align="center">Activity 1: Grouping data with clustering algorithms in Python</h1>

# <div style="border: 2px solid #1c75c8; background-color: #c5ddf6;">
# <h2>Preamble</h2>
# <p>This activity is inspired by exercises available in the following resources:</p>
# <ul>
# <li>Guides and Python code by <a href="http://brandonrose.org/"><NAME></li>
# <li><a href="https://www.datacamp.com/courses/unsupervised-learning-in-python">DataCamp</a> course and code available in the GitHub account of <a href="https://github.com/benjaminwilson/python-clustering-exercises"><NAME></a></li>
# </ul>
# <p>The activity requires Python 3.x and <a href="http://jupyter.org/install">Jupyter Notebook</a>. The code provided was tested with Python 3.6.1. To find out which version of Python you are using, run the following cell (this information matters when you need to install new packages).</p>
# </div>

# !python3 -V

# <div style="border: 2px solid #D24747; background-color:#F8B4B4">
# <h2>Objectives of the activity</h2>
# <p>The <b>general objective</b> of this activity is to learn to explore the hidden structure of a dataset by implementing a classic data-clustering methodology, using standard clustering algorithms (K-means, Ward clustering, DBSCAN) on structured and unstructured data and describing their main characteristics.</p>
#
# <p>A <i>secondary objective</i> is to program with some Python libraries for analyzing and visualizing data (<a href="https://pandas.pydata.org/">Pandas</a>, Sci-Kit learn, Matplotlib, etc.)</p>
# </div>

# <h2>0. Before starting: a few words about Python tools for Data Science...</h2>
#
# <img src="python-packages.png" alt="python-packages"></img>
#
# <p>Each Python toolkit has its own goals:</p>
# <ul>
# <li><b>Numpy</b> adds support in Python for large arrays and matrices, along with mathematical functions to manipulate them.</li>
# <li><b>SciPy</b> is a collection of mathematical algorithms and functions built on NumPy. It adds high-level functions and classes to ease data manipulation and visualization.</li>
# <li><b>Pandas</b> offers data structures and operations to manipulate and analyze numerical data matrices and time series.</li>
# <li><b>Scikit-learn</b> is a Python library for Machine Learning; it contains an implementation of the main standard algorithms for supervised and unsupervised learning.</li>
# </ul>
#
# <p>In the current version of Scikit-learn, you can find in particular the following clustering algorithms:</p>
# <img src="clustering-algorithm.png" alt="clustering-algorithm."></img>
#
# <h2>1. Exercise 1: discovering K-means on two-dimensional structured data</h2>
# <p>The first dataset we want to explore is a CSV file containing a set of 300 observations (or instances) described by 2 numerical features.
# <br>Example: <i>1.70993371252,0.698852527208</i></p>
# <ul><li>The first step is to load the data into a <i>DataFrame</i> object. A DataFrame is one of the data structures provided by Pandas to represent data; it is a two-dimensional matrix (see <a href="https://pandas.pydata.org/pandas-docs/stable/dsintro.html">more details</a>) where each row is an observation and each column a feature of the data.</li></ul>

import pandas as pd
dataframe = pd.read_csv('datasets/dataset1.csv')
dataframe

# <ul><li>To get a first understanding of our data, we want to visualize it in a <i><a href="https://en.wikipedia.org/wiki/Scatter_plot">Scatter plot</a></i>, using the Matplotlib library:</li></ul>

# +
import matplotlib.pyplot as plt

# Create an array 'coordinates_x' holding the values of column 0 of our dataframe
coordinates_x = dataframe.values[:,0]
# Same with the values of column 1 of the dataframe
coordinates_y = dataframe.values[:,1]

# Create and show the scatter plot, passing the coordinates as parameters of plt.scatter()
plt.scatter(coordinates_x, coordinates_y)
plt.show()
# -

# <p>As you can see, our dataset has a fairly simple and explicit structure: 3 groups of data (or <i>clusters</i>) appear. However, this case is particularly easy, since the data have only 2 dimensions and the clusters are well separated.
# The K-means algorithm (or Lloyd's algorithm) is a clustering method that aims to partition a set of n observations into k groups, where each observation belongs to the group whose mean value is closest. The problem is computationally hard (NP-hard); however, there are efficient heuristics that are commonly used and converge quickly to a local optimum (see <a href="https://en.wikipedia.org/wiki/K-means_clustering">more details</a>).</p>
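# The Lloyd iteration just described (assign each point to its nearest centroid, recompute each centroid as the mean of its points, repeat) can be sketched in plain NumPy. This is a toy illustration, not scikit-learn's implementation: it uses a naive deterministic initialization, whereas real K-means implementations restart from several random seeds (the `n_init` parameter discussed below).

```python
import numpy as np

def lloyd_kmeans(points, k, n_iter=20):
    """Toy Lloyd's algorithm: alternate assignment and centroid-update steps."""
    points = np.asarray(points, dtype=float)
    # naive init: k points evenly spaced through the dataset (real K-means uses random restarts)
    centroids = points[np.linspace(0, len(points) - 1, k).astype(int)].copy()
    for _ in range(n_iter):
        # assignment step: index of the nearest centroid for every point
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # update step: each centroid becomes the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids, labels

# two well-separated synthetic blobs
pts = np.vstack([np.zeros((20, 2)), np.ones((20, 2)) * 10])
centroids, labels = lloyd_kmeans(pts, k=2)
print(centroids)
```

With clearly separated blobs the assignment stabilizes after the first iteration, which is the convergence criterion mentioned above.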
# <ul><li>The SciKit-learn Python library offers an implementation of this algorithm, which can be used through the following API:</li></ul>

from sklearn.cluster import KMeans
# Declare a clustering model, specifying a priori the number of clusters we want to find.
# In this case we arbitrarily chose n_clusters=5.
modelKmeans = KMeans(n_clusters=5)
# Train the clustering model with the data from our dataframe
modelKmeans.fit(dataframe.values)

# <div style="border: 1px solid #000000; padding: 5px;">
# <b>Questions:</b>
# <ol>
# <li>What are the steps of Lloyd's algorithm?</li>
# <li>Why is it necessary to initialize the algorithm several times? What is the n_init parameter for?</li>
# <li>How do we choose the number of initializations and iterations? (n_init and max_iter)</li>
# </ol>
# <p><b>Answers:</b></p>
# <p><b>What are the steps of Lloyd's algorithm?</b><br>
# A: Lloyd's algorithm looks for evenly spaced sets of points and partitions of these subsets, repeatedly finding the centroid of each set in each partition and then re-partitioning the input according to which of these centroids is closest.<br>
# The steps:<br>
# 1. Random initial placement of k groups of points (k centroids) in the input domain.<br>
# 2. A new partition is built by associating each element with the nearest centroid.<br>
# 3. The centroids are recomputed.<br>
# Iterate until the centroids stabilize or converge.</p>
#
# <p><b>Why is it necessary to initialize the algorithm several times? What is the n_init parameter for?</b><br>
# A: Because, although it is a fast algorithm, it has the weakness of getting stuck in local minima, so it is useful to restart it several times (and keep the best result).<br>
# From sklearn: "n_init is the number of times the algorithm will be run with different centroid seeds. The final results will be the best output of n_init consecutive runs, in terms of inertia".</p>
#
# <p><b>How do we choose the number of initializations and iterations? (n_init and max_iter)</b><br>
# A: The number of initializations and iterations can be determined experimentally and depends on the nature of the input data.</p>
#
# </div>
#
# <ul>
# <li>Now we want to visualize how the algorithm grouped the data into 5 groups:
# </ul>

# +
# Create a data array where each value is the K-Means model's answer to the question:
# To which cluster does the current row of the dataframe belong?
labels = modelKmeans.predict(dataframe.values)
print(labels)
# Create a scatter plot where each point has a color associated with a group
plt.scatter(dataframe.values[:,0], dataframe.values[:,1], c=labels)
plt.show()
# -

# <ul><li>The same model can be used to classify new data. NB: however, if the application goal is to classify data into given categories, it is recommended to follow a supervised learning methodology.</li></ul>

# +
# Load a dataset with new data
dataframe2 = pd.read_csv('datasets/dataset2.csv')
# Use the previous K-Means model to classify the new data
labels2 = modelKmeans.predict(dataframe2.values)
# Visualize the prediction result in a scatter plot
plt.scatter(dataframe2.values[:,0], dataframe2.values[:,1], c=labels2)
plt.show()
# -

# <div style="border: 1px solid #000000; padding: 5px;">
# <b>Questions:</b>
# <ol>
# <li>How did the Lloyd/K-means algorithm predict the class of the new data?</li>
# <li>How could the concept of <i>'centroid'</i> be defined?</li>
# <li>What are the limits of the method K-means uses to compute the <i>'centroids'</i>?</li>
# </ol>
#
# <p><b>Answers:</b></p>
# <p><b>How did the Lloyd/K-means algorithm predict the class of the new data?</b><br>
# A: Using the previously computed centroids, and the distances to those centroids.</p>
#
# <p><b>How could the concept of <i>'centroid'</i> be defined?</b><br>
# A: The centroid can be understood as the midpoint or center of gravity of each set: the "mean" of the data assigned to it.</p>
#
# <p><b>What are the limits of the method K-means uses to compute the <i>'centroids'</i>?</b><br>
# A: The centroids must be sufficiently separated from each other, avoiding situations where all data points lie at similar distances from all centroids.</p>
#
# </div>
#
# <ul><li>Let's visualize the <i>centroids</i> of each cluster:</li></ul>

# +
# The k-means model API has a method to obtain an array of data corresponding to the centroids
centroids = modelKmeans.cluster_centers_
# Draw the scatter plot of the initial dataframe ...
plt.scatter(dataframe.values[:,0], dataframe.values[:,1], c=labels)
# ...and add the centroids to the same plot
plt.scatter(centroids[:,0], centroids[:,1], marker='o', s=200)
plt.show()
# -

# <ul><li>The distance to the centroid makes it possible to classify the new data:</li></ul>

# New data from dataframe2
plt.scatter(dataframe2.values[:,0], dataframe2.values[:,1], c=labels2)
# Same centroids
plt.scatter(centroids[:,0], centroids[:,1], marker='o', s=200)
plt.show()

# <div style="border: 1px solid #000000; padding: 5px;">
# <b>Questions:</b>
# <ol>
# <li>Is there a number of clusters better than the others for finding the hidden structure of the data?</li>
# <li>How can we determine the best number of clusters?</li>
# </ol>
#
# <p><b>Answers:</b></p>
# <p><b>Is there a number of clusters better than the others for finding the hidden structure of the data?</b><br>
# A: A priori, no. It depends on the nature of the data and, above all, on the problem being solved. In any case, an optimum is chosen (see question 2, below).</p>
#
# <p><b>How can we determine the best number of clusters?</b><br>
# A: The optimal number of clusters is somewhat subjective and depends on the method used to measure similarities and on the parameters used to generate the partitions.<br>
# There are direct methods (optimizing a criterion) such as elbow and silhouette, and statistical-testing methods (comparing evidence against a null hypothesis) such as the gap statistic.</p>
#
# </div>

# <p>There are several statistical methods to determine the best number of clusters, such as the <i>Elbow</i>, <i>Average Silhouette</i> and <i>Gap Statistics</i> methods (see <a href="http://www.sthda.com/english/wiki/print.php?id=239#three-popular-methods-for-determining-the-optimal-number-of-clusters">details</a>). The SciKit-Learn API also exposes a quantity called <i>inertia</i> that helps estimate the best number k:</p>

# +
from sklearn.cluster import KMeans

num_k = range(1, 6)
inertias = []

for k in num_k:
    # Create a KMeans instance with k clusters: model
    model = KMeans(n_clusters=k)
    # Fit model to samples
    model.fit(dataframe)
    # Append the inertia to the list of inertias
    inertias.append(model.inertia_)

import matplotlib.pyplot as plt

# Plot ks vs inertias
plt.plot(num_k, inertias, '-o')
plt.xlabel('number of clusters, k')
plt.ylabel('inertia')
plt.xticks(num_k)
plt.show()
# -

# <div style="border: 1px solid #000000; padding: 5px;">
# <b>Questions:</b>
# <ol>
# <li>To which method for finding the best number of clusters does the Sci-Kit <i>inertia</i> quantity correspond?</li>
# <li>What are the main <b>advantages</b> of the K-means algorithm?</li>
# <li>What are the main <b>limits</b> of the K-means algorithm?</li>
# </ol>
#
# <p><b>Answers:</b></p>
# <p><b>To which method for finding the best number of clusters does the Sci-Kit <i>inertia</i> quantity correspond?</b><br>
# A: It corresponds to the Elbow method, since it selects the configuration that minimizes the inertia (the sum of squared distances to the nearest centroid).</p>
#
# <p><b>What are the main <b>advantages</b> of the K-means algorithm?</b><br>
# A: - It is a "fast" algorithm (linear in the number of data points, i.e. O(n)).<br>
# - It is usable on large volumes of data.</p>
#
# <p><b>What are the main <b>limits</b> of the K-means algorithm?</b><br>
# A: - It works well when the clusters are hyperspherical (or circular in 2 dimensions). If the "natural" clusters of the dataset are not spherical, K-means may not be a good choice.<br>
# - It starts with a random selection of centroids, so it can produce different results in different runs. This means results may be neither repeatable nor consistent across runs.</p>
# </div>

# <h2>2. Exercise 2: Discovering hierarchical clustering algorithms on multi-dimensional structured data</h2>
# <div>
# <div style="float:left;width:45%;" >
# <p>In this second exercise, we want to explore another family of clustering algorithms, based on the idea that in certain cases the data may have hidden <b>hierarchical relations</b>. Ward's algorithm belongs to this group of algorithms.</p>
#
# <p>Suppose we work for a genetic-engineering company that wants to understand the evolution of grain seed species. We have at our disposal the dataset 'semillas-trigo.csv'.</p>
# </div>
#
# <div style="float:right;width:45%;">
# <img src="images/trigo.jpeg" alt="trigo">
# </div>
# <div style="clear:both; font-size:1px;"></div>
# </div>
#
# <ul>
# <li>Load the data into a DataFrame:</li>
# </ul>

# +
import pandas as pd

seeds_df = pd.read_csv('datasets/semillas-trigo.csv')

# Remove the 'grain_variety' column from the dataset.
# We will use this information only as a reference at the end
varieties = list(seeds_df.pop('grain_variety'))

# Extract the data as a NumPy array
samples = seeds_df.values

# Show the DataFrame
seeds_df
# -

# <p>In SciPy, the <i>linkage()</i> method performs hierarchical (agglomerative) clustering. See more details: <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.linkage.html#scipy.cluster.hierarchy.linkage">linkage()</a></p>
#
# <p>Hierarchical clustering relies on computing a distance between clusters. The simplest methods compute a distance between 2 reference points of each cluster: Nearest Point Algorithm ('single' in SciPy), Farthest Point Algorithm (a.k.a. Voor Hees Algorithm, 'complete' in SciPy), UPGMA ('average' in SciPy), centroids. The <b>Ward method</b> differs from the others by using a recursive algorithm to find a grouping that minimizes the variance of the distances between clusters.</p>
#
# <ul>
# <li>Try hierarchical clustering with the Ward method and visualize the result with a dendrogram:</li>
# </ul>

# +
from scipy.cluster.hierarchy import linkage, dendrogram
import matplotlib.pyplot as plt

mergings = linkage(samples, method='ward')

plt.figure(figsize=(20,10))
dendrogram(mergings,
           labels=varieties,
           leaf_rotation=90,
           leaf_font_size=6,
)
plt.show()
# -

# <ul>
# <li>Try hierarchical clustering with the 'complete' method and visualize the result with a dendrogram:</li>
# </ul>

# +
mergings2 = linkage(samples, method='complete')

plt.figure(figsize=(20,10))
dendrogram(mergings2,
           labels=varieties,
           leaf_rotation=90,
           leaf_font_size=6,
)
plt.show()
# -

# <h2>3. Exercise 3: Distance-based clustering vs. Density-based clustering</h2>
# <ul>
# <li>In this exercise we want to explore the data of dataset3.csv, and we have chosen to use the K-Means algorithm.
# <li>Load the data:</li>
# </ul>

# Load the data:
import pandas as pd
dataframe3 = pd.read_csv('datasets/dataset3.csv')

# <ul>
# <li>Find the best number of clusters:
# </ul>

# +
from sklearn.cluster import KMeans

# Try k between 1 and 10
num_k = range(1, 10)
inertias = []

for k in num_k:
    model = KMeans(n_clusters=k)
    model.fit(dataframe3)
    inertias.append(model.inertia_)

import matplotlib.pyplot as plt

# Plot ks vs inertias
plt.plot(num_k, inertias, '-o')
plt.xlabel('number of clusters, k')
plt.ylabel('inertia')
plt.xticks(num_k)
plt.show()
# -

# <ul>
# <li>The best number K seems to be 5! Let's cluster with k=5 and visualize the result!
# </ul>

# +
from sklearn.cluster import KMeans

modelKmeans = KMeans(n_clusters=5)
modelKmeans.fit(dataframe3.values)
labels = modelKmeans.predict(dataframe3.values)
plt.scatter(dataframe3.values[:,0], dataframe3.values[:,1], c=labels)
plt.show()
# -

# <ul>
# <li>What is your opinion of this analysis?</li>
# <li>Let's try the DBSCAN algorithm:
# </ul>
#

def set_colors(labels, colors='rgbykcm'):
    colored_labels = []
    for label in labels:
        colored_labels.append(colors[label])
    return colored_labels

# +
# %matplotlib inline
from collections import Counter
import random
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
from sklearn.cluster import KMeans, DBSCAN

# NB: this first DBSCAN run uses dataset1.csv
dataframe3 = pd.read_csv('datasets/dataset1.csv')

# Fit a DBSCAN estimator
estimator = DBSCAN(eps=0.4, min_samples=5)
#dataframe3.values = df_circ[["x", "y"]]
estimator.fit(dataframe3)
# Clusters are given in the labels_ attribute
labels = estimator.labels_
print(Counter(labels))

colors = set_colors(labels)
plt.scatter(dataframe3.values[:,0], dataframe3.values[:,1], c=colors)
plt.xlabel("x")
plt.ylabel("y")
plt.show()

# +
from collections import Counter
import random
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
from sklearn.cluster import KMeans, DBSCAN

dataframe3 = pd.read_csv('datasets/dataset3.csv')

# Fit a DBSCAN estimator
estimator = DBSCAN(eps=15, min_samples=5)
#dataframe3.values = df_circ[["x", "y"]]
estimator.fit(dataframe3)
# Clusters are given in the labels_ attribute
labels = estimator.labels_
print(Counter(labels))

colors = set_colors(labels)
plt.scatter(dataframe3.values[:,0], dataframe3.values[:,1], c=colors)
plt.xlabel("x")
plt.ylabel("y")
plt.show()
# -

# <div style="border: 1px solid #000000; padding: 5px;">
# <b>Questions:</b>
# <ol>
# <li>What are the epsilon and min_sample parameters in DBSCAN for?</li>
# </ol>
# <p>A: Epsilon is the maximum distance between two samples for them to be considered part of the same "neighborhood".<br>
# min_sample is the number of samples (total weight) in a "neighborhood" for a point to be considered a core point, including the point itself.</p>
# </div>

# <h2>4. Exercise 4: How to group unstructured multi-dimensional data?</h2>
# <p>In this last exercise, we will explore clustering textual data with Ward's algorithm.
# In general, the K-Means, Ward or DBSCAN algorithms are limited for grouping textual data, and it is preferable to use another unsupervised protocol such as Latent Dirichlet Allocation (LDA). Nevertheless, this exercise will be useful, in particular, to start using the NLTK library and review some preprocessing steps on textual data.</p>
#
# <ul>
# <li>We have at our disposal a dataset with 58 political speeches by presidents of the United States. Each one is the first speech the president gives on entering the White House. Load the 'speeches.csv' dataset:</li>
# </ul>

# +
import pandas as pd
import re
import nltk

# Load the speeches dataset
df_speeches = pd.read_csv('datasets/speeches.csv')
# -

# <ul>
# <li>SciKit-Learn provides a default API to transform a dataset of raw texts into a matrix where each text is a vector representation of the TFIDF weight of each word.
# </ul>
#
# <img src="images/tfidf.png" alt="tfidf"></img>

# <ul>
# <li>Transform the text dataset into a matrix of TFIDF weights:</li>
# </ul>

from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(stop_words='english')
tfidf_matrix = vectorizer.fit_transform(df_speeches.values[:,4])

# <ul>
# <li>Compute the distance between each pair of documents:</li>
# </ul>

from sklearn.metrics.pairwise import cosine_similarity
dist = 1 - cosine_similarity(tfidf_matrix)

# <ul>
# <li>Group the documents with Ward's algorithm and the document distances, and visualize the result with a dendrogram:</li>
# </ul>

# +
from scipy.cluster.hierarchy import ward, dendrogram

linkage_matrix = ward(dist)  # define the linkage_matrix using ward clustering on pre-computed distances

fig, ax = plt.subplots(figsize=(15, 20))  # set size
ax = dendrogram(linkage_matrix, orientation="right", labels=df_speeches.values[:,1]);

plt.tick_params(
    axis='x',          # changes apply to the x-axis
    which='both',      # both major and minor ticks are affected
    bottom=False,      # ticks along the bottom edge are off
    top=False,         # ticks along the top edge are off
    labelbottom=False) # labels along the bottom edge are off

#plt.tight_layout() #show plot with tight layout

#uncomment below to save figure
plt.show()
# -

# <ul>
# <li>Do the same, but with <i>Stemming</i> and <i>n-gram</i> preprocessing first:
# </ul>

# +
# load nltk's SnowballStemmer as variable 'stemmer'
from nltk.stem.snowball import SnowballStemmer
stemmer = SnowballStemmer("english")

# load nltk's English stopwords as variable called 'stopwords'
stopwords = nltk.corpus.stopwords.words('english')

# here I define a tokenizer and stemmer which returns the set of stems in the text that it is passed
def tokenize_and_stem(text):
    # first tokenize by sentence, then by word to ensure that punctuation is caught as its own token
    tokens = [word for sent in nltk.sent_tokenize(text) for word in nltk.word_tokenize(sent)]
    filtered_tokens = []
    # filter out any tokens not containing letters (e.g., numeric tokens, raw punctuation)
    for token in tokens:
        if re.search('[a-zA-Z]', token):
            filtered_tokens.append(token)
    stems = [stemmer.stem(t) for t in filtered_tokens]
    return stems

def tokenize_only(text):
    # first tokenize by sentence, then by word to ensure that punctuation is caught as its own token
    tokens = [word.lower() for sent in nltk.sent_tokenize(text) for word in nltk.word_tokenize(sent)]
    filtered_tokens = []
    # filter out any tokens not containing letters (e.g., numeric tokens, raw punctuation)
    for token in tokens:
        if re.search('[a-zA-Z]', token):
            filtered_tokens.append(token)
    return filtered_tokens
# -

# +
from sklearn.feature_extraction.text import TfidfVectorizer

tfidf_vectorizer = TfidfVectorizer(max_df=0.8, max_features=200000,
                                   min_df=0.2, stop_words='english',
                                   use_idf=True, tokenizer=tokenize_and_stem, ngram_range=(1,3))

tfidf_matrix2 = tfidf_vectorizer.fit_transform(df_speeches.values[:,4])

from sklearn.metrics.pairwise import cosine_similarity
dist = 1 - cosine_similarity(tfidf_matrix2)

# +
from scipy.cluster.hierarchy import ward, dendrogram

linkage_matrix = ward(dist)  # define the linkage_matrix using ward clustering on pre-computed distances

fig, ax = plt.subplots(figsize=(15, 20))  # set size
ax = dendrogram(linkage_matrix, orientation="right", labels=df_speeches.values[:,1]);

plt.tick_params(
    axis='x',          # changes apply to the x-axis
    which='both',      # both major and minor ticks are affected
    bottom=False,      # ticks along the bottom edge are off
    top=False,         # ticks along the top edge are off
    labelbottom=False) # labels along the bottom edge are off

#plt.tight_layout() #show plot with tight layout

#uncomment below to save figure
plt.show()
# -
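# The document distance used above is 1 − cosine similarity. On toy term-count vectors, the computation looks like this (a hand-rolled sketch of the formula, not the sklearn call; the three "documents" are made up):

```python
import numpy as np

def cosine_distance(u, v):
    """1 - cosine similarity between two vectors."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# toy term-count vectors for three tiny "documents"
doc1 = [2, 0, 1]
doc2 = [4, 0, 2]   # same direction as doc1 -> distance 0
doc3 = [0, 3, 0]   # orthogonal to doc1 -> distance 1

print(cosine_distance(doc1, doc2))  # ~0.0 (up to floating point)
print(cosine_distance(doc1, doc3))  # ~1.0
```

Because cosine distance depends only on direction, a long speech and a short one with the same word proportions are treated as identical, which is why it is preferred over Euclidean distance for TFIDF vectors.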
actividades_LDA_unidad_2/INFO343-Notebook1-RBoegeholz.ipynb
# --- # jupyter: # jupytext: # formats: ipynb,py:light # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: bgen # language: python # name: bgen # --- #import pandas as pd from IPython.core.interactiveshell import InteractiveShell from functools import partial, partialmethod from toolz import curry import dask.dataframe as dd import dask.array as da import numpy as np from tqdm.auto import tqdm InteractiveShell.ast_node_interactivity = "all" import settings from Field import Field import itertools # + from dataclasses import dataclass, field from dataclasses import replace as dc_replace from typing import Union, Dict, ClassVar from multipledispatch import dispatch from inspect import isclass def pass_func(self, val): pass func_property = partial(property, fset= pass_func ) @dataclass(frozen=True) class FieldSet(): name: str = field(compare=False) field_class: ClassVar = Field #field(repr=False) #what will be shown to the user field_names: list = field(init=False,compare=False) #field(property(lambda self: list(self._fields.keys()), fset=lambda self,x: None)) filter_names: list = field(init=False,compare=False) #underlying data fields: Dict[str,Field] = field(repr=False, default_factory=dict) filters: dict = field(repr=False, default_factory=dict) @func_property def field_names(self): return list(self.fields.keys()) @func_property def filter_names(self): return list(self.filters.keys()) # @property # def fields(self): # return list(self._fields.keys()) # @fields.setter # def fields(self, call): # pass def add_fields(self,dict_or_list,*, overwrite=False, name: str=None, arrays=None, instances=None): field_dict = FieldSet.make_fields_dict(dict_or_list, array_list=arrays, instance_list=instances) overlap_keys = set(field_dict.keys()) & set(self.fields) is_overlap = len(overlap_keys) != 0 if is_overlap and not overwrite: raise ValueError(f"The following field name(s): {overlap_keys} 
is already in the set, remove field(s) or set overwrite=True to update old fields with new ones") #return a new copy of the FieldSet instance with the attribute changed if name is None : return dc_replace(self,fields = {**self.fields, **field_dict}) else: return dc_replace(self,fields = {**self.fields, **field_dict}, name=name) def rename_fields(self, rename_dict: Dict[str,str]): old_dict = self.fields new_dict = {(key if key not in rename_dict else rename_dict[key]): (value if key not in rename_dict else value.rename(rename_dict[key])) for key, value in old_dict.items()} return dc_replace(self, fields = new_dict) @func_property def ddf(self): if not self.fields: raise ValueError("No fields in FieldSet") return dd.concat([field.all_cols_df for field in self.fields.values()], join="inner", axis=1) def get_field_cols(self, request_fields: Union[list,str]) -> np.ndarray: valid_fields = [field in self.fields for field in request_fields] if not all(valid_fields): raise ValueError("invalid field(s) detected") field_obj_list = [self.fields[field] for field in request_fields] list_of_col_lists = [field_obj.all_cols_df.columns for field_obj in field_obj_list] #flatten list all_cols = np.concatenate(list_of_col_lists) return all_cols #def add_filter(self,field_name) def eval_filter(self,expr): pass @staticmethod def modify_dict_item(dict_value, array_list=None, instance_list=None): # ex: {"pheno": dict_value, "arrays": array_list, "instances": instance_list} if isinstance(dict_value, dict): return {**dict_value, **{"arrays": array_list, "instances": instance_list}} # ex: "Monocyte count" if isinstance(dict_value, str): return {"pheno": dict_value, "arrays": array_list, "instances": instance_list} raise ValueError("Must be dict or str") @staticmethod def modify_dict(data_dict, array_list=None, instance_list=None): new_dict = {key: FieldSet.modify_dict_item(value, array_list, instance_list) for key, value in data_dict.items()} return new_dict @staticmethod def 
make_fields_dict(data: Dict[str, Union[str, int, Field]], array_list=None, instance_list=None):
        """
        parameters:
        * data: can be
            - single Field
            - list of Fields
            - dict of {field_name: field_args_dict} to automatically generate Field objects
        returns: a dict of {name: Field_object}
        """
        # dict of name: [Field/FieldID]
        if isinstance(data, dict):
            modified_data = FieldSet.modify_dict(data, array_list=array_list, instance_list=instance_list)
            return Field.make_fields_dict(modified_data)
        # list of fields
        if isinstance(data, list):
            if all(isinstance(ele, FieldSet.field_class) for ele in data):
                return {field.name: field for field in data}
        if isinstance(data, FieldSet.field_class):
            return {data.name: data}
        raise TypeError(f"Can only accept list of {FieldSet.field_class} or Dictionary {{name: Field/FieldID}}")

#PhenoField = partialclass(PhenoField, data_dict = data_dict, pheno_df = pheno_df, coding_file_path_template=coding_file_path_template)
#FieldSet = partialclass(FieldSet,field_class= Field)

test_set = FieldSet("test_set")
test_set
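# `FieldSet` is a frozen dataclass whose mutating-style methods return modified copies via `dataclasses.replace` instead of changing the instance in place. That pattern in isolation, stripped of the `Field` machinery (the `TinySet` class and its field values here are hypothetical, for illustration only):

```python
from dataclasses import dataclass, field, replace
from typing import Dict

@dataclass(frozen=True)
class TinySet:
    name: str
    fields: Dict[str, int] = field(default_factory=dict)

    def add_fields(self, new_fields: Dict[str, int], overwrite=False):
        overlap = set(new_fields) & set(self.fields)
        if overlap and not overwrite:
            raise ValueError(f"already present: {overlap}")
        # frozen dataclass: return a new instance instead of mutating self
        return replace(self, fields={**self.fields, **new_fields})

s1 = TinySet("demo")
s2 = s1.add_fields({"age": 21003})
print(s1.fields, s2.fields)  # → {} {'age': 21003}
```

The original `s1` is untouched, so callers can chain `add_fields`/`rename_fields`-style calls without worrying about shared mutable state.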
FieldSet.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # ---

# # Data mining using pyiron tables
# In this example, the data mining capabilities of pyiron using the `PyironTables` class are demonstrated by computing and contrasting the ground-state properties of fcc-Al using various force fields.

from pyiron import Project
import numpy as np

pr = Project("potential_scan")

# ## Creating a dummy job to get the list of potentials
#
# In order to get the list of available LAMMPS potentials, a dummy job with an Al bulk structure is created

dummy_job = pr.create_job(pr.job_type.Lammps, "dummy_job")
dummy_job.structure = pr.create_ase_bulk("Al")

# Choosing only a few potentials to run (you can play with these values)
num_potentials = 5
potential_list = dummy_job.list_potentials()[:num_potentials]

# ## Creating a Murnaghan job for each potential in their respective subprojects
#
# A separate Murnaghan job (to compute the equilibrium lattice constant and the bulk modulus) is created and run for every potential

for pot in potential_list:
    pot_str = pot.replace("-", "_")
    # open a subproject within a project
    with pr.open(pot_str) as pr_sub:
        # no need for a unique job name if in different subprojects
        job_name = "murn_Al"
        # Use the subproject to create the jobs
        murn = pr_sub.create_job(pr.job_type.Murnaghan, job_name)
        job_ref = pr_sub.create_job(pr.job_type.Lammps, "Al_ref")
        job_ref.structure = pr.create_ase_bulk("Al", cubic=True)
        job_ref.potential = pot
        job_ref.calc_minimize()
        murn.ref_job = job_ref
        # Some potentials may not work with certain LAMMPS compilations.
        # Therefore, we need to have a little exception handling
        try:
            murn.run()
        except RuntimeError:
            pass

# If you inspect the job table, you will find that each Murnaghan job generates various small LAMMPS jobs (see column `hamilton`).
Some of these jobs might have failed with status `aborted`.

pr.job_table()

# ## Analysis using `PyironTables`
#
# The idea now is to go over all finished Murnaghan jobs, extract the equilibrium lattice parameter and bulk modulus, and classify them based on the potential used.

# ### Defining filter functions
#
# Since a project can have thousands if not millions of jobs, it is necessary to "filter" the data and apply the functions (some of which can be computationally expensive) only to this subset. In this example, we need to filter jobs that are finished and are of type `Murnaghan`. This can be done in two ways: using the job table, i.e. the entries in the database, or using the job itself, i.e. the entries in the stored HDF5 file. Below are examples of filter functions acting on the job table and the job, respectively.

# +
# Filtering using the database entries (which are obtained as a pandas DataFrame)
def db_filter_function(job_table):
    # Returns a pandas Series of boolean values (True for entries that have
    # status "finished" and hamilton type "Murnaghan")
    return (job_table.status == "finished") & (job_table.hamilton == "Murnaghan")

# Filtering based on the job
def job_filter_function(job):
    # Returns True if the status of the job is "finished"
    # and "murn" is in its job name
    return (job.status == "finished") and ("murn" in job.job_name)
# -

# Obviously, using the database is faster in this case, but sometimes it might be necessary to filter based on data that are stored in the HDF5 file of the job. The database filter is applied first, followed by the job-based filter.

# ### Defining functions that act on jobs
#
# Now we define a set of functions that will be applied on each job to return a certain value. The filtered jobs will be loaded and these functions will be applied on the loaded jobs. The advantage of such functions is that the jobs do not have to be loaded every time such operations are performed.
The filtered jobs are loaded once, and then they are passed to these functions to construct the table. # + # Getting equilibrium lattice parameter from Murnaghan jobs def get_lattice_parameter(job): return job["output/equilibrium_volume"] ** (1/3) # Getting equilibrium bulk modulus from Murnaghan jobs def get_bm(job): return job["output/equilibrium_bulk_modulus"] # Getting the potential used in each Murnaghan job def get_pot(job): child = job.project.inspect(job["output/id"][0]) return child["input/potential/Name"] # - # ### Creating a pyiron table # # Now that all the functions are defined, the pyiron table called "table" is created in the following way. This works like a job and can be reloaded at any time. # + # %%time # creating a pyiron table table = pr.create_table("table") # assigning a database filter function table.db_filter_function = db_filter_function # Alternatively/additionally, a job based filter function can be applied # (it does the same thing in this case). #table.filter_function = job_filter_function # Adding the functions using the labels you like table.add["a_eq"] = get_lattice_parameter table.add["bulk_modulus"] = get_bm table.add["potential"] = get_pot # Running the table to generate the data table.run(run_again=True) # - # The output can now be obtained as a pandas DataFrame table.get_dataframe() # You can now compare the computed equilibrium lattice constants for each potential to those computed in the NIST database for Al (fcc phase). https://www.ctcms.nist.gov/potentials/system/Al/#Al.
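As a sanity check on `get_lattice_parameter`, the cube root of the equilibrium cell volume gives the cubic lattice constant; for a cubic fcc Al cell the value is about 4.05 Å. The volume below is an illustrative number, not actual pyiron output:

```python
# equilibrium volume of a cubic fcc Al cell (4 atoms), in Angstrom^3 (illustrative value)
equilibrium_volume = 66.43

# same transformation as in get_lattice_parameter above
a_eq = equilibrium_volume ** (1 / 3)
print(f"a_eq = {a_eq:.3f} A")  # close to ~4.05 A
```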
notebooks/data_mining.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Introduction # Implementation of cTAKES BoW method with extracted relative time information (added to the BoW cTAKES orig. pairs (Polarity-CUI)), evaluated against the annotations from: # > Gehrmann, Sebastian, et al. "Comparing deep learning and concept extraction based methods for patient phenotyping from clinical narratives." PloS one 13.2 (2018): e0192360. # ## Import Packages # + # imported packages import multiprocessing import collections import itertools import re import os # xml and xmi from lxml import etree # arrays and dataframes import pandas import numpy # classifier from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.ensemble import GradientBoostingClassifier from sklearn.preprocessing import FunctionTransformer from sklearn.linear_model import LogisticRegression from sklearn.ensemble import RandomForestClassifier from sklearn.multiclass import OneVsRestClassifier from sklearn.naive_bayes import MultinomialNB from sklearn.pipeline import Pipeline from sklearn.svm import SVC # plotting import matplotlib matplotlib.use('Agg') # server try: get_ipython # jupyter notebook # %matplotlib inline except: pass import matplotlib.pyplot as plt # - # import custom modules import context # set search path to one level up from src import evaluation # method for evaluation of classifiers # ## Define variables and parameters # + # variables and parameters # filenames input_directory = '../data/interim/cTAKES_output' input_filename = '../data/raw/annotations.csv' results_filename = '../reports/ctakes_time_bow_tfidf_results.csv' plot_filename_1 = '../reports/ctakes_time_bow_tfidf_boxplot_1.png' plot_filename_2 = '../reports/ctakes_time_bow_tfidf_boxplot_2.png' # number of splits and repeats for cross validation n_splits = 5 
n_repeats = 10 # n_repeats = 1 # for testing # number of workers n_workers=multiprocessing.cpu_count() # n_workers = 1 # for testing # keep the conditions for which results are reported in the publication conditions = [ # 'cohort', 'Obesity', # 'Non.Adherence', # 'Developmental.Delay.Retardation', 'Advanced.Heart.Disease', 'Advanced.Lung.Disease', 'Schizophrenia.and.other.Psychiatric.Disorders', 'Alcohol.Abuse', 'Other.Substance.Abuse', 'Chronic.Pain.Fibromyalgia', 'Chronic.Neurological.Dystrophies', 'Advanced.Cancer', 'Depression', # 'Dementia', # 'Unsure', ] # - # ## Load and prepare data # ### Load and parse xmi data # %load_ext ipycache # + # %%cache --read 2.4-JS-ctakes-time-bow-tfidf_cache.pkl X def ctakes_xmi_to_df(xmi_path): records = [] tree = etree.parse(xmi_path) root = tree.getroot() mentions = [] for mention in root.iterfind('*[@{http://www.omg.org/XMI}id][@typeID][@polarity][@ontologyConceptArr][@event]'): for concept in mention.attrib['ontologyConceptArr'].split(" "): d = dict(mention.attrib) d['ontologyConceptArr'] = concept mentions.append(d) mentions_df = pandas.DataFrame(mentions) concepts = [] for concept in root.iterfind('*[@{http://www.omg.org/XMI}id][@cui][@tui]'): concepts.append(dict(concept.attrib)) concepts_df = pandas.DataFrame(concepts) events = [] for event in root.iterfind('*[@{http://www.omg.org/XMI}id][@properties]'): events.append(dict(event.attrib)) events_df = pandas.DataFrame(events) eventproperties = [] for eventpropertie in root.iterfind('*[@{http://www.omg.org/XMI}id][@docTimeRel]'): eventproperties.append(dict(eventpropertie.attrib)) eventproperties_df = pandas.DataFrame(eventproperties) merged_df = mentions_df\ .merge(right=concepts_df, left_on='ontologyConceptArr', right_on='{http://www.omg.org/XMI}id')\ .merge(right=events_df, left_on='event', right_on='{http://www.omg.org/XMI}id')\ .merge(right=eventproperties_df, left_on='properties', right_on='{http://www.omg.org/XMI}id') # # unique cui and tui per event IDEA: consider 
keeping all # merged_df = merged_df.drop_duplicates(subset=['event', 'cui', 'tui']) # merge the doctimerel with polarity of the *mention and the cui merged_df['doctimerelpolaritycui'] = merged_df['docTimeRel'] + merged_df['polarity_x'] + merged_df['cui'] # merge the polarity of the *mention and the cui merged_df['polaritycui'] = merged_df['polarity_x'] + merged_df['cui'] # return as a string of cui's separated by space return ' '.join(list(merged_df['doctimerelpolaritycui']) + list(merged_df['polaritycui'])) X = [] # key function for sorting the files according to the integer of the filename def key_fn(x): i = x.split(".")[0] if i != "": return int(i) return None for f in sorted(os.listdir(input_directory), key=key_fn): # for each file in the input directory if f.endswith(".xmi"): fpath = os.path.join(input_directory, f) # parse file and append as a dataframe to x_df try: X.append(ctakes_xmi_to_df(fpath)) except Exception as e: print e X.append('NaN') X = numpy.array(X) # - # ### Load annotations and classification data # read and parse csv file data = pandas.read_csv(input_filename) # data = data[0:100] # for testing # X = X[0:100] # for testing data.head() # groups: the subject ids # used in order to ensure that # "patients’ notes stay within the set, so that all discharge notes in the # test set are from patients not previously seen by the model." Gehrmann17. 
groups_df = data.filter(items=['subject.id'])
groups = groups_df.values  # .as_matrix() is deprecated; .values works across pandas versions

# y: the annotated classes
y_df = data.filter(items=conditions)  # filter the conditions
y = y_df.values

print(X.shape, groups.shape, y.shape)

# ## Define classifiers

# dictionary of classifiers (sklearn estimators)
classifiers = collections.OrderedDict()

def tokenizer(text):
    pattern = r'[\s]+'  # match any sequence of whitespace characters
    repl = r' '  # replace with a single space
    temp_text = re.sub(pattern, repl, text)
    return temp_text.lower().split(' ')  # lower-case and split on space

# +
prediction_models = [
    ('logistic_regression', LogisticRegression(random_state=0)),
    ("random_forest", RandomForestClassifier(random_state=0)),
    ("naive_bayes", MultinomialNB()),
    ("svm_linear", SVC(kernel="linear", random_state=0, probability=True)),
    ("gradient_boosting", GradientBoostingClassifier(random_state=0)),
]

# BoW
representation_models = [('ctakes_time_bow_tfidf', TfidfVectorizer(tokenizer=tokenizer))]
# IDEA: Use Tfidf on the normal BoW model as well?

# cross product of representation models and prediction models
# save to classifiers as pipelines of rep. model into pred. model
for rep_model, pred_model in itertools.product(representation_models, prediction_models):
    classifiers.update({
        # add this classifier to the classifiers dictionary
        '{rep_model}_{pred_model}'.format(rep_model=rep_model[0], pred_model=pred_model[0]):  # classifier name
        Pipeline([rep_model, pred_model]),  # concatenate representation model with prediction model in a pipeline
    })
# -

# ## Run and evaluate

results = evaluation.run_evaluation(X=X, y=y, groups=groups, conditions=conditions,
                                    classifiers=classifiers, n_splits=n_splits,
                                    n_repeats=n_repeats, n_workers=n_workers)

# ## Save and plot results

# save results
results_df = pandas.DataFrame(results)
results_df.to_csv(results_filename)
results_df.head()

# +
## load results for plotting
# import pandas
# results = pandas.read_csv('output/results.csv')

# +
# plot and save
axs = results_df.groupby('name').boxplot(column='AUROC', by='condition', rot=90, figsize=(10,10))
for ax in axs:
    ax.set_ylim(0,1)
plt.savefig(plot_filename_1)

# +
# plot and save
axs = results_df.groupby('condition').boxplot(column='AUROC', by='name', rot=90, figsize=(10,10))
for ax in axs:
    ax.set_ylim(0,1)
plt.savefig(plot_filename_2)
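The whitespace tokenizer fed to the TF-IDF vectorizer above can be exercised on its own; the polarity-CUI tokens below are illustrative of the format produced earlier, not taken from real notes:

```python
import re

def tokenizer(text):
    # collapse any run of whitespace to a single space, lower-case, split on space
    pattern = r'[\s]+'
    temp_text = re.sub(pattern, ' ', text)
    return temp_text.lower().split(' ')

print(tokenizer("AFTER+C0027051 BEFORE-C0011849\n OVERLAP+C0020538"))
# -> ['after+c0027051', 'before-c0011849', 'overlap+c0020538']
```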
notebooks/2.4-JS-ctakes-time-bow-tfidf.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # 2A.ml - Boosting, random forest, gradient - the features they like
#
# Advantages and drawbacks of gradient-based methods. Exercise on rotating features before using a random forest.

from jyquickhelper import add_notebook_menu
add_notebook_menu()

# The bible everyone recommends:
# [The Elements of Statistical Learning](http://statweb.stanford.edu/~tibs/ElemStatLearn/), <NAME>, <NAME>, <NAME>

# ## Model or features?
#
# We spend 90% of the time creating new features and the remaining 10% improving the model's parameters:
# [Travailler les features ou le modèle](http://www.xavierdupre.fr/app/ensae_teaching_cs/helpsphinx3/notebooks/ml_features_model.html#mlfeaturesmodelrst).

# ## Boosting
#
# [Boosting](https://en.wikipedia.org/wiki/Boosting_(machine_learning)) is a machine learning technique that consists of over-weighting errors. For an iterative learning algorithm, this means giving more weight, at iteration *n*, to the errors produced at iteration *n-1*.
# The best-known algorithm is [AdaBoost](https://en.wikipedia.org/wiki/AdaBoost).
# [Gradient boosting](https://en.wikipedia.org/wiki/Gradient_boosting) applies this idea to a model and a differentiable loss function.
#
# * [The Boosting Approach to Machine Learning An Overview](https://www.cs.princeton.edu/picasso/mats/schapire02boosting_schapire.pdf)
# * [A Theory of Multiclass Boosting](http://rob.schapire.net/papers/multiboost-journal.pdf)

# ## Weak Learners
#
# A [weak learner](http://stats.stackexchange.com/questions/82049/what-is-meant-by-weak-learner) is a machine learning model with poor performance. Typically, a node of a decision tree is a *weak learner*.
# *Boosting* is a way to build a *strong learner* from an assembly of *weak learners*.

# ## Strengths and weaknesses of machine learning models

# ### Neural networks
#
# Neural networks are trained with gradient-based optimization methods. They do not like:
#
# * *Logarithmic scales:* frequency-like variables (number of clicks on a page, number of occurrences of a word, ...) have heavy tails and a few extreme values; it is advisable to normalize them and switch to a logarithmic scale.
# * *Large gradients:* the gradient can take very large values in a localized neighborhood (a regression close to a step function), and gradient-based optimization will take a long time to converge.
# * *Discrete variables:* gradient computation works much better on continuous variables than on discrete ones, because discreteness limits the number of values the gradient can take.

# ### Random forests, decision trees
#
# *What does not bother them:*
#
# * *Normalization:* as assemblies of threshold-based decisions, random forests and decision trees are insensitive to changes of scale.
#
# *What they do not like:*
#
# * *Oblique decisions:* a threshold applies to a single variable, so a line such as $x + y = 1$ can only be approximated by a step function.
# * *Multi-class:* for an assembly of binary functions, it is easier to have only two choices. This shortcoming is compensated by two strategies, [one versus rest](https://en.wikipedia.org/wiki/Multiclass_classification#One-vs.-rest) or [one versus one](https://en.wikipedia.org/wiki/Multiclass_classification#One-vs.-one).

# ### Exercise 1: random rotations and decision trees
#
# The article [Random Rotation Ensembles](http://www.jmlr.org/papers/volume17/blaser16a/blaser16a.pdf) studies the gain obtained by applying random rotations to the feature set. The gain is significant. Apply this method to a binary classification problem.

# ## Benchmark

# ### XGBoost
#
# [XGBoost](https://github.com/dmlc/xgboost) is a machine learning library known for having won numerous
# [competitions](https://github.com/dmlc/xgboost/blob/master/demo/README.md#machine-learning-challenge-winning-solutions).
# Excerpt from [XGBoost A Scalable Tree Boosting System](https://arxiv.org/pdf/1603.02754.pdf):

from pyquickhelper.helpgen import NbImage
NbImage("xgboost.png")

# Several improvements were implemented to make training fast and able to handle large volumes of data:
#
# * **exact greedy:** the standard algorithm for learning a random forest
# * **approximate global:** each node is a threshold on a variable; the threshold is chosen among all possible values or among quantiles, and these quantiles are fixed for a given tree
# * **approximate local:** the same, but the quantiles are readjusted at each node
# * **out-of-core:** the library compresses variable values column-wise to reduce the memory footprint
# * **sparsity aware:** the library takes missing values into account; they are not treated like ordinary values, and each tree node has a default direction for them
# * **parallel:** some computations are parallelized

# ### LightGBM
#
# [LightGBM](https://github.com/Microsoft/LightGBM) is a competing library to XGBoost developed by Microsoft. It tends to take over because it is faster in most cases.

# ### Exercise 2: benchmark
#
# Compare the random forests of XGBoost, LightGBM, and scikit-learn on different synthetic datasets:
#
# * Binary classification in low dimension.
# * Binary classification in high dimension.
# * Binary classification in high dimension with sparse features.
#
# Do the same for multi-class classification and regression.
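Exercise 1 above can be sketched with scikit-learn alone (XGBoost/LightGBM omitted): draw a random orthogonal matrix from the QR decomposition of a Gaussian matrix, rotate the features, and compare a decision tree with and without the rotation. The dataset and seeds are arbitrary choices for illustration:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# random orthogonal matrix: QR decomposition of a Gaussian matrix
Q, _ = np.linalg.qr(rng.randn(10, 10))

# same tree, trained on the original and on the rotated features
acc_plain = DecisionTreeClassifier(random_state=0).fit(X_train, y_train).score(X_test, y_test)
acc_rot = DecisionTreeClassifier(random_state=0).fit(X_train @ Q, y_train).score(X_test @ Q, y_test)
print(acc_plain, acc_rot)
```

A full rotation ensemble averages many such rotated trees; the single rotation here only shows the mechanics.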
_doc/notebooks/td2a_ml/ml_cc_machine_learning_problems2.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Linear equations

# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from mpl_toolkits.axisartist.axislines import SubplotZero
# %matplotlib inline

plt.rcParams['font.family'] = ['SimHei']  # font kept from the original Chinese notebook
plt.rcParams['axes.unicode_minus'] = False
# -

fig = plt.figure(figsize=(5,5))
ax = SubplotZero(fig, 111)
fig.add_subplot(ax)
for direction in ["xzero", "yzero"]:
    ax.axis[direction].set_axisline_style("-|>")  # adds arrows at the ends of each axis
    ax.axis[direction].set_visible(True)  # adds X and Y axes through the origin
for direction in ["left", "right", "bottom", "top"]:
    ax.axis[direction].set_visible(False)
ax.set_xlim(-1, 4)
ax.set_ylim(-1, 4)
ax.text(0.5, 0.5, r'$y = 2 x + 1$', fontsize=20)
x = np.linspace(-1, 4)
y = 2 * x + 1
ax.plot(x, y)

fig, ax = plt.subplots(figsize=(6, 4))
x = [300,400,400,550,720,850,900,950]
y = [300,350,490,500,600,610,700,660]
ax.set_xlabel('Advertising spend x (10k CNY)')
ax.set_ylabel('Sales y (million CNY)')
ax.scatter(x, y)

fig, ax = plt.subplots(figsize=(6, 4))
x = [300,950]
y = [300,660]
ax.set_xlabel('Advertising spend x (10k CNY)')
ax.set_ylabel('Sales y (million CNY)')
ax.scatter(x, y)
ax.plot(x, y, color='red')

fig, ax = plt.subplots(figsize=(6, 4))
x = [300,400,400,550,720,850,900,950]
y = [300,350,490,500,600,610,700,660]
ax.set_xlabel('Advertising spend x (10k CNY)')
ax.set_ylabel('Sales y (million CNY)')
ax.scatter(x, y)
for i in range(len(x) - 1):
    ax.plot([x[i], x[i+1]], [y[i], y[i+1]])

class Regression:
    def __init__(self, x, y):
        self.n = len(x)
        self.x = np.array(x)
        self.y = np.array(y)

    def getBeta(self):
        # least-squares slope: (n*sum(xy) - sum(x)*sum(y)) / (n*sum(x^2) - sum(x)^2)
        self.beta = (np.sum(self.x * self.y) * self.n - np.sum(self.x) * np.sum(self.y)) / (self.n * np.sum(self.x ** 2) - np.sum(self.x) ** 2)
        return self.beta

    def getAlpha(self):
        # intercept: mean(y) - beta * mean(x); getBeta() must be called first
        self.alpha = np.average(self.y) - self.beta * np.average(self.x)
        return self.alpha

x = [300,400,400,550,720,850,900,950]
y = [300,350,490,500,600,610,700,660]
regression = Regression(x, y)
"b=%f, a=%f" % (regression.getBeta(), regression.getAlpha())

fig, ax = plt.subplots(figsize=(6, 4))
x = np.asarray([300,400,400,550,720,850,900,950], np.float32)
y = [300,350,490,500,600,610,700,660]
ax.set_xlabel('Advertising spend x (10k CNY)')
ax.set_ylabel('Sales y (million CNY)')
ax.scatter(x, y)
yhat = 189.753472 + 0.530961 * x
ax.plot(x, yhat, color='red')

fig, ax = plt.subplots(figsize=(6, 4))
x = np.asarray([300,400,400,550,720,850,900,950], np.float32)
y = [300,350,490,500,600,610,700,660]
ax.set_xlabel('Advertising spend x (10k CNY)')
ax.set_ylabel('Sales y (million CNY)')
ax.scatter(x, y, label="observed")
yhat = 189.753472 + 0.530961 * x
ax.plot(x, yhat, color='red', label="regression")
yerr = y - yhat
uplims = np.empty(len(x))
lolims = np.empty(len(x))
for i in range(len(x)):
    if yerr[i] > 0:
        uplims[i] = False
        lolims[i] = True
    else:
        uplims[i] = True
        lolims[i] = False
        yerr[i] = -yerr[i]
ax.errorbar(x, yhat, yerr=yerr, uplims=uplims, lolims=lolims, label="residuals", fmt='co')
ax.legend()
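The closed-form slope and intercept computed above can be cross-checked against `numpy.polyfit` on the same data; both should reproduce the notebook's `b=0.530961, a=189.753472`:

```python
import numpy as np

x = np.array([300, 400, 400, 550, 720, 850, 900, 950], dtype=float)
y = np.array([300, 350, 490, 500, 600, 610, 700, 660], dtype=float)

n = len(x)
# closed-form least squares, same formula as the Regression class above
beta = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / (n * np.sum(x ** 2) - np.sum(x) ** 2)
alpha = np.mean(y) - beta * np.mean(x)

# numpy's degree-1 polynomial fit should agree
slope, intercept = np.polyfit(x, y, 1)
print(beta, alpha)  # ~0.530961, ~189.753472
```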
regression/linear.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Introduction to the Interstellar Medium # ### <NAME> # ### Figure 5.8: Plot Planck dust continuum vs HI4PI to show gas-to-dust correlation # #### fits files are from skyview and selected to be very coarse to reduce number of points in the plot import numpy as np import matplotlib.pyplot as plt from astropy.io import fits import random # %matplotlib inline def dust_vs_gas(im1, im2, subsample=-1): fig = plt.figure(figsize=(8,5.5)) ax = fig.add_subplot(111) x = im1.flatten() y = im2.flatten() if subsample < 0: plt.plot(x, y, 'k.', alpha=0.1, ms=4, mew=0) else: N = x.size random_select = random.sample(range(N), int(N/subsample)) plt.plot(x[random_select], y[random_select], 'k.', alpha=0.1, ms=4, mew=0) ax.set_xscale("log", nonposx='clip') ax.set_yscale("log", nonposy='clip') ax.set_xlim(3e23,3e26) ax.set_ylim(0.1,1000) ax.set_xlabel(r'$N_{HI}$ [m$^{-2}$]', fontsize=20) ax.set_ylabel(r'$I_{857 GHz}$ [MJy/Sr]', fontsize=20) ax.tick_params(labelsize=16) # plot a linear relation for comparison R = 6. xlin = np.array([4e23, 2e26]) ylin = R * xlin/1e25 plt.plot(xlin, ylin, 'k--', lw=2) plt.savefig('gas_dust_ratio.pdf') # + hdu = fits.open('HI_allsky.fits') im1 = hdu[0].data hd1 = hdu[0].header # convert from cm-2 to m-2 im1 *= 1e4 hdu.close() hdu = fits.open('planck857_allsky.fits') im2 = hdu[0].data hd2 = hdu[0].header hdu.close() # - dust_vs_gas(im1, im2, subsample=-1)
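The `subsample` branch above thins the scatter plot by keeping a random `1/subsample` fraction of the pixels; its core logic can be checked in isolation (array and seed chosen only for illustration):

```python
import random
import numpy as np

x = np.arange(100.0)  # stand-in for the flattened image
subsample = 4
N = x.size
random.seed(0)  # reproducible selection for the demo
random_select = random.sample(range(N), int(N / subsample))
print(len(random_select))  # 25 points kept out of 100
```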
atomic/.ipynb_checkpoints/gas_dust_ratio-checkpoint.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# + [markdown] id="FYSnbkHAqas1"
# **Preprocessing**

# + id="75aqANx3qatN"
import torch
import torchvision
from torch import nn
from torch import optim
from torch.utils.data import DataLoader
import pandas as pd
from sklearn.linear_model import LinearRegression
import numpy as np

# + colab={"base_uri": "https://localhost:8080/"} id="kjAfLUmqqatQ" outputId="247dd833-5b46-497c-c7b8-531dfe542c6b"
#*** PREPROCESSING ***

restrict_idx = None  # set to an int to truncate the training data for quick tests

def preprocess(train_path, test_path, test=False):
    df = pd.read_csv(train_path)[:restrict_idx]
    df = df.dropna(subset=['Sales'])
    df_test = pd.read_csv(test_path)
    del df_test['Id']
    l_test = len(df_test)
    df_test['Sales'] = 0
    df = pd.concat([df, df_test])
    print(df.columns)
    df['month'] = pd.DatetimeIndex(df['Date']).month
    df['DayInMonth'] = pd.DatetimeIndex(df['Date']).day
    df = pd.get_dummies(data=df, columns=['Store', 'AssortmentType', 'StoreType', 'StateHoliday', "month", "DayOfWeek"])
    del df["Date"]
    Y = df["Sales"].values
    del df["Sales"]
    df = df.values
    return df, Y, l_test

X_train, Y_train, l_test = preprocess("/content/drive/MyDrive/Lithuanian_Challenge/train_data.csv", "/content/drive/MyDrive/Lithuanian_Challenge/test_data.csv")
X_test_submission, X_train, Y_train = X_train[-l_test:], X_train[:-l_test], Y_train[:-l_test]
X_val, Y_val = X_train[-50000:], Y_train[-50000:]
X_train, Y_train = X_train[:-50000], Y_train[:-50000]
print('Submission Set Shape: ', X_test_submission.shape)
print('Train Set Shape: ', X_train.shape)
print('Y Train Shape: ', Y_train.shape)
print('Validation Set Shape: ', X_val.shape)

# + [markdown] id="9HNxJuynqate"
# **CNN**

# + colab={"base_uri": "https://localhost:8080/"} id="7GrRsepZqatf" outputId="9532ddec-c118-4daf-f67d-afb27b59151a"
from keras import optimizers
from keras.utils import plot_model
from keras.models import Sequential, Model
from keras.layers.convolutional import Conv1D, MaxPooling1D
from keras.layers import Dense, LSTM, RepeatVector, TimeDistributed, Flatten, Dropout
from keras import backend as K

def rmse(y_true, y_pred):
    # root-mean-squared error on tensors; casting with float() would fail on batched tensors
    return K.sqrt(K.mean(K.square(y_pred - y_true)))

X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))
X_val = X_val.reshape((X_val.shape[0], X_val.shape[1], 1))
epochs = 3
batch = 256
lr = 0.001
adam = optimizers.Adam(lr)

model_cnn = Sequential()
model_cnn.add(Conv1D(filters=256, kernel_size=2, activation='relu', input_shape=(X_train.shape[1], X_train.shape[2])))
model_cnn.add(MaxPooling1D(pool_size=2))
model_cnn.add(Flatten())
model_cnn.add(Dropout(0.1))
model_cnn.add(Dense(1024, activation='relu'))
model_cnn.add(Dense(1))
model_cnn.compile(loss=rmse, optimizer=adam)
model_cnn.summary()
cnn_history = model_cnn.fit(X_train, Y_train, epochs=epochs, batch_size=batch, validation_data=(X_val, Y_val), verbose=1)

# + id="3Nu3hmQNqath"
res = model_cnn.predict(X_test_submission.reshape((X_test_submission.shape[0], X_test_submission.shape[1], 1)))

# + id="IC7IWcICqati" colab={"base_uri": "https://localhost:8080/"} outputId="12efe649-ffb6-4f1f-b521-c671ce82eb84"
print("res_min", np.min(res))
print('res_max', np.max(res))
print('res_mean', np.mean(res))
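The RMSE loss used above can be sanity-checked against a plain NumPy implementation of the same formula (values below are illustrative, not the notebook's data):

```python
import numpy as np

def rmse_np(y_true, y_pred):
    # same formula as the Keras loss: sqrt(mean((pred - true)^2))
    return np.sqrt(np.mean(np.square(np.asarray(y_pred) - np.asarray(y_true))))

print(rmse_np([0.0, 0.0], [3.0, 4.0]))  # sqrt((9 + 16) / 2) = 3.5355...
```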
Track-2-store-sales-predictions/CNN.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # A neural-network implementation of machine translation
#
# In this lesson we build French-to-English machine translation with an encoder-decoder architecture.
#
# The code consists of three parts: data preprocessing, encoder + simple decoder, and encoder + decoder with attention.
#
# This file is the companion source code for Lesson VIII of "Deep Learning on PyTorch" by the Swarma AI Campus, http://campus.swarma.org

# +
# packages used
# system utilities such as io and regular expressions
from io import open
import unicodedata
import string
import re
import random

# required PyTorch packages
import torch
import torch.nn as nn
from torch.autograd import Variable
from torch import optim
import torch.nn.functional as F
import torch.utils.data as DataSet

# plotting packages
import matplotlib.pyplot as plt
import numpy as np

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "4"

# check whether this machine has a supported GPU
use_cuda = torch.cuda.is_available()

# inline plotting
# %matplotlib inline
# -

# # 1. Data preparation
#
# Read the corpus files from disk and do basic preprocessing

# read the parallel corpus (English-French)
lines = open('data/fra.txt', encoding='utf-8')
french = lines.read().strip().split('\n')
lines = open('data/eng.txt', encoding='utf-8')
english = lines.read().strip().split('\n')
print(len(french))
print(len(english))

# +
# define two special tokens marking sentence start and end
SOS_token = 0
EOS_token = 1

# a language class that builds vocabularies and counts word frequencies.
# Its two most important members are the dictionaries word2index and index2word,
# which map words to indices and indices back to words
class Lang:
    def __init__(self, name):
        self.name = name
        self.word2index = {}
        self.word2count = {}
        self.index2word = {0: "SOS", 1: "EOS"}
        self.n_words = 2  # Count SOS and EOS

    def addSentence(self, sentence):
        # add a new sentence to the language; a sentence is a group of
        # space-separated words, which are split out and processed one by one
        for word in sentence.split(' '):
            self.addWord(word)

    def addWord(self, word):
        # insert a word; if it is already in the dictionary, update its frequency,
        # and maintain the reverse index from word id to word
        if word not in self.word2index:
            self.word2index[word] = self.n_words
            self.word2count[word] = 1
            self.index2word[self.n_words] = word
            self.n_words += 1
        else:
            self.word2count[word] += 1

# Turn a Unicode string to plain ASCII, thanks to
# http://stackoverflow.com/a/518232/2809427
def unicodeToAscii(s):
    return ''.join(
        c for c in unicodedata.normalize('NFD', s)
        if unicodedata.category(c) != 'Mn'
    )

# lower-case the input string and normalize its punctuation
def normalizeEngString(s):
    s = unicodeToAscii(s.lower().strip())
    s = re.sub(r"([.!?])", r" \1", s)
    s = re.sub(r"[^a-zA-Z.!?]+", r" ", s)
    return s

# filter sentence pairs: no sentence may exceed MAX_LENGTH words
def filterPair(p):
    return len(p[0].split(' ')) < MAX_LENGTH and \
        len(p[1].split(' ')) < MAX_LENGTH

# given a sentence, return the corresponding sequence of word indices
def indexesFromSentence(lang, sentence):
    return [lang.word2index[word] for word in sentence.split(' ')]

# same as above, but the output sequence is padded to length MAX_LENGTH
def indexFromSentence(lang, sentence):
    indexes = indexesFromSentence(lang, sentence)
    indexes.append(EOS_token)
    for i in range(MAX_LENGTH - len(indexes)):
        indexes.append(EOS_token)
    return indexes

# from a sentence pair to index sequences
def indexFromPair(pair):
    input_variable = indexFromSentence(input_lang, pair[0])
    target_variable = indexFromSentence(output_lang, pair[1])
    return (input_variable, target_variable)

# from an index list back to a sentence
def SentenceFromList(lang, lst):
    result = [lang.index2word[i] for i in lst if i != EOS_token]
    # both languages are space-joined (the original's two branches were identical)
    result = ' '.join(result)
    return result

# accuracy helper
def rightness(predictions, labels):
    """Compute accuracy: predictions is a batch_size x num_classes matrix of model outputs, labels holds the correct answers"""
    pred = torch.max(predictions.data, 1)[1]  # index of the largest element in each row (each sample)
    rights = pred.eq(labels.data).sum()  # count how many predictions match the labels
    return rights, len(labels)  # return the number correct and the number compared

# +
# build the training data
# maximum sentence length
MAX_LENGTH = 5

# normalize both sides of each sentence pair
pairs = [[normalizeEngString(fra), normalizeEngString(eng)] for fra, eng in zip(french, english)]

# drop pairs whose sentences exceed MAX_LENGTH
input_lang = Lang('French')
output_lang = Lang('English')
pairs = [pair for pair in pairs if filterPair(pair)]
print('valid sentence pairs:', len(pairs))

# build the two vocabularies (French and English)
for pair in pairs:
    input_lang.addSentence(pair[0])
    output_lang.addSentence(pair[1])
print("vocabulary sizes:")
print(input_lang.name, input_lang.n_words)
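The normalization pipeline above (NFD decomposition, accent stripping, punctuation spacing) can be exercised standalone; the sample sentence is illustrative:

```python
import re
import unicodedata

def unicodeToAscii(s):
    # drop combining marks (category Mn) after NFD decomposition
    return ''.join(
        c for c in unicodedata.normalize('NFD', s)
        if unicodedata.category(c) != 'Mn'
    )

def normalizeEngString(s):
    s = unicodeToAscii(s.lower().strip())
    s = re.sub(r"([.!?])", r" \1", s)      # put a space before sentence punctuation
    s = re.sub(r"[^a-zA-Z.!?]+", r" ", s)  # collapse everything else to single spaces
    return s

print(normalizeEngString("Ça va!"))  # -> "ca va !"
```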
print(output_lang.name, output_lang.n_words) # 形成训练集,首先,打乱所有句子的顺序 random_idx = np.random.permutation(range(len(pairs))) pairs = [pairs[i] for i in random_idx] # 将语言转变为单词的编码构成的序列 pairs = [indexFromPair(pair) for pair in pairs] # 形成训练集、校验集和测试集 valid_size = len(pairs) // 10 if valid_size > 10000: valid_size = 10000 pp = pairs pairs = pairs[ : - valid_size] valid_pairs = pp[-valid_size : -valid_size // 2] test_pairs = pp[- valid_size // 2 :] # 利用PyTorch的dataset和dataloader对象,将数据加载到加载器里面,并且自动分批 batch_size = 32 #一批包含32个数据记录,这个数字越大,系统在训练的时候,每一个周期处理的数据就越多,这样处理越快,但总的数据量会减少 print('训练记录:', len(pairs)) print('校验记录:', len(valid_pairs)) print('测试记录:', len(test_pairs)) # 形成训练对列表,用于喂给train_dataset pairs_X = [pair[0] for pair in pairs] pairs_Y = [pair[1] for pair in pairs] valid_X = [pair[0] for pair in valid_pairs] valid_Y = [pair[1] for pair in valid_pairs] test_X = [pair[0] for pair in test_pairs] test_Y = [pair[1] for pair in test_pairs] # 形成训练集 train_dataset = DataSet.TensorDataset(torch.LongTensor(pairs_X), torch.LongTensor(pairs_Y)) # 形成数据加载器 train_loader = DataSet.DataLoader(train_dataset, batch_size = batch_size, shuffle = True, num_workers=8) # 校验数据 valid_dataset = DataSet.TensorDataset(torch.LongTensor(valid_X), torch.LongTensor(valid_Y)) valid_loader = DataSet.DataLoader(valid_dataset, batch_size = batch_size, shuffle = True, num_workers=8) # 测试数据 test_dataset = DataSet.TensorDataset(torch.LongTensor(test_X), torch.LongTensor(test_Y)) test_loader = DataSet.DataLoader(test_dataset, batch_size = batch_size, shuffle = True, num_workers = 8) # - # # 二、构建编码器及简单的解码器RNN # + # 构建编码器RNN class EncoderRNN(nn.Module): def __init__(self, input_size, hidden_size, n_layers=1): super(EncoderRNN, self).__init__() self.n_layers = n_layers self.hidden_size = hidden_size # 第一层Embeddeing self.embedding = nn.Embedding(input_size, hidden_size) # 第二层GRU,注意GRU中可以定义很多层,主要靠num_layers控制 self.gru = nn.GRU(hidden_size, hidden_size, batch_first = True, num_layers = self.n_layers, bidirectional = True) 
def forward(self, input, hidden): #前馈过程 #input尺寸: batch_size, length_seq embedded = self.embedding(input) #embedded尺寸:batch_size, length_seq, hidden_size output = embedded output, hidden = self.gru(output, hidden) # output尺寸:batch_size, length_seq, hidden_size # hidden尺寸:num_layers * directions, batch_size, hidden_size return output, hidden def initHidden(self, batch_size): # 对隐含单元变量全部进行初始化 #num_layers * num_directions, batch, hidden_size result = Variable(torch.zeros(self.n_layers * 2, batch_size, self.hidden_size)) if use_cuda: return result.cuda() else: return result # 解码器网络 class DecoderRNN(nn.Module): def __init__(self, hidden_size, output_size, n_layers=1): super(DecoderRNN, self).__init__() self.n_layers = n_layers self.hidden_size = hidden_size # 嵌入层 self.embedding = nn.Embedding(output_size, hidden_size) # GRU单元 # 设置batch_first为True的作用就是为了让GRU接受的张量可以和其它单元类似,第一个维度为batch_size self.gru = nn.GRU(hidden_size, hidden_size, batch_first = True, num_layers = self.n_layers, bidirectional = True) # dropout操作层 self.dropout = nn.Dropout(0.1) # 最后的全链接层 self.out = nn.Linear(hidden_size * 2, output_size) self.softmax = nn.LogSoftmax(dim = 1) def forward(self, input, hidden): # input大小:batch_size, length_seq output = self.embedding(input) # embedded大小:batch_size, length_seq, hidden_size output = F.relu(output) output, hidden = self.gru(output, hidden) # output的结果再dropout output = self.dropout(output) # output大小:batch_size, length_seq, hidden_size * directions # hidden大小:n_layers * directions, batch_size, hidden_size output = self.softmax(self.out(output[:, -1, :])) # output大小:batch_size * output_size # 从output中取时间步重新开始 return output, hidden def initHidden(self): # 初始化隐含单元的状态,输入变量的尺寸:num_layers * directions, batch_size, hidden_size result = Variable(torch.zeros(self.n_layers * 2, batch_size, self.hidden_size)) if use_cuda: return result.cuda() else: return result # + # 开始训练过程 # 定义网络结构 hidden_size = 32 max_length = MAX_LENGTH n_layers = 1 encoder = 
EncoderRNN(input_lang.n_words, hidden_size, n_layers = n_layers)
decoder = DecoderRNN(hidden_size, output_lang.n_words, n_layers = n_layers)

if use_cuda:
    # If a GPU is available, move the models onto it
    encoder = encoder.cuda()
    decoder = decoder.cuda()

learning_rate = 0.0001
# Define a separate optimizer for each network
encoder_optimizer = optim.Adam(encoder.parameters(), lr=learning_rate)
decoder_optimizer = optim.Adam(decoder.parameters(), lr=learning_rate)

# Define the loss function
criterion = nn.NLLLoss()

teacher_forcing_ratio = 0.5

plot_losses = []

# Loop over num_epoch training epochs
num_epoch = 100
for epoch in range(num_epoch):
    print_loss_total = 0
    # Loop over the training data
    for data in train_loader:
        input_variable = Variable(data[0]).cuda() if use_cuda else Variable(data[0])
        # input_variable size: batch_size, length_seq
        target_variable = Variable(data[1]).cuda() if use_cuda else Variable(data[1])
        # target_variable size: batch_size, length_seq

        # Initialize the encoder hidden state
        encoder_hidden = encoder.initHidden(data[0].size()[0])

        # Clear the gradients
        encoder_optimizer.zero_grad()
        decoder_optimizer.zero_grad()
        loss = 0

        # Run the encoder; the loop over time steps is handled automatically by the GRU
        encoder_outputs, encoder_hidden = encoder(input_variable, encoder_hidden)
        # encoder_outputs size: batch_size, length_seq, hidden_size * direction
        # encoder_hidden size: direction * n_layers, batch_size, hidden_size

        # Start the decoder
        # The first token fed to the decoder is SOS
        decoder_input = Variable(torch.LongTensor([[SOS_token]] * target_variable.size()[0]))
        # decoder_input size: batch_size, length_seq
        decoder_input = decoder_input.cuda() if use_cuda else decoder_input

        # Initialize the decoder hidden state with the encoder hidden state
        decoder_hidden = encoder_hidden
        # decoder_hidden size: direction * n_layers, batch_size, hidden_size

        # With probability teacher_forcing_ratio, use the target translation as supervision
        use_teacher_forcing = True if random.random() < teacher_forcing_ratio else False

        base = torch.zeros(target_variable.size()[0])
        if use_teacher_forcing:
            # Teacher forcing: feed the target token as the decoder's next input
            # Loop over time steps
            for di in range(MAX_LENGTH):
                # One decoding step
                decoder_output, decoder_hidden = decoder(decoder_input, decoder_hidden)
                # decoder_output size: batch_size, output_size
                # Accumulate the loss
                loss += criterion(decoder_output, 
target_variable[:, di])
                # Use the target token as the input at the next time step
                decoder_input = target_variable[:, di].unsqueeze(1)  # Teacher forcing
                # decoder_input size: batch_size, length_seq
        else:
            # No teacher forcing: use the decoder's own prediction as the next input
            # Loop over time steps
            for di in range(MAX_LENGTH):
                # One decoding step
                decoder_output, decoder_hidden = decoder(decoder_input, decoder_hidden)
                # decoder_output size: batch_size, output_size (vocab_size)
                # Pick the word with the highest log-probability as the output; it lands in topi
                topv, topi = decoder_output.data.topk(1, dim = 1)
                # topi size: batch_size, k
                ni = topi[:, 0]
                # Wrap ni in a Variable to serve as the next decoder input
                decoder_input = Variable(ni.unsqueeze(1))
                # decoder_input size: batch_size, length_seq
                decoder_input = decoder_input.cuda() if use_cuda else decoder_input
                # Accumulate the loss
                loss += criterion(decoder_output, target_variable[:, di])

        # Backpropagate
        loss.backward()
        loss = loss.cpu() if use_cuda else loss
        # Gradient descent step
        encoder_optimizer.step()
        decoder_optimizer.step()
        # Accumulate the total loss
        print_loss_total += loss.data.numpy()

    # Average training loss for this epoch
    print_loss_avg = print_loss_total / len(train_loader)

    # Run the validation set
    valid_loss = 0
    rights = []
    # Loop over the validation data
    for data in valid_loader:
        input_variable = Variable(data[0]).cuda() if use_cuda else Variable(data[0])
        # input_variable size: batch_size, length_seq
        target_variable = Variable(data[1]).cuda() if use_cuda else Variable(data[1])
        # target_variable size: batch_size, length_seq

        encoder_hidden = encoder.initHidden(data[0].size()[0])
        loss = 0
        encoder_outputs, encoder_hidden = encoder(input_variable, encoder_hidden)
        # encoder_outputs size: batch_size, length_seq, hidden_size * direction
        # encoder_hidden size: direction * n_layers, batch_size, hidden_size

        decoder_input = Variable(torch.LongTensor([[SOS_token]] * target_variable.size()[0]))
        # decoder_input size: batch_size, length_seq
        decoder_input = decoder_input.cuda() if use_cuda else decoder_input

        decoder_hidden = encoder_hidden
        # decoder_hidden size: direction * n_layers, batch_size, hidden_size

        # No teacher forcing: use the decoder's own prediction as its next input
        for di in range(MAX_LENGTH):
            # One decoding step
            decoder_output, decoder_hidden = decoder(decoder_input, decoder_hidden)
            # decoder_output size: batch_size, output_size (vocab_size)
            # Pick the highest-scoring word as the decoder's prediction
            topv, topi = decoder_output.data.topk(1, dim = 1)
            # topi size: batch_size, k
            ni = topi[:, 0]
            decoder_input = Variable(ni.unsqueeze(1))
            # decoder_input size: batch_size, length_seq
            decoder_input = decoder_input.cuda() if use_cuda else decoder_input

            # Evaluate accuracy: rightness returns a pair (number correct, total)
            right = rightness(decoder_output, target_variable[:, di])
            rights.append(right)
            # Accumulate the loss
            loss += criterion(decoder_output, target_variable[:, di])
        loss = loss.cpu() if use_cuda else loss
        # Accumulate the validation loss
        valid_loss += loss.data.numpy()

    # Print statistics for this epoch
    right_ratio = 1.0 * np.sum([i[0] for i in rights]) / np.sum([i[1] for i in rights])
    print('Progress: %d%%  training loss: %.4f, validation loss: %.4f, word accuracy: %.2f%%' % (epoch * 1.0 / num_epoch * 100,
                                                                                                 print_loss_avg,
                                                                                                 valid_loss / len(valid_loader),
                                                                                                 100.0 * right_ratio))
    # Record the basic statistics
    plot_losses.append([print_loss_avg, valid_loss / len(valid_loader), right_ratio])
# -

# Plot the training statistics
a = [i[0] for i in plot_losses]
b = [i[1] for i in plot_losses]
c = [i[2] * 100 for i in plot_losses]
plt.plot(a, '-', label = 'Training Loss')
plt.plot(b, ':', label = 'Validation Loss')
plt.plot(c, '.', label = 'Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Loss & Accuracy')
plt.legend()

# +
# Evaluate the model on the test set
# First, randomly choose 20 sentences from the test set
indices = np.random.choice(range(len(test_X)), 20)

# Loop over the chosen sentences
for ind in indices:
    data = [test_X[ind]]
    target = [test_Y[ind]]
    # Print the source-language sentence
    print(SentenceFromList(input_lang, data[0]))
    input_variable = Variable(torch.LongTensor(data)).cuda() if use_cuda else Variable(torch.LongTensor(data))
    # input_variable size: batch_size, length_seq
    target_variable = Variable(torch.LongTensor(target)).cuda() if use_cuda else Variable(torch.LongTensor(target))
    # target_variable size: batch_size, length_seq

    # Initialize the encoder
    encoder_hidden = encoder.initHidden(input_variable.size()[0])
    loss = 0
    # Encode the sentence; the result is stored in encoder_hidden
    encoder_outputs, encoder_hidden = encoder(input_variable, encoder_hidden)
    # encoder_outputs size: batch_size, 
length_seq, hidden_size * direction
    # encoder_hidden size: direction * n_layers, batch_size, hidden_size

    # Feed SOS as the first input to the decoder
    decoder_input = Variable(torch.LongTensor([[SOS_token]] * target_variable.size()[0]))
    # decoder_input size: batch_size, length_seq
    decoder_input = decoder_input.cuda() if use_cuda else decoder_input

    # Copy the encoder hidden state into the decoder hidden state
    decoder_hidden = encoder_hidden
    # decoder_hidden size: direction * n_layers, batch_size, hidden_size

    # Prediction without teacher forcing: use the decoder's own prediction as its next input
    output_sentence = []
    decoder_attentions = torch.zeros(max_length, max_length)
    rights = []
    # Loop over the output time steps
    for di in range(MAX_LENGTH):
        # One decoding step
        decoder_output, decoder_hidden = decoder(decoder_input, decoder_hidden)
        # decoder_output size: batch_size, output_size (vocab_size)
        # The decoder's prediction
        topv, topi = decoder_output.data.topk(1, dim = 1)
        # topi size: batch_size, k
        ni = topi[:, 0]
        decoder_input = Variable(ni.unsqueeze(1))
        ni = ni.cpu().numpy()[0]  # move to CPU before converting to numpy
        # Append this step's predicted word index to output_sentence
        output_sentence.append(ni)
        # decoder_input size: batch_size, length_seq
        decoder_input = decoder_input.cuda() if use_cuda else decoder_input

        # Evaluate the accuracy of the predicted word; right is a pair (number correct, total)
        right = rightness(decoder_output, target_variable[:, di])
        rights.append(right)

    # Decode the translated sentence from the word indices
    sentence = SentenceFromList(output_lang, output_sentence)
    # Decode the reference answer
    standard = SentenceFromList(output_lang, target[0])

    # Print both sentences
    print('Machine translation:', sentence)
    print('Reference translation:', standard)

    # Print the word accuracy for this sentence
    right_ratio = 1.0 * np.sum([i[0] for i in rights]) / np.sum([i[1] for i in rights])
    print('Word accuracy:', 100.0 * right_ratio)
    print('\n')
# -

# # Attention Model

# +
# Re-process the data into training, validation and test sets; the main change is a larger MAX_LENGTH

# Maximum sentence length
MAX_LENGTH = 10

# Normalize the strings
pairs = [[normalizeEngString(fra), normalizeEngString(eng)] for fra, eng in zip(french, english)]

# Filter out sentence pairs longer than MAX_LENGTH
input_lang = Lang('French')
output_lang = Lang('English')
pairs = [pair for pair in pairs if filterPair(pair)]
print('Valid sentence pairs:', len(pairs))

# Build the two dictionaries (one for French, one for English)
for pair in pairs:
    input_lang.addSentence(pair[0])
    output_lang.addSentence(pair[1])
print("Total word counts:")
print(input_lang.name, input_lang.n_words)
print(output_lang.name, output_lang.n_words)

# Build the training data: first, shuffle the order of all sentence pairs
random_idx = np.random.permutation(range(len(pairs)))
pairs = [pairs[i] for i in random_idx]

# Convert each sentence into a sequence of word indices
pairs = [indexFromPair(pair) for pair in pairs]

# Split into training, validation and test sets
valid_size = len(pairs) // 10
if valid_size > 10000:
    valid_size = 10000
valid_pairs = pairs[-valid_size : -valid_size // 2]
test_pairs = pairs[- valid_size // 2 :]
pairs = pairs[ : - valid_size]

# Use PyTorch's Dataset and DataLoader objects to load the data and batch it automatically
batch_size = 32  # each batch holds 32 records; larger batches process more data per step, which is faster, but give fewer updates per epoch

print('Training records:', len(pairs))
print('Validation records:', len(valid_pairs))
print('Test records:', len(test_pairs))

# Build the input and target lists to feed into train_dataset
pairs_X = [pair[0] for pair in pairs]
pairs_Y = [pair[1] for pair in pairs]
valid_X = [pair[0] for pair in valid_pairs]
valid_Y = [pair[1] for pair in valid_pairs]
test_X = [pair[0] for pair in test_pairs]
test_Y = [pair[1] for pair in test_pairs]

# Training set
train_dataset = DataSet.TensorDataset(torch.LongTensor(pairs_X), torch.LongTensor(pairs_Y))
# Training data loader
train_loader = DataSet.DataLoader(train_dataset, batch_size = batch_size, shuffle = True, num_workers = 8)
# Validation data
valid_dataset = DataSet.TensorDataset(torch.LongTensor(valid_X), torch.LongTensor(valid_Y))
valid_loader = DataSet.DataLoader(valid_dataset, batch_size = batch_size, shuffle = True, num_workers = 8)
# Test data
test_dataset = DataSet.TensorDataset(torch.LongTensor(test_X), torch.LongTensor(test_Y))
test_loader = DataSet.DataLoader(test_dataset, batch_size = batch_size, shuffle = True, num_workers = 8)
# -

# Attention-based decoder RNN
class AttnDecoderRNN(nn.Module):
    def __init__(self, hidden_size, output_size, n_layers=1, dropout_p=0.1, max_length=MAX_LENGTH):
        super(AttnDecoderRNN, self).__init__()
        self.hidden_size = hidden_size
        self.output_size = output_size
        self.n_layers = n_layers
        self.dropout_p = dropout_p
        self.max_length = max_length

        # Word embedding layer
        self.embedding = nn.Embedding(self.output_size, 
self.hidden_size)
        # Attention network (a feed-forward layer)
        self.attn = nn.Linear(self.hidden_size * (2 * n_layers + 1), self.max_length)
        # Maps the result of the attention step into the following layers
        self.attn_combine = nn.Linear(self.hidden_size * 3, self.hidden_size)
        # Dropout layer
        self.dropout = nn.Dropout(self.dropout_p)
        # A bidirectional GRU, with batch_first = True for convenience
        self.gru = nn.GRU(self.hidden_size, self.hidden_size, bidirectional = True,
                          num_layers = self.n_layers, batch_first = True)
        self.out = nn.Linear(self.hidden_size * 2, self.output_size)

    def forward(self, input, hidden, encoder_outputs):
        # One decoding step
        # input size: batch_size, length_seq
        embedded = self.embedding(input)
        # embedded size: batch_size, length_seq, hidden_size
        embedded = embedded[:, 0, :]
        # embedded size: batch_size, hidden_size
        embedded = self.dropout(embedded)

        # Reshape the hidden tensor so that batch_size comes first
        # hidden size: direction * n_layers, batch_size, hidden_size
        temp_for_transpose = torch.transpose(hidden, 0, 1).contiguous()
        temp_for_transpose = temp_for_transpose.view(temp_for_transpose.size()[0], -1)
        hidden_attn = temp_for_transpose
        # Input to the attention layer
        # hidden_attn size: batch_size, direction * n_layers * hidden_size
        input_to_attention = torch.cat((embedded, hidden_attn), 1)
        # input_to_attention size: batch_size, hidden_size * (1 + direction * n_layers)

        # Attention weights
        attn_weights = F.softmax(self.attn(input_to_attention), dim=1)
        # attn_weights size: batch_size, max_length

        # When the input is shorter than max_length, keep only the relevant part of the weights
        attn_weights = attn_weights[:, : encoder_outputs.size()[1]]
        # attn_weights size: batch_size, length_seq_of_encoder
        attn_weights = attn_weights.unsqueeze(1)
        # attn_weights size: batch_size, 1, length_seq -- the middle 1 is needed for bmm

        # Multiply the attention weights by encoder_outputs to get the attended context
        # encoder_outputs size: batch_size, seq_length, hidden_size * direction
        attn_applied = torch.bmm(attn_weights, encoder_outputs)
        # attn_applied size: batch_size, 1, hidden_size * direction
        # bmm: batched matrix multiply; it keeps the batch dimension and contracts over the time dimension

        # Concatenate the input word vector with the attention context into one combined input vector
        output = torch.cat((embedded, attn_applied[:, 0, :]), 1)
        # output size: batch_size, hidden_size * (direction + 1)

        # 
Map the combined input vector into the GRU hidden dimension
        output = self.attn_combine(output).unsqueeze(1)
        # output size: batch_size, length_seq, hidden_size
        output = F.relu(output)
        # Apply dropout
        output = self.dropout(output)

        # Run the decoder GRU
        output, hidden = self.gru(output, hidden)
        # output size: batch_size, length_seq, hidden_size * directions
        # hidden size: n_layers * directions, batch_size, hidden_size

        # Take the last step of the GRU output and feed it to the final fully connected layer
        output = self.out(output[:, -1, :])
        # output size: batch_size, output_size

        # Log-softmax to get the output
        output = F.log_softmax(output, dim = 1)
        # output size: batch_size, output_size
        return output, hidden, attn_weights

    def initHidden(self, batch_size):
        # Initialize the decoder hidden state; size: n_layers * directions, batch_size, hidden_size
        result = Variable(torch.zeros(self.n_layers * 2, batch_size, self.hidden_size))
        if use_cuda:
            return result.cuda()
        else:
            return result

# +
# Train the attention-based RNN
# Define the network architecture
hidden_size = 32
max_length = MAX_LENGTH
n_layers = 1

encoder = EncoderRNN(input_lang.n_words, hidden_size, n_layers = n_layers)
decoder = AttnDecoderRNN(hidden_size, output_lang.n_words, dropout_p=0.5,
                         max_length = max_length, n_layers = n_layers)
if use_cuda:
    encoder = encoder.cuda()
    decoder = decoder.cuda()

learning_rate = 0.0001
encoder_optimizer = optim.Adam(encoder.parameters(), lr=learning_rate)
decoder_optimizer = optim.Adam(decoder.parameters(), lr=learning_rate)

criterion = nn.NLLLoss()
teacher_forcing_ratio = 0.5

num_epoch = 100

# Training loop over epochs
plot_losses = []
for epoch in range(num_epoch):
    # Put the decoder in training mode so that dropout is active
    decoder.train()
    print_loss_total = 0
    # Loop over the training data
    for data in train_loader:
        input_variable = Variable(data[0]).cuda() if use_cuda else Variable(data[0])
        # input_variable size: batch_size, length_seq
        target_variable = Variable(data[1]).cuda() if use_cuda else Variable(data[1])
        # target_variable size: batch_size, length_seq

        # Clear the gradients
        encoder_optimizer.zero_grad()
        decoder_optimizer.zero_grad()
        encoder_hidden = encoder.initHidden(data[0].size()[0])
        loss = 0

        # Run the encoder
        encoder_outputs, encoder_hidden = encoder(input_variable, 
encoder_hidden)
        # encoder_outputs size: batch_size, length_seq, hidden_size * direction
        # encoder_hidden size: direction * n_layers, batch_size, hidden_size

        # Start the decoder
        decoder_input = Variable(torch.LongTensor([[SOS_token]] * target_variable.size()[0]))
        # decoder_input size: batch_size, length_seq
        decoder_input = decoder_input.cuda() if use_cuda else decoder_input

        # Pass the encoder hidden state to the decoder as the encoding result
        decoder_hidden = encoder_hidden
        # decoder_hidden size: direction * n_layers, batch_size, hidden_size

        # Train the decoder in one of two modes: with teacher forcing (feed the target token as
        # the next input) or without (feed the decoder's own prediction as the next input)
        use_teacher_forcing = True if random.random() < teacher_forcing_ratio else False
        if use_teacher_forcing:
            # Use the supervision signal as the decoder's next input
            # Loop over time steps
            for di in range(MAX_LENGTH):
                # The decoder receives the input word decoder_input, its previous hidden state,
                # and the encoder outputs at every time step
                decoder_output, decoder_hidden, decoder_attention = decoder(
                    decoder_input, decoder_hidden, encoder_outputs)
                # decoder_output size: batch_size, output_size
                # Accumulate the loss and take the target token as the next decoder input
                loss += criterion(decoder_output, target_variable[:, di])
                decoder_input = target_variable[:, di].unsqueeze(1)  # Teacher forcing
                # decoder_input size: batch_size, length_seq
        else:
            # No teacher forcing: use the decoder's own prediction as the next input
            # Loop over time steps
            for di in range(MAX_LENGTH):
                decoder_output, decoder_hidden, decoder_attention = decoder(
                    decoder_input, decoder_hidden, encoder_outputs)
                # decoder_output size: batch_size, output_size (vocab_size)
                # Take the decoder's prediction and use it as the next input
                topv, topi = decoder_output.data.topk(1, dim = 1)
                # topi size: batch_size, k
                ni = topi[:, 0]
                decoder_input = Variable(ni.unsqueeze(1))
                # decoder_input size: batch_size, length_seq
                decoder_input = decoder_input.cuda() if use_cuda else decoder_input
                # Accumulate the loss
                loss += criterion(decoder_output, target_variable[:, di])

        # Backpropagate
        loss.backward()
        loss = loss.cpu() if use_cuda else loss
        # Gradient descent step
        encoder_optimizer.step()
        decoder_optimizer.step()
        print_loss_total += loss.data.numpy()

    print_loss_avg = print_loss_total / len(train_loader)
    valid_loss = 0
    rights = []
    # Put the decoder in evaluation mode to switch off dropout
    decoder.eval()
    # Loop over the validation data
    for data in 
valid_loader:
        input_variable = Variable(data[0]).cuda() if use_cuda else Variable(data[0])
        # input_variable size: batch_size, length_seq
        target_variable = Variable(data[1]).cuda() if use_cuda else Variable(data[1])
        # target_variable size: batch_size, length_seq

        encoder_hidden = encoder.initHidden(data[0].size()[0])
        loss = 0
        encoder_outputs, encoder_hidden = encoder(input_variable, encoder_hidden)
        # encoder_outputs size: batch_size, length_seq, hidden_size * direction
        # encoder_hidden size: direction * n_layers, batch_size, hidden_size

        decoder_input = Variable(torch.LongTensor([[SOS_token]] * target_variable.size()[0]))
        # decoder_input size: batch_size, length_seq
        decoder_input = decoder_input.cuda() if use_cuda else decoder_input

        decoder_hidden = encoder_hidden
        # decoder_hidden size: direction * n_layers, batch_size, hidden_size

        # Predict step by step
        for di in range(MAX_LENGTH):
            decoder_output, decoder_hidden, decoder_attention = decoder(
                decoder_input, decoder_hidden, encoder_outputs)
            # decoder_output size: batch_size, output_size (vocab_size)
            topv, topi = decoder_output.data.topk(1, dim = 1)
            # topi size: batch_size, k
            ni = topi[:, 0]
            decoder_input = Variable(ni.unsqueeze(1))
            # decoder_input size: batch_size, length_seq
            decoder_input = decoder_input.cuda() if use_cuda else decoder_input
            right = rightness(decoder_output, target_variable[:, di])
            rights.append(right)
            loss += criterion(decoder_output, target_variable[:, di])
        loss = loss.cpu() if use_cuda else loss
        valid_loss += loss.data.numpy()

    # Compute and print the average losses and accuracy
    right_ratio = 1.0 * np.sum([i[0] for i in rights]) / np.sum([i[1] for i in rights])
    print('Progress: %d%%  training loss: %.4f, validation loss: %.4f, word accuracy: %.2f%%' % (epoch * 1.0 / num_epoch * 100,
                                                                                                 print_loss_avg,
                                                                                                 valid_loss / len(valid_loader),
                                                                                                 100.0 * right_ratio))
    plot_losses.append([print_loss_avg, valid_loss / len(valid_loader), right_ratio])
# -

# Plot the statistics
# torch.save(encoder, 'encoder-final.mdl')
# torch.save(decoder, 'decoder-final.mdl')
encoder = torch.load('encoder-final.mdl')
decoder = torch.load('decoder-final.mdl')
a = 
[i[0] for i in plot_losses]
b = [i[1] for i in plot_losses]
c = [i[2] * 100 for i in plot_losses]
plt.plot(a, '-', label = 'Training Loss')
plt.plot(b, ':', label = 'Validation Loss')
plt.plot(c, '.', label = 'Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Loss & Accuracy')
plt.legend()

# Randomly choose 20 sentences from the test set and translate them
indices = np.random.choice(range(len(test_X)), 20)
for ind in indices:
    data = [test_X[ind]]
    target = [test_Y[ind]]
    print(data[0])
    print(SentenceFromList(input_lang, data[0]))
    input_variable = Variable(torch.LongTensor(data)).cuda() if use_cuda else Variable(torch.LongTensor(data))
    # input_variable size: batch_size, length_seq
    target_variable = Variable(torch.LongTensor(target)).cuda() if use_cuda else Variable(torch.LongTensor(target))
    # target_variable size: batch_size, length_seq

    encoder_hidden = encoder.initHidden(input_variable.size()[0])
    loss = 0
    encoder_outputs, encoder_hidden = encoder(input_variable, encoder_hidden)
    # encoder_outputs size: batch_size, length_seq, hidden_size * direction
    # encoder_hidden size: direction * n_layers, batch_size, hidden_size

    decoder_input = Variable(torch.LongTensor([[SOS_token]] * target_variable.size()[0]))
    # decoder_input size: batch_size, length_seq
    decoder_input = decoder_input.cuda() if use_cuda else decoder_input

    decoder_hidden = encoder_hidden
    # decoder_hidden size: direction * n_layers, batch_size, hidden_size

    # Without teacher forcing: use its own predictions as the next input
    output_sentence = []
    decoder_attentions = torch.zeros(max_length, max_length)
    rights = []
    for di in range(MAX_LENGTH):
        decoder_output, decoder_hidden, decoder_attention = decoder(
            decoder_input, decoder_hidden, encoder_outputs)
        # decoder_output size: batch_size, output_size (vocab_size)
        topv, topi = decoder_output.data.topk(1, dim = 1)
        decoder_attentions[di] = decoder_attention.data
        # topi size: batch_size, k
        ni = topi[:, 0]
        decoder_input = Variable(ni.unsqueeze(1))
        ni = ni.cpu().numpy()[0]  # move to CPU before converting to numpy
        output_sentence.append(ni)
        # decoder_input size: batch_size, length_seq
        decoder_input = 
decoder_input.cuda() if use_cuda else decoder_input
        right = rightness(decoder_output, target_variable[:, di])
        rights.append(right)

    sentence = SentenceFromList(output_lang, output_sentence)
    standard = SentenceFromList(output_lang, target[0])
    print('Machine translation:', sentence)
    print('Reference translation:', standard)
    # Print the word accuracy for this sentence
    right_ratio = 1.0 * np.sum([i[0] for i in rights]) / np.sum([i[1] for i in rights])
    print('Word accuracy:', 100.0 * right_ratio)
    print('\n')

# +
# Translate a few hand-picked sentences to inspect what the attention mechanism focuses on
input_sentence = 'elle est trop petit .'
data = np.array([indexFromSentence(input_lang, input_sentence)])

input_variable = Variable(torch.LongTensor(data)).cuda() if use_cuda else Variable(torch.LongTensor(data))
# input_variable size: batch_size, length_seq
target_variable = Variable(torch.LongTensor(target)).cuda() if use_cuda else Variable(torch.LongTensor(target))
# target_variable size: batch_size, length_seq

encoder_hidden = encoder.initHidden(input_variable.size()[0])
loss = 0
encoder_outputs, encoder_hidden = encoder(input_variable, encoder_hidden)
# encoder_outputs size: batch_size, length_seq, hidden_size * direction
# encoder_hidden size: direction * n_layers, batch_size, hidden_size

decoder_input = Variable(torch.LongTensor([[SOS_token]] * target_variable.size()[0]))
# decoder_input size: batch_size, length_seq
decoder_input = decoder_input.cuda() if use_cuda else decoder_input

decoder_hidden = encoder_hidden
# decoder_hidden size: direction * n_layers, batch_size, hidden_size

output_sentence = []
decoder_attentions = torch.zeros(max_length, max_length)
for di in range(MAX_LENGTH):
    decoder_output, decoder_hidden, decoder_attention = decoder(
        decoder_input, decoder_hidden, encoder_outputs)
    # decoder_output size: batch_size, output_size (vocab_size)
    topv, topi = decoder_output.data.topk(1, dim = 1)
    # At every step, record the attention weight vector in decoder_attentions
    decoder_attentions[di] = decoder_attention.data
    # topi size: batch_size, k
    ni = topi[:, 0]
    decoder_input = Variable(ni.unsqueeze(1))
    ni = ni.cpu().numpy()[0]  # move to CPU before converting to numpy
    output_sentence.append(ni)
    # 
decoder_input size: batch_size, length_seq
    decoder_input = decoder_input.cuda() if use_cuda else decoder_input
    right = rightness(decoder_output, target_variable[:, di])
    rights.append(right)

sentence = SentenceFromList(output_lang, output_sentence)
print('Machine translation:', sentence)
print('\n')

# +
# Stacking the attention weights recorded at each step gives the attention matrix; plot it
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(decoder_attentions.numpy(), cmap='bone')
fig.colorbar(cax)

# Set up the axes
ax.set_xticklabels([''] + input_sentence.split(' ') + ['<EOS>'], rotation=90)
ax.set_yticklabels([''] + sentence.split(' '))

# Show a word at every tick
import matplotlib.ticker as ticker
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
plt.show()
# -
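The attention step plotted above boils down to two operations: a softmax that turns a score per encoder time step into weights summing to one, and a weighted sum (`torch.bmm` in `AttnDecoderRNN`) that collapses the encoder outputs into a single context vector. The following is a minimal pure-Python sketch of just that arithmetic for a single batch item; the function names and toy numbers are illustrative, not part of the notebook's model:

```python
import math

def softmax(scores):
    # Numerically stable softmax: subtract the max before exponentiating
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(scores, encoder_outputs):
    """Weight each encoder output vector by its softmaxed score and sum them,
    mirroring torch.bmm(attn_weights, encoder_outputs) for one batch item."""
    weights = softmax(scores)
    dim = len(encoder_outputs[0])
    context = [0.0] * dim
    for w, vec in zip(weights, encoder_outputs):
        for j in range(dim):
            context[j] += w * vec[j]
    return weights, context

# Toy example: three encoder time steps with 2-dimensional hidden vectors
scores = [2.0, 0.0, 0.0]
enc = [[1.0, 0.0], [0.0, 1.0], [0.0, 1.0]]
weights, context = attend(scores, enc)
print(weights)   # heaviest weight falls on the first time step
print(context)   # context vector is pulled toward enc[0]
```

Because `enc[0]` is the only vector with mass in the first coordinate, the first component of the context equals the first attention weight, which makes the "weighted average of encoder states" interpretation easy to check by hand.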
11_neural_translator/neural translator.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.9.5 64-bit (''ac_june'': conda)' # name: python3 # --- # + [markdown] id="F46q9WuSNxCZ" # # AquaCrop-OSPy: Bridging the gap between research and practice in crop-water modelling # # - # <a href="https://colab.research.google.com/github/thomasdkelly/aquacrop/blob/master/tutorials/AquaCrop_OSPy_Notebook_1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab"/></a> # # + [markdown] id="qrRbaHsji3A-" # This series of notebooks provides users with an introduction to AquaCrop-OSPy, an open-source Python implementation of the U.N. Food and Agriculture Organization (FAO) AquaCrop model. AquaCrop-OSPy is accompanied by a series of Jupyter notebooks, which guide users interactively through a range of common applications of the model. Only basic Python experience is required, and the notebooks can easily be extended and adapted by users for their own applications and needs. # + [markdown] id="YDm931IGNxCb" # # This notebook series consists of four parts: # # 1. <a href=https://colab.research.google.com/github/thomasdkelly/aquacrop/blob/master/tutorials/AquaCrop_OSPy_Notebook_1.ipynb>Running an AquaCrop-OSPy model</a> # 2. <a href=https://colab.research.google.com/github/thomasdkelly/aquacrop/blob/master/tutorials/AquaCrop_OSPy_Notebook_2.ipynb>Estimation of irrigation water demands</a> # 3. <a href=https://colab.research.google.com/github/thomasdkelly/aquacrop/blob/master/tutorials/AquaCrop_OSPy_Notebook_3.ipynb>Optimisation of irrigation management strategies</a> # 4. 
<a href=https://colab.research.google.com/github/thomasdkelly/aquacrop/blob/master/tutorials/AquaCrop_OSPy_Notebook_4.ipynb>Projection of climate change impacts</a>
#
# + [markdown] id="_b5MjblTNxCf"
# # Notebook 1: Getting started: Running your first simulation with AquaCrop-OSPy
#
#
# + [markdown] id="wOL7nNR7khRC"
# In this notebook, you will learn interactively how to set up and run your first AquaCrop-OSPy simulation. We begin by showing how to define key model input parameters and variables, and then show how to execute a model simulation and interpret output files.
#
# The first of these notebooks outlines how to set up and run single and multi-season simulations for a selected cropping system. Examples are provided of how to set up and define relevant crop, soil, weather and management parameter values and inputs, with more advanced customization guides given in Appendices A-D.
# + [markdown] id="ZbONAGxfmGXD"
# ## Notes on Google Colab
#
# If you are unfamiliar with Jupyter Notebooks or Google Colab, <a href="https://colab.research.google.com/notebooks/intro.ipynb">here</a> is an introductory notebook to get you started. In short, these are computable documents that let you combine text blocks such as these (including html, LaTeX, etc.) with code blocks that you can write, edit and execute. To run the code in each cell either click the run button in the cell's top left corner or hit SHIFT-ENTER when the cell is selected. You can also navigate through the document using the table of contents on the left hand side.
#
# We recommend you save a copy of this notebook to your drive (File->Save a copy to drive). Then any changes you make to this notebook will be saved and you can open it again at any time and carry on.
# + [markdown] id="-YNld0zqPGqr"
# <a id='Imports'></a>
#
# ## Imports
#
#
# + [markdown] id="-cxsh3MQoAmD"
# In order to use AquaCrop-OSPy inside this notebook we first need to install and import it.
Installing aquacrop is as simple as running `pip install aquacrop`, or `!pip install aquacrop==VERSION` to install a specific version. In the cell below we also use the `output.clear` function to keep everything tidy.

# + id="LWQ58wjpobn0"
# # !pip install aquacrop
# from google.colab import output
# output.clear()

# +
# only used for local development
# import sys
# _=[sys.path.append(i) for i in ['.', '..']]

# + [markdown] id="1zGeIg5wpC7K"
# Now that `aquacrop` is installed we need to import the various components into the notebook. These functions do not all have to be imported at once as we have done in the cell below, but doing so helps keep things clearer throughout this notebook.

# + id="7Z4TGBQzYXQi"
# import aquacrop functions (the * simply means 'all')
from aquacrop.classes import *
from aquacrop.core import *

# + [markdown] id="6ZHNXbliriKC"
# # Selecting Model Components

# + [markdown] id="Zd0f66PvrvJs"
# Running an AquaCrop-OSPy model requires the selection of 5 components:
#
# 1. Daily climate measurements
# 2. Soil selection
# 3. Crop selection
# 4. Initial water content
# 5. Simulation start and end dates
#
# We will go through the selection of these components in turn below.

# + [markdown] id="d7pNB13z2TaJ"
# ## Climate Measurements

# + [markdown] id="Nb1Q_yhPtXbZ"
# AquaCrop-OSPy requires weather data to be specified over the cropping period being simulated. This includes a daily time series of minimum and maximum temperatures [C], precipitation [mm], and reference crop evapotranspiration [mm].
#
# To import these data into the model, a .txt file containing relevant weather data must be created by the user in the following space delimited format.
#
# ![picture](https://drive.google.com/uc?export=view&id=1WG0gN4fCYgQs-EQXhXUvNrPktFJOZut9)
#
# If you are running this notebook locally you will need to specify the file path to the weather data file on your computer.
If you are running the notebook via Google Colab you can upload the file through the tab on the left so that it is available in the current directory.
#
# ![picture](https://drive.google.com/uc?export=view&id=1xTg4b1W-Nvi8kuK3ytVybTZpaWSflnNE)
#
# To load the weather data into the model, use the `prepare_weather` function, passing in the filepath of the .txt file. The `prepare_weather` function will create a pandas DataFrame in Python storing the imported weather data in the correct format for subsequent simulations.
#
#

# + colab={"base_uri": "https://localhost:8080/", "height": 448} id="wV2rbJzUqc_1" outputId="1ee657f3-6828-4396-c3f7-556a20afb2ad"
# specify filepath to weather file (either locally or imported to colab)
# filepath= 'YOUR_WEATHER_FILE.TXT'
# weather_data = prepare_weather(filepath)
# weather_data

# + [markdown] id="pr6GZHHSwAQb"
# AquaCrop-OSPy also contains a number of in-built example weather files. These can be accessed using the `get_filepath` function as shown below. Once run, use the `prepare_weather` function as above to convert the data to a pandas DataFrame ready for use. A full list of the built-in weather files can be found in Appendix A.

# + colab={"base_uri": "https://localhost:8080/", "height": 419} id="uNQGElSKwPE0" outputId="0ecad12e-7c76-43b5-b309-131fd758b19c"
# locate built in weather file
filepath=get_filepath('tunis_climate.txt')

weather_data = prepare_weather(filepath)
weather_data

# + [markdown] id="nfZ1LMjZmUQ_"
# ## Soil

# + [markdown] id="pnimVgNRtNyT"
#
#
# Selecting a soil type for an AquaCrop-OSPy simulation is done via the `SoilClass` object. This object contains all the compositional and hydraulic properties needed for the simulation. The simplest way to select a `SoilClass` is to use one of the built-in soil types taken from AquaCrop defaults. A visual representation of these defaults is shown below.
# #
# ![picture](https://drive.google.com/uc?export=view&id=11CDRTYgYrYxsQyEwih_0jDAozIu_l-ak)
#
# As an example, to select a 'sandy loam' soil, run the cell below. Appendix B details how a user can edit any of these built-in soil types or create their own custom soil profile.

# + id="l2itZBlsPXf6"
sandy_loam = SoilClass(soilType='SandyLoam')

# + [markdown] id="PedFdBWt2gZ4"
# ## Crop

# + [markdown] id="EDTpNTNyXEvw"
# The crop type used in the simulation is selected in a similar way to soil, via a `CropClass`. To select a `CropClass` you need to specify the crop type and planting date. Any of the built-in crop types (currently Maize, Wheat, Rice, Potato) can be selected by running the cell below. Appendix C details how a user can edit any of the built-in crop parameters or create custom crops.

# + id="ZqCvx9epJjwr"
wheat = CropClass('Wheat', PlantingDate='10/01')

# + [markdown] id="s19ZMzMc4k4y"
# ## Initial water content
#
# Specifying the initial soil-water content at the beginning of the simulation is done via the `InitWCClass`. When creating an `InitWCClass`, it needs a list of locations and soil water contents. The table below details all the input parameters for the `InitWCClass`.
#
# Variable Name | Type | Description | Default
# --- | --- | --- | ---
# wc_type| `str` | Type of value | 'Prop'
# || 'Prop' = 'WP' / 'FC' / 'SAT' |
# || 'Num' = XXX m3/m3 |
# || 'Pct' = % TAW |
# Method | `str` | 'Depth' = Interpolate depth points; 'Layer' = Constant value for each soil layer | 'Layer'
# depth_layer| `list` | locations in soil profile (soil layer or depth) | [1]
# value| `list` | value at that location | ['FC']
#
# In the cell below we initialize the water content to be Field Capacity (FC) across the whole soil profile.
# #

# + id="3PvQ5QYR49tG"
InitWC = InitWCClass(value=['FC'])

# + [markdown] id="LCVt1vn93lQY"
# ## Model

# + [markdown] id="6c_yzyC5DWWB"
# Additional model components you can specify include irrigation management, field management and groundwater conditions. How to specify these is detailed in Appendix D; however, they will all default to none if not specified.
#
# Once you have defined your weather data, `CropClass`, `SoilClass` and `InitWCClass`, then you're ready to run your simulation.
#
#
# To run a simulation we need to combine the components we have selected into an `AquaCropModel`. This is where we specify the simulation start and end dates (YYYY/MM/DD). Running the cell below will create an AquaCropModel simulation with all the parameters we have specified so far.

# + id="AOmiLw0eXDoZ"
# combine into aquacrop model and specify start and end simulation date
model = AquaCropModel(SimStartTime=f'{1979}/10/01',
                      SimEndTime=f'{1985}/05/30',
                      wdf=weather_data,
                      Soil=sandy_loam,
                      Crop=wheat,
                      InitWC=InitWC)

# + [markdown] id="3zSA_nKxwt4J"
# First, initialize the model using `.initialize()`. The model can then be run forwards N days using `.step(N)`. Most of the time you will instead want to run the model to the end of the simulation, which is done by running `.step(till_termination=True)`.
#

# + id="huZOrZOxXDlj"
# initialize model
model.initialize()

# run model till termination
model.step(till_termination=True)

# + [markdown] id="k33-Kb6l8jg4"
# Once the model has finished running, four different output files will be produced. The `Flux` output shows daily water flux variables such as total water storage. The `Water` output shows daily water storage in each compartment. The `Growth` output details daily crop variables such as canopy cover. The `Final` output lists the final Yield and total Irrigation for each season. These outputs can be accessed through the cell below. Use `.head(N)` to view the first `N` rows.
Full details on the output files can be found in Appendix E.

# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="CsD30D0_psHY" outputId="df2261eb-30c2-40d5-867d-da38c7f812ac"
# model.Outputs.Flux.head()
# model.Outputs.Water.head()
# model.Outputs.Growth.head()
model.Outputs.Final.head()

# + [markdown] id="m4l7TRHAsr3O"
# Congratulations, you have run your first AquaCrop-OSPy model. As a final example, let's create and run another model with a different soil type and compare the results.

# + id="sv230V0lFCtX"
# combine into aquacrop model and specify start and end simulation date
model_clay = AquaCropModel(SimStartTime=f'{1979}/10/01',
                           SimEndTime=f'{1985}/05/30',
                           wdf=weather_data,
                           Soil=SoilClass('Clay'),
                           Crop=wheat,
                           InitWC=InitWC)

model_clay.initialize()
model_clay.step(till_termination=True)

# + [markdown] id="abE93UUZ9Ubi"
# Let's use the pandas library to collate our seasonal yields so we can visualize our results.

# + id="VQO2RdLOEdyt"
import pandas as pd  # import pandas library

names = ['Sandy Loam', 'Clay']  # soil type labels

# combine our two output files
dflist = [model.Outputs.Final, model_clay.Outputs.Final]
outlist = []
for i in range(len(dflist)):  # go through our two output files
    temp = pd.DataFrame(dflist[i]['Yield (tonne/ha)'])  # extract the seasonal yield data
    temp['label'] = names[i]  # add the soil type label
    outlist.append(temp)  # save processed results

# combine results
all_outputs = pd.concat(outlist, axis=0)

# + [markdown] id="mPtsTDHuGADt"
# Now we can leverage some of Python's great plotting libraries, `seaborn` and `matplotlib`, to visualize and compare the yields from the two different soil types.
# + colab={"base_uri": "https://localhost:8080/", "height": 465} id="RnyOSRf0EdzF" outputId="55ffcdfd-2551-4017-a393-6eebc42a4a39" import matplotlib.pyplot as plt import seaborn as sns #create figure fig,ax=plt.subplots(1,1,figsize=(10,7),) # create box plot sns.boxplot(data=all_outputs,x='label',y='Yield (tonne/ha)',ax=ax,) # labels and font sizes ax.tick_params(labelsize=15) ax.set_xlabel(' ') ax.set_ylabel('Yield (tonne/ha)',fontsize=18) # + [markdown] id="H_3G-avcE7t2" # And this brings an end to Notebook 1. Feel free to edit any of the code cells, import your own weather data, try new soils and crops. # # In Notebook 2 we use the model to estimate irrigation water demands. # + [markdown] id="Q8Oeack8mfU4" # # Appendix A: Built-in weather files # + [markdown] id="BU5dV2SxmfVD" # AquaCrop-OSPy includes a number of in-built weather files for different locations around the world, which users can select for model simulations. These have been taken from AquaCrop defaults (with the exception of Champion Nebraska **link**). In-built weather files currently include: # # # 1. Tunis (date) | 'tunis_climate.txt' # 2. Brussels (date) | 'brussels_climate.txt' # 3. Hyderabad (date) | 'weather_hyd.txt' # 4. Champion, Nebraska (date) | 'champion_weather.txt' # # The filepath to these can be found using the `get_filepath` function and the data can be read in using the `prepare_weather` function. # + colab={"base_uri": "https://localhost:8080/", "height": 466} id="v5tX0PZ2mfVF" outputId="e1c0174f-d56f-40cc-9767-506b363ab070" # get location of built in weather data file path = get_filepath('hyderabad_climate.txt') # read in weather data file and put into correct format wdf = prepare_weather(path) # show weather data wdf.head() # + [markdown] id="lrw9n4Ejowl1" # # Appendix B: Custom Soils # + [markdown] id="5ZYAeykxSk6Y" # Custom soil classes can be created by passing 'custom' as the soil type when creating a `SoilClass`. 
In the cell below we create a custom soil with a curve number (CN=46) and readily evaporable water (REW=7). Here is a full list of the parameters you can specify: # # Variable Name | Description | Default # --- | --- | --- # soilType | Soil classification e.g. 'sandy_loam' | REQUIRED # dz | thickness of each soil compartment e.g. 12 compartments of thickness 0.1m | [0.1]*12 # CalcSHP | Calculate soil hydraulic properties (0 = No, 1 = Yes) | 0 # AdjREW | Adjust default value for readily evaporable water (0 = No, 1 = Yes) | 1 # REW | Readily evaporable water (mm) | 9.0 # CalcCN | Calculate curve number (0 = No, 1 = Yes) | 1 # CN | Curve Number | 61.0 # zRes | Depth of restrictive soil layer (negative value if not present) | -999 # | **The parameters below should not be changed without expert knowledge** | # EvapZsurf | Thickness of soil surface skin evaporation layer (m) | 0.04 # EvapZmin | Minimum thickness of full soil surface evaporation layer (m) | 0.15 # EvapZmax | Maximum thickness of full soil surface evaporation layer (m) | 0.30 # Kex | Maximum soil evaporation coefficient | 1.1 # fevap | Shape factor describing reduction in soil evaporation in stage 2. | 4 # fWrelExp | Proportional value of Wrel at which soil evaporation layer expands | 0.4 # fwcc | Maximum coefficient for soil evaporation reduction due to sheltering effect of withered canopy | 50 # zCN | Thickness of soil surface (m) used to calculate water content to adjust curve number | 0.3 # zGerm | Thickness of soil surface (m) used to calculate water content for germination | 0.3 # AdjCN | Adjust curve number for antecedent moisture content (0: No, 1: Yes) | 1 # fshape_cr | Capillary rise shape factor | 16 # zTop | Thickness of soil surface layer for water stress comparisons (m) | 0.1 # # # # # + id="Qhnpqi42wJX6" custom = SoilClass('custom',CN=46,REW=7) # + [markdown] id="-cgXsG7ixTNB" # Soil hydraulic properties are then specified using `.add_layer()`. 
This function needs the thickness of the soil layer [m] (just the depth of the soil profile if only using 1 layer), the water content at Wilting Point [m^3/m^3], Field Capacity [m^3/m^3] and Saturation [m^3/m^3], as well as the hydraulic conductivity [mm/day] and soil penetrability [%].

# + id="9P6v2sJ3xNmX"
custom.add_layer(thickness=custom.zSoil, thWP=0.24,
                 thFC=0.40, thS=0.50, Ksat=155,
                 penetrability=100)

# + [markdown] id="C03-phCOyYCL"
# Soil hydraulic properties can also be specified using the soil textural composition. This is done using the `.add_layer_from_texture()` function. This function needs the soil thickness [m], the sand, clay and organic matter content [%], and the penetrability [%].

# + id="OHgfs8hQyN4b"
custom = SoilClass('custom', CN=46, REW=7)

custom.add_layer_from_texture(thickness=custom.zSoil,
                              Sand=10, Clay=35,
                              OrgMat=2.5, penetrability=100)

# + [markdown] id="KZL8iIpKy_Qx"
# To view your custom soil profile, simply run the cell below.

# + colab={"base_uri": "https://localhost:8080/", "height": 421} id="ls_wkaILzFvH" outputId="2b136231-f184-4c13-fc64-a42318d6e374"
custom.profile

# + [markdown] id="V9ybhk35zvVc"
# Both of these layer creation methods can be combined to create multi-layered soils.

# + colab={"base_uri": "https://localhost:8080/", "height": 421} id="T7tZ8zW7z9R0" outputId="94159a05-5f3a-4232-9c7f-57ff8cfcb66f"
custom = SoilClass('custom', CN=46, REW=7)

custom.add_layer(thickness=0.3, thWP=0.24,
                 thFC=0.40, thS=0.50, Ksat=155,
                 penetrability=100)

custom.add_layer_from_texture(thickness=1.5,
                              Sand=10, Clay=35,
                              OrgMat=2.5, penetrability=100)

custom.profile

# + [markdown] id="SdJ4dr1_QKgh"
# In AquaCrop-OSPy, as in AquaCrop and AquaCrop-OS, the soil is split into compartments. By default these are 12 compartments of thickness 0.1m, where the bottom layers will expand so that the total depth exceeds the maximum crop root depth.
#
# These compartment thicknesses (`dz`) can also be altered by the user by changing the `dz` argument.
For example, let's say we want the top 6 compartments to be 0.1m each and the bottom 6 compartments to be 0.2m each:

# + id="Vke4fRhNRHz_"
sandy_loam = SoilClass('SandyLoam', dz=[0.1]*6+[0.2]*6)

# + [markdown] id="bND19_rm0n-6"
# Similarly, default soil types can be adjusted by passing in the changed variables.

# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="90UavX3G0o15" outputId="ad4ae02e-6b9a-4815-c11b-d085493e7e6a"
local_sandy_loam = SoilClass('SandyLoam', dz=[0.1]*6+[0.2]*6, CN=46, REW=7)

local_sandy_loam.profile.head()

# + [markdown] id="fnHkUFHhYspy"
# # Appendix C: Custom Crops

# + [markdown] id="axwcRvOyl2dQ"
# It is more likely that the user will want to modify one of the built-in crops (as opposed to modelling a brand new crop). To do this, simply pass in the altered parameters when you create the `CropClass`. Any parameters you specify here will override the crop defaults. To model a crop that does not have built-in defaults, you can specify the crop type 'custom' and pass in all the parameters listed in the table below.

# + id="hTTSX25V_6C-"
local_wheat = CropClass('Wheat', PlantingDate='11/01', HarvestDate='06/30',
                        CGC=0.0051, CDC=0.0035)

# + [markdown] id="_9U_WZGv_6DH"
# Below is a full list of the crop parameters that can be altered.
#
# Variable Name | Default | Description
# --- | --- | ---
# Name | | Crop name, e.g.
'maize' # CropType | | Crop Type (1 = Leafy vegetable, 2 = Root/tuber, 3 = Fruit/grain) # PlantMethod | | Planting method (0 = Transplanted, 1 = Sown) # CalendarType | | Calendar Type (1 = Calendar days, 2 = Growing degree days) # SwitchGDD | | Convert calendar to GDD mode if inputs are given in calendar days (0 = No; 1 = Yes) # PlantingDate | | Planting Date (mm/dd) # HarvestDate | | Latest Harvest Date (mm/dd) # Emergence | | Growing degree/Calendar days from sowing to emergence/transplant recovery # MaxRooting | | Growing degree/Calendar days from sowing to maximum rooting # Senescence | | Growing degree/Calendar days from sowing to senescence # Maturity | | Growing degree/Calendar days from sowing to maturity # HIstart | | Growing degree/Calendar days from sowing to start of yield formation # Flowering | | Duration of flowering in growing degree/calendar days (-999 for non-fruit/grain crops) # YldForm | | Duration of yield formation in growing degree/calendar days # GDDmethod | | Growing degree day calculation method # Tbase | | Base temperature (degC) below which growth does not progress # Tupp | | Upper temperature (degC) above which crop development no longer increases # PolHeatStress | | Pollination affected by heat stress (0 = No, 1 = Yes) # Tmax_up | | Maximum air temperature (degC) above which pollination begins to fail # Tmax_lo | | Maximum air temperature (degC) at which pollination completely fails # PolColdStress | | Pollination affected by cold stress (0 = No, 1 = Yes) # Tmin_up | | Minimum air temperature (degC) below which pollination begins to fail # Tmin_lo | | Minimum air temperature (degC) at which pollination completely fails # TrColdStress | | Transpiration affected by cold temperature stress (0 = No, 1 = Yes) # GDD_up | | Minimum growing degree days (degC/day) required for full crop transpiration potential # GDD_lo | | Growing degree days (degC/day) at which no crop transpiration occurs # Zmin | | Minimum effective rooting depth (m) # Zmax 
| | Maximum rooting depth (m)
# fshape_r | | Shape factor describing root expansion
# SxTopQ | | Maximum root water extraction at top of the root zone (m3/m3/day)
# SxBotQ | | Maximum root water extraction at the bottom of the root zone (m3/m3/day)
# SeedSize | | Soil surface area (cm2) covered by an individual seedling at 90% emergence
# PlantPop | | Number of plants per hectare
# CCx | | Maximum canopy cover (fraction of soil cover)
# CDC | | Canopy decline coefficient (fraction per GDD/calendar day)
# CGC | | Canopy growth coefficient (fraction per GDD)
# Kcb | | Crop coefficient when canopy growth is complete but prior to senescence
# fage | | Decline of crop coefficient due to ageing (%/day)
# WP | | Water productivity normalized for ET0 and CO2 (g/m2)
# WPy | | Adjustment of water productivity in yield formation stage (% of WP)
# fsink | | Crop performance under elevated atmospheric CO2 concentration (%/100)
# HI0 | | Reference harvest index
# dHI_pre | | Possible increase of harvest index due to water stress before flowering (%)
# a_HI | | Coefficient describing positive impact on harvest index of restricted vegetative growth during yield formation
# b_HI | | Coefficient describing negative impact on harvest index of stomatal closure during yield formation
# dHI0 | | Maximum allowable increase of harvest index above reference value
# Determinant | | Crop Determinacy (0 = Indeterminant, 1 = Determinant)
# exc | | Excess of potential fruits
# p_up1 | | Upper soil water depletion threshold for water stress effects on canopy expansion
# p_up2 | | Upper soil water depletion threshold for water stress effects on canopy stomatal control
# p_up3 | | Upper soil water depletion threshold for water stress effects on canopy senescence
# p_up4 | | Upper soil water depletion threshold for water stress effects on canopy pollination
# p_lo1 | | Lower soil water depletion threshold for water stress effects on canopy expansion
# p_lo2 | | Lower soil water
depletion threshold for water stress effects on canopy stomatal control
# p_lo3 | | Lower soil water depletion threshold for water stress effects on canopy senescence
# p_lo4 | | Lower soil water depletion threshold for water stress effects on canopy pollination
# fshape_w1 | | Shape factor describing water stress effects on canopy expansion
# fshape_w2 | | Shape factor describing water stress effects on stomatal control
# fshape_w3 | | Shape factor describing water stress effects on canopy senescence
# fshape_w4 | | Shape factor describing water stress effects on pollination
# | | **The parameters below should not be changed without expert knowledge**
# fshape_b | 13.8135 | Shape factor describing the reduction in biomass production for insufficient growing degree days
# PctZmin | 70 | Initial percentage of minimum effective rooting depth
# fshape_ex | -6 | Shape factor describing the effects of water stress on root expansion
# ETadj | 1 | Adjustment to water stress thresholds depending on daily ET0 (0 = No, 1 = Yes)
# Aer | 5 | Vol (%) below saturation at which stress begins to occur due to deficient aeration
# LagAer | 3 | Number of days lag before aeration stress affects crop growth
# beta | 12 | Reduction (%) to p_lo3 when early canopy senescence is triggered
# a_Tr | 1 | Exponent parameter for adjustment of Kcx once senescence is triggered
# GermThr | 0.2 | Proportion of total water storage needed for crop to germinate
# CCmin | 0.05 | Minimum canopy size below which yield formation cannot occur
# MaxFlowPct | 33.3 | Proportion of total flowering time (%) at which peak flowering occurs
# HIini | 0.01 | Initial harvest index
# bsted | 0.000138 | WP CO2 adjustment parameter given by Steduto et al.
2007
# bface | 0.001165 | WP CO2 adjustment parameter given by FACE experiments
#
#

# + [markdown] id="xPslnGHG4je_"
# # Appendix D: Management and initial conditions

# + [markdown] id="n-6lk5wMoj1K"
# Field management and groundwater conditions can also be specified in an AquaCrop-OSPy model, with a `FieldMngtClass` and a `GwClass` respectively. If these are not specified then they will default to None.

# + [markdown] id="7eYlR59M4-wW"
# ### Field management

# + [markdown] id="zzbJF4Ci5jW5"
# Field management is specified via the `FieldMngtClass`. These options are largely based around the inclusion of either mulches or bunds. Two `FieldMngtClass` objects can be created, for the fallow and growing periods. The parameters you can specify in a `FieldMngtClass` are:
#
# Variable Name | Type | Description | Default
# --- | --- | --- | ---
# Mulches| `bool` | Soil surface covered by mulches (True or False) | False
# MulchPct| `float` | Area of soil surface covered by mulches (%) | 50
# fMulch| `float` | Soil evaporation adjustment factor due to effect of mulches | 0.5
# Bunds| `bool` | Surface bunds present (True or False) | False
# zBund| `float` | Bund height (m) | 0
# BundWater| `float` | Initial water height in surface bunds (mm) | 0
# CNadj| `bool` | Field conditions affect curve number (True or False) | False
# CNadjPct| `float` | Change in curve number (positive or negative) (%) | 0
# SRinhb| `bool` | Management practices fully inhibit surface runoff (True or False) | False

# + [markdown] id="0yfUjkOX5jQ_"
# In the cell below we create an `AquaCropModel`, passing in a `FieldMngtClass` with 100% of the field covered by mulches.
#
#

# + id="SeGBo3im84eQ"
mulches_model = AquaCropModel(SimStartTime=f'{1979}/10/01',
                              SimEndTime=f'{1985}/05/30',
                              wdf=weather_data,
                              Soil=sandy_loam,
                              Crop=wheat,
                              InitWC=InitWC,
                              FieldMngt=FieldMngtClass(Mulches=True,
                                                       MulchPct=100,
                                                       fMulch=0.5))

# + id="aTJkxbj69rGJ"
mulches_model.initialize()

mulches_model.step(till_termination=True)

# + colab={"base_uri": "https://localhost:8080/", "height": 235} id="KSroqlcC90mg" outputId="e4430da2-6643-4c3a-efb1-5a00ef872a8b"
# get final output
mulches_model.Outputs.Final

# + [markdown] id="va4GXkZe5jKa"
# ### Groundwater

# + [markdown] id="vuKR24R6EGYB"
# Finally, we can take a look at how to specify the groundwater depth. This is done via the `GwClass`, which takes in the following parameters:
#
# Variable Name | Type | Description | Default
# --- | --- | --- | ---
# WaterTable| `str` | Water table considered 'Y' or 'N' | 'N'
# Method | `str` | Water table input data = 'Constant' / 'Variable' | 'Constant'
# dates| `list[str]` | Water table observation dates 'YYYYMMDD' | []
# values| `list[float]` | Water table depth on each observation date | []
#
# The `GwClass` needs a list of dates and water table depths. If `Method='Variable'`, the water table depth will be linearly interpolated between these dates. The cell below creates a model with a constant groundwater depth of 2m.

# + id="1IRW8n8GCkyA"
# constant groundwater depth of 2m
gw_model = AquaCropModel(SimStartTime=f'{1979}/10/01',
                         SimEndTime=f'{1985}/05/30',
                         wdf=weather_data,
                         Soil=sandy_loam,
                         Crop=wheat,
                         InitWC=InitWC,
                         Groundwater=GwClass(WaterTable='Y',
                                             dates=[f'{1979}/10/01'],
                                             values=[2])
                         )

# + id="Uh8WnBtcCkvg"
gw_model.initialize()
gw_model.step(till_termination=True)

# + colab={"base_uri": "https://localhost:8080/", "height": 235} id="r-OL3h5o18i1" outputId="2f4a3837-61e9-4e37-f8e9-5154af20ab09"
gw_model.Outputs.Final

# + [markdown] id="ER2wtEHBpk66"
# # Appendix E: Output files

# + [markdown] id="wCbrcp0Fw9dj"
# There are 4 different outputs produced by the model:
#
# 1.
Daily Water Flux # # Variable Name | Unit # --- | --- # water content | mm # groundwater depth | mm # surface storage | mm # irrigation | mm # infiltration | mm # runoff | mm # deep percolation | mm # capillary rise | mm # groundwater inflow | mm # actual surface evaporation | mm # potential surface evaporation | mm # actual transpiration | mm # precipitation | mm # # # 2. Soil-water content in each soil compartment # # Variable Name | Unit # --- | --- # compartment water content | mm/mm # # 3. Crop growth # # Variable Name | Unit # --- | --- # growing degree days | - # cumulative growing degree days | - # root depth | m # canopy cover | - # canopy cover (no stress) | - # biomass | kg/ha # biomass (no stress) | kg/ha # harvest index | - # adjusted harvest index | - # yield | t/ha # # 4. Final summary (seasonal total) # # Variable Name | Unit # --- | --- # yield | t/ha # total irrigation | mm # # # + [markdown] id="V7hvxkEgxbFI" # Use the `.head(N)` command to view the first N entries of the output files below # + colab={"base_uri": "https://localhost:8080/", "height": 204} id="5s4izYgoxQqi" outputId="aeb12ac9-9f93-4bab-82cc-5acd26b209a6" model.Outputs.Flux.head() # model.Outputs.Water.head() # model.Outputs.Growth.head(10) # model.Outputs.Final.head() # + id="I9Xt1kRUfTAr" # -
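# The label/concat/groupby pattern used earlier to compare the two soil types generalizes to any summary of the seasonal `Final` output. Below is a minimal, self-contained sketch; the DataFrames are small hypothetical stand-ins for `model.Outputs.Final` (one row per simulated season), and only pandas is assumed.

```python
import pandas as pd

# Hypothetical stand-ins for model.Outputs.Final from two runs
sandy = pd.DataFrame({'Yield (tonne/ha)': [8.1, 7.4, 7.9]})
clay = pd.DataFrame({'Yield (tonne/ha)': [6.2, 5.8, 6.5]})

# Label each run and combine, as done in the notebook
sandy['label'] = 'Sandy Loam'
clay['label'] = 'Clay'
combined = pd.concat([sandy, clay], axis=0)

# Mean seasonal yield per soil type
summary = combined.groupby('label')['Yield (tonne/ha)'].mean()
print(summary)
```

# With real model outputs, the hypothetical DataFrames would be replaced by `model.Outputs.Final` and `model_clay.Outputs.Final`.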
docs/notebooks/AquaCrop_OSPy_Notebook_1.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Introduction to the tutorials
#
# ## The iPython/Jupyter notebook
#
# The document you are seeing may be a PDF or a web page or another format, but what is important is how it has been made ...
#
# According to this article in [Nature](http://www.nature.com/news/interactive-notebooks-sharing-the-code-1.16261), the notebook, invented by IPython and now part of the Jupyter project, is the revolution for data analysis which will allow reproducible science.
#
# This tutorial will introduce you to how to access the notebook, how to use it and how to perform some basic data analysis with it and the pyFAI library.
#
# ## Getting access to the notebook
#
# There are many cases ...
#
# ### Inside ESRF
# This is the simplest case, as the data analysis unit offers you a notebook server: just connect your web browser to http://scisoft13:8000 and authenticate with your ESRF credentials.
#
# ### Outside ESRF with an ESRF account
# The JupyterHub server is not directly available on the internet, but you can [login into the firewall](http://www.esrf.eu/Infrastructure/Computing/Networks/InternetAndTheFirewall/UsersManual/SSH) to forward the web server:
#
#     ssh -XC -p5022 -L8000:scisoft13:8000 <EMAIL>
#
# Once logged in at ESRF, keep the terminal open and browse on your local computer to http://localhost:8000 to authenticate with your ESRF credentials. Do not worry about confidentiality, as the connection from your computer to the ESRF firewall is encrypted by the SSH connection.
#
# ### Other cases
# In the most general case you will need to install the notebook on your local computer, in addition to silx, pyFAI and FabIO, to follow the tutorial. WinPython provides it under Windows.
Please refer to the installation procedure of pyFAI to install pyFAI locally on your computer. Anaconda also provides pyFAI and FabIO as packages.
#
# ## Getting trained in using the notebook
#
# There are plenty of good tutorials on how to use the notebook.
# [This one](https://github.com/jupyter/mozfest15-training/blob/master/00-python-intro.ipynb) presents a quick overview of the Python programming language and explains how to use the notebook. Reading it is strongly encouraged before proceeding to pyFAI itself.
#
# Anyway, the most important information is to use **Control-Enter** to evaluate a cell.
#
# In addition to this, we will need to download some files from the internet.
# The following cell contains a piece of code to download files using the silx library. You may have to adjust the proxy settings to be able to connect to the internet, especially at ESRF; see the commented-out code.

# ## Introduction to diffraction image analysis using the notebook
#
# All the tutorials in pyFAI are based on the notebook and if you wish to practice the exercises, you can download the notebook files (.ipynb) from [Github](https://github.com/silx-kit/pyFAI/tree/master/doc/source/usage/tutorial)
#
# ### Load and display diffraction images
#
# First of all we will download an image and display it. Displaying it the right way is important, as the orientation of the image imposes the azimuthal angle sign.

import os, time
#Nota: comment out when outside ESRF
os.environ["http_proxy"] = "http://proxy.esrf.fr:3128"
start_time = time.time()
import pyFAI
print("Using pyFAI version", pyFAI.version)

from silx.resources import ExternalResources
downloader = ExternalResources("pyFAI", "http://www.silx.org/pub/pyFAI/testimages/", "DATA")
moke = downloader.getfile("moke.tif")
print(moke)

# The *moke.tif* image we just downloaded is not a real diffraction image, but a test pattern used in the tests of pyFAI.
#
# Prior to displaying it, we will use the Fable Input/Output library (FabIO) to read the content of the file:

#initializes the visualization module to work with the jupyter notebook
# %pylab nbagg

import fabio
from pyFAI.gui import jupyter
img = fabio.open(moke).data
jupyter.display(img, label="Fake diffraction image")

# As you can see, the image looks like an archery target. The origin is at the **lower left** of the image.
# If you are familiar with matplotlib, this corresponds to the option *origin="lower"* of *imshow*.
#
# Displaying the image using imshow without this option results in the azimuthal angle (whose values are displayed in degrees on the image) turning clockwise, i.e. the inverse of the trigonometric order:

fig, ax = subplots(1,2, figsize=(10,5))
ax[0].imshow(img, origin="lower")
ax[0].set_title("Proper orientation")
ax[1].imshow(img)
ax[1].set_title("Wrong orientation")

# **Nota:** Displaying the image properly or not does not change the content of the image nor its representation in memory, it only changes how it looks, which is important only for the user. **DO NOT USE** *numpy.flipud* or other array manipulations which change the memory representation of the image. This is likely to mess up all your subsequent calculations.
#
# ### 1D azimuthal integration
#
# To perform an azimuthal integration of this image, we need to create an **AzimuthalIntegrator** object we will call *ai*.
# Fortunately, the geometry is explained on the image.
import pyFAI, pyFAI.detectors

detector = pyFAI.detectors.Detector(pixel1=1e-4, pixel2=1e-4)
ai = pyFAI.AzimuthalIntegrator(dist=0.1, detector=detector)

# Short version
ai = pyFAI.AzimuthalIntegrator(dist=0.1, pixel1=1e-4, pixel2=1e-4)
print(ai)

# Printing the *ai* object displays 3 lines:
#
# * The detector definition, here a simple detector with square, regular pixels of the right size
# * The detector position in space using the *pyFAI* coordinate system: dist, poni1, poni2, rot1, rot2, rot3
# * The detector position in space using the *FIT2D* coordinate system: direct_sample_detector_distance, center_X, center_Y, tilt and tilt_plan_rotation
#
# Right now, the geometry in the *ai* object is wrong. It may be easier to define it correctly using the *FIT2D* geometry, which uses pixels for the center coordinates (but the sample-detector distance is in millimeters).

help(ai.setFit2D)

ai.setFit2D(100, 300, 300)
print(ai)

# With the *ai* object properly set up, we can perform the azimuthal integration using the *integrate1d* method.
# This method takes only 2 mandatory parameters: the image to integrate and the number of bins. We will provide a few others to enforce the calculations being performed in 2theta-space and in degrees:

# +
res = ai.integrate1d(img, 300, unit="2th_deg")

#Display the integration result
fig, ax = subplots(1,2, figsize=(10,5))
jupyter.plot1d(res, label="moke", ax=ax[0])

#Example using pure matplotlib
tth = res[0]
I = res[1]
ax[1].plot(tth, I, label="moke")
ax[1].set_title("Display 1d powder diffraction data using pure matplotlib")
# -

# As you can see, the 9 rings gave 9 sharp peaks at 2theta positions regularly ranging from 4 to 12 degrees, as expected from the image annotation.
#
# **Nota:** the default radial unit is "q_nm^-1", i.e. the scattering vector length expressed in inverse nanometers. To be able to calculate *q*, one needs to specify the wavelength used (here we didn't).
For example: ai.wavelength = 1e-10
#
# To save the content of the integrated pattern into a 2-column ASCII file, one can either save the (tth, I) arrays, or directly ask pyFAI to do it by providing an output filename:

# +
ai.integrate1d(img, 30, unit="2th_deg", filename="moke.dat")

# now display the content of the file
with open("moke.dat") as fd:
    for line in fd:
        print(line.strip())
# -

# This "moke.dat" file contains, in addition to the 2th/I values, a header commented with "#" giving the geometry used to perform the calculation.
#
# **Nota:** The *ai* object has initialized the geometry on the first call and re-uses it on subsequent calls. This is why it is important to re-use the geometry in performance-critical applications.
#
# ### 2D integration or Caking
#
# One can perform the 2D integration, which is called caking in FIT2D, by simply calling the *integrate2d* method with 3 mandatory parameters: the data to integrate, the number of radial bins and the number of azimuthal bins.

# +
res2d = ai.integrate2d(img, 300, 360, unit="2th_deg")

#Display the integration result
fig, ax = subplots(1,2, figsize=(10,5))
jupyter.plot2d(res2d, label="moke", ax=ax[0])

#Example using pure matplotlib
I, tth, chi = res2d
ax[1].imshow(I, origin="lower", extent=[tth.min(), tth.max(), chi.min(), chi.max()], aspect="auto")
ax[1].set_xlabel("2 theta (deg)")
ax[1].set_ylabel("Azimuthal angle chi (deg)")
# -

# The displayed image is the "caked" image, with the radial and azimuthal angles properly set on the axes. Search for the -180, -90, 360/0 and 180 marks on the transformed image.
#
# Like *integrate1d*, *integrate2d* offers the ability to save the integrated image into an image file (EDF format by default), again with all metadata in the headers.
#
# ### Radial integration
#
# Radial integration can be obtained directly from caked images:

# +
target = 8 #degrees

#work on fewer radial bins in order to have an actual averaging:
I, tth, chi = ai.integrate2d(img, 100, 90, unit="2th_deg")
column = argmin(abs(tth-target))
print("Column number %s"%column)

fig, ax = subplots()
ax.plot(chi, I[:,column], label=r"$2\theta=%.1f^{o}$"%target)
ax.set_xlabel("Azimuthal angle")
ax.set_ylabel("Intensity")
ax.set_title("Radial integration")
ax.legend()
# -

# **Nota:** the pattern with higher noise along the diagonals is typical of the pixel splitting scheme employed. Here this scheme is a "bounding box", which makes diagonal pixels look a bit larger (+40%) than the ones on the horizontal and vertical axes, explaining the variation of the noise.
#
# ### Integration of a bunch of files using pyFAI
#
# Once the processing for one file is established, one can loop over a bunch of files.
# A convenient way to get the list of files matching a pattern is with the *glob* module.
#
# Most of the time, the azimuthal integrator is obtained by simply loading the *poni-file* into pyFAI and using it directly.

all_files = downloader.getdir("alumina.tar.bz2")
all_edf = [i for i in all_files if i.endswith("edf")]
all_edf.sort()
print("Number of EDF downloaded: %s"%len(all_edf))

# +
ponifile = [i for i in all_files if i.endswith(".poni")][0]
splinefile = [i for i in all_files if i.endswith(".spline")][0]
print(ponifile, splinefile)

#patch the poni-file with the proper path.
with open(ponifile, "a") as f:
    f.write("SplineFile: %s\n"%splinefile)

ai = pyFAI.load(ponifile)
print(ai)
# -

# +
fig, ax = subplots()
for one_file in all_edf:
    destination = os.path.splitext(one_file)[0]+".dat"
    image = fabio.open(one_file).data
    t0 = time.time()
    res = ai.integrate1d(image, 1000, filename=destination)
    print("%s: %.3fs"%(destination, time.time()-t0))
    jupyter.plot1d(res, ax=ax)
# -

# This was a simple integration of 50 files, saving the result into 2-column ASCII files.
#
# **Nota:** the first frame took 40x longer than the others. This highlights how crucial it is to re-use *azimuthal integrator* objects when performance matters.
#
# ## Conclusion
#
# Using the notebook is rather simple, as it allows mixing comments, code and images for the visualization of scientific data.
#
# The basic use of pyFAI's AzimuthalIntegrator has also been presented and may be adapted to your specific needs.

print("Total execution time: %.3fs"%(time.time()-start_time))
doc/source/usage/tutorial/Introduction/introduction.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Numpy.random
#
# https://numpy.org/doc/1.19/reference/random/index.html
#
# https://numpy.org/doc/1.19/reference/random/generator.html#numpy.random.Generator

import numpy as np

# ## generating integers
#
# https://numpy.org/doc/1.19/reference/random/generated/numpy.random.Generator.integers.html#numpy.random.Generator.integers
#
# ---------

rng = np.random.default_rng()

rng.integers(15, size=20)

rng.integers(50, size=100)

rng.integers(100, size=(4, 5))

x = rng.integers(100, size=10000)
x

# %matplotlib inline
import matplotlib.pyplot as plt
plt.hist(x)
plt.show()

# The distribution of random numbers seems roughly the same across the histogram.
#
# Uniform distribution! https://revisionmaths.com/advanced-level-maths-revision/statistics/uniform-distribution
#
# Data in the real world is rarely uniformly distributed. It is often normally distributed, e.g. people's heights, exam results, etc.

s = np.random.default_rng().uniform(0, 1000, 10000)
s

np.all(s >= -1)

np.all(s < 0)

count, bins, ignored = plt.hist(s, 15, density=True)
plt.plot(bins, np.ones_like(bins), linewidth=2, color='r')
plt.show()

# https://numpy.org/doc/1.19/reference/random/generated/numpy.random.Generator.uniform.html#numpy.random.Generator.uniform

plt.hist(s)

mu, sigma = 10, 100 # mean and standard deviation
s = np.random.default_rng().normal(mu, sigma, 100000)

count, bins, ignored = plt.hist(s, 30, density=True)
plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) *
               np.exp( - (bins - mu)**2 / (2 * sigma**2) ),
         linewidth=2, color='r')

# To add for submission:
#
# How does this generator (PCG64) outperform the Mersenne Twister? https://numpy.org/doc/1.19/reference/random/bit_generators/pcg64.html#numpy.random.PCG64
numpy.random.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/tejasv1846/automl-starter/blob/main/vithackothon2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + colab={"base_uri": "https://localhost:8080/"} id="OfXmYhO8Jo1G" outputId="a883c017-cb97-4f66-94c5-6f4fcb09d280" # !git clone https://github.com/kandikits/automl-starter.git # + colab={"base_uri": "https://localhost:8080/", "height": 835} id="5LZSmZ7eJuD0" outputId="4f8352f7-8b14-48b6-e40c-eca287d36785" import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns # %matplotlib inline df = pd.read_csv('https://raw.githubusercontent.com/jojo62000/Smarter_Decisions/master/Chapter%206/Data/Final_SolarData.csv') df # + colab={"base_uri": "https://localhost:8080/", "height": 378} id="2n7SeNt-iLHG" outputId="1e7bfe60-cef6-4f5a-cd73-114b49c56a58" df.head(4) # + [markdown] id="75lU7DyG5brn" # comparison between inverter_output_power and inverter_input_power # + colab={"base_uri": "https://localhost:8080/", "height": 280} id="ORMO3MBsa9X3" outputId="d542e089-dced-421a-9062-2d6db63c6343" sns.pointplot(y="inverter_output_power", x="inverter_input_power", data=df,color='green'); # + colab={"base_uri": "https://localhost:8080/", "height": 265} id="UOKr6s0Kh6N2" outputId="f030282b-27a3-4f52-d8a3-0af4de8bca0e" from matplotlib import pyplot pyplot.scatter(df['batterycurrent'],df['batteryvoltage']) from sklearn.linear_model import LinearRegression model = LinearRegression() model.fit(df[['batterycurrent']],df['batteryvoltage']) predicted_batteryvoltage = model.predict(df[['batterycurrent']]) pyplot.plot(df['batterycurrent'],predicted_batteryvoltage,color='g') pyplot.show() # +
id="vzX5Wk1imZe5" colab={"base_uri": "https://localhost:8080/", "height": 281} outputId="124ef776-b5e6-4373-ee82-7aaeed0769fd" s=sns.pointplot(y="load_power2", x="load_power1", data=df); # + colab={"base_uri": "https://localhost:8080/", "height": 280} id="7jG-pasAuSXZ" outputId="2d97cd42-fff8-4530-ee22-2537842b77eb" sns.pointplot(y="load_energy2", x="load_energy1", data=df); # + id="F6XxO-l_mbZd" colab={"base_uri": "https://localhost:8080/", "height": 280} outputId="cbae457b-bf83-400a-a5c8-7e567807a60d" sns.pointplot(y="load_current2", x="load_current1", data=df); # + colab={"base_uri": "https://localhost:8080/", "height": 628} id="gmLERaGnwJs7" outputId="a9b1c084-f9b0-461a-fab1-210816c61486" from matplotlib import pyplot pyplot.scatter(df['temperature'],df['batteryvoltage']) from sklearn.linear_model import LinearRegression model = LinearRegression() model.fit(df[['temperature']],df['batteryvoltage']) predicted_batteryvoltage = model.predict(df[['temperature']]) pyplot.plot(df['temperature'],predicted_batteryvoltage,color='r') pyplot.show() # + id="pOPNisZcyMcv" x=df.drop('solarpower',axis=1) # + id="i_3HgFdTybQT" y=df['solarpower'] # + id="s5t6LI3Vx7Ph" from sklearn.model_selection import train_test_split # + id="DavCONSwxLuu" x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=0.33,random_state=101) # + id="ScuK99vIyuQe" from sklearn.linear_model import LinearRegression lm = LinearRegression() # + colab={"base_uri": "https://localhost:8080/", "height": 311} id="rWRIgZw8yyJm" outputId="51484e7d-24cf-4742-bc21-558186095581" lm.fit(x_train,y_train) # + id="ux5wCiTn3Dwl" df['location'] = df['location'].str.split().str.join(" ") # + colab={"base_uri": "https://localhost:8080/", "height": 345} id="1X9zgvLDzD0M" outputId="824425ce-0f4b-4fab-8393-93b6f05d8362" import pandas from sklearn.neighbors import KNeighborsRegressor # solarpower is a continuous target, so KNN regression (not classification) is appropriate knn = KNeighborsRegressor(n_neighbors=10) knn.fit(x_train,y_train) # +
id="E6Xb4-iO0za_"
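The models above are fitted but never scored on the held-out split. A minimal numpy-only sketch of the evaluation step that could follow `lm.fit` — synthetic data stands in for the solar dataset, and the R² formula shown is the same score sklearn's `LinearRegression.score()` reports:

```python
import numpy as np

# Synthetic stand-in for the solar data: a known linear relationship plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=(200, 1))
y = 3.0 * x[:, 0] + 2.0 + rng.normal(0, 0.5, size=200)

# Ordinary least squares via lstsq -- the same fit LinearRegression performs.
X = np.hstack([x, np.ones((200, 1))])          # add an intercept column
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
y_pred = X @ coef

# R^2 = 1 - SS_res / SS_tot, i.e. what lm.score(x_test, y_test) would return.
ss_res = np.sum((y - y_pred) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"slope={coef[0]:.2f} intercept={coef[1]:.2f} R^2={r2:.3f}")
```

On the real dataframe the same idea applies: predict on `x_test`, compare against `y_test`, and report R².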
vithackothon2.ipynb
# -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python # language: python3 # name: python3 # --- # # Welcome to sphinxcontrib-jupyter.minimal’s documentation! # # # Contents: # - [Code blocks](code_blocks.ipynb) # - [No Execute](code_blocks.ipynb#no-execute) # - [Other Examples from rst2ipynb](code_blocks.ipynb#other-examples-from-rst2ipynb) # - [Output Test Cases](code_blocks.ipynb#output-test-cases) # - [Nested Code Blocks](code_blocks.ipynb#nested-code-blocks) # - [Code Block with None](code_blocks.ipynb#code-block-with-none) # - [Code Synonyms](code_synonyms.ipynb) # - [Collapse Cell](collapse.ipynb) # - [dependency](dependency.ipynb) # - [Download](download.ipynb) # - [Equation Numbering](equation_labels.ipynb) # - [Rubric](footnotes.ipynb) # - [Ignore](ignore.ipynb) # - [Comments](ignore.ipynb#comments) # - [Images](images.ipynb) # - [Image](images.ipynb#image) # - [Figure](images.ipynb#figure) # - [Inline](inline.ipynb) # - [Code](inline.ipynb#code) # - [Math](inline.ipynb#math) # - [Jupyter Directive](jupyter.ipynb) # - [Links](links.ipynb) # - [Special Cases](links.ipynb#special-cases) # - [Links Target](links_target.ipynb) # - [List](lists.ipynb) # - [Basic Lists](lists.ipynb#basic-lists) # - [Bullets](lists.ipynb#bullets) # - [Bullets](lists.ipynb#id1) # - [List](lists.ipynb#id2) # - [Numbered](lists.ipynb#numbered) # - [Nested Lists](lists.ipynb#nested-lists) # - [Bullets](lists.ipynb#id3) # - [Bullets](lists.ipynb#id4) # - [Mixed Lists](lists.ipynb#mixed-lists) # - [Malformed Lists that seem to work in HTML](lists.ipynb#malformed-lists-that-seem-to-work-in-html) # - [Complex Lists with Display Math](lists.ipynb#complex-lists-with-display-math) # - [Complex Lists with Code Blocks](lists.ipynb#complex-lists-with-code-blocks) # - [Literal Includes](literal_include.ipynb) # - [Math](math.ipynb) # - [Further 
Inline](math.ipynb#further-inline) # - [Referenced Math](math.ipynb#referenced-math) # - [Notes](notes.ipynb) # - [Only](only.ipynb) # - [Quote](quote.ipynb) # - [Epigraph](quote.ipynb#epigraph) # - [Simple Notebook Example](simple_notebook.ipynb) # - [Math](simple_notebook.ipynb#math) # - [Tables](simple_notebook.ipynb#tables) # - [Special Characters](simple_notebook.ipynb#special-characters) # - [Slide option activated](slides.ipynb) # - [Math](slides.ipynb#math) # - [Slide: Should have 2 code blocks as fragments (test)](slides.ipynb#slide-should-have-2-code-blocks-as-fragments-test) # - [Notebook without solutions](solutions.ipynb) # - [Question 1](solutions.ipynb#question-1) # - [Table](tables.ipynb) # - [Grid Tables](tables.ipynb#grid-tables) # - [Simple Tables](tables.ipynb#simple-tables) # - [Directive Table Types](tables.ipynb#directive-table-types) # - [Complex Tables](tables.ipynb#complex-tables) # - [Notebook without Tests](tests.ipynb) # - [Question 1](tests.ipynb#question-1) # # # Documents that require `Sphinx>=2.0` # # - [Exercises](exercises.ipynb) # - [Collected Exercises](exercises.ipynb#collected-exercises) # - [Exercise List (Section only)](exercise_list_section.ipynb) # - [Exercise List (all)](exercise_list_all.ipynb) # - [Exercise List from Labels](exercise_list_labels.ipynb) # - [The exercise list](exercise_list_labels.ipynb#the-exercise-list) # - [One more exercise](exercise_list_labels.ipynb#one-more-exercise) # - [Section 2](section2/index.ipynb) # - [Exercise List from section2 (all)](section2/exercise_list_sec2_all.ipynb) # - [Exercise List from section2 (section2 only)](section2/exercise_list_sec2.ipynb) # - [Exercises in section 2](section2/exercises_section2.ipynb) # # Indices and tables # # - [Index](genindex.ipynb) # - [Module Index](py-modindex.ipynb) # - [Search Page](search.ipynb)
tests/base/ipynb/index.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Part 2: Intro to Federated Learning # # In the last section, we learned about PointerTensors, which create the underlying infrastructure we need for privacy preserving Deep Learning. In this section, we're going to see how to use these basic tools to implement our first privacy preserving deep learning algorithm, Federated Learning. # # Authors: # - <NAME> - Twitter: [@iamtrask](https://twitter.com/iamtrask) # # ### What is Federated Learning? # # It's a simple, powerful way to train Deep Learning models. If you think about training data, it's always the result of some sort of collection process. People (via devices) generate data by recording events in the real world. Normally, this data is aggregated into a single, central location so that you can train a machine learning model. Federated Learning turns this on its head! # # Instead of bringing training data to the model (a central server), you bring the model to the training data (wherever it may live). # # The idea is that this allows whoever is creating the data to own the only permanent copy, and thus maintain control over who has access to it. Pretty cool, eh? # # Section 2.1 - A Toy Federated Learning Example # # Let's start by training a toy model the centralized way. This is about as simple as models get. We first need: # # - a toy dataset # - a model # - some basic training logic for training a model to fit the data. # # Note: If this API is unfamiliar to you - head on over to [fast.ai](http://fast.ai) and take their course before continuing in this tutorial.
import torch from torch import nn from torch import optim # + # A Toy Dataset data = torch.tensor([[0,0],[0,1],[1,0],[1,1.]], requires_grad=True) target = torch.tensor([[0],[0],[1],[1.]], requires_grad=True) # A Toy Model model = nn.Linear(2,1) def train(): # Training Logic opt = optim.SGD(params=model.parameters(),lr=0.1) for iter in range(20): # 1) erase previous gradients (if they exist) opt.zero_grad() # 2) make a prediction pred = model(data) # 3) calculate how much we missed loss = ((pred - target)**2).sum() # 4) figure out which weights caused us to miss loss.backward() # 5) change those weights opt.step() # 6) print our progress print(loss.data) # - train() # And there you have it! We've trained a basic model in the conventional manner. All our data is aggregated into our local machine and we can use it to make updates to our model. Federated Learning, however, doesn't work this way. So, let's modify this example to do it the Federated Learning way! # # So, what do we need: # # - create a couple workers # - get pointers to training data on each worker # - updated training logic to do federated learning # # New Training Steps: # - send model to correct worker # - train on the data located there # - get the model back and repeat with next worker import syft as sy hook = sy.TorchHook(torch) # + # create a couple workers bob = sy.VirtualWorker(hook, id="bob") alice = sy.VirtualWorker(hook, id="alice") # + # A Toy Dataset data = torch.tensor([[0,0],[0,1],[1,0],[1,1.]], requires_grad=True) target = torch.tensor([[0],[0],[1],[1.]], requires_grad=True) # get pointers to training data on each worker by # sending some training data to bob and alice data_bob = data[0:2] target_bob = target[0:2] data_alice = data[2:] target_alice = target[2:] # Initialize A Toy Model model = nn.Linear(2,1) data_bob = data_bob.send(bob) data_alice = data_alice.send(alice) target_bob = target_bob.send(bob) target_alice = target_alice.send(alice) # organize pointers into a list datasets =
[(data_bob,target_bob),(data_alice,target_alice)] opt = optim.SGD(params=model.parameters(),lr=0.1) # + def train(): # Training Logic opt = optim.SGD(params=model.parameters(),lr=0.1) for iter in range(10): # NEW) iterate through each worker's dataset for data,target in datasets: # NEW) send model to correct worker model.send(data.location) # 1) erase previous gradients (if they exist) opt.zero_grad() # 2) make a prediction pred = model(data) # 3) calculate how much we missed loss = ((pred - target)**2).sum() # 4) figure out which weights caused us to miss loss.backward() # 5) change those weights opt.step() # NEW) get model (with gradients) model.get() # 6) print our progress print(loss.get()) # NEW) slight edit... need to call .get() on loss # federated averaging # - train() # ## Well Done! # # And voilà! We are now training a very simple Deep Learning model using Federated Learning! We send the model to each worker, generate a new gradient, and then bring the gradient back to our local server where we update our global model. Never in this process do we ever see or request access to the underlying training data! We preserve the privacy of Bob and Alice!!! # # ## Shortcomings of this Example # # So, while this example is a nice introduction to Federated Learning, it still has some major shortcomings. Most notably, when we call `model.get()` and receive the updated model from Bob or Alice, we can actually learn a lot about Bob and Alice's training data by looking at their gradients. In some cases, we can restore their training data perfectly! # # So, what is there to do? Well, the first strategy people employ is to **average the gradient across multiple individuals before uploading it to the central server**. This strategy, however, will require some more sophisticated use of PointerTensor objects.
So, in the next section, we're going to take some time to learn about more advanced pointer functionality and then we'll upgrade this Federated Learning example. # # # Congratulations!!! - Time to Join the Community! # # Congratulations on completing this notebook tutorial! If you enjoyed this and would like to join the movement toward privacy preserving, decentralized ownership of AI and the AI supply chain (data), you can do so in the following ways! # # ### Star PySyft on GitHub # # The easiest way to help our community is just by starring the Repos! This helps raise awareness of the cool tools we're building. # # - [Star PySyft](https://github.com/OpenMined/PySyft) # # ### Join our Slack! # # The best way to keep up to date on the latest advancements is to join our community! You can do so by filling out the form at [http://slack.openmined.org](http://slack.openmined.org) # # ### Join a Code Project! # # The best way to contribute to our community is to become a code contributor! At any time you can go to PySyft GitHub Issues page and filter for "Projects". This will show you all the top level Tickets giving an overview of what projects you can join! If you don't want to join a project, but you would like to do a bit of coding, you can also look for more "one off" mini-projects by searching for GitHub issues marked "good first issue". # # - [PySyft Projects](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3AProject) # - [Good First Issue Tickets](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) # # ### Donate # # If you don't have time to contribute to our codebase, but would still like to lend support, you can also become a Backer on our Open Collective. All donations go toward our web hosting and other community expenses such as hackathons and meetups! # # [OpenMined's Open Collective Page](https://opencollective.com/openmined)
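The averaging strategy described in the shortcomings section can be sketched without PySyft at all. A toy federated-averaging loop in plain numpy, reusing the same bob/alice data split — this illustrates the idea only, not how PySyft implements it:

```python
import numpy as np

# Same toy dataset as above, split across two simulated workers.
data = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
target = np.array([[0.], [0.], [1.], [1.]])
workers = [(data[:2], target[:2]), (data[2:], target[2:])]

w_global = np.zeros((2, 1))  # global linear model (no bias, for brevity)

for _ in range(50):                      # communication rounds
    local = []
    for X, y in workers:
        w = w_global.copy()              # "send" the model to the worker
        for _ in range(5):               # a few local SGD steps on-device
            grad = 2 * X.T @ (X @ w - y) / len(X)
            w -= 0.1 * grad
        local.append(w)                  # only weights leave the worker
    # The server averages the local updates; secure aggregation would
    # perform this average *before* the server can inspect any single update.
    w_global = np.mean(local, axis=0)

print(w_global.ravel())                  # approaches [1, 0]
```

Averaging means the server only ever sees the combined update, which is the privacy improvement the next section builds toward.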
examples/tutorials/Part 02 - Intro to Federated Learning.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="pun7VarqeGL4" outputId="85999551-fb2b-4a46-84e6-73e37c13243e" import pandas as pd import string import re import io import numpy as np from unicodedata import normalize import keras, tensorflow from keras.models import Model from keras.layers import Input, LSTM, Dense # - # ## Reading data # + colab={} colab_type="code" id="C92VgDWZeGMA" def read_data(file): data = [] with io.open(file, 'r') as file: for entry in file: entry = entry.strip() data.append(entry) return data # + colab={} colab_type="code" id="wccatfy6eGMX" data = read_data('dataset/bilingual_pairs.txt') # - # ## Some basics about our dataset data[139990:140000] # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="o7v6YuIQqkdC" outputId="6db8a27d-20b3-4814-9ba3-09dc4e3160ca" len(data) # + colab={} colab_type="code" id="mn5AJq_7eGMc" data = data[:140000] # - # ## Splitting our data into English and French sentences # + colab={} colab_type="code" id="5izP_MfWeGMi" def build_english_french_sentences(data): english_sentences = [] french_sentences = [] for data_point in data: english_sentences.append(data_point.split("\t")[0]) french_sentences.append(data_point.split("\t")[1]) return english_sentences, french_sentences # + colab={} colab_type="code" id="lU1AA_dkeGMn" english_sentences, french_sentences = build_english_french_sentences(data) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="sHtX637EeGMs" outputId="c10578eb-cd7b-4152-936f-f41128fc7eae" len(english_sentences) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="GwE2jPKIeGM1" outputId="f5d5cfc5-34cc-4e6c-be76-6df28fc6d513" len(french_sentences) 
# - # ## Data cleaning # + colab={} colab_type="code" id="QgEQbWyLeGM6" def clean_sentences(sentence): # prepare regex for char filtering re_print = re.compile('[^%s]' % re.escape(string.printable)) # prepare translation table for removing punctuation table = str.maketrans('', '', string.punctuation) cleaned_sent = normalize('NFD', sentence).encode('ascii', 'ignore') cleaned_sent = cleaned_sent.decode('UTF-8') cleaned_sent = cleaned_sent.split() cleaned_sent = [word.lower() for word in cleaned_sent] cleaned_sent = [word.translate(table) for word in cleaned_sent] cleaned_sent = [re_print.sub('', w) for w in cleaned_sent] cleaned_sent = [word for word in cleaned_sent if word.isalpha()] return ' '.join(cleaned_sent) # + colab={} colab_type="code" id="2PEeqZNNeGM_" def build_clean_english_french_sentences(english_sentences, french_sentences): french_sentences_cleaned = [] english_sentences_cleaned = [] for sent in french_sentences: french_sentences_cleaned.append(clean_sentences(sent)) for sent in english_sentences: english_sentences_cleaned.append(clean_sentences(sent)) return english_sentences_cleaned, french_sentences_cleaned # + colab={} colab_type="code" id="mpg30uqIeGND" english_sentences_cleaned, french_sentences_cleaned = build_clean_english_french_sentences(english_sentences, french_sentences) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="VPL2FfwweGNJ" outputId="5ad0f65c-4a1a-4811-f821-f4228d910a33" english_sentences_cleaned[40884] # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="2nokZ3D5eGNP" outputId="4a0f930e-5ddc-439b-a7a4-222b32e146f4" french_sentences_cleaned[40884] # - # ## Building our input and target datasets # + colab={} colab_type="code" id="Ys2VR-NZeGNU" def build_data(english_sentences_cleaned, french_sentences_cleaned): input_dataset = [] target_dataset = [] input_characters = set() target_characters = set() for french_sentence in french_sentences_cleaned: 
input_datapoint = french_sentence input_dataset.append(input_datapoint) for char in input_datapoint: input_characters.add(char) for english_sentence in english_sentences_cleaned: target_datapoint = "\t" + english_sentence + "\n" target_dataset.append(target_datapoint) for char in target_datapoint: target_characters.add(char) return input_dataset, target_dataset, sorted(list(input_characters)), sorted(list(target_characters)) # + colab={} colab_type="code" id="DrOsk6f_eGNb" input_dataset, target_dataset, input_characters, target_characters = build_data(english_sentences_cleaned, french_sentences_cleaned) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="Qo7TiTaOeGNg" outputId="b00c8f77-966d-422c-d961-5e5325c141a6" len(input_characters) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="2ZqARjhWeGNl" outputId="e1bd2619-e304-45ce-8b5c-11eac2212640" len(target_characters) # + colab={"base_uri": "https://localhost:8080/", "height": 54} colab_type="code" id="_ylMPx5neGNp" outputId="aa645d32-6d8c-4ed5-da3e-743cfea5ec7a" print(input_characters) # + colab={"base_uri": "https://localhost:8080/", "height": 54} colab_type="code" id="7CBV9vvieGNv" outputId="785f5038-97df-4753-fc09-733578dca8eb" print(target_characters) # - # ## Defining metadata for our data structures and model to work with # + colab={} colab_type="code" id="Haj_b9-yeGNz" def build_metadata(input_dataset, target_dataset, input_characters, target_characters): num_encoder_tokens = len(input_characters) num_decoder_tokens = len(target_characters) max_encoder_seq_length = max([len(data_point) for data_point in input_dataset]) max_decoder_seq_length = max([len(data_point) for data_point in target_dataset]) print('Number of data points:', len(input_dataset)) print('Number of unique input tokens:', num_encoder_tokens) print('Number of unique output tokens:', num_decoder_tokens) print('Maximum sequence length for inputs:', max_encoder_seq_length) 
print('Maximum sequence length for outputs:', max_decoder_seq_length) return num_encoder_tokens, num_decoder_tokens, max_encoder_seq_length, max_decoder_seq_length # + colab={"base_uri": "https://localhost:8080/", "height": 102} colab_type="code" id="To-bA7SPeGN3" outputId="77f4afdd-b488-453c-f6c8-198e99711649" num_encoder_tokens, num_decoder_tokens, max_encoder_seq_length, max_decoder_seq_length = build_metadata(input_dataset, target_dataset, input_characters, target_characters) # - # ## Developing mappings for character to index and vice-versa # + colab={} colab_type="code" id="DgmvQ088eGN7" def build_indices(input_characters, target_characters): input_char_to_idx = {} input_idx_to_char = {} target_char_to_idx = {} target_idx_to_char = {} for i, char in enumerate(input_characters): input_char_to_idx[char] = i input_idx_to_char[i] = char for i, char in enumerate(target_characters): target_char_to_idx[char] = i target_idx_to_char[i] = char return input_char_to_idx, input_idx_to_char, target_char_to_idx, target_idx_to_char input_char_to_idx, input_idx_to_char, target_char_to_idx, target_idx_to_char = build_indices(input_characters, target_characters) # - # ## Building data structures to accommodate our data # + colab={"base_uri": "https://localhost:8080/", "height": 68} colab_type="code" id="PwJ3acmneGN-" outputId="c9222273-86aa-4b1a-e4b5-8dbef1ac1a89" def build_data_structures(length_input_dataset, max_encoder_seq_length, max_decoder_seq_length, num_encoder_tokens, num_decoder_tokens): encoder_input_data = np.zeros((length_input_dataset, max_encoder_seq_length, num_encoder_tokens), dtype='float32') decoder_input_data = np.zeros((length_input_dataset, max_decoder_seq_length, num_decoder_tokens), dtype='float32') decoder_target_data = np.zeros((length_input_dataset, max_decoder_seq_length, num_decoder_tokens), dtype='float32') print("Dimensionality of encoder input data is : ", encoder_input_data.shape) print("Dimensionality of decoder input data is : ", 
decoder_input_data.shape) print("Dimensionality of decoder target data is : ", decoder_target_data.shape) return encoder_input_data, decoder_input_data, decoder_target_data encoder_input_data, decoder_input_data, decoder_target_data = build_data_structures(len(input_dataset), max_encoder_seq_length, max_decoder_seq_length, num_encoder_tokens, num_decoder_tokens) # - # ## Adding data to the built data structures # + colab={} colab_type="code" id="_okGtmkIeGOC" def add_data_to_data_structures(input_dataset, target_dataset, encoder_input_data, decoder_input_data, decoder_target_data): for i, (input_data_point, target_data_point) in enumerate(zip(input_dataset, target_dataset)): for t, char in enumerate(input_data_point): encoder_input_data[i, t, input_char_to_idx[char]] = 1. for t, char in enumerate(target_data_point): # decoder_target_data is ahead of decoder_input_data by one timestep decoder_input_data[i, t, target_char_to_idx[char]] = 1. if t > 0: # decoder_target_data will be ahead by one timestep # and will not include the start character. decoder_target_data[i, t - 1, target_char_to_idx[char]] = 1. 
return encoder_input_data, decoder_input_data, decoder_target_data # + colab={} colab_type="code" id="bQcJamwaeGOH" encoder_input_data, decoder_input_data, decoder_target_data = add_data_to_data_structures(input_dataset, target_dataset, encoder_input_data, decoder_input_data, decoder_target_data) # - # ## Defining our model hyperparameters # + colab={} colab_type="code" id="lwadHEJ1eGOM" batch_size = 256 epochs = 100 latent_dim = 256 # - # ## Encoder Definition # + colab={} colab_type="code" id="fcXQZANneGOR" encoder_inputs = Input(shape=(None, num_encoder_tokens)) encoder = LSTM(latent_dim, return_state=True) encoder_outputs, state_h, state_c = encoder(encoder_inputs) encoder_states = [state_h, state_c] # - # ## Decoder Definition # + colab={} colab_type="code" id="VX_O_UzheGOX" decoder_inputs = Input(shape=(None, num_decoder_tokens)) decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True) decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states) decoder_dense = Dense(num_decoder_tokens, activation='softmax') decoder_outputs = decoder_dense(decoder_outputs) # - # ## Building our model # + colab={} colab_type="code" id="bYIBY06AeGOb" model = Model(inputs=[encoder_inputs, decoder_inputs], outputs=decoder_outputs) # + colab={"base_uri": "https://localhost:8080/", "height": 357} colab_type="code" id="sG8R3LhFeGOf" outputId="c990af8d-ccc3-4fe3-ff9a-048444b4efe2" model.compile(optimizer='rmsprop', loss='categorical_crossentropy') model.summary() # - # ## Training the model # + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="3cLKLtZLeGOk" outputId="fd3e86fa-5487-42b7-a28d-96fb8f38aabe" model.fit([encoder_input_data, decoder_input_data], decoder_target_data, batch_size=batch_size, epochs=epochs, validation_split=0.2) # - # ## Saving the model # + colab={} colab_type="code" id="wY80ZCIseGOo" model.save('Output Files/neural_machine_translation_french_to_english.h5') # - # ## Preparing our model 
for inferencing # + colab={} colab_type="code" id="kW9tivC9eGOt" encoder_model = Model(encoder_inputs, encoder_states) decoder_state_input_c = Input(shape=(latent_dim,)) decoder_state_input_h = Input(shape=(latent_dim,)) decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c] decoder_outputs, state_h, state_c = decoder_lstm(decoder_inputs, initial_state=decoder_states_inputs) decoder_states = [state_h, state_c] decoder_outputs = decoder_dense(decoder_outputs) decoder_model = Model([decoder_inputs] + decoder_states_inputs, [decoder_outputs] + decoder_states) # + colab={} colab_type="code" id="G9_DZBhyeGOx" def decode_sequence(input_seq): states_value = encoder_model.predict(input_seq) target_seq = np.zeros((1, 1, num_decoder_tokens)) target_seq[0, 0, target_char_to_idx['\t']] = 1. stop_condition = False decoded_sentence = '' while not stop_condition: output_tokens, h, c = decoder_model.predict([target_seq] + states_value) sampled_token_index = np.argmax(output_tokens[0, -1, :]) sampled_char = target_idx_to_char[sampled_token_index] decoded_sentence += sampled_char if (sampled_char == '\n' or len(decoded_sentence) > max_decoder_seq_length): stop_condition = True target_seq = np.zeros((1, 1, num_decoder_tokens)) target_seq[0, 0, sampled_token_index] = 1. 
states_value = [h, c] return decoded_sentence # - # ## Let's translate some French sentences to English # + colab={} colab_type="code" id="K7QnRlMHeGO1" def decode(seq_index): input_seq = encoder_input_data[seq_index: seq_index + 1] decoded_sentence = decode_sequence(input_seq) print('-') print('Input sentence:', input_dataset[seq_index]) print('Decoded sentence:', decoded_sentence) # + colab={"base_uri": "https://localhost:8080/", "height": 85} colab_type="code" id="1IbAcQuDeGO5" outputId="a60126ac-f6dd-4fa6-95e7-6cfb058f172e" decode(55000) # + colab={"base_uri": "https://localhost:8080/", "height": 85} colab_type="code" id="ng_3YpbWXiZi" outputId="4acf6eb7-5a7c-4d2f-c0be-1d7cec7ba050" decode(10000) # + colab={"base_uri": "https://localhost:8080/", "height": 85} colab_type="code" id="3Vh-z5N1ZOEM" outputId="c64a0d2d-39f8-4a2c-d99d-48200e57d1ed" decode(200) # + colab={"base_uri": "https://localhost:8080/", "height": 85} colab_type="code" id="P-VmNnYMZUMs" outputId="8bfc8543-cb20-4b22-8122-44c90c10f88a" decode(3000) # + colab={"base_uri": "https://localhost:8080/", "height": 85} colab_type="code" id="jo6-UTwnaEiU" outputId="19d48d04-f570-4d07-f153-1510bd1e16b9" decode(40884)
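The one-timestep offset between `decoder_input_data` and `decoder_target_data` built earlier is easiest to see on a tiny example. A self-contained numpy sketch with a toy alphabet, using `\t` as the start token and `\n` as the stop token exactly as the notebook does:

```python
import numpy as np

chars = sorted(set("\t\nhi"))            # toy target alphabet: \t, \n, h, i
char_to_idx = {c: i for i, c in enumerate(chars)}

target = "\t" + "hi" + "\n"              # "\t" = start token, "\n" = stop token

decoder_input = np.zeros((len(target), len(chars)), dtype="float32")
decoder_target = np.zeros((len(target), len(chars)), dtype="float32")
for t, ch in enumerate(target):
    decoder_input[t, char_to_idx[ch]] = 1.0
    if t > 0:
        decoder_target[t - 1, char_to_idx[ch]] = 1.0   # shifted left one step

# At step t the decoder is fed target[t] and must predict target[t+1].
print([chars[i] for i in decoder_input.argmax(1)])        # ['\t', 'h', 'i', '\n']
print([chars[i] for i in decoder_target[:-1].argmax(1)])  # ['h', 'i', '\n']
```

This is teacher forcing: the true previous character is fed in during training, while at inference time `decode_sequence` feeds back the model's own sampled character instead.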
Chapter11/French To English Translation.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Python Decorators # ### First day: quick howto # Decorators are a sometimes overlooked and more advanced feature. They support a nice way to abstract your code. Although hard to grasp at first, there is not that much to it. Let's start with two definitions: # > A decorator is any callable Python object that is used to modify a function, method or class definition. A decorator is passed the original object being defined and returns a modified object, which is then bound to the name in the definition. - [PythonDecorators wiki](https://wiki.python.org/moin/PythonDecorators) # # > A decorator's intent is to attach additional responsibilities to an object dynamically. Decorators provide a flexible alternative to subclassing for extending functionality. - [Design Patterns book](https://www.amazon.com/dp/0201633612/?tag=pyb0f-20) from functools import wraps import time # The basic template for defining a decorator: def mydecorator(function): @wraps(function) def wrapper(*args, **kwargs): # do something before the original function is called # call the passed in function result = function(*args, **kwargs) # do something after the original function call return result # return wrapper = decorated function return wrapper # You can then use the decorator _wrapping_ your function like this: @mydecorator def my_function(args): pass # This is just _syntactic sugar_ for: # + def my_function(args): pass my_function = mydecorator(my_function) # - # #### args and kwargs detour # Note in Python there are different ways to pass arguments to functions, see this [great guide](http://docs.python-guide.org/en/latest/writing/style/#function-arguments) for more info and a quick example here: def get_profile(name, active=True, *sports, **awards): print('Positional arguments (required): ', name) print('Keyword arguments (not required, default values): ', active) print('Arbitrary argument list (sports): ', sports) print('Arbitrary keyword argument dictionary (awards): ', awards) get_profile() get_profile('julian') get_profile('julian', active=False) get_profile('julian', False, 'basketball', 'soccer') get_profile('julian', False, 'basketball', 'soccer', pythonista='special honor of the community', topcoder='2017 code camp') def show_args(function): @wraps(function) def wrapper(*args, **kwargs): print('hi from decorator - args:') print(args) result = function(*args, **kwargs) print('hi again from decorator - kwargs:') print(kwargs) return result # return wrapper as a decorated function return wrapper @show_args def get_profile(name, active=True, *sports, **awards): print('\n\thi from the get_profile function\n') get_profile('bob', True, 'basketball', 'soccer', pythonista='special honor of the community', topcoder='2017 code camp') # #### Timing a function / wraps # Let's look at a more practical/realistic example: say you want to time a function's execution: def timeit(func): '''Decorator to time a function''' @wraps(func) def wrapper(*args, **kwargs): # before calling the decorated function print('== starting timer') start = time.time() # call the decorated function func(*args, **kwargs) # after calling the decorated function end = time.time() print(f'== {func.__name__} took {int(end-start)} seconds to complete') return wrapper # > It's important to add the [`functools.wraps`](https://docs.python.org/3.7/library/functools.html#functools.wraps) decorator to preserve the original function name and docstring. For example if we take `@wraps` out of `timeit`, the decorated function's `__name__` would return `wrapper` and its `__doc__` would be empty (lost docstring).
# And here is the function we will decorate next: # + def generate_report(): '''Function to generate revenue report''' time.sleep(2) print('(actual function) Done, report links ...') generate_report() # - # Now see what happens if we wrap `timeit` around it: # + @timeit def generate_report(): '''Function to generate revenue report''' time.sleep(2) print('(actual function) Done, report links ...') generate_report() # - # `@wraps(func)` preserved the docstring: generate_report.__doc__ # #### Stacking decorators # <img src='stacking.png' style='float:left;'> # Decorators can be stacked, let's define another one, for example to print (positional) `args` and (keyword) `kwargs` of the function that is passed in: def print_args(func): '''Decorator to print function arguments''' @wraps(func) def wrapper(*args, **kwargs): # before print() print('*** args:') for arg in args: print(f'- {arg}') print('**** kwargs:') for k, v in kwargs.items(): print(f'- {k}: {v}') print() # call func func(*args, **kwargs) return wrapper # Let's modify our `generate_report` function to take args: def generate_report(*months, **parameters): time.sleep(2) print('(actual function) Done, report links ...') # Now let's add our `print_args` decorator on top of the `timeit` one and call `generate_report` with some arguments. 
Note that the order matters here, make sure `timeit` is the __outer__ decorator so the output starts with _== starting timer_: @timeit @print_args def generate_report(*months, **parameters): time.sleep(2) print('(actual function) Done, report links ...') parameters = dict(split_geos=True, include_suborgs=False, tax_rate=33) generate_report('October', 'November', 'December', **parameters) # #### Common use cases / further reading # For [Never Forget A Friend’s Birthday with Python, Flask and Twilio](https://www.twilio.com/blog/2017/09/never-forget-friends-birthday-python-flask-twilio.html) I used a [decorator](https://github.com/pybites/bday-app/blob/a360a02316e021ac4c3164dcdc4122da5d5a722b/app.py#L28) to check if a user is logged in, loosely based on the one provided in the [Flask documentation](http://flask.pocoo.org/docs/0.12/patterns/viewdecorators/#login-required-decorator). # # Another interesting one to check out (if you still have some time to squeeze in today): Django's `login_required` decorator - [source](https://github.com/django/django/blob/master/django/contrib/auth/decorators.py). # For more use cases see the _Decorators in the wild_ section of our [Learning Python Decorators by Example](https://pybit.es/decorators-by-example.html) article. At the end of that article there are links to more material including some more advanced use cases (e.g. decorators with optional arguments). # ### Second day: a practical exercise # Head over to [Bite 22. 
Write a decorator with argument](https://codechalleng.es/bites/promo/decorator-fun) and try to write the `make_html` decorator to make this work:
#
#     @make_html('p')
#     @make_html('strong')
#     def get_text(text):
#         print(text)
#
# Calling:
#
#     get_text('I code with PyBites')
#
# Should return:
#
#     <p><strong>I code with PyBites</strong></p>

# ### Third day: more practice

# Take the PyBites [Write DRY Code With Decorators blog challenge](https://codechalleng.es/challenges/14/) and write one or more decorators of your choice.
#
# Look at the code you have written so far: where could you refactor / add decorators? The more you practice, the sooner you grok them and the easier they become.

# ### Time to share what you've accomplished!
#
# Be sure to share your last couple of days' work on Twitter or Facebook. Use the hashtag **#100DaysOfCode**.
#
# Here are [some examples](https://twitter.com/search?q=%23100DaysOfCode) to inspire you. Consider including [@talkpython](https://twitter.com/talkpython) and [@pybites](https://twitter.com/pybites) in your tweets.
#
# *See a mistake in these instructions? Please [submit a new issue](https://github.com/talkpython/100daysofcode-with-python-course/issues) or fix it and [submit a PR](https://github.com/talkpython/100daysofcode-with-python-course/pulls).*
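# A decorator that takes an argument (like `make_html` above) just adds one more level of nesting: an outer function receives the argument and returns the actual decorator. As a generic illustration of that shape (the `repeat` decorator below is made up for this sketch, not the Bite solution):

```python
from functools import wraps

def repeat(times):
    """Decorator factory: call the wrapped function `times` times."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            result = None
            for _ in range(times):
                result = func(*args, **kwargs)
            return result
        return wrapper
    return decorator

calls = []

@repeat(3)
def record(value):
    calls.append(value)
    return value

record('hi')
print(calls)  # ['hi', 'hi', 'hi']
```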
days/22-24-decorators/decorators.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Calculate Experiment Analyses Values Per Larvae # #### List of tasks accomplished in this Jupyter Notebook: # - Calculate the median concentration preferred by the larvae (normalized to the acclimation period) # - Calculate the discovery time for each larvae (normalized to the acclimation period) # - Calculate the speed in areas of high concentration, compared to speed in areas of low concentration, normalized to behavior during the acclimation period. # - Output all of these values into a separate spreadsheet. import numpy as np import pandas as pd import math, os import more_itertools as mit # + concentration_cutoff = 50 fps = 2 sh = 1 # direction in which to shift vector for delta calculations threshold = 2 # consecutive seconds spent moving still_threshold = 10 # seconds for minimum still num count angle_thresh = 20 # Allowed error from previous angle sharp_angle = 45 spiral_thresh = 4 spiral_speed_thresh = np.inf # + def safe_divide(x, y): x = safe_get(x) y = safe_get(y) try: ans = x / y except ZeroDivisionError: return 0 return ans def safe_get(x, ans=0): if math.isnan(x): return ans return x # + df = pd.read_csv("./data/experiment_IDs/cleaned_static_data.csv") animals = df["animal_ID"].unique() # Check that animal IDs in dataframe are unique assert len(df) == len(animals) fps = 2 a_master_df = pd.DataFrame() e_master_df = pd.DataFrame() for index, row in df.iterrows(): animal = row['animal_ID'] aID = animal[:9] pos = animal[10:] exp_filename = "./data/trajectories/video_calculations/"+aID+'-E-'+pos+'.csv' acc_filename = "./data/trajectories/video_calculations/"+aID+'-A-'+pos+'.csv' for tag, filename, master_df in zip(['A_', 'E_'], [acc_filename, exp_filename], [a_master_df, e_master_df]): if os.path.isfile(filename): temp = 
pd.read_csv(filename) # MEDIAN CONCENTRATION ----------------------------- median_conc = safe_get(temp["concentration"].median()) # DISCOVERY TIME IN MINUTES ------------------------ if len(temp[temp["concentration"] >= concentration_cutoff]) > 0: discovery_time = safe_divide(temp[temp["concentration"] >= concentration_cutoff]["frames"].values[0], fps*60) else: discovery_time = 15 # Divide up into thresholds for concentration and concentration differences high = temp[temp["concentration"] >= 50] low = temp[temp["concentration"] < 50] high_move = high[high["moving"] == True] low_move = low[low["moving"] == True] # SPEED IN BODY LENGTHS ---------------------------- # concentration dependent high_speed = safe_divide(high_move["speed_BL"].median(), row["larvae_length_mm"]) low_speed = safe_divide(low_move["speed_BL"].median(), row["larvae_length_mm"]) c_speed = high_speed - low_speed temp = pd.DataFrame({# STATIC VARIABLES -------------------------- "animal_ID": [row["animal_ID"]], "treatment_odor": row["treatment_odor"], "sex": row["sex"], "species": row['species'], 'dead':row['dead'], # VARIABLES CALCULATED FOR ENTIRE TIME ------- tag+"median_conc": median_conc, tag+"discovery_time": discovery_time, tag+"c_speed": c_speed }) if tag == 'A_': a_master_df = pd.concat([a_master_df, temp]) elif tag == 'E_': e_master_df = pd.concat([e_master_df, temp]) display(a_master_df.head(2)) display(e_master_df.head(2)) a_master_df.drop(['treatment_odor', 'sex', 'species', 'dead'], axis=1, inplace=True) df = pd.merge(a_master_df, e_master_df, on='animal_ID', how='inner') df["median_conc_diff"] = df["E_median_conc"] - df["A_median_conc"] df["discovery_time_diff"] = df["E_discovery_time"] - df["A_discovery_time"] df["c_speed_diff"] = df["E_c_speed"] - df["A_c_speed"] df.to_csv('./data/trajectories/cleaned_animal_analyses_experiment.csv', index=False) display(df.tail()) # -
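# The `safe_divide`/`safe_get` helpers above exist to keep NaNs and zero denominators from crashing the per-larva calculations; a quick standalone illustration of their behavior:

```python
import math

def safe_get(x, ans=0):
    """Replace NaN with a default value."""
    if math.isnan(x):
        return ans
    return x

def safe_divide(x, y):
    """NaN-tolerant division that returns 0 instead of raising when y == 0."""
    x = safe_get(x)
    y = safe_get(y)
    try:
        return x / y
    except ZeroDivisionError:
        return 0

print(safe_divide(10, 4))            # 2.5
print(safe_divide(10, 0))            # 0   (zero denominator swallowed)
print(safe_divide(float('nan'), 5))  # 0.0 (NaN numerator treated as 0)
```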
4_calculate_experiment_values_per_larvae.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Introduction/Business Problem
# Marrakech, in Morocco, is one of the top tourist cities in Africa; every year more than 2M tourists come to the city.
# Since most of the tourists who come to the city are Europeans, they often want to go out on weekends, but the problem is that bars and restaurants serving wine/alcohol are not found everywhere in the city. To address this, I will use the Foursquare website and data in order to help
Untitled5.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + slideshow={"slide_type": "skip"} # %matplotlib inline import matplotlib.pyplot as plt plt.style.use('seaborn-notebook') plt.style.use('ggplot') # So we don't get warnings showing up in the talk. import warnings warnings.filterwarnings("ignore") # To download data from the cloud. import fsspec # + [markdown] slideshow={"slide_type": "slide"} # ![](../img/climpred-logo.png) # # an `xarray` wrapper for analysis of ensemble forecast models for climate prediction. # + [markdown] slideshow={"slide_type": "slide"} # # Mission Statement # # `climpred` is a high-level package that leverages the scientific python ecosystem to provide an *interactive* experience for analyzing initialized prediction systems, from file input to publication-ready visualizations. # + [markdown] slideshow={"slide_type": "slide"} # # Why do we need a package like this? # + [markdown] slideshow={"slide_type": "subslide"} # The current convention is for scientists to write their own code snippets in their language of choice: e.g., NCL, MATLAB, GrADS, FORTRAN. This means that scientists spend a considerable portion of their research time manually aligning forecast and verification dates and writing their own metrics. # + [markdown] slideshow={"slide_type": "fragment"} # With databases like the CESM-DPLE, the Decadal Climate Prediction Protocol (DCPP), SubX, NMME, we have an unprecedented amount of prediction experiments to analyze. # + [markdown] slideshow={"slide_type": "fragment"} # # If the community can unite around an open-source framework, we can spend more time analyzing output and less time writing (and re-writing) code. 
# + [markdown] slideshow={"slide_type": "slide"} # # Why Python?: Open Source Capabilities # # ![](../img/python-logo.png) # + [markdown] slideshow={"slide_type": "fragment"} # Why not NCL, MATLAB, FORTRAN? # + [markdown] slideshow={"slide_type": "subslide"} # **Python is open-source**. This makes the software transparent, and more importantly allows for community contributions and collaboration. # # ![](../img/github-logo.png) # **Code**: <a href="https://github.com/pangeo-data/climpred"> # github.com/pangeo-data/climpred # </a> # # **Documentation**: <a href="https://climpred.readthedocs.io/en/stable/">climpred.readthedocs.io</a> # + [markdown] slideshow={"slide_type": "subslide"} # Bugs, suggestions, discussions can be posted by any user. # # ![](../img/github-issues.png) # + [markdown] slideshow={"slide_type": "subslide"} # New functions, metrics, etc. can be added by the community and are subject to (friendly) peer code review. # # ![](../img/contributors.png) # + [markdown] slideshow={"slide_type": "subslide"} # Packages like `pytest` allow us to rigorously test the existing code base and any new code that is added. # # ![](../img/pytest.png) # + [markdown] slideshow={"slide_type": "subslide"} # # Why Python?: Scientific Software # # ![](../img/pangeo-logo.png) # # The `pangeo` project organizes a community of developers working in python for Big Data geoscience research. # + [markdown] slideshow={"slide_type": "subslide"} # ![](../img/xarray-logo.png) # # `xarray` is the core driver of `climpred` and allows for easy analysis of *labeled* multi-dimensional arrays of data. # + [markdown] slideshow={"slide_type": "subslide"} # `xarray` Datasets preserve all metadata associated with the netCDF file. # + slideshow={"slide_type": "skip"} import xarray dataset = xarray.tutorial.open_dataset('air_temperature') # Convert to `cftime` for demonstration purposes. 
time_cf = xarray.cftime_range('2013-01', '2015-01-01', freq='6H', calendar='gregorian')[:-1] dataset['time'] = time_cf # Saved to netCDF so we can demonstrate the loading functionality. # dataset.to_netcdf('data/nmc_air_temperature.nc') # + slideshow={"slide_type": "-"} dataset = xarray.load_dataset('../data/nmc_air_temperature.nc', use_cftime=True) print(dataset) # + slideshow={"slide_type": "skip"} # Get rid of attrs for easier printing later. dataset.attrs = {} # + [markdown] slideshow={"slide_type": "subslide"} # They also intelligently handle datetime, using `cftime` to manage different model calendars (e.g. `noleap`, `gregorian`, `360_day`). # + slideshow={"slide_type": "-"} dataset['time'].head() # + [markdown] slideshow={"slide_type": "subslide"} # Their knowledge of dimension labeling makes it easy to slice and operate over dimensions. # + slideshow={"slide_type": "-"} ds_slice = dataset.sel(time=slice('2013-06-05', '2013-06-06'), lat=slice(50, 30), lon=210) # + slideshow={"slide_type": "fragment"} print(ds_slice) # + slideshow={"slide_type": "fragment"} ds_slice.std('lat') # + [markdown] slideshow={"slide_type": "subslide"} # Special `.groupby()` operations make it simple to compute climatologies. # + slideshow={"slide_type": "-"} ds_mean = dataset.mean(['lat', 'lon']) # Group off into each point's corresponding day of the year # and take the average over that group. annual_cycle = ds_mean.groupby('time.dayofyear').mean('time') # + slideshow={"slide_type": "fragment"} annual_cycle['air'].plot() # + [markdown] slideshow={"slide_type": "skip"} # Other packages that could be covered but probably shouldn't to avoid overwhelming new python folks and in the interest of time: # # * `dask` for parallel and out-of-memory computations. # * `matplotlib` for viz. # * `xskillscore` for some of our metrics. 
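# The `groupby('time.dayofyear')` climatology above is plain split-apply-combine; a pure-Python sketch of the same idea on a hypothetical two-year daily series:

```python
from collections import defaultdict
from datetime import date, timedelta

# hypothetical daily series over 2013-2014 (two non-leap years): (date, value) pairs
series = [(date(2013, 1, 1) + timedelta(days=i), float(i % 365)) for i in range(730)]

# split: bucket each observation by its day of year
groups = defaultdict(list)
for day, value in series:
    groups[day.timetuple().tm_yday].append(value)

# apply/combine: average within each bucket -> the climatological annual cycle
annual_cycle = {doy: sum(vals) / len(vals) for doy, vals in groups.items()}
print(len(annual_cycle), annual_cycle[1])  # 365 buckets; Jan 1 average is 0.0
```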
# + [markdown] slideshow={"slide_type": "slide"} # # Top-down View of `climpred` # + [markdown] slideshow={"slide_type": "subslide"} # `climpred` expects the initialized experiment to have specific dimension labels, which can be baked into the NetCDF file or can be post-processed and renamed in python. # + [markdown] slideshow={"slide_type": "fragment"} # * `init` : Initialization dates and times. # - [1954, 1955, 1956, ...] # - ['1954-01-01', '1954-01-7', ...] # + [markdown] slideshow={"slide_type": "fragment"} # * `lead` : Time since initialization. # - [1, 2, 3, 4, ...] # - `units` attribute: ['years', 'seasons', 'months', 'weeks', 'pentads', 'days'] # + [markdown] slideshow={"slide_type": "fragment"} # * `member` : Ensemble member (optional, for probabilistic metrics). # + [markdown] slideshow={"slide_type": "fragment"} # _An arbitrary number of additional dimensions can exist. E.g.,_ `lat`, `lon`, `depth`. # + [markdown] slideshow={"slide_type": "subslide"} # ## Datetime Alignment # + [markdown] slideshow={"slide_type": "-"} # One of the most powerful features of `climpred` is the automation of datetime alignment between the initialized prediction ensemble and the product it is being verified against. # # ![](../img/alignment/alignment1.png) # + [markdown] slideshow={"slide_type": "subslide"} # `climpred` automatically sub-selects a set of initializations that coincide with an observed point in time and that all verify with the observational product for all lead times. # # ![](../img/alignment/alignment3.png) # + [markdown] slideshow={"slide_type": "subslide"} # ![](../img/alignment/alignment4.png) # + [markdown] slideshow={"slide_type": "slide"} # # Demo: `climpred` Input # + slideshow={"slide_type": "skip"} # If you are on Cheyenne and want to load from disk # Uncomment and execute this and comment out next cell. 
# hind = xarray.open_dataset('/glade/work/rbrady/workshops/climpred/DPLE.Pacific.nc') # + slideshow={"slide_type": "-"} import xarray import fsspec # Download from Google Cloud Storage hind = xarray.open_zarr(fsspec.get_mapper('gcs://climpred_workshop/DPLE.Pacific')) # + slideshow={"slide_type": "-"} print(hind) # + [markdown] slideshow={"slide_type": "subslide"} # We'll be using bias-corrected SST anomaly forecasts from the CESM-DPLE over the Pacific. # + slideshow={"slide_type": "-"} import cartopy.crs as ccrs import matplotlib.pyplot as plt # + slideshow={"slide_type": "fragment"} ax = plt.axes(projection=ccrs.Orthographic(-180, 40)) # Make use of xarray's quick plot functionality. hind['SST'].sel(init=1965, lead=1).plot(ax=ax, transform=ccrs.PlateCarree(), x='TLONG', y='TLAT') ax.coastlines() # + [markdown] slideshow={"slide_type": "subslide"} # `climpred` offers a `HindcastEnsemble` object that can be easily instantiated. # + slideshow={"slide_type": "-"} import climpred hindcast = climpred.HindcastEnsemble(hind) print(hindcast) # + slideshow={"slide_type": "skip"} # If you are on Cheyenne and want to load from disk # Uncomment and execute this and comment out next cell. # hind = xarray.open_dataset('/glade/work/rbrady/workshops/climpred/FOSI.Pacific.nc') # + slideshow={"slide_type": "-"} fosi = xarray.open_zarr(fsspec.get_mapper('gcs://climpred_workshop/FOSI.Pacific')) # + slideshow={"slide_type": "skip"} # Move to anomaly space, not necessary to explain during # presentation. fosi = fosi - fosi.sel(time=slice(1964, 2014)).mean() # + [markdown] slideshow={"slide_type": "subslide"} # A verification product (i.e., observations) can be appended to the `HindcastEnsemble` object. # + slideshow={"slide_type": "-"} # FOSI: Forced Ocean-Sea Ice reconstruction. 
hindcast = hindcast.add_observations(fosi) print(hindcast) # + [markdown] slideshow={"slide_type": "slide"} # # Demo: `climpred` Analysis # # Verification is made easy by simply running the `.verify()` method and indicating one of our ~30 deterministic and probabilistic metrics. We also need to declare a comparison style (`'e2o'` is the ensemble mean to observations), a dimension to reduce over (the initialization dimension here), and an alignment style. # + slideshow={"slide_type": "-"} acc = hindcast.verify(metric='pearson_r', comparison='e2o', dim='init', alignment='maximize') print(acc) # + [markdown] slideshow={"slide_type": "subslide"} # We can compute a reference forecast, such as a persistence forecast, using the `reference=...` keyword. Changing the metric requires a simple string change! # + slideshow={"slide_type": "fragment"} # Take mean over latitude and longitude. hindcast_avg = hindcast.mean(['nlat', 'nlon']) # Compute RMSE for forecasts relative to FOSI. rmse = hindcast_avg.verify(metric='rmse', comparison='e2o', dim='init', alignment='maximize', reference='persistence') # + slideshow={"slide_type": "fragment"} # Quick plotting feature of xarray. rmse.sel(skill='initialized').SST.plot(marker='o', markersize=12) rmse.sel(skill='persistence').SST.plot(color='k', linestyle='--') plt.show() # + [markdown] slideshow={"slide_type": "subslide"} # Here, we use the Mean Square Skill Score. # + slideshow={"slide_type": "fragment"} # Take mean over latitude and longitude. hindcast_avg = hindcast.mean(['nlat', 'nlon']) # Compute Mean Square Skill Score for forecasts relative to FOSI. msss = hindcast_avg.verify(metric='msss', comparison='e2o', dim='init', alignment='maximize', reference='persistence') # + slideshow={"slide_type": "fragment"} # Quick plotting feature of xarray. 
msss.sel(skill='initialized').SST.plot(marker='o', markersize=12)
msss.sel(skill='persistence').SST.plot(color='k', linestyle='--')
plt.show()

# + [markdown] slideshow={"slide_type": "subslide"}
# Users can also pass in custom metrics that aren't yet offered through `climpred` by default.

# + slideshow={"slide_type": "fragment"}
import numpy

def _msle(fct, obs, dim=None, **metric_kwargs):
    # Mean squared log error: the *difference* of the logs, squared.
    return (
        (numpy.log(fct + 1)
         - numpy.log(obs + 1)
        ) ** 2).mean(dim)

# + slideshow={"slide_type": "fragment"}
from climpred.metrics import Metric

msle = Metric(
    name='msle',
    function=_msle,
    probabilistic=False,
    positive=False,
    unit_power=0,
)

# + slideshow={"slide_type": "fragment"}
hindcast_avg.verify(metric=msle, comparison='e2o', dim='init', alignment='maximize')

# + [markdown] slideshow={"slide_type": "slide"}
# # Advanced Visualization
#
# Using any package from the python ecosystem, we can then produce a publication-ready figure. The main visualization package in python is `matplotlib`, but the user can use their favorite, e.g. `ggplot`, `seaborn`, ...
# + slideshow={"slide_type": "skip"} acc_map = hindcast.verify(metric='pearson_r', comparison='e2o', dim='init', alignment='maximize') acc_pval = hindcast.verify(metric='pearson_r_eff_p_value', comparison='e2o', dim='init', alignment='maximize') # + [markdown] slideshow={"slide_type": "skip"} # ```python # import proplot as plot # # plot.rc['geogrid.alpha'] = 0 # plot.rc['suptitle.weight'] = 'bold' # plot.rc.fontname = 'Helvetica' # # plot.rc['title.size'] = 18 # plot.rc['suptitle.size'] = 18 # # f, axs = plot.subplots(nrows=2, ncols=5, # proj='ortho', # proj_kw={'central_longitude': -180, # 'central_latitude': 40}, # top=1.5) # # for i, ax in zip(range(10), axs): # p = ax.pcolormesh(acc_map.TLONG, # acc_map.TLAT, # acc_map.SST.isel(lead=i), # levels=plot.arange(-1, 1, 0.1), # cmap='Balance') # # ax.contourf(acc_map.TLONG, # acc_map.TLAT, # acc_pval.SST.isel(lead=i), # levels=plot.arange(0, 0.05, 0.01), # hatches=['..'], # alpha=0) # # ax.format(title=f'Lead {i+1}') # # axs.format(land=True, suptitle='Potential Predictability of SSTs') # cb = f.colorbar(p, label='Anomaly Correlation Coefficient', # loc='b', length=0.5, labelsize=18) # cb.ax.tick_params(labelsize=18) # # # f.savefig('img/example_pub_figure.png') # ``` # + [markdown] slideshow={"slide_type": "subslide"} # ![](../img/example_pub_figure.png) # + [markdown] slideshow={"slide_type": "slide"} # # Output # # If you want to visualize in your own software (e.g. NCL, GMT, ...) or save out the results of the analysis, you can easily save it out as a netCDF. # + slideshow={"slide_type": "-"} mape = hindcast_avg.verify(metric='mape', comparison='e2o', dim='init', alignment='maximize') mape.to_netcdf('../data/MAPE.SST.nc') # + [markdown] slideshow={"slide_type": "subslide"} # Now we have a NetCDF file to store away or use in any other software! 
# + [markdown] slideshow={"slide_type": "-"}
# ![](../img/mape_command_line.png)

# + [markdown] slideshow={"slide_type": "slide"}
# # `climpred` Development Roadmap

# + [markdown] slideshow={"slide_type": "fragment"}
# 1. Add multiple reference possibilities.
#
# ```python
# >>> HindcastEnsemble.verify(reference='persistence', metric='rmse')
# >>> HindcastEnsemble.verify(reference='damped', metric='rmse')
# >>> HindcastEnsemble.verify(reference='climatology', metric='rmse')
# >>> HindcastEnsemble.verify(reference='uninitialized', metric='rmse')
# ```

# + [markdown] slideshow={"slide_type": "fragment"}
# 2. Robust support for S2S and diverse model calendars.
#     - Currently support 'seasons', 'months', 'weeks', 'pentads', 'days' for lead time.

# + [markdown] slideshow={"slide_type": "fragment"}
# 3. Optimize for parallel performance with `dask` in mind.

# + [markdown] slideshow={"slide_type": "fragment"}
# 4. Polish up temporal and spatial smoothing modules.

# + [markdown] slideshow={"slide_type": "slide"}
# # How To Contribute!

# + [markdown] slideshow={"slide_type": "fragment"}
# The number one way is to try out `climpred` and post issues/discussion topics on Github. Encourage your colleagues to try it out!

# + [markdown] slideshow={"slide_type": "fragment"}
# For folks with more python experience, consider opening a Pull Request. We have a detailed guide in the docs on how to add metrics and functions to the code base. We are happy to guide first-time contributors as well.

# + [markdown] slideshow={"slide_type": "fragment"}
# Please feel free to reach out if you have any questions or advice! (<EMAIL>)

# + [markdown] slideshow={"slide_type": "slide"}
# ![](../img/climpred-logo.png)
#
# Documentation: https://climpred.readthedocs.io
#
# Github: https://github.com/bradyrx/climpred
#
# PyPI Installation: `pip install climpred`
#
# Conda Installation: `conda install -c conda-forge climpred`
notebooks/climpred_overview.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

import warnings
warnings.filterwarnings("ignore")

# %matplotlib inline
# -

training_data = pd.read_csv("../Data/aps_failure_training_set.csv", na_values="na")
training_data.head()

sns.set_style('whitegrid')

# Preprocessing
plt.figure(figsize=(20,12))
sns.heatmap(training_data.isnull(), yticklabels=False, cbar=False, cmap='viridis')

# # Missing value handling
# We are going to use different approaches with missing values:
#
# 1. Removing the columns having more than 10,000 missing values (**Ref)
# 2. Removing the columns having 80% missing values (**Self intuition)
# 3. Keeping all the features
# 4. Later, we will try to implement some feature engineering
#
# **For the rest of the missing values, we are replacing them with their mean() for now (**Ref)

# # Second Approach

# +
missing_data = training_data.isna().sum().div(training_data.shape[0]).mul(100).to_frame()

plt.figure(figsize=(10,7))
ax = sns.distplot(missing_data[0], kde=False, bins=20, color='b')
ax.set(xlabel='Missing values', ylabel='Number of features')
plt.show()
# -

missing_data[missing_data[0]>80]

# <b>We are dropping the columns that have more than 80% missing values</b>

# +
temp = missing_data[missing_data[0]>80]
temp = list(temp.index)
sample_data = training_data.drop(temp, axis=1)
sample_data.head()
# -

plt.figure(figsize=(20,12))
sns.heatmap(sample_data.isnull(), yticklabels=False, cbar=False, cmap='viridis')

# <b>Here we can see not much improvement over the first approach in the heatmap</b><br>
# For the rest of the values, we are going to replace them with their mean() -- (**Ref)

sample_data.fillna(sample_data.mean(), inplace=True)

# +
# after replacing with mean()
plt.figure(figsize=(20,12))
sns.heatmap(sample_data.isnull(), yticklabels=False, cbar=False, cmap='viridis')
# -

# <b>So we have handled the missing value problem in this approach</b>
#
# <b>Now we need to handle the categorical variable (class) by taking a dummy value instead of a string</b>

# +
# As all the other values are numerical except the class column, we can replace its labels with 1 and 0
sample_data = sample_data.replace('neg', 0)
sample_data = sample_data.replace('pos', 1)
sample_data.head(15)
# -

# # Model Implementation
#
# <b><font size=5px>First Approach</font></b><br>
# <b>As our data is ready, here we are going to try to fit a logistic regression model to our dataset</b>

# +
# Here the predictors (X) and response (y) are separated from sample_data for this model
X = sample_data.drop('class', axis=1)
y = sample_data['class']
# -

X.head()

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=101)

from sklearn.linear_model import LogisticRegression

logmodel = LogisticRegression()
# fitting the data
logmodel.fit(X_train, y_train)

prediction = logmodel.predict(X_test)

from sklearn.metrics import classification_report

print(classification_report(y_test, prediction))

# <b>Here we can see that negative-class prediction is about 99% correct, while positive-class prediction is correct only up to 44%</b><br>
# Now, for the cost

from sklearn.metrics import confusion_matrix

tn, fp, fn, tp = confusion_matrix(y_test, prediction).ravel()

cost = 10*fp + 500*fn
values = {'Score': [cost], 'Number of Type 1 faults': [fp], 'Number of Type 2 faults': [fn]}
pd.DataFrame(values)

# <b>By observing this, we can see that this estimation is far worse than the first approach</b>

from sklearn.metrics import mean_squared_error

mean_squared_error(y_test, prediction)

prediction.shape
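# The asymmetric cost above (10 per false positive, 500 per false negative) can be sketched on a toy prediction list, independent of the dataset; `scania_cost` is an illustrative helper name:

```python
def scania_cost(y_true, y_pred, fp_cost=10, fn_cost=500):
    """Total cost: fp_cost per false alarm (Type 1), fn_cost per miss (Type 2)."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return fp_cost * fp + fn_cost * fn

# toy example: 2 false positives and 1 false negative
y_true_toy = [0, 0, 1, 1, 0, 1]
y_pred_toy = [1, 1, 1, 0, 0, 1]
print(scania_cost(y_true_toy, y_pred_toy))  # 2*10 + 1*500 = 520
```

# The 50x weight on misses is why a model with good overall accuracy can still score badly on this metric.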
Codes/Approach 2 (Logistic Regression).ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import sys sys.path.append("/Users/PRVATE/Documents/tf_transformers/src/") # + from transformers import TFBertModel from tf_transformers.models import BERTEncoder import tensorflow as tf import json from tf_transformers.utils import convert_bert_hf_to_tf_transformers import os # + # Load HF model # Always do this tf.keras.backend.clear_session() model_hf_location = '/Users/PRVATE/HUggingFace_Models/bert_base_uncased/' model_hf = TFBertModel.from_pretrained(model_hf_location) # + # Load tf_transformers model # Most config we will be providing # Default configs for the model # Default configs for the model model_config_dir = '/Users/PRVATE/Documents/tf_transformers/model_configs/' model_name = 'bert_base' config_location = os.path.join(model_config_dir, model_name, 'config.json') config = json.load(open(config_location)) # Always do this tf.keras.backend.clear_session() # Always do this tf.keras.backend.clear_session() # tf_transformers Layer (an extension of Keras Layer) # This is not Keras model, but extension of keras Layer model_layer = BERTEncoder(config=config, name='bert', mask_mode=config['mask_mode'], is_training=False, use_dropout=False, ) # model_dir = None, because we have not initialized the model with proper variable values model_tf_transformers = model_layer.get_and_load_model(model_dir=None) # - convert_bert_hf_to_tf_transformers(model_hf, model_tf_transformers, config) # + # Please have a look at tf_transformers/extra/reference_bert_cased.py for reference values input_ids = tf.constant([[1, 9, 10, 11, 23], [1, 22, 234, 432, 2349]]) input_mask = tf.ones_like(input_ids) input_type_ids = tf.ones_like(input_ids) inputs = {'input_ids': input_ids, 'input_mask': input_mask, 'input_type_ids': input_type_ids} results_tf_transformers = 
model_tf_transformers(inputs) for k, r in results_tf_transformers.items(): if isinstance(r, list): continue print(k, '-->', tf.reduce_sum(r), '-->', r.shape) # + # Huggingface Model input_ids = tf.constant([[1, 9, 10, 11, 23], [1, 22, 234, 432, 2349]]) input_mask = tf.ones_like(input_ids) input_type_ids = tf.ones_like(input_ids) results_hf = model_hf([input_ids, input_mask, input_type_ids]) for k in results_hf: print(k) print(tf.reduce_sum(results_hf[k]), '-->', results_hf[k].shape) # - model_tf_transformers.save_checkpoint("/Users/PRVATE/tf_transformers_models/bert_base_uncased")
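# A typical last step after a weight conversion is confirming the two models produce (near-)identical outputs. A generic, framework-free sketch of that tolerance check (the output values below are made up for illustration):

```python
import math

def outputs_match(a, b, rel_tol=1e-5, abs_tol=1e-5):
    """Element-wise closeness check between two flat lists of floats."""
    return len(a) == len(b) and all(
        math.isclose(x, y, rel_tol=rel_tol, abs_tol=abs_tol) for x, y in zip(a, b)
    )

hf_out = [0.123456, -1.999999, 3.000001]   # hypothetical HuggingFace outputs
tft_out = [0.123455, -2.000000, 3.000000]  # hypothetical converted-model outputs
print(outputs_match(hf_out, tft_out))  # True
```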
src/tf_transformers/notebooks/conversion_scripts/1_convert_bert_from_huggingface.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# ## Script to concatenate all .csv files in a folder into one .csv file

# Jeff used for RD orders

import os
import csv

path = " " # insert path

# +
directories = [dirs for dirs in os.listdir(path) if os.path.isdir(os.path.join(path, dirs))]
print(len(directories))
for dirs in directories:
    print(dirs)

# output file
new_file = 'combined_file.csv'

# walk each subfolder and append every row of every .csv to the output file,
# opening the output once instead of once per row
with open(new_file, 'w', newline='') as newfile:
    file_writer = csv.writer(newfile)
    for dirs in directories:
        dir_path = os.path.join(path, dirs)
        for filename in os.listdir(dir_path):
            if filename.endswith('.csv'):
                with open(os.path.join(dir_path, filename), newline='') as csvfile:
                    file_reader = csv.reader(csvfile, delimiter=',')
                    for row in file_reader:
                        file_writer.writerow(row)
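# A self-contained sketch of the same approach, using a hypothetical folder layout built in a temporary directory so it can be run anywhere:

```python
import csv
import os
import tempfile

def combine_csvs(root, out_path):
    """Append every row of every .csv found in root's subfolders to out_path."""
    with open(out_path, 'w', newline='') as out:
        writer = csv.writer(out)
        for dirname in sorted(os.listdir(root)):
            dir_path = os.path.join(root, dirname)
            if not os.path.isdir(dir_path):
                continue  # skip plain files, including the output itself
            for filename in sorted(os.listdir(dir_path)):
                if filename.endswith('.csv'):
                    with open(os.path.join(dir_path, filename), newline='') as f:
                        writer.writerows(csv.reader(f))

# demo on a throwaway directory tree with two subfolders
with tempfile.TemporaryDirectory() as root:
    for d, rows in [('a', [['1', 'x']]), ('b', [['2', 'y'], ['3', 'z']])]:
        os.makedirs(os.path.join(root, d))
        with open(os.path.join(root, d, 'part.csv'), 'w', newline='') as f:
            csv.writer(f).writerows(rows)
    out = os.path.join(root, 'combined.csv')
    combine_csvs(root, out)
    with open(out, newline='') as f:
        combined_rows = list(csv.reader(f))
print(combined_rows)  # [['1', 'x'], ['2', 'y'], ['3', 'z']]
```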
combine_csv_files.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# ## Exercise: Exploratory Data Analysis with Python
#
# In this exercise, you will perform an exploratory analysis of one of the most famous Machine Learning datasets, the iris dataset, which contains measurements of 3 types of plants. This dataset is commonly used in classification problems, where the goal is to predict the class of the data — here, the category of a plant from its sepal and petal measurements.
#
# Each cell states the task to be performed. Complete the whole exercise and then compare with the proposed solution.
#
# Dataset (already shipped with Scikit-Learn): https://archive.ics.uci.edu/ml/datasets/iris

# +
# Imports
import time
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from sklearn.datasets import load_iris

# %matplotlib inline
fontsize = 14
ticklabelsize = 14
# -

# Load the dataset
iris = load_iris()
df = pd.DataFrame(iris.data, columns=iris.feature_names)
print(len(df))
df.head()

# ## Data Extraction and Transformation

# Print the names of the target variable (what we want to predict),
# one of 3 possible plant categories: setosa, versicolor or virginica
iris.target_names

# Print the numeric values of the target variable (what we want to predict),
# one of 3 possible categories: 0, 1 or 2
iris.target

# Add a new column to the dataset with the species names, since that is what we will try to predict (target variable)
df['especie'] = pd.Categorical.from_codes(iris.target, iris.target_names)
df.head()

# Add a column to the dataset with the numeric values of the target variable
df['target'] = iris.target
df.head()

# Extract and print the features (attributes) of the dataset
features = df.columns[:4]
features

# Compute the mean of each feature for the 3 classes
df.groupby('target').mean().T

# ## Data Exploration

# Print a transpose of the dataset (turn rows into columns and columns into rows)
df.T

# Use the info function to get a summary of the dataset
df.info()

# Produce summary statistics for the dataset
df.describe()

# Check whether there are null values in the dataset
df.isnull().sum(axis=0)

# Count the values of sepal length
df['sepal length (cm)'].value_counts(dropna=False)

# ## Plot

# Create a histogram of sepal length
# (note: the histogram is taken over the values themselves, not over value_counts())
plt.figure(figsize=(12,8), dpi=80)
df['sepal length (cm)'].hist(bins=range(15))
plt.xlabel('sepal length (cm)', fontsize=fontsize)
plt.ylabel('Frequency', fontsize=fontsize)
plt.title('Histogram', fontsize=fontsize)

# +
# Create a scatter plot of sepal length versus row number,
# colored by markers of the target variable
plt.figure(figsize=(12,8), dpi=80)
plt.scatter(range(len(df)), df['sepal length (cm)'], c=df['target'])
plt.xlabel('Row number', fontsize=fontsize)
plt.ylabel('Sepal length (cm)', fontsize=fontsize)
plt.title('Scatter Plot', fontsize=fontsize)

# +
# Create a scatter plot of 2 features (attributes)
plt.figure(figsize=(12,8), dpi=80)
plt.scatter(df['petal length (cm)'], df['petal width (cm)'], c=df['target'])
plt.xlabel('Petal length (cm)', fontsize=fontsize)
plt.ylabel('Petal width (cm)', fontsize=fontsize)
plt.title('Scatter Plot of Two Features', fontsize=fontsize)
# -

# Create a scatter matrix of the features (attributes)
df.columns
atributos = ['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']
pd.plotting.scatter_matrix(df[atributos], figsize=(16,12))

# Create a histogram of all features
df.hist(figsize=(12,12))
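The `groupby('target').mean()` step used above can be checked by hand on a tiny frame. The numbers below are made up for illustration, not actual iris measurements:

```python
import pandas as pd

toy = pd.DataFrame({
    "petal length (cm)": [1.4, 1.3, 4.7, 4.5, 6.0, 5.9],
    "target": [0, 0, 1, 1, 2, 2],
})
# One row per class, one column per feature; .T flips it like in the exercise
means = toy.groupby("target").mean()
print(means.T)
```

Each cell of `means` is the average of the feature over the rows belonging to that class, which is exactly what the transposed per-class table in the exercise shows.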
09-Introducao_a_Analise_de_Dados_com_Python/02-Exercicio.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline import pandas as pd import matplotlib.pyplot as plt import seaborn as sns # Import data from github # + import pandas as pd import requests import io url = "https://raw.githubusercontent.com/WeiyiDeng/jupyter-tutorial/master/fortune500.csv" # Make sure the url is the raw version of the file on GitHub download = requests.get(url).content df = pd.read_csv(io.StringIO(download.decode('utf-8'))) print(df.head()) # - df.tail() df.columns = ['year', 'rank', 'company', 'revenue', 'profit'] df.head() len(df) df.dtypes # Check profit column, should be all int non_numberic_profits = df.profit.str.contains('[^0-9.-]') df.loc[non_numberic_profits].head() len(df.profit[non_numberic_profits]) bin_sizes, _, _ = plt.hist(df.year[non_numberic_profits], bins=range(1955, 2006)) df = df.loc[~non_numberic_profits] len(df) df.dtypes df["profit"] = pd.to_numeric(df["profit"]) df.dtypes # + group_by_year = df.loc[:, ['year', 'revenue', 'profit']].groupby('year') avgs = group_by_year.mean() x = avgs.index y1 = avgs.profit def plot(x, y, ax, title, y_label): ax.set_title(title) ax.set_ylabel(y_label) ax.plot(x, y) ax.margins(x=0, y=0) fig, ax = plt.subplots() plot(x, y1, ax, 'Increase in mean Fortune 500 company profits from 1955 to 2005', 'Profit (millions)') # - y2 = avgs.revenue fig, ax = plt.subplots() plot(x, y2, ax, 'Increase in mean Fortune 500 company revenues from 1955 to 2005', 'Revenue (millions)') # + def plot_with_std(x, y, stds, ax, title, y_label): ax.fill_between(x, y - stds, y + stds, alpha=0.2) plot(x, y, ax, title, y_label) fig, (ax1, ax2) = plt.subplots(ncols=2) title = 'Increase in mean and std Fortune 500 company %s from 1955 to 2005' stds1 = group_by_year.std().profit.values stds2 = group_by_year.std().revenue.values 
plot_with_std(x, y1.values, stds1, ax1, title % 'profits', 'Profit (millions)') plot_with_std(x, y2.values, stds2, ax2, title % 'revenues', 'Revenue (millions)') fig.set_size_inches(14, 4) fig.tight_layout() # - # [Implemented example from the tutorial in this web page](https://www.dataquest.io/blog/jupyter-notebook-tutorial/)
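The non-numeric `profit` cleanup in this notebook filters rows with a regex mask and then converts the rest. The same effect can be had in one step with `pd.to_numeric(errors="coerce")`, which turns unparseable entries into NaN instead of dropping them. A small sketch with made-up values:

```python
import pandas as pd

profits = pd.Series(["100.5", "N.A.", "42", "-3.1"])

# Two-step version, mirroring the notebook: mask anything with a non-numeric character
mask = profits.str.contains("[^0-9.-]")
cleaned = pd.to_numeric(profits[~mask])

# One-step alternative: bad values become NaN rather than being removed
coerced = pd.to_numeric(profits, errors="coerce")
```

Whether to drop or coerce depends on what comes next: dropping (as the notebook does) keeps the column free of NaN, while coercing preserves row alignment with the rest of the frame.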
TF/.ipynb_checkpoints/tryExample-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Scraping the issue pages # anaconda python 3.7.6 from time import sleep from selenium import webdriver # version 3.141.0 import pandas as pd # version 1.0.1 from bs4 import BeautifulSoup # version 4.8.2 import seaborn as sns # version 0.10.0 import matplotlib.pyplot as plt # version 3.1.3 # make the list of pages to scrape page_lst = ['https://github.com/uchicago-computation-workshop/Fall2020/issues/1'] # make the list of workshop dates date_lst = ["2020-10-01"] # + def scrape_github_issues(page_lst, date_lst): # initialize the webdriver driver = webdriver.Chrome() driver.maximize_window() # intialize an empty dataframe to hold the data df = pd.DataFrame() # loop thorough pages and scrape for idx, url in enumerate(page_lst): tmp = scrape_github_issue(url, driver, date_lst[idx]) df = df.append(tmp) driver.close() return df def scrape_github_issue(url, driver, date): # load the url and click on the "load more" button to get the full list of comments driver.get(url) driver.find_element_by_class_name('ajax-pagination-btn').click() # wait for the page to load: kind of hacky but also prevents DDOS attacking GitHub sleep(4) # use beautifulsoup to get the html soup = BeautifulSoup(driver.page_source) # parse the fist element because it is the preceptor all_comments = soup.find_all("div", class_="ml-n3 timeline-comment unminimized-comment comment previewable-edit js-task-list-container js-comment timeline-comment--caret")[1:] # create a list of dictionaries holding the scraped stuff lst = make_lst(all_comments, date) return pd.DataFrame(lst) def make_lst(all_comments, date): # initialize empty list lst = [] # loop through results from find_all and extract the data, put it in a dictionary and append to the list for idx, comment in 
enumerate(all_comments): dict = {} dict["name"] = comment.find("a", class_="author link-gray-dark css-truncate-target width-fit").text dict["time"] = comment.find("a", class_="link-gray js-timestamp").find("relative-time")["datetime"] dict["workshop_date"] = date dict["position"] = idx dict["text"] = comment.find("td", "d-block comment-body markdown-body js-comment-body").text.replace("\n", " ").strip() temp_upvote = comment.find("div", class_="comment-reactions-options") # if the comment got no upvote such element would not exist if temp_upvote is None: dict["num_upvotes"] = 0 else: dict["num_upvotes"] = int(temp_upvote.text.split()[1]) lst.append(dict) return lst def get_time_to_deadline(row): # because of the summer time which I still do not understand I need a separate function for this # ended at November 3rd if row['workshop_date'] in ["2019-10-10", "2019-10-17", "2019-10-24", "2019-10-31", "2020-04-09", "2020-04-30", "2020-05-07", "2020-05-14", "2020-05-21", "2020-05-28", "2020-10-01"]: return pd.to_datetime(row['time']) - pd.to_datetime(row['workshop_date'] + '-06', utc=True) else: return pd.to_datetime(row['time']) - pd.to_datetime(row['workshop_date'] + '-05', utc=True) # - df = scrape_github_issues(page_lst, date_lst) df['time_to_deadline'] = df.apply(get_time_to_deadline, axis=1).astype('timedelta64[m]') df = df[df['time_to_deadline'] <= 0].copy() df.to_pickle('scraped_data_1001.pkl') fig, ax = plt.subplots(figsize=(20,10)) sns.regplot(data=df, x='position', y='num_upvotes', ax=ax, color="black", line_kws={'color':'blue'}) sns.despine() ax.set_xlabel('Position in Thread', size=20) ax.set_ylabel('Number of Upvotes', size=20) ax.set_title('Primacy Effect 2020/10/01 Scraped at 10:22 AM Central', size=20, loc='left')
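The `get_time_to_deadline` logic — a midnight deadline at a fixed UTC offset, with the offset depending on whether the workshop date falls under daylight saving — can be sketched with the standard library alone. The function name and date values below are illustrative, not taken from the scraped data:

```python
from datetime import datetime, timedelta, timezone

def minutes_to_deadline(comment_iso_utc, workshop_date, offset_hours):
    """Minutes from a UTC comment timestamp to midnight of workshop_date
    at the given fixed UTC offset (e.g. -5 or -6 for US Central)."""
    deadline = datetime.fromisoformat(workshop_date).replace(
        tzinfo=timezone(timedelta(hours=offset_hours)))
    posted = datetime.fromisoformat(comment_iso_utc.replace("Z", "+00:00"))
    return (posted - deadline) / timedelta(minutes=1)

# A comment posted 7 hours before a midnight UTC-6 deadline
print(minutes_to_deadline("2020-09-30T23:00:00Z", "2020-10-01", -6))  # -420.0
```

A negative result means the comment was posted before the deadline, which matches the `time_to_deadline <= 0` filter applied after scraping.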
quick_analysis_1001.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import rioxarray as rio import xarray as xr import glob import os import numpy as np import requests import geopandas as gpd from pathlib import Path from datetime import datetime from rasterio.enums import Resampling import matplotlib.pyplot as plt # %matplotlib inline site = "PDR" # Change site name chirps_seas_out_dir = Path('/home/serdp/rhone/rhone-ecostress/rasters/ee_season_precip_data_pdr') eeflux_seas_int_out_dir = Path('/home/serdp/rhone/rhone-ecostress/rasters/ee_growing_season_integrated_pdr') chirps_wy_out_dir = Path('/home/serdp/rhone/rhone-ecostress/rasters/wy_total_chirps_pdr') eeflux_seas_int_out_dir = Path('/home/serdp/rhone/rhone-ecostress/rasters/ee_season_mean_pdr') all_scenes_f_precip = Path('/scratch/waves/rhone-ecostress/rasters/chirps-clipped') all_scenes_f_et = Path('/home/serdp/rhone/rhone-ecostress/rasters/eeflux/PDR') # Change file path based on site all_precip_paths = list(all_scenes_f_precip.glob("*")) all_et_paths = list(all_scenes_f_et.glob("*.tif")) # Variable name agnostic to site? # + # for some reason the fll value is not correct. 
this is the correct bad value to mask by testf = all_precip_paths[0] x = rio.open_rasterio(testf) badvalue = np.unique(x.where(x != x._FillValue).sel(band=1))[0] def chirps_path_date(path): _, _, year, month, day, _ = path.name.split(".") day = day.split("-")[0] return datetime(int(year), int(month), int(day)) def open_chirps(path): data_array = rio.open_rasterio(path) #chunks makes i lazyily executed data_array = data_array.sel(band=1).drop("band") # gets rid of old coordinate dimension since we need bands to have unique coord ids data_array["date"] = chirps_path_date(path) # makes a new coordinate return data_array.expand_dims({"date":1}) # makes this coordinate a dimension ### data is not tiled so not a good idea to use chunking #https://github.com/pydata/xarray/issues/2314 import rasterio with rasterio.open(testf) as src: print(src.profile) len(all_precip_paths) * 41.7 / 10e3 # convert from in to mm # %timeit open_chirps(testf) all_daily_precip_path = "/home/serdp/ravery/rhone-ecostress/netcdfs/all_chirps_daily_i.nc" if Path(all_daily_precip_path).exists(): all_chirps_arr = xr.open_dataarray(all_daily_precip_path) all_chirps_arr = all_chirps_arr.sortby("date") else: daily_chirps_arrs = [] for path in all_precip_paths: daily_chirps_arrs.append(open_chirps(path)) all_chirps_arr = xr.concat(daily_chirps_arrs, dim="date") all_chirps_arr = all_chirps_arr.sortby("date") all_chirps_arr.to_netcdf(all_daily_precip_path) def eeflux_path_date(path): year, month, day, et = path.name.split("_") # Change this line accordingly based on format of eeflux dates return datetime(int(year), int(month), int(day)) def open_eeflux(path, da_for_match): data_array = rio.open_rasterio(path) #chunks makes i lazyily executed data_array.rio.reproject_match(da_for_match) data_array = data_array.sel(band=1).drop("band") # gets rid of old coordinate dimension since we need bands to have unique coord ids data_array["date"] = eeflux_path_date(path) # makes a new coordinate return 
data_array.expand_dims({"date":1}) # makes this coordinate a dimension # The following lines seem to write the lists of rasters to netcdf files? Do we need to replicate for chirps? da_for_match = rio.open_rasterio(all_et_paths[0]) daily_eeflux_arrs = [open_eeflux(path, da_for_match) for path in all_et_paths] all_eeflux_arr = xr.concat(daily_eeflux_arrs, dim="date") all_daily_eeflux_path = "/home/serdp/ravery/rhone-ecostress/netcdfs/all_eeflux_daily_i.nc" all_eeflux_arr.to_netcdf(all_daily_eeflux_path) all_eeflux_arr[-3,:,:].plot.imshow() all_eeflux_arr = all_eeflux_arr.sortby("date") # - ey = max(all_eeflux_arr['date.year'].values) all_eeflux_arr['date.dayofyear'].values # + # THIS IS IMPORTANT def years_list(all_arr): ey = max(all_arr['date.year'].values) sy = min(all_arr['date.year'].values) start_years = range(sy, ey) end_years = range(sy+1, ey+1) # Change to sy+1, ey+1 for across-calendar-year (e.g. winter) calculations return list(zip(start_years, end_years)) def group_by_custom_doy(all_arr, doy_start, doy_end): start_end_years = years_list(all_arr) water_year_arrs = [] for water_year in start_end_years: start_mask = ((all_arr['date.dayofyear'].values > doy_start) & (all_arr['date.year'].values == water_year[0])) end_mask = ((all_arr['date.dayofyear'].values < doy_end) & (all_arr['date.year'].values == water_year[1])) water_year_arrs.append(all_arr[start_mask | end_mask]) # | = or, & = and return water_year_arrs def group_by_season(all_arr, doy_start, doy_end): yrs = np.unique(all_arr['date.year']) season_arrs = [] for yr in yrs: start_mask = ((all_arr['date.dayofyear'].values >= doy_start) & (all_arr['date.year'].values == yr)) end_mask = ((all_arr['date.dayofyear'].values <= doy_end) & (all_arr['date.year'].values == yr)) season_arrs.append(all_arr[start_mask & end_mask]) return season_arrs # - # THIS IS IMPORTANT doystart = 125 # Edit these variables to change doy length of year doyend = 275 # eeflux_water_year_arrs = group_by_custom_doy(all_eeflux_arr, 
doystart, doyend)  # Replaced by eeflux_seas_arrs below
chirps_water_year_arrs = group_by_custom_doy(all_chirps_arr, doyend, doystart)
eeflux_seas_arrs = group_by_season(all_eeflux_arr, doystart, doyend)

eeflux_seas_arrs

chirps_water_year_arrs[-1]

fig = plt.figure()
plt.plot(wy_list, [arr.mean() for arr in chirps_wy_sums], '.')
plt.ylabel('WY Precipitation (mm)')

# +
# Creates figure of ET availability
group_counts = list(map(lambda x: len(x['date']), water_year_arrs))
year_tuples = years_list(all_eeflux_arr)
indexes = np.arange(len(year_tuples))
plt.bar(indexes, group_counts)
degrees = 80
plt.xticks(indexes, year_tuples, rotation=degrees, ha="center")
plt.title("Availability of EEFLUX between DOY 125 and 275")
plt.savefig("eeflux_availability.png")
# Figure below shows empty years in 85, 88, 92, 93, 96; no winter precip rasters generated for these years b/c no ET data w/in winter window
# -

water_year_arrs[0]['date.year']

def sum_seasonal_precip(precip_arr, eeflux_group_arr):
    return precip_arr.sel(date=slice(eeflux_group_arr.date.min(), eeflux_group_arr.date.max())).sum(dim="date")
# This is matching up precip w/ available ET window for each year

# NEED TO DEFINE SITE GLOBAL VAR
for index, eeflux_group in enumerate(eeflux_seas_arrs):
    if len(eeflux_group['date']) > 0:
        seasonal_precip = sum_seasonal_precip(all_chirps_arr, eeflux_group)  # Variable/array name matters here
        seasonal_et = eeflux_group.integrate(coord="date", datetime_unit="D")
        year = eeflux_group[0]['date.year'].values[0]
        et_doystart = eeflux_group['date.dayofyear'].values[0]
        et_doyend = eeflux_group['date.dayofyear'].values[-1]
        pname = os.path.join(chirps_seas_out_dir, f"seas_chirps_{site}_{year}_{et_doystart}_{et_doyend}.tif")  # Edit output raster labels
        eename = os.path.join(eeflux_seas_int_out_dir, f"seasonal_eeflux_integrated_{site}_{year}_{et_doystart}_{et_doyend}.tif")
        seasonal_precip.rio.to_raster(pname)
        seasonal_et.rio.to_raster(eename)
# This chunk actually outputs the rasters

## Elmera Additions for winter precip:
for index, (eeflux_group, chirps_group) in enumerate(zip(eeflux_seas_arrs, chirps_water_year_arrs[3:])):  # changed eeflux_group to eeflux_seas_arrs & changed from water_year_arrs to season_arrs
    if len(eeflux_group['date']) > 0:  # eeflux_group to eeflux_seas_arrs
        mean_seas_et = eeflux_group.mean(dim='date', skipna=False)
        chirps_wy_sum = chirps_group.sum(dim='date', skipna=False)
        # seasonal_precip = sum_seasonal_precip(chirps_water_year_arrs, eeflux_seas_arr)  # Here's where above fxn is applied to rasters, need to replace eeflux_group
        year = eeflux_group[0]['date.year'].values[0]
        pname = os.path.join(chirps_wy_out_dir, f"wy_total_chirps_{site}_{year}.tif")  # Edit output raster labels
        eename = os.path.join(eeflux_seas_int_out_dir, f"mean_daily_seas_et_{site}_{year}.tif")
        chirps_wy_sum.rio.to_raster(pname)
        mean_seas_et.rio.to_raster(eename)
# This chunk actually outputs the rasters, ET lines removed - including seasonal_precip line?
[arr['date.year'] for arr in chirps_water_year_arrs] seasonal_precip # This just shows the array - corner cells have empty values b/c of projection mismatch @ edge of raster water_year_arrs[0][0].plot.imshow() water_year_arrs[0].integrate(dim="date", datetime_unit="D").plot.imshow() # This chunk does the actual integration all_eeflux_arr.integrate(dim="date", datetime_unit="D") # + import pandas as pd import numpy as np labels = ['<=2', '3-9', '>=10'] bins = [0,2,9, np.inf] pd.cut(all_eeflux_arr, bins, labels=labels) # - all_eeflux_arr # + import pandas as pd all_scene_ids = [str(i) for i in list(all_scenes_f.glob("L*"))] df = pd.DataFrame({"scene_id":all_scene_ids}).reindex() split_vals_series = df.scene_id.str.split("/") dff = pd.DataFrame(split_vals_series.to_list(), columns=['_', '__', '___', '____', '_____', '______', 'fname']) df['date'] = dff['fname'].str.slice(10,18) df['pathrow'] = dff['fname'].str.slice(4,10) df['sensor'] = dff['fname'].str.slice(0,4) df['datetime'] = pd.to_datetime(df['date']) df = df.set_index("datetime").sort_index() # - marc_df = df['2014-01-01':'2019-12-31'] marc_df = marc_df[marc_df['sensor']=="LC08"] x.where(x != badvalue).sel(band=1).plot.imshow() # + # Evan additions year_tuples = years_list(all_eeflux_arr) year_tuples # - # Winter precip calculations year_tuples_p = years_list(all_chirps_arr) year_tuples_p def group_p_by_custom_doy(all_chirps_arr, doy_start, doy_end): start_end_years = years_list(all_chirps_arr) water_year_arrs = [] for water_year in start_end_years: start_mask = ((all_chirps_arr['date.dayofyear'].values > doy_start) & (all_chirps_arr['date.year'].values == water_year[0])) end_mask = ((all_chirps_arr['date.dayofyear'].values < doy_end) & (all_chirps_arr['date.year'].values == water_year[0])) water_year_arrs.append(all_chirps_arr[start_mask | end_mask]) return water_year_arrs doystart = 275 # Edit these variables to change doy length of year doyend = 125 water_year_arrs = group_p_by_custom_doy(all_chirps_arr, 
doystart, doyend) water_year_arrs def sum_seasonal_precip(precip_arr, eeflux_group_arr): return precip_arr.sel(date=slice(eeflux_group_arr.date.min(), eeflux_group_arr.date.max())).sum(dim="date") # This is matching up precip w/ available ET window for each year, need to figure out what to feed in for 2nd variable for index, eeflux_group in enumerate(water_year_arrs): if len(eeflux_group['date']) > 0: seasonal_precip = sum_seasonal_precip(all_chirps_arr, eeflux_group) # Here's where above fxn is applied to rasters, need to replace eeflux_group year_range = year_tuples_p[index] pname = f"winter_chirps_{year_range[0]}_{year_range[1]}_{doystart}_{doyend}.tif" #Edit output raster labels seasonal_precip.rio.to_raster(pname) # This chunk actually outputs the rasters, ET lines removed
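The custom day-of-year grouping used throughout this notebook — masking dates from `doy_start` in one year through `doy_end` of the next — can be exercised on plain pandas timestamps without any rasters. The date range below is arbitrary:

```python
import pandas as pd

dates = pd.Series(pd.date_range("2018-01-01", "2020-12-31", freq="MS"))

def in_water_year(dates, start_year, doy_start=275, doy_end=125):
    """Mask for dates after doy_start of start_year OR before doy_end of the
    following year -- the same cross-calendar window as group_by_custom_doy."""
    doy, year = dates.dt.dayofyear, dates.dt.year
    start_mask = (doy > doy_start) & (year == start_year)
    end_mask = (doy < doy_end) & (year == start_year + 1)
    return start_mask | end_mask

wy_2018 = dates[in_water_year(dates, 2018)]  # Nov 2018 through early May 2019
```

The `|` between the two masks is what lets the window cross the calendar-year boundary; the within-year `group_by_season` variant uses `&` with `>=`/`<=` instead.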
notebooks/.ipynb_checkpoints/integrate_seasonal_et_p-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # CSCI 4850/5850 Group 7 # Cordell, Gailbreath, Norrod, Swindell # Milestone 3/15/21 # | Deliverable | % Complete | Estimated Completion Date | % Complete by Next Milestone | # | :- | :-: | :-: | :-: | # | Code | 0% | April 19 | 25% | # | Paper | 0% | April 21 | 15% | # | Demo | 0% | May 1 | 5% | # | Presentation | 0% | May 1 | 5% | # | What deliverable goals established in the last milestone report were accomplished to the anticipated percentage? | # | :- | # | Since this is the first milestone report, no anticipated percentages have been made yet. | # | What deliverable goals established in the last milestone report were not accomplished to the anticipated percentage? | # | :- | # | Since this is the first milestone report, no anticipated percentages have been made yet. | # | What are the main deliverable goals to meet before the next milestone report, and who is working on them? | # | :- | # | We mainly need to get started on the code, as well as starting the foundations of the paper. | # | The code is primarily being worked on by Jesse, with input from the rest of the group. | # | The paper is being led by Grayson, with help from Noah and the rest of the group. | # | As the code is being worked on, Jacob will be working on the demo. | # Milestone 3/22/21 # | Deliverable | % Complete | Estimated Completion Date | % Complete by Next Milestone | # | :- | :-: | :-: | :-: | # | Code | 20% | April 19 | 25% | # | Paper | 5% | April 21 | 15% | # | Demo | 0% | May 1 | 5% | # | Presentation | 0% | May 1 | 5% | # | What deliverable goals established in the last milestone report were accomplished to the anticipated percentage? | # | :- | # | We accidentally made our estimates for 29th instead of the 22nd, so they were a bit over-ambitious. 
| # | We have made significant headway on the code portion so far. | # | What deliverable goals established in the last milestone report were not accomplished to the anticipated percentage? | # | :- | # | We accidentally made our estimates for 29th instead of the 22nd, so they were a bit over-ambitious. | # | We have not yet started on the demo or presentation. | # | What are the main deliverable goals to meet before the next milestone report, and who is working on them? | # | :- | # | The code is coming along nicely, primarily led by Jesse, with input from the rest of the group. | # | We will get started on the paper soon, led by Grayson with help from Noah and the rest of the group. | # | Jacob will be working on the demo once more code is completed. | # Milestone 3/29/21 # | Deliverable | % Complete | Estimated Completion Date | % Complete by Next Milestone | # | :- | :-: | :-: | :-: | # | Code | 20% | April 19 | 35% | # | Paper | 10% | April 21 | 20% | # | Demo | 0% | May 1 | 5% | # | Presentation | 2% | May 1 | 5% | # | What deliverable goals established in the last milestone report were accomplished to the anticipated percentage? | # | :- | # | While we did make progress in the past week, we did not reach accomplish any of the anticipated percentages. | # | What deliverable goals established in the last milestone report were not accomplished to the anticipated percentage? | # | :- | # | We made some progress on the code and paper, but not the anticipated amount. | # | We have also not started on the demo and have just started the presentation. | # | What are the main deliverable goals to meet before the next milestone report, and who is working on them? | # | :- | # | Jesse is still leading development of the code, and that is obviously still the main focus at the moment. | # | We also need to have the data set prepared by the next milestone report. 
| # | We have continued to collect resources for the paper, which will be led by Grayson with help from Noah and the rest of the group. | # | For now we can't really do much in terms of the demo, but Jacob will be leading it once we have enough of the code complete. | # Milestone 4/5/21 # | Deliverable | % Complete | Estimated Completion Date | % Complete by Next Milestone | # | :- | :-: | :-: | :-: | # | Code | 25% | April 19 | 35% | # | Paper | 16% | April 21 | 20% | # | Demo | 0% | May 1 | 0% | # | Presentation | 2% | May 1 | 5% | # | What deliverable goals established in the last milestone report were accomplished to the anticipated percentage? | # | :- | # | While we did make progress in the past week, we did not reach accomplish any of the anticipated percentages. | # | What deliverable goals established in the last milestone report were not accomplished to the anticipated percentage? | # | :- | # | We made some progress on the code and paper, but not the anticipated amount. | # | We have also not started on the demo and have not worked more on the presentation this week. | # | What are the main deliverable goals to meet before the next milestone report, and who is working on them? | # | :- | # | Jesse made a lot of progress on the code, but we hit a snag that set us back a bit. | # | We have started the actual writing phase of the paper, which will be led by Grayson with help from Noah and the rest of the group. | # | For now we can't really do much in terms of the demo, but Jacob will be leading it once we have enough of the code complete. | # Milestone 4/12/21 # | Deliverable | % Complete | Estimated Completion Date | % Complete by Next Milestone | # | :- | :-: | :-: | :-: | # | Code | 60% | April 19 | 100% | # | Paper | 16% | April 21 | 33% | # | Demo | 0% | May 1 | 25% | # | Presentation | 2% | May 1 | 5% | # | What deliverable goals established in the last milestone report were accomplished to the anticipated percentage? 
| # | :- | # | We not only met our expected goal for the code, we have exceeded it. | # | We now have the data set ready and are working on preparing the network around it. | # | What deliverable goals established in the last milestone report were not accomplished to the anticipated percentage? | # | :- | # | Despite our progress on the code, we didn't really progress on the other deliverables this week. | # | What are the main deliverable goals to meet before the next milestone report, and who is working on them? | # | :- | # | Jesse and Grayson made significant progress on the code so far, and will keep up the good work. | # | We have the structure for the paper, which is being led by Grayson and Noah, but will be adding more content now that the code is more complete. | # | Now that the code is nearly done, Jacob will be leading the demo. | # | Jacob is also putting together an explanation of the data augmentation that will be part of the paper and demo. | # Milestone 4/19/21 # | Deliverable | % Complete | Estimated Completion Date | % Complete by Next Milestone | # | :- | :-: | :-: | :-: | # | Code | 95% | April 23 | 100% | # | Paper | 25% | April 23 | 100% | # | Demo | 0% | May 1 | 25% | # | Presentation | 2% | May 1 | 5% | # | What deliverable goals established in the last milestone report were accomplished to the anticipated percentage? | # | :- | # | We did not hit any of our anticipated completion percentages this week. | # | What deliverable goals established in the last milestone report were not accomplished to the anticipated percentage? | # | :- | # | The code is almost done now, and we're dedicating this week to work on the paper and finishing the last 5% of the code. | # | With those done, we should be able to tackle the demo and presentation shortly after. | # | What are the main deliverable goals to meet before the next milestone report, and who is working on them? | # | :- | # | Jesse will be working on the finishing touches to the code. 
| # | All four of us will be working on the paper. | # | We will probably not resume working on the demo and presentation until this weekend. | # Milestone 4/26/21 # | Deliverable | % Complete | Estimated Completion Date | % Complete by Next Milestone | # | :- | :-: | :-: | :-: | # | Code | 100% | April 23 | 100% | # | Paper | 100% | April 23 | 100% | # | Demo | 0% | May 1 | 100% | # | Presentation | 2% | May 1 | 100% | # | What deliverable goals established in the last milestone report were accomplished to the anticipated percentage? | # | :- | # | The code and paper are done. | # | What deliverable goals established in the last milestone report were not accomplished to the anticipated percentage? | # | :- | # | We didn't work on the demo or presentation this past week but that's what we'll be focusing on this week. | # | What are the main deliverable goals to meet before the next milestone report, and who is working on them? | # | :- | # | We're going to be all-hands-on-deck working on the demo and presentation. |
Project_Milestones/4850 Group 7 Milestones.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + #word2vec from gensim.models import KeyedVectors from gensim.models import Word2Vec from gensim.test.utils import datapath from collections import Counter import operator import collections #import fasttext.util #utils import pandas as pd import numpy as np import matplotlib.pyplot as plt import sys #src #root_project = "/content/ReSt/" root_project = "/Users/Alessandro/Dev/repos/ReSt/" #root_project = "/home/jupyter/ReSt/" sys.path.append(root_project) from src.data.utils import load_csv_to_dict, dtype, dtype_transformation, set_unkmark_token from src.data.word_embedding import get_index_key_association, get_int_seq, build_keras_embedding_matrix, get_data_to_emb # %load_ext autoreload # %autoreload 2 # - # # Dataset #path #dataset_path = "/Users/Alessandro/Dev/repos/ReSt/dataset/haspeede2/preprocessed/reference/reference_tweets.csv" dataset_path = root_project + 'dataset/haspeede2/preprocessed/dev/dev.csv' w2v_bin_path = root_project + 'results/model/word2vec/twitter128.bin' #w2v_bin_path = root_project + 'results/model/word2vec/tweets_2019_Word2Vect.bin' dataset = load_csv_to_dict(dataset_path) senteces = dataset["tokens"] senteces[29] dtype(dataset) sentece_i = 53 print("Examples sentence: {}".format(dataset["text"][sentece_i])) print("To tokens: {}".format(dataset["tokens"][sentece_i])) # ## useful information # + #data n_sentences = len(senteces) unique_words = set([word for words in senteces for word in words]) unique_words_freq = dict(Counter(i for sub in senteces for i in set(sub))) n_unique_words = len(unique_words) #print data print(" - #sentences: {}".format(n_sentences)) print(" - Unique word on the datset: {}".format(n_unique_words)) # - # ## W2V token_setences = dataset["tokens"] #w2v_model = Word2Vec.load(w2v_bin_path) wv = 
KeyedVectors.load_word2vec_format(datapath(w2v_bin_path), binary=True) wv["africani"] #len(w2v_model.wv.vocab.keys()) len(wv.vocab.keys()) wv.vectors.shape know_words = [] unknow_words = [] for word in unique_words: if word in wv.vocab.keys(): know_words.append(word) else: unknow_words.append(word) print("known words: {}".format(len(know_words))) print("unknown words: {}".format(len(unknow_words))) unknow_words_freq = {word: unique_words_freq[word] for word in unknow_words} unknow_words_freq_sorted = sorted(unknow_words_freq.items(),key=operator.itemgetter(1),reverse=True) unknow_words_freq_sorted[:111] # # build keras embedding matrix # + def get_index_key_association(wv): key_to_index = {"<UNK>": 0} index_to_key = {0: "<UNK>"} for idx, word in enumerate(sorted(wv.vocab)): key_to_index[word] = idx+1 # which row in `weights` corresponds to this word index_to_key[idx+1] = word # which word corresponds to this row in `weights` return index_to_key, key_to_index def build_keras_embedding_matrix(wv, index_to_key=None): print('Vocab_size is {}'.format(len(wv.vocab))) vec_size = wv.vector_size vocab_size = len(wv.vocab) + 1 # plus the unknown word if index_to_key is None: index_to_key, _ = get_index_key_association(wv) # Create the embedding matrix where words are indexed alphabetically embedding_matrix = np.zeros(shape=(vocab_size, vec_size)) for idx in index_to_key: # skip index 0: words not found in the embedding map to 0 and stay all-zeros if idx != 0: embedding_matrix[idx] = wv.get_vector(index_to_key[idx]) print('Embedding_matrix with unk word loaded') print('Shape {}'.format(embedding_matrix.shape)) return embedding_matrix, vocab_size # - index_to_key, key_to_index = get_index_key_association(wv) embedding_matrix, vocab_size = build_keras_embedding_matrix(wv, index_to_key)
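# To make the row-0 `<UNK>` convention above easier to check in isolation, here is a minimal sketch with a made-up toy vocabulary (the words and 3-dimensional vectors are purely illustrative, not from the twitter128 model):

```python
import numpy as np

# Toy vocabulary; row 0 of the matrix is reserved for "<UNK>".
toy_vectors = {"cane": [1.0, 0.0, 0.0],
               "gatto": [0.0, 1.0, 0.0],
               "casa": [0.0, 0.0, 1.0]}

# Index words alphabetically, starting at 1, so index 0 stays all-zeros.
index_to_key = {0: "<UNK>"}
key_to_index = {"<UNK>": 0}
for idx, word in enumerate(sorted(toy_vectors)):
    index_to_key[idx + 1] = word
    key_to_index[word] = idx + 1

embedding_matrix = np.zeros((len(toy_vectors) + 1, 3))
for idx, word in index_to_key.items():
    if idx != 0:
        embedding_matrix[idx] = toy_vectors[word]

# A sentence maps to integer ids; unseen tokens fall back to the <UNK> row.
sentence = ["gatto", "vola", "casa"]
ids = [key_to_index.get(tok, 0) for tok in sentence]
print(ids)
```

The same lookup is what the integer sequences fed to a Keras `Embedding` layer would contain, with the layer's weights initialized from `embedding_matrix`.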
notebooks/word2vec/build_w2v.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import matplotlib.pyplot as plt from PIL import Image import pandas as pd import numpy as np from bokeh.plotting import ColumnDataSource, figure from bokeh.layouts import row from sklearn.model_selection import train_test_split from bokeh.io import output_file, show from pickle import dump import math from numpy.linalg import inv import matplotlib.pyplot as plt from sklearn import model_selection from sklearn.linear_model import LogisticRegression from sklearn.neighbors import KNeighborsClassifier from sklearn.svm import SVC # # making an image by pixels # + b=77 g=88 r=126 # r, g, and b are scalar channel values in [0, 256); they are broadcast into a 28x28 RGB array. rgbArray = np.zeros((28,28,3), 'uint8') rgbArray[..., 0] = r#*256 rgbArray[..., 1] = g#*256 rgbArray[..., 2] = b#*256 img = Image.fromarray(rgbArray) #img.save('myimg.jpeg') imgplot = plt.imshow(img) # - # # Loading Dataset # + test_size_param = 0.2 test_size =int((145185*20)/100) dataset = pd.read_csv('train_all.csv') #, encoding='utf-8' ,index_col=0) dataset.head() # + # without learning, accuracy would already be more than 70%: # we would get more than 70% just by assigning every sample to class 2 dataset['class'].value_counts(normalize=True) dataset["class"].value_counts(normalize=True).plot(kind="bar", yticks=[0.0, 0.2, 0.4, 0.6, 0.8, 1.0], rot=0) #plt.savefig('unpalanced_data.png', bbox_inches='tight') # - # ### shuffle the dataset # + from sklearn.utils import shuffle shuffled_dataset = shuffle(dataset, random_state = 0) shuffled_dataset.head(10) training_set = shuffled_dataset[test_size:] testing_set = shuffled_dataset[0:test_size] X_test = testing_set.iloc[:,1:4] Y_test = testing_set.iloc[:,4] X_train = training_set.iloc[:,1:4] Y_train = training_set.iloc[:,4] # + from sklearn.discriminant_analysis import 
QuadraticDiscriminantAnalysis qda = QuadraticDiscriminantAnalysis(store_covariance=True) qda.fit(X_train, Y_train) ''' X_pred = pd.read_csv('test_all.csv') y_pred = qda.predict(X_pred.iloc[:, 1:4]) my_submission = pd.DataFrame({'Id': X_pred.iloc[:,0], 'class': y_pred}) # you could use any filename. We choose submission here my_submission.to_csv('submission0.csv', index=False) ''' print("QDA Priors : "+str(qda.priors_)) print("QDA Decision Function : "+str(qda.decision_function(X_train))) print("QDA Score : "+str(qda.score(X_test, Y_test))) # - # # Building QuadraticDiscriminantAnalysis (QDA) # + #You have to separate the two classes #Skin class class1 = training_set[training_set['class'] == 1] #non_skin class class2 = training_set[training_set['class'] == 2] print(class1.describe()) print(class2.describe()) # + #Calculate the mean of each class class1_mu = (class1[['B', 'G', 'R']].mean(axis=0)).values class2_mu = (class2[['B', 'G', 'R']].mean(axis=0)).values #Calculate the covariance matrix of each class class1_sigma = (class1[['B', 'G', 'R']].cov()).values class2_sigma = (class2[['B', 'G', 'R']].cov()).values #Invert the covariance matrix of each class class1_sigmaInv =inv(class1_sigma) class2_sigmaInv =inv(class2_sigma) #Calculate the threshold p1 = class1.shape[0]/training_set.shape[0] p2 = class2.shape[0]/training_set.shape[0] th = math.log(p2/p1) t1=np.dot(np.dot(class2_mu.T, class2_sigmaInv),class2_mu) t2=np.dot(np.dot(class1_mu.T, class1_sigmaInv),class1_mu) term=t1-t2 # - # ### Evaluate the QDA # + def score( x ): # quadratic term uses the difference of the two inverse covariances quadratic = np.dot(np.dot(x.T,(class2_sigmaInv - class1_sigmaInv)),x) linear = 2*np.dot(x.T,np.reshape(np.dot(class2_sigmaInv,class2_mu)-np.dot(class1_sigmaInv,class1_mu),(3,1))) return quadratic-linear+term def classify (h_x): if(h_x > th): return 1 else: return 2 y_score = np.apply_along_axis( score, axis=1, arr=X_test.values ) y_pred =np.apply_along_axis(classify, axis=1, arr= y_score) print("Accuracy: 
"+str(sum(y_pred==Y_test)*100/X_test.shape[0])) # - from sklearn.metrics import confusion_matrix cm = confusion_matrix(Y_test, y_pred) print("Confusion matrix of the implemented QDA:") print(cm) from sklearn.metrics import confusion_matrix lin_cm = confusion_matrix(Y_test, qda.predict(X_test)) print("Confusion matrix of the built-in QDA:") print(lin_cm) # # Building KNN # + neighbors = 3 knn = KNeighborsClassifier(n_neighbors=neighbors) knn.fit(X_train, Y_train) y_predKnn = knn.predict(X_test) cm_knn = confusion_matrix(Y_test, y_predKnn) print("Confusion matrix of the Knn model:") print(cm_knn) # - # # Building LogisticRegression from sklearn.linear_model import LogisticRegression logreg = LogisticRegression() logreg.fit(X_train, Y_train) print('Accuracy of Logistic regression classifier on training set: {:.2f}' .format(logreg.score(X_train, Y_train))) print('Accuracy of Logistic regression classifier on test set: {:.2f}' .format(logreg.score(X_test, Y_test))) # # Building SVM from sklearn.svm import SVC clf = SVC() clf.fit(X_train, Y_train) print ('SVM classifier score : ', clf.score(X_test, Y_test)) print ('Pred label : ', clf.predict(X_test)) # # Visualizations # ### countplot # + import seaborn as sns import pylab as pl from pandas.plotting import scatter_matrix from matplotlib import cm sns.countplot(dataset['class'],label="Count") plt.show() # - # ### Box plot dataset.drop(['class', 'id'], axis=1).plot(kind='box', subplots=True, layout=(2,2), sharex=False, sharey=False, figsize=(9,9), title='Box Plot for each input variable') plt.savefig('Box_plot.png') plt.show() # ### Histogram plot dataset.drop(['class', 'id'] ,axis=1).hist(bins=30, figsize=(9,9)) pl.suptitle("Histogram for each numeric input variable") plt.savefig('RGB_hist') plt.show() # ### Scatter matrix plot feature_names = ['R', 'G', 'B'] X = dataset[feature_names] y = dataset['class'] cmap = cm.get_cmap('gnuplot') scatter = scatter_matrix(X, c = y, marker = 'o', s=40, hist_kwds={'bins':15}, figsize=(9,9), cmap = cmap) plt.suptitle('Scatter-matrix for each input variable') plt.savefig('RGB_scatter_matrix') plt.show() # ### visualize the relationship between the features and the response using scatterplots # sns.pairplot(dataset, x_vars=['R', 'G', 'B'], y_vars='class', size=3)#, aspect=0.7 plt.show() # ### Histogram of predicted classes # 8 bins plt.hist(y_pred, bins=8) plt.title('Histogram of predicted classes') plt.xlabel('Predicted class') plt.ylabel('Frequency') plt.show() # # Compare Algorithms # + # prepare models models = [] models.append(('LR', LogisticRegression())) models.append(('KNN', KNeighborsClassifier())) models.append(('QDA', QuadraticDiscriminantAnalysis())) models.append(('SVM', SVC())) neighbors = 3 # evaluate each model in turn results = [] names = [] scoring = 'accuracy' for name, model in models: kfold = model_selection.KFold(n_splits=10, shuffle=True, random_state= 7) cv_results = model_selection.cross_val_score(model, X_train, Y_train, cv=kfold, scoring=scoring) results.append(cv_results) names.append(name) msg = "%s: %f (%f)" % (name, cv_results.mean(), cv_results.std()) print(msg) # - # ### Boxplot for algorithm comparison fig = plt.figure(figsize=(10,10)) fig.suptitle('Algorithm Comparison') ax = fig.add_subplot(111) plt.boxplot(results) ax.set_xticklabels(names) #plt.savefig('algorithms_comparision') plt.show()
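# The hand-rolled quadratic discriminant used earlier can be sanity-checked against the built-in estimator. This is an illustrative sketch on synthetic Gaussians (not the skin dataset): the per-class log-discriminant log p(c) + log N(x; mu_c, Sigma_c) is computed directly and its predictions are compared with sklearn's QDA.

```python
import numpy as np
from numpy.linalg import inv, slogdet
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(0)
# Two Gaussian classes with different covariance matrices.
X1 = rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 1.0]], size=300)
X2 = rng.multivariate_normal([2, 2], [[0.5, 0.0], [0.0, 2.0]], size=300)
X = np.vstack([X1, X2])
y = np.array([1] * 300 + [2] * 300)

def log_disc(X, mu, sigma, prior):
    # Quadratic discriminant: log prior + log Gaussian density (up to a constant).
    diff = X - mu
    maha = np.einsum('ij,jk,ik->i', diff, inv(sigma), diff)
    return np.log(prior) - 0.5 * slogdet(sigma)[1] - 0.5 * maha

params = []
for c in (1, 2):
    Xc = X[y == c]
    params.append((Xc.mean(axis=0), np.cov(Xc, rowvar=False), len(Xc) / len(X)))

scores = np.column_stack([log_disc(X, *p) for p in params])
y_hand = np.where(scores[:, 0] > scores[:, 1], 1, 2)

y_sklearn = QuadraticDiscriminantAnalysis().fit(X, y).predict(X)
agreement = (y_hand == y_sklearn).mean()
print("agreement:", agreement)
```

Up to boundary ties, the two classifiers should agree, since sklearn's QDA assigns the class maximizing the same log-discriminant.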
train.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Population and Sample import numpy as np np.random.seed(42) population=np.random.randint(0,50,10000) population len(population) np.random.seed(42) sample=np.random.choice(population, 100) np.random.seed(42) sample_1000=np.random.choice(population, 1000) len(sample) len(sample_1000) sample sample.mean() sample_1000.mean() population.mean() np.random.seed(42) for i in range(20): sample=np.random.choice(population, 100) print(sample.mean()) np.random.seed(42) sample_means=[] for i in range(20): sample=np.random.choice(population, 10000) sample_means.append(sample.mean()) sample_means np.mean(sample_means) population.mean() sum(sample_means)/len(sample_means) # ## Skewness and Kurtosis import numpy as np from scipy.stats import kurtosis, skew from scipy import stats import matplotlib.pyplot as plt # + x=np.random.normal(0,2,1000) # print('excess kurtosis of normal distribution (should be 0): {}'.format(kurtosis(x))) # print('skewness of normal distribution (should be 0): {}'.format(skew(x))) #In finance, high excess kurtosis is an indication of high risk. # - plt.hist(x,bins=100); # + x=np.random.normal(0,2,1000000) # print('excess kurtosis of normal distribution (should be 0): {}'.format(kurtosis(x))) # print('skewness of normal distribution (should be 0): {}'.format(skew(x))) #In finance, high excess kurtosis is an indication of high risk. # - plt.hist(x,bins=100); kurtosis(x) skew(x) shape, scale = 2, 2 s=np.random.gamma(shape,scale, 1000) plt.hist(s, bins=100); shape, scale = 2, 2 s=np.random.gamma(shape,scale, 100000) plt.hist(s, bins=100); kurtosis(s) skew(s) shape, scale = 6, 2 s=np.random.gamma(shape,scale, 100000) plt.hist(s, bins=100); kurtosis(s) skew(s)
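# The loops over sample means above hint that larger samples give more stable means. A short sketch makes the 1/sqrt(n) shrinkage of the standard error explicit (the 2000 replicates per sample size are an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(42)
population = rng.integers(0, 50, 10_000)

# Empirical standard error of the sample mean vs. the sigma/sqrt(n) prediction.
empirical_se = {}
for n in (10, 100, 1000):
    means = [rng.choice(population, n).mean() for _ in range(2000)]
    empirical_se[n] = np.std(means)
    print(n, round(empirical_se[n], 3), round(population.std() / np.sqrt(n), 3))
```

The two columns should roughly match, and both shrink by about a factor of sqrt(10) each time the sample size grows tenfold.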
Data Science and Machine Learning/Machine-Learning-In-Python-THOROUGH/CLARUSWAY/STATISTICS/Statistics_Session_2.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .r # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Ruby 3.0.0 # language: ruby # name: ruby # --- # # Open Classes in Ruby # # In Ruby, classes are never closed; you can always add methods to an existing class. This applies to the classes you write as well as the standard, built-in classes. All you have to do is open up a class definition for an existing class, and the new contents you specify will be added to whatever's there. # # We are going to add the method `disp_attr` to the [MotorCycle](assets/motorcycle.rb) class. # + require './assets/motorcycle' m = MotorCycle.new('Yamaha', 'yellow') m.start_engine class MotorCycle def disp_attr puts "Color of MotorCycle is #{@color}" puts "Make of MotorCycle is #{@make}" end end m.disp_attr m.start_engine puts self.class puts self # - # Please note that `self.class` refers to `Object` and `self` refers to an object called `main` of class `Object`. # + require './assets/dog' class Dog def big_bark puts 'Woof! Woof!' end end d = Dog.new('Labrador', 'Benzy') d.bark d.big_bark d.display # - # Here's another example of adding a method to the `String` class. # # We would like to add a `write_size` method to the `String` class. But first, we shall get a list of all instance methods in Ruby's standard `String` class that begin with `wr`: String.instance_methods.grep /^wr/ # We observe that the `String` class has no method starting with `wr`. # + class String def write_size self.size end end size_writer = 'Tell me the size!' puts size_writer.write_size # - # > If you're writing a new method that conceptually belongs in the original class, you can reopen the class and append your method to the class definition. You should only do this if your method is generally useful, and you're sure it won't conflict with a method defined by some library you include in the future. 
If your method isn't generally useful, or you don't want to take the risk of modifying a class after its initial creation, create a subclass of the original class. The subclass can override its parent's methods, or add new ones. This is safer because the original class, and any code that depended on it, is unaffected.
notebooks/OpenClasses.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.8.6 64-bit # metadata: # interpreter: # hash: df0893f56f349688326838aaeea0de204df53a132722cbd565e54b24a8fec5f6 # name: python3 # --- import pandas as pd import matplotlib.pyplot as plt import numpy as np df = pd.read_csv("https://raw.githubusercontent.com/dsacademybr/PythonFundamentos/master/Cap11/pima-data.csv") df.shape df.head() df.tail(5) df.isnull().values.any() def plot_corr(df, size=5): corr = df.corr() fig, ax = plt.subplots(figsize = (size, size)) print(fig, ax) ax.matshow(corr) plt.xticks(range(len(corr.columns)), corr.columns) plt.yticks(range(len(corr.columns)), corr.columns) plot_corr(df) df.corr() diabetes_map = {True: 1, False: 0} df['diabetes'] = df['diabetes'].map(diabetes_map) df.head() # + num_true = len(df.loc[df['diabetes'] == 1]) num_false = len(df.loc[df['diabetes'] == 0]) print("Cases True:", num_true, (num_true/(num_true + num_false))*100, "%") print("Cases False:", num_false, (num_false/(num_false + num_true))*100, "%") # - from sklearn.model_selection import train_test_split atributos = ['num_preg', 'glucose_conc', 'diastolic_bp', 'thickness', 'insulin', 'bmi', 'diab_pred', 'age'] atrib_prev = ['diabetes'] x = df[atributos].values y = df[atrib_prev].values split_test_size = 0.3 x_treino, x_teste, y_treino, y_teste = train_test_split(x, y, test_size = split_test_size, random_state = 42) # + # HANDLING MISSING DATA WITH IMPUTATION # - print("# Rows in dataframe {0}".format(len(df))) print("# Rows missing glucose_conc: {0}".format(len(df.loc[df['glucose_conc'] == 0]))) print("# Rows missing diastolic_bp: {0}".format(len(df.loc[df['diastolic_bp'] == 0]))) print("# Rows missing thickness: {0}".format(len(df.loc[df['thickness'] == 0]))) print("# Rows missing insulin: {0}".format(len(df.loc[df['insulin'] == 0]))) print("# Rows missing bmi: 
{0}".format(len(df.loc[df['bmi'] == 0]))) print("# Rows missing age: {0}".format(len(df.loc[df['age'] == 0]))) from sklearn.impute import SimpleImputer # + preenche_0 = SimpleImputer(missing_values = 0, strategy = "mean") X_treino = preenche_0.fit_transform(x_treino) X_teste = preenche_0.transform(x_teste) # transform only: reuse the column means learned from the training split # - X_treino from sklearn.naive_bayes import GaussianNB modelo_v1 = GaussianNB() modelo_v1.fit(X_treino, y_treino.ravel()) from sklearn import metrics nb_predict_train = modelo_v1.predict(X_treino) print("Accuracy: {0:.4f}".format(metrics.accuracy_score(y_treino, nb_predict_train))) nb_predict_test = modelo_v1.predict(X_teste) print("Accuracy: {0:.4f}".format(metrics.accuracy_score(y_teste, nb_predict_test))) print() # + # Creating a Confusion Matrix print("Confusion Matrix") print("{0}".format(metrics.confusion_matrix(y_teste, nb_predict_test, labels = [1, 0]))) print("") print("Classification Report") print(metrics.classification_report(y_teste, nb_predict_test, labels = [1, 0])) # - from sklearn.linear_model import LogisticRegression modulo_v3 = LogisticRegression(C = 0.7, random_state = 42) modulo_v3.fit(X_treino, y_treino.ravel()) lr_predict_test = modulo_v3.predict(X_teste) print("Accuracy: {0:.4f}".format(metrics.accuracy_score(y_teste, lr_predict_test))) print() print("Classification Report") print(metrics.classification_report(y_teste, lr_predict_test, labels = [1, 0])) import pickle filename = 'modelo_treinado_v3.sav' pickle.dump(modulo_v3, open(filename, 'wb')) loaded_model = pickle.load(open(filename, 'rb')) resultado1 = loaded_model.predict(X_teste[15].reshape(1, -1)) resultado2 = loaded_model.predict(X_teste[18].reshape(1, -1)) print(resultado1) print(resultado2)
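# When imputation and model fitting are chained, a scikit-learn `Pipeline` keeps the imputer's statistics tied to the training split only, which avoids any leakage from the test data. A minimal sketch on synthetic data (the column layout, 0-as-missing encoding, and label threshold are made up for illustration, not the Pima columns):

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(5, 2, size=(300, 4))
X[rng.random(X.shape) < 0.1] = 0            # 0 encodes "missing", as in the dataset
y = (X.sum(axis=1) > 20).astype(int)        # synthetic binary target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

pipe = Pipeline([
    ("impute", SimpleImputer(missing_values=0, strategy="mean")),
    ("model", GaussianNB()),
])
pipe.fit(X_tr, y_tr)                        # imputer means come from X_tr only
acc = pipe.score(X_te, y_te)
print(f"test accuracy: {acc:.3f}")
```

At prediction time the pipeline automatically applies `transform` (never `fit`) to new data before passing it to the classifier.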
Machine_Learning/algorithms/Cases/diabetes.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Finding the optimal regularization weight # - model : one model per bus stop, identical to the model object in which we stored the OLS fit. # - For each of the 1000 stop models, compute r_squared and r_squared_adj, and pick the weight that maximizes their mean. # - This differs slightly from how we compute the final r_squared, but we do it to gain insight into the optimal weight. len(models) # + #collect the models' rsquared2 rsquared2=[] for model in models: result=model.fit() rsquared2.append(result.rsquared) rsquared2=[round(rsquared,3) for rsquared in rsquared2] #collect the models' rsquared_adj rsquared_adj2=[] for model in models: result=model.fit() rsquared_adj2.append(result.rsquared_adj) rsquared_adj2=[round(rsquared_adj,3) for rsquared_adj in rsquared_adj2] #build the rsquared dataframe #preprocess null, inf, negative, and >1 values. rsquared_df2=pd.DataFrame(rsquared2,rsquared_adj2).reset_index().rename(columns={'index':'rsquared_adj',0:'rsquared'}) rsquared_df2=rsquared_df2.fillna(0) rsquared_df2[np.isinf]=0 rsquared_df2[rsquared_df2<0]=0 rsquared_df2[rsquared_df2>1]=1 rsquared_df2 # - # - relationship between rsquared_df and rsquared_df2 #without regularization rsquared_df2['rsquared'].median() #function returning the models' mean rsquared and rsquared_adj for each regularization weight #these rsquared and rsquared_adj values are per-model, so they may be computed differently from our actual model. #therefore, use the function below only as insight for finding the optimal weight. 
def weight_regularization(models,L1_wt=0): df_mean=pd.DataFrame() for n in np.arange(0.01,0.1,0.01).tolist(): results_fr_rsquared=[] results_fr_rsquared_adj=[] rs_mean=[] rs_adj_mean=[] for model in models: #regularization #alpha weight = n results_fu = model.fit() results_fr = model.fit_regularized(L1_wt=L1_wt, alpha=n, start_params=results_fu.params) results_fr_fit = sm.regression.linear_model.OLSResults(model, results_fr.params, model.normalized_cov_params) #rsquared,rsquared_adj results_fr_rsquared.append(results_fr_fit.rsquared) results_fr_rsquared_adj.append(results_fr_fit.rsquared_adj) results_fr_rsquared=[round(rsquared,3) for rsquared in results_fr_rsquared] results_fr_rsquared_adj=[round(rsquared,3) for rsquared in results_fr_rsquared_adj] #dataframe results_fr_df=pd.DataFrame({'rsquared':results_fr_rsquared,'rsquared_adj':results_fr_rsquared_adj}) results_fr_df=results_fr_df.fillna(0) results_fr_df[np.isinf]=0 results_fr_df[results_fr_df<0]=0 results_fr_df[results_fr_df>1]=1 #mean rs_mean.append(results_fr_df['rsquared'].mean()) rs_adj_mean.append(results_fr_df['rsquared_adj'].mean()) #mean_dataframe df=pd.DataFrame({'alpha':n,'rs_mean':rs_mean,'rs_adj_mean':rs_adj_mean}) df_mean=pd.concat([df_mean,df]) max_of_rs_mean=df_mean[df_mean['rs_mean']==df_mean['rs_mean'].max()] max_of_rs_adj_mean=df_mean[df_mean['rs_adj_mean']==df_mean['rs_adj_mean'].max()] return df_mean,max_of_rs_mean,max_of_rs_adj_mean #L1_wt=0 : ridge model #optimal alpha : 0.01 df_mean,max_of_rs_mean,max_of_rs_adj_mean=weight_regularization(models,0) df_mean max_of_rs_mean max_of_rs_adj_mean ##L1_wt=1 : Lasso model #optimal alpha : df_mean,max_of_rs_mean,max_of_rs_adj_mean=weight_regularization(models,1) df_mean max_of_rs_mean max_of_rs_adj_mean ##L1_wt=0.5 : Elastic Net model #optimal alpha : df_mean,max_of_rs_mean,max_of_rs_adj_mean=weight_regularization(models,0.5) df_mean max_of_rs_mean max_of_rs_adj_mean # #### Conclusion # - ridge regularization with alpha 0.01 fits best. 
# ### Running the model with regularization applied # - regularization used : ridge, alpha : 0.01 # - result = model.fit_regularized(alpha=0.01, L1_wt=0) # - to show only what changed for regularization, steps such as data loading, feature addition, and scaling are omitted # - the reason the ```ols_validation``` function returns the model variable is to find the optimal regularization weight shown above, so it can be removed when actually applying regularization. # + # split, for data validation # takes the train + validation data and samples an arbitrary number of stops def split(df_train, num, seed): test_frame = pd.DataFrame(columns=df_train.columns) np.random.seed(seed) for i in np.random.choice(df_train['station_code'].unique(), num, replace=False): df1 = df_train[df_train['station_code'] == i] test_frame = pd.concat([test_frame, df1]) return test_frame # data-validation function # splits df_train (or test_frame) into train_df (training) and validation_df (validation) # train contains the deduplicated data # giving a different random_state when splitting train and validation changes the data (an effect similar to KFold) def make_train_validation(dataframe, cate, test_size, seed): train_df = pd.DataFrame(columns=dataframe.columns) validation_df = pd.DataFrame(columns=dataframe.columns) total = tqdm(dataframe['station_code'].unique()) print('running make_train_validation....') for i in total: df1 = dataframe[dataframe['station_code'] == i] # build the required train data nec_train = df1.drop_duplicates(subset=cate) # build train_validation, excluding the required train data train_validation = df1.drop(df1['id'][nec_train['id']]) # if the required train data is the same size as df1 (all rows are unique) if len(nec_train) == len(df1): # just put everything into train_df (not into validation; per the professor's feedback) train_df = pd.concat([train_df, nec_train]) # if train_validation has at most one row, it directly becomes the validation elif len(train_validation) <= 1: # build train_df and train_df = pd.concat([train_df, nec_train]) # validation_df validation_df = pd.concat([validation_df, train_validation]) # otherwise split into required train + train, and validation else: X, y = train_test_split( train_validation, test_size=test_size, random_state=seed) train_a = pd.concat([nec_train, X]) train_df = pd.concat([train_df, train_a]) validation_df = pd.concat([validation_df, y]) return train_df, 
validation_df # validation model # if a station_code is absent from validation_df, only train and skip prediction def ols_validation(train_df, validation_df, var, cate): total = tqdm(train_df['station_code'].unique()) columns = train_df.columns df_tr = pd.DataFrame(columns=columns) df_te = pd.DataFrame(columns=columns) df_tr['yhat'] = 999 df_te['yhat'] = 999 cate_c = [f"C({name})" for name in cate] y = ['scale_ride18'] print('running ols_validation....') models=[] for i in total: train_ols = train_df[train_df['station_code'] == i] validation_ols = validation_df[validation_df['station_code'] == i] if len(validation_ols) ==0: model = sm.OLS.from_formula( 'scale_ride18 ~ ' + '+'.join(var) + '+'.join('+') + '+'.join(cate_c), data=train_ols) models.append(model) # fit result = model.fit_regularized(alpha=0.01, L1_wt=0) # results train_ols['yhat'] = result.predict(train_ols) # store the training fit df_tr = pd.concat([df_tr, train_ols]) else : model = sm.OLS.from_formula( 'scale_ride18 ~ ' + '+'.join(var) + '+'.join('+') + '+'.join(cate_c), data=train_ols) models.append(model) # fit result = model.fit_regularized(alpha=0.01, L1_wt=0) # results train_ols['yhat'] = result.predict(train_ols) # store the training fit df_tr = pd.concat([df_tr, train_ols]) validation_ols_df = validation_ols[var+cate] # test model validation_ols['yhat'] = result.predict(validation_ols_df) df_te = pd.concat([df_te, validation_ols]) return df_tr, df_te, models # computing R-squared # pass in the seed used earlier to split train and validation, to check which seed produced which coefficient of determination. 
# returned as a DataFrame def get_rsquared(df_tr, df_te, seed): df_tr['residual'] = df_tr['scale_ride18'] - df_tr['yhat'] df_tr['explained'] = df_tr['yhat'] - np.mean(df_tr['yhat']) df_tr['total'] = df_tr['scale_ride18'] - np.mean(df_tr['scale_ride18']) df_te['residual'] = df_te['scale_ride18'] - df_te['yhat'] df_te['explained'] = df_te['yhat'] - np.mean(df_te['yhat']) df_te['total'] = df_te['scale_ride18'] - np.mean(df_te['scale_ride18']) train_ess = np.sum((df_tr['explained'] ** 2)) train_rss = np.sum((df_tr['residual'] ** 2)) train_tss = np.sum((df_tr['total'] ** 2)) test_ess = np.sum((df_te['explained'] ** 2)) test_rss = np.sum((df_te['residual'] ** 2)) test_tss = np.sum((df_te['total'] ** 2)) rsquared = {'seed': [f'{seed}'], 'train_rsquared_1': [1-train_rss/train_tss], 'train_rsquared_2': [train_ess/train_tss], 'validation_rsquared_1': [1-test_rss/test_tss], 'validation_rsquared_2': [test_ess/test_tss], 'train_ESS' : [round(train_ess)], 'train_RSS' : [round(train_rss)], 'train_TSS' : [round(train_tss)], 'validation_ESS' : [round(test_ess)], 'validation_RSS' : [round(test_rss)], 'validation_TSS' : [round(test_tss)], 'train_RMSE' : [np.sqrt(((df_tr['scale_ride18'] - df_tr['yhat']) ** 2).mean())], 'validation_RMSE' : [np.sqrt(((df_te['scale_ride18'] - df_te['yhat']) ** 2).mean())], } print(f'seed : {seed} done') return pd.DataFrame(rsquared) # + # run it several times at once # goes through the make_train_validation -> ols_validation -> get_rsquared sequence defined above # takes the seeds as a list # dataframe = (the data before it is split into train and validation) # seeds = list of seeds (validates the OLS once per seed) # test_size = validation_size; please keep it fixed at 0.2 if possible # the returned rsquared_df dataframe collects the train and validation coefficients of determination, ESS, RSS, TSS, and RMSE per seed. 
def validations(dataframe, seeds, test_size): rsquared_df = pd.DataFrame() for seed in seeds: train_df, validation_df = make_train_validation(dataframe, cate, test_size, seed) df_tr, df_te, models = ols_validation(train_df, validation_df, var, cate) rsquared = get_rsquared(df_tr, df_te, seed) rsquared_df = pd.concat([rsquared_df, rsquared]) return rsquared_df, models # - # build the top 1000 stops # out of all 3563 stops, the top 1000 account for about 76% or more of the data dataframe_1000 = make_top_station(1000) # + # try validating with different variables here var_total = ['scale_ride6','scale_ride7','scale_ride8','scale_ride9', 'scale_ride10','scale_ride11', 'scale_off6','scale_off7','scale_off8','scale_off9','scale_off10','scale_off11', 'scale_temperature','scale_precipitation','scale_bus_interval', 'scale_ride67','scale_ride89','scale_ride1011','scale_off67','scale_off89', 'scale_off1011', 'scale_ride_sum','scale_off_sum','scale_bus_route_id_sum','scale_bus_route_id_all_sum'] # 'scale_ride67','scale_ride89','scale_ride1011','scale_off67','scale_off89', 'scale_off1011' # are the columns summed over two hours. # put the real-valued variables to use for validation here var = ['scale_temperature','scale_precipitation','scale_bus_interval', 'scale_ride_sum','scale_off_sum','scale_bus_route_id_sum','scale_bus_route_id_all_sum', 'scale_ride67','scale_ride89','scale_ride1011','scale_off67','scale_off89', 'scale_off1011'] # 'scale_ride6','scale_ride7','scale_ride8','scale_ride9', 'scale_ride10','scale_ride11', # 'scale_off6','scale_off7','scale_off8','scale_off9','scale_off10','scale_off11', # put the categorical variables to use for validation here cate = ['bus_route_id','in_','out', 'weekend', 'weekday', 'holiday', 'typhoon'] # , seeds = [300, 20, 30, 40, 50] # supply random seeds for a KFold-like effect test_size = 0.3 # you can change it, but just use 0.3.. that gives roughly a 7.5:2.5 split. rsquared_df,models = validations(dataframe_1000, seeds, test_size) rsquared_df
sjh/Final03_model_test_ver_0.0.2_regularization.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # %config IPCompleter.greedy=True # %load_ext autoreload # %autoreload 2 import numpy as np import matplotlib.pyplot as plt import sklearn.preprocessing, sklearn.datasets, sklearn.model_selection, sklearn.metrics # - # In this notebook, we will be dealing with multiclass classification. We will finally have a model that can distinguish between all the numbers from the MNIST dataset, so we will not need to deal with 4 and 9 only. The proper way of handling this problem is to use the *softmax* function. I will show different approaches first, so we can compare them. # Firstly, we need data. The template is still the same, so I will not describe it any more. data, target = sklearn.datasets.fetch_openml('mnist_784', version=1, return_X_y=True, as_frame=False) target = target.astype(int) data = data.reshape(-1, 784) data[data < 128] = 0 data[data > 0] = 1 data = np.hstack([data, np.ones((data.shape[0],1))]) train_data, test_data, train_target, test_target = sklearn.model_selection.train_test_split(data, target.astype(int), test_size=0.3, random_state=47) # # Perceptron # # If you remember, we dealt with this problem in one of the previous notebooks, when we were talking about the perceptron algorithm. Just as a reminder, let's do it once again here, so we may compare the results. I moved it into a separate class, so I don't need to copy-paste it here once again. If you are interested, it is in the [src/perceptron_05.py](src/perceptron_05.py) file. # + from src.perceptron_05 import multiclass_perceptron train_acc, test_acc = multiclass_perceptron(train_data, train_target, test_data, test_target, iters=500, random_state=42) print(f"Train accuracy: {train_acc}, Test accuracy: {test_acc}") # - # The results don't tell much yet. 
The test accuracy is perhaps a bit low compared to the train accuracy, but we will see how different models behave. # # One-vs-one # # We may use the same approach for the logistic regression as well. I will use the `Neuron` implementation from previous notebooks. You can recall it in the [src/neuron_05.py](src/neuron_05.py) file. We have seen the `BCELoss` class as well, so I will just copy it. from src.neuron_05 import Neuron class BCELoss: def __call__(self, target, predicted): return np.sum(-target * np.log(np.maximum(predicted, 1e-15)) - (1 - target) * np.log(np.maximum(1 - predicted, 1e-15)), axis=0) def gradient(self, target, predicted): return - target / (np.maximum(predicted, 1e-15)) + (1 - target) / (np.maximum(1 - predicted, 1e-15)) # Now let's train a model for each pair of classes. We have seen the code before, so again, I will not comment on it. # train models models = np.empty((10,10), dtype=object) for i in range(10): for j in range(i): models[i][j] = Neuron(BCELoss(), epochs=200, learning_rate=0.001, batch_size=128, random_state=42+i*10+j) mask = np.logical_or(train_target == i, train_target == j) current_X = train_data[mask] current_y = (train_target[mask] - j) / (i - j) models[i][j].fit(current_X, current_y, progress=True) # That took a while, but we have the models. We may now take each model's predicted class and simply take the majority vote for each example. 
# predict train_predictions = np.zeros((train_target.shape[0], 10), dtype=int) test_predictions = np.zeros((test_target.shape[0], 10), dtype=int) for i in range(10): for j in range(i): prediction = np.around(models[i][j].predict(train_data)) train_predictions[prediction == 0, j] += 1 train_predictions[prediction == 1, i] += 1 prediction = np.around(models[i][j].predict(test_data)) test_predictions[prediction == 0, j] += 1 test_predictions[prediction == 1, i] += 1 train_predictions = train_predictions.argmax(axis=1) test_predictions = test_predictions.argmax(axis=1) train_acc = sklearn.metrics.accuracy_score(train_target, train_predictions) test_acc = sklearn.metrics.accuracy_score(test_target, test_predictions) print(f"Train accuracy: {train_acc}, Test accuracy: {test_acc}") # That doesn't seem bad. The accuracy is a bit higher (for the test set) compared to the simple perceptron. Although the training accuracy is lower than the perceptron one, this is not what we care about. When we have a model, we care about its generalization error. The generalization is better for the logistic regression, as the test accuracy is better and the training accuracy is closer to the test accuracy. # However, we didn't use the full potential of the model. Remember that the logistic regression returns probabilities, not classes. And as you may expect, a $45\% : 55\%$ probability split should have a lower impact on the result than a $1\% : 99\%$ split. But in the previous case, we treated them the same. # # What if we added the probabilities instead of the classes? 
# predict
train_predictions = np.zeros((train_target.shape[0], 10), dtype=float)
test_predictions = np.zeros((test_target.shape[0], 10), dtype=float)
for i in range(10):
    for j in range(i):
        prediction = models[i][j].predict(train_data)
        train_predictions[:, j] += 1-prediction
        train_predictions[:, i] += prediction
        prediction = models[i][j].predict(test_data)
        test_predictions[:, j] += 1-prediction
        test_predictions[:, i] += prediction
train_predictions = train_predictions.argmax(axis=1)
test_predictions = test_predictions.argmax(axis=1)
train_acc = sklearn.metrics.accuracy_score(train_target, train_predictions)
test_acc = sklearn.metrics.accuracy_score(test_target, test_predictions)
print(f"Train accuracy: {train_acc}, Test accuracy: {test_acc}")

# Finally, this is the best result so far. By using probabilities, we allowed the model to use its full potential.
#
# # Tree-based approach
#
# > Note that this chapter is more playing around than what would be done in reality. However, I thought it may be interesting for some of you to see what can be done and how it performs. If you want to see only the mandatory parts that you are going to need in the following notebooks, please skip to the next chapter.
#
# The previous approach, sometimes called *one-vs-one*, had good results; however, we needed to train too many models - in fact, when we have $c$ classes, we need $\frac{c\cdot(c-1)}{2}$ models. In our case that is $45$ models in total. In reality, that's too much, especially if training one model takes hours or days. The first thing we may think of is building a tree.
#
# We may think about it as a decision tree. At each node, there is a logistic regression that tells us which side to continue on. We put the resulting classes in the leaves. We still need only binary classification, so we may use the perceptron or logistic regression. We may visualize the tree.
#
# ![tree in order](img/TreeInOrder_07.svg)

# Let's try it.
As the implementation is straightforward, but still quite long (I preferred simplicity in this case), I left the implementation itself in the [src/treebased_07.py](src/treebased_07.py) file. Feel free to look at it.

from src.treebased_07 import TreeInOrder
tree = TreeInOrder(BCELoss(), epochs=400, learning_rate=0.001, batch_size=128, random_state=42)
tree.fit(train_data, train_target, progress=True)

# You may notice how the training time decreases. More precisely, it decreases after every layer of the tree. This is quite logical, as nodes at lower levels of the tree use less data for training (they only need the examples whose labels fall in their subtree). And of course, the less data we have, the faster the training is. Let's see the performance of this approach.

train_predictions = tree.predict_probbased(train_data)
test_predictions = tree.predict_probbased(test_data)
train_acc = sklearn.metrics.accuracy_score(train_target, train_predictions)
test_acc = sklearn.metrics.accuracy_score(test_target, test_predictions)
print(f"Train accuracy: {train_acc}, Test accuracy: {test_acc}")

# That doesn't look good. We had almost $92.5\%$ accuracy before; now we are down to $81\%$. Why is that? We must understand that in the tree-based approach, the error is accumulated along the path to the leaf. Even if each model had only a $5\%$ error, it accumulates to a $15\%$ or $20\%$ error along the way (from the root of the tree to the leaf). On the other hand, we didn't need to train 45 models; 9 were enough. In other words, we may spend more time training each model to achieve better accuracy compared to the one-vs-one approach.

# During the inference (the prediction), I used probabilities and computed the probability of each class. Finally, I picked the class with the highest probability. Once again, we may get rid of the probabilities and round the value to either $0$ or $1$. That results in a binary search tree of some sort.
It shouldn't be surprising that the accuracy gets worse, as we are completely ignoring the cases where the model is not sure.

train_predictions = tree.predict_direct(train_data)
test_predictions = tree.predict_direct(test_data)
train_acc = sklearn.metrics.accuracy_score(train_target, train_predictions)
test_acc = sklearn.metrics.accuracy_score(test_target, test_predictions)
print(f"Train accuracy: {train_acc}, Test accuracy: {test_acc}")

# Note that the order of nodes is one of the hyperparameters - when we change it, the accuracy can increase or decrease. For example, digits $1$ and $7$ are harder to distinguish than $6$ and $8$, so we may keep them together lower in the tree. Take, for example, the following tree.
#
# ![tree with custom order](img/TreeSpecialOrder_07.svg)
#
# The order is just the first one that came to my mind. Let's see how the accuracies change.

from src.treebased_07 import TreeSpecialOrder
tree = TreeSpecialOrder(BCELoss(), epochs=400, learning_rate=0.001, batch_size=128, random_state=42)
tree.fit(train_data, train_target, progress=True)

train_predictions = tree.predict_direct(train_data)
test_predictions = tree.predict_direct(test_data)
train_acc = sklearn.metrics.accuracy_score(train_target, train_predictions)
test_acc = sklearn.metrics.accuracy_score(test_target, test_predictions)
print(f"Train accuracy: {train_acc}, Test accuracy: {test_acc}")

train_predictions = tree.predict_probbased(train_data)
test_predictions = tree.predict_probbased(test_data)
train_acc = sklearn.metrics.accuracy_score(train_target, train_predictions)
test_acc = sklearn.metrics.accuracy_score(test_target, test_predictions)
print(f"Train accuracy: {train_acc}, Test accuracy: {test_acc}")

# As you may see, both the direct prediction (when no probabilities are used) and the probability-based accuracies increased. Still not very good, but we were able to increase the accuracy by $0.6\%$ simply by changing the order of the leaves.
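# The two arguments above - the number of one-vs-one models and the error accumulation along a root-to-leaf path - can be sketched numerically. This is a back-of-the-envelope illustration that assumes per-node errors are independent, which is only approximately true in practice:

```python
def num_ovo_models(c):
    # one-vs-one trains one model per unordered pair of classes
    return c * (c - 1) // 2

def path_accuracy(p, depth):
    # chance of reaching the correct leaf when every node on the
    # path is correct with probability p (independence assumed)
    return p ** depth

print(num_ovo_models(10))  # 45 models for the 10 digits
for depth in (1, 2, 3, 4):
    print(f"depth {depth}: {path_accuracy(0.95, depth):.3f}")
```

# With a per-node accuracy of $95\%$, a leaf three or four levels deep is reached correctly only about $86\%$ or $81\%$ of the time, which matches the $15\%$-$20\%$ accumulated error mentioned above.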
# As I said, this chapter was more playing around than what would be done in reality. Now let's see the proper way.
#
# # One-vs-rest
#
# The final and correct approach used in neural networks (we are slowly getting there, aren't we) is something called one-vs-rest. We are going to train one model for each digit, and it will return the probability of that digit. The complement is "it is any other digit". In the end, we are going to pick the most probable digit as the prediction. First, we need to train the models.

models = np.empty((10,), dtype=object)
for i in range(10):
    current_target = train_target.copy() + 100  # copy the train target and shift it up by 100
    current_target[current_target == 100 + i] = 1  # set the current training digit to equal 1
    current_target[current_target >= 100] = 0  # all other digits set to 0
    models[i] = Neuron(BCELoss(), epochs=400, learning_rate=0.001, batch_size=128, random_state=42+i)
    models[i].fit(train_data, current_target, progress=True)

# The code should be straightforward. The variable `models` holds logistic regressions - at index $i$ (`models[i]`) is the model that predicts the probability of the input being digit $i$. Now let's predict the labels.

train_distribution = np.zeros((len(train_target), 10))
test_distribution = np.zeros((len(test_target), 10))
for i in range(10):
    train_distribution[:,i] = models[i].predict(train_data)
    test_distribution[:,i] = models[i].predict(test_data)
train_predictions = np.argmax(train_distribution, axis=1)
test_predictions = np.argmax(test_distribution, axis=1)

# Now we have our predictions. Notice that although `train_distribution` and `test_distribution` are called distributions, they are not distributions yet. Remember, the probabilities in a distribution need to sum to 1. To really obtain the distributions, we would need to divide them by the sum of the probabilities.
train_distribution = train_distribution / np.sum(train_distribution, axis=1)[:,np.newaxis]
test_distribution = test_distribution / np.sum(test_distribution, axis=1)[:,np.newaxis]
with np.printoptions(precision=3, suppress=True):
    print(test_distribution[:10])

# We usually want to have the distribution, so the model should return it instead of the unnormalized scores. The function responsible for that is called **softmax** and we will talk about it in the very next notebook. I don't want to implement it yet, as our model needs some refactoring before we can plug it in.
#
# Let's see the accuracies.

train_acc = sklearn.metrics.accuracy_score(train_target, train_predictions)
test_acc = sklearn.metrics.accuracy_score(test_target, test_predictions)
print(f"Train accuracy: {train_acc}, Test accuracy: {test_acc}")

# That looks good, although the accuracy is not as good as in the first one-vs-one approach. We traded $2\%$ of accuracy for having far fewer models to train. That sounds bad right now, but this approach has one benefit - we can join all ten models into one and train them in parallel. That allows us to train the model better and in a shorter time, and to track the performance during training - we may monitor the progress and, for example, stop models with bad hyperparameters. A shorter time means we may test more hyperparameter combinations, and we may, in fact, achieve a better score by just modifying the learning rate and other hyperparameters (we will have plenty of them throughout the following notebooks).
#
# Let's refactor the code and learn about softmax - the final stop before our first neural network.
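# As a small preview of the next notebook (a hedged sketch, not the refactored model), softmax turns a vector of raw scores into a proper distribution by exponentiating and then normalizing:

```python
import numpy as np

def softmax(z):
    # subtracting the max improves numerical stability; it cancels out in the ratio
    z = z - np.max(z)
    e = np.exp(z)
    return e / np.sum(e)

scores = np.array([2.0, 1.0, 0.1])
probs = softmax(scores)
print(probs, probs.sum())  # the entries are positive and sum to 1
```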
07-multiclass-classification.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

from googletrans import Translator

translator = Translator()

# Mixed Malay + Chinese input, roughly: "I want to eat rice, this book's main text has 500 pages"
translator.translate('saya nak makan nasi,这本书正文有500页').text

# The same sentence with the colloquial spelling "nk" instead of "nak"
translator.translate('saya nk makan nasi,这本书正文有500页').text

hehe = translator.translate('saya nk makan nasi,这本书正文有500页').text
print(hehe)
Translation.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] slideshow={"slide_type": "slide"} # # <center>LECTURE OVERVIEW </center> # # --- # # # <center> What we've learned so far...</center> # # - What are `if` statements useful for # - How to write simple and multiple condition `if` statements # # ## By the end of the day you will be able to: # - write `for` loops to iterate over containers # - write `for` loops to iterate a given number of times # - update containers in a `for` loop # - write a `list` comprehension # - write a `dictionary` comprehension # # # <center> `for` LOOPS </center> # # --- # # ## <font color='LIGHTGRAY'>By the end of the day you will be able to:<font color='LIGHTGRAY'> # - **write `for` loops to iterate over containers** # - <font color='LIGHTGRAY'>write for loops to iterate a given number of times</font> # - <font color='LIGHTGRAY'>update containers in a for loop</font> # - <font color='LIGHTGRAY'>write a list comprehension</font> # - <font color='LIGHTGRAY'>write a dictionary comprehension</font> # # ```python # sequence_list = [item_1, item_2, item_3, item_4, item_5] # for item in sequence_list: # # do something with item # # do another thing with item # # ... # ``` # + slideshow={"slide_type": "-"} num_list = [1, 2, 3, 4, 5] for item in num_list: print(item) # + slideshow={"slide_type": "subslide"} for item in num_list: print(item) print(item * 10) # + slideshow={"slide_type": "subslide"} letters_list = ['a', 's', 'h', 'l', 'e', 'y'] for item in letters_list: print(item) new_item = item + 'Z' print(new_item) # + [markdown] slideshow={"slide_type": "subslide"} # # Iterating Over `strings` # + slideshow={"slide_type": "-"} # iterate over characters in a string my_str = 'ashley s. 
lee' for char in my_str: print(char) # + slideshow={"slide_type": "subslide"} # iterate over words in a string (split by space) for word in my_str.split(): print(word) # - # ### **<font color='GREEN'> Exercise</font>** # # Iterate through the items in `my_numbers_list`, add 100 to each item, and print each item. my_numbers_list = [10, 20, 30, 40, 50] # Use the format: # ```python # for item in ... # ``` # + # TODO: insert solution here # + [markdown] slideshow={"slide_type": "subslide"} # ### **<font color='GREEN'> Exercise</font>** # # Iterate over the characters in `this_str` and print the characters. # - this_str = 'I am a dog.' # + [markdown] slideshow={"slide_type": "subslide"} # Use the format: # ```python # for char in ... # ``` # # Then, iterate over and print the words. # # Use the format: # ```python # for word in ... # ``` # + # TODO: insert solution here # + [markdown] slideshow={"slide_type": "subslide"} # # `for` Loops That Iterate Over Dictionaries # + slideshow={"slide_type": "-"} birth_year_dict = dict(Ashley=1990, Rihanna=1992, Emily=1986) print(birth_year_dict) # + slideshow={"slide_type": "-"} for key in birth_year_dict.keys(): print(key) # + slideshow={"slide_type": "subslide"} for value in birth_year_dict.values(): print(value) # + slideshow={"slide_type": "-"} for key, value in birth_year_dict.items(): print(key, value) # - for key, _ in birth_year_dict.items(): print(key) for _, value in birth_year_dict.items(): print(value) # + [markdown] slideshow={"slide_type": "subslide"} # ### **<font color='GREEN'> Exercise</font>** # # Iterate over `cities_dict` and print the keys and values. # - cities_dict = { 'BOS': 'Boston', 'NYC': 'New York City', 'LAX': 'Los Angeles' } # + [markdown] slideshow={"slide_type": "subslide"} # Use the format: # ```python # for key, value in ... 
# ```

# + slideshow={"slide_type": "subslide"}
# TODO: insert solution here

# + [markdown] slideshow={"slide_type": "subslide"}
# # Iterating Over Containers using `range()` and `enumerate()`
#
# ## <font color='LIGHTGRAY'>By the end of the day you will be able to:<font color='LIGHTGRAY'>
# - <font color='LIGHTGRAY'>write for loops to iterate over containers</font>
# - **write `for` loops to iterate a given number of times**
# - <font color='LIGHTGRAY'>update containers in a for loop</font>
# - <font color='LIGHTGRAY'>write a list comprehension</font>
# - <font color='LIGHTGRAY'>write a dictionary comprehension</font>

# + slideshow={"slide_type": "-"}
for i in range(12):
    print(i)

# + slideshow={"slide_type": "subslide"}
my_str = '<NAME>'
for i in range(len(my_str)):
    print(i, my_str[i])
# -

for i, elem in enumerate(my_str):
    print(i, elem)

for i, _ in enumerate(my_str):
    print(i)

for _, elem in enumerate(my_str):
    print(elem)

# + [markdown] slideshow={"slide_type": "subslide"}
# # Updating Container Elements
#
# ## <font color='LIGHTGRAY'>By the end of the day you will be able to:<font color='LIGHTGRAY'>
# - <font color='LIGHTGRAY'>write for loops to iterate over containers</font>
# - <font color='LIGHTGRAY'>write for loops to iterate a given number of times</font>
# - **update containers in a `for` loop**
# - <font color='LIGHTGRAY'>write a list comprehension</font>
# - <font color='LIGHTGRAY'>write a dictionary comprehension</font>

# + slideshow={"slide_type": "-"}
prime_list = [1, 3, 5, 7, 11]
for i in range(len(prime_list)):
    prime_list[i] = prime_list[i] ** 2
print(prime_list)

# +
prime_list = [1, 3, 5, 7, 11]
for i, elem in enumerate(prime_list):
    prime_list[i] = elem ** 2
print(prime_list)

# +
birth_year_dict = dict(Ashley=1990, Rihanna=1992, Emily=1986)
for k, v in birth_year_dict.items():
    birth_year_dict[k] = v + 1
print(birth_year_dict)
# -

# You can also use assignment operators (e.g., `+=`) when updating containers too.
#
# Recall: `x = x + 5` is equivalent to `x += 5`

# +
prime_list = [1, 3, 5, 7, 11]
for i, elem in enumerate(prime_list):
    prime_list[i] += 5
print(prime_list)

# + [markdown] slideshow={"slide_type": "subslide"}
# ## Appending to a `list` in a `for` Loop

# + slideshow={"slide_type": "-"}
coins_list = [0.01, 0.05, 0.1, 0.25]
coins_list.append(1.0)
print(coins_list)

# + slideshow={"slide_type": "-"}
new_coins_list = []
new_coins_list.append(1.0)
print(new_coins_list)

# + slideshow={"slide_type": "subslide"}
ints_list = [1, 2, 3, 4]
sq_list = []
for item in ints_list:
    sq_list.append(item**2)
print(sq_list)
# -

# ### **<font color='GREEN'> Exercise</font>**
#
# Iterate over the length of `years_list`, update the values in `years_list` by adding 5 to each value, and print the index of each list item and the list item.

years_list = [1990, 1956, 1959, 1988]

# Use the format:
# ```python
# for i in ...
# ```

# +
# TODO: insert solution here

# + [markdown] slideshow={"slide_type": "subslide"}
# ### **<font color='GREEN'> Exercise</font>**
#
# Iterate over `int_list`, multiply the values by 10, and append the values to an empty list called `tens_list` and print it.
# -

int_list = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

# + [markdown] slideshow={"slide_type": "subslide"}
# Use the format:
# ```python
# for item in ...
# ``` # + # TODO: insert solution here # + [markdown] slideshow={"slide_type": "subslide"} # # `list` Comprehensions # # ## <font color='LIGHTGRAY'>By the end of the day you will be able to:<font color='LIGHTGRAY'> # - <font color='LIGHTGRAY'>write for loops to iterate over containers</font> # - <font color='LIGHTGRAY'>write for loops to iterate a given number of times</font> # - <font color='LIGHTGRAY'>update containers in a for loop</font> # - **write a `list` comprehension** # - <font color='LIGHTGRAY'>write a dictionary comprehension</font> # # ```python # [do something to item for item in iterable] # ``` # + slideshow={"slide_type": "-"} print(num_list) print([item for item in num_list]) # + slideshow={"slide_type": "-"} print([number for number in num_list]) # + slideshow={"slide_type": "-"} print([item * 100 for item in num_list]) # + slideshow={"slide_type": "subslide"} this_str = 'hello world' print(this_str) # + slideshow={"slide_type": "-"} chars_list = [char for char in this_str] print(chars_list) # + slideshow={"slide_type": "-"} words_list = [word for word in this_str.split()] print(words_list) # + slideshow={"slide_type": "subslide"} index_list = [ [i, elem] for i, elem in enumerate(this_str) ] print(index_list) # + [markdown] slideshow={"slide_type": "subslide"} # ### **<font color='GREEN'> Exercise</font>** # # Construct a `list` comprehension that multiplies each number in `exercise_list` by 2 and assign it to a container called `doubles_list`. Print it. 
# - exercise_list = [10, 20, 30, 40, 50, 100] # + # TODO: insert solution here # + [markdown] slideshow={"slide_type": "slide"} # - <font color='LIGHTGRAY'> write for loops to iterate over containers </font> # - <font color='LIGHTGRAY'> write for loops to iterate a given number of times </font> # - <font color='LIGHTGRAY'> updating containers in a for loop </font> # - <font color='LIGHTGRAY'> write a list comprehension </font> # - write a dictionary comprehension # + [markdown] slideshow={"slide_type": "subslide"} # # `dictionary` Comprehensions # # ## <font color='LIGHTGRAY'>By the end of the day you will be able to:<font color='LIGHTGRAY'> # - <font color='LIGHTGRAY'>write for loops to iterate over containers</font> # - <font color='LIGHTGRAY'>write for loops to iterate a given number of times</font> # - <font color='LIGHTGRAY'>update containers in a for loop</font> # - <font color='LIGHTGRAY'>write a list comprehension</font> # - **write a `dictionary` comprehension** # # ```python # {key: do something to value for key, items in iterable.items()} # ``` # - birth_year_dict = dict(Ashley=1990, Rihanna=1992, Emily=1986) # + slideshow={"slide_type": "-"} print({key: value for key, value in birth_year_dict.items()}) # + slideshow={"slide_type": "-"} print([value for value in birth_year_dict.values()]) # + slideshow={"slide_type": "-"} print({key: value + 5 for key, value in birth_year_dict.items()}) # + [markdown] slideshow={"slide_type": "subslide"} # ### **<font color='GREEN'> Exercise</font>** # # Construct a `dictionary` comprehension that multiplies each value by 5. Print your result. # - cities_dict = { 'BOS': 5, 'NYC': 10, 'LAX': 15 } # + # TODO: insert solution here # - # # Conclusion # # ## You are now able to: # - write `for` loops to iterate over containers # - write `for` loops to iterate a given number of times # - update containers in a `for` loop # - write a `list` comprehension # - write a `dictionary` comprehension
week_2/day_1_lecture.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import pandas as pd # %matplotlib inline import matplotlib import requests # # TRYING OUT GITHUB Repos # # + stocks = ["goog", "aapl", "fb", "amzn"] public_api_key = "<KEY>" stocks_df = None for stock in stocks: data = requests.get(f"https://cloud.iexapis.com/stable/stock/{stock}/chart/3m?token={public_api_key}").json() stock_df = pd.DataFrame.from_dict(data) stock_df['date'] = pd.to_datetime(stock_df['date'], format="%Y-%m-%d") stock_df = stock_df.set_index('date') stock_df = stock_df[["close"]] stock_df.columns = [ stock ] stock_df = stock_df / stock_df[stock][0] # Normalization at t=0 if stocks_df is None: stocks_df = stock_df else: stocks_df = stocks_df.join(stock_df) stocks_df.plot(figsize=(12,8)) # - # ## This is cool
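# The `stock_df / stock_df[stock][0]` step above rebases each price series to $1.0$ at the first date, so the plot compares relative growth rather than absolute prices. A toy illustration with hypothetical prices (the numbers below are made up for the sketch):

```python
import pandas as pd

# hypothetical closing prices over three days
prices = pd.DataFrame({
    "goog": [100.0, 110.0, 121.0],
    "aapl": [50.0, 55.0, 66.0],
})

# divide every row by the first row: each column now starts at 1.0
normalized = prices / prices.iloc[0]
print(normalized)
```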
Lab.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # 10. Simulation

# Simulation is a method of empirically determining probabilities by means of experimentation.
#
# Random number generator: generate the value of a uniform (0,1) random variable using the equation:
# <span class='eqn'>$$X_{n+1} = (aX_n + c) \bmod m; \;n \ge 0$$</span>

# + language="html"
# <style>
# .tableHeader{
#     font-size:16px;
# }
#
# .eqn{
#     font-size:16px;
#     color:blue;
# }
# </style>
# -

# ---
# ## 10.2 General techniques for simulating continuous random variables
#
# ### The inverse transformation method
# Let U be a uniform (0,1) random variable. For any continuous distribution function F, if we define the random variable Y by
# <span class='eqn'>$$
# Y= F^{-1}(U) \\
# y = F^{-1}(x) \implies F(y) = x
# $$</span>
# then the random variable Y has distribution function F.
#
# ### The rejection method
# Suppose that we have a method for simulating a random variable having density function **g(x)**. We can use this method as the basis for simulating from a continuous distribution having density **f(x)** by simulating **Y** from **g** and then accepting the simulated value with a probability proportional to **f(Y)/g(Y)**. Let **c** be a constant such that
# <span class='eqn'>$$
# \frac{f(y)}{g(y)} \le c; \; \text{for all y}
# $$</span>
#
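# Both ideas can be sketched in a few lines of Python. The congruential generator below uses the classic Numerical Recipes constants ($a = 1664525$, $c = 1013904223$, $m = 2^{32}$) - an assumption of this sketch, not values from the text - and the inverse transformation method is applied to the exponential distribution, where $F(y) = 1 - e^{-\lambda y}$ gives $F^{-1}(u) = -\ln(1-u)/\lambda$:

```python
import math

def lcg(seed, a=1664525, c=1013904223, m=2**32):
    # X_{n+1} = (a*X_n + c) mod m; dividing by m yields values in [0, 1)
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

def exponential_inverse(u, lam):
    # inverse transformation method: solve u = 1 - exp(-lam*y) for y
    return -math.log(1.0 - u) / lam

uniforms = lcg(seed=42)
samples = [exponential_inverse(next(uniforms), lam=2.0) for _ in range(10_000)]
print(sum(samples) / len(samples))  # should be close to the mean 1/lam = 0.5
```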
a-first-course-in-probability/10-simulation.ipynb
# ---
# jupyter:
#   jupytext:
#     cell_metadata_json: true
#     formats: ipynb,py:percent
#     text_representation:
#       extension: .py
#       format_name: percent
#       format_version: '1.3'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# %% [markdown]
# Before we begin, we will change a few settings to make the notebook look a bit prettier

# %% {"language": "html"}
# <style> body {font-family: "Calibri", cursive, sans-serif;} </style>

# %% [markdown]
# <img src="https://github.com/IKNL/guidelines/blob/master/resources/logos/iknl_nl.png?raw=true" width=200 align="right">
#
# # 03 - Survival Analysis
#
# Perform some basic survival analysis on the synthetic data
#
# ## Imports

# %%

# %% [markdown]
# ## 3.1 Read the data
# Read the data file that you generated in the notebook `01_preprocessing`

# %%

# %% [markdown]
# ## 3.2 Kaplan-Meier analysis
# Perform a variety of KM analyses on the dataset.
# Some pointers:
#
# * Calculate the duration (T) - the time to event (in our case, the event is either death or the last time we contacted the patient)
# * Calculate the censorship (C)
# * Apply the KM fitter to the _whole_ dataset and generate KM curves
# * Now, generate KM curves to compare the survival depending on:
#     - Gender (male vs female) <br>
#       In this case, perform a log-rank test: is the survival of males and females different?
#     - Clinical stage (numeric value)
#     - Pathological stage (numeric value)
#
# Make sure you save all your plots in the `results` directory
#
# **Tip**: take a look at [`lifelines`](https://lifelines.readthedocs.io/en/latest/)

# %%

# %% [markdown]
# ## 3.3 Cox Proportional Hazards analysis
# Perform a CPH analysis on the dataset.
#
# * Make sure you process the variables accordingly (encoding, normalization, etc. where needed)
# * Generate a graphical representation of the coefficients
#
# Make sure you save all your plots in the `results` directory
#
# **Tip**: take a look at [`lifelines`](https://lifelines.readthedocs.io/en/latest/) (again)

# %%

# %% [markdown]
# * What do the coefficients of the regression tell us?
# * Which variable has the largest impact on survival?
# * What limitations does the CPH model have?

# %% [markdown]
# ...
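# Before reaching for `lifelines`, it can help to see what the Kaplan-Meier fitter actually computes. The product-limit estimator is $\hat{S}(t) = \prod_{t_i \le t} (1 - d_i / n_i)$, where $d_i$ is the number of deaths at event time $t_i$ and $n_i$ is the number still at risk. A from-scratch sketch on a tiny hypothetical cohort (the durations and event flags below are invented for illustration):

```python
import numpy as np

def kaplan_meier(durations, events):
    """Product-limit estimate: at each event time multiply the running
    survival by (1 - deaths / at_risk). Censored subjects (event=0)
    only shrink the risk set."""
    durations = np.asarray(durations)
    events = np.asarray(events)
    times, survival = [], []
    s = 1.0
    for t in np.unique(durations[events == 1]):
        at_risk = np.sum(durations >= t)
        deaths = np.sum((durations == t) & (events == 1))
        s *= 1.0 - deaths / at_risk
        times.append(t)
        survival.append(s)
    return np.array(times), np.array(survival)

# hypothetical cohort: duration in months; event 1 = death, 0 = censored
durations = [6, 6, 7, 9, 10, 13, 15, 15]
events = [1, 0, 1, 1, 0, 1, 0, 1]
times, surv = kaplan_meier(durations, events)
for t, s in zip(times, surv):
    print(f"t={t}: S(t)={s:.3f}")
```

# `lifelines.KaplanMeierFitter` produces the same curve (plus confidence intervals); the sketch is only meant to demystify it.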
notebooks/03_survival_analysis.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Deep learning from scratch # # Learning objectives of the notebook # - Get used to working with *PyTorch Tensors*, the core data structure needed for working with neural networks; # - Practice using the `autograd` capabilities of PyTorch Tensors to carry out backpropagation without all the pain; # - Apply the useful PyTorch `torch.no_grad` context manager for managing memory consumption; # - Convert a NumPy-based gradient descent algorithm into one relying on PyTorch Tensors! # # PyTorch Basics # [PyTorch](http://pytorch.org) is a Python-based scientific computing package to support deep learning research. It provides tensor support (a replacement of NumPy, of sorts) to provide a fast & flexible platform for experimenting with neural networks. # + jupyter={"outputs_hidden": false} slideshow={"slide_type": "fragment"} import torch import numpy as np print(f'PyTorch version: {torch.__version__}') print(f'NumPy version: {np.__version__}') # + [markdown] slideshow={"slide_type": "subslide"} # The principal data structures in PyTorch are *tensors*; these are pretty much the same as standard multidimensional NumPy arrays. To illustrate this, let's construct a matrix of zeros (of `long` or 64 bit integer `dtype`) in NumPy, and then in PyTorch. # + jupyter={"outputs_hidden": false} slideshow={"slide_type": "fragment"} # zeros construction in NumPy x_np = np.zeros((2,4), dtype=np.int64) print(x_np, x_np.dtype) # + jupyter={"outputs_hidden": false} slideshow={"slide_type": "fragment"} # zeros construction in PyTorch x = torch.zeros(2, 4, dtype=torch.long) # Observe difference in calling syntax! 
print(x, x.dtype)

# + [markdown] slideshow={"slide_type": "subslide"}
# You can query a tensor's size (dimensions) with the `size` method (contrast with the NumPy array `shape` attribute).

# + jupyter={"outputs_hidden": false} slideshow={"slide_type": "fragment"}
print(x)
print(x.size())   # "size" is *method* for torch tensors
print(x_np.shape) # 'shape' is *attribute* returning tuple
print(x_np.size)  # "size" is *attribute* for np arrays

# + slideshow={"slide_type": "fragment"}
# torch.Tensor.size() yields subclass of Python tuple
print(type(x.size()))
# -

# As with NumPy, there are a variety of PyTorch data types for arrays:
#
# | NumPy dtype | PyTorch dtype | Alternative | Tensor class |
# |:-:|:-:|:-:|:-:|
# | `np.int16` |`torch.int16` |`torch.short` |`ShortTensor` |
# | `np.int32` |`torch.int32` |`torch.int` |`IntTensor` |
# | `np.int64` |`torch.int64` |`torch.long` |`LongTensor` |
# | `np.float16`|`torch.float16`|`torch.half` |`HalfTensor` |
# | `np.float32`|`torch.float32`|`torch.float` |`FloatTensor` |
# | `np.float64`|`torch.float64`|`torch.double`|`DoubleTensor`|
#
#
# Many functions and methods in PyTorch have similar names to NumPy functions & methods:

print(torch.empty(3, 4, dtype=torch.short), end='\n\n') # like numpy.empty
print(torch.ones(3, 4, dtype=torch.short), end='\n\n')  # like numpy.ones
print(torch.randn(3, 4, dtype=torch.float), end='\n\n') # like numpy.random.randn

# You can also construct PyTorch tensors from lists of numerical data or NumPy arrays.

# Constructing tensors from lists of data
print(torch.tensor([1,2,3]).dtype) # inferred to be 64 bit integers
print(torch.Tensor([1,2,3]).dtype) # specifically cast to 32 bit floats

# Notice the factory function `torch.tensor` differs from the class constructor `torch.Tensor`. The former *infers* the data type of the tensor to construct from the numerical data input. By contrast, the latter is just an alias for `torch.FloatTensor` (i.e., the data are cast to 32 bit floating point numbers).
# + [markdown] slideshow={"slide_type": "fragment"}
# PyTorch Tensors can be converted to NumPy arrays using the method `torch.Tensor.numpy`:

# + jupyter={"outputs_hidden": false} slideshow={"slide_type": "fragment"}
a = torch.rand(2,3) # first, construct a random PyTorch tensor
print(a)
print(a.dtype)

# + jupyter={"outputs_hidden": false} slideshow={"slide_type": "fragment"}
b = a.numpy() # converts to NumPy array (shallow copy; use .copy() for deep copy)
print(b)
print(type(b))
# -

# [What is PyTorch?](https://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html) at [`pytorch.org`](https://pytorch.org) provides a quick tour through related topics (e.g., tensor indexing, arithmetic operations, elementwise functions, linear algebra, etc.). For the most part, these resemble (although not perfectly) the same corresponding tasks in NumPy.
#
# # Backpropagation with `autograd`
#
# Why PyTorch Tensors when all they seem to offer is the same functionality of NumPy arrays? Another related question is why go through the trouble of reimplementing everything that's done in NumPy in PyTorch (with slightly different names & APIs)? There are two principal advantages that systems like PyTorch have over NumPy for numerical computing:
#
# 1. **Automatic differentiation**: PyTorch includes a package called [`autograd`](https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html) that computes the backpropagation algorithm for users. As such, the management of gradients (and the associated memory needed) is significantly simplified with the PyTorch framework. This is, of course, very important for implementing gradient descent.
# 2. **GPU computation**: GPUs (graphical processing units) are widely available to speed up computation. However, GPU programming remains challenging for most developers because of the memory management issues associated with moving data onto GPUs to speed up computation.
With PyTorch, much of the work of moving tensors onto GPUs is handled for the user, which makes programming with GPUs much easier... and this in turn speeds up a lot of neural network training.
#
# If we examine the object `a` created above, you can see it has an attribute `device` that can be set in various ways depending on the availability of GPU hardware.

a.device # PyTorch tensors have a `device` attribute
# Common alternatives: device(type='cpu'), device(type='cuda'), etc.

# + [markdown] slideshow={"slide_type": "fragment"} toc-hr-collapsed=true
# We'll focus mostly on automatic differentiation today as supported by the [`autograd`](https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html) module. Remember, our main reason for wanting to do this is to compute gradients as needed to train neural network parameters (weights & biases) with gradient descent. In PyTorch, automatic differentiation of tensors is achieved by setting the `requires_grad` attribute to `True` for all relevant `torch.Tensor`s on construction (the default value is `False`). Alternatively, there is also a method `.requires_grad_( ... )` that modifies the `requires_grad` flag in-place (its argument defaults to `True`).
#
# Once tensors are defined with the `requires_grad` attribute set correctly, additional space is allocated for intermediate computations (remember all the extra lists of arrays we had to maintain explicitly within the `forward` and `backward` functions?). These are used when calling `torch.Tensor.backward()` to compute all gradients recursively. The intermediate gradients computed can then be retrieved using the attribute `torch.Tensor.grad`.
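# One item from the learning objectives, the `torch.no_grad` context manager, fits naturally here: once `backward()` has filled in the gradients, the parameter update itself should *not* be tracked by `autograd` (tracking it would waste memory, and in-place updates of leaf tensors that require grad are disallowed outside such a context). A minimal hand-written gradient-descent step, as a sketch:

```python
import torch

w = torch.tensor(3.0, requires_grad=True)
loss = (w - 1.0) ** 2   # toy quadratic loss with minimum at w = 1
loss.backward()         # fills w.grad with d(loss)/dw = 2*(w - 1) = 4.0

with torch.no_grad():   # suspend autograd tracking for the update itself
    w -= 0.1 * w.grad   # plain gradient-descent step: 3.0 - 0.1*4.0
w.grad.zero_()          # reset the gradient before the next iteration

print(w.item())  # 2.6 (approximately, in float32)
```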
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Backpropagation example
#
# Let's consider a simple polynomial function like the one below applied to a scalar value $x$:
#
# $\begin{aligned} &\mathrm{Function:} & f(x) &= 3x^4 - 2x^3 + 4x^2 - x + 5 \\
# &\mathrm{Derivative:} & f'(x) &= 12x^3 - 6x^2 + 8x - 1\end{aligned}$

# + [markdown] slideshow={"slide_type": "fragment"}
# 1. Create tensor `x` with the attribute `requires_grad=True` set in the constructor.

# + slideshow={"slide_type": "fragment"}
x = torch.tensor(2.0, requires_grad=True)

# + [markdown] slideshow={"slide_type": "subslide"}
# 2. Map the polynomial function $f$ onto the tensor `x` and assign the result to `y`. You can verify explicitly that, when $x=2$, $f(x)=51$:
# $$f(2)=3(2)^4 - 2(2)^3 + 4(2)^2 - (2) + 5 = 48-16+16-2+5 = 51$$

# + slideshow={"slide_type": "fragment"}
y = 3*x**4 - 2*x**3 + 4*x**2 - x + 5  # Write out the computation of y explicitly.
print(y)
# Notice y has a new attribute: grad_fn
print(y.grad_fn)

# + [markdown] slideshow={"slide_type": "subslide"}
# The object `y` has an associated gradient function accessible as `y.grad_fn`. When `y` is computed and stored, a set of algebraic operations is applied to the tensor `x`. If the derivatives of those operations are known, the `autograd` package provides support for computing those derivatives (that's what the `AddBackward0` object is). Invoking `y.backward()`, then, computes the value of the *gradient* of `y` with respect to `x` evaluated at `x==2`:
#
# $$f'(2) = 12(2^3) - 6(2^2) + 8(2) - 1 = 96-24+16-1 = 87. $$
#
# Notice that the computed gradient value is stored in the attribute `x.grad` of the original tensor `x`.

# + slideshow={"slide_type": "fragment"}
y.backward()  # Compute derivatives and propagate values back through tensors on which y depends
print(x.grad)  # Expect the value 87 as a singleton tensor
# -

# Notice that invoking `y.backward()` a second time raises an exception.
# This is because the intermediate arrays required to execute the backpropagation have been released (i.e., the memory has been deallocated).

# + slideshow={"slide_type": "fragment"}
y.backward()  # Yields a RuntimeError

# + [markdown] slideshow={"slide_type": "subslide"}
# ### Another backpropagation example
#
# + Use $z = \cos(u)$ with $u=x^2$ at $x=\sqrt{\frac{\pi}{3}}$
#
# + Expect $z=\frac{1}{2}$ when $x=\sqrt{\frac{\pi}{3}}$

# + jupyter={"outputs_hidden": false} slideshow={"slide_type": "fragment"}
x = torch.tensor([np.sqrt(np.pi/3)], requires_grad=True)
u = x ** 2
z = torch.cos(u)
print(f'x: {x}\nu: {u}\nz: {z}')

# + [markdown] slideshow={"slide_type": "subslide"}
# + Expect
# $$\frac{dz}{dx} = \frac{dz}{du} \frac{du}{dx} = (-\sin u)(2x) = -\sqrt{\pi}$$
# when $x=\sqrt{\frac{\pi}{3}}$

# + jupyter={"outputs_hidden": false} slideshow={"slide_type": "fragment"}
# Now apply backward for backpropagation of derivative values
z.backward()

# + jupyter={"outputs_hidden": false} slideshow={"slide_type": "fragment"}
print(f'x.grad:\t\t\t\t\t\t{x.grad}')
x, u = x.item(), u.item()  # extract scalar values
print(f'Computed derivative using analytic formula:\t{-np.sin(u)*2*x}')
# -

# Notice that the tensors `x`, `u`, and `z` are all singleton tensors. The method `item` is used to extract a scalar entry out of a singleton tensor.

# # Building a Neural Network in PyTorch
#
# Let's now use an approach adapted from one by [<NAME>](https://pytorch.org/tutorials/beginner/pytorch_with_examples.html) (BSD 3-Clause License). The goal is to convert a NumPy-constructed gradient descent process modelling a feed-forward neural network into a PyTorch neural network. The architecture is similar to the one constructed in the last notebook.
#
# + The input vectors are assumed to have $784(=28^2)$ features.
# + The first layer is a hidden layer with 100 units and a *rectified linear unit* activation function (often called $\mathrm{ReLU}$):
#
# $$ \mathrm{ReLU}(x) = \begin{cases} x, & \mathrm{if\ }x>0 \\ 0 & \mathrm{otherwise} \end{cases} \quad\Rightarrow\quad
# \mathrm{ReLU}'(x) = \begin{cases} 1, & \mathrm{if\ }x>0 \\ 0 & \mathrm{otherwise} \end{cases}.
# $$
#
# + The final output layer has 10 units and the activation function associated with this layer is the identity map.
#
# The loop provided below does not use functions to represent the initialization, forward propagation, backpropagation, and update steps of the steepest descent process. You'll use this as a starting point to develop a PyTorch version of this gradient descent loop.

# +
N_batch, dimensions = 64, [784, 100, 10]

# Create random input and output data
X = np.random.randn(dimensions[0], N_batch)
y = np.random.randn(dimensions[-1], N_batch)

# Randomly initialize weights & biases
W1 = np.random.randn(dimensions[1], dimensions[0])
W2 = np.random.randn(dimensions[2], dimensions[1])
b1 = np.random.randn(dimensions[1], 1)
b2 = np.random.randn(dimensions[2], 1)

eta, MAXITER, SKIP = 5e-6, 2500, 100

for epoch in range(MAXITER):
    # Forward propagation: compute predicted y
    Z1 = np.dot(W1, X) + b1
    A1 = np.maximum(Z1, 0)  # ReLU function
    Z2 = np.dot(W2, A1) + b2
    A2 = Z2  # identity function on output layer

    # Compute and print loss
    loss = 0.5 * np.power(A2 - y, 2).sum()
    if (divmod(epoch, SKIP)[1]==0):
        print(epoch, loss)

    # Backpropagation to compute gradients of loss with respect to W1, W2, b1, and b2
    delta2 = (A2 - y)  # derivative of identity map == multiplying by ones
    grad_W2 = np.dot(delta2, A1.T)
    grad_b2 = np.dot(delta2, np.ones((N_batch, 1)))
    delta1 = np.dot(W2.T, delta2) * (Z1>0)  # derivative of ReLU is a step function
    grad_W1 = np.dot(delta1, X.T)
    grad_b1 = np.dot(delta1, np.ones((N_batch, 1)))

    # Update weights & biases
    W1 = W1 - eta * grad_W1
    b1 = b1 - eta * grad_b1
    W2 = W2 - eta * grad_W2
    b2 = b2 - eta * grad_b2
# -

# ## 1. Convert the preceding code to use PyTorch Tensors instead of NumPy arrays
#
# + Replace use of `numpy.random.randn` with `torch.randn` to initialize the problem with PyTorch Tensors rather than NumPy arrays.
# + Replace instances of `np.dot` with [`torch.mm`](https://pytorch.org/docs/stable/torch.html#torch.mm) (both of which implement standard matrix-vector products).
# + Replace use of `np.ones` with [`torch.ones`](https://pytorch.org/docs/stable/torch.html#torch.ones).
# + Replace the computation of `A1 = np.maximum(Z1, 0)` with a call to the PyTorch builtin function `torch.relu`.
# + Modify the computation of the `loss` to use PyTorch-specific functions/methods (hint: there is a PyTorch `torch.Tensor.pow` method).
# + When printing the loss every hundred epochs, use the `.item()` method to extract its singleton scalar entry.
# + Make sure the loop executes in a similar fashion to the preceding loop.

# +
N_batch, dimensions = 64, [784, 100, 10]

# Create random input and output data
X = torch.randn(dimensions[0], N_batch)
y = torch.randn(dimensions[-1], N_batch)

# Randomly initialize weights & biases
W1 = torch.randn(dimensions[1], dimensions[0])
W2 = torch.randn(dimensions[2], dimensions[1])
b1 = torch.randn(dimensions[1], 1)
b2 = torch.randn(dimensions[2], 1)

eta, MAXITER, SKIP = 5e-6, 2500, 100

for epoch in range(MAXITER):
    # Forward propagation: compute predicted y
    Z1 = ___
    A1 = ___  # Native PyTorch ReLU function
    Z2 = ____
    A2 = Z2

    # Compute and print loss
    loss = ____
    if (divmod(epoch, SKIP)[1]==0):
        print(epoch, loss.item())

    # Backpropagation to compute gradients of loss with respect to W1, W2, b1, and b2
    delta2 = (A2 - y)  # derivative of identity map == multiplying by ones
    grad_W2 = ____
    grad_b2 = ____
    delta1 = ____  # derivative of ReLU is a step function
    grad_W1 = ____
    grad_b1 = ____

    # Update weights & biases
    W1 = W1 - eta * grad_W1
    b1 = b1 - eta * grad_b1
    W2 = W2 - eta * grad_W2
    b2 = b2 - eta * grad_b2
# -

# ## 2. Use `backward()` and `grad` to compute backpropagation and updates
#
# Having set up the main loop with PyTorch Tensors, now make use of `autograd` to eliminate the tedious work of having to write the code to compute the gradients of the loss function with respect to `W1`, `W2`, `b1`, and `b2` explicitly.
# + Insert `requires_grad=True` as an argument in the construction of `W1`, `W2`, `b1`, and `b2`.
# + After computing the loss function value `loss`, replace all the lines used to compute gradients explicitly by a single call to `loss.backward()`.
# + Replace the update steps with gradients stored in the `.grad` attributes of the weights & biases. For instance, you can now compute `W1 -= eta * W1.grad` *after* the call to `loss.backward()` rather than computing and explicitly storing `grad_W1` and later computing `W1 -= eta * grad_W1`.
# + Do these update steps within a `with torch.no_grad():` block (as provided below). The purpose of the [`torch.no_grad`](https://pytorch.org/docs/stable/torch.html#torch.no_grad) context manager is to reduce memory consumption.
# + After completing the updates, zero out the computed gradients before the next iteration by calling the method `.zero_()`. For instance, you would call `W1.grad.zero_()` to zero out the computed gradient in place. This call will be within the scope of the `torch.no_grad` context manager.
#
# Notice that, in PyTorch, methods like `.zero_` that have a trailing underscore in their name operate in place, i.e., they overwrite the memory locations associated with the tensor.
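# Why does zeroing matter? Because `backward()` *accumulates* (adds) into `.grad` rather than overwriting it. The toy class below is a pure-Python model of that accumulate-then-zero bookkeeping; it is illustrative only (the name `ToyParam` and its methods are invented here, not PyTorch code):

```python
class ToyParam:
    """Mimics PyTorch's accumulate-into-.grad behavior (invented helper, not torch)."""
    def __init__(self, value):
        self.value = value
        self.grad = 0.0

    def backward(self, g):
        self.grad += g            # like loss.backward(): ADDS into .grad

    def step(self, eta):
        self.value -= eta * self.grad  # gradient-descent update

    def zero_grad(self):
        self.grad = 0.0           # like W1.grad.zero_()

w = ToyParam(1.0)
w.backward(2.0)
w.backward(2.0)                   # without zeroing, gradients pile up
print(w.grad)                     # 4.0, not 2.0
w.zero_grad()
w.backward(2.0)
print(w.grad)                     # 2.0 after zeroing
```

# Forgetting the zeroing step is a classic bug: every iteration's update would then use the sum of all past gradients instead of the current one.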
# + # Create random input and output data X = torch.randn(dimensions[0], N_batch) y = torch.randn(dimensions[-1], N_batch) # Randomly initialize weights & biases W1 = torch.randn(dimensions[1], dimensions[0], requires_grad=True) W2 = torch.randn(dimensions[2], dimensions[1], requires_grad=True) b1 = torch.randn(dimensions[1], 1, requires_grad=True) b2 = torch.randn(dimensions[2], 1, requires_grad=True) eta, MAXITER, SKIP = 5e-6, 2500, 100 for epoch in range(MAXITER): # Forward propagation: compute predicted y Z1 = ____ A1 = ____ # Native PyTorch ReLU function Z2 = ____ A2 = Z2 # Compute and print loss loss = ____ if (divmod(epoch, SKIP)[1]==0): print(epoch, loss.item()) # Backpropagation to compute gradients of loss with respect to W1, W2, b1, and b2 loss.backward() # Update weights & biases with torch.no_grad(): # # Fill in the code for the update steps # # Manually zero the gradients after updating weights # # Fill in the code to zero the gradients. # # - # --- # # # What next? # # PyTorch has a large ecosystem of utilities including packages like `torch.nn` (which is like Keras in spirit to simplify specifying a network architecture in an object-oriented way) and `torch.optim` (which makes managing different optimization schemes easier). We've covered a lot of ground in this tutorial so far, so this will be as far as we can get today. But you now should have enough of an understanding of backpropagation that you can pick up more at [`pytorch.org`](https://pytorch.org) independently.
notebooks/3-Student-deep-learning-from-scratch-pytorch.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/ricardoBatista77/Business-IA/blob/master/Stock_Price_prediction_for_Google.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# + id="kKHo65JMGYs8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 88} outputId="088ccbf5-eadf-4833-9f9e-a32648e0a845"
#Import the libraries
import math
import pandas_datareader as web
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from keras.models import Sequential
from keras.layers import Dense, LSTM
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')

# + id="5SEgiNRAGvMK" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 450} outputId="5e83a3b3-ba1d-4ac0-b2c6-0a7d16417ac4"
#Get the stock quote
df = web.DataReader('GOOGL', data_source='yahoo', start='2012-01-01', end='2020-04-19')
#Show the data
df

# + id="maSveZr2GvQB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="d60ee3cb-936c-4021-cb22-69ceb0cd6d73"
#Get the number of rows and columns in the data set
df.shape

# + id="gneuulAoGvVB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 518} outputId="85faaa21-a5ad-4958-950a-5f436de25893"
#Visualize the closing price history
plt.figure(figsize=(16,8))
plt.title('Close Price History')
plt.plot(df['Close'])
plt.xlabel('Date', fontsize=18)
plt.ylabel('Close Price USD ($)', fontsize=18)
plt.show()

# + id="lT26uf8HGvYi" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="f7e5a0b9-1205-49d9-e412-16bd9bad9fc7"
#Create a new dataframe with only the 'Close' column
data = 
df.filter(['Close']) #Convert the dataframe to a numpy array dataset = data.values #Get the number of rows to train the model on training_data_len = math.ceil( len(dataset) * .8 ) training_data_len # + id="Nv560MgdG8n5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 136} outputId="5b3f2ec8-3552-4c0e-e257-477409e4df80" #Scale the data scaler = MinMaxScaler(feature_range=(0,1)) scaled_data = scaler.fit_transform(dataset) scaled_data # + id="9h9vq_N-G8r8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 680} outputId="49aa6d8a-efe8-49f3-ba82-12aa0bdbe294" #Create the training data set #Create the scaled training data set train_data = scaled_data[0:training_data_len , :] #Split the data into x_train and y_train data sets x_train = [] y_train = [] for i in range(60, len(train_data)): x_train.append(train_data[i-60:i, 0]) y_train.append(train_data[i, 0]) if i<= 61: print(x_train) print(y_train) print() # + id="IjAkUXjLG8wV" colab_type="code" colab={} #Convert the x_train and y_train to numpy arrays x_train, y_train = np.array(x_train), np.array(y_train) # + id="A18FgWKKGvbp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="5ccd0cc3-c3ca-427b-b828-97f856690ad3" #Reshape the data x_train = np.reshape(x_train, (x_train.shape[0], x_train.shape[1], 1)) x_train.shape # + id="F89vtc0GGviz" colab_type="code" colab={} #Build the LSTM model model = Sequential() model.add(LSTM(50, return_sequences=True, input_shape= (x_train.shape[1], 1))) model.add(LSTM(50, return_sequences= False)) model.add(Dense(25)) model.add(Dense(1)) # + id="TW4gVV8ZHVUz" colab_type="code" colab={} #Compile the model model.compile(optimizer='adam', loss='mean_squared_error') # + id="DHPU9YUjHVYC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="2df82815-2566-49c9-be3a-1b138cb86901" #Train the model model.fit(x_train, y_train, batch_size=1, epochs=1) # + id="tw9iZyOWHVbS" 
colab_type="code" colab={} #Create the testing data set #Create a new array containing scaled values from index 1543 to 2002 test_data = scaled_data[training_data_len - 60: , :] #Create the data sets x_test and y_test x_test = [] y_test = dataset[training_data_len:, :] for i in range(60, len(test_data)): x_test.append(test_data[i-60:i, 0]) # + id="VyhT1NtLHVeI" colab_type="code" colab={} #Convert the data to a numpy array x_test = np.array(x_test) # + id="10cFXNpAHVhB" colab_type="code" colab={} #Reshape the data x_test = np.reshape(x_test, (x_test.shape[0], x_test.shape[1], 1 )) # + id="mR1b0N14Htv3" colab_type="code" colab={} #Get the models predicted price values predictions = model.predict(x_test) predictions = scaler.inverse_transform(predictions) # + id="eOiWEespHt0n" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="83098d98-7490-46dd-d667-e64f6e411712" #Get the root mean squared error (RMSE) rmse=np.sqrt(np.mean(((predictions- y_test)**2))) rmse # + id="kgMe7PExHt5N" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 620} outputId="f48c080c-a24f-43b4-f09d-df6b9c82b23f" #Plot the data train = data[:training_data_len] valid = data[training_data_len:] valid['Predictions'] = predictions #Visualize the data plt.figure(figsize=(16,8)) plt.title('Model') plt.xlabel('Date', fontsize=18) plt.ylabel('Close Price USD ($)', fontsize=18) plt.plot(train['Close']) plt.plot(valid[['Close', 'Predictions']]) plt.legend(['Train', 'Val', 'Predictions'], loc='lower right') plt.show() # + id="1NRdvb3UHt-K" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 450} outputId="2c37deef-22aa-40d3-c3a8-658e0cf7ab08" #Show the valid and predicted prices valid # + id="yZqn6BfQHtrn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="2911f267-c5f2-48c3-fd19-7bd6a616120b" #Get the quote apple_quote = web.DataReader('GOOGL', data_source='yahoo', start='2012-01-01', 
end='2020-04-19')
#Create a new dataframe
new_df = apple_quote.filter(['Close'])
#Get the last 60 day closing price values and convert the dataframe to an array
last_60_days = new_df[-60:].values
#Scale the data to be values between 0 and 1
last_60_days_scaled = scaler.transform(last_60_days)
#Create an empty list
X_test = []
#Append the past 60 days
X_test.append(last_60_days_scaled)
#Convert the X_test data set to a numpy array
X_test = np.array(X_test)
#Reshape the data
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 1))
#Get the predicted scaled price
pred_price = model.predict(X_test)
#Undo the scaling
pred_price = scaler.inverse_transform(pred_price)
print(pred_price)

# + id="htpM0KNdHVjy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="d390328f-b623-418b-8278-50083723033c"
#Get the quote
apple_quote2 = web.DataReader('GOOGL', data_source='yahoo', start='2020-04-18', end='2020-04-18')
print(apple_quote2['Close'])

# + id="IW-gs75zINld" colab_type="code" colab={}

# + id="QxjgabAxINp9" colab_type="code" colab={}

# + id="BPt_g2dPINtW" colab_type="code" colab={}

# + id="RR6YuD0KINwM" colab_type="code" colab={}

# + id="IaX6lY9TINy-" colab_type="code" colab={}

# + id="yiPI2-LyIN11" colab_type="code" colab={}

# + id="hLnAm6_dIN4O" colab_type="code" colab={}
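# The 60-day windowing pattern used throughout this notebook — the previous 60 scaled closing prices as features for the next one — can be sketched in isolation with NumPy. The helper name `make_windows` and the toy series are ours (not the GOOGL data):

```python
import numpy as np

def make_windows(series, lookback=60):
    """Build (X, y) pairs: each row of X holds `lookback` values, y the next value."""
    X, y = [], []
    for i in range(lookback, len(series)):
        X.append(series[i - lookback:i])  # the window of past values
        y.append(series[i])               # the value immediately after the window
    return np.array(X), np.array(y)

prices = np.arange(100, dtype=float)      # toy stand-in for scaled closing prices
X, y = make_windows(prices, lookback=60)
print(X.shape, y.shape)                   # (40, 60) (40,)
print(y[0])                               # 60.0 — the value right after the first window
```

# This is exactly the loop used above to build `x_train`/`y_train` and `x_test`; only the reshape to `(samples, timesteps, 1)` for the LSTM input is left out here.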
Stock_Price_prediction_for_Google.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
import random as rd
import numpy as np
import seaborn as sn
import pandas as pd

mu1 = -1
mu2 = 3
sig1 = 0.5
sig2 = 1
N = 100

np.random.seed(10)
x11 = np.random.randn(N,1)*sig1 + mu1
x12 = np.random.randn(N,1)*sig1 + mu1+3
x21 = np.random.randn(N,1)*sig2 + mu2
x22 = np.random.randn(N,1)*sig2 + mu2+3

c = np.vstack((-np.ones((N,1)), np.ones((N,1))))
x1 = np.hstack((x11,x12))
x2 = np.hstack((x21,x22))
X = np.hstack( (np.vstack( (x1,x2) ), c) )
np.random.shuffle(X)
dataset = pd.DataFrame(data=X, columns=['x','y','c'])
# -

# View initial clusters
dataset["clusters"] = dataset["c"].replace({1:"cluster_1", -1:"cluster_2"})
sn.scatterplot(x="x", y="y", hue="clusters", data=dataset)

# ## Pseudocode for K_Means Clustering
# 1. Initialize M points/centroids from the dataset and the number of iterations
# 2. Find the Euclidean distance between each centroid and each sample in the data
# 3. Assign each sample to its closest centroid, denoting a cluster
# 4. Take the mean of the points belonging to each centroid
# 5. Repeat step 2 to step 4 for the number of iterations or till a threshold is met

class K_Means(object):
    """K Means Algorithm."""

    def __init__(self, M, iters=250):
        self.M = M
        self.iters = iters

    def initial_centroids(self):
        random_idx = np.random.permutation(self.X.shape[0])
        selected_index = random_idx[0:self.M]
        return self.X[selected_index]

    def compute_dist(self):
        cluster_dist = np.zeros((self.X.shape[0], self.M))
        for i in range(self.M):
            cluster_dist[:,i] = np.sqrt(np.sum((self.X - self.centroids[i])**2, axis=1))
        return cluster_dist

    def update_centroids(self):
        for i in range(self.M):
            # axis=0 keeps one mean per coordinate; without it the mean
            # collapses to a single scalar and the centroid degenerates
            self.centroids[i] = np.mean(self.X[self.clusters == i], axis=0)
        return

    def fit(self, X):
        self.clusters = np.zeros(X.shape[0])
        self.X = X
        self.centroids = self.initial_centroids()
        for i in range(self.iters):
            self.clusters = np.argmin(self.compute_dist(), axis=1)
            self.update_centroids()

    # Accessors renamed so they don't collide with the attributes set in fit()
    def get_clusters(self):
        return self.clusters

    def get_centroids(self):
        return self.centroids

X_train = dataset.iloc[:,[0,1]]

km = K_Means(2)  # 2 clusters
km.fit(X_train.values)

km.clusters

dataset["new_clusters"] = km.clusters
dataset.head()

dataset["new_cluster_label"] = dataset["new_clusters"].replace({0:"cluster_1", 1:"cluster_2"})
sn.scatterplot(x="x", y="y", hue="new_cluster_label", data=dataset)

# ### Points correctly separated into clusters
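# The centroid-update step (step 4 of the pseudocode) can be checked in isolation. Note the `axis=0`, so the mean is taken per coordinate and each cluster yields one centroid vector; the points below are made up for the check:

```python
import numpy as np

X = np.array([[0.0, 0.0], [2.0, 0.0],        # two points near the origin (cluster 0)
              [10.0, 10.0], [12.0, 10.0]])   # two points far away (cluster 1)
clusters = np.array([0, 0, 1, 1])

centroids = np.zeros((2, 2))
for i in range(2):
    # axis=0 averages per coordinate; omitting it would collapse to a scalar
    centroids[i] = X[clusters == i].mean(axis=0)

print(centroids)  # [[ 1.  0.] [11. 10.]]
```

# Each centroid lands at the componentwise mean of its cluster's points, which is what the `update_centroids` step is meant to compute.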
KMeans.ipynb
/ ---
/ jupyter:
/   jupytext:
/     text_representation:
/       extension: .q
/       format_name: light
/       format_version: '1.5'
/       jupytext_version: 1.14.4
/   kernelspec:
/     display_name: SQL
/     language: sql
/     name: SQL
/ ---

/ # SQL Server 2019 Polybase - Data Virtualization
/ This notebook has a series of instructions to set up external data sources using Polybase in SQL Server 2019

/ ## SQL Server Polybase with Oracle
/ The following will go through how to set up an external data source, external table, and query to Oracle. First change context to the WideWorldImporters database and remove any existing external tables from a previous run.

/ ### Step 1: Add a master key
/
/ Add a master key to encrypt the database scoped credential
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<PASSWORD>'
GO

/ ### Step 2: Create database scoped credentials
/* specify credentials to external data source
*  IDENTITY: user name for external source.
*  SECRET: password for external source.
*/
DROP DATABASE SCOPED CREDENTIAL OracleCredentials
GO
CREATE DATABASE SCOPED CREDENTIAL OracleCredentials
WITH IDENTITY = 'gl', Secret = 'gl<PASSWORD>'
GO

/ ### Step 3: Create the external data source
/* LOCATION: Location string should be of format '<vendor>://<server>[:<port>]'.
*  PUSHDOWN: specify whether computation should be pushed down to the source. ON by default.
*  CREDENTIAL: the database scoped credential, created above.
*/
CREATE EXTERNAL DATA SOURCE OracleServer
WITH (
    LOCATION = 'oracle://bworacle:49161',
    PUSHDOWN = ON,
    CREDENTIAL = OracleCredentials
)
GO

/ ### Step 4: Create a new schema
DROP SCHEMA oracle
GO
CREATE SCHEMA oracle
GO

/ ### Step 5: Create the external table
/* LOCATION: oracle table/view in 'database_name.schema_name.object_name' format
*  DATA_SOURCE: the external data source, created above.
*/ CREATE EXTERNAL TABLE oracle.accountsreceivable ( arid int, ardate date, ardesc nvarchar(100) COLLATE Latin1_General_100_CI_AS, arref int, aramt decimal(10,2) ) WITH ( LOCATION='[XE].[GL].[ACCOUNTSRECEIVABLE]', DATA_SOURCE=OracleServer ) / ### Step 6: Create statistics on key columns CREATE STATISTICS arrefstats ON oracle.accountsreceivable ([arref]) WITH FULLSCAN GO / ### Step 7: Do a filter query for predicate pushdown -- Try a simple filter SELECT * FROM oracle.accountsreceivable WHERE arref = 336252 GO / ### Step 8: Join with a local table -- Join with a local table -- SELECT ct.*, oa.arid, oa.ardesc FROM oracle.accountsreceivable oa JOIN [Sales].[CustomerTransactions] ct ON oa.arref = ct.CustomerTransactionID GO
sql2019book/ch9_data_virtualization/sqldatahub/oracle/oracle_external_table.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Working on Lists
# Lists are just like dynamic-sized arrays declared in other languages (vector in C++ and ArrayList in Java). Lists need not always be homogeneous, which makes them one of the most powerful tools in Python. A single list may contain data types like integers and strings, as well as objects. Lists are mutable, and hence they can be altered even after their creation.
#
# Lists in Python are ordered and have a definite count. The elements in a list are indexed according to a definite sequence, and the indexing of a list is done with 0 being the first index. Each element in the list has its definite place in the list, which allows duplication of elements, with each element having its own distinct position.
#
# Note – Lists are a useful tool for preserving a sequence of data and further iterating over it.

# # Creating a List
# Lists in Python can be created by just placing the sequence inside square brackets []. Unlike sets, no built-in function is needed to create a list.

# +
# Python program to demonstrate
# Creation of List

# Creating a List
List = []
print("Blank List: ")
print(List)
# -

# Creating a List of numbers
List = [10, 20, 14]
print("\nList of numbers: ")
print(List)

# Creating a List of strings and accessing
# using index
List = ["Geeks", "For", "Geeks"]
print("\nList Items: ")
print(List[0])
print(List[2])

# Creating a Multi-Dimensional List
# (By Nesting a list inside a List)
List = [['Geeks', 'For'], ['Geeks']]
print("\nMulti-Dimensional List: ")
print(List)

# Creating a list with multiple distinct or duplicate elements
#
# A list may contain duplicate values, each with its own distinct position, and hence multiple distinct or duplicate values can be passed as a sequence at the time of list creation.
# Creating a List with
# the use of Numbers
# (Having duplicate values)
List = [1, 2, 4, 4, 3, 3, 3, 6, 5]
print("\nList with the use of Numbers: ")
print(List)

# Creating a List with
# mixed type of values
# (Having numbers and strings)
List = [1, 2, 'Geeks', 4, 'For', 6, 'Geeks']
print("\nList with the use of Mixed Values: ")
print(List)

# +
# Knowing the size of List

# Creating a List
List1 = []
print(len(List1))

# Creating a List of numbers
List2 = [10, 20, 14]
print(len(List2))
# -

# # Adding Elements to a List
# Elements can be added to a list by using the built-in append() method. Only one element at a time can be added with append(); to add multiple elements with append(), loops are used. Tuples can also be appended to a list; the whole tuple is stored as a single element. Likewise, a list can be appended to an existing list with append(), in which case the inner list becomes a single (nested) element.

# +
# Python program to demonstrate
# Addition of elements in a List

# Creating a List
List = []
print("Initial blank List: ")
print(List)
# -

# Addition of Elements
# in the List
List.append(1)
List.append(2)
List.append(4)
print("\nList after Addition of Three elements: ")
print(List)

# Adding elements to the List
# using Iterator
for i in range(1, 4):
    List.append(i)
print("\nList after Addition of elements from 1-3: ")
print(List)

# Adding Tuples to the List
List.append((5, 6))
print("\nList after Addition of a Tuple: ")
print(List)

# Addition of List to a List
List2 = ['For', 'Geeks']
List.append(List2)
print("\nList after Addition of a List: ")
print(List)

# Using insert() method
#
# append() only adds elements at the end of the list; to add an element at a desired position, the insert() method is used. Unlike append(), which takes only one argument, insert() requires two arguments (position, value).
# +
# Python program to demonstrate
# Addition of elements in a List

# Creating a List
List = [1,2,3,4]
print("Initial List: ")
print(List)

# Addition of Element at
# specific Position
# (using Insert Method)
List.insert(3, 12)
List.insert(0, 'Geeks')
print("\nList after performing Insert Operation: ")
print(List)
# -

# Using extend() method
#
# Other than the append() and insert() methods, there's one more method for adding elements: extend(). This method is used to add multiple elements at the same time at the end of the list.
#
# Note – append() and extend() can only add elements at the end.

# +
# Python program to demonstrate
# Addition of elements in a List

# Creating a List
List = [1,2,3,4]
print("Initial List: ")
print(List)

# Addition of multiple elements
# to the List at the end
# (using Extend Method)
List.extend([8, 'Geeks', 'Always'])
print("\nList after performing Extend Operation: ")
print(List)
# -

# # Accessing elements from the List
# In order to access list items, refer to the index number. Use the index operator [ ] to access an item in a list. The index must be an integer. Nested lists are accessed using nested indexing.

# +
# Python program to demonstrate
# accessing of element from list

# Creating a List with
# the use of multiple values
List = ["Geeks", "For", "Geeks"]
# -

# accessing an element from the
# list using index number
print("Accessing an element from the list")
print(List[0])
print(List[2])

# +
# Creating a Multi-Dimensional List
# (By Nesting a list inside a List)
List = [['Geeks', 'For'], ['Geeks']]

# accessing an element from the
# Multi-Dimensional List using
# index number
print("Accessing an element from a Multi-Dimensional list")
print(List[0][1])
print(List[1][0])
# -

# Removing Elements from the List
#
# Using remove() method
# Elements can be removed from a list by using the built-in remove() method; an error arises if the element doesn't exist in the list.
# The remove() method removes only one element at a time — the first occurrence of the searched element; to remove a range of elements, an iterator is used.
#
# Note – remove() will only remove the first occurrence of the searched element.

# +
# Python program to demonstrate
# Removal of elements in a List

# Creating a List
List = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
print("Initial List: ")
print(List)
# -

# Removing elements from List
# using remove() method
List.remove(5)
List.remove(6)
print("\nList after Removal of two elements: ")
print(List)

# Removing elements from List
# using iterator method
for i in range(1, 5):
    List.remove(i)
print("\nList after Removing a range of elements: ")
print(List)

# Using pop() method
#
# The pop() method can also be used to remove and return an element from a list; by default it removes only the last element. To remove an element from a specific position, the index of the element is passed as an argument to pop().

# +
List = [1,2,3,4,5]

# Removing an element from the
# List using the pop() method
List.pop()
print("\nList after popping an element: ")
print(List)
# -

# Removing an element at a
# specific location from the
# List using the pop() method
List.pop(2)
print("\nList after popping a specific element: ")
print(List)

# # Slicing Of a List
# In a Python list there are multiple ways to print the whole list with all its elements, but to print a specific range of elements, we use the slice operation. Slicing is performed on lists with the use of a colon (:).
#
# * To print elements from the beginning up to an index, use [:Index].
# * To print elements counted from the end, use [:-Index].
# * To print elements from a specific index till the end, use [Index:].
# * To print elements within a range, use [Start Index:End Index].
# * To print the whole list with the slice operation, use [:].
# * To print the whole list in reverse order, use [::-1].
#
# Note – To print elements of a list from the rear end, use negative indexes.
# +
# Python program to demonstrate
# Slicing of elements in a List

# Creating a List
List = ['G','E','E','K','S','F',
        'O','R','G','E','E','K','S']
print("Initial List: ")
print(List)
# -

# Print elements of a range
# using Slice operation
Sliced_List = List[3:8]
print("\nSlicing elements in a range 3-8: ")
print(Sliced_List)

# Print elements from a
# pre-defined point to end
Sliced_List = List[5:]
print("\nElements sliced from 5th "
      "element till the end: ")
print(Sliced_List)

# Printing elements from
# beginning till end
Sliced_List = List[:]
print("\nPrinting all elements using slice operation: ")
print(Sliced_List)

# Negative index List slicing

# Creating a List
List = ['G','E','E','K','S','F',
        'O','R','G','E','E','K','S']
print("Initial List: ")
print(List)

# Print elements from beginning
# to a pre-defined point using Slice
Sliced_List = List[:-6]
print("\nElements sliced till 6th element from last: ")
print(Sliced_List)

# Print elements of a range
# using negative index List slicing
Sliced_List = List[-6:-1]
print("\nElements sliced from index -6 to -1")
print(Sliced_List)

# Printing elements in reverse
# using Slice operation
Sliced_List = List[::-1]
print("\nPrinting List in reverse: ")
print(Sliced_List)
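# The slice forms demonstrated above can be collected into one compact set of checks (a short sketch on a small list of our own):

```python
letters = ['G', 'E', 'E', 'K', 'S']

print(letters[:3])    # ['G', 'E', 'E']               — from the beginning up to index 3
print(letters[2:])    # ['E', 'K', 'S']               — from index 2 to the end
print(letters[1:4])   # ['E', 'E', 'K']               — a middle range (end index excluded)
print(letters[:-2])   # ['G', 'E', 'E']               — everything but the last two
print(letters[::-1])  # ['S', 'K', 'E', 'E', 'G']     — a reversed copy
```

# Every slice returns a new list; the original `letters` is left unchanged.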
Day2/List/List.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import matplotlib.pyplot as plt import seaborn as sns # + all_df = pd.DataFrame() for trait in ['aggression','hate','humor','sarcasm','stance']: df = pd.read_csv('../results/hyper_search_{}.csv'.format(trait)) df = df.sort_values(['weighted_f1'],ascending=[False]) df = df.drop_duplicates(subset=['config']) df['config'] = df.config.apply(eval) df = pd.concat([df[['weighted_f1', 'macro_f1']], df.config.apply(pd.Series)], axis=1) df['data'] = trait.capitalize() all_df = pd.concat([all_df,df],axis=0) df = pd.read_csv('../results/results_{}_detection.csv'.format(trait)) df = df.sort_values(['weighted_f1'],ascending=[False]) df = df.drop_duplicates(subset=['config']) df['config'] = df.config.apply(eval) df = pd.concat([df[['weighted_f1', 'macro_f1']], df.config.apply(pd.Series)], axis=1) df['data'] = trait.capitalize() all_df = pd.concat([all_df,df],axis=0) all_df = all_df[all_df.use_features == False] all_df = all_df[all_df.loss == 'ce'] all_df = all_df.drop_duplicates(subset=['model_name','text_max_len','char_max_len','word_char_max_len','data']) # - all_df.head() markers = {} all_df[all_df.model_name == 'HAN'].sort_values(['data','macro_f1'],ascending=[False, False]) # + fig, ax = plt.subplots(1,4,figsize=(24,5),sharey=True) sns.lineplot(data=all_df[all_df.model_name == 'WLSTM'],x="text_max_len", y="macro_f1", hue="data", style='data',markers=True, ax=ax[0],legend=False) sns.lineplot(data=all_df[all_df.model_name == 'Transformer'],x="char_max_len", y="macro_f1", hue="data",style='data',ax=ax[1],markers=True,legend=False) sns.lineplot(data=all_df[all_df.model_name == 'HAN'].drop_duplicates(subset=['model_name','text_max_len','data']),x="text_max_len", y="macro_f1", hue="data",style='data',markers=True, ax=ax[2]) 
sns.lineplot(data=all_df[all_df.model_name == 'HAN'].drop_duplicates(subset=['model_name','word_char_max_len','data']),x="word_char_max_len", y="macro_f1", markers=True,hue="data",style='data',ax=ax[3],legend=False) plt.rc('xtick', labelsize=15) # fontsize of the tick labels plt.rc('ytick', labelsize=15) # fontsize of the tick labels plt.rc('legend', fontsize=15) # legend fontsize plt.rc('axes', titlesize=15) plt.rc('axes', labelsize=15) #ax[0][0].legend(loc="upper left").get_frame().set_linewidth(1) #ax[1][0].legend(loc="upper left") #ax[0][1].legend(loc="lower right") #ax[1][1].legend(loc="lower right") ax[0].set_title("(a) WLSTM") ax[1].set_title("(b) Transformer") ax[2].set_title("(c) C-HAN") ax[3].set_title("(d) C-HAN") ax[0].set_ylim(0.75,0.95) ax[1].set_ylim(0.75,0.95) ax[2].set_ylim(0.75,0.95) ax[3].set_ylim(0.75,0.95) ax[0].set_xlabel('$\ell_{word}$', fontsize=15) #'Max Text Word Length' ax[1].set_xlabel('$\ell_{subword}$', fontsize=15) ax[2].set_xlabel('$\ell_{word}$', fontsize=15) ax[3].set_xlabel('$\ell_{char}^{word}$', fontsize=15) ax[0].set_ylabel('Macro F1', fontsize=18) ax[1].set_ylabel('Macro F1', fontsize=18) ax[2].set_ylabel(r'Macro F1', fontsize=18) ax[3].set_ylabel(r'Macro F1', fontsize=18) handles, labels = ax[2].get_legend_handles_labels() #ax[3].legend(handles=handles[1:],labels=labels[1:], loc=7) ax[2].legend(handles=handles[1:],labels=labels[1:],loc='upper center', bbox_to_anchor=(-.1, -0.15), fancybox=True, shadow=True, ncol=5) #handles, labels = ax[1][1].get_legend_handles_labels() #ax[1][1].legend(handles=handles,labels=labels, loc="lower right") plt.rcParams["font.family"] = 'sans-serif' plt.savefig('../plots/hyper_search.pdf',dpi=200, bbox_inches='tight') plt.show() # - all_df[all_df.model_name == 'HAN'] all_df[all_df.model_name == 'Transformer']
notebooks/hyper_plots.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # What are `LightCurve` objects?

# [`LightCurve`](https://docs.lightkurve.org/api/lightkurve.lightcurve.LightCurve.html#lightkurve.lightcurve.LightCurve) objects are data objects which encapsulate the brightness of a star over time. They provide a series of common operations, for example folding, binning, plotting, etc. There is a range of subclasses of `LightCurve` objects specific to telescopes, including [`TESSLightCurve`](https://docs.lightkurve.org/api/lightkurve.lightcurve.TessLightCurve.html#lightkurve.lightcurve.TessLightCurve) for TESS data.
#
# You can create a `LightCurve` object from a `TargetPixelFile` object using Simple Aperture Photometry (see our tutorial for more information on Target Pixel Files [here](x)). Aperture photometry is the simple act of summing up the values of all the pixels in a pre-defined aperture, as a function of time. By carefully choosing the shape of the aperture mask, you can avoid nearby contaminants or improve the strength of the specific signal you are trying to measure relative to the background.
#
# To demonstrate, let's create a `TESSLightCurve` from a `TESSTargetPixelFile`.
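# The summing step itself is easy to picture with plain NumPy: a hypothetical 3-D flux cube with axes (time, row, column) and a boolean aperture mask reduce to a 1-D light curve. This is a toy sketch of the idea, not lightkurve's actual implementation:

```python
import numpy as np

# toy flux cube: 5 cadences of a 4x4 pixel stamp
rng = np.random.default_rng(0)
flux_cube = rng.random((5, 4, 4))

# boolean aperture mask selecting the central 2x2 pixels
aperture = np.zeros((4, 4), dtype=bool)
aperture[1:3, 1:3] = True

# Simple Aperture Photometry: sum the masked pixels at each cadence
sap_flux = flux_cube[:, aperture].sum(axis=1)
print(sap_flux.shape)  # (5,)
```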
# First we open a Target Pixel File from [MAST](https://archive.stsci.edu/kepler/), this one is already cached from our previous [tutorial](https://docs.lightkurve.org/tutorials/01-what-are-lightcurves.html)!: # + import numpy as np import pandas as pd from lightkurve import search_targetpixelfile tpf = search_targetpixelfile('TIC 307210830 c', sector=2).download() # - # Then we convert the target pixel file into a light curve using the [`to_lightcurve`](https://docs.lightkurve.org/api/lightkurve.targetpixelfile.KeplerTargetPixelFile.html#lightkurve.targetpixelfile.KeplerTargetPixelFile.to_lightcurve) command and the pipeline-defined aperture mask (look [here](https://keplerscience.arc.nasa.gov/pipeline.html#pixel-response-function) for more information about aperture photometry and the optimal aperture mask definition). lc = tpf.to_lightcurve(aperture_mask=tpf.pipeline_mask) # We've built a new `TESSLightCurve` object called `lc`. Note in this case we've passed an **aperture_mask** to the `to_lightcurve` method. The default is to use the `SPOC` pipeline aperture. (You can pass your own aperture, which is a boolean `numpy` array.) By summing all the pixels in the aperture we have created a Simple Aperture Photometry (SAP) lightcurve. # # `TESSLightCurve` has many useful functions that you can use. As with Target Pixel Files you can access the meta data very simply: lc.mission lc.sector # And you still have access to time and flux attributes. In a light curve, there is only one flux point for every time stamp. 
The data can be displayed as a table using `Pandas`:

# +
info = pd.DataFrame(data={'Time (BKJD)': lc.time,
                          'Flux (e-/s)': lc.flux})
info
# -

# You can also check the Combined Differential Photometric Precision (CDPP) RMS per transit duration noise metric (see [Gilliland et al., 2011](https://iopscience.iop.org/article/10.1088/0067-0049/197/1/6/pdf) for more details) of the lightcurve using the built-in method [`estimate_cdpp`](https://docs.lightkurve.org/api/lightkurve.lightcurve.FoldedLightCurve.html#lightkurve.lightcurve.FoldedLightCurve.estimate_cdpp):

lc.estimate_cdpp()

# The above is the Savitzky-Golay CDPP noise metric in units of parts-per-million (ppm).

# Now we can use the built-in `plot` function on the `TESSLightCurve` object to plot the time series. You can pass `plot` any keywords you would normally pass to [`matplotlib.pyplot.plot`](https://matplotlib.org/3.1.3/api/_as_gen/matplotlib.pyplot.plot.html).

# %matplotlib inline
lc.plot();

# There are a set of useful functions in [`LightCurve`](https://docs.lightkurve.org/api/lightkurve.lightcurve.LightCurve.html#lightkurve.lightcurve.LightCurve) objects which you can use to work with the data.
These include: # * [`flatten()`](https://docs.lightkurve.org/api/lightkurve.lightcurve.LightCurve.html#lightkurve.lightcurve.LightCurve.flatten): Remove long term trends using a [Savitzky–Golay filter](https://en.wikipedia.org/wiki/Savitzky%E2%80%93Golay_filter) # * [`remove_outliers()`](https://docs.lightkurve.org/api/lightkurve.lightcurve.LightCurve.html#lightkurve.lightcurve.LightCurve.remove_outliers): Remove outliers using simple sigma clipping # * [`remove_nans()`](https://docs.lightkurve.org/api/lightkurve.lightcurve.LightCurve.html#lightkurve.lightcurve.LightCurve.remove_nans): Remove infinite or NaN values (these can occur during thruster firings) # * [`fold()`](https://docs.lightkurve.org/api/lightkurve.lightcurve.LightCurve.html#lightkurve.lightcurve.LightCurve.fold): Fold the data at a particular period # * [`bin()`](https://docs.lightkurve.org/api/lightkurve.lightcurve.LightCurve.html#lightkurve.lightcurve.LightCurve.bin): Reduce the time resolution of the array, taking the average value in each bin. # # We can use these simply on a light curve object flat_lc = lc.flatten(window_length=401) flat_lc.plot(); folded_lc = flat_lc.fold(period=3.690621) folded_lc.plot(); binned_lc = folded_lc.bin(binsize=10) binned_lc.plot(); # Or we can do these all in a single (long) line! lc.remove_nans().flatten(window_length=401).fold(period=3.690621).bin(binsize=20).plot(); # Congratulations! You have now learnt about LightCurve objects and how you can manipulate them. Move on to our [next tutorial](https://docs.lightkurve.org/tutorials/01-lightcurve-files.html) where you will learn about LightCurveFile objects.
docs/source/tutorials/01-what-are-lightcurvesv2.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Itto-ryu/OOP-1-2/blob/main/Activity_2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# + [markdown] id="KPZgnXhXW6Oi"
# ## Write a Python program that converts a temperature from Celsius to Fahrenheit. Create a class named Temperature

# + id="RV2YlJbWWymq"
# Create Celsius as the attribute name, Temp() as the method, and temp1 as the object name.
# F = 1.8*C + 32

# + colab={"base_uri": "https://localhost:8080/"} id="3H4w1EAsYZZY" outputId="38018a7e-588a-4655-81ac-2691a295960f"
class Temperature:
    def __init__(self, Celsius):
        self.Celsius = Celsius

    def Temp(self):
        return 1.8 * self.Celsius + 32

input_temp = float(input("Input the temperature in Celsius= "))
temp1 = Temperature(input_temp)
print(round(temp1.Temp(), 3))

# + [markdown] id="Yi0KmP-HflEq"
# ## Define an Area() method of the class that calculates the circle’s area.

# + colab={"base_uri": "https://localhost:8080/"} id="0wV2PlQ8fl5w" outputId="c9ac42c9-9485-47b2-a6d4-c5531e676c26"
class Circle:
    def __init__(self, radius):
        self.radius = radius

    def Area(self):
        PI = 3.14
        return PI * self.radius * self.radius

    def display(self):
        print("the area of the circle is:", self.Area())

circle = Circle(6)
print(circle.radius)
circle.display()

# + [markdown] id="cs_aq3TzfyAY"
# ## Define a Perimeter() method of the class which allows you to calculate the perimeter of the circle.
# + colab={"base_uri": "https://localhost:8080/"} id="l9t_QicZfyVK" outputId="b6bc4948-a2d0-40d5-c810-e4f5463abaaa"
class Circle:
    def __init__(self, radius):
        self.radius = radius

    def Perimeter(self):
        PI = 3.14
        return 2 * PI * self.radius

    def display(self):
        print("the perimeter of the circle is: ", self.Perimeter())

circle = Circle(9)
print(circle.radius)
circle.display()
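# The two tasks above can also be combined into a single class. A sketch using `math.pi` for a more precise constant (the activity itself assumes the rounded value 3.14):

```python
import math

class Circle:
    def __init__(self, radius):
        self.radius = radius

    def Area(self):
        # area = pi * r^2
        return math.pi * self.radius ** 2

    def Perimeter(self):
        # perimeter (circumference) = 2 * pi * r
        return 2 * math.pi * self.radius

c = Circle(6)
print(round(c.Area(), 2))       # 113.1
print(round(c.Perimeter(), 2))  # 37.7
```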
Activity_2.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Time series

# <p style='text-align: justify;'>We will use time series to predict the closing price of a stock on the Mexican stock exchange. We can think of time series as sequences of data that measure the same thing over a period of time; we can picture them as a dataset in which each record has a date and/or time at which it was created. This kind of dataset grows constantly: stock exchanges are continuously recording prices, and smart homes create time series with the home's temperature, energy use, and other measurements.</p>
#
# <p style='text-align: justify;'>In general, each time a new data point is generated it is appended to the existing series. Data already stored cannot be modified, since they record events in the past. Moreover, the data arrive ordered by the timestamp associated with each record. As you can imagine, time is the main axis for this kind of dataset.</p>
#
# <p style='text-align: justify;'>Being able to measure present systems and store that information in time series lets us analyze the past, monitor the present, and predict the future.</p>
#
# Any time series can be described by 4 basic components:
# - Level. The baseline value for the series if it were a straight line.
# - Trend. The increase or decrease of the series over time.
# - Seasonality. The patterns or cycles over time.
# - Noise. The variability in the observations.
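# A quick way to build intuition for these four components is to generate a synthetic series that combines them (a hypothetical illustration, not the stock data used below):

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.arange(365)

level = 10.0                                     # baseline value
trend = 0.02 * t                                 # slow upward drift
seasonality = 2.0 * np.sin(2 * np.pi * t / 30)   # roughly monthly cycle
noise = rng.normal(0, 0.5, size=t.size)          # random variability

series = level + trend + seasonality + noise
print(series.shape)  # (365,)
```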
#
# How to make predictions:
# - Classical methods (ARIMA)
# - Machine learning methods

# # Time series in the stock market

# https://finance.yahoo.com/quote/CEMEXCPO.MX

import pandas as pd
import matplotlib.pyplot as plt
import numpy as np

df = pd.read_csv("./data/finance/CEMEXCPO.MX.csv")

# # %matplotlib notebook
# %matplotlib inline

df.head()

df.dtypes  # the Date field is read as object, not as a date type, so we convert it

df.Date = pd.to_datetime(df.Date)  # change it to a date type
df.index = df.Date
df.dtypes

df = df.set_index('Date').asfreq('d')  # 'd' is daily frequency, which fits the stock market case
df.head(10)  # but we see weekend gaps, so we have to fill them

df = df.fillna(method="ffill")  # fill each empty value with the previous value
df.head(10)

df = df["2016-06":"2018-06"]  # to select records within a given date range
df.head(10)

# plot the information we have before making the prediction
import pandas.plotting._converter as pandacnv
pandacnv.register()
plt.figure(figsize=(9,5))
plt.plot(df.Close);

# # Making predictions about the future

# Suppose we want to make a prediction for tomorrow (the current value to predict). If we only have the date as a reference, we can hardly give an accurate forecast, because we do not know how the series has behaved so far. This is where we use sliding time windows, or simply time windows: they let us create artificial variables that describe the behavior of the previous days.
#
# This in turn lets us solve the prediction problem with supervised learning, where the X variables (descriptions of the window) allow us to predict a Y variable (the current closing value).
# <center><img src="img/ts.png" width = "60%"></center>

import seaborn as sns
import numpy as np

test_size = 60  # how many future data points we want
window_size = 3  # how many elements to consider before each prediction

# +
df_shift = df.Close.shift(1)  # shift by one day
df_mean_roll = df_shift.rolling(window_size).mean()
df_std_roll = df_shift.rolling(window_size).std()
df_mean_roll.name = "mean_roll"
df_std_roll.name = "std_roll"

# realign the indexes so they line up in the dataframe
df_mean_roll.index = df.index
df_std_roll.index = df.index
# -

df_shift.head(), df_mean_roll.head(), df_std_roll.head()
# since we rely on the 3 previous days for each prediction, the first 3 predictions cannot be made

# join the derived series with the original
df_w = pd.concat([df.Close, df_mean_roll, df_std_roll], axis=1)
df_w.head(10)

df_w = df_w[window_size:]  # keep everything except the first 3 rows in df_w
df_w.head()

test = df_w[-test_size:]  # test on the held-out data
train = df_w[:-test_size]  # train on the data we know

X_test = test.drop("Close", axis=1)
y_test = test["Close"]  # we want to predict the market closing value

X_train = train.drop("Close", axis=1)
y_train = train["Close"]

from sklearn.svm import SVR
clf = SVR(gamma="scale")
clf.fit(X_train, y_train)

y_train_hat = pd.Series(clf.predict(X_train), index=y_train.index)
y_test_hat = pd.Series(clf.predict(X_test), index=y_test.index)

plt.figure(figsize=(9,5))
plt.plot(y_train, label='Training')
plt.plot(y_test_hat, label='Test_Prediction')
plt.plot(y_test, label='Test_Real')
plt.legend(loc='best')
plt.title('Rolling Mean & Standard Deviation');

from sklearn.metrics import mean_squared_error
mse = mean_squared_error(y_test, y_test_hat)
print('MSE: {}'.format(mse))

# +
# %matplotlib notebook
# this way we get zoom and more details
plt.figure(figsize=(9,5))
plt.plot(y_train, label='Training')
plt.plot(y_test_hat, label='Test_Prediction')
plt.plot(y_test, label='Test_Real')
plt.legend(loc='best') plt.title('Rolling Mean & Standard Deviation'); # -
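# The model's MSE is easier to judge against a naive persistence baseline that simply predicts yesterday's closing value: if a model cannot beat it, the windowed features are not adding much. A self-contained sketch on a toy series (the variable names here are illustrative, not from the notebook):

```python
import numpy as np
import pandas as pd

# toy closing-price series standing in for df.Close
close = pd.Series([10.0, 10.5, 10.2, 10.8, 11.0, 10.9, 11.3])

# persistence baseline: predict today's close as yesterday's close
y_true = close[1:]
y_naive = close.shift(1)[1:]

naive_mse = float(np.mean((y_true.values - y_naive.values) ** 2))
print(round(naive_mse, 4))  # ~0.1517
```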
Semana 4/2-Series de Tiempo.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] toc=true # <h1>Table of Contents<span class="tocSkip"></span></h1> # <div class="toc"><ul class="toc-item"><li><span><a href="#Compare-weighted-and-unweighted-mean-temperature" data-toc-modified-id="Compare-weighted-and-unweighted-mean-temperature-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Compare weighted and unweighted mean temperature</a></span><ul class="toc-item"><li><ul class="toc-item"><li><span><a href="#Data" data-toc-modified-id="Data-1.0.1"><span class="toc-item-num">1.0.1&nbsp;&nbsp;</span>Data</a></span></li><li><span><a href="#Creating-weights" data-toc-modified-id="Creating-weights-1.0.2"><span class="toc-item-num">1.0.2&nbsp;&nbsp;</span>Creating weights</a></span></li><li><span><a href="#Weighted-mean" data-toc-modified-id="Weighted-mean-1.0.3"><span class="toc-item-num">1.0.3&nbsp;&nbsp;</span>Weighted mean</a></span></li><li><span><a href="#Plot:-comparison-with-unweighted-mean" data-toc-modified-id="Plot:-comparison-with-unweighted-mean-1.0.4"><span class="toc-item-num">1.0.4&nbsp;&nbsp;</span>Plot: comparison with unweighted mean</a></span></li></ul></li></ul></li></ul></div> # - # # Compare weighted and unweighted mean temperature # # # Author: [<NAME>user](https://github.com/mathause/) # # # We use the `air_temperature` example dataset to calculate the area-weighted temperature over its domain. This dataset has a regular latitude/ longitude grid, thus the gridcell area decreases towards the pole. For this grid we can use the cosine of the latitude as proxy for the grid cell area. 
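# The cosine proxy is easy to check with plain NumPy before bringing in the real data: a grid cell at 60° latitude spans the same range of degrees as one at the equator but covers only cos(60°) = 0.5 times the area, so its weight should be halved. A toy sketch, independent of xarray:

```python
import numpy as np

lats = np.array([0.0, 30.0, 60.0])
weights = np.cos(np.deg2rad(lats))  # approximately [1.0, 0.866, 0.5]

# weighted vs unweighted mean of a toy temperature profile:
# downweighting the cold high-latitude cell raises the mean
temps = np.array([30.0, 20.0, 0.0])  # warm equator, cold high latitudes
unweighted = temps.mean()
weighted = (temps * weights).sum() / weights.sum()
print(round(unweighted, 2), round(weighted, 2))
```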
# # + # %matplotlib inline import cartopy.crs as ccrs import matplotlib.pyplot as plt import numpy as np import xarray as xr # - # ### Data # # Load the data, convert to celsius, and resample to daily values # + ds = xr.tutorial.load_dataset("air_temperature") # to celsius air = ds.air - 273.15 # resample from 6-hourly to daily values air = air.resample(time="D").mean() air # - # Plot the first timestep: # + projection = ccrs.LambertConformal(central_longitude=-95, central_latitude=45) f, ax = plt.subplots(subplot_kw=dict(projection=projection)) air.isel(time=0).plot(transform=ccrs.PlateCarree(), cbar_kwargs=dict(shrink=0.7)) ax.coastlines() # - # ### Creating weights # # For a rectangular grid the cosine of the latitude is proportional to the grid cell area. weights = np.cos(np.deg2rad(air.lat)) weights.name = "weights" weights # ### Weighted mean air_weighted = air.weighted(weights) air_weighted weighted_mean = air_weighted.mean(("lon", "lat")) weighted_mean # ### Plot: comparison with unweighted mean # # Note how the weighted mean temperature is higher than the unweighted. # + weighted_mean.plot(label="weighted") air.mean(("lon", "lat")).plot(label="unweighted") plt.legend()
doc/examples/area_weighted_temperature.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3 (ipykernel)
#     language: python
#     name: python3
# ---

# + [markdown] id="uf-ZAG8ddOxf"
# # Forex Arbitrage
#
# This notebook presents an example of linear optimization on a network model for financial transactions. The goal is to identify whether or not an arbitrage opportunity exists given a matrix of cross-currency exchange rates. Other treatments of this problem and application are available, including the following links.
#
# * [Crypto Arbitrage Framework](https://github.com/hzjken/crypto-arbitrage-framework)
# * [Crypto Trading and Arbitrage Identification Strategies](https://nbviewer.org/github/rcroessmann/sharing_public/blob/master/arbitrage_identification.ipynb)
#

# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 9828, "status": "ok", "timestamp": 1647615556911, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gg_n8V7bVINy02QRuRgOoMo11Ri7NKU3OUKdC1bkQ=s64", "userId": "09038942003589296665"}, "user_tz": 240} id="RkfYiXhygbos" outputId="e418700e-0848-495b-ed17-38e23604ce1e"
# install Pyomo and solvers for Google Colab
import sys
if "google.colab" in sys.modules:
    # !wget -N -q https://raw.githubusercontent.com/jckantor/MO-book/main/tools/install_on_colab.py
    # %run install_on_colab.py

# + [markdown] id="uf-ZAG8ddOxf"
# ## Problem
#
# Exchanging one currency for another is among the most common of all banking transactions. Currencies are normally priced relative to each other.
#
# At the time of this writing, for example, the Japanese yen (symbol JPY) is priced at 0.00761 relative to the euro (symbol EUR). At this price 100 euros would purchase 100/0.00761 = 13,140.6 yen. Conversely, EUR is priced at 131.585 yen.
The 'round-trip' of 100 euros from EUR to JPY and back to EUR results in
#
# $$100 \text{ EUR} \times \frac{1\text{ JPY}}{0.00761\text{ EUR}} {\quad\longrightarrow\quad} 13,140.6 \text{ JPY} \times\frac{1\text{ EUR}}{131.585\text{ JPY}} {\quad\longrightarrow\quad} 99.86\text{ EUR}$$
#
# The small loss in this round-trip transaction is the fee collected by the brokers and banking system to provide these services.
#
# Needless to say, if a simple round-trip transaction like this reliably produced a net gain, there would be many eager traders ready to take advantage of the situation. Trading situations offering a net gain with no risk are called arbitrage, and are the subject of intense interest by traders in the foreign exchange (forex) and crypto-currency markets around the globe.
#
# As one might expect, arbitrage opportunities involving a simple round-trip between a pair of currencies are almost non-existent in real-world markets. When they do appear, they are easily detected, and rapid, automated trading quickly exploits the situation. More complex arbitrage opportunities, however, can arise when working with three or more currencies and a table of cross-currency exchange rates.
#
#

# + [markdown] id="J-ADrgCoQP3L"
# ## Demonstration of Triangular Arbitrage
#
# Consider the following cross-currency matrix.
#
# | i <- J | USD | EUR | JPY |
# | :--- | :---: | :---: | :---: |
# | USD | 1.0 | 2.0 | 0.01 |
# | EUR | 0.5 | 1.0 | 0.0075 |
# | JPY | 100.0 | 133 1/3 | 1.0 |
#
#
# Entry $a_{m, n}$ is the number of units of currency $m$ received in exchange for one unit of currency $n$. We use the notation
#
# $$a_{m, n} = a_{m \leftarrow n}$$
#
# as a reminder of what the entries denote. For this data there are no two-way arbitrage opportunities.
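# The round-trip above is a one-line computation; a quick check of the quoted rates:

```python
# 100 EUR -> JPY -> EUR at the quoted rates
eur = 100.0
jpy = eur / 0.00761       # 1 JPY is priced at 0.00761 EUR
eur_back = jpy / 131.585  # 1 EUR is priced at 131.585 JPY
print(round(jpy, 1), round(eur_back, 2))  # 13140.6 99.86
```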
We can check this by explicitly computing all two-way currency exchanges # # $$I \rightarrow J \rightarrow I$$ # # by computing # # $$ a_{i \leftarrow j} \times a_{j \leftarrow i}$$ # # This data set shows no net cost and no arbitrage for conversion from one currency to another and back again. # + colab={"base_uri": "https://localhost:8080/", "height": 196} executionInfo={"elapsed": 208, "status": "ok", "timestamp": 1647604331600, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gg_n8V7bVINy02QRuRgOoMo11Ri7NKU3OUKdC1bkQ=s64", "userId": "09038942003589296665"}, "user_tz": 240} id="TsL1c79nx3aN" outputId="59b9aec9-bae7-426a-9556-27fc0787ebdd" import pandas as pd df = pd.DataFrame([[1.0, 0.5, 100], [2.0, 1.0, 1/0.0075], [0.01, 0.0075, 1.0]], columns = ['USD', 'EUR', 'JPY'], index = ['USD', 'EUR', 'JPY']).T display(df) # USD -> EUR -> USD print(df.loc['USD', 'EUR'] * df.loc['EUR', 'USD']) # USD -> JPY -> USD print(df.loc['USD', 'JPY'] * df.loc['JPY', 'USD']) # EUR -> JPY -> EUR print(df.loc['EUR', 'JPY'] * df.loc['JPY', 'EUR']) # + [markdown] id="wmcvV5oiQ27w" # Now consider a currency exchange comprised of three trades that returns back to the same currency. # # $$ I \rightarrow J \rightarrow K \rightarrow I $$ # # The net exchange rate can be computed as # # $$ a_{i \leftarrow k} \times a_{k \leftarrow j} \times a_{j \leftarrow i} $$ # # By direct calculation we see there is a three-way **triangular** arbitrage opportunity for this data set that returns a 50% increase in wealth. 
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 2, "status": "ok", "timestamp": 1647604332177, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gg_n8V7bVINy02QRuRgOoMo11Ri7NKU3OUKdC1bkQ=s64", "userId": "09038942003589296665"}, "user_tz": 240} id="MvHEDf2zUJqS" outputId="d8570d6e-d2b8-41af-dc6e-47c3afce010d"
I = 'USD'
J = 'JPY'
K = 'EUR'

print(df.loc[I, K] * df.loc[K, J] * df.loc[J, I])

# + [markdown] id="SGv4WjmTx3aO"
# Our challenge is to create a model that can identify complex arbitrage opportunities that may exist in cross-currency forex markets.

# + [markdown] id="Y_tuUX8XqnwA"
# ## Modeling
#
# The cross-currency table $A$ provides exchange rates among currencies. Entry $a_{i,j}$ in row $i$, column $j$ tells us how many units of currency $i$ are received in exchange for one unit of currency $j$. We'll use the notation $a_{i, j} = a_{i\leftarrow j}$ to remind ourselves of this relationship.
#
# We start with $w_j(0)$ units of currency $j \in N$, where $N$ is the set of all currencies in the data set. We consider a sequence of trades $t = 1, 2, \ldots, T$ where $w_j(t)$ is the amount of currency $j$ on hand after completing trade $t$.
#
# Each trade is executed in two phases. In the first phase an amount $x_{i\leftarrow j}(t)$ of currency $j$ is committed for exchange to currency $i$. This allows a trade to include multiple currency transactions. After the commitment, the unencumbered balance for currency $j$ must satisfy the trading constraints. Each trade consists of simultaneous transactions in one or more currencies.
#
# $$w_j(t-1) - \sum_{i\ne j} x_{i\leftarrow j}(t) \geq 0$$
#
# Here a lower bound has been placed to prohibit short-selling of currency $j$. This constraint could be modified if leveraging is allowed on the exchange.
#
# The second phase of the trade is complete when the exchange credits all of the currency accounts according to
#
# $$ w_j(t) = w_j(t-1) - \underbrace{\sum_{i\ne j} x_{i\leftarrow j}(t)}_{\text{outgoing}} + \underbrace{\sum_{i\ne j} a_{j\leftarrow i}x_{j\leftarrow i}(t)}_{\text{incoming}} $$
#
# We assume all trading fees and costs are included in the bid/ask spreads represented by $a_{j\leftarrow i}$.
#
# The goal of this calculation is to find a set of transactions $x_{i\leftarrow j}(t) \geq 0$ to maximize the value of the portfolio after a specified number of trades $T$.
#

# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 133, "status": "ok", "timestamp": 1647604334065, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gg_n8V7bVINy02QRuRgOoMo11Ri7NKU3OUKdC1bkQ=s64", "userId": "09038942003589296665"}, "user_tz": 240} id="NzTVF6JOW8-S" outputId="cec33270-6287-4fc9-82a3-6b71471c1ab6"
import pyomo.environ as pyo
import numpy as np
from graphviz import Digraph

def arbitrage(T, df, R='EUR'):

    m = pyo.ConcreteModel()

    # length of trading chain
    m.T0 = pyo.RangeSet(0, T)

    # number of transactions
    m.T1 = pyo.RangeSet(1, T)

    # currency *nodes*
    m.NODES = pyo.Set(initialize=df.index)

    # paths between currency nodes i -> j
    m.ARCS = pyo.Set(initialize = m.NODES * m.NODES, filter = lambda arb, i, j: i != j)

    # w[i, t] amount of currency i on hand after transaction t
    m.w = pyo.Var(m.NODES, m.T0, domain=pyo.NonNegativeReals)

    # x[i, j, t] amount of currency j converted to currency i in transaction t
    m.x = pyo.Var(m.ARCS, m.T1, domain=pyo.NonNegativeReals)

    # start with assignment of 100 units of a selected reserve currency
    @m.Constraint(m.NODES)
    def initial_condition(m, i):
        if i == R:
            return m.w[i, 0] == 100.0
        return m.w[i, 0] == 0

    # no shorting constraint
    @m.Constraint(m.NODES, m.T1)
    def max_trade(m, j, t):
        return m.w[j, t-1] >= sum(m.x[i, j, t] for i in m.NODES if i != j)

    # one round of transactions
@m.Constraint(m.NODES, m.T1) def balances(m, j, t): return m.w[j, t] == m.w[j, t-1] - sum(m.x[i, j, t] for i in m.NODES if i != j) \ + sum(df.loc[j, i]*m.x[j, i, t] for i in m.NODES if i != j) @m.Objective(sense=pyo.maximize) def wealth(m): return m.w[R, T] solver = pyo.SolverFactory('glpk') solver.solve(m) for t in m.T0: print(f"\nt = {t}\n") if t >= 1: for i, j in m.ARCS: if m.x[i,j,t]() > 0: print(f"{j} -> {i} Convert {m.x[i, j, t]()} {j} to {df.loc[i,j]*m.x[i,j,t]()} {i}") print() for i in m.NODES: print(f"w[{i},{t}] = {m.w[i, t]():9.2f} ") return m m = arbitrage(3, df, 'EUR') print(m.w['EUR', 0]()) print(m.w['EUR', 3]()) # + [markdown] id="vArKbEvA1E6u" # ## Display graph # + import networkx as nx def display_graph(m): path = [] for t in m.T0: for i in m.NODES: if m.w[i, t]() >= 1e-6: path.append(f"{m.w[i, t]()} {i}") path = " -> ".join(path) print("\n", path) G = nx.DiGraph() for i in m.NODES: G.add_node(i) nodelist = set() edge_labels = dict() for t in m.T1: for i, j in m.ARCS: if m.x[i, j, t]() > 0.1: nodelist.add(i) nodelist.add(j) y = m.w[j, t-1]() x = m.w[j, t]() G.add_edge(j, i) edge_labels[(j, i)] = df.loc[i, j] nodelist = list(nodelist) pos = nx.spring_layout(G) nx.draw(G, pos, with_labels=True, node_size=2000, nodelist=nodelist, node_color="lightblue", node_shape="s", arrowsize=20, label=path) nx.draw_networkx_edge_labels(G, pos, edge_labels=edge_labels) display_graph(m) # + [markdown] id="bT4Wc81yx3aQ" # ## FOREX data # # https://www.bloomberg.com/markets/currencies/cross-rates # # https://www.tradingview.com/markets/currencies/cross-rates-overview-prices/ # + colab={"base_uri": "https://localhost:8080/", "height": 300} executionInfo={"elapsed": 145, "status": "ok", "timestamp": 1647604159358, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gg_n8V7bVINy02QRuRgOoMo11Ri7NKU3OUKdC1bkQ=s64", "userId": "09038942003589296665"}, "user_tz": 240} id="fl_3XoLyV3Yb" outputId="0877b5b2-3998-4d71-e340-0a624984a1ec" # data 
extracted 2022-03-17 bloomberg = """ USD EUR JPY GBP CHF CAD AUD HKD USD - 1.1096 0.0084 1.3148 1.0677 0.7915 0.7376 0.1279 EUR 0.9012 - 0.0076 1.1849 0.9622 0.7133 0.6647 0.1153 JPY 118.6100 131.6097 - 155.9484 126.6389 93.8816 87.4867 15.1724 GBP 0.7606 0.8439 0.0064 - 0.8121 0.6020 0.5610 0.0973 CHF 0.9366 1.0393 0.0079 1.2314 - 0.7413 0.6908 0.1198 CAD 1.2634 1.4019 0.0107 1.6611 1.3489 - 0.9319 0.1616 AUD 1.3557 1.5043 0.0114 1.7825 1.4475 1.0731 - 0.1734 HKD 7.8175 8.6743 0.0659 10.2784 8.3467 6.1877 5.7662 - """ import pandas as pd import io df = pd.read_csv(io.StringIO(bloomberg.replace('-', '1.0')), sep='\t', index_col=0) display(df) # + colab={"base_uri": "https://localhost:8080/", "height": 1000} executionInfo={"elapsed": 145, "status": "ok", "timestamp": 1647604160624, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gg_n8V7bVINy02QRuRgOoMo11Ri7NKU3OUKdC1bkQ=s64", "userId": "09038942003589296665"}, "user_tz": 240} id="KSJR6u-bW95v" outputId="e048ce29-74bd-4fba-dc7e-b3058e67e275" m = arbitrage(3, df, 'USD') # - display_graph(m)
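# The LP above searches over general multi-step trades; for pure three-leg cycles, a brute-force scan is a handy cross-check. A sketch on the toy matrix from the demonstration section, rebuilt here so the cell stands alone:

```python
from itertools import permutations

import pandas as pd

# the toy rate matrix from the demonstration section,
# rates.loc[i, j] = units of currency i received per unit of currency j
rates = pd.DataFrame([[1.0, 0.5, 100], [2.0, 1.0, 1/0.0075],
                      [0.01, 0.0075, 1.0]],
                     columns=['USD', 'EUR', 'JPY'],
                     index=['USD', 'EUR', 'JPY']).T

# net multiplier of every three-leg cycle i -> j -> k -> i
best, cycle = max(
    (rates.loc[i, k] * rates.loc[k, j] * rates.loc[j, i], (i, j, k))
    for i, j, k in permutations(rates.index, 3)
)
print(round(best, 6), cycle)
```

Any cycle with a multiplier above 1.0 is an arbitrage; on this toy matrix the best triangle returns 1.5, matching the 50% gain found earlier.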
notebooks/04/forex-arbitrage.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

import numpy as np
import pandas as pd
import flat_table
from tabulate import tabulate

# ### Example Dataset

data = [
    (
        1001,
        {
            'first_name': 'john',
            'last_name': 'smith',
            'phones': {'mobile': '201-..', 'home': '978-..'}
        },
        [{'zip': '07014', 'city': 'clifton'}]
    ),
    (
        1002,
        np.nan,  # `pd.np.nan` was removed in pandas 1.0; use numpy directly
        [{'zip': '07014', 'address1': '1 Journal Square'}]
    ),
    (
        1003,
        {
            'first_name': 'marry',
            'last_name': 'kate',
            'gender': 'female'
        },
        [{'zip': '10001', 'city': 'new york'},
         {'zip': '10008', 'city': 'brooklyn'}]
    ),
]
df = pd.DataFrame(data, columns=['id', 'user_info', 'address'])
df

# ### Using flat_table

flat_table.normalize(df)

# ### Mapper function

mapper = flat_table.mapper(df)
mapper

mapper.drop('obj', axis=1, inplace=True)
mapper['obj'] = '...'
print(tabulate(mapper, tablefmt="pipe", headers="keys"))

final = flat_table.normalize(df)
final

print(tabulate(final, tablefmt="pipe", headers="keys"))

# ### New in Version 1.1.0

final = flat_table.normalize(df, expand_dicts=False, expand_lists=True)
final

print(tabulate(final, tablefmt="pipe", headers="keys"))

# ### Comparison with json_normalize()

pd.json_normalize(df.user_info, max_level=0)
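# For comparison, here is a minimal pandas-only sketch (not part of the original README) of what `flat_table.normalize` does to the list column, using a hypothetical two-row frame. It assumes pandas >= 1.1 for `DataFrame.explode(ignore_index=...)` and `pd.json_normalize`.

```python
import pandas as pd

# Hypothetical two-row frame with a list-of-dicts column, shaped like the
# 'address' column above.
mini_df = pd.DataFrame(
    {
        "id": [1001, 1003],
        "address": [
            [{"zip": "07014", "city": "clifton"}],
            [{"zip": "10001", "city": "new york"},
             {"zip": "10008", "city": "brooklyn"}],
        ],
    }
)

# One row per list element, then one column per dict key.
exploded = mini_df.explode("address", ignore_index=True)
addresses = pd.json_normalize(exploded["address"].tolist())
flat = pd.concat([exploded[["id"]], addresses], axis=1)
print(flat)
```

# `flat_table` additionally handles nested dicts, missing values, and the index bookkeeping in one call, which is what the examples above demonstrate.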
notebooks/Readme-Example.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# ## Typo-Corrector
#
# After I bought a new MacBook, I lost my command of the new keyboard and made a lot of typos while typing. What is interesting is that the mistyped characters are almost always neighbors of the intended key. So I thought a simple autoencoder should learn this behavior easily.
#
# Then I decided to implement this Jupyter notebook to see the result. Also, it could be a good exercise :).
# So let's do it.

import numpy as np
from keras.models import Model
from keras.layers import Input, LSTM, Dense

# First, we need all characters that can be typed by mistake in place of a given character. Here I define a dictionary in which each key is the intended character and the value is the string of its possible mistaken (neighboring) characters.

char_dict = {'q': "wsa", 'w': "qase", 'e': "wsdr", 'r': "edft", 't': "rfgy",
             'y': "tghu", 'u': "yhji", 'i': "ujko", 'o': "iklp", 'p': "ol",
             'a': "qsz", 's': "adeqwxz", 'd': "esxcfr", 'f': "rdcvgt",
             'g': "tfvbhy", 'h': "ygbnju", 'j': "uhnmki", 'k': "jmloi",
             'l': "kmp", 'z': "asx", 'x': "zsdc", 'c': "xdfv", 'v': "cfgb",
             'b': "vghn", 'n': "bhjm", 'm': "njkl"}

# Then we need some words to try the idea out; a good source is the vocabulary of a pre-trained word embedding. I used GloVe. Note that we only need the words, not the weights, so the 50d GloVe vectors suffice.

vocab = []
with open('glove.6B.50d.txt', mode='r') as file:
    for line in file:
        values = line.strip().split(' ')
        word = values[0].lower()
        if len(word) < 3:
            continue
        vocab.append(word)
        if len(vocab) == 10000:
            break

# The next step is to produce the raw data. To do that, I define a function that takes a word as input, loops over its characters, and replaces each character with its possible mistaken characters.
def mismaker(word):
    mis_words = []
    for i, w in enumerate(word):
        mischars = char_dict.get(w)
        if mischars is not None:
            for m in mischars:
                mis_word = word[:i] + m + word[i + 1:]
                mis_words.append(mis_word)
    return mis_words

# +
data = []
for word in vocab:
    row = [word]
    mis_words = mismaker(word)
    row.extend(mis_words)
    data.append(row)

# save data on the disk
with open('data.txt', mode='w') as file:
    for row in data:
        line = ' '.join(row)
        file.write(line + '\n')
del(data)
# -

# Create two empty lists for the inputs and the outputs. The inputs are typo words, and the outputs are correct words. In sequence-to-sequence models we need to mark the start and end of sequences; here, a sequence is a series of characters. The encoder needs to know where a sequence ends, and the decoder needs both start-of-sequence and end-of-sequence markers. I demonstrate this in the picture below.

# <img src="images/typo.png">

# +
inputs_data = []
outputs_data = []
with open('data.txt', mode='r') as file:
    for line in file:
        words = line.strip().split(' ')
        correct_word = words[0]
        for i in range(1, len(words)):
            inputs_data.append(words[i])
            outputs_data.append('\t' + correct_word + '\n')

assert len(inputs_data) == len(outputs_data)
print("There are ", len(inputs_data), "typo words")
# -

# shuffle data
from random import shuffle
zipped = list(zip(inputs_data, outputs_data))
shuffle(zipped)
inputs_data, outputs_data = zip(*zipped)

# +
# extract unique characters
chars = set()
for word in outputs_data:
    for c in word:
        if c not in chars:
            chars.add(c)
chars = sorted(chars)

# create a character to index dictionary
char2idx = dict()
for c in chars:
    char2idx[c] = len(char2idx)

# create an index to character dictionary
idx2char = {v: k for k, v in char2idx.items()}
print(chars)
# -

# ### Keras seq2seq
# To make life easier, I followed the [Keras seq2seq tutorial](https://blog.keras.io/a-ten-minute-introduction-to-sequence-to-sequence-learning-in-keras.html). If you are not familiar with it, I highly recommend reading it.
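# Before building the full training arrays, the one-timestep shift between decoder inputs and targets (teacher forcing) can be illustrated on a single word. This is a minimal sketch with a tiny hypothetical alphabet (`toy_char2idx`), kept separate from the real `char2idx` built above so it does not overwrite it.

```python
import numpy as np

# Tiny hypothetical alphabet for illustration only.
toy_char2idx = {'\t': 0, '\n': 1, 'c': 2, 'a': 3, 't': 4}

toy_word = '\t' + 'cat' + '\n'
toy_input = np.zeros((len(toy_word), len(toy_char2idx)), dtype='float32')
toy_target = np.zeros((len(toy_word), len(toy_char2idx)), dtype='float32')
for t, ch in enumerate(toy_word):
    toy_input[t, toy_char2idx[ch]] = 1.
    if t > 0:
        # The target is the same sequence shifted one step ahead: at step t
        # the decoder sees toy_word[t] and must predict toy_word[t + 1].
        toy_target[t - 1, toy_char2idx[ch]] = 1.

print(toy_input.argmax(axis=1))
print(toy_target.argmax(axis=1))
```

# The target row at step t is the one-hot of the character at step t + 1, and the final target row stays all zeros -- exactly the pattern the next cell builds for every typo/word pair.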
num_encoder_tokens = num_decoder_tokens = len(char2idx)
max_encoder_seq_length = max([len(word) for word in inputs_data])
max_decoder_seq_length = max([len(word) for word in outputs_data])

encoder_input_data = np.zeros(
    (len(inputs_data), max_encoder_seq_length, num_encoder_tokens),
    dtype='float32')
decoder_input_data = np.zeros(
    (len(inputs_data), max_decoder_seq_length, num_decoder_tokens),
    dtype='float32')
decoder_target_data = np.zeros(
    (len(inputs_data), max_decoder_seq_length, num_decoder_tokens),
    dtype='float32')

for i, (input_word, output_word) in enumerate(zip(inputs_data, outputs_data)):
    for t, char in enumerate(input_word):
        encoder_input_data[i, t, char2idx[char]] = 1.
    for t, char in enumerate(output_word):
        # decoder_target_data is ahead of decoder_input_data by one timestep
        decoder_input_data[i, t, char2idx[char]] = 1.
        if t > 0:
            # decoder_target_data will be ahead by one timestep
            # and will not include the start character.
            decoder_target_data[i, t - 1, char2idx[char]] = 1.

# ### Hyperparameters
# I used only the first 10,000 words from the GloVe vocabulary (see the loading loop above). With a larger dataset, e.g. 100K words, we should use more LSTM units. I chose this setting to keep the training time reasonable.

# hyperparameters
latent_dim = 128
batch_size = 512
epochs = 50

# Define an input sequence and process it.
encoder_inputs = Input(shape=(None, num_encoder_tokens))
encoder = LSTM(latent_dim, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_inputs)
# We discard `encoder_outputs` and only keep the states.
encoder_states = [state_h, state_c]

# +
# Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = Input(shape=(None, num_decoder_tokens))
# We set up our decoder to return full output sequences,
# and to return internal states as well. We don't use the
# return states in the training model, but we will use them in inference.
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs,
                                     initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)

# Define the model that will turn
# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)

# Run training
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
# -

model.fit([encoder_input_data, decoder_input_data], decoder_target_data,
          batch_size=batch_size,
          epochs=epochs,
          validation_split=0.2)

# **Caution**: I trained this model on a 1070 GPU. On CPU, each epoch takes more than 5 minutes.

# Save model
model.save('typo-corrector.h5')

# ### Evaluation
# For evaluation, we need a separate inference graph, because at this step we don't have the ground-truth word: the decoder must guess the next character based on the previously guessed character (the dashed line in the picture above). This continues until we reach the maximum time step or the decoder emits a `\n` character.
#
# It is worth pointing out that during training, by contrast, the decoder uses the ground-truth character at each time step (teacher forcing).

# +
# Define sampling models
encoder_model = Model(encoder_inputs, encoder_states)

decoder_state_input_h = Input(shape=(latent_dim,))
decoder_state_input_c = Input(shape=(latent_dim,))
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
decoder_outputs, state_h, state_c = decoder_lstm(
    decoder_inputs, initial_state=decoder_states_inputs)
decoder_states = [state_h, state_c]
decoder_outputs = decoder_dense(decoder_outputs)
decoder_model = Model(
    [decoder_inputs] + decoder_states_inputs,
    [decoder_outputs] + decoder_states)
# -

def decode_sequence(input_seq):
    # Encode the input as state vectors.
    states_value = encoder_model.predict(input_seq)

    # Generate empty target sequence of length 1.
target_seq = np.zeros((1, 1, num_decoder_tokens)) # Populate the first character of target sequence with the start character. target_seq[0, 0, char2idx['\t']] = 1. # Sampling loop for a batch of sequences # (to simplify, here we assume a batch of size 1). stop_condition = False decoded_sentence = '' while not stop_condition: output_tokens, h, c = decoder_model.predict( [target_seq] + states_value) # Sample a token sampled_token_index = np.argmax(output_tokens[0, -1, :]) sampled_char = idx2char[sampled_token_index] decoded_sentence += sampled_char # Exit condition: either hit max length # or find stop character. if (sampled_char == '\n' or len(decoded_sentence) > max_decoder_seq_length): stop_condition = True # Update the target sequence (of length 1). target_seq = np.zeros((1, 1, num_decoder_tokens)) target_seq[0, 0, sampled_token_index] = 1. # Update states states_value = [h, c] return decoded_sentence def get_one_hot(word): oh_word = np.zeros((max_encoder_seq_length, num_encoder_tokens),dtype=np.float32) for i, c in enumerate(word): oh_word[i,char2idx[c]] = 1. return oh_word for seq_index in range(100): # Take one sequence (part of the training set) # for trying out decoding. input_seq = encoder_input_data[seq_index: seq_index + 1] decoded_sentence = decode_sequence(input_seq) print('-') print('Typo word:', inputs_data[seq_index]) print('True word:', outputs_data[seq_index]) print('Decoded word:', decoded_sentence) my_typo_sentence = "thid senrence cintains manu typi" my_words = my_typo_sentence.split() oh_words = [] for w in my_words: oh = get_one_hot(w) oh = np.reshape(oh, (1, max_encoder_seq_length, num_encoder_tokens)) decoded_sentence = decode_sequence(oh) print('-') print('Typo word:', w) print('Decoded word:', decoded_sentence) # It was interesting, isn't it? :)
typo-corrector.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:wildfires] # language: python # name: conda-env-wildfires-py # --- # ## Setup from specific import * ( endog_data, exog_data, master_mask, filled_datasets, masked_datasets, land_mask, ) = get_offset_data() # ### Retrieve previous results from the 'model' notebook X_train, X_test, y_train, y_test = data_split_cache.load() rf = get_model() # ### SHAP values shap_values = shap_cache.load() # ### BA in the train and validation sets # # Valid elements are situated where master_mask is False plt.hist(np.abs(shap_values.flatten()), bins=5000) plt.xscale("log") plt.yscale("log") # ### Calculate 2D masked array SHAP values map_shap_results = calculate_2d_masked_shap_values(X_train, master_mask, shap_values) # ### Plotting maps of SHAP values plot_shap_value_maps( X_train, map_shap_results, map_figure_saver, directory=Path("shap_maps") / "normal", ) # ### High BA month mask target_ba = get_masked_array(endog_data.values, master_mask) mean_ba = np.ma.mean(target_ba, axis=0) max_ba = np.ma.max(target_ba, axis=0) # + root_dir = Path("weighted_shap_maps") / "high_ba" high_ba_mask = ( ~((target_ba > (2 * np.ma.mean(target_ba, axis=0))) & (np.ma.max(target_ba) > 1e-2)) ).data high_ba_mask |= target_ba.mask high_ba_mask |= np.sum(high_ba_mask, axis=0) < 9 high_ba_sum = np.sum(high_ba_mask, axis=0) plot_sum = np.ma.MaskedArray(12 - high_ba_sum, mask=np.isclose(high_ba_sum, 12)) boundaries = np.arange(np.min(plot_sum), np.max(plot_sum) + 2) - 0.5 fig, cbar = cube_plotting( plot_sum, boundaries=boundaries, return_cbar=True, colorbar_kwargs={"label": "nr. 
valid samples"}, fig=plt.figure(figsize=(5.1, 2.6)), ) cbar.set_ticks(get_centres(boundaries)) cbar.set_ticklabels(list(map(int, get_centres(boundaries)))) map_figure_saver.save_figure(fig, "high_ba_n_valid", sub_directory=root_dir) # - # #### Calculate SHAP results only for those voxels with high BA high_ba_map_shap_results = calculate_2d_masked_shap_values( X_train, master_mask, shap_values, additional_mask=high_ba_mask ) # #### Plotting corresponding maps of high BA SHAP values plot_shap_value_maps( X_train, high_ba_map_shap_results, map_figure_saver, directory=Path("shap_maps") / "high_ba", ) # ### Rank SHAP masks # + max_month = 9 # Mark areas where at least this fraction of |SHAP| has been plotted previously. thres = 0.7 close_figs = True def param_iter(): for (exc_name, exclude_inst) in tqdm( [("with_inst", False), ("no_inst", True)], desc="Exclude inst." ): for feature_name in tqdm( ["VOD Ku-band", "SIF", "FAPAR", "LAI", "Dry Day Period"], desc="Feature" ): for (filter_name, shap_results) in tqdm( [("normal", map_shap_results), ("high_ba", high_ba_map_shap_results)], desc="SHAP results", ): for shap_measure in tqdm( ["masked_max_shap_arrs", "masked_abs_shap_arrs"], desc="SHAP measure", ): yield (exc_name, exclude_inst), feature_name, ( filter_name, shap_results, ), shap_measure for ( (exc_name, exclude_inst), feature_name, (filter_name, shap_results), shap_measure, ) in islice(param_iter(), 0, None): short_feature = shorten_features(feature_name) sub_directory = Path("rank_shap_maps") / filter_name / shap_measure / exc_name filtered = np.array(filter_by_month(X_train.columns, feature_name, max_month)) lags = np.array( [get_lag(feature, target_feature=feature_name) for feature in filtered] ) # Ensure lags are sorted consistently. 
lag_sort_inds = np.argsort(lags) filtered = tuple(filtered[lag_sort_inds]) lags = tuple(lags[lag_sort_inds]) if exclude_inst and 0 in lags: assert lags[0] == 0 lags = lags[1:] filtered = filtered[1:] n_features = len(filtered) # There is no point plotting this map for a single feature or less since we are # interested in a comparison between different feature ranks. if n_features <= 1: continue selected_data = np.empty(n_features, dtype=object) for i, col in enumerate(X_train.columns): if col in filtered: selected_data[lags.index(get_lag(col))] = shap_results[shap_measure][ "data" ][i].copy() shared_mask = reduce(np.logical_or, (data.mask for data in selected_data)) for data in selected_data: data.mask = shared_mask stacked_abs = np.abs(np.vstack([data.data[np.newaxis] for data in selected_data])) # Indices in descending order. sort_indices = np.argsort(stacked_abs, axis=0)[::-1] # Maintain the same colors even if fewer colors are used. colors = [lag_color_dict[lag] for lag in lags] cmap, norm = from_levels_and_colors( levels=np.arange(n_features + 1), colors=colors, extend="neither", ) short_feature = shorten_features(feature_name) sum_shap = np.ma.MaskedArray(np.sum(stacked_abs, axis=0), mask=shared_mask) already_plotted = np.zeros_like(sum_shap) for i, rank in zip( tqdm(range(n_features), desc="Plotting", leave=False), ["1st", "2nd", "3rd", "4th", "5th"], ): cube = dummy_lat_lon_cube(np.ma.MaskedArray(sort_indices[i], mask=shared_mask)) fig, ax = plt.subplots( figsize=(5.1, 2.6), subplot_kw={"projection": ccrs.Robinson()} ) style = 2 if style == 1: # Stippling for significant areas. mpl.rc("hatch", linewidth=0.2) hatches = ["." * 6, None] else: # Hatching for insignificant areas. mpl.rc("hatch", linewidth=0.1) hatches = ["/" * 14, None] # XXX: Temporary, since this takes an exorbitant amount of time to render. 
# if np.any(already_plotted >= thres): # ax.contourf( # cube.coord("longitude").points, # cube.coord("latitude").points, # already_plotted, # transform=ccrs.PlateCarree(), # colors="none", # zorder=4, # levels=[thres, 1], # hatches=hatches, # ) fig, cbar = cube_plotting( cube, title=f"{rank} |SHAP {short_feature} Lag| - thres: {thres * 100:0.0f}%", fig=fig, ax=ax, cmap=cmap, norm=norm, return_cbar=True, colorbar_kwargs={"label": short_feature}, coastline_kwargs={"linewidth": 0.3}, ) # Label the colorbar using the feature names. cbar.set_ticks(np.arange(n_features) + 0.5) labels = [] for lag in lags: if lag: labels.append(f"{lag} M") else: labels.append("Inst.") cbar.set_ticklabels(labels) data = np.take_along_axis(stacked_abs, sort_indices[i : i + 1], axis=0)[0] already_plotted += data / sum_shap map_figure_saver.save_figure( fig, f"{rank}_{short_feature}", sub_directory=sub_directory ) plt.close(fig) # - # ### Use significant peak finding algorithm to determine mean timing of maximum impact using SHAP values # #### Investigate the role of the significance parameters # + max_month = 9 # Mark areas where at least this fraction of |SHAP| has been plotted previously. thres = 0.7 def param_iter(): for exclude_inst in tqdm([False, True], desc="Exclude inst."): for feature_name in tqdm( ["VOD Ku-band", "SIF", "FAPAR", "LAI", "Dry Day Period"], desc="Feature" ): yield exclude_inst, feature_name for exclude_inst, feature_name in islice(param_iter(), 0, None): filtered = np.array(filter_by_month(X_train.columns, feature_name, max_month)) lags = np.array( [get_lag(feature, target_feature=feature_name) for feature in filtered] ) # Ensure lags are sorted consistently. 
lag_sort_inds = np.argsort(lags) filtered = tuple(filtered[lag_sort_inds]) lags = tuple(lags[lag_sort_inds]) if exclude_inst and 0 in lags: assert lags[0] == 0 lags = lags[1:] filtered = filtered[1:] n_features = len(filtered) # There is no point plotting this map for a single feature or less since we are # interested in a comparison between different feature ranks. if n_features <= 1: continue short_feature = shorten_features(feature_name) selected_data = np.empty(n_features, dtype=object) for i, col in enumerate(X_train.columns): if col in filtered: selected_data[lags.index(get_lag(col))] = map_shap_results[ "masked_abs_shap_arrs" ]["data"][i].copy() shared_mask = reduce(np.logical_or, (data.mask for data in selected_data)) for data in selected_data: data.mask = shared_mask stacked_shaps = np.vstack([data.data[np.newaxis] for data in selected_data]) # Calculate the significance of the global maxima for each of the valid pixels. # Valid indices are recorded in 'shared_mask'. valid_i, valid_j = np.where(~shared_mask) total_valid = len(valid_i) def get_true_significances(kwargs): ptp_threshold_factor = kwargs.pop("ptp_threshold_factor") significant = [] for i, j in zip(valid_i, valid_j): ptp_threshold = ptp_threshold_factor * mean_ba[i, j] significant.append( significant_peak( stacked_shaps[:, i, j], ptp_threshold=ptp_threshold, **kwargs ) ) return dict(zip(*np.unique(significant, return_counts=True)))[True] with concurrent.futures.ProcessPoolExecutor(max_workers=get_ncpus()) as executor: diff_thresholds = np.linspace(0.01, 0.99, 15) ptp_threshold_factors = np.linspace(0, 1.5, 8) kwargs_list = [ { "diff_threshold": diff_threshold, "ptp_threshold_factor": ptp_threshold_factor, } for diff_threshold, ptp_threshold_factor in product( diff_thresholds, ptp_threshold_factors ) ] fs = [executor.submit(get_true_significances, kwargs) for kwargs in kwargs_list] for f in tqdm( concurrent.futures.as_completed(fs), total=len(fs), desc="Varying significances", smoothing=0, ): 
pass plt.figure(figsize=(12, 8)) perc_sigs = 100 * np.array([f.result() for f in fs]) / total_valid for ptp_threshold_factor in ptp_threshold_factors: ptp_indices = [ i for i, kwargs in enumerate(kwargs_list) if np.isclose(kwargs["ptp_threshold_factor"], ptp_threshold_factor) ] plt.plot( diff_thresholds, perc_sigs[ptp_indices], label=f"ptp_thres: {ptp_threshold_factor:0.2f}", ) plt.title(f"{short_feature} - Exclude Inst: {exclude_inst}") plt.xlabel("Diff threshold") plt.ylabel("% significant") plt.grid(linestyle="--", alpha=0.4) plt.legend(loc="best") # - # #### Plot sig. maps diff_threshold = 0.5 ptp_threshold_factor = 0.12 # relative the mean # ### Plot weighted SHAP peak locations # + max_month = 9 close_figs = True def param_iter(): for (exc_name, exclude_inst) in tqdm( [("with_inst", False), ("no_inst", True)], desc="Exclude inst." ): for feature_name in tqdm( ["VOD Ku-band", "SIF", "FAPAR", "LAI", "Dry Day Period"], desc="Feature" ): for (filter_name, shap_results) in tqdm( [("normal", map_shap_results), ("high_ba", high_ba_map_shap_results)], desc="SHAP results", ): for shap_measure in tqdm( [ "masked_max_shap_arrs", "masked_abs_shap_arrs", "masked_shap_arrs", ], desc="SHAP measure", ): yield (exc_name, exclude_inst), feature_name, ( filter_name, shap_results, ), shap_measure for ( (exc_name, exclude_inst), feature_name, (filter_name, shap_results), shap_measure, ) in islice(param_iter(), 0, None): short_feature = shorten_features(feature_name) sub_directory = Path("weighted_shap_maps") / filter_name / shap_measure / exc_name filtered = np.array(filter_by_month(X_train.columns, feature_name, max_month)) lags = np.array( [get_lag(feature, target_feature=feature_name) for feature in filtered] ) # Ensure lags are sorted consistently. 
lag_sort_inds = np.argsort(lags) filtered = tuple(filtered[lag_sort_inds]) lags = tuple(lags[lag_sort_inds]) if exclude_inst and 0 in lags: assert lags[0] == 0 lags = lags[1:] filtered = filtered[1:] n_features = len(filtered) # There is no point plotting this map for a single feature or less since we are # interested in a comparison between different feature ranks. if n_features <= 1: continue selected_data = np.empty(n_features, dtype=object) for i, col in enumerate(X_train.columns): if col in filtered: selected_data[lags.index(get_lag(col))] = shap_results[shap_measure][ "data" ][i].copy() shared_mask = reduce(np.logical_or, (data.mask for data in selected_data)) for data in selected_data: data.mask = shared_mask stacked_shaps = np.vstack([data.data[np.newaxis] for data in selected_data]) # Calculate the significance of the global maxima for each of the valid pixels. # Valid indices are recorded in 'shared_mask'. valid_i, valid_j = np.where(~shared_mask) total_valid = len(valid_i) max_positions = np.ma.MaskedArray( np.zeros_like(shared_mask, dtype=np.float64), mask=True ) for i, j in zip(tqdm(valid_i, desc="Evaluating maxima", smoothing=0), valid_j): ptp_threshold = ptp_threshold_factor * mean_ba[i, j] if significant_peak( stacked_shaps[:, i, j], diff_threshold=diff_threshold, ptp_threshold=ptp_threshold, ): # If the maximum is significant, go on the calculate the weighted avg. of the signal. max_positions[i, j] = np.average( lags, weights=np.abs(stacked_shaps[:, i, j]) ) # fig = cube_plotting( max_positions, title=f"|SHAP| Weighted Maximum {short_feature} - Exclude Inst: {exclude_inst}", fig=plt.figure(figsize=(5.1, 2.6)), colorbar_kwargs={"label": short_feature, "format": "%0.1f"}, coastline_kwargs={"linewidth": 0.3}, # boundaries=np.arange(0, max(lags)+1), ) map_figure_saver.save_figure( fig, f"weighted_shap_{short_feature}", sub_directory=sub_directory ) if close_figs: plt.close() # # Plot example series. 
# plt.figure(figsize=(12, 8)) # plt.title(f"{short_feature} - Exclude Inst: {exclude_inst}") # for i in np.random.RandomState(0).choice(total_valid, 100, False): # plot_i = valid_i[i] # plot_j = valid_j[i] # plot_data = stacked_shaps[:, plot_i, plot_j] # plt.plot(lags, plot_data / np.max(plot_data), c="C0", alpha=0.4) # plt.xlabel(lags) # plt.ylabel("|SHAP|") # plt.grid(linestyle="--", alpha=0.4) # # plt.legend(loc='best') # - # ### Analyse SHAP -- BA relationship for selected regions # + ba_cmap = plt.get_cmap("inferno") fig, ax = plt.subplots( 1, 1, figsize=(14, 6), subplot_kw=dict(projection=ccrs.Robinson()) ) feature_name = "<NAME>" results_dict = map_shap_results["masked_shap_arrs"] fig = cube_plotting( results_dict["data"][ list(map(shorten_features, X_train.columns)).index(feature_name) ], fig=fig, ax=ax, title=f"Mean SHAP value for '{shorten_features(feature_name)}'", nbins=7, vmin=results_dict["vmin"], vmax=results_dict["vmax"], log=True, log_auto_bins=False, extend="neither", min_edge=1e-3, cmap="Spectral_r", cmap_midpoint=0, cmap_symmetric=True, colorbar_kwargs={ "format": "%0.1e", "label": f"SHAP ('{shorten_features(feature_name)}')", }, coastline_kwargs={"linewidth": 0.3}, ) # lat_range = (-5, 4) lat_range = (-5, 0) lon_range = (12.5, 26.5) ax.set_global() ax.add_patch( mpatches.Rectangle( xy=[min(lon_range), min(lat_range)], width=np.ptp(lon_range), height=np.ptp(lat_range), facecolor="none", edgecolor="blue", alpha=0.8, lw=2, transform=ccrs.PlateCarree(), ) ) # _ = ax.gridlines() ba_train_cube = dummy_lat_lon_cube(get_mm_data(y_train.values, master_mask, "train")) ba_subset = ba_train_cube.intersection(latitude=lat_range).intersection( longitude=lon_range ) cube_plotting(ba_subset, log=True) mm_valid_indices, mm_valid_train_indices, mm_valid_val_indices = get_mm_indices( master_mask ) mm_kind_indices = mm_valid_train_indices[: shap_values.shape[0]] X_i = list(map(shorten_features, X_train.columns)).index(feature_name) # Convert 1D shap values into 3D 
array (time, lat, lon). masked_shap_comp = np.ma.MaskedArray( np.zeros_like(master_mask, dtype=np.float64), mask=np.ones_like(master_mask) ) masked_shap_comp.ravel()[mm_kind_indices] = shap_values[:, X_i] # if additional_mask is not None: # masked_shap_comp.mask |= match_shape(additional_mask, masked_shap_comp.shape) assert np.all( np.isclose(results_dict["data"][X_i], np.ma.mean(masked_shap_comp, axis=0)) ) shap_cube = dummy_lat_lon_cube(masked_shap_comp) shap_subset = shap_cube.intersection(latitude=lat_range).intersection( longitude=lon_range ) plt.figure() indices = list(np.ndindex(ba_subset.shape[1:])) ba_camap = plt.get_cmap("inferno") for i, index in enumerate(tqdm(indices, desc="Plotting")): plt.plot( ba_subset.data[(slice(None), *index)], marker="o", alpha=0.4, c=ba_cmap(i / (len(indices) - 1)), ) _ = plt.ylabel("BA") plt.figure() indices = list(np.ndindex(ba_subset.shape[1:])) ba_camap = plt.get_cmap("inferno") for i, index in enumerate(tqdm(indices, desc="Plotting")): plt.plot( shap_subset.data[(slice(None), *index)], marker="o", alpha=0.4, c=ba_cmap(i / (len(indices) - 1)), ) _ = plt.ylabel("SHAP(Dry Days)") plt.figure(figsize=(15, 15)) indices = list(np.ndindex(ba_subset.shape[1:])) ba_camap = plt.get_cmap("inferno") for i, index in enumerate(tqdm(indices, desc="Plotting")): plt.plot( shap_subset.data[(slice(None), *index)], ba_subset.data[(slice(None), *index)], marker="o", alpha=0.4, c=ba_cmap(i / (len(indices) - 1)), linestyle="", ) plt.ylabel("BA") _ = plt.xlabel("SHAP(Dry Days)") # + fig, ax = plt.subplots( 1, 1, figsize=(14, 6), subplot_kw=dict(projection=ccrs.Robinson()) ) feature_name = "<NAME>" results_dict = map_shap_results["masked_shap_arrs"] fig = cube_plotting( results_dict["data"][ list(map(shorten_features, X_train.columns)).index(feature_name) ], fig=fig, ax=ax, title=f"Mean SHAP value for '{shorten_features(feature_name)}'", nbins=7, vmin=results_dict["vmin"], vmax=results_dict["vmax"], log=True, log_auto_bins=False, extend="neither", 
min_edge=1e-3, cmap="Spectral_r", cmap_midpoint=0, cmap_symmetric=True, colorbar_kwargs={ "format": "%0.1e", "label": f"SHAP ('{shorten_features(feature_name)}')", }, coastline_kwargs={"linewidth": 0.3}, ) # lat_range = (-5, 4) lat_range = (6, 11) lon_range = (12.5, 26.5) ax.set_global() ax.add_patch( mpatches.Rectangle( xy=[min(lon_range), min(lat_range)], width=np.ptp(lon_range), height=np.ptp(lat_range), facecolor="none", edgecolor="blue", alpha=0.8, lw=2, transform=ccrs.PlateCarree(), ) ) # _ = ax.gridlines() ba_train_cube = dummy_lat_lon_cube(get_mm_data(y_train.values, master_mask, "train")) ba_subset = ba_train_cube.intersection(latitude=lat_range).intersection( longitude=lon_range ) cube_plotting(ba_subset, log=True) mm_valid_indices, mm_valid_train_indices, mm_valid_val_indices = get_mm_indices( master_mask ) mm_kind_indices = mm_valid_train_indices[: shap_values.shape[0]] X_i = list(map(shorten_features, X_train.columns)).index(feature_name) # Convert 1D shap values into 3D array (time, lat, lon). 
masked_shap_comp = np.ma.MaskedArray( np.zeros_like(master_mask, dtype=np.float64), mask=np.ones_like(master_mask) ) masked_shap_comp.ravel()[mm_kind_indices] = shap_values[:, X_i] # if additional_mask is not None: # masked_shap_comp.mask |= match_shape(additional_mask, masked_shap_comp.shape) assert np.all( np.isclose(results_dict["data"][X_i], np.ma.mean(masked_shap_comp, axis=0)) ) shap_cube = dummy_lat_lon_cube(masked_shap_comp) shap_subset = shap_cube.intersection(latitude=lat_range).intersection( longitude=lon_range ) plt.figure() indices = list(np.ndindex(ba_subset.shape[1:])) ba_camap = plt.get_cmap("inferno") for i, index in enumerate(tqdm(indices, desc="Plotting")): plt.plot( ba_subset.data[(slice(None), *index)], marker="o", alpha=0.4, c=ba_cmap(i / (len(indices) - 1)), ) _ = plt.ylabel("BA") plt.figure() indices = list(np.ndindex(ba_subset.shape[1:])) ba_camap = plt.get_cmap("inferno") for i, index in enumerate(tqdm(indices, desc="Plotting")): plt.plot( shap_subset.data[(slice(None), *index)], marker="o", alpha=0.4, c=ba_cmap(i / (len(indices) - 1)), ) _ = plt.ylabel("SHAP(Dry Days)") plt.figure(figsize=(15, 15)) indices = list(np.ndindex(ba_subset.shape[1:])) ba_camap = plt.get_cmap("inferno") for i, index in enumerate(tqdm(indices, desc="Plotting")): plt.plot( shap_subset.data[(slice(None), *index)], ba_subset.data[(slice(None), *index)], marker="o", alpha=0.4, c=ba_cmap(i / (len(indices) - 1)), linestyle="", ) plt.ylabel("BA") _ = plt.xlabel("SHAP(Dry Days)") # - # ### Categorisation into multiple peaks # #### Test the effect of parameters on the peak distribution # + max_month = 9 # Mark areas where at least this fraction of |SHAP| has been plotted previously. 
thres = 0.7 def param_iter(): for exclude_inst in tqdm([False], desc="Exclude inst."): for feature_name in tqdm(["FAPAR", "Dry Day Period"], desc="Feature"): yield exclude_inst, feature_name for exclude_inst, feature_name in islice(param_iter(), 0, 1): filtered = np.array(filter_by_month(X_train.columns, feature_name, max_month)) lags = np.array( [get_lag(feature, target_feature=feature_name) for feature in filtered] ) # Ensure lags are sorted consistently. lag_sort_inds = np.argsort(lags) filtered = tuple(filtered[lag_sort_inds]) lags = tuple(lags[lag_sort_inds]) if exclude_inst and 0 in lags: assert lags[0] == 0 lags = lags[1:] filtered = filtered[1:] n_features = len(filtered) # There is no point plotting this map for a single feature or less since we are # interested in a comparison between different feature ranks. if n_features <= 1: continue short_feature = shorten_features(feature_name) selected_data = np.empty(n_features, dtype=object) for i, col in enumerate(X_train.columns): if col in filtered: selected_data[lags.index(get_lag(col))] = map_shap_results[ "masked_shap_arrs" ]["data"][i].copy() shared_mask = reduce(np.logical_or, (data.mask for data in selected_data)) for data in selected_data: data.mask = shared_mask stacked_shaps = np.vstack([data.data[np.newaxis] for data in selected_data]) # Calculate the significance of the global maxima for each of the valid pixels. # Valid indices are recorded in 'shared_mask'. 
valid_i, valid_j = np.where(~shared_mask) total_valid = len(valid_i) max_positions = np.ma.MaskedArray( np.zeros_like(shared_mask, dtype=np.float64), mask=True ) def get_n_peaks(ptp_threshold_factor, diff_threshold): n_peaks = [] for i, j in zip(valid_i, valid_j): ptp_threshold = ptp_threshold_factor * mean_ba[i, j] peaks_i = significant_peak( stacked_shaps[:, i, j], diff_threshold=diff_threshold, ptp_threshold=ptp_threshold, strict=False, ) n_peaks.append(peaks_i) return dict(zip(*np.unique(n_peaks, return_counts=True))) with concurrent.futures.ProcessPoolExecutor(max_workers=get_ncpus()) as executor: diff_thresholds = np.round(np.linspace(0.01, 0.99, 8), 2) ptp_threshold_factors = np.round(np.linspace(0, 0.75, 12), 2) fs = [ executor.submit(get_n_peaks, ptp_threshold_factor, diff_threshold) for ptp_threshold_factor, diff_threshold in product( ptp_threshold_factors, diff_thresholds ) ] for f in tqdm( concurrent.futures.as_completed(fs), total=len(fs), desc="Varying significances", smoothing=0, ): pass results = {} for (f, (ptp_threshold_factor, diff_threshold)) in zip( fs, product(ptp_threshold_factors, diff_thresholds) ): results[(ptp_threshold_factor, diff_threshold)] = f.result() # - n_peaks_dict = {} for key, vals in results.items(): n_peaks_dict[key] = {} all_tuples = list(vals.keys()) all_n_peaks = np.array([len(tup) for tup in all_tuples]) all_n = np.array(list(vals.values())) for n in np.unique(all_n_peaks): n_peaks_dict[key][n] = np.sum(all_n[all_n_peaks == n]) df = pd.DataFrame(n_peaks_dict).T df.index.names = ["ptp_thres_f", "diff_thres"] _ = df.groupby("ptp_thres_f").plot(kind="bar") df = pd.DataFrame(n_peaks_dict).T df.index.names = ["ptp_thres_f", "diff_thres"] _ = df.groupby("diff_thres").plot(kind="bar") # #### Carry out run pfts = ESA_CCI_Landcover_PFT() pfts.limit_months(start=PartialDateTime(2010, 1), end=PartialDateTime(2015, 1)) pfts.regrid() pfts = pfts.get_mean_dataset() pft_cube = Datasets(pfts).select_variables("pftHerb", 
inplace=False).cube fig = cube_plotting(pft_cube, fig=plt.figure(figsize=(12, 5))) # + pft_cubes = [] pft_names = ("pftShrubBD", "pftShrubBE", "pftShrubNE") for pft_name in pft_names: pft_cube = Datasets(pfts).select_variables(pft_name, inplace=False).cube pft_cubes.append(pft_cube) fig = cube_plotting(pft_cube, fig=plt.figure(figsize=(12, 5))) fig = cube_plotting( reduce(lambda x, y: x + y, pft_cubes), title=f"Sum of {', '.join(pft_names)}", fig=plt.figure(figsize=(12, 5)), ) pft_cube = Datasets(pfts).select_variables("ShrubAll", inplace=False).cube fig = cube_plotting(pft_cube, fig=plt.figure(figsize=(12, 5))) # + pft_cubes = [] pft_names = ("pftTreeBD", "pftTreeBE", "pftTreeND", "pftTreeNE") for pft_name in pft_names: pft_cube = Datasets(pfts).select_variables(pft_name, inplace=False).cube pft_cubes.append(pft_cube) fig = cube_plotting(pft_cube, fig=plt.figure(figsize=(12, 5))) fig = cube_plotting( reduce(lambda x, y: x + y, pft_cubes), title=f"Sum of {', '.join(pft_names)}", fig=plt.figure(figsize=(12, 5)), ) pft_cube = Datasets(pfts).select_variables("TreeAll", inplace=False).cube fig = cube_plotting(pft_cube, fig=plt.figure(figsize=(12, 5))) # - pft_cubes = [] pft_names = ("ShrubAll", "TreeAll") for pft_name in pft_names: pft_cube = Datasets(pfts).select_variables(pft_name, inplace=False).cube pft_cubes.append(pft_cube) fig = cube_plotting(pft_cube, fig=plt.figure(figsize=(12, 5))) fig = cube_plotting( reduce(lambda x, y: x + y, pft_cubes), title=f"Sum of {', '.join(pft_names)}", fig=plt.figure(figsize=(12, 5)), ) # + max_month = 9 close_figs = True def param_iter(): for (exc_name, exclude_inst) in tqdm( [("with_inst", False), ("no_inst", True)], desc="Exclude inst." 
): for feature_name in tqdm( ["VOD Ku-band", "SIF", "FAPAR", "LAI", "Dry Day Period"], desc="Feature" ): for (filter_name, shap_results) in tqdm( [("normal", map_shap_results), ("high_ba", high_ba_map_shap_results)], desc="SHAP results", ): for shap_measure in tqdm( [ "masked_max_shap_arrs", "masked_abs_shap_arrs", "masked_shap_arrs", ], desc="SHAP measure", ): yield (exc_name, exclude_inst), feature_name, ( filter_name, shap_results, ), shap_measure for ( (exc_name, exclude_inst), feature_name, (filter_name, shap_results), shap_measure, ) in islice(param_iter(), 0, None): short_feature = shorten_features(feature_name) sub_directory = ( Path("shap_peaks") / filter_name / shap_measure / short_feature / exc_name ) filtered = np.array(filter_by_month(X_train.columns, feature_name, max_month)) lags = np.array( [get_lag(feature, target_feature=feature_name) for feature in filtered] ) # Ensure lags are sorted consistently. lag_sort_inds = np.argsort(lags) filtered = tuple(filtered[lag_sort_inds]) lags = tuple(lags[lag_sort_inds]) if exclude_inst and 0 in lags: assert lags[0] == 0 lags = lags[1:] filtered = filtered[1:] n_features = len(filtered) # There is no point plotting this map for a single feature or less since we are # interested in a comparison between different feature ranks. if n_features <= 1: continue selected_data = np.empty(n_features, dtype=object) for i, col in enumerate(X_train.columns): if col in filtered: selected_data[lags.index(get_lag(col))] = shap_results[shap_measure][ "data" ][i].copy() shared_mask = reduce(np.logical_or, (data.mask for data in selected_data)) for data in selected_data: data.mask = shared_mask stacked_shaps = np.vstack([data.data[np.newaxis] for data in selected_data]) # Calculate the significance of the global maxima for each of the valid pixels. # Valid indices are recorded in 'shared_mask'. 
valid_i, valid_j = np.where(~shared_mask) total_valid = len(valid_i) max_positions = np.ma.MaskedArray( np.zeros_like(shared_mask, dtype=np.float64), mask=True ) peak_indices = [] for i, j in zip(tqdm(valid_i, desc="Evaluating maxima", smoothing=0), valid_j): ptp_threshold = ptp_threshold_factor * mean_ba[i, j] peaks_i = significant_peak( stacked_shaps[:, i, j], diff_threshold=diff_threshold, ptp_threshold=ptp_threshold, strict=False, ) # Disregarding the sign of the mean influence, sorted by absolute # value (not peak height) magnitude. # # peak_indices.append(peaks_i) # Adding information about the sign of the mean influence, sorted by absolute # value (not peak height) magnitude. # # peak_indices.append(tuple( # f"{p_i}({'+' if stacked_shaps[p_i, i, j] > 0 else '-'})" for p_i in peaks_i # )) # Adding information about the sign of the mean influence, sorted by time. # peak_indices.append( tuple( f"{lags[p_i]}({'+' if stacked_shaps[p_i, i, j] > 0 else '-'})" for p_i in sorted(peaks_i) ) ) # pd.Series( dict( zip( *np.unique( [len(indices) for indices in peak_indices], return_counts=True ) ) ) ).plot.bar( ax=plt.subplots(figsize=(6, 4))[1], title=f"{short_feature}, exclude inst: {exclude_inst}", rot=0, ) figure_saver.save_figure(plt.gcf(), "n_peaks_distr", sub_directory=sub_directory) if close_figs: plt.close() # peaks_arr = np.ma.MaskedArray( np.zeros_like(shared_mask, dtype=np.float64), mask=True ) for i, j, indices in zip(valid_i, valid_j, peak_indices): peaks_arr[i, j] = len(indices) # fig, cbar = cube_plotting( peaks_arr, title=f"Nr. Peaks {short_feature}, exclude inst: {exclude_inst}", boundaries=np.arange(0, 4) - 0.5, fig=plt.figure(figsize=(5.1, 2.6)), coastline_kwargs={"linewidth": 0.3}, colorbar_kwargs={"label": "nr. 
peaks", "format": "%0.1f"}, return_cbar=True, ) tick_pos = np.arange(4, dtype=np.float64) tick_pos[3] -= 0.5 cbar.set_ticks(tick_pos) tick_labels = list(map(str, range(3))) + [">2"] cbar.set_ticklabels(tick_labels) # plt.gca().gridlines() map_figure_saver.save_figure( fig, f"nr_shap_peaks_map_{short_feature}", sub_directory=sub_directory ) if close_figs: plt.close() # masked_peaks = peaks_arr.copy() masked_peaks.mask |= (peaks_arr.data == 0) | (peaks_arr.data > 2) cmap, norm = from_levels_and_colors( levels=np.arange(1, 4) - 0.5, colors=["C1", "C2"], extend="neither", ) # fig, cbar = cube_plotting( masked_peaks, title=f"Nr. Peaks {short_feature}, exclude inst: {exclude_inst}", # boundaries=np.arange(1, 4) - 0.5, fig=plt.figure(figsize=(5.1, 2.6)), coastline_kwargs={"linewidth": 0.3}, colorbar_kwargs={"label": "nr. peaks", "format": "%0.1f"}, return_cbar=True, cmap=cmap, norm=norm, ) tick_pos = np.arange(3, dtype=np.float64) cbar.set_ticks(tick_pos) tick_labels = list(map(str, range(1, 3))) cbar.set_ticklabels(tick_labels) # plt.gca().gridlines() map_figure_saver.save_figure( fig, f"filtered_nr_shap_peaks_map_{short_feature}", sub_directory=sub_directory ) if close_figs: plt.close() # valid_peak_indices = [] for i, j, peaks_i in zip(valid_i, valid_j, peak_indices): if masked_peaks.mask[i, j]: # Only use valid samples. 
continue valid_peak_indices.append(peaks_i) assert np.all( np.sort(np.unique([len(indices) for indices in valid_peak_indices])) == np.array([1, 2]) ) # pd.Series( dict( zip( *np.unique( [len(indices) for indices in valid_peak_indices], return_counts=True ) ) ) ).plot.bar( ax=plt.subplots(figsize=(6, 4))[1], title=f"{short_feature}, exclude inst: {exclude_inst}", rot=0, ) figure_saver.save_figure( plt.gcf(), "filtered_n_peaks_distr", sub_directory=sub_directory ) if close_figs: plt.close() # peaks_dict = dict(zip(*np.unique(valid_peak_indices, return_counts=True))) total_counts = np.sum(list(peaks_dict.values())) relative_counts_dict = {key: val / total_counts for key, val in peaks_dict.items()} # print(f"{short_feature}, exclude inst: {exclude_inst}", relative_counts_dict) # fig = plt.figure(figsize=(7, 0.3 * len(relative_counts_dict) + 0.4)) pd.Series( {", ".join(k): v for k, v in relative_counts_dict.items()} ).sort_values().plot.barh( fontsize=12, title=f"{short_feature}, exclude inst: {exclude_inst}", ) figure_saver.save_figure(plt.gcf(), "peak_comb_distr", sub_directory=sub_directory) if close_figs: plt.close() # ##### Eliminate the lowest X% iff there are more than Y combinations keys, values = list(zip(*relative_counts_dict.items())) keys = np.asarray(keys) values = np.asarray(values) # elim_frac = 0.2 # min_n_peaks = 6 max_n_peaks = 6 min_frac = 0.05 # Require at least this fraction per entry. sorted_indices = np.argsort(values) cumulative_fractions = np.cumsum(values[sorted_indices]) # Ensure at least `min_n_entries` entries are present, but no more than `max_n_peaks`. mask = np.ones_like(cumulative_fractions, dtype=np.bool_) # mask[-min_n_peaks:] = True mask[:-max_n_peaks] = False # no more than `max_n_peaks`. 
# mask |= (cumulative_fractions > elim_frac) mask &= values[sorted_indices] > min_frac print( f"Remaining fraction: {short_feature}, exclude inst: {exclude_inst}", np.sum(values[sorted_indices][mask]), ) thres_counts_dict = { key: val for key, val in zip(keys[sorted_indices][mask], values[sorted_indices][mask]) } print(f"{short_feature}, exclude inst: {exclude_inst}", thres_counts_dict) peak_keys = list(thres_counts_dict) imp_peaks = np.ma.MaskedArray( np.zeros_like(shared_mask, dtype=np.float64), mask=True ) for i, j, indices in zip(valid_i, valid_j, peak_indices): if indices in peak_keys: imp_peaks[i, j] = peak_keys.index(indices) # boundaries = np.arange(len(peak_keys) + 1) - 0.5 cmap, norm = from_levels_and_colors( levels=boundaries, colors=[plt.get_cmap("tab10")(i) for i in range(len(peak_keys))], extend="neither", ) fig, cbar = cube_plotting( imp_peaks, title=f"Peak Distr. {short_feature}, exclude inst: {exclude_inst}", fig=plt.figure(figsize=(5.1, 2.6)), coastline_kwargs={"linewidth": 0.3}, colorbar_kwargs={"label": "peak combination"}, return_cbar=True, cmap=cmap, norm=norm, ) tick_pos = np.arange(len(peak_keys), dtype=np.float64) cbar.set_ticks(tick_pos) cbar.set_ticklabels(peak_keys) # plt.gca().gridlines() map_figure_saver.save_figure( fig, f"shap_peak_distr_map_{short_feature}", sub_directory=sub_directory ) if close_figs: plt.close() # assert len(np.unique(imp_peaks.data[~imp_peaks.mask])) == len(peak_keys) results = {} for comb_i in tqdm( np.unique(imp_peaks.data[~imp_peaks.mask]), desc="Peak combination" ): for pft_cube in pfts: selection = (~(pft_cube.data.mask | imp_peaks.mask)) & np.isclose( imp_peaks, comb_i ) results[ (str(peak_keys[int(comb_i)]), pft_cube.name()) ] = pft_cube.data.data[selection] df = pd.DataFrame({key: pd.Series(vals) for key, vals in results.items()}) df.columns.names = ["peak_combination", "pft"] for level in [0, 1]: # df.groupby(axis=1, level=level).boxplot( subplots=True, layout=(len(df.columns.levels[level]), 1), 
figsize=(10, 5 * len(df.columns.levels[level])), rot=30, ) plt.tight_layout() figure_saver.save_figure( plt.gcf(), f"boxplots_level_{df.columns.names[level]}", sub_directory=sub_directory, ) if close_figs: plt.close() # for key in df.columns.levels[level]: # fig = plt.figure() sns.violinplot(data=df.xs(key, level=level, axis="columns")) plt.title(key) figure_saver.save_figure( fig, f"violin_level_{df.columns.names[level]}_{key}", sub_directory=sub_directory, ) if close_figs: plt.close() #
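The `significant_peak` function used throughout this notebook comes from the project's own utilities and is not defined here. As a rough, hypothetical stand-in (not the actual implementation), the kind of test such a routine might perform could be sketched as: flag local maxima in the short lagged-SHAP sequence that exceed their immediate neighbours by a fraction `diff_threshold` of the sequence's peak-to-peak range, provided that range clears `ptp_threshold`:

```python
def significant_peaks_sketch(values, diff_threshold=0.5, ptp_threshold=0.0):
    """Hypothetical stand-in for the project's `significant_peak`.

    Returns the indices of local maxima that exceed all of their
    immediate neighbours by `diff_threshold` times the sequence's
    peak-to-peak range, provided that range clears `ptp_threshold`.
    """
    ptp = max(values) - min(values)
    if ptp == 0 or ptp < ptp_threshold:
        return ()
    peaks = []
    for i, v in enumerate(values):
        # Immediate neighbours, clipped at the sequence boundaries.
        neighbours = [values[j] for j in (i - 1, i + 1) if 0 <= j < len(values)]
        if all(v - n >= diff_threshold * ptp for n in neighbours):
            peaks.append(i)
    return tuple(peaks)
```

The real routine takes extra options (e.g. `strict=False` above), so this only illustrates the thresholding idea, not its exact behaviour.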
analyses/seasonality_paper_st/fapar_only/shap_map_plot.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:metis] * # language: python # name: conda-env-metis-py # --- import pandas as pd import re raw_data_path = './data/raw/ted-talks/' meta_data_filename = 'ted_main.csv' transcripts_filename = 'transcripts.csv' m_df = pd.read_csv(raw_data_path+meta_data_filename) t_df = pd.read_csv(raw_data_path+transcripts_filename) t_df.shape # + [markdown] heading_collapsed=true # ### Figuring out the regex # + hidden=true print(re.match('(?=.*love)(?=.*hope)','hope love')) # + hidden=true s='This sentence contains both the word hope and the word love.' print(re.match('(?=.*love)(?=.*hope)',s)) # + [markdown] hidden=true # Finding the words using regex ended up being very time consuming. It was much faster to use single word `contains` and [pandas overloaded 'bitwise' `&` operator](https://stackoverflow.com/questions/36921951/truth-value-of-a-series-is-ambiguous-use-a-empty-a-bool-a-item-a-any-o). 
# - # ### Looking for the words in the transcripts print('Transcripts that contain the words:') print(' faith',t_df[t_df['transcript'].str.contains('faith',case=False)].shape[0]) print(' hope',t_df[t_df['transcript'].str.contains('hope',case=False)].shape[0]) print(' love',t_df[t_df['transcript'].str.contains('love',case=False)].shape[0]) print(' faith & hope',t_df[t_df['transcript'].str.contains('faith',case=False) & t_df['transcript'].str.contains('hope',case=False)].shape[0]) print(' hope & love', t_df[t_df['transcript'].str.contains('hope',case=False) & t_df['transcript'].str.contains('love',case=False)].shape[0]) print(' faith & love',t_df[t_df['transcript'].str.contains('faith',case=False) & t_df['transcript'].str.contains('love',case=False)].shape[0]) print(' faith, hope & love',t_df[t_df['transcript'].str.contains('faith',case=False) & t_df['transcript'].str.contains('hope',case=False) & t_df['transcript'].str.contains('love',case=False)].shape[0]) print(' faith or hope or love',t_df[t_df['transcript'].str.contains('faith',case=False) | t_df['transcript'].str.contains('hope',case=False) | t_df['transcript'].str.contains('love',case=False)].shape[0]) # ### Looking for the words in the descriptions print('Descriptions that contain the words:') print(' faith',m_df[m_df['description'].str.contains('faith',case=False)].shape[0]) print(' hope',m_df[m_df['description'].str.contains('hope',case=False)].shape[0]) print(' love',m_df[m_df['description'].str.contains('love',case=False)].shape[0]) print(' faith & hope',m_df[m_df['description'].str.contains('faith',case=False) & m_df['description'].str.contains('hope',case=False)].shape[0]) print(' hope & love',m_df[m_df['description'].str.contains('hope',case=False) & m_df['description'].str.contains('love',case=False)].shape[0]) print(' faith & love',m_df[m_df['description'].str.contains('faith',case=False) & m_df['description'].str.contains('love',case=False)].shape[0]) print(' faith, hope & 
love',m_df[m_df['description'].str.contains('faith',case=False) & m_df['description'].str.contains('hope',case=False) & m_df['description'].str.contains('love',case=False)].shape[0]) # ### Looking for any of the words in the titles print('Titles that contain the words:') print(' faith',m_df[m_df['title'].str.contains('faith',case=False)].title.count()) print(' hope',m_df[m_df['title'].str.contains('hope',case=False)].title.count()) print(' love',m_df[m_df['title'].str.contains('love',case=False)].title.count()) print(' faith, hope or love',m_df[m_df['title'].str.contains('faith|hope|love',case=False)].title.count())
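Both approaches used above — the lookahead regex and chained `.str.contains` calls combined with `&` — answer the same question: does a text mention every word, in any order? A tiny self-contained comparison (plain `re` and a list comprehension standing in for the pandas boolean indexing, on made-up sentences):

```python
import re

texts = [
    "a talk about hope and love",
    "a talk about love",
    "faith, hope and love together",
]

# Lookaheads: each (?=.*word) must succeed at the scan position, so the
# pattern matches only texts that contain every word, in any order.
pattern = re.compile(r"(?=.*hope)(?=.*love)", re.IGNORECASE)
via_regex = [t for t in texts if pattern.search(t)]

# The chained equivalent of `contains('hope') & contains('love')` in pandas.
via_contains = [t for t in texts if "hope" in t.lower() and "love" in t.lower()]

assert via_regex == via_contains
```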
proposal_EDA.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- import numpy as np import pandas as pd import gzip import json path = r'../../../Results/' file = r'Concatenated_MFile.gz' pathFile = path + file # + dataList = [] with gzip.open(pathFile, 'r') as dataFile: for line in dataFile: lineData = json.loads(line.decode('utf-8')) dataList.append(lineData) data = pd.DataFrame(dataList) # - len(data) data.longitude = data.longitude.round(5) data.latitude = data.latitude.round(5) cluster = data.groupby(['E.164 format', 'latitude', 'longitude']).size().reset_index(name='counts') cluster = cluster.sort_values('counts', ascending=False) pd.set_option('max_colwidth', 400) cluster cluster['counts'] > 200 phoneNumbers = cluster['E.164 format'].loc[cluster['counts'] > 200] data = data[~data['E.164 format'].isin(phoneNumbers)] cluster_clean = data.groupby(['E.164 format', 'latitude', 'longitude']).size().reset_index(name='counts') cluster_clean = cluster_clean.sort_values('counts', ascending=False) cluster_clean len(data) file = 'LB_ClusteringResult.gz' data.to_json(path + file, compression="gzip", orient='records', lines=True)
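The cluster-filtering step above needs the real `Concatenated_MFile.gz`; the underlying pattern — count records per `(E.164 format, latitude, longitude)` key, flag phone numbers whose count exceeds a threshold, and drop their records — can be sketched with the standard library alone (toy records and a threshold of 2, both made up for illustration):

```python
from collections import Counter

records = [
    {"phone": "+4912345", "lat": 52.52, "lon": 13.405},
    {"phone": "+4912345", "lat": 52.52, "lon": 13.405},
    {"phone": "+4912345", "lat": 52.52, "lon": 13.405},
    {"phone": "+3312345", "lat": 48.857, "lon": 2.352},
]

# Count occurrences of each (phone, lat, lon) combination, as the
# groupby(...).size() call does on the DataFrame.
counts = Counter((r["phone"], r["lat"], r["lon"]) for r in records)

# Phone numbers that appear in any over-represented cluster (count > 2).
suspicious = {phone for (phone, _, _), n in counts.items() if n > 2}

# Keep only records whose phone number is not in a suspicious cluster,
# mirroring `data[~data['E.164 format'].isin(phoneNumbers)]`.
cleaned = [r for r in records if r["phone"] not in suspicious]
```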
notebooks/Entity/LocalBusiness/Preprocessing/Clustering.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
import os
from os.path import join

import numpy as np
import pandas as pd
import cv2

from src import config
from src.utils import RLenc, make_dir

PREDICTION_DIRS = [
    '/workdir/data/predictions/mos-fpn-lovasz-se-resnext50-001/'
]
FOLDS = [0, 1, 2, 3, 4, 5]
FOLD_DIRS = [join(p, 'fold_%d'%f) for p in PREDICTION_DIRS for f in FOLDS]

PREDICTION_DIRS = [
    '/workdir/data/predictions/fpn-lovasz-se-resnext50-006-after-001/',
]
FOLDS = [0, 1, 2, 3, 4]
FOLD_DIRS += [join(p, 'fold_%d'%f) for p in PREDICTION_DIRS for f in FOLDS]

segm_thresh = 0.4
prob_thresh = 0.5
SAVE_NAME = 'mean-005-0.4'
make_dir(f'/workdir/data/{SAVE_NAME}')
# -

FOLD_DIRS

def get_mean_probs_df():
    probs_df_lst = []
    for fold_dir in FOLD_DIRS:
        probs_df = pd.read_csv(join(fold_dir, 'test', 'probs.csv'), index_col='id')
        probs_df_lst.append(probs_df)
    mean_probs_df = probs_df_lst[0].copy()
    for probs_df in probs_df_lst[1:]:
        mean_probs_df.prob += probs_df.prob
    mean_probs_df.prob /= len(probs_df_lst)
    return mean_probs_df

mean_probs_df = get_mean_probs_df()

# +
sample_submission = pd.read_csv(config.SAMPLE_SUBM_PATH)

for i, row in sample_submission.iterrows():
    pred_name = row.id+'.png'
    pred_lst = []
    for fold_dir in FOLD_DIRS:
        pred_path = join(fold_dir, 'test', pred_name)
        pred = cv2.imread(pred_path, cv2.IMREAD_GRAYSCALE)
        pred = pred / 255
        pred_lst.append(pred)
    mean_pred = np.mean(pred_lst, axis=0)
    prob = mean_probs_df.loc[row.id].prob

    pred = mean_pred > segm_thresh
    prob = int(prob > prob_thresh)
    pred = (pred * prob).astype(np.uint8)
    if np.all(pred == 1):
        pred[:] = 0
        print('Full mask to empty', pred_name)
    rle_mask = RLenc(pred)
    cv2.imwrite(f'/workdir/data/{SAVE_NAME}/{pred_name}', pred * 255)
    # Write back via .loc: assigning to the `row` yielded by iterrows
    # would only modify a copy, not the DataFrame itself.
    sample_submission.loc[i, 'rle_mask'] = rle_mask

sample_submission.to_csv(f'/workdir/data/submissions/{SAVE_NAME}.csv', index=False)
# +
import matplotlib.pyplot as plt # %matplotlib inline plt.hist(mean_probs_df.prob.values, bins=20) # -
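`RLenc` is imported from the project's `src.utils` and not shown here. Assuming it follows the usual Kaggle submission convention — 1-indexed `start length` pairs over the column-major flattened mask, which is an assumption about the actual helper — a minimal sketch looks like:

```python
def rle_encode(mask):
    """Run-length encode a 2-D binary mask (list of rows).

    Pixels are numbered from 1, scanning column by column (column-major),
    and runs of 1s are emitted as alternating "start length" values.
    """
    n_rows = len(mask)
    n_cols = len(mask[0]) if n_rows else 0
    # Column-major flattening: column 0 top-to-bottom, then column 1, ...
    flat = [mask[r][c] for c in range(n_cols) for r in range(n_rows)]
    runs = []
    run_start = None
    for idx, val in enumerate(flat, start=1):
        if val and run_start is None:
            run_start = idx
        elif not val and run_start is not None:
            runs.extend([run_start, idx - run_start])
            run_start = None
    if run_start is not None:  # close a run reaching the last pixel
        runs.extend([run_start, len(flat) + 1 - run_start])
    return " ".join(map(str, runs))
```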
notebooks/mean_submission.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: 'Python 3.9.2 (''datacamp_env'': venv)'
#     name: datacamp
# ---

# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import missingno as msno

pd.set_option('display.max_rows', 40)
pd.set_option('display.max_columns', 20)
pd.set_option('display.width', 200)

def explore_df(df):
    print(df.shape)
    print(df.head())
    print(df.info())
# -

# # Course Description
# Data science isn't just for predicting ad clicks; it's also useful for social impact! This course is a case study from a machine learning competition on DrivenData. You'll explore a problem related to school district budgeting. Building a model to automatically classify items in a school's budget makes it easier and faster for schools to compare their spending with other schools. In this course, you'll begin by building a baseline model that is a simple, first-pass approach. In particular, you'll do some natural language processing to prepare the budgets for modeling. Next, you'll have the opportunity to try your own techniques and see how they compare to participants from the competition. Finally, you'll see how the winner was able to combine a number of expert techniques to build the most accurate model.

# # Summarizing the data
# You'll continue your EDA in this exercise by computing summary statistics for the numeric data in the dataset. The data has been pre-loaded into a DataFrame called df.
#
# You can use `df.info()` in the IPython Shell to determine which columns of the data are numeric, specifically type float64. You'll notice that there are two numeric columns, called `FTE` and `Total`.
#
# * `FTE`: Stands for "full-time equivalent". If the budget item is associated with an employee, this number tells us the percentage of full-time that the employee works. A value of 1 means the associated employee works for the school full-time. A value close to 0 means the item is associated with a part-time or contracted employee.
#
# * `Total`: Stands for the total cost of the expenditure. This number tells us how much the budget item cost.
#
# After printing summary statistics for the numeric data, your job is to plot a histogram of the non-null `FTE` column to see the distribution of part-time and full-time employees in the dataset.
#
# This course touches on a lot of concepts you may have forgotten, so if you ever need a quick refresher, download the Scikit-Learn Cheat Sheet and keep it handy!

df = pd.read_csv('datasets/school_info.csv', index_col=0)
display(df.head())

df['Function'].value_counts()

dummies = pd.get_dummies(df['Function'])
dummies

# +
# Print summary statistics
print(df.describe())

# import matplotlib
import matplotlib.pyplot as plt

# Create a histogram of the non-null 'FTE' column
plt.hist(df['FTE'].dropna())

# Add title and labels
plt.title('Distribution of %full-time \n employee works')
plt.xlabel('% of full-time')
plt.ylabel('num employees')

plt.show()
# -

# ### RESULT
#
# The high variance in expenditures makes sense (some purchases are cheap, some are expensive). Also, it looks like the FTE column is bimodal. That is, there are some part-time and some full-time employees.

# # Encode the labels as categorical variables
# Remember, your ultimate goal is to predict the probability that a certain label is attached to a budget line item. You just saw that many columns in your data are the inefficient object type. Does this include the labels you're trying to predict? Let's find out!
#
# There are 9 columns of labels in the dataset. Each of these columns is a category that has many possible values it can take. The 9 labels have been loaded into a list called LABELS. In the Shell, check out the type for these labels using `df[LABELS].dtypes`.
#
# You will notice that every label is encoded as an object datatype.
Because category datatypes are much more efficient, your task is to convert the labels to category types using the .astype() method.
#
# Note: .astype() only works on a pandas Series. Since you are working with a pandas DataFrame, you'll need to use the .apply() method and provide a lambda function called categorize_label that applies .astype() to each column, x.

LABELS = ['Function', 'Use', 'Sharing', 'Reporting',
          'Student_Type', 'Position_Type', 'Object_Type', 'Pre_K', 'Operating_Status']

# +
# Define a lambda function categorize_label to convert column x into x.astype('category')
categorize_label = lambda x: x.astype('category')

# Use the LABELS list provided to convert the subset of data df[LABELS] to categorical types using the .apply() method and categorize_label. Don't forget axis=0
df[LABELS] = df[LABELS].apply(categorize_label, axis=0)

# Print the converted .dtypes attribute of df[LABELS]
print(df[LABELS].dtypes)
# -

# # Counting unique labels
# As Peter mentioned in the video, there are over 100 unique labels. In this exercise, you will explore this fact by counting and plotting the number of unique values for each category of label.
#
# The dataframe df and the LABELS list have been loaded into the workspace; the LABELS columns of df have been converted to category types.
#
# pandas, which has been pre-imported as pd, provides a pd.Series.nunique method for counting the number of unique values in a Series.

# +
# Create a DataFrame by using the apply() method on df[LABELS] with pd.Series.nunique as argument
num_unique_labels = df[LABELS].apply(pd.Series.nunique)

# Plot the number of unique values for each label
num_unique_labels.sort_values().plot(kind='bar')
plt.xlabel('Labels')
plt.ylabel('Num of unique values')
plt.show()
# -

def compute_log_loss(predicted, actual, eps=1e-14):
    """ Computes the logarithmic loss between `predicted`
        and `actual` when these are 1D arrays.
"""
    predicted = np.clip(predicted, eps, 1 - eps)
    return -1 * np.mean(actual * np.log(predicted) + (1 - actual) * np.log(1 - predicted))

# # Computing log loss with NumPy
# To see how the log loss metric handles the trade-off between accuracy and confidence, we will use some sample data generated with NumPy and compute the log loss using the provided function compute_log_loss(), which Peter showed you in the video.
#
# 5 one-dimensional numeric arrays simulating different types of predictions have been pre-loaded: actual_labels, correct_confident, correct_not_confident, wrong_not_confident, and wrong_confident.
#
# Your job is to compute the log loss for each sample set provided using the compute_log_loss(predicted_values, actual_values). It takes the predicted values as the first argument and the actual values as the second argument.

# # Setting up a train-test split in scikit-learn
# Alright, you've been patient and awesome. It's finally time to start training models!
#
# The first step is to split the data into a training set and a test set. Some labels don't occur very often, but we want to make sure that they appear in both the training and the test sets. We provide a function that will make sure at least min_count examples of each label appear in each split: multilabel_train_test_split.
#
# Feel free to check out the full code for multilabel_train_test_split here.
#
# You'll start with a simple model that uses just the numeric columns of your DataFrame when calling multilabel_train_test_split. The data has been read into a DataFrame df and a list consisting of just the numeric columns is available as NUMERIC_COLUMNS.

# +
from warnings import warn


def multilabel_sample(y, size=1000, min_count=5, seed=None):
    """ Takes a matrix of binary labels `y` and returns
        the indices for a sample of size `size` if
        `size` > 1 or `size` * len(y) if size <= 1.

        The sample is guaranteed to have > `min_count` of
        each label.
"""
    try:
        if (np.unique(y).astype(int) != np.array([0, 1])).any():
            raise ValueError()
    except (TypeError, ValueError):
        raise ValueError('multilabel_sample only works with binary indicator matrices')

    if (y.sum(axis=0) < min_count).any():
        raise ValueError('Some classes do not have enough examples. Change min_count if necessary.')

    if size <= 1:
        size = np.floor(y.shape[0] * size)

    if y.shape[1] * min_count > size:
        msg = "Size less than number of columns * min_count, returning {} items instead of {}."
        warn(msg.format(y.shape[1] * min_count, size))
        size = y.shape[1] * min_count

    # Use a random fallback seed; np.random.randint(1) would always yield 0.
    rng = np.random.RandomState(seed if seed is not None else np.random.randint(2 ** 31 - 1))

    if isinstance(y, pd.DataFrame):
        choices = y.index
        y = y.values
    else:
        choices = np.arange(y.shape[0])

    sample_idxs = np.array([], dtype=choices.dtype)

    # first, guarantee > min_count of each label
    for j in range(y.shape[1]):
        label_choices = choices[y[:, j] == 1]
        label_idxs_sampled = rng.choice(label_choices, size=min_count, replace=False)
        sample_idxs = np.concatenate([label_idxs_sampled, sample_idxs])

    sample_idxs = np.unique(sample_idxs)

    # now that we have at least min_count of each, we can just random sample
    sample_count = int(size - sample_idxs.shape[0])

    # get sample_count indices from remaining choices
    remaining_choices = np.setdiff1d(choices, sample_idxs)
    remaining_sampled = rng.choice(remaining_choices, size=sample_count, replace=False)

    return np.concatenate([sample_idxs, remaining_sampled])


def multilabel_train_test_split(X, Y, size, min_count=5, seed=None):
    """ Takes a features matrix `X` and a label matrix `Y` and
        returns (X_train, X_test, Y_train, Y_test) where all
        classes in Y are represented at least `min_count` times.
""" index = Y.index if isinstance(Y, pd.DataFrame) else np.arange(Y.shape[0]) test_set_idxs = multilabel_sample(Y, size=size, min_count=min_count, seed=seed) train_set_idxs = np.setdiff1d(index, test_set_idxs) test_set_mask = index.isin(test_set_idxs) train_set_mask = ~test_set_mask return (X[train_set_mask], X[test_set_mask], Y[train_set_mask], Y[test_set_mask]) # + filter = df.dtypes == 'float64' NUMERIC_COLUMNS = list(df.columns[filter]) print(NUMERIC_COLUMNS) filter = df.dtypes == 'object' LABELS = list(df.columns[filter]) print(LABELS) LABELS = ['Function', 'Use', 'Sharing', 'Reporting', 'Student_Type', 'Position_Type', 'Object_Type', 'Pre_K', 'Operating_Status'] print(LABELS) # + # Create the new DataFrame: numeric_data_only numeric_data_only = df[NUMERIC_COLUMNS].fillna(-1000) # Get labels and convert to dummy variables: label_dummies label_dummies = pd.get_dummies(df[LABELS]) display(label_dummies.head()) # Create training and test sets X_train, X_test, y_train, y_test = multilabel_train_test_split(numeric_data_only, label_dummies, size=0.2, seed=123) # Print the info print("X_train info:") print(X_train.info()) print("\nX_test info:") print(X_test.info()) print("\ny_train info:") print(y_train.info()) print("\ny_test info:") print(y_test.info()) # - # # Training a model # With split data in hand, you're only a few lines away from training a model. # # In this exercise, you will import the logistic regression and one versus rest classifiers in order to fit a multi-class logistic regression model to the NUMERIC_COLUMNS of your feature data. # # Then you'll test and print the accuracy with the .score() method to see the results of training. # # Before you train! Remember, we're ultimately going to be using logloss to score our model, so don't worry too much about the accuracy here. Keep in mind that you're throwing away all of the text data in the dataset - that's by far most of the data! So don't get your hopes up for a killer performance just yet. 
We're just interested in getting things up and running at the moment. # # All data necessary to call multilabel_train_test_split() has been loaded into the workspace. # + # Import classifiers from sklearn.linear_model import LogisticRegression from sklearn.multiclass import OneVsRestClassifier # Instantiate the classifier: clf clf = OneVsRestClassifier(LogisticRegression()) # Fit the clf clf.fit(X_train, y_train) print("Accuracy: {}".format(clf.score(X_test, y_test))) # - # ### RESULT: The good news is that your workflow didn't cause any errors. The bad news is that your model scored the lowest possible accuracy: 0.0! But hey, you just threw away ALL of the text data in the budget. Later, you won't. Before you add the text data, let's see how the model does when scored by log loss. # # Use your model to predict values on holdout data # You're ready to make some predictions! Remember, the train-test-split you've carried out so far is for model development. The original competition provides an additional test set, for which you'll never actually see the correct labels. This is called the "holdout data." # # The point of the holdout data is to provide a fair test for machine learning competitions. If the labels aren't known by anyone but DataCamp, DrivenData, or whoever is hosting the competition, you can be sure that no one submits a mere copy of labels to artificially pump up the performance on their model. # # Remember that the original goal is to predict the probability of each label. In this exercise you'll do just that by using the .predict_proba() method on your trained model. # # First, however, you'll need to load the holdout data, which is available in the workspace as the file HoldoutData.csv. 
# + # Load holdout data holdout = pd.read_csv('datasets/school_info_holdout.csv', index_col=0) display(holdout.head()) # Generate prediction predictions = clf.predict_proba(holdout[NUMERIC_COLUMNS].fillna(-1000)) display(predictions) # - # # Writing out your results to a csv for submission # At last, you're ready to submit some predictions for scoring. In this exercise, you'll write your predictions to a .csv using the .to_csv() method on a pandas DataFrame. Then you'll evaluate your performance according to the LogLoss metric discussed earlier! # # You'll need to make sure your submission obeys the correct format. # # To do this, you'll use your predictions values to create a new DataFrame, prediction_df. # # Interpreting LogLoss & Beating the Benchmark: # # When interpreting your log loss score, keep in mind that the score will change based on the number of samples tested. To get a sense of how this very basic model performs, compare your score to the DrivenData benchmark model performance: 2.0455, which merely submitted uniform probabilities for each class. # # Remember, the lower the log loss the better. Is your model's log loss lower than 2.0455? 
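Before submitting, it helps to see concretely why log loss punishes confident mistakes. A small self-contained check, mirroring the clipping-and-mean formula of the `compute_log_loss` helper defined earlier for a single prediction/label pair:

```python
import math

def log_loss_single(predicted, actual, eps=1e-14):
    """Log loss for one prediction/label pair, clipping the probability to (eps, 1 - eps)."""
    p = min(max(predicted, eps), 1 - eps)
    return -(actual * math.log(p) + (1 - actual) * math.log(1 - p))

correct_confident = log_loss_single(0.95, 1)    # small penalty
wrong_not_confident = log_loss_single(0.40, 1)  # moderate penalty
wrong_confident = log_loss_single(0.05, 1)      # large penalty

assert correct_confident < wrong_not_confident < wrong_confident
```

A confidently wrong probability of 0.05 for a true label costs about 3.0, versus roughly 0.05 for a confidently correct one — exactly the accuracy/confidence trade-off the 2.0455 benchmark reflects.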
# +
BOX_PLOTS_COLUMN_INDICES = [range(0, 37),
                            range(37, 48),
                            range(48, 51),
                            range(51, 76),
                            range(76, 79),
                            range(79, 82),
                            range(82, 87),
                            range(87, 96),
                            range(96, 104)]


def _multi_multi_log_loss(predicted, actual, class_column_indices=BOX_PLOTS_COLUMN_INDICES, eps=1e-15):
    """ Multi class version of Logarithmic Loss metric as implemented on DrivenData.org
    """
    class_scores = np.ones(len(class_column_indices), dtype=np.float64)

    # calculate log loss for each set of columns that belong to a class:
    for k, this_class_indices in enumerate(class_column_indices):
        # get just the columns for this class
        preds_k = predicted[:, this_class_indices].astype(np.float64)

        # normalize so probabilities sum to one (unless sum is zero, then we clip)
        preds_k /= np.clip(preds_k.sum(axis=1).reshape(-1, 1), eps, np.inf)

        actual_k = actual[:, this_class_indices]

        # shrink predictions away from 0 and 1 so the log is always defined
        y_hats = np.clip(preds_k, eps, 1 - eps)
        sum_logs = np.sum(actual_k * np.log(y_hats))
        class_scores[k] = (-1.0 / actual.shape[0]) * sum_logs

    return np.average(class_scores)


def score_submission(pred_path, holdout_path):
    # this happens on the backend to get the score
    holdout_labels = pd.get_dummies(
        pd.read_csv(holdout_path, index_col=0)
        .apply(lambda x: x.astype('category'), axis=0)
    )
    preds = pd.read_csv(pred_path, index_col=0)

    # make sure that format is correct
    assert (preds.columns == holdout_labels.columns).all()
    assert (preds.index == holdout_labels.index).all()

    return _multi_multi_log_loss(preds.values, holdout_labels.values)
# -

# +
# Format predictions
prediction_df = pd.DataFrame(columns=pd.get_dummies(df[LABELS]).columns,
                             index=holdout.index,
                             data=predictions)

# Save predictions to csv
prediction_df.to_csv('datasets/predictions.csv')
display(prediction_df.head())
display(prediction_df.columns.shape)

# Submit the predictions for scoring
# score = score_submission('datasets/predictions.csv', 'datasets/school_info_holdout.csv')
# -

holdout_labels = pd.get_dummies(
    pd.read_csv('datasets/school_info_holdout.csv',
                index_col=0)
    .apply(lambda x: x.astype('category'), axis=0)
)
display(holdout_labels.head())
display(holdout_labels.shape)

# # NLP Tokenizer
#
#
# # Creating a bag-of-words in scikit-learn
# In this exercise, you'll study the effects of tokenizing in different ways by comparing the bag-of-words representations resulting from different token patterns.
#
# You will focus on one feature only, the Position_Extra column, which describes any additional information not captured by the Position_Type label.
#
# For example, in the Shell you can check out the budget item in row 8960 of the data using `df.loc[8960]`. Looking at the output reveals that this Object_Description is overtime pay. For whom? The Position Type is merely "other", but the Position Extra elaborates: "BUS DRIVER". Explore the column further to see more instances. It has a lot of NaN values.
#
# Your task is to turn the raw text in this column into a bag-of-words representation by creating tokens that contain only alphanumeric characters.
#
# For comparison purposes, the first 15 tokens of vec_basic, which splits df.Position_Extra into tokens when it encounters only whitespace characters, have been printed along with the length of the representation.
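# Before fitting any vectorizer, here is a quick self-contained sketch with plain `re` (the sample string is made up) of what the two token patterns used in this notebook actually match — the alphanumeric pattern silently drops anything containing punctuation:

```python
import re

TOKENS_BASIC = '\\S+(?=\\s+)'                 # any non-whitespace run followed by whitespace
TOKENS_ALPHANUMERIC = '[A-Za-z0-9]+(?=\\s+)'  # alphanumeric runs followed by whitespace

# Made-up example string; the trailing space lets the lookahead match the last token
text = "BUS DRIVER (OVERTIME) 2nd-grade "

print(re.findall(TOKENS_BASIC, text))         # ['BUS', 'DRIVER', '(OVERTIME)', '2nd-grade']
print(re.findall(TOKENS_ALPHANUMERIC, text))  # ['BUS', 'DRIVER', 'grade']
```

Note that '(OVERTIME)' and '2nd-grade' disappear entirely under the alphanumeric pattern, because no purely alphanumeric run inside them is immediately followed by whitespace.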
df.head()

df.shape

df.loc[8960]

# +
# Import CountVectorizer
from sklearn.feature_extraction.text import CountVectorizer

# Create the token pattern
# Captures only alphanumeric tokens that are followed by one or more spaces
TOKEN_ALPHANUMERIC = '[A-Za-z0-9]+(?=\\s+)'

# Fill missing values in df.Position_Extra
df.Position_Extra.fillna('', inplace=True)

# Instantiate the CountVectorizer
vec_alphanumeric = CountVectorizer(token_pattern=TOKEN_ALPHANUMERIC)

# Fit vectorizer to data
vec_alphanumeric.fit(df.Position_Extra)

# Print the number of tokens and the first 15
print(len(vec_alphanumeric.get_feature_names()))
print(vec_alphanumeric.get_feature_names()[:15])
# -

# # Combining text columns for tokenization
# In order to get a bag-of-words representation for all of the text data in our DataFrame, you must first convert the text data in each row of the DataFrame into a single string.
#
# In the previous exercise, this wasn't necessary because you only looked at one column of data, so each row was already just a single string. CountVectorizer expects each row to just be a single string, so in order to use all of the text columns, you'll need a method to turn a list of strings into a single string.
#
# In this exercise, you'll complete the function definition combine_text_columns(). When completed, this function will convert all training text data in your DataFrame to a single string per row that can be passed to the vectorizer object and made into a bag-of-words using the .fit_transform() method.
#
# Note that the function uses NUMERIC_COLUMNS and LABELS to determine which columns to drop. These lists have been loaded into the workspace.
# +
def combine_text_columns(data_frame, to_drop=NUMERIC_COLUMNS + LABELS):
    """ Convert all text in each row of data_frame to a single string """

    # Drop non-text columns that are in the df
    to_drop = set(to_drop) & set(data_frame.columns.tolist())
    display(to_drop)
    text_data = data_frame.drop(columns=to_drop)
    display(text_data.head())

    # Replace nans with blanks
    text_data.fillna('', inplace=True)
    display(text_data.head())

    # Join all text items in a row with a space in between
    # Apply the function across the columns
    return text_data.apply(lambda x: " ".join(x), axis='columns')


result = combine_text_columns(df, to_drop=NUMERIC_COLUMNS + LABELS)
display(result)
# -

# # What's in a token?
# Now you will use combine_text_columns to convert all training text data in your DataFrame to a single vector that can be passed to the vectorizer object and made into a bag-of-words using the .fit_transform() method.
#
# You'll compare the effect of tokenizing using any non-whitespace characters as a token and using only alphanumeric characters as a token.
# +
# Import the CountVectorizer
from sklearn.feature_extraction.text import CountVectorizer

# Create the basic token pattern
# Any run of non-space characters followed by one or more spaces
TOKENS_BASIC = '\\S+(?=\\s+)'

# Alphanumeric token pattern
TOKENS_ALPHANUMERIC = '[A-Za-z0-9]+(?=\\s+)'

# Instantiate the basic CountVectorizer
vec_basic = CountVectorizer(token_pattern=TOKENS_BASIC)

# Instantiate the alphanumeric CountVectorizer
vec_alphanumeric = CountVectorizer(token_pattern=TOKENS_ALPHANUMERIC)

# Create text vector
text_vector = combine_text_columns(df, to_drop=NUMERIC_COLUMNS + LABELS)

# Fit counters
vec_basic.fit_transform(text_vector)

# Print number of tokens of vec_basic
print("There are {} tokens in the dataset".format(len(vec_basic.get_feature_names())))

vec_alphanumeric.fit_transform(text_vector)

# Print number of tokens of vec_alphanumeric
print("There are {} alpha-numeric tokens in the dataset".format(len(vec_alphanumeric.get_feature_names())))
# -

# # Instantiate pipeline
# In order to make your life easier as you start to work with all of the data in your original DataFrame, df, it's time to turn to one of scikit-learn's most useful objects: the Pipeline.
#
# For the next few exercises, you'll reacquaint yourself with pipelines and train a classifier on some synthetic (sample) data of multiple datatypes before using the same techniques on the main dataset.
#
# The sample data is stored in the DataFrame, sample_df, which has three kinds of feature data: numeric, text, and numeric with missing values. It also has a label column with two classes, a and b.
#
# In this exercise, your job is to instantiate a pipeline that trains using the numeric column of the sample data.
sample_df = pd.read_csv('datasets/sample_data.csv', index_col=0)
display(sample_df.shape)
display(sample_df.head())

# +
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

# Split train and test data
X_train, X_test, y_train, y_test = train_test_split(sample_df[['numeric']],
                                                    pd.get_dummies(sample_df['label']),
                                                    random_state=22)

# Instantiate Pipeline object
pl = Pipeline([
    ('clf', OneVsRestClassifier(LogisticRegression()))
])

pl.fit(X_train, y_train)

accuracy = pl.score(X_test, y_test)
print("\nAccuracy on sample data - numeric, no nans: ", accuracy)
# -

# # Preprocessing numeric features
# What would have happened if you had included the 'with_missing' column in the last exercise? Without imputing missing values, the pipeline would not be happy (try it and see). So, in this exercise you'll improve your pipeline a bit by using the SimpleImputer() imputation transformer from scikit-learn to fill in missing values in your sample data.
#
# By default, the imputer transformer replaces NaNs with the mean value of the column. That's a good enough imputation strategy for the sample data, so you won't need to pass anything extra to the imputer.
#
# After importing the transformer, you will edit the steps list used in the previous exercise by inserting a (name, transform) tuple. Recall that steps are processed sequentially, so make sure the new tuple encoding your preprocessing step is put in the right place.
#
# The sample_df is in the workspace, in case you'd like to take another look. Make sure to select both numeric columns - in the previous exercise we couldn't use with_missing because we had no preprocessing step!
# +
# Split train and test data
X_train, X_test, y_train, y_test = train_test_split(sample_df[['numeric', 'with_missing']],
                                                    pd.get_dummies(sample_df['label']),
                                                    random_state=22)

# Instantiate Pipeline object
pl = Pipeline([
    ('clf', OneVsRestClassifier(LogisticRegression()))
])

# This cell is expected to fail: LogisticRegression cannot handle the NaNs in with_missing
pl.fit(X_train, y_train)

accuracy = pl.score(X_test, y_test)
print("\nAccuracy on sample data - numeric, with nans: ", accuracy)

# +
from sklearn.impute import SimpleImputer

X_train, X_test, y_train, y_test = train_test_split(sample_df[['numeric', 'with_missing']],
                                                    pd.get_dummies(sample_df['label']),
                                                    random_state=456)

pl = Pipeline([
    ('imp', SimpleImputer(missing_values=np.nan, strategy='mean')),
    ('clf', OneVsRestClassifier(LogisticRegression()))
])

pl.fit(X_train, y_train)

accuracy = pl.score(X_test, y_test)
print("\nAccuracy on sample data - all numeric, incl nans: ", accuracy)
# -

# # Preprocessing text features
# Here, you'll perform a similar preprocessing pipeline step, only this time you'll use the text column from the sample data.
#
# To preprocess the text, you'll turn to CountVectorizer() to generate a bag-of-words representation of the data, as in Chapter 2. Using the default arguments, add a (step, transform) tuple to the steps list in your pipeline.
#
# Make sure you select only the text column for splitting your training and test sets.
#
# As usual, your sample_df is ready and waiting in the workspace.
sample_df.isna().sum() # + from sklearn.feature_extraction.text import CountVectorizer X_train, X_test, y_train, y_test = train_test_split(sample_df['text'].fillna(''), pd.get_dummies(sample_df['label']), random_state=456) pl = Pipeline([ ('vec', CountVectorizer()), ('clf', OneVsRestClassifier(LogisticRegression())) ]) pl.fit(X_train, y_train) accuracy = pl.score(X_test, y_test) print("\nAccuracy on sample data - just text data: ", accuracy) # - # # Multiple types of processing: FunctionTransformer # The next two exercises will introduce new topics you'll need to make your pipeline truly excel. # # Any step in the pipeline must be an object that implements the fit and transform methods. The FunctionTransformer creates an object with these methods out of any Python function that you pass to it. We'll use it to help select subsets of data in a way that plays nicely with pipelines. # # You are working with numeric data that needs imputation, and text data that needs to be converted into a bag-of-words. You'll create functions that separate the text from the numeric variables and see how the .fit() and .transform() methods work. sample_df.text.fillna('', inplace=True) sample_df.isna().sum() # + from sklearn.preprocessing import FunctionTransformer # Obtain the text data get_text_data = FunctionTransformer(lambda x: x['text'], validate=False) # Obtain the numeric data get_numeric_data = FunctionTransformer(lambda x: x[['numeric', 'with_missing']], validate=False) # Fit and transform the text data just_text_data = get_text_data.fit_transform(sample_df) just_numeric_data = get_numeric_data.fit_transform(sample_df) # Print head to check results print('Text Data') print(just_text_data.head()) print('\nNumeric Data') print(just_numeric_data.head()) # - # # Multiple types of processing: FeatureUnion # Now that you can separate text and numeric data in your pipeline, you're ready to perform separate steps on each by nesting pipelines and using FeatureUnion(). 
#
# These tools will allow you to streamline all preprocessing steps for your model, even when multiple datatypes are involved. Here, for example, you don't want to impute your text data, and you don't want to create a bag-of-words with your numeric data. Instead, you want to deal with these separately and then join the results together using FeatureUnion().
#
# In the end, you'll still have only two high-level steps in your pipeline: preprocessing and model instantiation. The difference is that the first preprocessing step actually consists of a pipeline for numeric data and a pipeline for text data. The results of those pipelines are joined using FeatureUnion().

vec = CountVectorizer()
vec.fit_transform(sample_df['text'])
result = vec.get_feature_names()
print(result)
print(sample_df['text'].unique())

# # Split Pipeline by Columns Using ColumnTransformer
#
# How CountVectorizer() works together with Pipelines
# https://towardsdatascience.com/pipeline-columntransformer-and-featureunion-explained-f5491f815f
# https://stackoverflow.com/questions/63000388/how-to-include-simpleimputer-before-countvectorizer-in-a-scikit-learn-pipeline
#

vec = CountVectorizer()
X = vec.fit_transform(sample_df['text'].fillna(''))
print(vec.get_feature_names())
print(pd.DataFrame(X.toarray(), columns=vec.get_feature_names()))

# +
from sklearn.preprocessing import FunctionTransformer

# Super important: the imputer returns a 2-D array, so flatten it back to 1-D for CountVectorizer
one_dim = FunctionTransformer(np.reshape, kw_args={'newshape': -1})

text_pipe = Pipeline([
    ('imputer', SimpleImputer(strategy='constant', fill_value='')),
    ('one_dim', FunctionTransformer(np.reshape, kw_args={'newshape': -1})),
    ('vec', CountVectorizer())
])

text_pipe2 = Pipeline([
    ('imputer', SimpleImputer(strategy='constant', fill_value='')),
])

X_train, X_test, y_train, y_test = train_test_split(
    sample_df['text'],
    pd.get_dummies(sample_df['label']),
    random_state=22
)

imp = SimpleImputer(strategy='constant', fill_value='')
display(X_train.head())
values = X_train.values.reshape(-1, 1)
display(values[:4])
text_pipe.fit(values)
result = text_pipe.transform(values)
columns = text_pipe.steps[2][1].get_feature_names()
print(columns)
display(pd.DataFrame(result.toarray(), columns=columns).head())

# +
# Using ColumnTransformer

# +
from sklearn.compose import ColumnTransformer

sample_df = pd.read_csv('datasets/sample_data.csv', index_col=0)
display(sample_df.head())

X_train, X_test, y_train, y_test = train_test_split(
    sample_df.drop('label', axis='columns'),
    sample_df['label'],
    test_size=0.2,
    random_state=22
)

numerical_columns = ['numeric', 'with_missing']
text_columns = ['text']
columns = np.append(numerical_columns, text_columns)

text_pipe = Pipeline([
    ('imputer', SimpleImputer(strategy='constant', fill_value='')),
    # Super important: the imputer returns a 2-D array, so flatten it back to 1-D
    ('one_dim', FunctionTransformer(np.reshape, kw_args={'newshape': -1})),
    ('vec', CountVectorizer())
])

num_pipe = Pipeline([
    ('imputer', SimpleImputer(missing_values=np.nan, strategy='mean'))
])

preprocessor = ColumnTransformer(transformers=[('text', text_pipe, text_columns),
                                               ('numerical', num_pipe, numerical_columns)])

preprocessor.fit(X_train)
display(X_train.head())

vec_columns = preprocessor.named_transformers_['text'][2].get_feature_names()
columns = np.append(vec_columns, numerical_columns)
print(columns)
display(pd.DataFrame(preprocessor.transform(X_train), columns=columns))

# +
pipe = Pipeline([
    ('preprocessor', preprocessor),
    ('clf', LogisticRegression())
])

pipe.fit(X_train, y_train)
score = pipe.score(X_test, y_test)
print(f"The score is {score:.4f}")
# -

# # Apply Pipeline to school_info
# 1. Load the dataset
# 2. Define the label and non-label columns
# 3.
Create the training and test datasets
#
# [Link to DrivenData Context](https://www.drivendata.org/competitions/4/box-plots-for-education/page/121/)

# +
df = pd.read_csv('datasets/school_info.csv', index_col=0)
display(df.head(2))

LABEL_COLUMNS = ['Function', 'Use', 'Sharing', 'Reporting', 'Student_Type', 'Position_Type',
                 'Object_Type', 'Pre_K', 'Operating_Status']

# Get the dummy encoding of the labels
dummy_labels = pd.get_dummies(df[LABEL_COLUMNS])
display(dummy_labels.head(2))

# Get the columns that are features in the original df
NON_LABELS_COLUMNS = [c for c in df.columns if c not in LABEL_COLUMNS]
display(NON_LABELS_COLUMNS)
# -

# # Create the train and test datasets
# 1. Define a train_test_split stratified for multiple labels
#

# +
from warnings import warn


def multilabel_sample(y, size=1000, min_count=5, seed=None):
    """ Takes a matrix of binary labels `y` and returns
        the indices for a sample of size `size` if
        `size` > 1 or `size` * len(y) if size <= 1.

        The sample is guaranteed to have > `min_count` of
        each label.
    """
    try:
        if (np.unique(y).astype(int) != np.array([0, 1])).any():
            raise ValueError()
    except (TypeError, ValueError):
        raise ValueError('multilabel_sample only works with binary indicator matrices')

    if (y.sum(axis=0) < min_count).any():
        raise ValueError('Some classes do not have enough examples. Change min_count if necessary.')

    if size <= 1:
        size = np.floor(y.shape[0] * size)

    if y.shape[1] * min_count > size:
        msg = "Size less than number of columns * min_count, returning {} items instead of {}."
        warn(msg.format(y.shape[1] * min_count, size))
        size = y.shape[1] * min_count

    rng = np.random.RandomState(seed if seed is not None else np.random.randint(1))

    if isinstance(y, pd.DataFrame):
        choices = y.index
        y = y.values
    else:
        choices = np.arange(y.shape[0])

    sample_idxs = np.array([], dtype=choices.dtype)

    # first, guarantee > min_count of each label
    for j in range(y.shape[1]):
        label_choices = choices[y[:, j] == 1]
        label_idxs_sampled = rng.choice(label_choices, size=min_count, replace=False)
        sample_idxs = np.concatenate([label_idxs_sampled, sample_idxs])

    sample_idxs = np.unique(sample_idxs)

    # now that we have at least min_count of each, we can just random sample
    sample_count = int(size - sample_idxs.shape[0])

    # get sample_count indices from remaining choices
    remaining_choices = np.setdiff1d(choices, sample_idxs)
    remaining_sampled = rng.choice(remaining_choices, size=sample_count, replace=False)

    return np.concatenate([sample_idxs, remaining_sampled])


def multilabel_train_test_split(X, Y, size, min_count=5, seed=None):
    """ Takes a features matrix `X` and a label matrix `Y` and returns
        (X_train, X_test, Y_train, Y_test) where all classes in Y are
        represented at least `min_count` times.
    """
    index = Y.index if isinstance(Y, pd.DataFrame) else np.arange(Y.shape[0])

    test_set_idxs = multilabel_sample(Y, size=size, min_count=min_count, seed=seed)
    train_set_idxs = np.setdiff1d(index, test_set_idxs)

    test_set_mask = index.isin(test_set_idxs)
    train_set_mask = ~test_set_mask

    return (X[train_set_mask], X[test_set_mask], Y[train_set_mask], Y[test_set_mask])
# -

# Split into training and test sets
X_train, X_test, y_train, y_test = multilabel_train_test_split(df[NON_LABELS_COLUMNS],
                                                               dummy_labels,
                                                               0.2,
                                                               seed=123)

# # Prepare the input values
# 1. Split them into NUMERIC and TEXT COLUMNS
#
# # Create Text Columns Pipe
# 1. Create a function transformer to combine all the text columns into one
# 2. Create another step that ensures the combined column is a flat 1-D vector
# 3.
Tokenize the combined text column into one column per unique token

# +
from sklearn.preprocessing import FunctionTransformer
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer

NUMERIC_COLUMNS = ['FTE', 'Total']
TEXT_COLUMNS = [c for c in df.columns if c not in (LABEL_COLUMNS + NUMERIC_COLUMNS)]


def combine_text_columns(data_frame, to_drop=NUMERIC_COLUMNS + LABEL_COLUMNS):
    """ Takes the dataset as read in, drops the non-feature, non-text columns and
        then combines all of the text columns into a single vector that has all of
        the text for a row.

        :param data_frame: The data as read in with read_csv (no preprocessing necessary)
        :param to_drop (optional): Removes the numeric and label columns by default.
    """
    # drop non-text columns that are in the df
    to_drop = set(to_drop) & set(data_frame.columns.tolist())
    text_data = data_frame.drop(to_drop, axis=1)

    # replace nans with blanks
    text_data.fillna("", inplace=True)

    # joins all of the text items in a row (axis=1)
    # with a space in between
    return text_data.apply(lambda x: " ".join(x), axis=1)


text_pipe = Pipeline([
    #('imputer', SimpleImputer(strategy='constant', fill_value='')),
    ('combine_text', FunctionTransformer(combine_text_columns,
                                         kw_args={'to_drop': NUMERIC_COLUMNS + LABEL_COLUMNS})),
    # Super important: make sure CountVectorizer receives a flat 1-D array
    ('one_dim', FunctionTransformer(np.reshape, kw_args={'newshape': -1})),
    ('vec', CountVectorizer())
])

# Test the pipeline with all TEXT columns and check the output is what we expect
text_pipe.fit(df[TEXT_COLUMNS])
text_pipe_results = text_pipe.transform(df[TEXT_COLUMNS])

# Access the vectorizer within the pipeline and get all the token names
columns = text_pipe.steps[2][1].get_feature_names()

# Create a DataFrame to visualize the pipeline result
text_pipe_results_df = pd.DataFrame(text_pipe_results.toarray(), columns=columns)
display(text_pipe_results_df.head())
# -

# # Create Numeric Columns Preparation Pipeline
# 1.
Fill in NaN values with the mean value

# +
from sklearn.impute import SimpleImputer

num_pipe = Pipeline([
    ('imputer', SimpleImputer(missing_values=np.nan, strategy='mean'))
])

# Check the pipeline output before combining it
num_pipe.fit(df[NUMERIC_COLUMNS])
num_pipe_results = num_pipe.transform(df[NUMERIC_COLUMNS])
num_pipe_results_df = pd.DataFrame(num_pipe_results, columns=NUMERIC_COLUMNS)
display(num_pipe_results_df.head())
# -

# # Create a ColumnTransformer that allows a different pipeline for each group of columns

# +
from sklearn.compose import ColumnTransformer

preprocessor = ColumnTransformer(transformers=[('text', text_pipe, TEXT_COLUMNS),
                                               ('numerical', num_pipe, NUMERIC_COLUMNS)])

# Test the pipeline output
preprocessor.fit(X_train)
result = preprocessor.transform(X_train)

vec_columns = preprocessor.named_transformers_['text'][2].get_feature_names()
result_df = pd.DataFrame(result.toarray(), columns=np.append(vec_columns, NUMERIC_COLUMNS))
print(result_df.head())
# -

# # Add the classifier and we are done and ready to train

# +
from sklearn.ensemble import RandomForestClassifier

pipe = Pipeline([
    ('preprocessor', preprocessor),
    ('clf', RandomForestClassifier())
])

pipe.fit(X_train, y_train)
accuracy = pipe.score(X_test, y_test)
print(f"Accuracy on budget dataset {accuracy}")

# +
from sklearn.ensemble import RandomForestClassifier

pipe = Pipeline([
    ('preprocessor', preprocessor),
    ('clf', RandomForestClassifier(n_estimators=15))
])

pipe.fit(X_train, y_train)
accuracy = pipe.score(X_test, y_test)
print(f"Accuracy on budget dataset {accuracy}")
# -

# # In order to look for ngram relationships at multiple scales, you will use the ngram_range parameter
#
# Special functions: You'll notice a couple of new steps provided in the pipeline in this and many of the remaining exercises. Specifically, the dim_red step following the vectorizer step, and the scale step preceding the clf (classification) step.
#
# These have been added in order to account for the fact that you're using a reduced-size sample of the full dataset in this course. To make sure the models perform as the expert competition winner intended, we have to apply a dimensionality reduction technique, which is what the dim_red step does, and we have to scale the features to lie between -1 and 1, which is what the scale step does.
#
# The dim_red step uses a scikit-learn function called SelectKBest(), applying something called the chi-squared test to select the K "best" features. The scale step uses a scikit-learn function called MaxAbsScaler() in order to squash the relevant features into the interval -1 to 1.
#
# You won't need to do anything extra with these functions here, just complete the vectorizing pipeline steps below. However, notice how easy it was to add more processing steps to our pipeline!

# +
from sklearn.feature_selection import chi2, SelectKBest

# Select 300 best features
chi_k = 300

TOKENS_ALPHANUMERIC = '[A-Za-z0-9]+(?=\\s+)'

text_pipe = Pipeline([
    #('imputer', SimpleImputer(strategy='constant', fill_value='')),
    ('combine_text', FunctionTransformer(combine_text_columns,
                                         kw_args={'to_drop': NUMERIC_COLUMNS + LABEL_COLUMNS})),
    # Super important: make sure CountVectorizer receives a flat 1-D array
    ('one_dim', FunctionTransformer(np.reshape, kw_args={'newshape': -1})),
    ('vec', CountVectorizer(token_pattern=TOKENS_ALPHANUMERIC, ngram_range=(1, 2))),
    ('dim_red', SelectKBest(chi2, k=chi_k))
])

dummy_labels = pd.get_dummies(df[LABEL_COLUMNS])
# display(dummy_labels.shape)

text_pipe.fit(df[TEXT_COLUMNS], dummy_labels)
text_pipe_results = text_pipe.transform(df[TEXT_COLUMNS])
# display(text_pipe_results)

columns = pd.Series(text_pipe.steps[2][1].get_feature_names())
# display(columns.shape)
columns_mask = text_pipe.steps[3][1].get_support()
# display(columns[columns_mask])

text_pipe_results_df = pd.DataFrame(text_pipe_results.toarray(), columns=columns[columns_mask])
display(text_pipe_results_df.head())

num_pipe = Pipeline([
    ('imputer', SimpleImputer(missing_values=np.nan, strategy='mean'))
])

num_pipe.fit(df[NUMERIC_COLUMNS])
num_pipe_results = num_pipe.transform(df[NUMERIC_COLUMNS])
num_pipe_results_df = pd.DataFrame(num_pipe_results, columns=NUMERIC_COLUMNS)
display(num_pipe_results_df.head())

preprocessor = ColumnTransformer(transformers=[('text', text_pipe, TEXT_COLUMNS),
                                               ('numerical', num_pipe, NUMERIC_COLUMNS)])

preprocessor.fit(X_train, y_train)
result = preprocessor.transform(X_train)

columns = pd.Series(preprocessor.named_transformers_['text'][2].get_feature_names())
columns_mask = preprocessor.named_transformers_['text'][3].get_support()
vec_columns = columns[columns_mask]
# display(vec_columns.shape)
# display(vec_columns)
columns = np.append(vec_columns, NUMERIC_COLUMNS)
result_df = pd.DataFrame(result.toarray(), columns=columns)
display(result_df.head())
# -

# # Apply a MaxAbsScaler to everything

# +
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import MaxAbsScaler

pipe = Pipeline([
    ('preprocessor', preprocessor),
    ('scale', MaxAbsScaler()),
    # ('clf', RandomForestClassifier(n_estimators=15))
])

pipe.fit(X_train, y_train)
result = pipe.transform(X_train)

preprocessor_ = pipe.steps[0][1]
columns = pd.Series(preprocessor_.named_transformers_['text'][2].get_feature_names())
columns_mask = preprocessor_.named_transformers_['text'][3].get_support()
vec_columns = columns[columns_mask]
columns = np.append(vec_columns, NUMERIC_COLUMNS)
# display(vec_columns.shape)
# display(vec_columns)
result_df = pd.DataFrame(result.toarray(), columns=columns)
display(result_df.head())
# -

# # Use a RandomForestClassifier

# +
pipe = Pipeline([
    ('preprocessor', preprocessor),
    ('scale', MaxAbsScaler()),
    ('clf', RandomForestClassifier(n_estimators=15))
])

pipe.fit(X_train, y_train)
accuracy = pipe.score(X_test, y_test)
print(f"Accuracy on budget dataset {accuracy}")
# -

# # Use a 
LogisticRegression classifier

# +
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

pipe = Pipeline([
    ('preprocessor', preprocessor),
    ('scale', MaxAbsScaler()),
    ('clf', OneVsRestClassifier(LogisticRegression()))
])

pipe.fit(X_train, y_train)
accuracy = pipe.score(X_test, y_test)
print(f"Accuracy on budget dataset {accuracy}")
# -

# # Interaction Terms
#
# ### What if the order is not important? If we want to know whether a "2nd grade English teacher" is important, the text should score 1 regardless of order: 'English teacher 2nd grade' is the same as '2nd grade English teacher'
#
# ### We want to know when '2nd grade' and 'English teacher' appear together; we don't care whether they appear in a different order. In ngrams, however, order matters
#
# ### So we need to create another set of columns for the interaction terms

# +
from sklearn.base import BaseEstimator, TransformerMixin
from scipy import sparse
from itertools import combinations


class SparseInteractions(BaseEstimator, TransformerMixin):
    def __init__(self, degree=2, feature_name_separator="_"):
        self.degree = degree
        self.feature_name_separator = feature_name_separator

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        if not sparse.isspmatrix_csc(X):
            X = sparse.csc_matrix(X)

        if hasattr(X, "columns"):
            self.orig_col_names = X.columns
        else:
            self.orig_col_names = np.array([str(i) for i in range(X.shape[1])])

        spi = self._create_sparse_interactions(X)
        return spi

    def get_feature_names(self):
        return self.feature_names

    def _create_sparse_interactions(self, X):
        out_mat = []
        self.feature_names = self.orig_col_names.tolist()

        for sub_degree in range(2, self.degree + 1):
            for col_ixs in combinations(range(X.shape[1]), sub_degree):
                # add name for new column
                name = self.feature_name_separator.join(self.orig_col_names[list(col_ixs)])
                self.feature_names.append(name)

                # get column multiplications value
                out = X[:, col_ixs[0]]
                for j in col_ixs[1:]:
                    out = out.multiply(X[:, j])
                out_mat.append(out)

        return sparse.hstack([X] + out_mat)
# -

# +
pipe = Pipeline([
    ('preprocessor', preprocessor),
    ('scale', MaxAbsScaler()),
    ('interactions', SparseInteractions(degree=2))
])

pipe.fit(X_train, y_train)
result = pipe.transform(X_train)

# +
preprocessor_ = pipe.steps[0][1]
columns = pd.Series(preprocessor_.named_transformers_['text'][2].get_feature_names())

interactions = pipe.steps[2][1]
inter_columns = interactions.get_feature_names()
display(inter_columns[:10])

# +
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

pipe = Pipeline([
    ('preprocessor', preprocessor),
    ('interactions', SparseInteractions(degree=2)),
    ('scale', MaxAbsScaler()),
    ('clf', OneVsRestClassifier(LogisticRegression()))
])

pipe.fit(X_train, y_train)
accuracy = pipe.score(X_test, y_test)
print(f"Accuracy on budget dataset {accuracy}")
# -

# # Improve performance
# ## Use HashingVectorizer instead of CountVectorizer
# ## As we can see below, the main difference between CountVectorizer and HashingVectorizer is get_feature_names()
# ## Hashing converts tokens into numbers to improve performance

# +
# Import HashingVectorizer
from sklearn.feature_extraction.text import HashingVectorizer

# Get text data: text_data
text_data = combine_text_columns(X_train)

# Create the token pattern: TOKENS_ALPHANUMERIC
TOKENS_ALPHANUMERIC = '[A-Za-z0-9]+(?=\\s+)'

# Instantiate the CountVectorizer: count_vec
count_vec = CountVectorizer(token_pattern=TOKENS_ALPHANUMERIC)

# Fit and transform the CountVectorizer
count_text = count_vec.fit_transform(text_data)
display(count_vec.get_feature_names()[:10])
display(count_text.shape)

# Instantiate the HashingVectorizer: hashing_vec
hashing_vec = HashingVectorizer(token_pattern=TOKENS_ALPHANUMERIC, norm=None, binary=False, ngram_range=(1, 2))

# Fit and transform the HashingVectorizer
hashed_text = hashing_vec.fit_transform(text_data)

# HashingVectorizer has no get_feature_names()
# display(hashing_vec.get_feature_names())
hashed_text.shape
# -

# +
from sklearn.feature_extraction.text import HashingVectorizer

text_pipe = Pipeline([
    #('imputer', SimpleImputer(strategy='constant', fill_value='')),
    ('combine_text', FunctionTransformer(combine_text_columns,
                                         kw_args={'to_drop': NUMERIC_COLUMNS + LABEL_COLUMNS})),
    # Super important: make sure the vectorizer receives a flat 1-D array
    ('one_dim', FunctionTransformer(np.reshape, kw_args={'newshape': -1})),
    ('vec', HashingVectorizer(token_pattern=TOKENS_ALPHANUMERIC, norm=None,
                              binary=False, ngram_range=(1, 2))),
    #('dim_red', SelectKBest(chi2, k=chi_k))
])

dummy_labels = pd.get_dummies(df[LABEL_COLUMNS])
# display(dummy_labels.shape)

text_pipe.fit(df[TEXT_COLUMNS], dummy_labels)
text_pipe_results = text_pipe.transform(df[TEXT_COLUMNS])

# With 'dim_red' commented out the pipeline has only three steps, and
# HashingVectorizer has no get_feature_names(), so we cannot recover token
# names here; inspect the shape of the hashed output instead
display(text_pipe_results.shape)
# -
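# To see concretely why HashingVectorizer cannot report token names, here is a minimal sketch (tiny made-up documents and a deliberately small n_features for readability): each token is hashed straight to a column index, so the token-to-column mapping is never stored.

```python
from sklearn.feature_extraction.text import HashingVectorizer

# Two tiny made-up documents with the same tokens in a different order
docs = ["english teacher 2nd grade", "2nd grade english teacher"]

# alternate_sign=False keeps the hashed counts non-negative; norm=None skips row scaling
hv = HashingVectorizer(n_features=16, norm=None, alternate_sign=False)
X = hv.transform(docs)

print(X.shape)  # (2, 16) no matter how large the vocabulary gets

# Bag-of-words ignores order, so both documents hash to identical rows
print((X[0] - X[1]).nnz == 0)  # True
```

The fixed output width is the performance win: no vocabulary has to be built or held in memory, at the cost of losing feature names and accepting occasional hash collisions.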
courses/school_budgeting_ml/school_budgeting_ml.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pandas as pd import requests import re from bs4 import BeautifulSoup import nltk # Fetch the web page and return its raw text def get_tender(link): page = requests.get(link) text_raw = page.text return text_raw # Using a regular expression, remove any text between square brackets def remove_between_square_brackets(text): return re.sub(r'\[[^]]*\]', '', text) def lower_text(text): return text.lower() # - def search_pages(web_links): found = False pre = get_tender(web_links) html_gone = BeautifulSoup(pre, 'html.parser') text_rm_br = remove_between_square_brackets(html_gone.text) words = nltk.word_tokenize(text_rm_br) lower_data = [lower_text(x) for x in words] for x in key_search_word: if x.lower() in lower_data: print(x) found = True if found: print(web_links) else: print('No keywords found') key_search_word = ['work station', 'cluster', 'server', 'AI', 'workstation', 'nodes'] tender_data = pd.read_excel('tender website list.xlsx') web_links = tender_data['Website list'].to_list() for x in web_links: search_pages(x)
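A hedged sketch of the same keyword check as a standalone function (the name `find_keywords` is mine, not from the notebook): set membership replaces the per-keyword loop and flag variable, and makes the logic testable without fetching a live website.

```python
def find_keywords(tokens, keywords):
    """Return the keywords whose lowercase form appears among the tokens."""
    token_set = {t.lower() for t in tokens}
    return [kw for kw in keywords if kw.lower() in token_set]

# Tokens as nltk.word_tokenize would produce them for a fetched page
hits = find_keywords(["The", "new", "GPU", "cluster", "arrived"],
                     ["work station", "cluster", "server", "AI"])
```

Note that a multi-word phrase such as 'work station' can never match a single token produced by `nltk.word_tokenize`; that limitation is present in the original loop as well.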
tender_search.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # > **Copyright (c) 2020 <NAME>**<br><br> # > **Copyright (c) 2021 Skymind Education Group Sdn. Bhd.**<br> # <br> # Licensed under the Apache License, Version 2.0 (the "License"); # <br>you may not use this file except in compliance with the License. # <br>You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0/ # <br> # <br>Unless required by applicable law or agreed to in writing, software # <br>distributed under the License is distributed on an "AS IS" BASIS, # <br>WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # <br>See the License for the specific language governing permissions and # <br>limitations under the License. # <br> # <br> # **SPDX-License-Identifier: Apache-2.0** # <br> # # Introduction # # This notebook will introduce you to the basics of spaCy and some simple applications of it. spaCy is widely used in NLP tasks. # # Notebook Content # # * [What’s spaCy?](#What’s-spaCy?) # # # * [Features](#Features) # # # * [Installation](#Installation) # # # * [Linguistic Annotations](#Linguistic-Annotations) # # # * [Tokenization](#Tokenization) # # # * [Part-of-speech Tags and Dependencies](#Part-of-speech-Tags-and-Dependencies) # # # * [Named Entities](#Named-Entities) # # # * [Word Vectors and Similarity](#Word-Vectors-and-Similarity) # # # * [Vocab, Hashes and Lexemes](#Vocab,-Hashes-and-Lexemes) # <img align="left" width="300" height="300" src="../../../images/spacy.png"> # # What’s spaCy? # # spaCy is a free, **open-source library** for **advanced Natural Language Processing (NLP)** in Python. # # If you’re working with a lot of text, you’ll eventually want to know more about it. For example, what’s it about? What do the words mean in context? Who is doing what to whom?
What companies and products are mentioned? Which texts are similar to each other? # # spaCy is designed specifically for **production use** and helps you build applications that process and “understand” large volumes of text. It can be used to build **information extraction** or **natural language understanding systems**, or to **pre-process text for deep learning**. # # Features # # In this notebook, you’ll come across mentions of spaCy’s features and capabilities. Some of them refer to linguistic concepts, while others are related to more general machine learning functionality. # # ![todo](../../../images/todo.png) # # Installation # # spaCy’s trained pipelines can be installed as Python packages. This means that they’re a component of your application, just like any other module. They’re versioned and can be defined as a dependency in your requirements.txt. Trained pipelines can be installed from a download URL or a local directory, manually or via pip. Their data can be located anywhere on your file system. # # Installation link: https://spacy.io/usage/models # # ![installation](../../../images/installation.png) # # Linguistic Annotations # # spaCy provides a variety of linguistic annotations to give you **insights into a text’s grammatical structure**. This includes the word types, like the parts of speech, and how the words are related to each other. For example, if you’re analyzing text, it makes a huge difference whether a noun is the subject of a sentence, or the object – or whether “google” is used as a verb, or refers to the website or company in a specific context. # + import spacy # # !python -m spacy download en_core_web_sm nlp = spacy.load("en_core_web_sm") doc = nlp("Apple is looking at buying U.K. startup for $1 billion") for token in doc: print(token.text, token.pos_, token.dep_) # - # Even though a `Doc` is processed – e.g. 
split into individual words and annotated – it still holds **all information of the original text**, like whitespace characters. You can always get the offset of a token into the original string, or reconstruct the original by joining the tokens and their trailing whitespace. This way, you’ll never lose any information when processing text with spaCy. # # Tokenization # # During processing, spaCy first **tokenizes** the text, i.e. segments it into words, punctuation and so on. This is done by applying rules specific to each language. For example, punctuation at the end of a sentence should be split off – whereas “U.K.” should remain one token. Each `Doc` consists of individual tokens, and we can iterate over them: # + import spacy nlp = spacy.load("en_core_web_sm") doc = nlp("Apple is looking at buying U.K. startup for $1 billion") for token in doc: print(token.text) # - # ![tokenization](../../../images/tokenization.png) # # First, the raw text is split on whitespace characters, similar to `text.split(' ')`. Then, the tokenizer processes the text from left to right. On each substring, it performs two checks: # # 1. Does the substring match a tokenizer exception rule? For example, “don’t” does not contain whitespace, but should be split into two tokens, “do” and “n’t”, while “U.K.” should always remain one token. # # 2. Can a prefix, suffix or infix be split off? For example punctuation like commas, periods, hyphens or quotes. # # If there’s a match, the rule is applied and the tokenizer continues its loop, starting with the newly split substrings. This way, spaCy can split **complex, nested tokens** like combinations of abbreviations and multiple punctuation marks. # # ![tokenization](../../../images/tokenization_2.png) # # Part-of-speech Tags and Dependencies # # After tokenization, spaCy can **parse** and **tag** a given `Doc`. 
This is where the trained pipeline and its statistical models come in, which enable spaCy to **make predictions** of which tag or label most likely applies in this context. A trained component includes binary data that is produced by showing a system enough examples for it to make predictions that generalize across the language – for example, a word following “the” in English is most likely a noun. # # Linguistic annotations are available as Token attributes. Like many NLP libraries, spaCy **encodes all strings to hash values** to reduce memory usage and improve efficiency. So to get the readable string representation of an attribute, we need to add an underscore `_` to its name: # + import spacy nlp = spacy.load("en_core_web_sm") doc = nlp("Apple is looking at buying U.K. startup for $1 billion") for token in doc: print(token.text, token.lemma_, token.pos_, token.tag_, token.dep_, token.shape_, token.is_alpha, token.is_stop) # - # ![POS](../../../images/pos.png) # # Named Entities # # A named entity is a “real-world object” that’s assigned a name – for example, a person, a country, a product or a book title. spaCy can **recognize various types of named entities in a document, by asking the model for a prediction**. Because models are statistical and strongly depend on the examples they were trained on, this doesn’t always work *perfectly* and might need some tuning later, depending on your use case. # # Named entities are available as the `ents` property of a `Doc`: # + import spacy nlp = spacy.load("en_core_web_sm") doc = nlp("Apple is looking at buying U.K. 
startup for $1 billion") for ent in doc.ents: print(ent.text, ent.start_char, ent.end_char, ent.label_) # - # ![Named Entity](../../../images/named_entity.png) # # Here’s what our example sentence and its named entities look like: # # ![Named Entity](../../../images/named_entity_2.png) # # Word Vectors and Similarity # # Similarity is determined by comparing **word vectors** or **“word embeddings”**, multi-dimensional meaning representations of a word. Word vectors can be generated using an algorithm like **word2vec** and usually look like this: # # array([ # <p>2.02280000e-01, -7.66180009e-02, 3.70319992e-01, # <br> 3.28450017e-02, -4.19569999e-01, 7.20689967e-02, # <br> -3.74760002e-01, 5.74599989e-02, -1.24009997e-02, # <br> 5.29489994e-01, -5.23800015e-01, -1.97710007e-01, # <br> -3.41470003e-01, 5.33169985e-01, -2.53309999e-02, # <br> 1.73800007e-01, 1.67720005e-01, 8.39839995e-01, # <br> 5.51070012e-02, 1.05470002e-01, 3.78719985e-01, # <br> 2.42750004e-01, 1.47449998e-02, 5.59509993e-01, # <br> 1.25210002e-01, -6.75960004e-01, 3.58420014e-01, # <br> # ... and so on ... # <br> 3.66849989e-01, 2.52470002e-03, -6.40089989e-01, # <br> -2.97650009e-01, 7.89430022e-01, 3.31680000e-01, # <br> -1.19659996e+00, -4.71559986e-02, 5.31750023e-01,</p><br>] dtype=float32) # > To make them compact and fast, spaCy’s small **pipeline packages** (all packages that end in `sm`) **don’t ship with word vectors**, and only include context-sensitive **tensors**. This means you can still use the `similarity()` methods to compare documents, spans and tokens – but the result won’t be as good, and individual tokens won’t have any vectors assigned. So in order to use real word vectors, you need to download a larger pipeline package: # # > Small pipeline: python -m spacy download en_core_web_sm # # > Large pipeline: python -m spacy download en_core_web_lg # Pipeline packages that come with built-in word vectors make them available as the `Token.vector attribute`. 
`Doc.vector` and `Span.vector` will default to an average of their token vectors. You can also check if a token has a vector assigned, and get the **L2 norm**, which can be used to normalize vectors. # + import spacy # # !python -m spacy download en_core_web_lg # To install en_core_web_md use > python -m spacy download en_core_web_md nlp = spacy.load("en_core_web_lg") tokens = nlp("dog cat banana afskfsd") for token in tokens: print(token.text, token.has_vector, token.vector_norm, token.is_oov) # - # The words “dog”, “cat” and “banana” are all pretty common in English, so they’re part of the pipeline’s vocabulary, and come with a vector. The word “afskfsd” on the other hand is a lot less common and out-of-vocabulary – so its vector representation consists of 300 dimensions of `0`, which means it’s practically nonexistent. If your application will benefit from a **large vocabulary** with more vectors, you should consider using one of the larger pipeline packages or loading in a full vector package, for example, `en_core_web_lg`, which includes 685k unique vectors. # # spaCy is able to compare two objects, and make a prediction of **how similar they are**. Predicting similarity is useful for building recommendation systems or flagging duplicates. For example, you can suggest a user content that’s similar to what they’re currently looking at, or label a support ticket as a duplicate if it’s very similar to an already existing one. # # Each `Doc`, `Span`, `Token` and `Lexeme` comes with a `.similarity` method that lets you compare it with another object, and determine the similarity. Of course similarity is always subjective – whether two words, spans or documents are similar really depends on how you’re looking at it. spaCy’s similarity implementation usually assumes a pretty general-purpose definition of similarity. # + import spacy nlp = spacy.load("en_core_web_lg") # make sure to use larger package! 
doc1 = nlp("I like salty fries and hamburgers.") doc2 = nlp("Fast food tastes very good.") # Similarity of two documents print(doc1, "<->", doc2, doc1.similarity(doc2)) # Similarity of tokens and spans french_fries = doc1[2:4] burgers = doc1[5] print(french_fries, "<->", burgers, french_fries.similarity(burgers)) # - # Computing similarity scores can be helpful in many situations, but it’s also important to maintain **realistic expectations** about what information it can provide. Words can be related to each other in many ways, so a single “similarity” score will always be a **mix of different signals**, and vectors trained on different data can produce very different results that may not be useful for your purpose. Here are some important considerations to keep in mind: # # > + There’s no objective definition of similarity. Whether “I like burgers” and “I like pasta” is similar **depends on your application**. Both talk about food preferences, which makes them very similar – but if you’re analyzing mentions of food, those sentences are pretty dissimilar, because they talk about very different foods. # > + The similarity of `Doc` and `Span` objects defaults to the **average** of the token vectors. This means that the vector for “fast food” is the average of the vectors for “fast” and “food”, which isn’t necessarily representative of the phrase “fast food”. # > + Vector averaging means that the vector of multiple tokens is **insensitive to the order of the words**. Two documents expressing the same meaning with dissimilar wording will return a lower similarity score than two documents that happen to contain the same words while expressing different meanings. # # Vocab, Hashes and Lexemes # # Whenever possible, spaCy tries to store data in a vocabulary, the `Vocab`, that will be **shared by multiple documents**. To save memory, spaCy also encodes all strings to **hash values** – in this case for example, “coffee” has the hash `3197928453018144401`. 
Entity labels like “ORG” and part-of-speech tags like “VERB” are also encoded. Internally, spaCy only “speaks” in hash values. # # ![Vocab Hashing](../../../images/hashing.png) # # If you process lots of documents containing the word “coffee” in all kinds of different contexts, storing the exact string “coffee” every time would take up way too much space. So instead, spaCy hashes the string and stores it in the `StringStore`. You can think of the `StringStore` as a **lookup table that works in both directions** – you can look up a string to get its hash, or a hash to get its string: # + import spacy nlp = spacy.load("en_core_web_sm") doc = nlp("I love coffee") print(doc.vocab.strings["coffee"]) # 3197928453018144401 print(doc.vocab.strings[3197928453018144401]) # 'coffee' # - # Now that all strings are encoded, the entries in the vocabulary **don’t need to include the word text** themselves. Instead, they can look it up in the StringStore via its hash value. Each entry in the vocabulary, also called `Lexeme`, contains the **context-independent** information about a word. For example, no matter if “love” is used as a verb or a noun in some context, its spelling and whether it consists of alphabetic characters won’t ever change. Its hash value will also always be the same. # + import spacy nlp = spacy.load("en_core_web_sm") doc = nlp("I love coffee") for word in doc: lexeme = doc.vocab[word.text] print(lexeme.text, lexeme.orth, lexeme.shape_, lexeme.prefix_, lexeme.suffix_, lexeme.is_alpha, lexeme.is_digit, lexeme.is_title, lexeme.lang_) # - # The mapping of words to hashes doesn’t depend on any state. To make sure each value is unique, spaCy uses a hash function to calculate the hash **based on the word string**. This also means that the hash for “coffee” will always be the same, no matter which pipeline you’re using or how you’ve configured spaCy. # # However, hashes **cannot be reversed** and there’s no way to resolve `3197928453018144401` back to “coffee”. 
All spaCy can do is look it up in the vocabulary. That’s why you always need to make sure all objects you create have access to the same vocabulary. If they don’t, spaCy might not be able to find the strings it needs. # + import spacy from spacy.tokens import Doc from spacy.vocab import Vocab nlp = spacy.load("en_core_web_sm") doc = nlp("I love coffee") # Original Doc print(doc.vocab.strings["coffee"]) # 3197928453018144401 print(doc.vocab.strings[3197928453018144401]) # 'coffee' 👍 empty_doc = Doc(Vocab()) # New Doc with empty Vocab # empty_doc.vocab.strings[3197928453018144401] will raise an error :( empty_doc.vocab.strings.add("coffee") # Add "coffee" and generate hash print(empty_doc.vocab.strings[3197928453018144401]) # 'coffee' 👍 new_doc = Doc(doc.vocab) # Create new doc with first doc's vocab print(new_doc.vocab.strings[3197928453018144401]) # 'coffee' 👍 # - # If the vocabulary doesn’t contain a string for `3197928453018144401`, spaCy will raise an error. You can re-add “coffee” manually, but this only works if you actually know that the document contains that word. To prevent this problem, spaCy will also export the `Vocab` when you save a `Doc` or `nlp` object. This will give you the object and its encoded annotations, plus the “key” to decode it. # # Contributors # # **Author** # <br><NAME> # # References # # 1. [SpaCy API Documentation](https://spacy.io/api) # 2. [SpaCy 101: Everything You Need to Know](https://spacy.io/usage/spacy-101)
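The two-way lookup described above can be sketched without spaCy. The class below is a hypothetical, simplified stand-in: spaCy's real `StringStore` uses MurmurHash, so the hash values here will not match spaCy's `3197928453018144401` for "coffee".

```python
import hashlib

class MiniStringStore:
    """Toy two-way string/hash table (illustrative, not spaCy's implementation)."""
    def __init__(self):
        self._by_hash = {}

    @staticmethod
    def hash_string(s):
        # Deterministic 64-bit stand-in hash; spaCy uses MurmurHash instead.
        return int(hashlib.sha1(s.encode("utf8")).hexdigest()[:16], 16)

    def add(self, s):
        h = self.hash_string(s)
        self._by_hash[h] = s  # keep the string so the hash can be reversed
        return h

    def __getitem__(self, key):
        # String in -> hash out; hash in -> string out (if it was added).
        if isinstance(key, str):
            return self.hash_string(key)
        return self._by_hash[key]

store = MiniStringStore()
h = store.add("coffee")
```

Looking up a hash that was never added raises a `KeyError` — the same failure mode as the empty-`Vocab` example above, and the reason the `Vocab` is exported alongside a saved `Doc`.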
nlp-labs/Day_02/NLP_Spacy/Spacy_Tutorials.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import os import sys import multiprocessing sys.path.append(os.path.join('..', '..')) from matplotlib import pylab pylab.rcParams['figure.figsize'] = (10.0, 10.0) pylab.rcParams['image.cmap'] = 'rainbow' import numpy as np import pandas as pd import logging import time import pickle from astropy.coordinates import SkyCoord from astropy import units as u from astropy import constants as const from astropy.wcs.utils import pixel_to_skycoord from arl.data.polarisation import PolarisationFrame from arl.visibility.base import create_visibility from arl.skycomponent.operations import create_skycomponent from arl.image.operations import show_image, import_image_from_fits, export_image_to_fits, \ smooth_image, calculate_image_frequency_moments, calculate_image_from_frequency_moments from arl.image.deconvolution import deconvolve_cube, restore_cube from arl.image.iterators import image_raster_iter from arl.image.solvers import solve_image from arl.visibility.iterators import vis_timeslice_iter # from arl.util.testing_support import create_named_configuration, \ # create_low_test_image_from_gleam, create_low_test_beam from arl.imaging import * from arl.imaging.weighting import weight_visibility from IPython.display import display, HTML import plotly.offline as py from help.array import add_arr_sum from help.plot import plot_table_bar, show log = logging.getLogger() log.setLevel(logging.DEBUG) log.addHandler(logging.StreamHandler(sys.stdout)) py.init_notebook_mode(connected=True) # + # construct low configuration results_dir = './results' data_dir = os.path.join('./examples/arl', results_dir) # for import/export os.makedirs(results_dir, exist_ok=True) # low = create_named_configuration('LOWBD2-CORE') cellsize = 0.001 npixel=512 padding = 2 oversampling = 
32 nchan = 7 times = np.linspace(-3, +3, 5) * np.pi / 12.0 frequency = np.linspace(0.8e8, 1.2e8, nchan) ntimes, nfreq = len(times), len(frequency) centre_frequency = np.array([np.average(frequency)]) channel_bandwidth=np.array(nchan * [frequency[1]-frequency[0]]) total_bandwidth = np.array([np.sum(channel_bandwidth)]) polarisation_frame = PolarisationFrame('stokesI') # - # create visibility vt = pickle.load(open('%s/visibility.vis' % results_dir, 'rb')) # + # make a test image model_centrechannel = import_image_from_fits('%s/model_centrechannel.fits' % data_dir) model_multichannel = import_image_from_fits('%s/model_multichannel.fits' % data_dir) beam = import_image_from_fits('%s/beam.fits' % data_dir) model_multichannel.data *= beam.data # + # predict vt.data['vis'] *= 0.0 start = time.time() vt0, time0 = \ predict_2d(vt, model_multichannel, oversampling=oversampling, polarisation_frame=polarisation_frame, timing=True) stop = time.time() time_degrid0, time_fft0 = time0 time_pred0 = stop - start start = time.time() vt1, time1 = \ predict_2d(vt, model_multichannel, oversampling=oversampling, polarisation_frame=polarisation_frame, opt=True, timing=True) stop = time.time() time_degrid1, time_fft1 = time1 time_pred1 = stop - start vt = vt0 time_predict = [[time_pred0, time_degrid0, time_fft0], [time_pred1, time_degrid1, time_fft1]] display(HTML('<h3> Predict Time (s)')) display(pd.DataFrame(np.around(time_predict, decimals=2), columns=['Predict', 'Degrid', 'FFT'], index=['Origin', 'Optimized'])) # + # invert start = time.time() dirty0, sumwt0, time00 = invert_2d(vt, model_multichannel, padding=1, timing=True) psf0, sumwt0, time01 = invert_2d(vt, model_multichannel, dopsf=True, padding=1, timing=True) stop = time.time() time_grid0 = time00[0] + time01[0] time_ifft0 = time00[1] + time01[1] time_inv0 = stop - start start = time.time() dirty1, sumwt1, time10 = invert_2d(vt, model_multichannel, padding=1, opt=True, timing=True) psf1, sumwt1, time11 = invert_2d(vt, 
model_multichannel, dopsf=True, padding=1, opt=True, timing=True) time_grid1 = time10[0] + time11[0] time_ifft1 = time10[1] + time11[1] stop = time.time() time_inv1 = stop - start dirty, sumwt = dirty0, sumwt0 psf, sumwt = psf0, sumwt0 time_invert = [[time_inv0, time_grid0, time_ifft0], [time_inv1, time_grid1, time_ifft1]] display(HTML('<h3> Invert Time (s)')) display(pd.DataFrame(np.around(time_invert, decimals=2), columns=['Invert', 'Grid', 'IFFT'], index=['Origin', 'Optimized'])) # + # deconvolution start = time.time() comp0, residual0 = deconvolve_cube(dirty, psf, niter=300, gain=0.7, algorithm='msmfsclean', scales=[0, 3, 10, 30], threshold=0.01, fractional_threshold=0.001, nmoments=3) stop = time.time() time_deconv0 = stop - start start = time.time() comp1, residual1 = deconvolve_cube(dirty, psf, niter=300, gain=0.7, algorithm='msmfsclean', scales=[0, 3, 10, 30], threshold=0.01, fractional_threshold=0.001, nmoments=3, opt=True) stop = time.time() time_deconv1 = stop - start comp, residual = comp0, residual0 display(HTML('<h3> Deconvolve_cube Time')) display(pd.DataFrame(np.around([[time_deconv0, time_deconv1]], decimals=2), columns=['Origin', 'Optimized'], index=['Time(s)'])) clean = restore_cube(model=comp, psf=psf, residual=residual) export_image_to_fits(clean, '%s/mfs_clean.fits' % (results_dir)) # + # Single Image Processing Pipeline origin = add_arr_sum([time_pred0, time_inv0, time_deconv0]) optimized = add_arr_sum([time_pred1, time_inv1, time_deconv1]) columns = ['Predict', 'Invert', 'Deconv', 'Total'] fig_title = 'Single Image Processing Pipeline' show(origin, optimized, columns, fig_title) # Computational Kernel origin = np.around([time_grid0, time_degrid0], decimals=2) optimized = np.around([time_grid1, time_degrid1], decimals=2) columns = ['Gridding', 'Degridding'] fig_title = 'Computational Kernel (Gridding)' show(origin, optimized, columns, fig_title) origin = np.around([time_fft0, time_ifft0], decimals=2) optimized = np.around([time_fft1, 
time_ifft1], decimals=2) columns = ['FFT', 'IFFT'] fig_title = 'Computational Kernel (FFT)' show(origin, optimized, columns, fig_title) # -
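The origin-vs-optimized bookkeeping above follows one repeated pattern; here is a self-contained sketch of it, assuming made-up timings rather than the notebook's measured values:

```python
import numpy as np
import pandas as pd

# Hypothetical timings in seconds: [total, degrid, FFT]
time_predict = [[12.31, 10.05, 1.88],   # Origin
                [ 4.72,  3.10, 1.21]]   # Optimized

table = pd.DataFrame(np.around(time_predict, decimals=2),
                     columns=['Predict', 'Degrid', 'FFT'],
                     index=['Origin', 'Optimized'])

# End-to-end speedup of the optimized path over the original
speedup = table.loc['Origin', 'Predict'] / table.loc['Optimized', 'Predict']
```

Rounding with `np.around` before building the `DataFrame`, as the notebook does, keeps the displayed tables readable without touching the raw timing variables used elsewhere.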
arl-python/examples/arl/.ipynb_checkpoints/imaging-demo1-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.6 # language: python # name: python36 # --- # !pip install bs4 # + import urllib.request as urllib2 from bs4 import BeautifulSoup # - response = urllib2.urlopen('https://en.wikipedia.org/wiki/Natural_language_processing') html_doc = response.read() #print(html_doc) soup = BeautifulSoup(html_doc,'html.parser') #print(soup) strhtm = soup.prettify() print(strhtm[:1000]) print(soup.title) print(soup.title.string) print(soup.a.string) print(soup.b.string) for x in soup.find_all('a'): print(x.string) for x in soup.find_all('p'): print(x.text)
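As a follow-up sketch, the same kind of extraction can be done with only the standard library's `html.parser`, which is useful when BeautifulSoup is not installed; the class name `LinkExtractor` is mine, for illustration. It collects each anchor's `href` attribute alongside its visible text:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect (text, href) pairs for every <a> tag (illustrative helper)."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            # Remember the link target until its text arrives
            self._href = dict(attrs).get('href')

    def handle_data(self, data):
        if self._href is not None:
            self.links.append((data, self._href))
            self._href = None

parser = LinkExtractor()
parser.feed('<p>See <a href="https://example.com">example</a> and '
            '<a href="https://en.wikipedia.org">wiki</a>.</p>')
```

Unlike `soup.a.string`, this streaming approach never dereferences a possibly-missing tag, so pages without anchors simply yield an empty list.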
Extract_Data/extract_from_html.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Machine Learning Basics # In this module, you'll be analysing a dataset. You will be using the Cinema Data for the tasks in this module. <br> <br> # **Pipeline:** # * Acquiring the data - done # * Handling files and formats - done # * Data Analysis # * Prediction # * Analysing results import pandas as pd import matplotlib import matplotlib.pyplot as plt plt.rcParams['figure.figsize'] = [15, 10] # ## Task 1 - Plotting # * Plot the values of occupancy percentage versus number of weeks since release, for all movies in the dataset. # * Plot the number of other releases in the week versus occupancy percentage and weeks until the end of lifetime, for all movies. # * Plot the number of shows in the week versus occupancy percentage and weeks until the end of lifetime, for all movies. 
# + df = pd.read_csv('./Data/CinemaData.csv') fig,ax=plt.subplots() dfgrp = df.groupby('Movie') ax.scatter(y=df['OccPer'],x=df['WeeksSinceRelease'],c='yellow') ax.set_title('All Movies',fontsize=25) ax.set_xlabel('Weeks Since Release',fontsize=18) ax.set_ylabel('OccPer',fontsize=18) plt.show() fig2,ax2 = plt.subplots() ax2.scatter(y=dfgrp.get_group('0001000002')['OccPer'],x=dfgrp.get_group('0001000002')['WeeksSinceRelease'],c='red') ax2.set_title('Movie: 0001000002',fontsize=25) ax2.set_ylabel('OccPer',fontsize=18) ax2.set_xlabel('Weeks Since Release',fontsize=18) plt.show() # - fig3,ax3 = plt.subplots() ax3.scatter(y=df['OtherReleasesInWeek'],x=df['OccPer']) ax3.set_ylabel('Other Releases in Week',fontsize=18) ax3.set_xlabel('Occupancy Percentage',fontsize=18) fig4,ax4 = plt.subplots() ax4.scatter(x=df['Lifetime']-df['WeeksSinceRelease'],y=df['OtherReleasesInWeek']) ax4.set_xlabel('Weeks Until End of Lifetime',fontsize=18) ax4.set_ylabel('Other Releases in Week',fontsize=18) plt.show() # + fig,ax5 = plt.subplots() ax5.scatter(x= df['OccPer'],y=df['ShowsInWeek']) ax5.set_xlabel('Occupancy',fontsize=18) ax5.set_ylabel('No. of Shows in Week',fontsize=18) fig,ax6 = plt.subplots() ax6.scatter(x=df['Lifetime']-df['WeeksSinceRelease'],y=df['ShowsInWeek']) ax6.set_xlabel('Weeks Until End of Lifetime',fontsize=18) ax6.set_ylabel('No. of Shows in Week',fontsize=18) plt.show() # - df.head(10) # ## Task 2 - Reasoning # Now that you have the plots, identify the correlation between the various parameters analysed. Justify the correlations, your observations, and your hypotheses for the same with appropriate reasoning. # ## Analysis # - The Occupancy Percentage **decreases gradually** as the movie becomes older. The maximum value is reached in the first few weeks # - The occupancy percentage of all movies **decreases** if many movies are screened in the said week # - The no.
of other releases in the week is low close to the release of a particular movie; as the same movie becomes older, new movies are released and hence the **number of new releases increases close to the end of the lifetime of a given movie** # - There is a **direct correlation** between the number of shows of a film in a week and the occupancy; more successful films are given more shows # - A film gets more shows when it has around 10-15 weeks of lifetime left
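The correlations claimed in the analysis can be quantified with `DataFrame.corr()`. Below is a hedged sketch using a small stand-in frame: the values are made up for illustration, only the column names follow CinemaData.csv.

```python
import pandas as pd

# Hypothetical sample: OccPer falls with age and rises with show count
df = pd.DataFrame({
    'WeeksSinceRelease': [1, 2, 3, 4, 5, 6],
    'OccPer':            [80, 70, 55, 40, 30, 20],
    'ShowsInWeek':       [30, 28, 20, 15, 10, 6],
})

# Pairwise Pearson correlations between all numeric columns
corr = df.corr()
```

A negative `corr.loc['OccPer', 'WeeksSinceRelease']` backs the first bullet (older movies fill fewer seats), while a positive `corr.loc['OccPer', 'ShowsInWeek']` backs the direct correlation claimed between shows and occupancy.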
Module2.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from jupyter_cadquery.occ import Part, PartGroup, show from jupyter_cadquery import set_sidecar set_sidecar("OCC") # - # # OCC bottle (ported over to OCP) # + import math from OCP.gp import gp_Pnt, gp_Vec, gp_Trsf, gp_Ax2, gp_Ax3, gp_Pnt2d, gp_Dir2d, gp_Ax2d, gp from OCP.GC import GC_MakeArcOfCircle, GC_MakeSegment from OCP.GCE2d import GCE2d_MakeSegment from OCP.Geom import Geom_Plane, Geom_CylindricalSurface from OCP.Geom2d import Geom2d_Ellipse, Geom2d_TrimmedCurve from OCP.BRepBuilderAPI import (BRepBuilderAPI_MakeEdge, BRepBuilderAPI_MakeWire, BRepBuilderAPI_MakeFace, BRepBuilderAPI_Transform) from OCP.BRepPrimAPI import BRepPrimAPI_MakePrism, BRepPrimAPI_MakeCylinder from OCP.BRepFilletAPI import BRepFilletAPI_MakeFillet from OCP.BRepAlgoAPI import BRepAlgoAPI_Fuse from OCP.BRepOffsetAPI import BRepOffsetAPI_MakeThickSolid, BRepOffsetAPI_ThruSections from OCP.BRepLib import BRepLib from OCP.BRep import BRep_Tool from OCP.BRep import BRep_Builder from OCP.TopoDS import TopoDS, TopoDS_Compound, TopoDS_Builder from OCP.TopExp import TopExp_Explorer from OCP.TopAbs import TopAbs_EDGE, TopAbs_FACE from OCP.TopTools import TopTools_ListOfShape height = 70 width = 50 thickness = 30 print("creating bottle") # The points we'll use to create the profile of the bottle's body aPnt1 = gp_Pnt(-width / 2.0, 0, 0) aPnt2 = gp_Pnt(-width / 2.0, -thickness / 4.0, 0) aPnt3 = gp_Pnt(0, -thickness / 2.0, 0) aPnt4 = gp_Pnt(width / 2.0, -thickness / 4.0, 0) aPnt5 = gp_Pnt(width / 2.0, 0, 0) aArcOfCircle = GC_MakeArcOfCircle(aPnt2, aPnt3, aPnt4) aSegment1 = GC_MakeSegment(aPnt1, aPnt2) aSegment2 = GC_MakeSegment(aPnt4, aPnt5) # Could also construct the line edges directly using the points instead of the resulting line aEdge1 = 
BRepBuilderAPI_MakeEdge(aSegment1.Value()) aEdge2 = BRepBuilderAPI_MakeEdge(aArcOfCircle.Value()) aEdge3 = BRepBuilderAPI_MakeEdge(aSegment2.Value()) # Create a wire out of the edges aWire = BRepBuilderAPI_MakeWire(aEdge1.Edge(), aEdge2.Edge(), aEdge3.Edge()) # Quick way to specify the X axis xAxis = gp.OX_s() # Set up the mirror aTrsf = gp_Trsf() aTrsf.SetMirror(xAxis) # Apply the mirror transformation aBRespTrsf = BRepBuilderAPI_Transform(aWire.Wire(), aTrsf) # Get the mirrored shape back out of the transformation and convert back to a wire aMirroredShape = aBRespTrsf.Shape() # A wire instead of a generic shape now aMirroredWire = TopoDS.Wire_s(aMirroredShape) # Combine the two constituent wires mkWire = BRepBuilderAPI_MakeWire() mkWire.Add(aWire.Wire()) mkWire.Add(aMirroredWire) myWireProfile = mkWire.Wire() # The face that we'll sweep to make the prism myFaceProfile = BRepBuilderAPI_MakeFace(myWireProfile) # We want to sweep the face along the Z axis to the height aPrismVec = gp_Vec(0, 0, height) myBody = BRepPrimAPI_MakePrism(myFaceProfile.Face(), aPrismVec) # Add fillets to all edges through the explorer mkFillet = BRepFilletAPI_MakeFillet(myBody.Shape()) anEdgeExplorer = TopExp_Explorer(myBody.Shape(), TopAbs_EDGE) while anEdgeExplorer.More(): anEdge = TopoDS.Edge_s(anEdgeExplorer.Current()) mkFillet.Add(thickness / 12.0, anEdge) anEdgeExplorer.Next() myBody = mkFillet # Create the neck of the bottle neckLocation = gp_Pnt(0, 0, height) neckAxis = gp.DZ_s() neckAx2 = gp_Ax2(neckLocation, neckAxis) myNeckRadius = thickness / 4.0 myNeckHeight = height / 10.0 mkCylinder = BRepPrimAPI_MakeCylinder(neckAx2, myNeckRadius, myNeckHeight) myBody = BRepAlgoAPI_Fuse(myBody.Shape(), mkCylinder.Shape()) # Our goal is to find the highest Z face and remove it faceToRemove = None zMax = -1 # We have to work our way through all the faces to find the highest Z face so we can remove it for the shell aFaceExplorer = TopExp_Explorer(myBody.Shape(), TopAbs_FACE) while 
aFaceExplorer.More(): aFace = TopoDS.Face_s(aFaceExplorer.Current()) aPlane = BRep_Tool.Surface_s(aFace) # We want the highest Z face, so compare this to the previous faces aPnt = aPlane.Location() aZ = aPnt.Z() if aZ > zMax: zMax = aZ faceToRemove = aFace aFaceExplorer.Next() facesToRemove = TopTools_ListOfShape() facesToRemove.Append(faceToRemove) myBody = BRepOffsetAPI_MakeThickSolid(myBody.Shape(), facesToRemove, -thickness / 50.0, 0.001) # Set up our surfaces for the threading on the neck neckAx2_Ax3 = gp_Ax3(neckLocation, gp.DZ_s()) aCyl1 = Geom_CylindricalSurface(neckAx2_Ax3, myNeckRadius * 0.99) aCyl2 = Geom_CylindricalSurface(neckAx2_Ax3, myNeckRadius * 1.05) # Set up the curves for the threads on the bottle's neck aPnt = gp_Pnt2d(2.0 * math.pi, myNeckHeight / 2.0) aDir = gp_Dir2d(2.0 * math.pi, myNeckHeight / 4.0) anAx2d = gp_Ax2d(aPnt, aDir) aMajor = 2.0 * math.pi aMinor = myNeckHeight / 10.0 anEllipse1 = Geom2d_Ellipse(anAx2d, aMajor, aMinor) anEllipse2 = Geom2d_Ellipse(anAx2d, aMajor, aMinor / 4.0) anArc1 = Geom2d_TrimmedCurve(anEllipse1, 0, math.pi) anArc2 = Geom2d_TrimmedCurve(anEllipse2, 0, math.pi) anEllipsePnt1 = anEllipse1.Value(0) anEllipsePnt2 = anEllipse1.Value(math.pi) aSegment = GCE2d_MakeSegment(anEllipsePnt1, anEllipsePnt2) # Build edges and wires for threading anEdge1OnSurf1 = BRepBuilderAPI_MakeEdge(anArc1, aCyl1) anEdge2OnSurf1 = BRepBuilderAPI_MakeEdge(aSegment.Value(), aCyl1) anEdge1OnSurf2 = BRepBuilderAPI_MakeEdge(anArc2, aCyl2) anEdge2OnSurf2 = BRepBuilderAPI_MakeEdge(aSegment.Value(), aCyl2) threadingWire1 = BRepBuilderAPI_MakeWire(anEdge1OnSurf1.Edge(), anEdge2OnSurf1.Edge()) threadingWire2 = BRepBuilderAPI_MakeWire(anEdge1OnSurf2.Edge(), anEdge2OnSurf2.Edge()) # Compute the 3D representations of the edges/wires BRepLib.BuildCurves3d_s(threadingWire1.Shape()) BRepLib.BuildCurves3d_s(threadingWire2.Shape()) # Create the surfaces of the threading aTool = BRepOffsetAPI_ThruSections(True) aTool.AddWire(threadingWire1.Wire()) 
aTool.AddWire(threadingWire2.Wire()) aTool.CheckCompatibility(False) myThreading = aTool.Shape() # Build the resulting compound bottle = TopoDS_Compound() aBuilder = TopoDS_Builder() aBuilder.MakeCompound(bottle) aBuilder.Add(bottle, myBody.Shape()) aBuilder.Add(bottle, myThreading) print("bottle finished") # - show(Part(bottle, color="aliceblue")) # ## Export to STL from jupyter_cadquery import exportSTL exportSTL(bottle, "bottle.stl", tolerance=0.01, angular_tolerance=0.1)
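The bottle profile above is drawn for the lower half only (aPnt1..aPnt5, all with y <= 0) and then reflected across the X axis with `gp_Trsf.SetMirror`. A minimal, dependency-free sketch of that mirroring idea — plain coordinates only, the `mirror_x` helper is an illustration and not part of the OCC API:

```python
# Toy version of the profile-mirroring step: the half profile below follows
# the aPnt1..aPnt5 coordinates used in the OCC code above.
width, thickness = 50, 30

half_profile = [
    (-width / 2.0, 0.0),
    (-width / 2.0, -thickness / 4.0),
    (0.0, -thickness / 2.0),
    (width / 2.0, -thickness / 4.0),
    (width / 2.0, 0.0),
]

def mirror_x(points):
    """Reflect 2D points across the X axis (y -> -y)."""
    return [(x, -y) for x, y in points]

# Full closed outline: the original half plus its mirror image,
# analogous to combining aWire and aMirroredWire with BRepBuilderAPI_MakeWire.
full_profile = half_profile + mirror_x(half_profile)
print(full_profile[6])  # mirror of aPnt2: (-25.0, 7.5)
```

Mirroring is an involution, so applying it twice recovers the original half profile — the same reason the two wires join cleanly at y = 0.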
examples/3-occ.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # ---

# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/ShaimaM/Python/blob/main/W3_D1_matplotlib.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# + [markdown] id="_oEmLsjNcmP7"
# # Group Members:
# * <NAME>
# * <NAME>
# * <NAME>

# + id="ebr1fJ0NlunK"
# Imports
import matplotlib.pyplot as plt
import seaborn as sns

# + id="hadICNVzHu4w"
# Make plots render inline in the notebook
# %matplotlib inline

# + [markdown] id="46cdkZD3cXg_"
# # Dataset Selection

# + colab={"base_uri": "https://localhost:8080/"} id="x8ZLg1FBH1qP" outputId="847b740e-b711-44b3-f811-86cb5d2513e0"
# List the example datasets available in seaborn
sns.get_dataset_names()

# + [markdown] id="cpJJEnyFcbur"
# # Planets Dataset Exploration

# + colab={"base_uri": "https://localhost:8080/", "height": 195} id="EpSldej7LD-D" outputId="d9067477-20d7-4e5c-a1c8-c6daa928e4a2"
df = sns.load_dataset("planets")
df.head()

# + colab={"base_uri": "https://localhost:8080/"} id="pI-dw_XzL1ob" outputId="c76d5e35-9141-4917-90cb-8778756188e6"
df.shape

# + colab={"base_uri": "https://localhost:8080/", "height": 284} id="_Hnf-347L7E3" outputId="eaa32981-9b5e-4ade-8103-f09faa9b8b1b"
df.describe()

# + colab={"base_uri": "https://localhost:8080/"} id="qolksd50MPL-" outputId="01ba624d-d08d-463f-c996-aa3407a075b6"
df.info()

# + [markdown] id="LYvs2huOMEVZ"
# ## EDA using Matplotlib

# + colab={"base_uri": "https://localhost:8080/"} id="nUCg65vUPASK" outputId="d9518642-3aee-4956-9fb9-7d1d5b243c2d"
df.year.value_counts()

# + colab={"base_uri": "https://localhost:8080/", "height": 468} id="ylHPzRTwMD1M" outputId="7b57251f-8adf-470c-b1ff-58273b28281d"
plt.figure(figsize=(7, 7));
df['year'].plot.hist();
plt.title('Number of Discovered Planets Over the Years', fontsize=19);
plt.xlabel('Year', fontsize=15);

# + [markdown] id="5ZrJ0IOKQ-Mw"
# # Pie Chart for the Most Discovered Planets

# + colab={"base_uri": "https://localhost:8080/"} id="HtH7r7lVcK8F" outputId="e7edd63b-81ae-4e50-8cd6-93f7eed81fe3"
df.number.value_counts()

# + colab={"base_uri": "https://localhost:8080/", "height": 541} id="5azq4cuDUVRe" outputId="af8093cd-fe60-4020-d4ff-4ecc46e6012f"
plt.figure(figsize=(9, 9))
colors = ['#66b3ff', '#99ff99', '#ffcc89', '#C091FF', '#FF98D3', '#33FFF9', '#F2FF4D']
df.number.value_counts().plot.pie(shadow=True, colors=colors)
plt.legend(loc='upper right');
plt.title('The Most Discovered Number of Planets', fontsize=19);

# + [markdown] id="3negKiAYWYuK"
# # Planets Heatmap

# + colab={"base_uri": "https://localhost:8080/", "height": 630} id="9IusdfBzWY8T" outputId="4fcb45a2-c7b2-4819-f324-e4196678a444"
# Heatmap showing correlations between the numeric columns
plt.figure(figsize=(15, 10))
sns.heatmap(df.corr(), annot=True, cmap='Blues');
plt.title('Planets Data Heatmap', fontsize=20)

# + id="w_lgK8jRXUJs"
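The histogram and pie chart above both lean on `value_counts()`, which tallies how many rows share each value. The same counting idea in the standard library, on a toy stand-in for `df['number']` (made-up values, not the real planets dataset):

```python
from collections import Counter

# Toy stand-in for df['number']: how many planets each system had
numbers = [1, 1, 1, 2, 2, 3, 1, 2, 4]
counts = Counter(numbers)

# Most common first, like value_counts()
for value, count in counts.most_common():
    print(value, count)
```

`Counter.most_common()` returns (value, count) pairs sorted by count, which is exactly the ordering `value_counts()` gives before plotting.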
W3_D1_matplotlib.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.8.5 ('base') # language: python # name: python3 # ---

import utilities as utils

# +
data_path_1: str = '../../../Data/phase2'
data_path_2: str = '../../../Data/phase1/'

data_set_1: list = [
    'ctgan_application_dataset_50k.csv',
    'ctgan_traffic_dataset_100k.csv',
]
data_set_2: list = [
    'Traffic_type_seed.csv',
    'Application_type_seed.csv'
]

file_path_1 = utils.get_file_path(data_path_1)
file_path_2 = utils.get_file_path(data_path_2)

file_set_1: list = list(map(file_path_1, data_set_1))
file_set_2: list = list(map(file_path_2, data_set_2))

file_set: list = file_set_1 + file_set_2
data_set: list = data_set_1 + data_set_2

current_job: int = 0
utils.data_set = data_set
utils.file_set = file_set
# -

print(f'We will be using {len(file_set)} files:')
utils.pretty(file_set)

# +
ctgan_application_dataset_labels_50 = utils.examine_dataset(1)
ctgan_traffic_dataset_100 = utils.examine_dataset(2)
baseline_traffic_seed = utils.examine_dataset(3)
baseline_application_seed = utils.examine_dataset(4)

ctgan_traffic_dataset_100['Dataset'] = ctgan_traffic_dataset_100['Dataset'].drop(['Unnamed: 0'], axis=1)
ctgan_application_dataset_labels_50['Dataset'] = ctgan_application_dataset_labels_50['Dataset'].drop(['Unnamed: 0'], axis=1)

# +
def downsample(df: utils.pd.DataFrame, column_name: str, expected_sizes: dict) -> utils.pd.DataFrame:
    '''
    Downsample each group in column_name to at most expected_sizes[group] rows.
    '''
    new_df = utils.pd.DataFrame()
    for item in df[column_name].unique():
        matching_values = df.loc[df[column_name] == item]
        if df[column_name].value_counts()[item] > expected_sizes[item]:
            new_df = utils.pd.concat([new_df, matching_values.sample(n=expected_sizes[item])])
        else:
            new_df = utils.pd.concat([new_df, matching_values])
    return new_df


def random_sample(df: utils.pd.DataFrame,
                  column_name: str, element_name: str, size: int) -> utils.pd.DataFrame:
    '''
    Return `size` randomly sampled rows whose column_name value equals element_name.
    '''
    new_df = df.loc[df[column_name] == element_name]
    return new_df.sample(n=size)
# -

# # Traffic Type Datasets Creation

expected_sizes = {"Regular": 30000, "VPN": 20000, "Tor": 10000}
ctgan_balanced_traffic_labels_dataset_30_20_10 = downsample(baseline_traffic_seed['Dataset'], 'Traffic Type', expected_sizes)
ctgan_balanced_traffic_labels_dataset_30_20_10 = utils.pd.concat([
    ctgan_balanced_traffic_labels_dataset_30_20_10,
    random_sample(ctgan_traffic_dataset_100['Dataset'], 'Traffic Type', 'Tor',
                  10000 - baseline_traffic_seed['Dataset']['Traffic Type'].value_counts()['Tor'])])

expected_sizes = {"Regular": 92659, "VPN": 92659, "Tor": 92659}
ctgan_balanced_traffic_labels_dataset_equal = downsample(baseline_traffic_seed['Dataset'], 'Traffic Type', expected_sizes)
# Top up each minority traffic class with synthetic rows (same result as the
# original back-to-back concat calls, one class at a time)
for traffic_type in ['VPN', 'Tor']:
    ctgan_balanced_traffic_labels_dataset_equal = utils.pd.concat([
        ctgan_balanced_traffic_labels_dataset_equal,
        random_sample(ctgan_traffic_dataset_100['Dataset'], 'Traffic Type', traffic_type,
                      92659 - baseline_traffic_seed['Dataset']['Traffic Type'].value_counts()[traffic_type])])

# # Application Types Data Creation

expected_sizes = {"p2p": 30000, "browsing": 30000, "audio-streaming": 30000, "file-transfer": 30000,
                  "chat": 30000, "video-streaming": 30000, "voip": 30000, "email": 30000}
ctgan_balanced_application_dataset_labels_30_30_30 = downsample(baseline_application_seed['Dataset'], 'Application Type', expected_sizes)
for app_type in ['audio-streaming', 'file-transfer', 'chat', 'video-streaming', 'voip', 'email']:
    ctgan_balanced_application_dataset_labels_30_30_30 = utils.pd.concat([
        ctgan_balanced_application_dataset_labels_30_30_30,
        random_sample(ctgan_application_dataset_labels_50['Dataset'], 'Application Type', app_type,
                      30000 - baseline_application_seed['Dataset']['Application Type'].value_counts()[app_type])])

# +
expected_sizes = {"p2p": 48020, "browsing": 48020, "audio-streaming": 48020, "file-transfer": 48020,
                  "chat": 48020, "video-streaming": 48020, "voip": 48020, "email": 48020}
ctgan_balanced_application_dataset_labels_equal = downsample(baseline_application_seed['Dataset'], 'Application Type', expected_sizes)
ctgan_balanced_application_dataset_labels_equal = utils.pd.concat([ctgan_balanced_application_dataset_labels_equal, random_sample(ctgan_application_dataset_labels_50['Dataset'], 'Application Type', 'audio-streaming', 48020 - baseline_application_seed['Dataset']['Application Type'].value_counts()['audio-streaming'])]) ctgan_balanced_application_dataset_labels_equal = utils.pd.concat([ctgan_balanced_application_dataset_labels_equal, random_sample(ctgan_application_dataset_labels_50['Dataset'], 'Application Type', 'file-transfer', 48020 - baseline_application_seed['Dataset']['Application Type'].value_counts()['file-transfer'])]) ctgan_balanced_application_dataset_labels_equal = utils.pd.concat([ctgan_balanced_application_dataset_labels_equal, random_sample(ctgan_application_dataset_labels_50['Dataset'], 'Application Type', 'chat', 48020 - baseline_application_seed['Dataset']['Application Type'].value_counts()['chat'])]) ctgan_balanced_application_dataset_labels_equal = utils.pd.concat([ctgan_balanced_application_dataset_labels_equal, random_sample(ctgan_application_dataset_labels_50['Dataset'], 'Application Type', 'video-streaming', 48020 - baseline_application_seed['Dataset']['Application Type'].value_counts()['video-streaming'])]) ctgan_balanced_application_dataset_labels_equal = utils.pd.concat([ctgan_balanced_application_dataset_labels_equal, random_sample(ctgan_application_dataset_labels_50['Dataset'], 'Application Type', 'voip', 48020 - baseline_application_seed['Dataset']['Application Type'].value_counts()['voip'])]) ctgan_balanced_application_dataset_labels_equal = utils.pd.concat([ctgan_balanced_application_dataset_labels_equal, random_sample(ctgan_application_dataset_labels_50['Dataset'], 'Application Type', 'email', 48020 - baseline_application_seed['Dataset']['Application Type'].value_counts()['email'])]) ctgan_balanced_application_dataset_labels_equal = utils.pd.concat([ctgan_balanced_application_dataset_labels_equal, 
random_sample(ctgan_application_dataset_labels_50['Dataset'], 'Application Type', 'browsing', 48020 - baseline_application_seed['Dataset']['Application Type'].value_counts()['browsing'])]) # - ctgan_balanced_traffic_labels_dataset_30_20_10.to_csv('./synthetic/gan_traffic_30_20_10.csv', index=False) ctgan_balanced_traffic_labels_dataset_equal.to_csv('./synthetic/gan_traffic_upsample_to_majority.csv', index=False) ctgan_balanced_application_dataset_labels_30_30_30.to_csv('./synthetic/gan_application_30000.csv', index=False) ctgan_balanced_application_dataset_labels_equal.to_csv('./synthetic/gan_application_upsample_to_majority.csv', index=False) print(f'Last Execution: {utils.datetime.datetime.now()}') assert False, 'Nothing after this point is included in the study'
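Every one of the repeated `concat`/`random_sample` calls above applies a single pattern: keep the seed rows for a class, then top that class up to a target size from the synthetic pool. A standard-library sketch of the same idea, on toy rows rather than the real traffic/application CSVs (`top_up` is a hypothetical helper, not part of this repo's `utilities` module):

```python
import random

random.seed(0)  # deterministic for the example

def top_up(seed_rows, synthetic_rows, label, target):
    """Rows of `label` from the seed, plus enough synthetic rows to reach `target`."""
    kept = [r for r in seed_rows if r[0] == label]
    pool = [r for r in synthetic_rows if r[0] == label]
    missing = max(0, target - len(kept))
    return kept + random.sample(pool, missing)

# Toy data: 3 seed 'Tor' rows, 50 synthetic ones
seed = [('Tor', i) for i in range(3)]
synthetic = [('Tor', 100 + i) for i in range(50)]

balanced = top_up(seed, synthetic, 'Tor', 10)
print(len(balanced))  # 10 rows: 3 from the seed, 7 sampled from the synthetic pool
```

The `target - value_counts()[class]` expressions in the notebook compute exactly this `missing` quantity, once per class.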
experiments/data_generation/GAN/Create_Datasets.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # **This notebook is an exercise in the [Data Visualization](https://www.kaggle.com/learn/data-visualization) course. You can reference the tutorial at [this link](https://www.kaggle.com/alexisbcook/bar-charts-and-heatmaps).** # # --- # # In this exercise, you will use your new knowledge to propose a solution to a real-world scenario. To succeed, you will need to import data into Python, answer questions using the data, and generate **bar charts** and **heatmaps** to understand patterns in the data. # # ## Scenario # # You've recently decided to create your very own video game! As an avid reader of [IGN Game Reviews](https://www.ign.com/reviews/games), you hear about all of the most recent game releases, along with the ranking they've received from experts, ranging from 0 (_Disaster_) to 10 (_Masterpiece_). # # ![ex2_ign](https://i.imgur.com/Oh06Fu1.png) # # You're interested in using [IGN reviews](https://www.ign.com/reviews/games) to guide the design of your upcoming game. Thankfully, someone has summarized the rankings in a really useful CSV file that you can use to guide your analysis. # # ## Setup # # Run the next cell to import and configure the Python libraries that you need to complete the exercise. import pandas as pd pd.plotting.register_matplotlib_converters() import matplotlib.pyplot as plt # %matplotlib inline import seaborn as sns print("Setup Complete") # The questions below will give you feedback on your work. Run the following cell to set up our feedback system. 
# Set up code checking import os if not os.path.exists("../input/ign_scores.csv"): os.symlink("../input/data-for-datavis/ign_scores.csv", "../input/ign_scores.csv") from learntools.core import binder binder.bind(globals()) from learntools.data_viz_to_coder.ex3 import * print("Setup Complete") # ## Step 1: Load the data # # Read the IGN data file into `ign_data`. Use the `"Platform"` column to label the rows. # + # Path of the file to read ign_filepath = "../input/ign_scores.csv" # Fill in the line below to read the file into a variable ign_data ign_data = pd.read_csv(ign_filepath, index_col='Platform') # Run the line below with no changes to check that you've loaded the data correctly step_1.check() # + # Lines below will give you a hint or solution code #step_1.hint() #step_1.solution() # - # ## Step 2: Review the data # # Use a Python command to print the entire dataset. # Print the data ign_data # Your code here # The dataset that you've just printed shows the average score, by platform and genre. Use the data to answer the questions below. # + # Fill in the line below: What is the highest average score received by PC games, # for any genre? high_score = ign_data.loc['PC', :].max() # Fill in the line below: On the Playstation Vita platform, which genre has the # lowest average score? Please provide the name of the column, and put your answer # in single quotes (e.g., 'Action', 'Adventure', 'Fighting', etc.) worst_genre = ign_data.loc['PlayStation Vita', :].idxmin() # Check your answers step_2.check() # + # Lines below will give you a hint or solution code #step_2.hint() #step_2.solution() # - # ## Step 3: Which platform is best? # # Since you can remember, your favorite video game has been [**Mario Kart Wii**](https://www.ign.com/games/mario-kart-wii), a racing game released for the Wii platform in 2008. And, IGN agrees with you that it is a great game -- their rating for this game is a whopping 8.9! 
Inspired by the success of this game, you're considering creating your very own racing game for the Wii platform. # # #### Part A # # Create a bar chart that shows the average score for **racing** games, for each platform. Your chart should have one bar for each platform. # + # Bar chart showing average score for racing games by platform plt.figure(figsize=(8, 6)) sns.barplot(x=ign_data.Racing, y=ign_data.index) plt.title('Average Score for Racing Games, by Platform') plt.xlabel('Scores') # Check your answer step_3.a.check() # + # Lines below will give you a hint or solution code #step_3.a.hint() #step_3.a.solution_plot() # - # #### Part B # # Based on the bar chart, do you expect a racing game for the **Wii** platform to receive a high rating? If not, what gaming platform seems to be the best alternative? # + #step_3.b.hint() # - # Check your answer (Run this code cell to receive credit!) step_3.b.solution() # ## Step 4: All possible combinations! # # Eventually, you decide against creating a racing game for Wii, but you're still committed to creating your own video game! Since your gaming interests are pretty broad (_... you generally love most video games_), you decide to use the IGN data to inform your new choice of genre and platform. # # #### Part A # # Use the data to create a heatmap of average score by genre and platform. # + # Heatmap showing average game score by platform and genre plt.figure(figsize=(10, 10)) sns.heatmap(data=ign_data, annot=True) plt.title('Average Game Score, by Platform and Genre') plt.xlabel('Genre') # Check your answer step_4.a.check() # + # Lines below will give you a hint or solution code #step_4.a.hint() #step_4.a.solution_plot() # - # #### Part B # # Which combination of genre and platform receives the highest average ratings? Which combination receives the lowest average rankings? # + #step_4.b.hint() # + # Check your answer (Run this code cell to receive credit!) 
#step_4.b.solution() # - # # Keep going # # Move on to learn all about **[scatter plots](https://www.kaggle.com/alexisbcook/scatter-plots)**! # --- # # # # # *Have questions or comments? Visit the [Learn Discussion forum](https://www.kaggle.com/learn-forum/161291) to chat with other Learners.*
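Step 2 above picks the best and worst genre out of one platform's row with `.max()` and `.idxmin()`. The same lookup on a plain dict, with made-up scores rather than the IGN data:

```python
# Toy stand-in for one row of ign_data (values are invented, not IGN scores)
pc_scores = {'Action': 7.0, 'Racing': 6.5, 'Sports': 7.4}

highest = max(pc_scores.values())                 # analogue of row.max()
worst_genre = min(pc_scores, key=pc_scores.get)   # analogue of row.idxmin()
print(highest, worst_genre)  # 7.4 Racing
```

`min(d, key=d.get)` returns the *key* of the smallest value, which is what `idxmin()` does for a pandas Series.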
19-Day-Kaggle-Competition/exercise-bar-charts-and-heatmaps.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # ---

from anytree import Node, NodeMixin, RenderTree


# Add tree features to a custom class via the NodeMixin base class
class mynode(NodeMixin):
    def __init__(self, name=None, parent=None):
        super(mynode, self).__init__()
        self.left = None
        self.name = name
        self.parent = parent


root = mynode(name='root')
root.left = mynode(name='left', parent=root)

for pre, _, node in RenderTree(root):
    treestr = u"%s%s" % (pre, node.name)
    print(treestr.ljust(8))

# The same kind of tree built with anytree's ready-made Node class
root = Node("root")
child1 = Node("child1", parent=root)
child2 = Node("child2", parent=root)  # Node requires a name as its first argument
root2 = root
child3 = Node('c3', parent=child2)
print(root2.children)

print(RenderTree(root))
print(RenderTree(root2))

newchild0 = Node("newchild0", parent=root)
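`RenderTree` walks the tree in pre-order and yields a prefix for each node. A dependency-free sketch of that traversal, keeping only the depth-based indentation (anytree's real output uses box-drawing connectors like `├──`; the `render` helper here is an illustration, not the anytree API):

```python
def render(node, children, depth=0, lines=None):
    """Pre-order walk of a tree stored as {parent: [children]}, indenting by depth."""
    if lines is None:
        lines = []
    lines.append("    " * depth + node)
    for child in children.get(node, []):
        render(child, children, depth + 1, lines)
    return lines

# Same shape as the Node example above: root -> child1, child2; child2 -> c3
tree = {'root': ['child1', 'child2'], 'child2': ['c3']}
for line in render('root', tree):
    print(line)
```

Parents appear before their children, and a node's indentation tells you its depth — the two properties the `RenderTree` loop relies on.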
1_decisionTree/.ipynb_checkpoints/Untitled-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # ---

# # Demo of MuSCADeT on a larger image

# Import libraries
import astropy.io.fits as pf
import numpy as np
import matplotlib.pyplot as plt
from MuSCADeT import MCA as MC
from MuSCADeT import colour_subtraction as cs

# At this point, the data are loaded.

## Opening data cube
cube = pf.open('Simu_big/Cube.fits')[0].data
nb, n1, n2 = np.shape(cube)

# Setting up the parameters for MuSCADeT and running.

# +
## Parameters
pca = 'None'  # Do not use PCA because A is precomputed
n = 100       # Number of iterations
k = 5         # Threshold
ns = 2        # Number of sources
angle = 5     # Angle between PCA lines

# NOTE: Aprior (the precomputed nb x ns mixing matrix) is used below but never
# defined in this notebook; it must be loaded before this cell runs.

# Run MuSCADeT
S, A = MC.mMCA(cube, Aprior, k, n, mode=pca)
# -

# Display of the results. The first plot shows the model and a comparison with the original image. The model is simply `AS` with `A` and `S` as estimated by MuSCADeT.

# +
# Models as extracted by MuSCADeT for display
model = np.dot(A, S.reshape([A.shape[1], n1 * n2])).reshape(cube.shape)

normodel = cs.asinh_norm(model, Q=100, range=1)
normcube = cs.asinh_norm(cube, Q=100, range=1)
normres = cs.asinh_norm(cube - model, Q=10, range=1)

plt.figure(figsize=(15, 5))
plt.subplot(131)
plt.title('model')
plt.imshow(normodel)
plt.subplot(132)
plt.title('data')
plt.imshow(normcube)
plt.subplot(133)
plt.title('Residuals')
plt.imshow(normres)
plt.show()
# -

# This sequence of plots shows each component, with what the data looks like once the components are removed, i.e. `Y-AiSi`.
for i in range(A.shape[1]):
    C = A[:, i, np.newaxis, np.newaxis] * S[np.newaxis, i, :, :]
    normC = cs.asinh_norm(C, Q=100, range=1)
    normCres = cs.asinh_norm(cube - C, Q=50, range=1)

    plt.figure(figsize=(15, 5))
    plt.subplot(131)
    plt.title('data')
    plt.imshow(normcube)
    plt.subplot(132)
    plt.title('component ' + str(i))
    plt.imshow(normC)
    plt.subplot(133)
    plt.title('data - component ' + str(i))
    plt.imshow(normCres)
    plt.show()
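The `cs.asinh_norm` calls above compress the images' dynamic range for display. A common form of such a stretch — an assumption here, the exact formula lives in MuSCADeT's `colour_subtraction` module — is `asinh(Q*x)/asinh(Q)`, which is roughly linear near zero and logarithmic for large values, so faint structure stays visible next to bright cores:

```python
import math

def asinh_stretch(x, Q=100):
    """Hypothetical asinh display stretch: ~linear near 0, log-like for large x."""
    return math.asinh(Q * x) / math.asinh(Q)

print(asinh_stretch(0.0))  # 0.0
print(asinh_stretch(1.0))  # 1.0
```

Larger `Q` compresses the bright end more aggressively, which matches how the notebook uses a higher `Q` for the data/model panels than for the residuals.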
Examples/Example_blend_256x256.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/sampath11/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling/blob/master/module3-make-explanatory-visualizations/LS_DS_124_Sequence_your_narrative_Assignment.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] colab_type="text" id="JbDHnhet8CWy" # _Lambda School Data Science_ # # # Sequence Your Narrative - Assignment # # Today we will create a sequence of visualizations inspired by [<NAME>'s 200 Countries, 200 Years, 4 Minutes](https://www.youtube.com/watch?v=jbkSRLYSojo). # # Using this [data from Gapminder](https://github.com/open-numbers/ddf--gapminder--systema_globalis/): # - [Income Per Person (GDP Per Capital, Inflation Adjusted) by Geo & Time](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--income_per_person_gdppercapita_ppp_inflation_adjusted--by--geo--time.csv) # - [Life Expectancy (in Years) by Geo & Time](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--life_expectancy_years--by--geo--time.csv) # - [Population Totals, by Geo & Time](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--population_total--by--geo--time.csv) # - [Entities](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--entities--geo--country.csv) # - [Concepts](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--concepts.csv) # + [markdown] colab_type="text" id="zyPYtsY6HtIK" # Objectives # - sequence multiple visualizations # - combine qualitative 
anecdotes with quantitative aggregates
#
# Links
# - [<NAME>'s TED talks](https://www.ted.com/speakers/hans_rosling)
# - [Spiralling global temperatures from 1850-2016](https://twitter.com/ed_hawkins/status/729753441459945474)
# - "[The Pudding](https://pudding.cool/) explains ideas debated in culture with visual essays."
# - [A Data Point Walks Into a Bar](https://lisacharlotterost.github.io/2016/12/27/datapoint-in-bar/): a thoughtful blog post about emotion and empathy in data storytelling

# + [markdown] colab_type="text" id="_R1bj8aXzyVA"
# # ASSIGNMENT
#
# 1. Replicate the Lesson Code
# 2. Take it further by using the same gapminder dataset to create a sequence of visualizations that together tell a story of your choosing.
#
# Get creative! Use text annotations to call out specific countries. Maybe change how the points are colored, change the opacity of the points, change their size, or pick a specific time window. Maybe only work with a subset of countries; change fonts, change background colors, etc. Make it your own!
# + id="gBE_yQRk3pYZ" colab_type="code" colab={} # TODO # #!pip install --upgrade seaborn # + id="35qgDGF_uXlG" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="91cab5e3-608c-48bb-e070-f0f72b119b99" import seaborn as sns sns.__version__ # + id="xXoGjjDGue0i" colab_type="code" colab={} # %matplotlib inline import matplotlib.pyplot as plt import numpy as np import pandas as pd # + id="YdssOb83urfv" colab_type="code" colab={} income = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--income_per_person_gdppercapita_ppp_inflation_adjusted--by--geo--time.csv') # + id="wiAuAubWuwRd" colab_type="code" colab={} lifespan = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--life_expectancy_years--by--geo--time.csv') # + id="GlWXsQ8Qu00b" colab_type="code" colab={} population = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--population_total--by--geo--time.csv') # + id="pP6EYBXlu4VI" colab_type="code" colab={} entities = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--entities--geo--country.csv') # + id="fMr-LFf_u51F" colab_type="code" colab={} concepts = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--concepts.csv') # + id="tCkwj-pZu_br" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="babc54fe-8c4f-4476-9e0b-4651edd0cfd9" income.shape, lifespan.shape, population.shape, entities.shape, concepts.shape # + id="cIQBz6aqvHwZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 195} outputId="bcb2c3d4-f4dc-4096-d2df-03b829e6a593" income.head() # + id="Gms_JWA1vLit" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 343} 
outputId="fffa32ee-ebba-4079-b1ec-a2f44e228a5e" income.sample(10) # + id="UuH6MtL5vOGL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 195} outputId="35cb6d8b-c3bd-443b-c3ce-caa694cc02fa" lifespan.head() # + id="hHzU-kvCvTIW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 343} outputId="b4f5cf42-410f-4fb9-c681-4b5d9bd62271" lifespan.sample(10) # + id="YtoTJr5N3heP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 195} outputId="da918a36-b80c-4d8c-dc56-84e969fcd2b9" population.head() # + id="Kzdjrvu23kAt" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 343} outputId="c4759057-93d1-4e03-d381-b6b5015bd5ed" population.sample(10) # + id="saILyocw3tE2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 249} outputId="40b48cda-9a86-4c91-9552-736fa43fb016" entities.head() # + id="u8V5D6-Y3vMh" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 501} outputId="6bd2b0cd-edf4-4095-e3b5-7e1d4006c20a" concepts.head() # + id="d_-hQrb54OTd" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 136} outputId="8f375191-dc30-4fe8-b2e8-3bab80ae2bd8" df_1 = pd.DataFrame({'Student': ['Peter', 'Zach', 'Jane'], 'Math Test Score': [75, 89, 82]}) df_1 # + id="dpnzck9r4Sxb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 136} outputId="09a4f330-7acc-4f57-cdd8-10f9e4859f48" df_2 = pd.DataFrame({'Student': ['Alice', 'Peter', 'Jane'], 'Biology Test Score': [78, 87, 90]}) df_2 # + id="lTsQRdQz4WJP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 136} outputId="6d64650e-d9e2-4569-b51b-fac673254edc" pd.merge(df_1, df_2, on='Student', how='right') # + id="Pu93hLP84jg3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 136} outputId="1c76e55c-802b-4881-a2ac-4f4d23094b0c" df_3 = pd.DataFrame({'Name': ['Alice', 'Andrew', 'Jane'], 'Chemistry Test Scores': 
['A', 'B', 'C']}) df_3 # + id="FbuZawlk4oaQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 106} outputId="5b3bd5cc-f5fa-498d-ae08-6b83b742098f" df_4 = pd.merge(df_2, df_3, left_on='Student', right_on='Name').drop(columns=['Student']) df_4 # + id="R4v6MzOs5J0i" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="b86427d7-8260-4206-81dd-85270ec6ad7a" income.shape # + id="Wd-qSdFl5Nv2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="d49edf42-e574-4b36-b4b8-62495bcba17d" lifespan.shape # + id="U5hpUGrm5Pf_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="74ad471d-30b0-4b9d-bfd1-5f51dfaafb0c" df = pd.merge(income, lifespan) df.shape # + id="HMTDozrH5Wfm" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="edfa169d-ea5e-4b78-defa-8d8b631096ee" df = pd.merge(income, lifespan, on=['geo', 'time'], how='inner') df.shape # + id="Oo_PTxpZ5dqI" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 195} outputId="cec29913-eecb-49db-8003-ea2cd1fa31e5" df.head() # + id="jLsHvToU5kHy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 195} outputId="9663c1ab-2148-43ab-9fc1-72c5f07823ad" df = pd.merge(df, population) df.head() # + id="XQUsUBkB51Nn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 249} outputId="99ef8ee8-4d7d-46f2-e939-2972ff36319e" entities.head() # + id="QwYtsJut54CR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 215} outputId="258c7633-9c0e-4f1c-d621-2beb9366f322" subset_cols = ['country', 'name', 'world_4region', 'world_6region'] merged = pd.merge(df, entities[subset_cols], left_on='geo', right_on='country') merged = merged.drop(columns=['geo']) merged.head() # + id="tmCprYNL6FcC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 195} 
outputId="de30eb7b-1c24-4ce9-e7f0-727aea399120" mapping_1 = { 'time': 'year', 'country': 'country_code', 'income_per_person_gdppercapita_ppp_inflation_adjusted': 'income', 'population_total': 'population', 'world_4region': '4region', 'world_6region': '6region', 'life_expectancy_years': 'lifespan' } mapping_2 = {'name': 'country'} merged = merged.rename(columns=mapping_1) merged = merged.rename(columns=mapping_2) merged.head() # + id="ZJdrFcSa6K2j" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 168} outputId="64e108c8-28a6-4838-c544-d02c6e317bba" merged.dtypes # + id="bwLI7KJD6OmZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 284} outputId="babc52e3-a2ac-448a-b5bd-3d52b424cbb6" merged.describe() # + id="disl6aZB6cUf" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 166} outputId="b7e016df-1dec-4c64-b3f4-d648f07ec261" merged.describe(exclude='number') # + id="VFiiFbCB6g56" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="a981aa55-4f87-45f7-8e0d-c654d4ee0aae" merged['country'].value_counts() # + [markdown] id="PKjJTQXI3qGI" colab_type="text" # # STRETCH OPTIONS # # ## 1. Animate! # # - [How to Create Animated Graphs in Python](https://towardsdatascience.com/how-to-create-animated-graphs-in-python-bb619cc2dec1) # - Try using [Plotly](https://plot.ly/python/animations/)! # - [The Ultimate Day of Chicago Bikeshare](https://chrisluedtke.github.io/divvy-data.html) (Lambda School Data Science student) # - [Using Phoebe for animations in Google Colab](https://colab.research.google.com/github/phoebe-project/phoebe2-docs/blob/2.1/tutorials/animations.ipynb) # # ## 2. Study for the Sprint Challenge # # - Concatenate DataFrames # - Merge DataFrames # - Reshape data with `pivot_table()` and `.melt()` # - Be able to reproduce a FiveThirtyEight graph using Matplotlib or Seaborn. # # ## 3. 
Work on anything related to your portfolio site / Data Storytelling Project # + id="<KEY>" colab_type="code" colab={} # TODO
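As a quick recap of the Sprint Challenge topics listed above, here is a minimal sketch of `concat`, `merge`, and `melt` on two tiny hypothetical score frames (made-up data, not from this lesson's files):

```python
import pandas as pd

# Two small frames to practice the Sprint Challenge skills on (hypothetical data).
scores_math = pd.DataFrame({'Student': ['Peter', 'Zach'], 'Math': [75, 89]})
scores_bio = pd.DataFrame({'Student': ['Peter', 'Jane'], 'Biology': [87, 90]})

# Concatenate: stack rows; ignore_index=True rebuilds a clean 0..n-1 index
stacked = pd.concat([scores_math, scores_bio], ignore_index=True)

# Merge: align on the key column; how='outer' keeps students from both frames
merged = pd.merge(scores_math, scores_bio, on='Student', how='outer')

# Melt: wide -> long ("tidy") format, one row per (student, subject, score)
tidy = merged.melt(id_vars='Student', var_name='Subject', value_name='Score')

print(stacked.shape, merged.shape, tidy.shape)
```

Note how `concat` unions the columns (filling gaps with `NaN`), while `merge` aligns rows on the key.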
module3-make-explanatory-visualizations/LS_DS_124_Sequence_your_narrative_Assignment.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Workshop 5: Statistics (Optional) # + tags=[] # standard preamble import numpy as np import scipy as sp import matplotlib.pyplot as plt # %matplotlib inline # - # ## 2d distributions # # You can create two independent samples of events and plot their distribution as a *scatter* plot: # + tags=[] x = np.random.standard_normal(size=1000) y = np.random.standard_normal(size=1000) plt.scatter(x,y) plt.xlabel('x') plt.ylabel('y') # - # You can compute the correlation matrix for two variables: # + tags=[] print (np.corrcoef(x,y)) # - # Although more instructive perhaps is to print the full covariance matrix: # + tags=[] print (np.cov(x,y)) # - # Here is a cute example of plotting projection histograms together with the scatter plot: # (from http://matplotlib.org/examples/pylab_examples/scatter_hist.html ) # + tags=[] import numpy as np import matplotlib.pyplot as plt from matplotlib.ticker import NullFormatter # the random data x = np.random.randn(1000) y = np.random.randn(1000) nullfmt = NullFormatter() # no labels # definitions for the axes left, width = 0.1, 0.65 bottom, height = 0.1, 0.65 bottom_h = left_h = left + width + 0.02 rect_scatter = [left, bottom, width, height] rect_histx = [left, bottom_h, width, 0.2] rect_histy = [left_h, bottom, 0.2, height] # start with a rectangular Figure plt.figure(1, figsize=(8, 8)) axScatter = plt.axes(rect_scatter) axHistx = plt.axes(rect_histx) axHisty = plt.axes(rect_histy) # no labels axHistx.xaxis.set_major_formatter(nullfmt) axHisty.yaxis.set_major_formatter(nullfmt) # the scatter plot: axScatter.scatter(x, y) # now determine nice limits by hand: binwidth = 0.25 xymax = np.max([np.max(np.fabs(x)), np.max(np.fabs(y))]) lim = (int(xymax/binwidth) + 1) * binwidth axScatter.set_xlim((-lim, lim)) 
axScatter.set_ylim((-lim, lim)) bins = np.arange(-lim, lim + binwidth, binwidth) axHistx.hist(x, bins=bins) axHisty.hist(y, bins=bins, orientation='horizontal') axHistx.set_xlim(axScatter.get_xlim()) axHisty.set_ylim(axScatter.get_ylim()) axScatter.set_xlabel('x') axScatter.set_ylabel('y') plt.show() # - # You can also create a correlated sample: # + tags=[] # mean values of two variables mean = [0, 0] # covariance matrix # Note that the covariance matrix must be positive semidefinite (a.k.a. nonnegative-definite). # Otherwise, the behavior of this method is undefined and backwards compatibility is not guaranteed. cov = [[1, 0.8], [0.8, 1]] # produce a sample x, y = np.random.multivariate_normal(mean, cov, 1000).T # plot -- this looks like a streak plt.scatter(x,y) plt.xlabel('x') plt.ylabel('y') # -
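As a sanity check on the correlated sample above, we can verify that the empirical correlation of a large sample lands close to the 0.8 we put in the covariance matrix. This sketch uses a seeded `default_rng` generator (an addition here, for reproducibility; the cell above uses the legacy `np.random` interface):

```python
import numpy as np

rng = np.random.default_rng(0)

mean = [0, 0]
cov = [[1, 0.8], [0.8, 1]]  # off-diagonal 0.8 is the desired correlation

# draw a large correlated sample, as in the cell above
x, y = rng.multivariate_normal(mean, cov, 100_000).T

# the empirical correlation coefficient should be very close to 0.8
r = np.corrcoef(x, y)[0, 1]
print(r)
```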
Week07/WS05/Workshop05_optional.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from pathlib import Path import calplot import matplotlib.pyplot as plt import numpy as np import pandas as pd import seaborn as sns from matplotlib import colors # - def report_na(df, col): print( "{:0.2f}% rows do not contain {:} recommendation".format( 100 * df[col].isna().mean(), col ) ) # # import data fln = "surfweer_data_2021_01_26_clean.csv" df = pd.read_csv( fln, index_col=0, parse_dates=["report_date", "post_date"], dtype={"wetsuit": "str", "schoen": "str", "cap": "str"}, ) df["week"] = df["report_date"].apply(lambda x: x.isocalendar()[1]) df["weekday"] = df["report_date"].apply(lambda x: x.weekday()) # # calendar heatmap df["test"] = 1 df_sub = df[["report_date", "test"]] df_sub = df_sub.drop_duplicates() events = pd.Series(df_sub["test"].to_numpy(), index=df_sub["report_date"]) cmap = colors.ListedColormap(["xkcd:ocean blue", "green"]) calplot.calplot(events, cmap=cmap); # # monthly surf days df_monthly = df.groupby(["year", "month"])["report_date"].count() df_monthly = df_monthly.reset_index() sns.catplot(x="month", y="report_date", col="year", data=df_monthly, kind="bar") # # wetsuit # ## preprocessing wetsuit thickness # As per [srface](https://srface.com/knowledge-base/neoprene-wetsuit-thickness/), *brands usually advertise their wetsuit neoprene thicknesses as 3/2, 4/3, 5/4, 6/4, etc. 3/2 for instance, means this wetsuit’s main panels are 3mm and 2mm thick. Normally, the chest and back panels are made out of thicker neoprene foam for extra warmth. Arms, shoulders, and legs are usually thinner for more flexibility.* # # I chose to use the main panel thickness only in the analysis, to reduce the number of types. It will be renamed using 3, 4, 5, 6. 
def rename_wetsuit(wet_suit): """ Rename wetsuit, use only the main panel thickness (the first character in the string) """ try: return wet_suit[0] except: pass df["wetsuit"] = df["wetsuit"].apply(rename_wetsuit) # + def sort_hue(values): """Sort a list of numeric strings in ascending order.""" values.sort(key=float) return values def monthly_summary(df, item): # Create monthly summary monthly = df.groupby(["month", item])["report_date"].count() monthly = monthly.reset_index() values = list(monthly[item].unique()) # plot ax = sns.histplot( x="month", weights="report_date", hue=item, data=monthly, multiple="stack", bins=12, discrete=True, hue_order=sort_hue(values), shrink=0.8, ) ax.set_xlabel("month") ax.set_ylabel("days") ax.set_xticks(np.arange(1, 12 + 1, 1.0)) ax.set_xticklabels( [ "Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec", ] ) labels = ["{:} mm".format(value) for value in sort_hue(values)] ax.legend( title=item, labels=labels, ) return ax # - # ## cap report_na(df, "cap") monthly_summary(df, "cap") # ## schoen report_na(df, "schoen") monthly_summary(df, "schoen") # ## wetsuit report_na(df, "wetsuit") ax = monthly_summary(df, "wetsuit") ax.set_xlabel('') fig = ax.figure
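The aggregation at the heart of `monthly_summary()` — counting report days per month and gear type — can be sketched on a tiny synthetic frame (hypothetical values, not the actual surf-report CSV):

```python
import pandas as pd

# Tiny synthetic stand-in for the surf-report frame (made-up values).
df = pd.DataFrame({
    'month': [1, 1, 1, 2, 2, 3],
    'wetsuit': ['5', '5', '4', '5', '4', '3'],
    'report_date': pd.date_range('2021-01-01', periods=6),
})

# Same aggregation monthly_summary() starts from:
# count the report days per (month, wetsuit thickness) combination.
monthly = df.groupby(['month', 'wetsuit'])['report_date'].count().reset_index()
print(monthly)
```

`reset_index()` turns the `(month, wetsuit)` group keys back into ordinary columns, which is the long-format shape `sns.histplot` expects.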
data_visualization.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.8.0 64-bit (conda) # metadata: # interpreter: # hash: 03877239ae18a66c3d863e76fda72b4917823ea0b4359fa63eeccf9c06b3cab7 # name: python3 # --- # # Loading Image Data # So far, we have been using datasets which were pre-processed; this will not be the case for real datasets. # Instead, you will deal with datasets containing full-sized images, such as those you might take with your smartphone camera. # We are going to see how to load images and how to use them to train neural networks. # We are going to use the [dogs vs. cats dataset](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. # # In the end, we want to train a neural network able to differentiate between cats and dogs. # + # %matplotlib inline # %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper # - # The easiest way to load image data is `datasets.ImageFolder` from `torchvision`. # `dataset = datasets.ImageFolder("path_to_folder", transform=transform)` # The `path_to_folder` is the path to the folder containing the data and [`transforms`](https://pytorch.org/vision/stable/transforms.html) is the list of preprocessing methods within the module [`torchvision`](https://pytorch.org/vision/stable/index.html). # # The `datasets.ImageFolder` function expects the folders to be organized in a specific way: # ``` # root/dog/xxx.png # root/dog/xxy.png # root/dog/xxz.png # # root/cat/xxx.png # root/cat/xxy.png # root/cat/xxz.png # ``` # where each class is a directory in the `root` directory. So, when the dataset is loaded, the images in each folder are labeled with the name of their parent folder. # # ## Transforms # When you load the data with `datasets.ImageFolder` you can perform some transformations on the loaded dataset.
For example, if we have images of different dimensions, we need to resize them to a common size. You can use `transforms.Resize()` or crop with `transforms.CenterCrop()` or `transforms.RandomResizedCrop()`. We also need to convert the images into PyTorch tensors with `transforms.ToTensor()`. # Typically, you compose all these transformations into a pipeline with `transforms.Compose()`, which accepts a list of transforms and applies them in the given order. # For example: # ``` # transforms.Compose([transforms.Resize(255), # transforms.CenterCrop(254), # transforms.ToTensor()]) # ``` # # ## Data Loaders # With the `ImageFolder` loaded, we have to pass it to a `DataLoader`, which takes a dataset with the specific structure we have seen (i.e. consistent with `ImageFolder`). It returns batches of images with their respective labels. It is possible to set: # 1. the batch size # 2. whether the data is shuffled after each epoch # `dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)`. # The `dataloader` is a generator. So, to get the data out of it you have to either: # 1. loop through it # 2. convert it into an iterator and call `next()` # # ``` # # we loop through it and get a batch at each iteration (until all the data has been seen) # for images, labels in trainloader: # pass # # # get one batch # images, labels = next(iter(trainloader)) # ``` # Now, let's load the training images, set some transformations on the dataset, and finally load the dataset.
# + data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(254), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform = transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # - images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) # ## Data Augmentation # A common way to train a neural network is to introduce some randomness in the input data to avoid overfitting, i.e. to make the network generalize. During training we can randomly: # 1. rotate # 2. mirror # 3. scale # 4. crop # the images. # ```python # train_transforms = transforms.Compose([transforms.RandomRotation(30), # transforms.RandomResizedCrop(224), # transforms.RandomHorizontalFlip(), # transforms.ToTensor(), # transforms.Normalize([0.5, 0.5, 0.5], # [0.5, 0.5, 0.5])]) # ``` # Typically, you also want to normalize the images. You pass: # 1. a list of means # 2. a list of standard deviations # So the input channels are normalized as: # `input[channel] = (input[channel]-mean[channel])/std[channel]` # Normalizing helps keep the weights near zero, which helps keep the training phase (backpropagation) stable. Without normalization, training typically fails. # During validation and testing, however, we want to use unaltered images, so we do not apply the random transformations there; for simplicity, we also skip normalization altogether below. # # Now, we are going to define the `trainloader` and the `testloader`, for now without normalization for the trainloader.
# + data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.Resize(50), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(51), transforms.CenterCrop(50), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # + # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) print(images[ii].shape) # - # Now, we have 2 different folders: # 1. train folder # 2. test folder # So, we can use them in order to classify the images of cats and dogs. # Now, we train our network. # + from torch import nn, optim import torch.nn.functional as F class Classifier(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(7500,256) self.fc2 = nn.Linear(256,32) self.fc3 = nn.Linear(32,2) def forward(self, x): x = x.view(x.shape[0],-1) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = F.log_softmax(self.fc3(x), dim=1) return x # - # Now we can do the whole training: # 1. forward pass # 2. 
backward pass # + model = Classifier() criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr = 0.003) epochs = 5 train_losses, test_losses = [], [] for e in range(epochs): current_loss = 0.0 for images, labels in trainloader: # when computing the predictions on a new batch # we have to throw away the gradients previously computed optimizer.zero_grad() # the images are flattened inside the model's forward pass log_ps = model(images) loss = criterion(log_ps, labels) current_loss += loss.item() # backward pass loss.backward() # update the parameters optimizer.step() else: # we are out of an epoch accuracy = 0.0 test_loss = 0.0 with torch.no_grad(): for images, labels in testloader: log_ps = model(images) test_loss += criterion(log_ps, labels).item() ps = torch.exp(log_ps) top_p, top_class = ps.topk(1, dim=1) equals = top_class == labels.view(*top_class.shape) accuracy += torch.mean(equals.type(torch.float)) train_losses.append(current_loss/len(trainloader)) test_losses.append(test_loss/len(testloader)) print("Epoch: {}/{}".format(e+1, epochs)) print("Train Loss: {:.3f}".format(current_loss/len(trainloader))) print("Test Loss: {:.3f}".format(test_loss/len(testloader))) # - #
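The accuracy bookkeeping in the validation loop above — pick the most probable class, compare with the labels, average — can be illustrated with a toy NumPy batch (a stand-in for the torch tensors, using `argmax` in place of `topk(1, dim=1)`):

```python
import numpy as np

# Toy batch: log-probabilities for 4 images over 2 classes (made-up numbers).
log_ps = np.log(np.array([[0.9, 0.1],
                          [0.2, 0.8],
                          [0.6, 0.4],
                          [0.3, 0.7]]))
labels = np.array([0, 1, 1, 1])

ps = np.exp(log_ps)            # back to probabilities
top_class = ps.argmax(axis=1)  # index of the most probable class per image
equals = top_class == labels   # per-image correctness
accuracy = equals.mean()       # fraction of correct predictions in the batch
print(accuracy)  # → 0.75 (3 of 4 images classified correctly)
```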
Part 7 - Loading Image Data.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Download audio file # # + # !pip install librosa import re import os from tqdm import tqdm import pandas as pd import numpy as np import subprocess import scipy.io.wavfile as wav import librosa from mega import Mega import getpass import warnings; warnings.simplefilter('ignore') # - # ## Create a pandas dataset df=pd.read_csv(os.path.abspath(os.path.join(os.getcwd(),os.pardir,os.pardir))+'/data/data_split.csv',index_col=0) df.sample(10) df['class_label'].value_counts() # ?librosa.feature.mfcc # ## Download Audio => Extract Features => Upload on Mega # def get_audio_features(filename: str) -> dict: ''' input: filename (.wav file) output: dictionary containing MFCC and mel-spectrogram features ''' hop_length = 512 y, sr = librosa.load(filename) mfcc_ = librosa.feature.mfcc(y=y, sr=sr, hop_length=hop_length, n_mfcc=13,n_fft=513) mel_spect = librosa.feature.melspectrogram(y=y,sr=sr,n_fft=513,win_length=400) return dict(mfcc = mfcc_, mel_spec = mel_spect) def save_on_mega(file: str, m: Mega): ''' save data in the folder 'features_'; the folder has been manually created on the Mega website ''' folder = m.find('features_') m.upload(file, folder[0]) # + # login to Mega account m = Mega() email = input('insert email ') psw = getpass.getpass(prompt='insert password') m.login(email,psw) # + not_downloaded = dict(Animal = 0, Humans = 0, Natural = 0) #n.b.
you might have to change the working directory (os.chdir()) for i, row in tqdm(df.iterrows()): url = 'https://www.youtube.com/watch?v=' + row['url'] file_name = str(i)+"_"+row['class_label'] try: # download youtube video & create a clipped .wav file subprocess.Popen("ffmpeg -ss " + str(row['start_time']) + " -i $(youtube-dl -f 140 --get-url " + url + ") -t 10 -c:v copy -c:a copy " + file_name + ".mp4", shell=True).wait() subprocess.Popen("ffmpeg -i "+file_name+".mp4 -ab 160k -ac 2 -ar 44100 -vn "+file_name+'.wav',shell=True).wait() # extract mfcc, mel features res = get_audio_features(file_name+'.wav') # save .npy file and upload on mega np.save(file_name, res) save_on_mega(file_name+'.npy',m) # remove the .mp4, .wav and .npy files subprocess.Popen('rm '+file_name+'.mp4',shell=True).wait() subprocess.Popen('rm '+file_name+'.wav',shell=True).wait() subprocess.Popen('rm '+file_name+'.npy',shell=True).wait() except Exception: not_downloaded[row['class_label']] += 1
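One caveat worth noting about the saved feature files: `np.save` stores a Python dict as a 0-d object array, so reading a file back later requires `allow_pickle=True` plus `.item()` to unwrap the dict. A minimal round-trip sketch with synthetic arrays (the shapes and the `0_Animal` filename are illustrative, not taken from real clips):

```python
import os
import tempfile

import numpy as np

# Synthetic stand-ins for the librosa outputs (shapes are illustrative).
res = dict(mfcc=np.zeros((13, 431)), mel_spec=np.zeros((128, 431)))

# np.save wraps the dict in a 0-d object array inside the .npy file
path = os.path.join(tempfile.mkdtemp(), '0_Animal.npy')
np.save(path, res)

# reading it back needs allow_pickle=True; .item() unwraps the dict
loaded = np.load(path, allow_pickle=True).item()
print(loaded['mfcc'].shape, loaded['mel_spec'].shape)
```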
src/utils/download_audio.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="JNztBYD8UiId" colab_type="code" outputId="824e8065-5ed5-48e4-8863-d1b5fc507d28" executionInfo={"status": "ok", "timestamp": 1591016751512, "user_tz": 240, "elapsed": 29279, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} colab={"base_uri": "https://localhost:8080/", "height": 122} from google.colab import drive drive.mount("/content/drive/") # + id="-2EswraU-0Ow" colab_type="code" colab={} # notes: scanpy has several versions, afer 17May2020, it become 1.5.1 from 1.4.6 # !pip install scanpy # !pip install leidenalg # + id="yjyyMxYmDOQi" colab_type="code" colab={} # #!pip install bbknn==1.3.6 # #!pip install umap-learn==0.3.9 # + id="GMIkrXu1A8ok" colab_type="code" colab={} import h5py import numpy as np import pandas as pd import scanpy as sc import seaborn as sns import matplotlib.pyplot as plt from cycler import cycler # + id="6495V204e_FL" colab_type="code" outputId="203829d7-ba7a-4803-e84b-2da006e3b957" executionInfo={"status": "ok", "timestamp": 1591016794690, "user_tz": 240, "elapsed": 2133, "user": {"displayName": "<NAME>", "photoUrl": "<KEY>", "userId": "08926911206443480647"}} colab={"base_uri": "https://localhost:8080/", "height": 54} # notice that scanpy already became 1.5.1 after 17May2020 sc.settings.verbosity = 3 sc.logging.print_versions() sc.settings.set_figure_params(dpi=80, figsize=(4, 4)) # + id="u0ZoJX_jUnk6" colab_type="code" colab={} import os os.chdir("/content/drive/Shared drives/CARD/projects/iNDI/line_prioritization/projects_lirong/Florian_data") # + id="yA6RBuNsSn7K" colab_type="code" colab={} adata = sc.read_h5ad("florian_concat_leiden.h5ad") # + id="InYHnVZ8jlf9" colab_type="code" 
outputId="ab4ed84c-1f53-4275-98d5-d7d613e23828" executionInfo={"status": "ok", "timestamp": 1590797129716, "user_tz": 240, "elapsed": 442, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} colab={"base_uri": "https://localhost:8080/", "height": 156} adata # + [markdown] id="qaZVdaSq8BPp" colab_type="text" # # Calculate the percentage of cells for a specific cluster for a specific donors # + id="OGQm6uFURN2W" colab_type="code" colab={} df_all = adata.obs.loc[:, ["donor_label","batch","leiden_0.6"]] # + id="j5DS4ErRRiKS" colab_type="code" outputId="9b94db77-06c5-498e-d84f-b21303ec78d1" executionInfo={"status": "ok", "timestamp": 1591024888725, "user_tz": 240, "elapsed": 516, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} colab={"base_uri": "https://localhost:8080/", "height": 204} df_all["leiden"] = df_all["leiden_0.6"] #df["cell_count"] = 1 df_all.head() # + [markdown] id="jJoIASZPiqq7" colab_type="text" # ### Check the clusters composition for all donors by sample # + id="8SDlAH79jV93" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="c2e14f80-85de-4f7b-fe24-d16ac0ba9c37" executionInfo={"status": "ok", "timestamp": 1591026432023, "user_tz": 240, "elapsed": 621, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} df_all_cluster = df_all.reset_index() df_all_cluster.drop(["donor_label","leiden"], axis=1, inplace=True) df_all_cluster.columns = ["cell", "batch", "leiden"] df_all_cluster.head() # + id="tfDeaHofjn5n" colab_type="code" colab={} df_all_cluster1 = df_all_cluster.pivot_table(index="leiden", columns="batch", values = "cell", aggfunc="count").fillna(0).astype("int") # 
+ id="otNAiSq6j_A1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 286} outputId="49a661bc-142f-4d48-d236-c4a664d08afa" executionInfo={"status": "ok", "timestamp": 1591026440972, "user_tz": 240, "elapsed": 474, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} df_all_cluster2 = df_all_cluster.pivot_table(index="batch", columns="leiden", values = "cell", aggfunc="count").fillna(0).astype("int") df_all_cluster2 # + id="5o-LfnNwGfsm" colab_type="code" colab={} def draw_stackbar(df_cor, sample_name): donor_name = df_cor.index.to_list() cluster_name = df_cor.columns.to_list() bar_l = range(len(donor_name)) num_column =len(cluster_name) cm = plt.get_cmap('nipy_spectral') f, ax = plt.subplots(1, figsize=(8,6),dpi=100) ax.set_prop_cycle(cycler('color',[cm(1.*i/num_column) for i in range(num_column)])) bottom = np.zeros_like(bar_l).astype('float') for i, col in enumerate(cluster_name): ax.bar(df_cor.index, df_cor[col], bottom = bottom, label=col) bottom += df_cor[col].values ax.grid(False) ax.set_title(sample_name) ax.set_xticks(bar_l) ax.set_xticklabels(donor_name, rotation=45) ax.legend(loc="upper left", bbox_to_anchor=(1,1), ncol=2, fontsize='small') f.subplots_adjust(right=0.75, bottom=0.4) f.show() # + id="Wmc4rOEBkSI5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 499} outputId="38db6bc4-d9bd-4c6a-c51a-7401d815a385" executionInfo={"status": "ok", "timestamp": 1591026573522, "user_tz": 240, "elapsed": 1531, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} # draw stack bar draw_stackbar(df_all_cluster2, "all_donors") # + [markdown] id="1tDPnBkcd42q" colab_type="text" # # Extrac the data of KOLF2 # + id="-rD8PsiaeLTa" colab_type="code" colab={"base_uri": "https://localhost:8080/", 
"height": 34} outputId="da47c2f9-b95d-4fdc-d5ae-4e938a745b92" executionInfo={"status": "ok", "timestamp": 1591024891480, "user_tz": 240, "elapsed": 490, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} df_kolf2 = df_all[df_all["donor_label"]=="KOLF2-ARID2-A02"] df_kolf2.shape # + id="8Oad80O2_u5V" colab_type="code" outputId="0e2c8d1b-d283-4e83-b2ea-dfc32482c2b3" executionInfo={"status": "ok", "timestamp": 1591024936490, "user_tz": 240, "elapsed": 505, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} colab={"base_uri": "https://localhost:8080/", "height": 204} df_kolf2 = df_kolf2.reset_index() df_kolf2.drop(["donor_label","leiden"], axis=1, inplace=True) df_kolf2.columns = ["cell", "batch", "leiden"] df_kolf2.head() # + id="Y65CeuZuOe4U" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 425} outputId="6ee3e161-0231-42be-d26f-d744f60e10ce" executionInfo={"status": "ok", "timestamp": 1591037544998, "user_tz": 240, "elapsed": 476, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} df_kolf2.leiden.value_counts() # + id="DYFMhVY5uz_a" colab_type="code" colab={} df_kolf2_1 = df_kolf2.copy() # + id="rxYH389ovLPT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="f39fa0bb-a1c8-4e1a-e57a-e6848c168ddd" executionInfo={"status": "ok", "timestamp": 1591029575554, "user_tz": 240, "elapsed": 553, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} df_kolf2_1 = df_kolf2_1.rename({"leiden": "leiden_0.6"},axis=1) df_kolf2_1.head() # + id="_BwJSaIJvyRJ" 
colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 235} outputId="b34faf77-735d-41b0-bbd6-056445b36f94" executionInfo={"status": "ok", "timestamp": 1591029637558, "user_tz": 240, "elapsed": 996, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} #df_kolf2_1 = df_kolf2_1.groupby(["batch", "leiden_0.6"]).count() df_kolf2_1 = df_kolf2_1.fillna(0).astype('int') df_kolf2_1.head() # + id="2P7Ec8O7Otb0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 450} outputId="13c0ef2e-86ab-4761-ac84-772c5fa5af67" executionInfo={"status": "ok", "timestamp": 1591037588056, "user_tz": 240, "elapsed": 452, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} df_kolf2_1 # + id="soXpOWHwvgYV" colab_type="code" colab={} df_kolf2_1.to_csv("counts_byCluster_byProtocol_florian_kolf2.csv") # + [markdown] id="vQ_7htlgVcVa" colab_type="text" # # Calculate the percentage of clusters for each protocol only for KOLF2 # + id="QNIuSJNRVraq" colab_type="code" colab={} df_cluster = df_kolf2.pivot_table(index="leiden", columns="batch", values = "cell", aggfunc="count").fillna(0).astype("int") # + id="5fKb2E7saabG" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 499} outputId="ab4134bc-1a47-4d98-e508-c5fe220f6e25" executionInfo={"status": "ok", "timestamp": 1591026603936, "user_tz": 240, "elapsed": 1473, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} # draw stack bar draw_stackbar(df_cluster2, "KOLF2") # + id="a85UTviG5yqt" colab_type="code" outputId="d6a790c2-5501-4f76-9f74-ae565d8ef7a5" executionInfo={"status": "ok", "timestamp": 1591027906573, "user_tz": 240, "elapsed": 882, 
"user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} colab={"base_uri": "https://localhost:8080/", "height": 235} df_cluster.head() # + [markdown] id="wEAqrzGUlfIR" colab_type="text" # ###Combined duplicates # + id="dG7N1jvoZPaR" colab_type="code" colab={} # combined duplicates # + id="l8sOE5DglkrQ" colab_type="code" colab={} df_cluster.index = df_cluster.index.astype(list) df_cluster.columns = df_cluster.columns.astype(list) # + id="yedmSXPLlyFT" colab_type="code" colab={} df_cluster["cortical"] = df_cluster.cortical_1 + df_cluster.cortical_2 df_cluster["dopaminergic"] = df_cluster.dopaminergic_1 + df_cluster.dopaminergic_2 df_cluster["hypothalamic"] = df_cluster.hypothalamic_1 + df_cluster.hypothalamic_2 # + id="d53nv98VmV3S" colab_type="code" colab={} df_cluster3 = df_cluster.iloc[:, 6:9 ] # + id="c3stEr4pnOvq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 235} outputId="40454240-9f79-4a7c-c307-7d90cd1fa7c6" executionInfo={"status": "ok", "timestamp": 1591027918005, "user_tz": 240, "elapsed": 803, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} df_cluster3.head() # + id="zjDMY6wVaIhE" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 266} outputId="623540a4-25c9-4758-9ceb-fda574df5285" executionInfo={"status": "ok", "timestamp": 1591037653398, "user_tz": 240, "elapsed": 460, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} df_cluster2 = df_kolf2.pivot_table(index="batch", columns="leiden", values = "cell", aggfunc="count").fillna(0).astype("int") df_cluster2 # + id="m-2R9wnkqDtl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 730} 
outputId="56a17563-5c6c-4bbb-b2e3-06df9c59cfa5" executionInfo={"status": "ok", "timestamp": 1591028110672, "user_tz": 240, "elapsed": 1250, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} df_cluster3.T.plot(kind="bar", figsize = (12,9), grid=False, width=1) # + id="NYH-42aVZCpE" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 357} outputId="9ef170b7-cd94-4fdb-bb8a-9e7706797a61" executionInfo={"status": "ok", "timestamp": 1591027600233, "user_tz": 240, "elapsed": 1123, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} df_cluster3.cortical.plot(kind="bar", figsize = (8,4), grid=False, color="lightblue", title="cortical") # + id="LZhuvO-1rjLu" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 425} outputId="f1aa8fa6-c720-471e-8bf0-e6b7de7d0d0a" executionInfo={"status": "ok", "timestamp": 1591028449499, "user_tz": 240, "elapsed": 784, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} #s.sort_values(ascending=False) df_cluster3.cortical.sort_values(ascending=False) # + id="iRBjGCeJrdhQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 357} outputId="e949c552-7b8f-48db-847c-8f9bbc56bd4e" executionInfo={"status": "ok", "timestamp": 1591028491813, "user_tz": 240, "elapsed": 896, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} df_cluster3.cortical.sort_values(ascending=False).plot(kind="bar", figsize = (8,4), grid=False, color="lightblue", title="cortical") # + id="M6L1RUoBopq3" colab_type="code" colab={"base_uri": 
"https://localhost:8080/", "height": 357} outputId="bf633be6-323b-419e-e5dd-e1fa22810039" executionInfo={"status": "ok", "timestamp": 1591027930268, "user_tz": 240, "elapsed": 847, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} df_cluster3.dopaminergic.plot(kind="bar", figsize = (8,4), grid=False, color="lightgreen", title="dopaminergic") # + id="D4Ya9rvusLra" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 357} outputId="51bacc60-53a1-4e23-fb7e-4c822f3a3517" executionInfo={"status": "ok", "timestamp": 1591028556645, "user_tz": 240, "elapsed": 1370, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} df_cluster3.dopaminergic.sort_values(ascending=False).plot(kind="bar", figsize = (8,4), grid=False, color="lightgreen", title="dopaminergic") # + id="xxQpWNN5oyzN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 357} outputId="25a98e3b-1bb0-4708-b9d7-47cc373d8ab2" executionInfo={"status": "ok", "timestamp": 1591027944896, "user_tz": 240, "elapsed": 989, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} df_cluster3.hypothalamic.plot(kind="bar", figsize = (8,4), grid=False, color="orange", title="hypothalamic") # + id="p1OoMoXAsaid" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 357} outputId="7a64188e-2a56-4ffe-a583-aa71e518be66" executionInfo={"status": "ok", "timestamp": 1591028613222, "user_tz": 240, "elapsed": 856, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} 
df_cluster3.hypothalamic.sort_values(ascending=False).plot(kind="bar", figsize = (8,4), grid=False, color="orange", title="hypothalamic") # + id="dOtH1FGsZOPT" colab_type="code" colab={} # + [markdown] id="VCeT7zCTtu4y" colab_type="text" # # Use function to generate a preliminary dataframe # + id="lS2V8S-griDX" colab_type="code" colab={} # define function to generate a dataframe from adata def get_df(adata, leiden): df = adata.obs.loc[:, ["donor_label","batch", leiden]] df["leiden"] = df[leiden] df = df.reset_index() df.drop(["index",leiden], axis=1, inplace=True) print(df.shape) return df # + id="jBj-mDwosuvF" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="64051092-f3f5-4da1-9e10-1689ad44eac6" executionInfo={"status": "ok", "timestamp": 1591021640503, "user_tz": 240, "elapsed": 989, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} df = get_df(adata, "leiden_0.6") # + id="GpkSSZdMs8CK" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="d2a90cc8-610a-404e-e5e2-580ad1dcff0f" executionInfo={"status": "ok", "timestamp": 1591021643030, "user_tz": 240, "elapsed": 634, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} df.head() # + [markdown] id="LOXx-wKyVkhP" colab_type="text" # # + [markdown] id="oQTC1-AYt15-" colab_type="text" # # Generate individual dataframe for a specific batch or sample # + id="Tv9mTI759QbZ" colab_type="code" colab={} # generate a separate dataframe for the sake of visualization batch_name =['cortical_1','cortical_2','dopaminergic_1','dopaminergic_2','hypothalamic_1','hypothalamic_2'] df_list = [] for batch in batch_name: df_sub = df[df["batch"] == batch] #groupby auto generate index column and also generate the column value from
aggregation function df_sub = df_sub.groupby(["donor_label", "leiden"]).count().sort_index() df_sub = df_sub.unstack().fillna(0).astype("int") # remove outer level of columns df_sub.columns = df_sub.columns.droplevel(level = 0) new_column_name = ["cluster" + s for s in df_sub.columns.to_list()] df_sub.columns = new_column_name # only keep the first 7 rows (donors) df_sub = df_sub.iloc[0:7, :] df_list.append(df_sub) df_cor1, df_cor2, df_do1, df_do2, df_hy1, df_hy2 = df_list # + id="PTuIOvPxvKWm" colab_type="code" colab={} df_concat = pd.concat([df_cor1, df_cor2, df_do1, df_do2, df_hy1, df_hy2], axis = 0, keys=batch_name) # + id="VAtVC9udwoIo" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="d2bcedcd-d579-4722-fe80-d4228155c958" executionInfo={"status": "ok", "timestamp": 1590811600059, "user_tz": 240, "elapsed": 2538, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} # !pwd # + id="3YWL_6RwwYQU" colab_type="code" colab={} df_concat.to_csv("counts.csv") # + id="5zTS_maH-Ns0" colab_type="code" outputId="62d668c3-ca86-4ec1-be71-14da179d2e45" executionInfo={"status": "ok", "timestamp": 1591021653972, "user_tz": 240, "elapsed": 1262, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} colab={"base_uri": "https://localhost:8080/", "height": 351} df_cor1 # + [markdown] id="zPSm-tRByqn7" colab_type="text" # # + id="QXDb0GCq_gI1" colab_type="code" colab={} # generate dataframe for each experiment batchs = ["cortical", "dopaminergic", "hypothalamic"] df_list2 = [] for batch in batchs: df_sub = df[df["batch"].str.contains(batch)] #groupby auto generate index column and also generate the column value from aggregation function df_sub = df_sub.groupby(["donor_label", "leiden"]).count().sort_index() df_sub = 
df_sub.unstack().fillna(0).astype("int") # remove out level of columns df_sub.columns = df_sub.columns.droplevel(level = 0) df_sub = df_sub.iloc[0:7, :] new_column_name = ["cluster" + s for s in df_sub.columns.to_list()] df_sub.columns = new_column_name df_list2.append(df_sub) df_cor, df_do, df_hy = df_list2 # + id="oGmDW2Hq_9Pa" colab_type="code" colab={} df2_concat = pd.concat([df_cor, df_do, df_hy], axis = 0, keys=batchs) # + id="ozCiB3-hAMpF" colab_type="code" colab={} df2_concat.to_csv("counts_2.csv") # + [markdown] id="oHhijeEqzu3J" colab_type="text" # # Draw stack bars using functions # + id="5eXMSYYGz17_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 453} outputId="4a284abc-731d-4c40-ecb7-2a1cfcb42d66" executionInfo={"status": "ok", "timestamp": 1590812513555, "user_tz": 240, "elapsed": 1255, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} draw_stackbar(df_cor, "Cortical") # + id="Z3U7KQML0M_o" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 453} outputId="63ae5a0d-e9ef-43e5-e150-11fe950bfb5a" executionInfo={"status": "ok", "timestamp": 1590812553120, "user_tz": 240, "elapsed": 1874, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} draw_stackbar(df_do, "Dopaminergic") # + id="MnB4aBdN0UP5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 453} outputId="75fa9c3b-b7dc-4b86-957f-94d0fdb9a372" executionInfo={"status": "ok", "timestamp": 1590812624803, "user_tz": 240, "elapsed": 2219, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} draw_stackbar(df_hy, "Hypothalamic") # + [markdown] id="KIzsBB7PyEeL" colab_type="text" # # + 
id="KGnRO4J-yC08" colab_type="code" colab={} # + [markdown] id="DsZkxBeTJqNL" colab_type="text" # # draw figures using percentage # + id="M6M0IVghrUD9" colab_type="code" colab={} # convert count to percentage def cal_percent(df): # convert categorical into list type df.columns = df.columns.astype(list) df["total"] = df.sum(axis=1) for col in df.columns.to_list(): df[col] = df[col]*100/df["total"] df.drop("total", axis = 1, inplace= True) return df # + id="X5fdPFVm2kXw" colab_type="code" colab={} df_list3 = [] for i in range(len(df_list2)): #df = sample_name + str(i) #print(df) df = cal_percent(df_list2[i]) df_list3.append(df) df_cor_per, df_do_per, df_hy_per = df_list3 # + id="hFGroqYX7CKZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="e41c2173-28db-4259-d60d-c54b493596e4" executionInfo={"status": "ok", "timestamp": 1590814361929, "user_tz": 240, "elapsed": 778, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} batchs # + id="pLjBVXUe6j--" colab_type="code" colab={} df_all_per = pd.concat(df_list3, keys=batchs) df_all_per.to_csv("percentage.csv") # + id="6BY0DcWm2PsW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 453} outputId="e1be9b2c-aca4-4cce-8d3b-66dcf2c64ffc" executionInfo={"status": "ok", "timestamp": 1590814584854, "user_tz": 240, "elapsed": 1144, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} draw_stackbar(df_cor_per, "Cortical") # + id="GIWyZBkT8IMa" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 453} outputId="a2adc092-9f84-4733-9ecf-b97c339698d6" executionInfo={"status": "ok", "timestamp": 1590814618521, "user_tz": 240, "elapsed": 1380, "user": {"displayName": "<NAME>", "photoUrl": 
"https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} draw_stackbar(df_do_per, "Dopaminergic") # + id="LAgDA3Xv8LQj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 453} outputId="e33845d1-c610-47fc-9afd-621fef8d9f90" executionInfo={"status": "ok", "timestamp": 1590814649511, "user_tz": 240, "elapsed": 1253, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} draw_stackbar(df_hy_per, "Hypothalamic") # + [markdown] id="20iwFcrW8GgX" colab_type="text" # # + id="yDJ84jUp2im0" colab_type="code" colab={} draw_stackbar(df_cor_per, "Cortical") # + id="3f0vzcGOpQ3Y" colab_type="code" colab={} df_cor_copy = df_cor.copy() # + id="8WUBWSdfiyFc" colab_type="code" colab={} # tip: the dataframe columns become a CategoricalIndex, which will cause problems in adding columns or reset_index # so we have to convert the categorical into list type df_cor_copy.columns = df_cor_copy.columns.astype(list) df_cor_copy["total"] = df_cor_copy.sum(axis=1) # + id="VUeNPO7mo-Xd" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 328} outputId="e0a1be6e-09ff-4cba-ee6e-b57be2dae905" executionInfo={"status": "ok", "timestamp": 1590809690280, "user_tz": 240, "elapsed": 923, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AO<KEY>0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} df_cor_copy # + id="zk3tvy6toqic" colab_type="code" colab={} for col in df_cor_copy.columns.to_list(): df_cor_copy[col] = df_cor_copy[col]/df_cor_copy["total"] # + id="K8dIYhXopaJW" colab_type="code" colab={} df_cor_copy.drop("total", axis = 1, inplace= True) # + id="WbWO9LA0rHKg" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 382} outputId="bc6de8a5-79e0-41b7-e07a-4ba52edf701f" executionInfo={"status": "ok", "timestamp": 
1590810148325, "user_tz": 240, "elapsed": 886, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} df_cor_copy # + id="4xKNFpzSrRNe" colab_type="code" colab={} draw_stackbar(df_cor_copy, "cortical") # + id="I5Nug4Oxnc7d" colab_type="code" colab={} df_cor_copy.index = df_cor_copy.index.astype(list) df_cor_copy.reset_index(inplace=True) # + id="lL_eSwJmJ8Mh" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="674e34f1-c7c2-4d20-d24e-8041a6535bd0" executionInfo={"status": "ok", "timestamp": 1590801831842, "user_tz": 240, "elapsed": 854, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} df['Total'] = df.sum(axis=1) df_cor.sum(axis=1).values # + id="Fhr4MEfpCpzw" colab_type="code" outputId="6ae986e0-1934-4f6e-d63b-08dadb4612dc" executionInfo={"status": "ok", "timestamp": 1590799541855, "user_tz": 240, "elapsed": 563, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} colab={"base_uri": "https://localhost:8080/", "height": 68} df_cor1.index.values # + id="2oUyotJMAViB" colab_type="code" colab={} #subset using batch first #df_cor = df[df["batch"].isin(["cortical_2", "cortical_1"])] # + [markdown] id="eN2DnS4B8f7B" colab_type="text" # # Use simple bar to visualize the data # + [markdown] id="hycnwpBBVD2j" colab_type="text" # For this case: calculate for the protocol "cortical" and for the donor KDLF2 # + id="yt1gdrIgkbpe" colab_type="code" outputId="4febdce3-4ed3-4e39-ca1c-8f8b9fce4687" executionInfo={"status": "ok", "timestamp": 1590783437416, "user_tz": 240, "elapsed": 416, "user": {"displayName": "<NAME>", "photoUrl": 
"https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} colab={"base_uri": "https://localhost:8080/", "height": 235} df_cor["total_cell"] = df_cor["cell_count"].sum() df_cor["percentage_per_cluster"] = df_cor["cell_count"]/df_cor["total_cell"] df_cor.head() # + id="9rZElx7sgixZ" colab_type="code" outputId="36416035-0a45-4d6c-9564-1661f6a56477" executionInfo={"status": "ok", "timestamp": 1590783614502, "user_tz": 240, "elapsed": 961, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} colab={"base_uri": "https://localhost:8080/", "height": 378} plt.figure(figsize=(10,5)) sns.set_style("whitegrid", {'axes.grid' : False}) sns.barplot(x=df_cor.index, y = "percentage_per_cluster", data=df_cor) plt.ylim(0, 1) # + id="mIapASE8KMAO" colab_type="code" outputId="b012b96f-591a-4d6d-ae4b-c7482d994e29" executionInfo={"status": "ok", "timestamp": 1590785153411, "user_tz": 240, "elapsed": 379, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4QNksupzJ_tvoHXK0P_PWwExCE1HBSRW-Afh7=s64", "userId": "08926911206443480647"}} colab={"base_uri": "https://localhost:8080/", "height": 34} df.index.to_flat_index() # + id="OxMysIfLL65m" colab_type="code" colab={} df.donor_label.value_counts().index.tolist() # + id="2xNDW5yMMm1d" colab_type="code" colab={} df.leiden.value_counts().index.tolist() # + id="A-KfvebHINbO" colab_type="code" outputId="4d480483-7aee-4dc0-ee5c-7bdbe245370c" executionInfo={"status": "ok", "timestamp": 1590784454490, "user_tz": 240, "elapsed": 590, "user": {"displayName": "<NAME>", "photoUrl": "https://<KEY>", "userId": "08926911206443480647"}} colab={"base_uri": "https://localhost:8080/", "height": 298} # We use the keyword bottom to do this # Each upper bar has its bottom set to the heights of the bars below # First Bar video_game_hours = [1, 2, 2, 1, 2] 
plt.bar(range(len(video_game_hours)), video_game_hours) # Second Bar book_hours = [2, 3, 4, 2, 1] plt.bar(range(len(book_hours)), book_hours, bottom=video_game_hours) # third bar # Get each bottom for 3+ bars sport_hours = np.add(video_game_hours, book_hours)
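The last cells stack a third bar by summing the two series below it with `np.add`. That bookkeeping generalizes to any number of layers: each layer's `bottom` is the running elementwise sum of the layers beneath it. A stdlib-only sketch (the helper name `stacked_bottoms` and the `sport_hours` values here are made up for illustration, not from the notebook):

```python
# Compute, for each series, the per-position bottom offsets that
# plt.bar(x, series, bottom=...) expects when stacking bars.
def stacked_bottoms(series_list):
    n = len(series_list[0])
    bottoms, running = [], [0] * n
    for series in series_list:
        bottoms.append(list(running))  # this layer starts where the previous layers end
        running = [r + s for r, s in zip(running, series)]
    return bottoms

video_game_hours = [1, 2, 2, 1, 2]
book_hours = [2, 3, 4, 2, 1]
sport_hours = [1, 1, 2, 3, 1]  # hypothetical third series

bottoms = stacked_bottoms([video_game_hours, book_hours, sport_hours])
print(bottoms[1])  # [1, 2, 2, 1, 2] -> the first series, as in the two-bar case
print(bottoms[2])  # [3, 5, 6, 3, 3] -> elementwise video_game_hours + book_hours
```

Each `bottoms[i]` can then be passed as the `bottom=` keyword of the i-th `plt.bar` call, which is exactly what the `np.add(video_game_hours, book_hours)` line computes for the third bar.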
May2020_integrate_3_analysis/29May2020_florian_concat_sum.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # Copyright 2021 Google LLC # Use of this source code is governed by an MIT-style # license that can be found in the LICENSE file or at # https://opensource.org/licenses/MIT. # Author(s): <NAME> (<EMAIL>) and <NAME> (<EMAIL>) # <a href="https://opensource.org/licenses/MIT" target="_parent"><img src="https://img.shields.io/github/license/probml/pyprobml"/></a> # <a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/book1/figures/chapter19_learning_with_fewer_labeled_examples_figures.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # ## Figure 19.1:<a name='19.1'></a> <a name='cat-crops'></a> # # Illustration of random crops and zooms of an image. # To reproduce this figure, click the open in colab button: <a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/image_augmentation_torch.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> #@title Setup { display-mode: "form" } # %%time # If you run this for the first time it would take ~25/30 seconds # !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/probml/colab_powertoys.git &> /dev/null # !pip3 install nbimporter -qqq # %cd -q /content/colab_powertoys from colab_powertoys.probml_toys import probml_toys as pmlt # %cd -q /pyprobml/scripts pmlt.show_image("/pyprobml/book1/figures/images/Figure_19.1.png") # ## Figure 19.2:<a name='19.2'></a> <a name='transfer'></a> # # Illustration of fine-tuning, where we freeze all the layers except for the last, which is optimized on the target dataset. 
From Figure 13.2.1 of <a href='#dive'>[Zha+20]</a> . Used with kind permission of <NAME>. #@title Setup { display-mode: "form" } # %%time # If you run this for the first time it would take ~25/30 seconds # !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/probml/colab_powertoys.git &> /dev/null # !pip3 install nbimporter -qqq # %cd -q /content/colab_powertoys from colab_powertoys.probml_toys import probml_toys as pmlt # %cd -q /pyprobml/scripts pmlt.show_image("/pyprobml/book1/figures/images/Figure_19.2.png") # ## Figure 19.3:<a name='19.3'></a> <a name='supervisedImputation'></a> # # (a) Context encoder for self-supervised learning. From <a href='#Pathak2016'>[Dee+16]</a> . Used with kind permission of Deepak Pathak. (b) Some other proxy tasks for self-supervised learning. From <a href='#LeCunSSL2018'>[LeC18]</a> . Used with kind permission of <NAME>. #@title Setup { display-mode: "form" } # %%time # If you run this for the first time it would take ~25/30 seconds # !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/probml/colab_powertoys.git &> /dev/null # !pip3 install nbimporter -qqq # %cd -q /content/colab_powertoys from colab_powertoys.probml_toys import probml_toys as pmlt # %cd -q /pyprobml/scripts pmlt.show_image("/pyprobml/book1/figures/images/Figure_19.3_A.png") pmlt.show_image("/pyprobml/book1/figures/images/Figure_19.3_B.png") # ## Figure 19.4:<a name='19.4'></a> <a name='simCLRcrop'></a> # # (a) Illustration of SimCLR training. $\mathcal T $ is a set of stochastic semantics-preserving transformations (data augmentations). (b-c) Illustration of the benefit of random crops. Solid rectangles represent the original image, dashed rectangles are random crops. On the left, the model is forced to predict the local view A from the global view B (and vice versa). On the right, the model is forced to predict the appearance of adjacent views (C,D). 
From Figures 2--3 of <a href='#chen2020simple'>[Tin+20]</a> . Used with kind permission of <NAME>. #@title Setup { display-mode: "form" } # %%time # If you run this for the first time it would take ~25/30 seconds # !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/probml/colab_powertoys.git &> /dev/null # !pip3 install nbimporter -qqq # %cd -q /content/colab_powertoys from colab_powertoys.probml_toys import probml_toys as pmlt # %cd -q /pyprobml/scripts # ## Figure 19.5:<a name='19.5'></a> <a name='simCLR'></a> # # Visualization of SimCLR training. Each input image in the minibatch is randomly modified in two different ways (using cropping (followed by resize), flipping, and color distortion), and then fed into a Siamese network. The embeddings (final layer) for each pair derived from the same image are forced to be close, whereas the embeddings for all other pairs are forced to be far. From https://ai.googleblog.com/2020/04/advancing-self-supervised-and-semi.html . Used with kind permission of <NAME>. #@title Setup { display-mode: "form" } # %%time # If you run this for the first time it would take ~25/30 seconds # !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/probml/colab_powertoys.git &> /dev/null # !pip3 install nbimporter -qqq # %cd -q /content/colab_powertoys from colab_powertoys.probml_toys import probml_toys as pmlt # %cd -q /pyprobml/scripts pmlt.show_image("/pyprobml/book1/figures/images/Figure_19.5_A.png") pmlt.show_image("/pyprobml/book1/figures/images/Figure_19.5_B.png") # ## Figure 19.6:<a name='19.6'></a> <a name='SSL'></a> # # Illustration of the benefits of semi-supervised learning for a binary classification problem. Labeled points from each class are shown as black and white circles respectively. (a) Decision boundary we might learn given only the labeled data. 
(b) Decision boundary we might learn if we also had a lot of unlabeled data points, shown as smaller grey circles. #@title Setup { display-mode: "form" } # %%time # If you run this for the first time it would take ~25/30 seconds # !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/probml/colab_powertoys.git &> /dev/null # !pip3 install nbimporter -qqq # %cd -q /content/colab_powertoys from colab_powertoys.probml_toys import probml_toys as pmlt # %cd -q /pyprobml/scripts pmlt.show_image("/pyprobml/book1/figures/images/Figure_19.6_A.png") pmlt.show_image("/pyprobml/book1/figures/images/Figure_19.6_B.png") # ## Figure 19.7:<a name='19.7'></a> <a name='emvsst'></a> # # Comparison of the entropy minimization, self-training, and ``sharpened'' entropy minimization loss functions for a binary classification problem. #@title Setup { display-mode: "form" } # %%time # If you run this for the first time it would take ~25/30 seconds # !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/probml/colab_powertoys.git &> /dev/null # !pip3 install nbimporter -qqq # %cd -q /content/colab_powertoys from colab_powertoys.probml_toys import probml_toys as pmlt # %cd -q /pyprobml/scripts pmlt.show_image("/pyprobml/book1/figures/images/Figure_19.7.png") # ## Figure 19.8:<a name='19.8'></a> <a name='emgoodbad'></a> # # Visualization demonstrating how entropy minimization enforces the cluster assumption. The classifier assigns a higher probability to class 1 (black dots) or 2 (white dots) in red or blue regions respectively. The predicted class probabilities for one particular unlabeled datapoint is shown in the bar plot. In (a), the decision boundary passes through high-density regions of data, so the classifier is forced to output high-entropy predictions. In (b), the classifier avoids high-density regions and is able to assign low-entropy predictions to most of the unlabeled data. 
#@title Setup { display-mode: "form" } # %%time # If you run this for the first time it would take ~25/30 seconds # !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/probml/colab_powertoys.git &> /dev/null # !pip3 install nbimporter -qqq # %cd -q /content/colab_powertoys from colab_powertoys.probml_toys import probml_toys as pmlt # %cd -q /pyprobml/scripts pmlt.show_image("/pyprobml/book1/figures/images/Figure_19.8_A.png") pmlt.show_image("/pyprobml/book1/figures/images/Figure_19.8_B.png") # ## Figure 19.9:<a name='19.9'></a> <a name='sevskl'></a> # # Comparison of the squared error and KL divergence losses for consistency regularization. This visualization is for a binary classification problem where it is assumed that the model's output for the unperturbed input is 1. The figure plots the loss incurred for a particular value of the logit (i.e.\ the pre-activation fed into the output sigmoid nonlinearity) for the perturbed input. As the logit grows towards infinity, the model predicts a class label of 1 (in agreement with the prediction for the unperturbed input); as it grows towards negative infinity, the model predicts class 0. The squared error loss saturates (and has zero gradients) when the model predicts one class or the other with high probability, but the KL divergence grows without bound as the model predicts class 0 with more and more confidence. 
#@title Setup { display-mode: "form" } # %%time # If you run this for the first time it would take ~25/30 seconds # !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/probml/colab_powertoys.git &> /dev/null # !pip3 install nbimporter -qqq # %cd -q /content/colab_powertoys from colab_powertoys.probml_toys import probml_toys as pmlt # %cd -q /pyprobml/scripts pmlt.show_image("/pyprobml/book1/figures/images/Figure_19.9.png") # ## Figure 19.10:<a name='19.10'></a> <a name='ssgan'></a> # # Diagram of the semi-supervised GAN framework. The discriminator is trained to output the class of labeled datapoints (red), a ``fake'' label for outputs from the generator (yellow), and any label for unlabeled data (green). #@title Setup { display-mode: "form" } # %%time # If you run this for the first time it would take ~25/30 seconds # !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/probml/colab_powertoys.git &> /dev/null # !pip3 install nbimporter -qqq # %cd -q /content/colab_powertoys from colab_powertoys.probml_toys import probml_toys as pmlt # %cd -q /pyprobml/scripts pmlt.show_image("/pyprobml/book1/figures/images/Figure_19.10.png") # ## Figure 19.11:<a name='19.11'></a> <a name='SimCLR2'></a> # # Combining self-supervised learning on unlabeled data (left), supervised fine-tuning (middle), and self-training on pseudo-labeled data (right). From Figure 3 of <a href='#Chen2020nips'>[Tin+20]</a> . 
Used with kind permission of <NAME> #@title Setup { display-mode: "form" } # %%time # If you run this for the first time it would take ~25/30 seconds # !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/probml/colab_powertoys.git &> /dev/null # !pip3 install nbimporter -qqq # %cd -q /content/colab_powertoys from colab_powertoys.probml_toys import probml_toys as pmlt # %cd -q /pyprobml/scripts pmlt.show_image("/pyprobml/book1/figures/images/Figure_19.11.png") # ## Figure 19.12:<a name='19.12'></a> <a name='MAML-PGM'></a> # # Graphical model corresponding to MAML. Left: generative model. Right: During meta-training, each of the task parameters $\boldsymbol \theta _j$'s are updated using their local datasets. The indices $j$ are over tasks (meta datasets), and $i$ are over instances within each task. Solid shaded nodes are always observed; semi-shaded (striped) nodes are only observed during meta training time (i.e., not at test time). From Figure 1 of <a href='#Finn2018'>[CKS18]</a> . Used with kind permission of Chelsea Finn. #@title Setup { display-mode: "form" } # %%time # If you run this for the first time it would take ~25/30 seconds # !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/probml/colab_powertoys.git &> /dev/null # !pip3 install nbimporter -qqq # %cd -q /content/colab_powertoys from colab_powertoys.probml_toys import probml_toys as pmlt # %cd -q /pyprobml/scripts pmlt.show_image("/pyprobml/book1/figures/images/Figure_19.12.png") # ## Figure 19.13:<a name='19.13'></a> <a name='metaLearningFSL'></a> # # Illustration of meta-learning for few-shot learning. Here, each task is a 3-way-2-shot classification problem because each training task contains a support set with three classes, each with two examples. From https://www.borealisai.com/en/blog/tutorial-2-few-shot-learning-and-meta-learning-i . Copyright (2019) Borealis AI. 
Used with kind permission of <NAME> and <NAME>. #@title Setup { display-mode: "form" } # %%time # If you run this for the first time it would take ~25/30 seconds # !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/probml/colab_powertoys.git &> /dev/null # !pip3 install nbimporter -qqq # %cd -q /content/colab_powertoys from colab_powertoys.probml_toys import probml_toys as pmlt # %cd -q /pyprobml/scripts pmlt.show_image("/pyprobml/book1/figures/images/Figure_19.13.png") # ## Figure 19.14:<a name='19.14'></a> <a name='matchingNetworks'></a> # # Illustration of a matching network for one-shot learning. From Figure 1 of <a href='#Vinyals2016'>[Ori+16]</a> . Used with kind permission of O<NAME>. #@title Setup { display-mode: "form" } # %%time # If you run this for the first time it would take ~25/30 seconds # !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/probml/colab_powertoys.git &> /dev/null # !pip3 install nbimporter -qqq # %cd -q /content/colab_powertoys from colab_powertoys.probml_toys import probml_toys as pmlt # %cd -q /pyprobml/scripts pmlt.show_image("/pyprobml/book1/figures/images/Figure_19.14.png") # ## References: # <a name='Finn2018'>[CKS18]</a> <NAME>, <NAME> and <NAME>. "Probabilistic Model-Agnostic Meta-Learning". (2018). # # <a name='Pathak2016'>[Dee+16]</a> <NAME>, <NAME>, <NAME>, <NAME> and <NAME>. "Context Encoders: Feature Learning by Inpainting". (2016). # # <a name='LeCunSSL2018'>[LeC18]</a> Y. LeCun. "Self-supervised learning: could machines learn like humans?". (2018). # # <a name='Vinyals2016'>[Ori+16]</a> <NAME>, <NAME>, <NAME>, <NAME> and <NAME>. "Matching Networks for One Shot Learning". (2016). # # <a name='Chen2020nips'>[Tin+20]</a> <NAME>, <NAME>, <NAME>, <NAME> and <NAME>. "Big Self-Supervised Models are Strong Semi-Supervised Learners". (2020). # # <a name='dive'>[Zha+20]</a> <NAME>, <NAME>, <NAME> and <NAME>. 
"Dive into deep learning". (2020). # #
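The contrast described in the Figure 19.9 caption — squared error saturating while the KL divergence grows without bound — is easy to verify numerically. A stdlib-only sketch (not code from the book), evaluating both losses for increasingly negative perturbed logits with the unperturbed prediction fixed at p = 1:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Consistency losses between the unperturbed prediction (assumed p = 1, as in
# the figure) and the perturbed prediction q = sigmoid(z).
def squared_error(z):
    return (1.0 - sigmoid(z)) ** 2

def kl(z):
    return -math.log(sigmoid(z))  # KL(p || q) with p = 1 reduces to -log q

for z in (-5.0, -10.0, -20.0):
    print(f"z={z:6.1f}  squared_error={squared_error(z):.6f}  kl={kl(z):.3f}")
# The squared error saturates just below 1 (vanishing gradient), while the KL
# term keeps growing, roughly linearly in -z, as the caption describes.
```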
book1/figures/chapter19_learning_with_fewer_labeled_examples_figures.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Lambda School Data Science # # *Unit 2, Sprint 3, Module 4* # # --- # # Model Interpretation # # You will use your portfolio project dataset for all assignments this sprint. # # ## Assignment # # Complete these tasks for your project, and document your work. # # - [ ] Continue to iterate on your project: data cleaning, exploratory visualization, feature engineering, modeling. # - [ ] Make at least 1 partial dependence plot to explain your model. # - [ ] Make at least 1 Shapley force plot to explain an individual prediction. # - [ ] **Share at least 1 visualization (of any type) on Slack!** # # If you aren't ready to make these plots with your own dataset, you can practice these objectives with any dataset you've worked with previously. Example solutions are available for Partial Dependence Plots with the Tanzania Waterpumps dataset, and Shapley force plots with the Titanic dataset. (These datasets are available in the data directory of this repository.) # # Please be aware that **multi-class classification** will result in multiple Partial Dependence Plots (one for each class), and multiple sets of Shapley Values (one for each class). # ## Stretch Goals # # #### Partial Dependence Plots # - [ ] Make multiple PDPs with 1 feature in isolation. # - [ ] Make multiple PDPs with 2 features in interaction. # - [ ] Use Plotly to make a 3D PDP. # - [ ] Make PDPs with categorical feature(s). Use Ordinal Encoder, outside of a pipeline, to encode your data first. If there is a natural ordering, then take the time to encode it that way, instead of random integers. Then use the encoded data with pdpbox. Get readable category names on your plot, instead of integer category codes. 
#
# #### Shap Values
# - [ ] Make Shapley force plots to explain at least 4 individual predictions.
#   - If your project is Binary Classification, you can do a True Positive, True Negative, False Positive, False Negative.
#   - If your project is Regression, you can do a high prediction with low error, a low prediction with low error, a high prediction with high error, and a low prediction with high error.
# - [ ] Use Shapley values to display verbal explanations of individual predictions.
# - [ ] Use the SHAP library for other visualization types.
#
# The [SHAP repo](https://github.com/slundberg/shap) has examples for many visualization types, including:
#
# - Force Plot, individual predictions
# - Force Plot, multiple predictions
# - Dependence Plot
# - Summary Plot
# - Summary Plot, Bar
# - Interaction Values
# - Decision Plots
#
# We just did the first type during the lesson. The [Kaggle microcourse](https://www.kaggle.com/dansbecker/advanced-uses-of-shap-values) shows two more. Experiment and see what you can learn!
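For the verbal-explanation stretch goal, one simple approach is to sort a row's SHAP values by absolute magnitude and template the top contributors into sentences. A sketch with hard-coded values — the feature names and numbers here are invented, not taken from any fitted model:

```python
# Hypothetical SHAP values for one prediction (feature -> contribution, invented for illustration).
shap_row = {'q6105': 0.42, 'q812a1': -0.31, 'q1001': 0.05}

# Sort by absolute contribution and render the two biggest drivers as sentences.
for feat, val in sorted(shap_row.items(), key=lambda kv: -abs(kv[1]))[:2]:
    direction = 'raises' if val > 0 else 'lowers'
    print(f"{feat} {direction} the predicted probability (SHAP value {val:+.2f})")
    # → q6105 raises the predicted probability (SHAP value +0.42)
```

With a real model, the dictionary would come from zipping the feature names with `explainer.shap_values(row)` for a single row.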
# ### Links
#
# #### Partial Dependence Plots
# - [Kaggle / <NAME>: Machine Learning Explainability — Partial Dependence Plots](https://www.kaggle.com/dansbecker/partial-plots)
# - [<NAME>: Interpretable Machine Learning — Partial Dependence Plots](https://christophm.github.io/interpretable-ml-book/pdp.html) + [animated explanation](https://twitter.com/ChristophMolnar/status/1066398522608635904)
# - [pdpbox repo](https://github.com/SauceCat/PDPbox) & [docs](https://pdpbox.readthedocs.io/en/latest/)
# - [Plotly: 3D PDP example](https://plot.ly/scikit-learn/plot-partial-dependence/#partial-dependence-of-house-value-on-median-age-and-average-occupancy)
#
# #### Shapley Values
# - [Kaggle / <NAME>: Machine Learning Explainability — SHAP Values](https://www.kaggle.com/learn/machine-learning-explainability)
# - [<NAME>: Interpretable Machine Learning — Shapley Values](https://christophm.github.io/interpretable-ml-book/shapley.html)
# - [SHAP repo](https://github.com/slundberg/shap) & [docs](https://shap.readthedocs.io/en/latest/)

# +
# %%capture
import sys
if 'google.colab' in sys.modules:
    # Raw-file URLs on GitHub do not include the `tree/` path segment.
    DATA_PATH = 'https://raw.githubusercontent.com/bsmrvl/DS-Unit-2-Applied-Modeling/master/data/'
    # !pip install category_encoders==2.*
else:
    DATA_PATH = '../data/'

# +
import pandas as pd
pd.options.display.max_columns = 100
import numpy as np
np.random.seed(42)  # call the function; `np.random.seed = 42` would overwrite it instead of seeding
import matplotlib.pyplot as plt

from category_encoders import OrdinalEncoder
from scipy.stats import uniform, truncnorm, randint
from xgboost import XGBClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, plot_confusion_matrix, precision_score, recall_score
from sklearn.model_selection import RandomizedSearchCV, cross_val_score, train_test_split
from sklearn.pipeline import make_pipeline

# +
## Changing directions a bit, I'm going to try and predict occupation type from
## a variety of political questions.
## I'm reading these cleaned csv's from my last build.

AB_demo = pd.read_csv(DATA_PATH + 'AB_demo.csv').drop(columns=['Unnamed: 0','id'])
AB_opinions = pd.read_csv(DATA_PATH + 'AB_opinions.csv').drop(columns=['Unnamed: 0','id'])

# +
## I will remove all the "other", essentially unemployed categories,
## and group the rest into small business and government/big business

smallbiz = ['Private sector employee',
            'Owner of a shop/grocery store',
            'Manual laborer',
            'Craftsperson',
            'Professional such as lawyer, accountant, teacher, doctor, etc.',
            'Agricultural worker/Owner of a farm',
            'Employer/director of an institution with less than 10 employees'
            ]
govbigbiz = ['A governmental employee',
             'A student',
             'Working at the armed forces or the police',
             'Director of an institution or a high ranking governmental employee',
             'Employer/director of an institution with 10 employees or more'
             ]
other = ['A housewife',
         'Unemployed',
         'Retired',
         'Other'
         ]

def maketarget(cell):
    if cell in smallbiz:
        return 0
    elif cell in govbigbiz:
        return 1
    else:
        return np.NaN
# -

AB_demo['occu_cat'] = AB_demo['occupation'].apply(maketarget).astype(float)
AB_opinions = AB_opinions.merge(AB_demo[['occu_cat']], left_index=True, right_index=True)
AB_opinions = AB_opinions.dropna()

# +
X = AB_opinions.drop(columns='occu_cat')
y = AB_opinions['occu_cat']

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.2, random_state=42)

# +
classy = XGBClassifier(
    random_state=42,
    max_depth=2,
)

params = {
    'subsample': truncnorm(a=0, b=1, loc=.5, scale=.1),
    'learning_rate': truncnorm(a=0, b=1, loc=.1, scale=.1),
    'scale_pos_weight': uniform(.1, .3)
}

prec = .5
recall = .05
while prec < .9 or recall < .06:
    rand_state = np.random.randint(10, 90)
    # print('RANDOM STATE:', rand_state)
    searcher = RandomizedSearchCV(
        classy,
        params,
        n_jobs=-1,
        # random_state=rand_state,
        random_state=25,  #### 16 for smallbiz, 25 for govbigbiz
        verbose=1,
        scoring='precision'
    )
    searcher.fit(X_train, y_train)
    model = searcher.best_estimator_
    prec = precision_score(y_test, model.predict(X_test))
    recall = recall_score(y_test, model.predict(X_test))
    # print('RANDOM STATE:', rand_state)

print(classification_report(y_test, model.predict(X_test)))
# -

per_imps = permutation_importance(model, X_test, y_test,
                                  scoring='precision',
                                  random_state=42,
                                  n_repeats=10)

more_important = pd.Series(per_imps['importances_mean'], index=X.columns)
top5 = more_important.sort_values(ascending=False).head()
top5

predictions = pd.Series(model.predict(X_test), index=X_test.index, name='predictions')
AB_opinions = AB_opinions.merge(predictions, left_index=True, right_index=True)
positives = AB_opinions[AB_opinions['predictions'] == 1]
positives[top5.index].head()

# +
from pdpbox.pdp import pdp_isolate, pdp_plot

feat = 'q6105'
isolate = pdp_isolate(
    model=model,
    dataset=X_test,
    model_features=X_test.columns,
    feature=feat
)
pdp_plot(isolate, feature_name=feat);

# +
from pdpbox.pdp import pdp_interact, pdp_interact_plot

feats = ['q6105', 'q812a1']
interact = pdp_interact(
    model=model,
    dataset=X_test,
    model_features=X_test.columns,
    features=feats
)

fig, ax = pdp_interact_plot(interact, feature_names=feats,
                            plot_params={
                                'title': '',
                                'subtitle': '',
                                'cmap': 'inferno',
                            },
                            plot_type='contour')
ax['pdp_inter_ax'].set_title('Questions determining government or large\nbusiness '
                             'employee (as opposed to working\nclass/small biz)',
                             ha='left', fontsize=17, x=0, y=1.1)
ax['pdp_inter_ax'].text(s='Brighter colors = more likely to be gov/big biz',
                        fontsize=13, x=-2, y=2.25, color='#333333')
ax['pdp_inter_ax'].set_xlabel('Do you attend Friday\nprayer/Sunday services?',
                              fontsize=13, labelpad=-5)
ax['pdp_inter_ax'].set_ylabel('How important that\nconstitution insures\n'
                              'equal rights for men\n and women?',
                              ha='right', fontsize=13, rotation=0, labelpad=0, y=0.45)
ax['pdp_inter_ax'].set_xticks([-1.7, 1.7])
ax['pdp_inter_ax'].set_xticklabels(['Never', 'Always'])
ax['pdp_inter_ax'].set_yticks([-1.15, 2])
ax['pdp_inter_ax'].set_yticklabels(['Not important at all', 'Very important'], rotation=90)
ax['pdp_inter_ax'].tick_params(axis='both', length=10, color='white')
fig.set_facecolor('white')
plt.show()
# -

row = X_test.loc[[2455]]
row

# +
import shap

explainer = shap.TreeExplainer(model)

shap.initjs()
shap.force_plot(
    base_value=explainer.expected_value,
    shap_values=explainer.shap_values(row),
    features=row,
    link='logit'
)
# -
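To extend this single force plot to the TP/TN/FP/FN stretch goal, one row index per confusion-matrix quadrant can be selected and each fed through the same force-plot call. A sketch of the selection step with toy labels and predictions (invented for illustration, not the model above):

```python
import pandas as pd

# Toy true labels and predictions, indexed the way X_test rows would be.
y_true = pd.Series([1, 0, 1, 0, 1], index=[10, 11, 12, 13, 14])
y_pred = pd.Series([1, 0, 0, 1, 1], index=y_true.index)

# One representative index per confusion-matrix quadrant.
quadrants = {
    'TP': y_true[(y_true == 1) & (y_pred == 1)].index[0],
    'TN': y_true[(y_true == 0) & (y_pred == 0)].index[0],
    'FP': y_true[(y_true == 0) & (y_pred == 1)].index[0],
    'FN': y_true[(y_true == 1) & (y_pred == 0)].index[0],
}
print(quadrants)  # → {'TP': 10, 'TN': 11, 'FP': 13, 'FN': 12}
```

Each selected index `i` would then be explained with `X_test.loc[[i]]` in place of `row` in the force-plot cell above.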