# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# ### Analyzing the Stroop Effect

# Perform the analysis in the space below. Remember to follow [the instructions](https://docs.google.com/document/d/1-OkpZLjG_kX9J6LIQ5IltsqMzVWjh36QpnP2RYpVdPU/pub?embedded=True) and review the [project rubric](https://review.udacity.com/#!/rubrics/71/view) before submitting. Once you've completed the analysis and write-up, download this file as a PDF or HTML file, upload that PDF/HTML into the workspace here (click on the orange Jupyter icon in the upper left, then Upload), then use the Submit Project button at the bottom of this page. This will create a zip file containing both this .ipynb doc and the PDF/HTML doc that will be submitted for your project.

# (1) What is the independent variable? What is the dependent variable?

# The dependent variable is what we measure: in this case, the time between the stimulus and the response.
#
# The independent variable is what we manipulate: in the Stroop experiment, whether the word and the color the word is written in are congruent or not.

# (2) What is an appropriate set of hypotheses for this task? Specify your null and alternative hypotheses, and clearly define any notation used. Justify your choices.

# The Stroop experiment repeats the same task (reading the words of a given list) twice and manipulates the test setup between the two runs: the color of the words presented is changed, while all other variables (for example, the number of words) stay constant.
#
# As usual, instead of testing the whole population (in this experiment we would need to test all human beings), a random sample of humans is tested, and statistics is used to judge whether the manipulation has an effect and whether we can generalize to the whole population.
# Appropriate hypotheses for such a test are:
#
# - The null hypothesis (the assumption before we run any test) is that the average response time under the incongruent words condition is not different from that under the congruent words condition.
# - The alternative hypothesis is that the average response time under the incongruent words condition is significantly different from that under the congruent words condition.
#
# Let $_{c}$ be the subscript for **congruent** and $_{i}$ the subscript for **incongruent**. Then the notation is
#
# $H_{0}: \mu_{c} - \mu_{i} = 0$
#
# and
#
# $H_{1}: \mu_{c} - \mu_{i} \neq 0$

# (3) Report some descriptive statistics regarding this dataset. Include at least one measure of central tendency and at least one measure of variability. The name of the data file is 'stroopdata.csv'.

import pandas as pd

df = pd.read_csv('stroopdata.csv')
df.describe()

df.max() - df.min()

df.var()

# - There are 24 rows in the dataset.
# - On average these 24 testees needed $\overline{x}_{c} = 14.05$ seconds and $\overline{x}_{i} = 22.01$ seconds to respond.
# - The response time range is 13.698 seconds in the congruent test and 19.568 seconds in the incongruent test.
# - The fastest response was 8.63 seconds in the congruent measurement and 15.687 seconds in the incongruent measurement.
# - The slowest response time was 22.328 seconds in the congruent words condition and 35.255 seconds in the incongruent words condition.
# - The median is $\tilde{x}_{c} = 14.357$ seconds and $\tilde{x}_{i} = 21.018$ seconds.
# - The variance of the data is $s^2_{c} = 12.669$ and $s^2_{i} = 23.011$, i.e. the square of the standard deviations $s_{c} = 3.559$ and $s_{i} = 4.797$.

# These statistics suggest that the average response time in the incongruent words condition is higher than in the congruent words condition. But so far this difference could still be coincidence.

# (4) Provide one or two visualizations that show the distribution of the sample data.
# Write one or two sentences noting what you observe about the plot or plots.

# I think the best plot to display the difference between the two measurements is a box plot.

import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline

sns.set(style="whitegrid")
ax = sns.boxplot(data=df)
ax.set_title('Stroop Task response time')
ax.set_xlabel('Task Condition')
ax.set_ylabel('Response Time (s)');

# The plot again suggests that the response time in the incongruent words condition is higher: most of the values of the incongruent measurement lie above the third quartile of the congruent measurement. The incongruent box also shows two outliers.
#
# This is consistent with the sample statistics above. A second plot to show the distribution is a histogram. I'll plot both distributions in one figure.

fig, dax = plt.subplots()
sns.distplot(df['Congruent'], ax=dax, bins=20, axlabel=False, label='Congruent', kde=False)
sns.distplot(df['Incongruent'], ax=dax, bins=20, axlabel=False, label='Incongruent', kde=False)
dax.set_xlabel('Response Time')
dax.set_ylabel("Frequency");
dax.set_title("Response time for congruent vs incongruent words");
dax.legend();

# The plot is consistent with the statistics seen so far.

# (5) Now, perform the statistical test and report your results. What is your confidence level or Type I error associated with your test? What is your conclusion regarding the hypotheses you set up? Did the results match up with your expectations? **Hint:** Think about what is being measured on each individual, and what statistic best captures how an individual reacts in each environment.

# As this is an experiment with one sample and a repeated measurement, and I don't know anything about the average response time of the population, I choose a dependent t-test for paired samples to compare the means of the two measurements.
#
# Internally, scipy performs this as a one-sample t-test on the differences between the two measurements.
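# That equivalence can be checked directly: `scipy.stats.ttest_rel` on the two conditions gives the same result as `scipy.stats.ttest_1samp` on the per-subject differences. A minimal sketch with made-up response times (not the values from stroopdata.csv):

```python
# Sketch: a paired t-test equals a one-sample t-test on the per-subject
# differences. The arrays below are illustrative, not the real data.
import numpy as np
from scipy import stats

congruent = np.array([12.1, 14.3, 15.0, 11.8, 13.5])
incongruent = np.array([19.2, 21.0, 24.5, 18.3, 22.1])

paired = stats.ttest_rel(incongruent, congruent)
one_sample = stats.ttest_1samp(incongruent - congruent, 0.0)

# Both approaches yield the same t-statistic and p-value.
print(np.isclose(paired.statistic, one_sample.statistic))  # True
print(np.isclose(paired.pvalue, one_sample.pvalue))        # True
```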
#
# I choose a standard significance level of $\alpha = 0.05$ (a 95% confidence level) for this test.

from scipy import stats

stats.ttest_rel(df['Incongruent'], df['Congruent'])

# The t-statistic tells us how much the sample mean differs from the null hypothesis. If it lies outside the critical values of the t-distribution corresponding to the significance level and the degrees of freedom, we reject the null hypothesis. We could look up the critical values in a table, but we can let scipy do it for us by giving it the quantiles (for a two-tailed test with $\alpha = 0.05$ these are 0.025 and 0.975) and the degrees of freedom, which for a paired t-test is the number of pairs minus 1.

t_critical_values = (stats.t.ppf(q=0.025, df=len(df) - 1),
                     stats.t.ppf(q=0.975, df=len(df) - 1))
t_critical_values

# The t-statistic is clearly larger than the t-critical value of 2.0687 at $\alpha = 0.05$ with 23 degrees of freedom. That means that if the null hypothesis were true, the probability of finding a t-statistic as extreme as this one would be far less than 5%.
#
# The calculated p-value is 4.103e-08, i.e. odds of roughly 0.00000004 of observing a difference as large as (or larger than) the one in our study if the null hypothesis were true. As this p-value is far smaller than 0.05, I reject the null hypothesis.
#
# I have evidence to suggest that on average it takes longer to read out words in the incongruent words condition.

# (6) Optional: What do you think is responsible for the effects observed? Can you think of an alternative or similar task that would result in a similar effect? Some research about the problem will be helpful for thinking about these two questions!

# I think the Stroop effect is evidence that our brain is a very efficient machine that automates tasks which are trained over and over again. Reading is a skill humans learn very early and practice every day.
#
# In contrast, when we have to do unusual tasks, our brain has to think.
# 🤔
#
# That takes longer and explains, in my opinion, why it takes longer to name the color a word is written in than to read the word itself. If we trained that every day, the time needed would get shorter and shorter until it took as little time as reading a word.
#
# A similar task is this one: https://youtu.be/MFzDaBzBlL0

# ### References
#
# In addition to the documentation of Pandas, Matplotlib and Seaborn I used these websites:
#
# - [Wikipedia: Stroop Effect](https://en.wikipedia.org/wiki/Stroop_effect)
# - [Wikipedia: Student's t-test](https://en.wikipedia.org/wiki/Student%27s_t-test)
# - [Neuroscience For Kids](https://faculty.washington.edu/chudler/words.html)
# - [Skewed Distribution: Definition, Examples](http://www.statisticshowto.com/probability-and-statistics/skewed-distribution/)
# - [Box Plot: Display of Distribution](http://www.physics.csbsju.edu/stats/box2.html)
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # <center>Analysis of Crime Incident Reports - Boston</center>
#
# <center><NAME></center>

# ## Introduction

# Crime has become so common these days that people are starting to see it as a part of society. There is an endless number of movies and shows centered around crime; the entire collection of movies and TV shows under DC Comics depicts crime as deeply rooted in civilization. It has become so commonplace that some even go as far as to claim it to be "an integral part of a healthy society" [[1]](#References). According to Durkheim [[1]](#References), the amount of deviance in crime remains relatively stable over time. I find that statement a little suspect. Considering just mass shootings in the United States, the rate went up by 24% within a year: the 2018 average was 0.88 mass shootings per day [[2]](#References), rising to 1.24 per day in 2019 [[3]](#References). While this doesn't negate Durkheim's hypothesis, it is contradictory enough to spark some curiosity.
#
# A major reason for picking Boston was that I was looking to move to the East Coast for work after graduating, and as part of my basic search, understanding the crime statistics of candidate cities was an important factor. A potential **stakeholder** for this project could be anyone who is living in or visiting Boston.

# ## Related Work
#
# There have been some analyses done in the past, such as those in [[5]](#References), [[6]](#References), and [[7]](#References), which bring to light some interesting facts, such as Friday being the most common day for crimes and 5 PM a popular time for motor accidents [[5]](#References).
# A common issue in these studies, however, was that there was little to no documentation of the process: no description of when and how the data was collected and processed, what the plots implied, etc. And judging by the dates present on the page, they seem to be fairly old. The questions asked by [[6]](#References) are nevertheless very interesting and form the basis of many of my questions, but that page lacks documentation completely; even the name of the dataset, or a link to it, is missing, which makes it hard to trust.

# ## Research Questions

# The focus of this analysis is to understand the distribution of the different kinds of crimes across districts and time. This should give us a pretty good sense of which areas are safe and which are not. The questions that I wish to answer in this analysis are as follows:
#
# 1. What is the overall distribution of the different kinds of crimes in Boston?
# 2. How do different districts in Boston compare on the basis of overall crime incident reports?
# 3. What are the top 10 crimes in each of the districts?
# 4. What are the top 10 crimes at different times of the day?
# 5. During what times of the day are shooting incidents most prevalent?
# 6. What reported incidents are most commonly associated with a shooting?
# 7. Which districts suffer more shooting incidents than the others?

# ## Dataset
#
# For this project, I explore the Crime Incident Reports dataset [[4]](#References) published by the Boston Police Department, covering all crimes reported between June 2015 and November 2019. It is a very rich city-level dataset which provides a detailed categorization of the crimes along with the time and location of each incident. The dataset can be collected either through the API provided by the Boston Police Department or by directly downloading the CSV file from their website [[4]](#References). For this analysis, we go with the latter.
# The dataset is released under the Open Data Commons Public Domain Dedication and License ([PDDL](https://opendatacommons.org/licenses/pddl/index.html)) [[4, 8]](#References).
#
# The dataset consists of three files which can be downloaded from [[4]](#References):
#
# - **raw_data.csv**
#   This dataset is located at [data/raw_data.csv](data/raw_data.csv). It is the most important file in this analysis and contains details on what crime happened when and where. A detailed description of the columns is given in the table below. Note that this description is partially present in one of the files provided by the BPD, [data/rmscrimeincidentfieldexplanation.xlsx](data/rmscrimeincidentfieldexplanation.xlsx). This file is ~78.6 MB in size.
#
# | Column | Description |
# |--------|-------------|
# | INCIDENT_NUMBER | Internal BPD report number |
# | OFFENSE_CODE | Numerical code of offense description |
# | OFFENSE_CODE_GROUP | Internal categorization of OFFENSE_DESCRIPTION |
# | OFFENSE_DESCRIPTION | Primary descriptor of incident |
# | DISTRICT | District the crime was reported in |
# | REPORTING_AREA | RA number associated with where the crime was reported from |
# | SHOOTING | Indicates whether a shooting took place |
# | OCCURRED_ON_DATE | Earliest date and time the incident could have taken place |
# | YEAR | Year component of OCCURRED_ON_DATE |
# | MONTH | Month component of OCCURRED_ON_DATE |
# | DAY_OF_WEEK | Day-of-week component of OCCURRED_ON_DATE |
# | HOUR | Hour component of OCCURRED_ON_DATE |
# | UCR_PART | Universal Crime Reporting Part number (1, 2, 3) |
# | STREET | Street name where the incident took place |
# | Lat | Latitude where the incident took place |
# | Long | Longitude where the incident took place |
# | Location | Latitude and longitude where the incident took place |
#
# - **rmscrimeincidentfieldexplanation.xlsx**
#   This dataset is located at [data/rmscrimeincidentfieldexplanation.xlsx](data/rmscrimeincidentfieldexplanation.xlsx).
#   It contains descriptions of some of the main columns in [data/raw_data.csv](data/raw_data.csv). Note that this file omits the description of some columns which are self-explanatory. A description of its columns is given in the table below:
#
# | Column | Description |
# |--------|-------------|
# | Field Name, Data Type Required | Name of the column in raw_data.csv along with its datatype and the NULL value constraint |
# | Description | Description of the column |
#
# - **rmsoffensecodes.xlsx**
#   This dataset is located at [data/rmsoffensecodes.xlsx](data/rmsoffensecodes.xlsx). It contains a list of all the offense codes along with a description of what they stand for. A description of its columns is given in the table below:
#
# | Column | Description |
# |--------|-------------|
# | CODE | Offense code value |
# | NAME | Name of the offense corresponding to the CODE |

# ## Ethical Considerations
#
# The dataset provides only the street information and doesn't narrow the location down any further to the building number, etc. Hence, I feel that there are no ethical concerns with this dataset. This information should be released publicly so that people can be aware of what is happening in their neighbourhood and city. One shouldn't have to file a Freedom of Information Act (FOIA) request again and again to keep themselves posted on their surroundings. It is thanks to this information that the public can find out, and take action, when they feel that crime is rising and the law isn't doing enough.

# ## Human-Centered Considerations
#
# This project could be very useful for someone who plans to move to Boston or already lives there. It contains the most recent data possible (see [Limitations](#Limitations)). It could help someone understand the crime statistics of the city, such as the main places and times where crimes happen, and help them make better decisions about where to stay and which places to avoid in order to stay safe.
# Survival instinct is one of the most basic instincts of all forms of life, and hopefully this work can help someone reduce their chances of getting hurt.
#
# It could be that this analysis shows certain parts of the city to be more prone to certain crimes. Often, such areas can be tied to a certain community, but that is not the intent of this study, and hence that mapping is not part of this study or the dataset. The intent here is only to provide information so people can make smarter choices in order to stay safe, and I do not wish for any service provider to alter their services based on this information. Moreover, this information is public and provided by the government, and could be misused anyway if people so intended; this project does not aim to aid misuse of any form.

# ## Methodology
#
# The questions asked in this analysis mainly focus on understanding the distribution of crime incidents across space and time in Boston. To get from raw data to insights, we explore the data to find issues in it and clean them accordingly. We aim to use the data as-is for the most part, and not impute any null values except in the case of the `SHOOTING` column where, as you will find below, there is a data entry issue. One interesting cleaning step we perform is to "uninflate" the entries in this dataset; the key to understanding this step is explained in the section where we discover the problem. Find a more detailed code walkthrough in the [Data Exploration and Cleaning](#Data-Exploration-and-Cleaning) section below.
#
# After cleaning, we answer the questions by constructing graphs and tables wherever necessary. We aim to keep the steps very simple so that they can be easily understood (and consequently accepted) by a wide group of people. For most of the questions, we construct histogram plots of the frequency of values in certain columns.
# We limit the number of items displayed on a plot when the question asked is of the form "What are the top 10...". The design of these plots is chosen to be very simple and easy to understand. We choose not to combine multiple histograms into one, mainly because in the questions where such a step could be performed, it would blow up the number of bars in the image, making it hard to interpret. More details on why and how each step is performed, and the resulting observations, can be found in the [Findings](#Findings) section below.

# +
import os

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from IPython.core.interactiveshell import InteractiveShell

# %matplotlib inline
np.set_printoptions(suppress=True)
InteractiveShell.ast_node_interactivity = "all"
# -

# ## Data Exploration and Cleaning

# First we read the raw data file and the offense codes file from the `data/` folder. We then print the number of rows in each of the files. We also display a few top rows from each file to understand what the data looks like.

# +
raw_data = pd.read_csv('data/raw_data.csv')
offense_codes = pd.read_excel('data/rmsoffensecodes.xlsx')

print("Number of rows in raw_data file:", len(raw_data))
print("Number of rows in offense_codes file:", len(offense_codes))

# +
raw_data.head()  # Display the top 5 rows of raw data
offense_codes.head()  # Display the top 5 rows of offense codes
# -

# From the tables, we see that there are some redundant columns; for example, `Location` is just a combination of `Lat` and `Long`, and `OCCURRED_ON_DATE` has been further broken down into `YEAR`, `MONTH`, etc. Let us drop the `Location` column for now and keep the other redundant columns; we will remove them later if we don't need them at all. We shall also convert the data type of `OCCURRED_ON_DATE` to a datetime format for easy plotting.
raw_data = raw_data.drop(columns=['Location'])
raw_data['OCCURRED_ON_DATE'] = pd.to_datetime(raw_data['OCCURRED_ON_DATE'])

# Now, let us verify whether the offense codes present in the offense codes dataset are the same as those in the raw_data file.

# +
offense_offense_codes = set(offense_codes.CODE)
raw_data_offense_codes = set(raw_data.OFFENSE_CODE)

print("Number of unique offense codes in offense codes file:", len(offense_offense_codes))
print("Number of unique offense codes in raw_data file:", len(raw_data_offense_codes))
# -

# It is interesting that there are no records for almost half of the offenses known to the system. Let us next make sure that the descriptions of the offense codes present in both files match for the common offenses. That way, we can safely discard the offense codes file and only work with the raw_data file.

# +
common_offense_codes = offense_offense_codes.intersection(raw_data_offense_codes)
print('There are a total of', len(common_offense_codes), "common offense codes.")

# Get a unique set of rows with common offense codes from raw_data
temp_data_raw = \
    raw_data[raw_data.OFFENSE_CODE.isin(common_offense_codes)][['OFFENSE_CODE', 'OFFENSE_DESCRIPTION']]
temp_data_raw = temp_data_raw.drop_duplicates(subset=["OFFENSE_CODE"])

# Get a unique set of rows with common offense codes from the offense dataset
temp_data_offense = offense_codes[offense_codes.CODE.isin(common_offense_codes)][['CODE', 'NAME']]
temp_data_offense = temp_data_offense.drop_duplicates(subset=["CODE"])

# Join the two sets and display rows where the names don't match
temp_merged_data = temp_data_offense.merge(temp_data_raw, left_on="CODE", right_on="OFFENSE_CODE")
temp_merged_data[temp_merged_data.NAME != temp_merged_data.OFFENSE_DESCRIPTION]
# -

# We see that 6 codes are present in the raw_data with no corresponding record in the offense_codes file. We also see that for 8 of the common offense codes, the descriptions don't match perfectly.
# However, they are close enough, and judging by the values, the descriptions in the raw_data appear to be the more detailed ones. So, we can safely discard the offense codes file.

# Let us now analyze the percentage of null values in the raw_data columns.

print("Percentage of null values in each of the columns:")
raw_data.isnull().mean().sort_values(ascending=False) * 100

# As we see, there is a surprisingly high number of null values in the `SHOOTING` column. Upon further investigation, I found that this is because the incidents which were not associated with a shooting were left as null. Let us look at the unique values in that column.

raw_data.SHOOTING.unique()

# We need to clean up this column by setting all '0' and NaN values to False and the other two to True. Let us ignore the other null values for now; we will drop the corresponding rows later when we perform specific analyses on them.

raw_data['SHOOTING'] = raw_data.SHOOTING.replace({'0': False, np.nan: False, 'Y': True, '1': True})

# Next, let's look at the summary of the columns to see if we find anything interesting in there.

raw_data.describe()

# Nothing specifically interesting here. One thing I noticed when we printed the top 5 rows of raw_data was that there is a district named 'External'. This value denotes that the crime was reported outside of Boston, so we do not need it in our analysis. I also noticed that the districts had cryptic names, and when I dug further into the Boston Police Department website, I found a [page](https://bpdnews.com/districts) which maps these coded names to the true district names. So, let's drop the 'External' records and create a new column `DISTRICT_NAME` with the mapped district names.
# +
raw_data = raw_data[raw_data.DISTRICT != 'External']

district_code_name_map = {
    'A1': 'Downtown',
    'A7': 'East Boston',
    'A15': 'Charlestown',
    'B2': 'Roxbury',
    'B3': 'Mattapan',
    'C6': 'South Boston',
    'C11': 'Dorchester',
    'D4': 'South End',
    'D14': 'Brighton',
    'E5': 'West Roxbury',
    'E13': 'Jamaica Plain',
    'E18': 'Hyde Park'
}
raw_data['DISTRICT_NAME'] = raw_data.DISTRICT.replace(district_code_name_map)
# -

# Next, let us have a look at the percentage of unique values in each column. This information is usually helpful in understanding whether more information could be derived from a feature.

raw_data.apply(lambda r: len(r.unique())) * 100 / len(raw_data)

# The most striking piece of information here is that the column `INCIDENT_NUMBER` is not 100% unique. Upon further analysis, I found that there is an entry for each person involved in a crime report and for each offense. So if 5 people committed 2 kinds of offense together, there would be a total of 10 entries in the system instead of 2 or 1. This clearly inflates the numbers in the dataset! We should remove the number-of-people factor to reduce the inflation: there should only be one entry per offense category for a given incident. So we create a new dataset with this setting. We retain the old dataset in case we wish to analyse patterns in the number of people involved in crimes. We also offer a flag for anyone who wishes to skip the creation of this new dataset and work with the inflated numbers.

# +
uninflate_numbers = True  # Set to False if you wish to keep the inflated numbers

if uninflate_numbers:
    processed_data = raw_data.drop_duplicates(subset=['INCIDENT_NUMBER', 'OFFENSE_CODE'], keep='first')
else:
    processed_data = raw_data.copy()
# -

# Now, let us make sure we are aware of the date range in the dataset so we can modify our definition of `year` accordingly if needed.
print('Earliest date:', processed_data.OCCURRED_ON_DATE.min())
print('Latest date:', processed_data.OCCURRED_ON_DATE.max())

# Now let us save processed_data to a CSV file in the `data/` folder.

processed_data.to_csv('data/processed_data.csv', index=False)

# ## Findings

# In this section, we aim to answer the seven research questions that we discussed in the [Research Questions](#Research-Questions) section above. For each question, we discuss how we construct the graph, how to interpret it, and some observations that we can draw from the graph or table.

# Let us first write a general plotting function which we can use for plotting histograms / distributions.

def plot_hist(data, x_label="", y_label="", title="", top=False):
    if top:
        data = data.sort_values(ascending=False)[:top]
    plt.rcParams["figure.figsize"] = [15, 10]
    plt.xticks(rotation=90)
    data.plot.bar()
    plt.ylabel(y_label)
    plt.xlabel(x_label)
    plt.title(title)

# ### Q1. What is the overall distribution of the different kinds of crimes in Boston?

# In order to get a general sense of the field, let us look at the frequency of each kind of crime to understand which are the most common and which are the least. For this, we plot the relative frequency of each value in the `OFFENSE_CODE_GROUP` column (y-axis) against the name of the offense (x-axis).

plt.rcParams.update({'font.size': 12})
plot_hist(data=processed_data.OFFENSE_CODE_GROUP.value_counts(ascending=False, normalize=True, sort=True, dropna=True),
          y_label="Relative Frequency", x_label="Offense Category",
          title="Relative frequency of incidents in Boston")

# It seems that Motor Vehicle Accident Response and Larceny are among the most commonly reported incidents, and human trafficking is one of the least reported. Note that just because something is rarely reported does not mean it rarely happens; it may just mean that nobody notices it, or that it goes unreported when noticed.
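# For readers who prefer exact numbers over bars, the same distribution can be inspected as a table via `value_counts`. A minimal sketch, using a tiny made-up frame in place of `processed_data` (only the column name matches the real dataset):

```python
# Sketch: exact relative frequencies of offense groups as a table.
# The demo frame below is illustrative, not the real data.
import pandas as pd

demo = pd.DataFrame({'OFFENSE_CODE_GROUP': [
    'Motor Vehicle Accident Response', 'Motor Vehicle Accident Response',
    'Larceny', 'Larceny', 'Larceny', 'Vandalism',
]})

# value_counts sorts descending by default; normalize=True gives proportions
top = demo.OFFENSE_CODE_GROUP.value_counts(normalize=True).head(10)
print(top)
```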
# Now let us repeat this for districts to identify which districts are the least safe and which are the most.

# ### Q2. How do different districts in Boston compare on the basis of overall crime incident reports?
#
# For this, we plot the relative frequency of each value in the `DISTRICT_NAME` column (y-axis) against the name of the district (x-axis). The higher the relative frequency, the less safe the district.

plot_hist(processed_data.DISTRICT_NAME.value_counts(ascending=False, normalize=True, sort=True),
          y_label="Relative Frequency", x_label="District",
          title="Relative frequency of incidents in Boston Districts")

# From the above graph, we see that Roxbury, Dorchester, and South End are among the least safe districts in Boston, while East Boston and Charlestown are the relatively safe ones. This is interesting because each of these groups is geographically clustered: Roxbury, Dorchester, and South End are located in the south-east part of Boston, whereas East Boston and Charlestown are separated from the mainland by a body of water.

# ### Q3. What are the top 10 crimes in each of the districts?
#
# For this question, we construct 12 histogram plots, one for each district. In each graph, we plot the relative frequency of the 10 most commonly reported incidents in that district.
def get_top_k_crimes_district(data, district, k=10):
    subset_data = data[data.DISTRICT_NAME == district]
    subset_data = subset_data.OFFENSE_CODE_GROUP.value_counts(normalize=True, dropna=True).sort_values(ascending=False)[:k]
    return subset_data

# Get unique district names
district_names = processed_data.DISTRICT_NAME.dropna().sort_values().unique()

n_cols = 2
n_rows = (len(district_names) // 2) + (len(district_names) % 2)

plt.rcParams['figure.figsize'] = [20, 60]
plt.rcParams.update({'font.size': 13})
fig, axs = plt.subplots(n_rows, n_cols)

x = 0
y = 0
for name in district_names:
    val = get_top_k_crimes_district(processed_data, name)
    _ = val.plot.bar(ax=axs[x, y])
    _ = axs[x, y].set_xticklabels(list(val.index), rotation=90)
    _ = axs[x, y].set_title(name)
    plt.subplots_adjust(hspace=.7)
    y = (y + 1) % 2
    if not y:
        x += 1

# The first important observation here is that 'Motor Vehicle Accident Response' and 'Medical Assistance' are very common across all districts. But if we look past them, we see that in areas such as Roxbury and South End, which suffer from more crime incidents, Larceny is more prevalent.

# ### Q4. What are the top 10 crimes at different times of the day?
#
# In order to simplify the analysis across times, we bucket the 24 hours of the day into 6 bins ("0-3", "4-7", "8-11", "12-15", "16-19", "20-23"). We then follow the same procedure as in the previous question and create a graph for each bin.
bin_labels = ["0-3", "4-7", "8-11", "12-15", "16-19", "20-23"]
bins = {i: bin_labels[i // 4] for i in range(24)}

temp_data = processed_data.copy()
temp_data['hbins'] = temp_data.HOUR.replace(bins)

# +
def get_top_k_crimes_hour(data, hbin, k=10):
    subset_data = data[data.hbins == hbin]
    subset_data = subset_data.OFFENSE_CODE_GROUP.value_counts(normalize=True, dropna=True).sort_values(ascending=False)[:k]
    return subset_data

n_cols = 2
n_rows = (len(bin_labels) // 2) + (len(bin_labels) % 2)

plt.rcParams['figure.figsize'] = [20, 30]
plt.rcParams.update({'font.size': 13})
fig, axs = plt.subplots(n_rows, n_cols)

x = 0
y = 0
for hbin in bin_labels:
    val = get_top_k_crimes_hour(temp_data, hbin)
    _ = val.plot.bar(ax=axs[x, y])
    _ = axs[x, y].set_xticklabels(list(val.index), rotation=90)
    _ = axs[x, y].set_title(hbin)
    plt.subplots_adjust(hspace=.6)
    y = (y + 1) % 2
    if not y:
        x += 1
# -

# A very interesting general pattern emerges from the data: as the night gets darker, Vandalism and Simple Assault become very common while Larceny drops; during the day, the opposite pattern is observed, with Vandalism and Simple Assault receding while Larceny becomes more common.

# ### Q5. During what times of the day are shooting incidents most prevalent?

# Here, we create a table with an entry for each of the time bins defined in the previous question. For each bin, we count the number of incidents which involved a shooting.

temp_data[temp_data.SHOOTING].hbins.value_counts()

# Here, we see that shooting incidents are most common in the evening and at night.

# ### Q6. What reported incidents are most commonly associated with a shooting?
#
# For this analysis, we first create a subset of the data where `SHOOTING` is True, and then plot the relative frequency of each value in the `OFFENSE_CODE_GROUP` column (y-axis) against the name of the offense (x-axis). We only consider the top 10 crimes here.
shooting_data_code = processed_data[processed_data.SHOOTING].OFFENSE_CODE_GROUP.value_counts(ascending=False, normalize=True, sort=True, dropna=True) plot_hist(data=shooting_data_code, y_label="Relative Frequency", x_label="Offense Category", title="Top 10 incidents most commonly associated with shooting", top=10) # Here, we see that Aggravated Assault and Homicide are among the crimes most commonly associated with a shooting; probably part of the reason it is called 'Aggravated Assault'. Another interesting way to study this would be to plot the proportion of each crime that is associated with a shooting. # # Another shocking and sad pattern we see here is that 'Medical Assistance' is only weakly associated with shooting incidents, which could potentially mean that there was nothing left to medically assist after the shooting. # ### Q7. Which districts suffer more shooting incidents than the others? # We repeat the same steps as in the previous question to identify the districts most commonly associated with a shooting incident. shooting_data_district = processed_data[processed_data.SHOOTING].DISTRICT_NAME.value_counts(ascending=False, normalize=True, sort=True, dropna=True) plot_hist(data=shooting_data_district, y_label="Relative Frequency", x_label="District", title="Districts most commonly associated with shooting") # Interestingly, Roxbury, Mattapan and Dorchester are again at the top of this list, with Charlestown at the bottom. Again, another interesting way to study this would be to plot the proportion of the crime in each district that is associated with a shooting. # ## Limitations # # 1. The dataset records the crime incidents that were reported to the Boston Police Department. Just because some crimes seem uncommon doesn't always imply that they are indeed uncommon; it could also mean that the crime is a clean job and leaves no trail to be reported. # # 2. The dataset used for this analysis captures incident reports between 2015-06-15 & 2019-11-14.
For a more recent analysis, one would need to update the `raw_data.csv` file in the repository and re-run the project. # ## Conclusions # # 1. It is very clear from all the graphs above that Boston needs serious attention to its motor vehicle accident problem; stricter driving laws and driving tests might help. # 2. Roxbury, Mattapan and Dorchester seem to be very unsafe, and Charlestown, on the other hand, seems to be very safe. # 3. Going out at night is particularly unsafe for anyone and should be avoided when possible. # ## Future Work # # - Merge with information about the population and area of each district and normalize the values accordingly. # - Analyze the trend of change in certain crimes over the past few years. # - Possibly relate the trends to relevant laws that were passed, to observe the effect of those laws. # - As pointed out in the [Findings](#Findings) section, study the proportion of crime in each district and the proportion of different kinds of crimes that are associated with shooting.
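The first Future Work item (normalizing by population) can be sketched with a simple `merge`; every number and district name below is made up for illustration, and real figures would come from census data:

```python
import pandas as pd

# Hypothetical per-district incident counts (the real counts would come from
# processed_data.DISTRICT_NAME.value_counts())
incidents = pd.DataFrame({"DISTRICT_NAME": ["Roxbury", "Charlestown"],
                          "incident_count": [40000, 5000]})

# Made-up population figures standing in for census data
population = pd.DataFrame({"DISTRICT_NAME": ["Roxbury", "Charlestown"],
                           "population": [50000, 20000]})

# Join on the district name, then convert counts to a per-1000-residents rate
merged = incidents.merge(population, on="DISTRICT_NAME")
merged["incidents_per_1000"] = 1000 * merged["incident_count"] / merged["population"]
print(merged[["DISTRICT_NAME", "incidents_per_1000"]])
```

A per-capita view like this can reorder the districts: in the toy numbers above the raw counts differ by a factor of 8, but the per-1000 rates differ only by a factor of about 3.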
# ## References # # [1] The Normality of Crime - [http://www.d.umn.edu/cla/faculty/jhamlin/4111/Durkheim%20-%20Division%20of%20Labor_files/The%20Normality%20of%20Crime.pdf](http://www.d.umn.edu/cla/faculty/jhamlin/4111/Durkheim%20-%20Division%20of%20Labor_files/The%20Normality%20of%20Crime.pdf) # # [2] List of mass shootings in the United States in 2018 - [https://en.wikipedia.org/wiki/List_of_mass_shootings_in_the_United_States_in_2018](https://en.wikipedia.org/wiki/List_of_mass_shootings_in_the_United_States_in_2018) # # [3] List of mass shootings in the United States in 2019 - [https://en.wikipedia.org/wiki/List_of_mass_shootings_in_the_United_States_in_2019](https://en.wikipedia.org/wiki/List_of_mass_shootings_in_the_United_States_in_2019) # # [4] Crime Incident Reports - [https://data.boston.gov/dataset/crime-incident-reports-august-2015-to-date-source-new-system](https://data.boston.gov/dataset/crime-incident-reports-august-2015-to-date-source-new-system) # # [5] Boston-Crime-Analysis - [https://github.com/MehtaShruti/Boston-Crime-Analysis](https://github.com/MehtaShruti/Boston-Crime-Analysis) # # [6] Boston Crime Analyis | Kaggle - [https://www.kaggle.com/cnchandroo/boston-crime-analysis](https://www.kaggle.com/cnchandroo/boston-crime-analysis) # # [7] Workbook: Boston Crime Data Analysis - [https://public.tableau.com/views/Bostoncrimedataanalysis/Story1](https://public.tableau.com/views/Bostoncrimedataanalysis/Story1) # # [8] Open Data Commons Public Domain Dedication and License (PDDL) - [https://opendatacommons.org/licenses/pddl/index.html](https://opendatacommons.org/licenses/pddl/index.html)
final_analysis.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Data-X: Airbnb Price Predictor # Predicts the current price of an Airbnb in San Francisco based on other features # ___ # ### Dependencies # None # ### Imports # + # Import Standard ML packages import numpy as np import pandas as pd # Import and Configure Plotting Libraries import matplotlib.pyplot as plt import seaborn as sns sns.set(style="whitegrid", palette="muted") plt.rcParams['figure.figsize'] = (12, 9) plt.rcParams['font.size'] = 14 # %matplotlib inline # Import Models from sklearn import linear_model from sklearn.linear_model import LinearRegression from sklearn.linear_model import LogisticRegression from sklearn.linear_model import Ridge from sklearn.linear_model import Lasso from sklearn import ensemble from sklearn.ensemble import RandomForestRegressor from sklearn.ensemble import GradientBoostingRegressor # Import Helper Modules from sklearn.externals import joblib from sklearn.model_selection import train_test_split import sklearn.metrics as metrics from sklearn.metrics import make_scorer from sklearn.model_selection import GridSearchCV from sklearn.preprocessing import StandardScaler # - # ### Import Datasets listings_df = pd.read_csv("../raw_datasets/sf_airbnb_nov_18.csv") # ### Data Cleaning/Transformation # + def select_columns(df, *columns): return df.loc[:, columns] def dollar_to_float(df, *columns): for c in columns: df[c] = df[c].str.replace(r'[$,]', '').astype("float64") return df def one_hot_encoding(df, *columns): for c in columns: hot = pd.get_dummies(df[c], prefix=c) df = pd.concat([df, hot], axis=1) df.drop(c, axis=1, inplace=True) return df def fill_na_with_median(df, *columns): for c in columns: df.loc[df[c].isnull(),c] = df.loc[df[c].notnull(),c].median() return df def clean_df(df): df = df.copy() return ( 
df.set_index("id").pipe( select_columns, "price", "longitude", "latitude", "accommodates", "bedrooms", "bathrooms", "beds", "room_type", "neighbourhood_cleansed", "zipcode" ) .pipe( fill_na_with_median, "bathrooms", "bedrooms", "beds" ) .pipe( dollar_to_float, "price" ) .pipe( one_hot_encoding, "room_type", "neighbourhood_cleansed", "zipcode" ) ) # - cleaned_listings = clean_df(listings_df) cleaned_listings.head() # ### Model Training/Evaluation def scale_X(X): scaler = StandardScaler() scaler.fit(X) return scaler.transform(X) X_df = cleaned_listings.drop("price", axis=1) X = X_df.pipe(scale_X) Y = cleaned_listings["price"] X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=8) # + err_func = metrics.median_absolute_error estimators = [ GridSearchCV( estimator = linear_model.LinearRegression(), param_grid = { "fit_intercept": [True, False] }, scoring = make_scorer(err_func, greater_is_better=False) ), GridSearchCV( estimator = linear_model.Lasso(), param_grid = { "alpha": [1, 5, 10, 20], "fit_intercept": [True, False] }, scoring = make_scorer(err_func, greater_is_better=False) ), GridSearchCV( estimator = linear_model.Ridge(fit_intercept=True), param_grid = { "alpha": [1, 5, 10, 20], "fit_intercept": [True, False] }, scoring = make_scorer(err_func, greater_is_better=False) ), GridSearchCV( estimator = linear_model.OrthogonalMatchingPursuit(), param_grid = { "fit_intercept": [True] }, scoring = make_scorer(err_func, greater_is_better=False) ), GridSearchCV( estimator = linear_model.BayesianRidge(), param_grid = { "alpha_1": [1.e-6, 1], "alpha_2": [1.e-6, 1], "lambda_1": [1.e-6, 1], "lambda_2": [1.e-6, 1], "fit_intercept": [True, False] }, scoring = make_scorer(err_func, greater_is_better=False) ), GridSearchCV( estimator = linear_model.ElasticNet(), param_grid = { "alpha": [1, 5, 10, 20], "l1_ratio": [0.3, 0.5, 0.7] }, scoring = make_scorer(err_func, greater_is_better=False) ), GridSearchCV( estimator = 
ensemble.RandomForestRegressor(), param_grid = { "n_estimators": [5, 10, 20] }, scoring = make_scorer(err_func, greater_is_better=False) ), GridSearchCV( estimator = ensemble.GradientBoostingRegressor(), param_grid = { "loss": ["lad"], "n_estimators": [300] }, scoring = make_scorer(err_func, greater_is_better=False) ) ] estimator_labels = np.array([ 'Linear', 'Lasso', 'Ridge', 'OMP', 'BayesRidge', 'ElasticNet', 'RForest', 'GBoosting' ]) def get_estimator_name(e): return estimator_labels[estimators.index(e)] estimator_errs = np.array([]) best_model = None min_err = float("inf") for e in estimators: e.fit(X_train, y_train) y_pred = e.predict(X_test) curr_err = err_func(y_test, y_pred) estimator_errs = np.append(estimator_errs, curr_err) print(f"""{get_estimator_name(e)}: {e.best_params_}, Error: {curr_err}""") if curr_err < min_err: min_err = curr_err best_model = e print(f""" Best Estimator: {get_estimator_name(best_model)} MAE: {min_err} """) x_vals = np.arange(len(estimator_errs)) sorted_indices = np.argsort(estimator_errs) plt.figure(figsize=(8,6)) plt.title("Estimator Median Absolute Error") plt.xlabel('Estimator') plt.ylabel('Median Absolute Error') plt.bar(x_vals, estimator_errs[sorted_indices], align='center') plt.xticks(x_vals, estimator_labels[sorted_indices]) plt.savefig('../plots/Airbnb Price Predictor MAE.png', bbox_inches='tight') plt.show() # + feature_importance = best_model.best_estimator_.feature_importances_ sorted_indices = np.argsort(feature_importance) x_sorted = np.arange(len(feature_importance)) y_sorted = feature_importance[sorted_indices] c_sorted = X_df.columns[sorted_indices] x_vals = x_sorted[-10:] y_vals = y_sorted[-10:] cs = c_sorted[-10:] plt.figure(figsize=(10,8)) plt.title('Top 10 Most Important Variables') plt.xlabel('Feature Importance') plt.yticks(x_vals, cs) plt.barh(x_vals, y_vals) plt.savefig('../plots/Airbnb Price Predictor Variable Importances.png', bbox_inches='tight') plt.show() # - best_model.fit(X_train, y_train) 
plt.title(f"""Best Model ({get_estimator_name(best_model)}) Residuals""") plt.xlabel("Residuals (Price)") plt.ylabel("Count") plt.xlim((-300, 300)) plt.hist((y_test - best_model.predict(X_test)).values, bins=500) plt.savefig('../plots/Airbnb Price Predictor Residuals.png', bbox_inches='tight') plt.show() # ### Export Models export_path = "../exported_models/airbnb_price_predictor.hdf" X_df.to_hdf(export_path, "X_df") Y.to_hdf(export_path, "Y") export_path = "../exported_models/airbnb_price_predictor.pkl" joblib.dump(best_model, export_path);
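The `scale_X` helper earlier in this notebook standardizes each feature with `StandardScaler`; as a sanity check, this is essentially what that transform computes per column (a minimal NumPy sketch, using the population standard deviation as `StandardScaler` does):

```python
import numpy as np

def standardize(X):
    # Column-wise z-scoring: subtract the mean and divide by the ddof=0
    # standard deviation, which is what StandardScaler.transform computes.
    mean = X.mean(axis=0)
    std = X.std(axis=0)
    return (X - mean) / std

X_demo = np.array([[1.0, 10.0],
                   [2.0, 20.0],
                   [3.0, 30.0]])
Z = standardize(X_demo)
print(Z.mean(axis=0))  # ~[0, 0]
print(Z.std(axis=0))   # ~[1, 1]
```

One design caveat: the notebook fits the scaler on all of `X` before `train_test_split`, so test-set statistics leak into the training features; fitting the scaler on the training split only is generally safer.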
ipython_notebooks/airbnb_price_predictor.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda root] # language: python # name: conda-root-py # --- # + Objectives: 1) Reading tracer variable values from NEMO *grid_T.nc results files 2) Plotting sea surface height fields at selected times 3) Approximate land area masks for tracer field plots 4) Using slices to zoom in on domain regions 5) Plotting contour bands and lines 6) Plotting temperature fields at selected depths 7) Adding contour lines to colour mesh plots 8) Anomaly plots 9) Plotting salinity fields with various colour scales # - import matplotlib.pyplot as plt import numpy as np import netCDF4 as nc from salishsea_tools import ( viz_tools, nc_tools, ) # %matplotlib inline # + # load the nc file tracers = nc.Dataset('/ocean/ssahu/nemo-code/NEMOGCM/CONFIG/MY_GYRE/EXP00/GYRE_5d_00010101_00011230_grid_T.nc') # - nc_tools.show_dimensions(tracers) # + active="" # 1> x and y are the sizes which govern the lateral grid points to be given by 'nv_lat' and 'nv_lon' # # 2> deptht gives the depth at vertical grid levels # # 3> time_counter specifies time at center of each of the model time intervals # # 4> tbnds specifies the time intervals around each of the time-counter # - nc_tools.show_variables(tracers) nc_tools.show_dataset_attrs(tracers) # + nc_tools.show_variable_attrs(tracers, 'time_counter') # why is the time shown incorrect relative to when the run started?
# - tracers.variables['time_counter'][:] #time values measured in seconds since the date and time the run started # + #to use further let us alias the time counter as time steps timesteps = tracers.variables['time_counter'] # + active="" # Plotting SSH fields and time slicing # - nc_tools.show_variable_attrs(tracers, 'sossheig') ssh = tracers.variables['sossheig'] lats = tracers.variables['nav_lat'] lons = tracers.variables['nav_lon'] ssh.shape fig, ax = plt.subplots(1, 1, figsize=(10, 8)) viz_tools.set_aspect(ax) mesh = ax.pcolormesh(ssh[0]) fig.colorbar(mesh) # + # now let us mask the land ssh0 = np.ma.masked_values(ssh[0], 0) #for higher accuracy masking should always be done from bathymetry and not from ssh data fig, ax = plt.subplots(1, 1, figsize=(10, 8)) viz_tools.set_aspect(ax) mesh = ax.pcolormesh(ssh0) fig.colorbar(mesh) # + # plotting the gyre run ssh with the latitude and longitude ssh0 = np.ma.masked_values(ssh[0], 0) #for higher accuracy masking should always be done from bathymetry and not from ssh data fig, ax = plt.subplots(1, 1, figsize=(10, 8)) viz_tools.set_aspect(ax) mesh = ax.pcolormesh(ssh0) cmap = plt.get_cmap('jet') cmap.set_bad('burlywood') mesh = ax.pcolormesh(ssh0, cmap=cmap) cbar = fig.colorbar(mesh) ax.set_xlabel('{longitude.long_name} [{longitude.units}]'.format(longitude=lons)) ax.set_ylabel('{latitude.long_name} [{latitude.units}]'.format(latitude=lats)) cbar.set_label('{label} [{units}]'.format(label=ssh.long_name.title(), units=ssh.units)) # - ssh.shape # + #to plot ssh at different time using a loop and zip command as different subplots # + fig, axs = plt.subplots(1, 3, figsize=(16, 8), sharey=True) cmap = plt.get_cmap('jet') cmap.set_bad('burlywood') time_steps = (0, 3, 6) for ax, t in zip(axs, time_steps): ssh_t = np.ma.masked_equal(ssh[t], 0) viz_tools.set_aspect(ax) mesh = ax.pcolormesh(ssh_t, cmap=cmap) cbar = fig.colorbar(mesh, ax=ax) ax.set_title('t = {:.1f}h'.format(timesteps[t] / 3600)) 
ax.set_xlabel('{longitude.long_name} [{longitude.units}]'.format(longitude=lons)) ax.set_ylabel('{latitude.long_name} [{latitude.units}]'.format(latitude=lats)) cbar.set_label('{label} [{units}]'.format(label=ssh.long_name.title(), units=ssh.units)) # + # keeping the colorbar limits constant fig, axs = plt.subplots(1, 3, figsize=(16, 8), sharey=True) cmap = plt.get_cmap('jet') cmap.set_bad('burlywood') time_steps = (0, 3, 6) for ax, t in zip(axs, time_steps): ssh_t = np.ma.masked_equal(ssh[t], 0) viz_tools.set_aspect(ax) mesh = ax.pcolormesh(ssh_t, cmap=cmap, vmin = -0.06, vmax=0.06) cbar = fig.colorbar(mesh, ax=ax) ax.set_title('t = {:.1f}h'.format(timesteps[t] / 3600)) ax.set_xlabel('{longitude.long_name} [{longitude.units}]'.format(longitude=lons)) ax.set_ylabel('{latitude.long_name} [{latitude.units}]'.format(latitude=lats)) cbar.set_label('{label} [{units}]'.format(label=ssh.long_name.title(), units=ssh.units)) # + # need the nc file for the gyre run to plot the coastline # + active="" # Plotting temperature in horizontal planes # - nc_tools.show_variable_attrs(tracers, 'votemper') #this goes on to show that temperature has (t,z,y,x) in 4 dimensions # + # ssh is only for the surface but temperature shows a four dimensional array # lets check how the vertical levels are stacked in the NEMO grids nc_tools.show_variable_attrs(tracers, 'deptht') tracers.variables['deptht'][:] # + # we see that the vertical grid spacing gradually (not gradually but rather rapidly :-D) increases as we go further down # let us assign python names to the variables which we are going to plot eventually temper = tracers.variables['votemper'] depth = tracers.variables['deptht'] # - # + t, zlevel = 0, 0 temper_tz = np.ma.masked_values(temper[t, zlevel], 0) fig, ax = plt.subplots(1, 1, figsize=(10, 8)) viz_tools.set_aspect(ax) cmap = plt.get_cmap('jet') cmap.set_bad('burlywood') mesh = ax.pcolormesh(temper_tz, cmap=cmap) cbar = fig.colorbar(mesh) plt.axis((0, temper_tz.shape[1], 0, 
temper_tz.shape[0])) ax.grid() ax.set_xlabel('x Index') ax.set_ylabel('y Index') cbar.set_label('{label} [{units}]'.format(label=temper.long_name.title(), units=temper.units)) ax.set_title(u't = {t:.1f}h, depth \u2248 {d:.2f}{z.units}'.format(t=timesteps[t] / 3600, d = depth[zlevel], z=depth)) # + fig, axs = plt.subplots(1, 3, figsize=(16, 8), sharey=True) for ax in axs: viz_tools.set_aspect(ax) cmap = plt.get_cmap('jet') cmap.set_bad('burlywood') t = 0 levels = (0, 10, 26) for ax, z in zip(axs, levels): temper_tz = np.ma.masked_equal(temper[t, z], 0) mesh = ax.pcolormesh(temper_tz, cmap=cmap) cbar = fig.colorbar(mesh, ax=ax) ax.set_title(u't = {t:.1f}h, depth \u2248 {d:.2f}{z.units}'.format(t=timesteps[t] / 3600, d=depth[z], z=depth)) ax.set_xlabel('x Index') ax.grid() axs[0].set_ylabel('y Index') cbar.set_label('{label} [{units}]'.format(label=temper.long_name.title(), units=temper.units)) # + fig, axs = plt.subplots(1, 3, figsize=(16, 8), sharey=True) for ax in axs: viz_tools.set_aspect(ax) cmap = plt.get_cmap('jet') anomaly_cmap = plt.get_cmap('bwr') for c in (cmap, anomaly_cmap): c.set_bad('burlywood') time_steps = (0, 2) z = 0 # Temperature fields at the time steps for ax, t in zip(axs[:2], time_steps): temper_tz = np.ma.masked_values(temper[t, z], 0) mesh = ax.pcolormesh(temper_tz, cmap=cmap) cbar = fig.colorbar(mesh, ax=ax) ax.set_title(u't = {t:.1f}h, depth \u2248 {d:.2f}{z.units}'.format(t=timesteps[t] / 3600, d=depth[z], z=depth)) ax.set_xlabel('x Index') ax.set_xlim(0, temper_tz.shape[1]) ax.set_ylim(0, temper_tz.shape[0]) ax.grid() axs[0].set_ylabel('y Index') cbar.set_label('{label} [{units}]'.format(label=temper.long_name.title(), units=temper.units)) # Temperature field difference between the time steps ax = axs[2] temper_diff = temper[time_steps[1], z] - temper[time_steps[0], z] temper_diff = np.ma.masked_values(temper_diff, 0, copy=False) abs_max = viz_tools.calc_abs_max(temper_diff) mesh = ax.pcolormesh(temper_diff, cmap=anomaly_cmap, 
vmin=-abs_max, vmax=abs_max) cbar = fig.colorbar(mesh, ax=ax) ax.set_title(u'depth \u2248 {d:.2f}{z.units}'.format(d=depth[z], z=depth)) ax.set_xlabel('x Index') ax.set_xlim(0, temper_diff.shape[1]) ax.set_ylim(0, temper_diff.shape[0]) ax.grid() cbar.set_label('{label} Difference [{units}]'.format(label=temper.long_name.title(), units=temper.units)) # -
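The anomaly panel above masks land points (stored as zeros) and uses `viz_tools.calc_abs_max` to centre the colour scale on zero. A minimal NumPy sketch of that masking-plus-symmetric-limits pattern, assuming `calc_abs_max` effectively returns max(|field|):

```python
import numpy as np

# Toy "temperature difference" field; zeros mark land points
temper_diff = np.array([[0.0, 0.5, -1.2],
                        [0.3, 0.0, 0.8]])

# Mask the land values so they are excluded from statistics and plotting
masked = np.ma.masked_values(temper_diff, 0)

# Symmetric colour limits centred on zero, for an anomaly colormap like 'bwr'
abs_max = np.abs(masked).max()
vmin, vmax = -abs_max, abs_max
print(abs_max)  # 1.2
```

Passing `vmin=-abs_max, vmax=abs_max` to `pcolormesh` guarantees that zero anomaly maps to the midpoint (white) of the diverging colormap.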
gyre/tracers_on_horizontal_plane_gyre_run.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from bs4 import BeautifulSoup import requests import re import wget url = "https://docs.google.com/spreadsheets/d/e/2PACX-1vQG13ojbnTnmoF_UF69QVA5OjOOjB57m-xam6Ac1RhsMkOnsLPCn57xcVqsZ33ZFdP17gD38z7M_m5o/pubhtml#" page = requests.get(url) soup = BeautifulSoup(page.content, 'html.parser') title = soup.find(id='doc-title').get_text().replace(' ', '_') print(title) tags = soup.find_all(id=re.compile("^sheet-button-")) gids = [re.search('sheet-button-(.+?)"', str(tag)).group(1) for tag in tags] print(gids) sheet_names = [tag.get_text().replace(' ', '_') for tag in tags] print(sheet_names) for gid, sheet_name in zip(gids, sheet_names): file_url = url.rstrip('#').replace('pubhtml', f'pub?output=csv&gid={gid}') wget.download(file_url, f'{title}-{sheet_name}.csv')
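The download loop above hinges on one string rewrite: the published-HTML URL is turned into a per-sheet CSV export URL by swapping `pubhtml` for `pub?output=csv&gid=...`. The same transformation, isolated with a made-up spreadsheet key and gid:

```python
# Hypothetical pubhtml URL (EXAMPLE_KEY and the gid are made up)
url = "https://docs.google.com/spreadsheets/d/e/EXAMPLE_KEY/pubhtml#"
gid = "123456"

# Drop the trailing '#', then rewrite the endpoint into a CSV export link
file_url = url.rstrip('#').replace('pubhtml', f'pub?output=csv&gid={gid}')
print(file_url)  # https://docs.google.com/spreadsheets/d/e/EXAMPLE_KEY/pub?output=csv&gid=123456
```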
pubhtml_scraper.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from bayes_tec.datapack import DataPack from bayes_tec.logging import logging from bayes_tec.utils.data_utils import make_coord_array import numpy as np import os import astropy.time as at def make_example_datapack(Nd,Nf,Nt,pols=None, time_corr=50.,dir_corr=0.5*np.pi/180.,tec_scale=0.02,tec_noise=1e-3,name='test.hdf5',clobber=False): logging.info("=== Creating example datapack ===") name = os.path.abspath(name) if os.path.isfile(name) and clobber: os.unlink(name) datapack = DataPack(name,readonly=False) with datapack: datapack.add_antennas() datapack.add_sources(np.random.normal(np.pi/4.,np.pi/180.*2.5,size=[Nd,2])) _, directions = datapack.sources _, antennas = datapack.antennas ref_dist = np.linalg.norm(antennas - antennas[0:1,:],axis=1)[None,None,:,None]#1,1,Na,1 times = at.Time(np.linspace(0,Nt*8,Nt)[:,None],format='gps').mjd*86400.#mjs freqs = np.linspace(120,160,Nf)*1e6 if pols is not None: use_pols = True assert isinstance(pols,(tuple,list)) else: use_pols = False pols = ['XX'] tec_conversion = -8.440e9/freqs #Nf X = make_coord_array(directions/dir_corr, times/time_corr)# Nd*Nt, 3 X2 = np.sum((X[:,:,None] - X.T[None,:,:])**2, axis=1)#N,N K = tec_scale**2 * np.exp(-0.5*X2) L = np.linalg.cholesky(K + 1e-6*np.eye(K.shape[0]))#N,N Z = np.random.normal(size=(K.shape[0],len(pols)))#N,npols tec = np.einsum("ab,bc->ac",L,Z)#N,npols tec = tec.reshape((Nd,Nt,len(pols))).transpose((2,0,1))#Npols,Nd,Nt tec = tec[:,:,None,:]*(0.2+ref_dist/np.max(ref_dist))#Npols,Nd,Na,Nt # print(tec) tec += tec_noise*np.random.normal(size=tec.shape) phase = tec[:,:,:,None,:]*tec_conversion[None,None,None,:,None]##Npols,Nd,Na,Nf,Nt # print(phase) phase = np.angle(np.exp(1j*phase)) if not use_pols: phase = phase[0,...] 
pols = None datapack.add_freq_dep_tab('phase',times=times,freqs=freqs,pols=pols,vals=phase) datapack.phase = phase return datapack # + import pylab as plt from bayes_tec.plotting.plot_datapack import DatapackPlotter,animate_datapack datapack = make_example_datapack(90,10,100,pols=['XX'],name='new_test.hdf5',time_corr=80,tec_scale=0.02,clobber=True) # animate_datapack(datapack,'figs',num_processes=8) with datapack: phase,axes = datapack.phase plt.imshow(phase[0,0,51,:,:],aspect='auto',cmap='hsv',vmin=-np.pi,vmax=np.pi) plt.colorbar() plt.show() # -
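`make_example_datapack` above draws the simulated TEC screens from a Gaussian process: it builds a squared-exponential kernel over the scaled (direction, time) coordinates, adds a small jitter, Cholesky-factors the kernel, and pushes white noise through the factor. A self-contained sketch of that sampling step on 1-D inputs:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D inputs standing in for the scaled (direction, time) coordinates
x = np.linspace(0.0, 5.0, 50)[:, None]

# Squared-exponential kernel, as in make_example_datapack (tec_scale = 0.02)
scale = 0.02
sq_dist = np.sum((x[:, :, None] - x.T[None, :, :]) ** 2, axis=1)
K = scale**2 * np.exp(-0.5 * sq_dist)

# Jitter for numerical stability, then the Cholesky factor
L = np.linalg.cholesky(K + 1e-6 * np.eye(K.shape[0]))

# z ~ N(0, I) mapped through L has covariance L @ L.T = K + jitter
z = rng.normal(size=(K.shape[0], 1))
sample = L @ z
print(sample.shape)  # (50, 1)
```

The jitter term (`1e-6 * I`) keeps the factorization stable when nearby inputs make the kernel nearly singular, at the cost of a tiny amount of extra white noise in the draw.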
notebooks/devel/simulate_datapack.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Report Title # ### Team Name # ## Abstract # Short summary about the data and what it tells you about your project. # ## Data input # In this section, include code that reads in the CSV file. # ## Data Cleaning # In this section, provide code for converting the raw data into clean and usable data structures (if needed). # ## Data Modeling # # This section builds a model. This doesn't have to be all that advanced. You should probably start with some simple statistical models such as an average distance matrix. # ## Data Visualization # # In this section, make some graphs visualizing your results. A distance matrix and/or network graph may be cool. Think through the best way to show what you learned. # ## Conclusion # # This should be similar to the abstract but with more details. What can you conclude about your project from this data?
Example_Report_Template.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="T1PdEo5I_ZFw" # --- # # *This exercise is not mandatory.* # # To help familiarize yourself with the programming environment, small exercise # notebooks will be provided for each session. # # Each of the notebooks will contain # - code examples # - comments and background information # - exercises for you to solve # # Submit your solutions through the online form provided for each exercise until 23:55 of the day before the upcoming session. # # Solutions to the exercises will be presented at the beginning of the session # and made available online. # # --- # # + [markdown] colab_type="text" id="lPu3aIPaDFlA" # # Exercise: Getting Started with `pandas` and the Advertising Data Set # # The [pandas](https://pandas.pydata.org/) library provides # a rich set of data structures and tools for data analysis purposes. # # In this exercise, we will get to know pandas by exploring the advertising data set. # # # ## Getting Ready # # Go through the [10 minutes to pandas](https://pandas.pydata.org/pandas-docs/stable/getting_started/10min.html) tutorial to get an overview. # If you are not yet familiar with the notebook environment, revisit the official # [Welcome To Colaboratory](https://colab.research.google.com/notebooks/welcome.ipynb) introduction. # # # ## Background Information # # The data set consists of the `sales` of a product in different markets, # along with the advertising budget for the product in each of those markets for # three different media: `TV`, `radio`, `newspaper`. # + [markdown] colab_type="text" id="YNdpS3fREHIW" # Use pandas (and Python) to answer the questions below. # The code snippets already contained in the notebook will provide you with hints.
# # - **For each question, give the answer by adding it to this cell.** # - **Submit your answers through this [form](https://forms.gle/9zmqBTqiEt8UkfMw6).** # # # ## Questions # 1. How many records does the data show? # 1. What is the maximum `TV` spending? # 1. What `sales` correspond to the maximum `radio` spending? # 1. Which budget shows the smallest spread? # 1. How would you create a [scatterplot](https://en.wikipedia.org/wiki/Scatter_plot) depicting how `sales` depend on the `radio` budget? # 1. What does computing [`df.corr()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.corr.html) tell you about our data? Interpret # the [correlation coefficients](https://en.wikipedia.org/wiki/Pearson_correlation_coefficient). # # ## Answers # 1. TBA # 1. TBA # 1. TBA # 1. TBA # 1. TBA # 1. TBA # + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="kFaVGpM2_Q_j" outputId="8684198b-7e8b-409a-fe28-66ca2fa90ad9" # to use pandas, we first need to import it import pandas as pd # we can read in the data (a csv file) using the corresponding function as follows df = pd.read_csv('http://faculty.marshall.usc.edu/gareth-james/ISL/Advertising.csv', index_col=0) df # + colab={"base_uri": "https://localhost:8080/", "height": 204} colab_type="code" id="0i-T5A8ZHB-n" outputId="052c85e7-531e-4c3d-97fe-7a6f7e721f1c" # to only display the first few rows of a DataFrame df.head() # + colab={"base_uri": "https://localhost:8080/", "height": 297} colab_type="code" id="E7oV61m4Ho9G" outputId="f07b906a-59c7-4360-a777-9bf0f32418d2" # to get a quick number summary df.describe() # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="mPShtpPwHtKM" outputId="ac7db9b1-3109-430c-d8c0-adae5bc48d2d" # to find, e.g.,
the minimum value in TV df['TV'].min() # + colab={"base_uri": "https://localhost:8080/", "height": 80} colab_type="code" id="toTr90YoH8uj" outputId="6ca26593-d833-49b7-8841-051fbfc32173" # to see the entire record with the smallest TV spending df[df['TV'] == df['TV'].min()] # + colab={"base_uri": "https://localhost:8080/", "height": 286} colab_type="code" id="DjccT6diJKxl" outputId="ec349b55-9de3-4ac5-b29c-35e89d54b5cf" # while df.describe() is useful, at times a picture is easier to grasp import seaborn as sns # https://seaborn.pydata.org/index.html is a very useful library for statistical data visualization sns.boxplot(data=df[['TV', 'radio', 'newspaper']]) # we see that by far the most money is spent on TV # + colab={"base_uri": "https://localhost:8080/", "height": 300} colab_type="code" id="86QFsPtdILvp" outputId="aa187c34-6760-452d-a901-6636ccc8026e" # make a scatterplot to visualize the relationship between variables df.plot.scatter(x='TV', y='sales') # it seems the more we spend on TV, the more we sell # + colab={"base_uri": "https://localhost:8080/", "height": 173} colab_type="code" id="V-IBUziVKPwn" outputId="3101e787-3faa-4d8b-e0bd-de3ac27f9173" df.corr()
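For the `df.corr()` question, it can help to verify one Pearson coefficient from its definition on synthetic data (the tiny DataFrame below is made up and unrelated to the advertising data set):

```python
import numpy as np
import pandas as pd

# y is an exact linear function of x, so the correlation should be 1
df_demo = pd.DataFrame({"x": [1.0, 2.0, 3.0, 4.0],
                        "y": [2.0, 4.0, 6.0, 8.0]})

# Pearson r from the definition: cov(x, y) / (std(x) * std(y)),
# written with centred columns
x = df_demo["x"] - df_demo["x"].mean()
y = df_demo["y"] - df_demo["y"].mean()
r = (x * y).sum() / np.sqrt((x**2).sum() * (y**2).sum())

print(r)  # 1.0
print(df_demo.corr().loc["x", "y"])  # matches r, up to floating-point error
```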
0_pre_class/exercise.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="ZFIqwYGbZ-df" # # <NAME> # # # + id="S_jXCnwZ2QYW" outputId="66a09d4a-2b49-429d-eaf1-3c1eaec14a63" colab={"base_uri": "https://localhost:8080/"} USE_PRIVATE_DISTRO = True DRIVE_BASE_DIR = '/content/drive/MyDrive/SMC 10/DDSP-10/' DRIVE_DISTRO = DRIVE_BASE_DIR + 'dist/ddsp-1.2.0.tar.gz' if USE_PRIVATE_DISTRO: print("[INFO] Using private distro.") from google.colab import drive drive.mount('/content/drive') # !pip install -qU "$DRIVE_DISTRO" else: # !pip install -qU ddsp import warnings import gin import tensorflow as tf # %reload_ext tensorboard import tensorboard as tb import numpy as np import seaborn as sns import matplotlib.pyplot as plt # %config InlineBackend.figure_format='retina' from ddsp.colab.colab_utils import specplot from ddsp.colab.colab_utils import play from ddsp.training import data from ddsp.training import models from ddsp import core # + [markdown] id="zpetvejYO0KQ" # #### Configuration # + id="mkFYv_DUZ7lW" SAMPLE_RATE = 48000 DURATION = 4 FRAME_RATE = 250 TIME_STEPS = FRAME_RATE * DURATION N_SAMPLES = SAMPLE_RATE * DURATION MOD_FREQ = 50 INSTRUMENT = 'sr{}k_mf{}'.format(SAMPLE_RATE//1000, MOD_FREQ) sns.set(style="whitegrid") warnings.filterwarnings("ignore") OUTPUT_FOLDER = 'am_timbre_transfer' #@param {type: "string"} DRIVE_CHECKPOINTS_DIR = DRIVE_BASE_DIR + 'audio/' + OUTPUT_FOLDER + '/' + \ INSTRUMENT + '_checkpoints/' DRIVE_TFRECORD_PATTERN = DRIVE_BASE_DIR + 'audio/' + OUTPUT_FOLDER + '/' + \ INSTRUMENT + '_dataset/train.synthrecord*' # !mkdir -p "$DRIVE_CHECKPOINTS_DIR" # + [markdown] id="Op0V8onI0VUK" # #### Start Tensorboard # + id="hBvbrMQvGzK9" tb.notebook.start('--logdir "{}"'.format(DRIVE_CHECKPOINTS_DIR)) # + [markdown] id="Q9D9ozX6PAXB" # #### Train the model # + id="VDGtUMk3PGGy" # !ddsp_run \ # --mode=train \ # 
--alsologtostderr \ # --save_dir="$DRIVE_CHECKPOINTS_DIR" \ # --gin_file=models/am_simple.gin \ # --gin_file=datasets/synthrecord.gin \ # --gin_param="SynthRecordProvider.file_pattern='$DRIVE_TFRECORD_PATTERN'" \ # --gin_param="SynthRecordProvider.sample_rate=$SAMPLE_RATE" \ # --gin_param="SynthRecordProvider.frame_rate=$FRAME_RATE" \ # --gin_param="SynthRecordProvider.example_secs=$DURATION" \ # --gin_param="F0MIDILoudnessPreprocessor.time_steps=$TIME_STEPS" \ # --gin_param="FilteredNoise.n_samples=$N_SAMPLES" \ # --gin_param="AmplitudeModulation.n_samples=$N_SAMPLES" \ # --gin_param="AmplitudeModulation.sample_rate=$SAMPLE_RATE" \ # --gin_param="train_util.train.batch_size=7" \ # --gin_param="train_util.train.num_steps=20000" \ # --gin_param="train_util.train.steps_per_save=100" \ # --gin_param="train_util.train.steps_per_summary=50" \ # --gin_param="Trainer.checkpoints_to_keep=2" \ # --early_stop_loss_value=5.3 \ # # --gin_param="Trainer.learning_rate=0.0001" \ # + [markdown] id="Ep_TMUitRz6y" # #### Load pretrained model # + id="qaS1PmlqR3JB" data_provider_eval = data.SynthRecordProvider(DRIVE_TFRECORD_PATTERN, sample_rate=SAMPLE_RATE, frame_rate=FRAME_RATE, example_secs=DURATION) dataset_eval = data_provider_eval.get_batch(batch_size=1, shuffle=True, repeats=-1) dataset_eval_iter = iter(dataset_eval) gin_file = DRIVE_CHECKPOINTS_DIR + 'operative_config-0.gin' gin.parse_config_file(gin_file) model = models.Autoencoder() model.restore(DRIVE_CHECKPOINTS_DIR) for f in range(12): frame = next(dataset_eval_iter) # + id="jJNBloZ84hcW" outputId="ada02a29-8866-49bb-9a9e-a27ba9a7dac8" colab={"base_uri": "https://localhost:8080/", "height": 1000} # frame = next(dataset_eval_iter) audio_baseline = frame['audio'] x = np.linspace(0,1,FRAME_RATE) controls = model(frame, training=False) audio_full = model.get_audio_from_outputs(controls) audio_full /= tf.reduce_max(audio_full[0,:], axis=0).numpy()*1.5 print('Original Audio') play(audio_baseline, sample_rate=SAMPLE_RATE) 
print('Full reconstruction') play(audio_full, sample_rate=SAMPLE_RATE) for synth in ['harmonic', 'am', 'filtered_noise']: if synth in controls: print('Only ' + synth) play(controls[synth]['signal'], sample_rate=SAMPLE_RATE) specplot(audio_baseline) specplot(audio_full) get = lambda key: core.nested_lookup(key, controls)[0] #batch 0 amps = get('am/controls/amps') mod_amps = get('am/controls/mod_amps') f0 = get('am/controls/f0_hz') mod_f0 = get('am/controls/mod_f0_hz') f, ax = plt.subplots(3, 1, figsize=(15, 8), sharex=True) ax[0].title.set_text('Controls') # f.suptitle('Inferred controls', fontsize=14) ax[0].plot(x, amps[:FRAME_RATE], linewidth=1.5) ax[0].plot(x, mod_amps[:FRAME_RATE], linewidth=1.5) ax[0].set_ylabel('Amplitude') ax[0].legend(['Carrier', 'Modulator']) ax[1].plot(x, f0[:FRAME_RATE], linewidth=1.5) ax[1].plot(x, mod_f0[:FRAME_RATE], linewidth=1.5) # ax[1].plot(get('inputs/f0_hz_midi'), linestyle='--') ax[1].set_ylabel('Freq (Hz)') ax[1].legend(['Carrier', 'Modulator']) ax[2].title.set_text('Audio') ax[2].plot(np.linspace(0,1,SAMPLE_RATE), audio_full.numpy()[0,0:SAMPLE_RATE], linewidth=1.5) ax[2].set_ylabel('Amplitude') ax[2].set_xlabel('Seconds') plt.tight_layout() # + id="7q-rQHL1PAuS" raise SystemExit("Stop right there!")
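The carrier/modulator controls plotted above (`amps`, `mod_amps`, `f0`, `mod_f0`) can be illustrated with a toy NumPy synth. This is a minimal sketch of classic amplitude modulation, not DDSP's actual `AmplitudeModulation` processor (which uses learned, time-varying controls); the `am_synth` helper and the constant control values are illustrative only.

```python
import numpy as np

def am_synth(amps, f0_hz, mod_amps, mod_f0_hz, sample_rate=48000, n_samples=48000):
    """Toy AM synth: a sine carrier scaled by a (1 + m*sin) modulator."""
    t = np.arange(n_samples) / sample_rate
    carrier = amps * np.sin(2 * np.pi * f0_hz * t)
    modulator = 1.0 + mod_amps * np.sin(2 * np.pi * mod_f0_hz * t)
    return carrier * modulator

# constant controls standing in for the inferred, time-varying ones above
audio = am_synth(amps=0.5, f0_hz=440.0, mod_amps=0.5, mod_f0_hz=50.0,
                 sample_rate=48000, n_samples=4800)
```

The peak amplitude is bounded by `amps * (1 + mod_amps)`, which is why the notebook normalizes the model output before playback.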
ddsp/colab/fm/40_ttransfer_AM.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Based on Water Quality Data from Hope. # From Environment Canada via <NAME> and <NAME> from datetime import datetime, timedelta import matplotlib.pyplot as plt import numpy as np import pandas as pd import scipy.io # %matplotlib inline mat = scipy.io.loadmat('fraser_waterquality.mat') mtime = mat['fraserqual'][0][0][1] ptime = [] for i in range(mtime.shape[0]): ptime.append(datetime.fromordinal(int(mtime[i][0])) + timedelta(days=mtime[i][0]%1) - timedelta(days = 366)) dissolved_NO2 = mat['fraserqual'][0][0][2] dissolved_NO23 = mat['fraserqual'][0][0][3][:,0] dissolved_Si = mat['fraserqual'][0][0][13][:,0] print (dissolved_NO23.shape) fig, ax = plt.subplots(2,2,figsize=(10,10)) ax[0,0].plot(ptime, dissolved_NO23, 'o') ax[0,1].plot(ptime, dissolved_NO23, 'o') ax[0,1].set_ylim((0, 2)) ax[1,0].plot(ptime, dissolved_Si, 'o') ax[1,1].plot(ptime, dissolved_Si, 'o') ax[1,1].set_ylim((0, 2)) # Put data in a pandas dataframe but NO23: remove the four high outliers and five low outliers. # Si: remove the single high and five low outliers. 
#

# +
df = pd.DataFrame({'dissolved_NO23': dissolved_NO23}, index=ptime)
df = df[df.dissolved_NO23 < 15]
df = df[df.dissolved_NO23 > 1]
df['mon'] = df.index.month
monmean = df.groupby('mon').agg(['mean', 'count', 'std', 'sem'])
monthsy1 = range(1, 13)
monthsy2 = range(13, 25)
plt.plot(monthsy1, monmean['dissolved_NO23']['mean'], 'b')
plt.plot(monthsy2, monmean['dissolved_NO23']['mean'], 'b')
plt.plot(monthsy1, monmean['dissolved_NO23']['mean'] + monmean['dissolved_NO23']['sem'], 'g')
plt.plot(monthsy2, monmean['dissolved_NO23']['mean'] + monmean['dissolved_NO23']['sem'], 'g')
plt.plot(monthsy1, monmean['dissolved_NO23']['mean'] - monmean['dissolved_NO23']['sem'], 'g')
plt.plot(monthsy2, monmean['dissolved_NO23']['mean'] - monmean['dissolved_NO23']['sem'], 'g')
plt.xlabel('Months (2 years)')
plt.ylabel('Nitrate + Nitrite (uM)')
plt.title('Seasonal Cycle of Nitrate at Hope')
monmean

# +
df = pd.DataFrame({'dissolved_Si': dissolved_Si}, index=ptime)
df = df[df.dissolved_Si < 100]
df = df[df.dissolved_Si > 1]
df['mon'] = df.index.month
monmean = df.groupby('mon').agg(['mean', 'count', 'std', 'sem'])
monthsy1 = range(1, 13)
monthsy2 = range(13, 25)
plt.plot(monthsy1, monmean['dissolved_Si']['mean'], 'b')
plt.plot(monthsy2, monmean['dissolved_Si']['mean'], 'b')
plt.plot(monthsy1, monmean['dissolved_Si']['mean'] + monmean['dissolved_Si']['sem'], 'g')
plt.plot(monthsy2, monmean['dissolved_Si']['mean'] + monmean['dissolved_Si']['sem'], 'g')
plt.plot(monthsy1, monmean['dissolved_Si']['mean'] - monmean['dissolved_Si']['sem'], 'g')
plt.plot(monthsy2, monmean['dissolved_Si']['mean'] - monmean['dissolved_Si']['sem'], 'g')
plt.xlabel('Months (2 years)')
plt.ylabel('Silicon (uM)')
plt.title('Seasonal Cycle of Silicon')
monmean
# -
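The monthly mean ± SEM climatology computed above can be exercised end-to-end on synthetic data. Variable and column names mirror the notebook's, but the seasonal signal below is fabricated for illustration; note that modern pandas groups directly on `df.index.month` (`pd.TimeGrouper` no longer exists).

```python
import numpy as np
import pandas as pd

# fake weekly samples with an annual cycle, standing in for the Hope record
idx = pd.date_range('2000-01-01', '2004-12-31', freq='7D')
no23 = 8 + 5 * np.sin(2 * np.pi * idx.dayofyear / 365.25)
df = pd.DataFrame({'dissolved_NO23': no23}, index=idx)
df = df[(df.dissolved_NO23 > 1) & (df.dissolved_NO23 < 15)]  # outlier screen

df['mon'] = df.index.month
monmean = df.groupby('mon')['dissolved_NO23'].agg(['mean', 'count', 'std', 'sem'])
```

`sem` is the standard error of the mean, so the green curves in the plots above are mean ± one standard error per calendar month.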
I_ForcingFiles/Rivers/FraserRiverNutrients.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# + [markdown] id="QMjf_Gq4xbuO"
# ## Instructions
#
# 1. Run the following blocks of code.
# 2. Restart after installing requirements.
# 3. Download the weights file from the link 'https://drive.google.com/file/d/1lLiJgaTa_LAW7iezngl6IWvMaKAC7e5x/view?usp=sharing' and paste it in ./data/yolov4.weights
# 4. Run the code blocks below and outputs will be available in the folder ./outputs

# + id="KPCW3Q2Smrqf" colab={"base_uri": "https://localhost:8080/"} outputId="63fd4f31-4980-4b2f-c5f1-eb603344a1dd"
# !git clone https://github.com/VishnuK11/People-Counting

# + id="ZAX07AbZCqbL" colab={"base_uri": "https://localhost:8080/"} outputId="dbf5302b-9c41-4aa3-858b-f2d3d4ad873d"
# %cd People-Counting

# + id="6gwgFM1BCtk_"
# !pip install -r requirements-gpu.txt

# + [markdown] id="VAPNQhZqSQl7"
# * Restart the runtime after updating the installations

# + [markdown] id="KAjUMZpavw0G"
# * Download weights from the link provided

# + colab={"base_uri": "https://localhost:8080/"} id="WtrpxiZXSxJt" outputId="23c7b5b0-0655-4d68-bcc9-4b8066b60ec7"
# # %cd ..
# from google.colab import drive # drive.mount("/content/drive") # + colab={"base_uri": "https://localhost:8080/"} id="0-DNVBedT8oj" outputId="8a732dad-fdea-4c92-c29b-14df41e4f940" # # %cd ./People-Counting/ # # !cp ../drive/MyDrive/TangoEye_submission/yolov4.weights ./data # + id="Sq_p34Y1DA4-" colab={"base_uri": "https://localhost:8080/"} outputId="9636e28d-8f80-4ef5-f63d-d56bc1b2e2d7" # !python save_model.py --model yolov4 # + colab={"base_uri": "https://localhost:8080/"} id="736saXjEXTII" outputId="16853269-6141-4ea7-f5ba-13fa8632fe3f" # !python object_tracker.py --video ./data/video/test.mp4 --output ./outputs/output_0.avi --model yolov4 --dont_show True # + id="yMwyIpeb4nQW" colab={"base_uri": "https://localhost:8080/"} outputId="afe5249b-f4aa-46a7-cc6d-80011b0103b2" # !python object_tracker.py --video ./data/video/VideoFeed.mp4 --output ./outputs/output_1.avi --model yolov4 --dont_show True # + colab={"base_uri": "https://localhost:8080/"} id="PgiZU3qhVgsq" outputId="95aee7a7-6bf4-4933-b231-de4ab8f4ced0" # !python object_tracker.py --video ./data/video/VideoFeed2.mp4 --output ./outputs/output_2.avi --model yolov4 --dont_show True # + colab={"base_uri": "https://localhost:8080/"} id="zrPPfqq-VhQI" outputId="18fe3a20-21d0-4a45-c5e5-a53d78b3001c" # !python object_tracker.py --video ./data/video/VideoFeed3.mp4 --output ./outputs/output_3.avi --model yolov4 --dont_show True # + colab={"base_uri": "https://localhost:8080/"} id="mEqcH01tbGMy" outputId="c3691469-569e-4aad-8b46-bbfbc2a0cf0b" # !dir ./data/video # + colab={"base_uri": "https://localhost:8080/", "height": 17} id="n8OoHAm-UgFB" outputId="2920b15f-487c-4ba4-8a8d-39c72aa7ae65" from google.colab import files files.download('./outputs/output_0.avi') # files.download('./outputs/output_1.avi') # files.download('./outputs/output_2.avi') # files.download('./outputs/output_3.avi') # + colab={"base_uri": "https://localhost:8080/"} id="CwR3nqevXkE7" outputId="90018299-cace-4584-eef1-573a3afb1525" # !git pull 
'https://github.com/VishnuK11/People-Counting' # + id="ifoeHxxJXm37"
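The four `object_tracker.py` cells above differ only in their input and output paths, so a loop can build the same invocations. The file names and flags are copied from the cells above; the actual `subprocess.run` call is left commented out since it needs the cloned repo and the weights file.

```python
import subprocess  # uncomment the run() loop below to actually execute

def tracker_command(video, index, model='yolov4'):
    """Build one object_tracker.py invocation, mirroring the cells above."""
    return ['python', 'object_tracker.py',
            '--video', f'./data/video/{video}',
            '--output', f'./outputs/output_{index}.avi',
            '--model', model,
            '--dont_show', 'True']

videos = ['test.mp4', 'VideoFeed.mp4', 'VideoFeed2.mp4', 'VideoFeed3.mp4']
commands = [tracker_command(v, i) for i, v in enumerate(videos)]

# for cmd in commands:
#     subprocess.run(cmd, check=True)
```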
tangoeye_run.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/simecek/PseudoDNA_Generator/blob/master/data/Random_5UTR_Seqs.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="yKEF8W8FHnfh" colab_type="text" # # Generate Random 5'UTR Sequences # # + [markdown] id="tMfWhUbtHsJ9" colab_type="text" # ## Setup # # Installation for colab environment. # + id="AM0BwFWvHwhC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="9c061b80-8e61-4ba3-b1b7-6b3f8d1c89d5" # !pip install biopython pyensembl # + id="LlD3-EB2ICp9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 128} outputId="6db348e5-d50e-42ab-9325-da76efa55a31" from google.colab import drive drive.mount('/content/drive') # + id="4DCA8e0mIQns" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="12ddf4af-5505-4edf-9dcc-2b47caf38d70" # !pyensembl install --release 97 --species human # + id="xMxJPPWxHnfj" colab_type="code" colab={} import pandas as pd import numpy as np import gzip from tqdm.notebook import tqdm from Bio import SeqIO # for reading fasta files from pyensembl import EnsemblRelease # to get the gene list ENSEMBL_RELEASE = 97 DNA_TOPLEVEL_FASTA_PATH = "/content/drive/My Drive/data/ensembl/Homo_sapiens.GRCh38.dna.toplevel.fa.gz" # to generate random sequences N = 25_000 # how many K = 200 # how long OUTPUT_FILE = '/content/drive/My Drive/data/random/random_5utr_25k.csv' # where to save them CHRS = [str(chr) for chr in range(1,23)] + ['X', 'Y', 'MT'] # + [markdown] id="eLtdacLDHnfx" colab_type="text" # ## Get transcript list # + id="iRzWAYenHnfy" colab_type="code" 
colab={} # release 97 uses human reference genome GRCh38 data = EnsemblRelease(ENSEMBL_RELEASE) # + id="EVJQO00LHnf0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="41a94fd8-bb3d-487f-a043-061dd6b634c2" human_transcripts = data.transcript_ids() len(human_transcripts) # + id="i1dIXhzvfv6u" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 74} outputId="34e2066d-77b2-45b7-d331-cc46a230facd" human_transcripts[0], data.transcript_by_id(human_transcripts[0]) # + id="8r9jj_B-Hnf3" colab_type="code" colab={} transcripts_full_info = [data.transcript_by_id(transcript) for transcript in human_transcripts] # + id="97lrmGvNHnf6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 198} outputId="f2efabfb-c0d4-4503-ce2a-70a76a3ecc50" human_transcript_tuples = [(x.transcript_id, x.gene_id, x.biotype, x.contig, x.start, x.end, x.strand, x.five_prime_utr_sequence) for x in transcripts_full_info if x.contains_start_codon & x.contains_stop_codon] human_transcript_table = pd.DataFrame.from_records(human_transcript_tuples, columns=["id", "gene_id", "biotype", "chr", "start", "end", "strand", "five_prime_utr_sequence"]) assert all(human_transcript_table.start <= human_transcript_table.end) human_transcript_table.head() # + id="IsJXGYXY2Rsf" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="e424b5b1-7f62-420b-978a-7757edab7ea3" assert ~human_transcript_table.five_prime_utr_sequence.str.contains('N').any() human_transcript_table['length'] = human_transcript_table.five_prime_utr_sequence.apply(len) selected_regions = human_transcript_table[human_transcript_table.length > K].copy() human_transcript_table.shape, selected_regions.shape # + id="P2X_MtbZHnf_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 327} outputId="4d6f3ab9-0364-4bc1-c314-7574f2e4c6ad" selected_regions['random_start'] = [np.random.randint(c_len - K) for c_len in 
selected_regions.length]
selected_regions['random_end'] = selected_regions['random_start'] + K - 1
# slice each 5'UTR at its random start; use K rather than a hard-coded 200,
# and build the column in one assignment (avoids chained-assignment warnings)
selected_regions['seq'] = [s[a:a + K] for s, a in
                           zip(selected_regions['five_prime_utr_sequence'],
                               selected_regions['random_start'])]
selected_regions.head()

# + [markdown] id="zQ4J1qGeHngN" colab_type="text"
# ## Random transcript selection

# + id="M_-EBl92HngN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="3ef992e2-08b8-42db-daaf-8ec4c997feba"
sample_regions = selected_regions.sample(N)
sample_regions.shape

# + id="Ia_xkzHqHngX" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 198} outputId="f90cd4c1-c013-4987-bf5b-48855fa52dae"
seqs = sample_regions[['id', 'chr', 'random_start', 'random_end', 'seq']].copy().reset_index(drop=True)
seqs.head()

# + id="d9-E5cG9Hngf" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 74} outputId="76f0cae7-1073-4e3c-a1a8-1b155471283c"
len(seqs.seq.values[0]), seqs.seq.values[0]

# + [markdown] id="pLGZ-XrKHngk" colab_type="text"
# ## Save generated sequences to file

# + id="Sfj_KTv1Hngk" colab_type="code" colab={}
seqs.to_csv(OUTPUT_FILE, index=False)
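The random-window step can be isolated into a small self-contained helper, which makes the per-row bound on the start position easy to test. The function name `sample_windows` and the toy sequences in the call are mine, not part of the notebook.

```python
import numpy as np
import pandas as pd

def sample_windows(utr_seqs, k, seed=None):
    """Pick one random k-length window from each sequence longer than k."""
    rng = np.random.default_rng(seed)
    df = pd.DataFrame({'five_prime_utr_sequence': utr_seqs})
    df['length'] = df.five_prime_utr_sequence.str.len()
    df = df[df.length > k].copy()
    # one start per row, each with its own upper bound length - k
    starts = rng.integers(0, (df.length - k).to_numpy())
    df['random_start'] = starts
    df['random_end'] = df.random_start + k - 1
    df['seq'] = [s[a:a + k] for s, a in zip(df.five_prime_utr_sequence, starts)]
    return df

windows = sample_windows(['A' * 50, 'C' * 10, 'G' * 30], k=20, seed=0)
```

Sequences shorter than `k` are dropped, matching the `length > K` filter above.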
data/Random_5UTR_Seqs.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# + [markdown] id="1SenNHqGgmUg"
# # Task
# Perform all preprocessing tasks on any other text dataset. Here I have taken the movie review dataset.

# + id="o2K9s47zgaou"
import nltk
from nltk.corpus import movie_reviews as mr
import matplotlib.pyplot as plt
import random
import os
from collections import defaultdict

# + colab={"base_uri": "https://localhost:8080/"} id="SgFcwThphaHC" outputId="e7421677-f9c7-4a3e-9b89-e2aeec77c1fe"
nltk.download('movie_reviews')

# + colab={"base_uri": "https://localhost:8080/"} id="dw5flCOYhk55" outputId="e5ec9a6b-d495-4fee-89f8-8026ff80dcb0"
documents = defaultdict(list)
for i in mr.fileids():
    documents[i.split('/')[0]].append(i)

print(documents['pos'][:10])
pos_review = documents['pos']
print(documents['neg'][:10])
neg_review = documents['neg']

# + colab={"base_uri": "https://localhost:8080/", "height": 303} id="cn9qptQ_mnr4" outputId="4d930b1c-ec01-4008-d250-58b088025006"
fig = plt.figure(figsize=(5, 5))
labels = 'Bahubali', 'King`s Man', 'Money Heist'
size = [2, 4, 5]  # pie shares must be numeric, not strings
plt.pie(size, labels=labels, autopct='%.2f%%', startangle=90)
plt.axis('equal')
plt.show()

# + colab={"base_uri": "https://localhost:8080/"} id="-abNjdopoCca" outputId="28f1fbb5-415c-4cec-d7fa-3422544e5739"
print('Number of positive reviews: ', len(pos_review))
print('Number of negative reviews: ', len(neg_review))
print('\nThe type of all_positive_review is: ', type(pos_review))
print('The type of a review entry is: ', type(neg_review[0]))

# + colab={"base_uri": "https://localhost:8080/", "height": 303} id="WYz5Sv9aoQt-" outputId="ce4dff11-bf45-4ee1-cee2-e40151bf6470"
fig = plt.figure(figsize=(5, 5))
labels = 'Positives', 'Negative'
sizes = [len(pos_review), len(neg_review)]
plt.pie(sizes, labels=labels, autopct='%1.1f%%', startangle=90)
# plt.pie(sizes,
labels=labels, autopct='%1.1f%%',
#         shadow=True, startangle=90)
plt.axis('equal')
plt.show()

# + colab={"base_uri": "https://localhost:8080/"} id="vU5D-V-nn6pN" outputId="b93b2e43-400b-4dd7-f981-5b1d5b07313b"
random_positive = mr.words(pos_review[random.randint(0, 10)])
print("Positive review ")
print(random_positive)

random_negative = mr.words(neg_review[random.randint(0, 10)])
print("Negative review ")
print(random_negative)

# + id="yMbJn_7yo_FJ"
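The fileid grouping at the top of the notebook is plain string bookkeeping, so it can be tested without downloading the corpus. Ids of the form `label/name` are assumed, matching what `movie_reviews.fileids()` returns; the helper name and sample ids below are mine.

```python
from collections import defaultdict

def group_by_label(fileids):
    """Group corpus file ids of the form 'label/name' by their label prefix."""
    documents = defaultdict(list)
    for fid in fileids:
        documents[fid.split('/')[0]].append(fid)
    return documents

docs = group_by_label(['pos/cv000.txt', 'neg/cv001.txt', 'pos/cv002.txt'])
```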
LAB 1/Lab_1_2.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Sympy - Symbolic algebra in Python # <NAME> (jrjohansson at gmail.com) # # The latest version of this [IPython notebook](http://ipython.org/notebook.html) lecture is available at [http://github.com/jrjohansson/scientific-python-lectures](http://github.com/jrjohansson/scientific-python-lectures). # # The other notebooks in this lecture series are indexed at [http://jrjohansson.github.io](http://jrjohansson.github.io). # + jupyter={"outputs_hidden": false} # %matplotlib inline import matplotlib.pyplot as plt # - # ## Introduction # There are two notable Computer Algebra Systems (CAS) for Python: # # * [SymPy](http://sympy.org/en/index.html) - A python module that can be used in any Python program, or in an IPython session, that provides powerful CAS features. # * [Sage](http://www.sagemath.org/) - Sage is a full-featured and very powerful CAS enviroment that aims to provide an open source system that competes with Mathematica and Maple. Sage is not a regular Python module, but rather a CAS environment that uses Python as its programming language. # # Sage is in some aspects more powerful than SymPy, but both offer very comprehensive CAS functionality. The advantage of SymPy is that it is a regular Python module and integrates well with the IPython notebook. # # In this lecture we will therefore look at how to use SymPy with IPython notebooks. If you are interested in an open source CAS environment I also recommend to read more about Sage. 
# # To get started using SymPy in a Python program or notebook, import the module `sympy`: # + jupyter={"outputs_hidden": false} from sympy import * # - # To get nice-looking $\LaTeX$ formatted output run: # + jupyter={"outputs_hidden": false} init_printing() # or with older versions of sympy/ipython, load the IPython extension # #%load_ext sympy.interactive.ipythonprinting # or # #%load_ext sympyprinting # - # ## Symbolic variables # In SymPy we need to create symbols for the variables we want to work with. We can create a new symbol using the `Symbol` class: # + jupyter={"outputs_hidden": false} x = Symbol('x') # + jupyter={"outputs_hidden": false} (pi + x)**2 # + jupyter={"outputs_hidden": false} # alternative way of defining symbols a, b, c = symbols("a, b, c") # + jupyter={"outputs_hidden": false} type(a) # - # We can add assumptions to symbols when we create them: # + jupyter={"outputs_hidden": false} x = Symbol('x', real=True) # + jupyter={"outputs_hidden": false} x.is_imaginary # + jupyter={"outputs_hidden": false} x = Symbol('x', positive=True) # + jupyter={"outputs_hidden": false} x > 0 # - # ### Complex numbers # The imaginary unit is denoted `I` in Sympy. # + jupyter={"outputs_hidden": false} 1+1*I # + jupyter={"outputs_hidden": false} I**2 # + jupyter={"outputs_hidden": false} (x * I + 1)**2 # - # ### Rational numbers # There are three different numerical types in SymPy: `Real`, `Rational`, `Integer`: # + jupyter={"outputs_hidden": false} r1 = Rational(4,5) r2 = Rational(5,4) # + jupyter={"outputs_hidden": false} r1 # + jupyter={"outputs_hidden": false} r1+r2 # + jupyter={"outputs_hidden": false} r1/r2 # - # ## Numerical evaluation # SymPy uses a library for artitrary precision as numerical backend, and has predefined SymPy expressions for a number of mathematical constants, such as: `pi`, `e`, `oo` for infinity. # # To evaluate an expression numerically we can use the `evalf` function (or `N`). 
It takes an argument `n` which specifies the number of significant digits.

# + jupyter={"outputs_hidden": false}
pi.evalf(n=50)

# + jupyter={"outputs_hidden": false}
y = (x + pi)**2

# + jupyter={"outputs_hidden": false}
N(y, 5)  # same as evalf
# -

# When we numerically evaluate algebraic expressions we often want to substitute a symbol with a numerical value. In SymPy we do that using the `subs` function:

# + jupyter={"outputs_hidden": false}
y.subs(x, 1.5)

# + jupyter={"outputs_hidden": false}
N(y.subs(x, 1.5))
# -

# The `subs` function can of course also be used to substitute Symbols and expressions:

# + jupyter={"outputs_hidden": false}
y.subs(x, a+pi)
# -

# We can also combine numerical evaluation of expressions with NumPy arrays:

# + jupyter={"outputs_hidden": false}
import numpy

# + jupyter={"outputs_hidden": false}
x_vec = numpy.arange(0, 10, 0.1)

# + jupyter={"outputs_hidden": false}
y_vec = numpy.array([N(((x + pi)**2).subs(x, xx)) for xx in x_vec])

# + jupyter={"outputs_hidden": false}
fig, ax = plt.subplots()
ax.plot(x_vec, y_vec);
# -

# However, this kind of numerical evaluation can be very slow, and there is a much more efficient way to do it: use the function `lambdify` to "compile" a SymPy expression into a function that is much more efficient to evaluate numerically:

# + jupyter={"outputs_hidden": false}
f = lambdify([x], (x + pi)**2, 'numpy')  # the first argument is a list of variables that
                                         # f will be a function of: in this case only x -> f(x)

# + jupyter={"outputs_hidden": false}
y_vec = f(x_vec)  # now we can directly pass a numpy array and f(x) is efficiently evaluated
# -

# The speedup when using "lambdified" functions instead of direct numerical evaluation can be significant, often several orders of magnitude.
Even in this simple example we get a significant speed up:

# + jupyter={"outputs_hidden": false}
# %%timeit

y_vec = numpy.array([N(((x + pi)**2).subs(x, xx)) for xx in x_vec])

# + jupyter={"outputs_hidden": false}
# %%timeit

y_vec = f(x_vec)
# -

# ## Algebraic manipulations

# One of the main uses of a CAS is to perform algebraic manipulations of expressions. For example, we might want to expand a product, factor an expression, or simplify an expression. The functions for doing these basic operations in SymPy are demonstrated in this section.

# ### Expand and factor

# The first steps in an algebraic manipulation are expanding products and factoring expressions:

# + jupyter={"outputs_hidden": false}
(x+1)*(x+2)*(x+3)

# + jupyter={"outputs_hidden": false}
expand((x+1)*(x+2)*(x+3))
# -

# The `expand` function takes a number of keyword arguments with which we can tell the function what kind of expansions we want to have performed. For example, to expand trigonometric expressions, use the `trig=True` keyword argument:

# + jupyter={"outputs_hidden": false}
sin(a+b)

# + jupyter={"outputs_hidden": false}
expand(sin(a+b), trig=True)
# -

# See `help(expand)` for a detailed explanation of the various types of expansions the `expand` function can perform.

# The opposite of a product expansion is of course factoring. To factor an expression in SymPy, use the `factor` function:

# + jupyter={"outputs_hidden": false}
factor(x**3 + 6 * x**2 + 11*x + 6)
# -

# ### Simplify

# The `simplify` function tries to simplify an expression into a nice-looking form, using various techniques. More specific alternatives to `simplify` also exist: `trigsimp`, `powsimp`, `logcombine`, etc.
# # The basic usages of these functions are as follows: # + jupyter={"outputs_hidden": false} # simplify expands a product simplify((x+1)*(x+2)*(x+3)) # + jupyter={"outputs_hidden": false} # simplify uses trigonometric identities simplify(sin(a)**2 + cos(a)**2) # + jupyter={"outputs_hidden": false} simplify(cos(x)/sin(x)) # - # ### apart and together # To manipulate symbolic expressions of fractions, we can use the `apart` and `together` functions: # + jupyter={"outputs_hidden": false} f1 = 1/((a+1)*(a+2)) # + jupyter={"outputs_hidden": false} f1 # + jupyter={"outputs_hidden": false} apart(f1) # + jupyter={"outputs_hidden": false} f2 = 1/(a+2) + 1/(a+3) # + jupyter={"outputs_hidden": false} f2 # + jupyter={"outputs_hidden": false} together(f2) # - # Simplify usually combines fractions but does not factor: # + jupyter={"outputs_hidden": false} simplify(f2) # - # ## Calculus # In addition to algebraic manipulations, the other main use of CAS is to do calculus, like derivatives and integrals of algebraic expressions. # ### Differentiation # Differentiation is usually simple. Use the `diff` function. 
The first argument is the expression to take the derivative of, and the second argument is the symbol by which to take the derivative: # + jupyter={"outputs_hidden": false} y # + jupyter={"outputs_hidden": false} diff(y**2, x) # - # For higher order derivatives we can do: # + jupyter={"outputs_hidden": false} diff(y**2, x, x) # + jupyter={"outputs_hidden": false} diff(y**2, x, 2) # same as above # - # To calculate the derivative of a multivariate expression, we can do: # + jupyter={"outputs_hidden": false} x, y, z = symbols("x,y,z") # + jupyter={"outputs_hidden": false} f = sin(x*y) + cos(y*z) # - # $\frac{d^3f}{dxdy^2}$ # + jupyter={"outputs_hidden": false} diff(f, x, 1, y, 2) # - # ## Integration # Integration is done in a similar fashion: # + jupyter={"outputs_hidden": false} f # + jupyter={"outputs_hidden": false} integrate(f, x) # - # By providing limits for the integration variable we can evaluate definite integrals: # + jupyter={"outputs_hidden": false} integrate(f, (x, -1, 1)) # - # and also improper integrals # + jupyter={"outputs_hidden": false} integrate(exp(-x**2), (x, -oo, oo)) # - # Remember, `oo` is the SymPy notation for inifinity. # ### Sums and products # We can evaluate sums and products using the functions: 'Sum' # + jupyter={"outputs_hidden": false} n = Symbol("n") # + jupyter={"outputs_hidden": false} Sum(1/n**2, (n, 1, 10)) # + jupyter={"outputs_hidden": false} Sum(1/n**2, (n,1, 10)).evalf() # + jupyter={"outputs_hidden": false} Sum(1/n**2, (n, 1, oo)).evalf() # - # Products work much the same way: # + jupyter={"outputs_hidden": false} Product(n, (n, 1, 10)) # 10! # - # ## Limits # Limits can be evaluated using the `limit` function. 
For example, # + jupyter={"outputs_hidden": false} limit(sin(x)/x, x, 0) # - # We can use 'limit' to check the result of derivation using the `diff` function: # + jupyter={"outputs_hidden": false} f # + jupyter={"outputs_hidden": false} diff(f, x) # - # $\displaystyle \frac{\mathrm{d}f(x,y)}{\mathrm{d}x} = \frac{f(x+h,y)-f(x,y)}{h}$ # + jupyter={"outputs_hidden": false} h = Symbol("h") # + jupyter={"outputs_hidden": false} limit((f.subs(x, x+h) - f)/h, h, 0) # - # OK! # We can change the direction from which we approach the limiting point using the `dir` keywork argument: # + jupyter={"outputs_hidden": false} limit(1/x, x, 0, dir="+") # + jupyter={"outputs_hidden": false} limit(1/x, x, 0, dir="-") # - # ## Series # Series expansion is also one of the most useful features of a CAS. In SymPy we can perform a series expansion of an expression using the `series` function: # + jupyter={"outputs_hidden": false} series(exp(x), x) # - # By default it expands the expression around $x=0$, but we can expand around any value of $x$ by explicitly include a value in the function call: # + jupyter={"outputs_hidden": false} series(exp(x), x, 1) # - # And we can explicitly define to which order the series expansion should be carried out: # + jupyter={"outputs_hidden": false} series(exp(x), x, 1, 10) # - # The series expansion includes the order of the approximation, which is very useful for keeping track of the order of validity when we do calculations with series expansions of different order: # + jupyter={"outputs_hidden": false} s1 = cos(x).series(x, 0, 5) s1 # + jupyter={"outputs_hidden": false} s2 = sin(x).series(x, 0, 2) s2 # + jupyter={"outputs_hidden": false} expand(s1 * s2) # - # If we want to get rid of the order information we can use the `removeO` method: # + jupyter={"outputs_hidden": false} expand(s1.removeO() * s2.removeO()) # - # But note that this is not the correct expansion of $\cos(x)\sin(x)$ to $5$th order: # + jupyter={"outputs_hidden": false} 
(cos(x)*sin(x)).series(x, 0, 6) # - # ## Linear algebra # ### Matrices # Matrices are defined using the `Matrix` class: # + jupyter={"outputs_hidden": false} m11, m12, m21, m22 = symbols("m11, m12, m21, m22") b1, b2 = symbols("b1, b2") # + jupyter={"outputs_hidden": false} A = Matrix([[m11, m12],[m21, m22]]) A # + jupyter={"outputs_hidden": false} b = Matrix([[b1], [b2]]) b # - # With `Matrix` class instances we can do the usual matrix algebra operations: # + jupyter={"outputs_hidden": false} A**2 # + jupyter={"outputs_hidden": false} A * b # - # And calculate determinants and inverses, and the like: # + jupyter={"outputs_hidden": false} A.det() # + jupyter={"outputs_hidden": false} A.inv() # - # ## Solving equations # For solving equations and systems of equations we can use the `solve` function: # + jupyter={"outputs_hidden": false} solve(x**2 - 1, x) # + jupyter={"outputs_hidden": false} solve(x**4 - x**2 - 1, x) # - # System of equations: # + jupyter={"outputs_hidden": false} solve([x + y - 1, x - y - 1], [x,y]) # - # In terms of other symbolic expressions: # + jupyter={"outputs_hidden": false} solve([x + y - a, x - y - c], [x,y]) # - # ## Further reading # * http://sympy.org/en/index.html - The SymPy projects web page. # * https://github.com/sympy/sympy - The source code of SymPy. # * http://live.sympy.org - Online version of SymPy for testing and demonstrations. # ## Versions # + jupyter={"outputs_hidden": false} # %reload_ext version_information # %version_information numpy, matplotlib, sympy # -
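As a postscript, the earlier caveat about series expansions can be verified programmatically: multiplying the `removeO()` results of two truncated series is not the correct 5th-order expansion of $\cos(x)\sin(x)$, and the two polynomials already disagree at third order.

```python
from sympy import Rational, cos, expand, sin, symbols

x = symbols('x')
s1 = cos(x).series(x, 0, 5).removeO()
s2 = sin(x).series(x, 0, 2).removeO()

naive = expand(s1 * s2)                              # product of truncations
true5 = (cos(x) * sin(x)).series(x, 0, 6).removeO()  # genuine 5th-order expansion
```

The naive product gives an $x^3$ coefficient of $-1/2$, while the true expansion (equivalently, $\sin(2x)/2$) has $-2/3$.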
001-Jupyter/001-Tutorials/004-Scientific-Python-Lectures/Lecture-5-Sympy.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Seleção de atributos import pandas as pd import numpy as np from sklearn.feature_selection import SelectFdr, chi2 from sklearn.naive_bayes import GaussianNB from sklearn.metrics import accuracy_score dataset = pd.read_csv('../0_datasets/ad.data', header=None) dataset.head() X = dataset.iloc[:,0:1558].values X y = dataset.iloc[:,1558] y np.unique(y, return_counts=True) naive1 = GaussianNB() naive1.fit(X,y) previsoes1 = naive1.predict(X) accuracy_score(y,previsoes1) selecao = SelectFdr(chi2, alpha=0.01) X_novo = selecao.fit_transform(X,y) X.shape, X_novo.shape selecao.pvalues_, len(selecao.pvalues_) np.sum(selecao.pvalues_ <= 0.01) colunas = selecao.get_support() colunas indices = np.where(colunas==True) indices naive2 = GaussianNB() naive2.fit(X_novo,y) previsoes2 = naive2.predict(X_novo) accuracy_score(y,previsoes2)
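Both accuracies above are computed on the same data the models were fit on, and the selector also sees the evaluation data, which inflates the scores. A hedged sketch of the leakage-free version uses a held-out split and puts `SelectFdr` inside a `Pipeline`; since `ad.data` is a local file, the data below is a synthetic, binarized stand-in (`make_classification` settings are illustrative, not the notebook's dataset).

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFdr, chi2
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

# binary features, so chi2's non-negativity requirement is met (like ad.data)
X, y = make_classification(n_samples=400, n_features=60, n_informative=10,
                           class_sep=2.0, random_state=0)
X = (X > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pipe = make_pipeline(SelectFdr(chi2, alpha=0.01), GaussianNB())
pipe.fit(X_tr, y_tr)              # selection is learned on the training fold only
acc = pipe.score(X_te, y_te)      # scored on data neither step has seen
n_kept = int(pipe.named_steps['selectfdr'].get_support().sum())
```

The pipeline guarantees the chi-squared p-values are computed without looking at the test fold, so `acc` is an honest estimate.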
8_intervalo_confianca/selecao_atributos_qui-quadrado.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np from collections import Counter def euclidean_distance(x1, x2): return np.sqrt(np.sum((x1 - x2)**2)) class KNN: def __init__(self, k=3): self.k = k def fit(self, X, y): self.X_train = X self.y_train = y def predict(self, X): y_pred = [self._predict(x) for x in X] return np.array(y_pred) def _predict(self, x): # Compute distances between x and all examples in the training set distances = [euclidean_distance(x, x_train) for x_train in self.X_train] # Sort by distance and return indices of the first k neighbors k_idx = np.argsort(distances)[:self.k] # Extract the labels of the k nearest neighbor training samples k_neighbor_labels = [self.y_train[i] for i in k_idx] # return the most common class label most_common = Counter(k_neighbor_labels).most_common(1) return most_common[0][0] # + import numpy as np from sklearn import datasets from sklearn.model_selection import train_test_split import matplotlib.pyplot as plt from matplotlib.colors import ListedColormap cmap = ListedColormap(['#FF0000', '#00FF00', '#0000FF']) def accuracy(y_true, y_pred): accuracy = np.sum(y_true == y_pred) / len(y_true) return accuracy iris = datasets.load_iris() X, y = iris.data, iris.target X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1234) # Inspect data #print(X_train.shape) #print(X_train[0]) #print(y_train.shape) #print(y_train) #plt.figure() #plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap, edgecolor='k', s=20) #plt.show() k = 3 clf = KNN(k=k) clf.fit(X_train, y_train) predictions = clf.predict(X_test) print("custom KNN classification accuracy", accuracy(y_test, predictions)) # + # Inspect data print(X_train.shape) print(X_train[0]) print(y_train.shape) print(y_train) plt.figure() plt.scatter(X[:, 0], X[:, 1], 
c=y, cmap=cmap, edgecolor='k', s=20) plt.show() # -
KNearestNeighbour.ipynb
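The nearest-neighbour logic in `KNearestNeighbour.ipynb` can be sanity-checked without NumPy or scikit-learn by running the same majority-vote idea on a toy dataset. This is a plain-Python sketch of what the `KNN` class above does, not a drop-in replacement:

```python
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Predict the label of x by majority vote among its k nearest
    neighbours, using squared Euclidean distance (same ordering as
    the euclidean_distance helper above)."""
    distances = sorted(
        (sum((a - b) ** 2 for a, b in zip(row, x)), label)
        for row, label in zip(X_train, y_train)
    )
    k_labels = [label for _, label in distances[:k]]
    return Counter(k_labels).most_common(1)[0][0]

# Two well-separated clusters: class 0 near the origin, class 1 near (5, 5).
X = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
y = [0, 0, 0, 1, 1, 1]
print(knn_predict(X, y, (0.4, 0.2)))  # → 0
print(knn_predict(X, y, (5.2, 5.1)))  # → 1
```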
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- from pyspark.sql.functions import size from pyspark.sql.functions import array_contains reader = sqlContext.read.format('com.databricks.spark.avro') df = reader.load('data/spark_metadata.avro') df.columns selection1 = df[df['from'].startswith('maggie')] #selection2 = selection1[selection1['to'].apply(lambda name: name.startswith('kev'))] #selection2.select(['id', 'from', 'to', 'subject']).show() # col = selection1['to'] selection1.show() from pyspark.sql.functions import udf from pyspark.sql.types import BooleanType # + # selection1[col.getItem(0).startswith('kevin')].show() # selection1.select(size(col)).show() kevudf = udf(hasKevin, BooleanType()) selection1.filter(kevudf(df.to)).show() # - df1 = df.toPandas() # + def array_any(f, col): return any(map(f, col)) def lambda_startswith(prefix): return lambda value: value.startswith(prefix) def hasKevin(tos): return array_any(lambda_startswith('kev'), tos) def contains(value): return udf(lambda name: value in name, BooleanType()) df2 = df1[df1['from'].apply(lambda_startswith('maggie'))] selection2 = df2[df2['to'].apply(hasKevin)] selection2[['from', 'to', 'subject']] # - selection2['to'] df1[df1['from'].apply(lambda x: x.startswith('wally'))] df.select(df['from'])[~contains('enron')(df['from'])].distinct().count() # # Load CSV # + dicFile = 'enron_small_dic.csv' csvLoader = sqlContext.read.format('com.databricks.spark.csv') dic = csvLoader.options(delimiter='\t', header='false', inferschema='true').load(dicFile) dic = dic.select(dic['C0'].alias('id'), dic['C1'].alias('word'), dic['C2'].alias('count')) # - dic.columns dic1
Avro loading data.ipynb
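The filtering in `Avro loading data.ipynb` ultimately reduces to the plain-Python predicates (`array_any`, `lambda_startswith`, `hasKevin`) that the Spark UDF wraps. A self-contained sketch of that predicate logic — no Spark session required, and the sample rows below are made up:

```python
def array_any(f, col):
    """True if the predicate f holds for any element of col."""
    return any(map(f, col))

def lambda_startswith(prefix):
    return lambda value: value.startswith(prefix)

def has_kevin(tos):
    """The predicate wrapped by the BooleanType UDF above:
    does any recipient address start with 'kev'?"""
    return array_any(lambda_startswith('kev'), tos)

# Hypothetical rows standing in for the Avro 'from'/'to' columns.
rows = [
    {'from': 'maggie@example.com', 'to': ['kevin@example.com', 'bob@example.com']},
    {'from': 'maggie@example.com', 'to': ['alice@example.com']},
]
matches = [r for r in rows if has_kevin(r['to'])]
print(len(matches))  # → 1
```

In the notebook this same predicate is registered once as a UDF for the Spark path (`udf(hasKevin, BooleanType())`) and applied directly with `.apply` on the pandas path, which is why keeping it as a plain function pays off.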
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Importing libraries import os, glob, csv, subprocess, sys, re, operator from git import * from subprocess import Popen, PIPE from os import path import pandas as pd # # Configure repository and directories # + userhome = os.path.expanduser('~') txt_file = open(userhome + r"/DifferentDiffAlgorithms/SZZ/code_document/project_identity.txt", "r") pid = txt_file.read().split('\n') project = pid[0] bugidentifier = pid[1] repository = userhome + r'/DifferentDiffAlgorithms/SZZ/datasource/' + project + '/' analyze_dir = userhome + r'/DifferentDiffAlgorithms/SZZ/projects_analyses/' + project + '/' print ("Project name = %s" % project) print ("Project key = %s" % bugidentifier) # - # # Defining function to execute git command def execute_command(cmd, work_dir): #Executes a shell command in a subprocess, waiting until it has completed. 
pipe = subprocess.Popen(cmd, shell=True, cwd=work_dir, stdout=subprocess.PIPE, stderr=subprocess.PIPE) (out, error) = pipe.communicate() return out, error # # Loading the files data fields = ['bug_id','bugfix_commitID','parent_id','status','filename'] dtfiles = pd.read_csv(analyze_dir + "02_diff_extraction/01_modified_files/allmodified_files.csv") dtfiles = dtfiles[fields] dtfiles # # Extract the frequency of deleted lines algorithms = ['myers','histogram'] buggylinefiles = [] for n in range(0,len(algorithms)): bf = [] for aa in range(0,len(dtfiles)): sys.stdout.write('\r%i: ' %(n+1) + ' Extracting data: %i' %(aa+1)) sys.stdout.flush() del_count = "git diff -w --ignore-blank-lines --diff-algorithm=" + algorithms[n] + " " + dtfiles.iloc[aa][2] + " " + dtfiles.iloc[aa][1] + " -- " + dtfiles.iloc[aa][4] + " | grep '^[-]' | grep -Ev '^(--- a/|\+\+\+ b/)' | wc -l" del_num = re.search("(\d+)",str(execute_command(del_count, repository))) del_num = int(del_num.group()) zzz = [dtfiles.iloc[aa][0], dtfiles.iloc[aa][1], dtfiles.iloc[aa][2], dtfiles.iloc[aa][4], del_num] bf.append(zzz) buggylinefiles.append(bf) print('\nExtraction is complete') head = ['bug_id','bugfix_commitID','parent_id','filepath','#deletions'] for nn, algo in enumerate(algorithms): with open (analyze_dir + '02_diff_extraction/02_list_of_file_with_number_of_deletedlines/' + algo + '_buggylines_files.csv','w') as csvfile: writers = csv.writer(csvfile, delimiter=',') writers.writerow(head) for item in buggylinefiles[nn]: writers.writerow(item) print ("The csv file has been created") # # Merge 2 datasets of modified files that have deleted lines # + diffmyers = pd.read_csv(analyze_dir + '02_diff_extraction/02_list_of_file_with_number_of_deletedlines/myers_buggylines_files.csv') diffmyers = diffmyers[head][diffmyers[head]['#deletions'] != 0] diffhist = pd.read_csv(analyze_dir + '02_diff_extraction/02_list_of_file_with_number_of_deletedlines/histogram_buggylines_files.csv') diffhist = 
diffhist[head][diffhist[head]['#deletions'] != 0] # - df_merge = diffmyers.merge(diffhist, on=['bug_id','bugfix_commitID','parent_id','filepath'], how='outer', suffixes=('_myers', '_histogram')) df_merge.fillna(0, inplace=True) cols = ['bug_id','bugfix_commitID','parent_id','filepath','#deletions_myers','#deletions_histogram'] df_merge = df_merge[cols] df_merge.drop_duplicates(subset=None, inplace=True) df_merge # # Deleted lines extraction del_files = [] for aa in range(0,len(df_merge)): sys.stdout.write('\rExtracting data: %i' %(aa+1)) sys.stdout.flush() f_names = "_" + ((df_merge.iloc[aa][3].split('/'))[-1:])[0] + "_"+ df_merge.iloc[aa][1] + "-" + df_merge.iloc[aa][2][:10] + "_" + df_merge.iloc[aa][0] + "_" + str(aa+1) delmyers_num = df_merge.iloc[aa][4] if delmyers_num != 0: m_name = f_names + "_myersbuglines_" + str(aa+1) + ".diff" myers_name = analyze_dir + "02_diff_extraction/03_file_having_deletedlines/myers/" + m_name diff_cmd = "git diff -w --ignore-blank-lines --diff-algorithm=myers " + df_merge.iloc[aa][2] + " " + df_merge.iloc[aa][1] + " -- " + df_merge.iloc[aa][3] + " | grep '^[-]' | grep -Ev '^(--- a/|\+\+\+ b/)' > " + myers_name execute_command(diff_cmd, repository) else: m_name = "-" delhist_num = df_merge.iloc[aa][5] if delhist_num != 0: h_name = f_names + "_histogrambuglines_" + str(aa+1) + ".diff" hist_name = analyze_dir + "02_diff_extraction/03_file_having_deletedlines/histogram/" + h_name diff_cmd = "git diff -w --ignore-blank-lines --diff-algorithm=histogram " + df_merge.iloc[aa][2] + " " + df_merge.iloc[aa][1] + " -- " + df_merge.iloc[aa][3] + " | grep '^[-]' | grep -Ev '^(--- a/|\+\+\+ b/)' > " + hist_name execute_command(diff_cmd, repository) else: h_name = "-" zzz = [df_merge.iloc[aa][0], df_merge.iloc[aa][1], df_merge.iloc[aa][2], df_merge.iloc[aa][3], m_name, h_name, df_merge.iloc[aa][4], df_merge.iloc[aa][5]] del_files.append(zzz) print('\nExtraction is complete') header = 
['bug_id','bugfix_commitID','parent_id','filepath','filename_myers','filename_histogram','#deletions_myers','#deletions_histogram'] with open (analyze_dir + '02_diff_extraction/03_file_having_deletedlines/buggyfiles_from_both_algorithms.csv','w') as csvfile: writers = csv.writer(csvfile, delimiter=',') writers.writerow(header) for item in del_files: writers.writerow(item) print ("The csv file has been created")
SZZ/code_document/03_bugline_extraction.ipynb
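The deletion counting in `03_bugline_extraction.ipynb` shells out to `git diff | grep '^[-]' | grep -Ev '^(--- a/|\+\+\+ b/)' | wc -l`. The same filter can be expressed directly in Python, which makes it testable without a repository; a sketch over a made-up diff:

```python
def deletion_count(diff_text):
    """Count deleted lines in unified-diff output, mirroring the
    grep '^[-]' | grep -Ev '^(--- a/|...)' | wc -l pipeline above:
    keep lines starting with '-' but drop the '--- a/' file header
    ('+++ b/' headers start with '+', so the first grep already drops them)."""
    return sum(
        1 for line in diff_text.splitlines()
        if line.startswith('-') and not line.startswith('--- a/')
    )

# Hypothetical diff output (illustrative only, not from the studied projects).
sample_diff = """--- a/src/Foo.java
+++ b/src/Foo.java
@@ -1,3 +1,2 @@
-int x = 0;
-int y = 0;
+int x = 0, y = 0;
"""
print(deletion_count(sample_diff))  # → 2
```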
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # We'll work with a data set of customer preferences on trains, available [here](http://vincentarelbundock.github.io/Rdatasets/doc/Ecdat/Train.html). This is a static # dataset and isn't being updated, but you could imagine that each month the Dutch authorities # upload a new month's worth of data. # We can start by making some very basic assertions, that the dataset is the correct shape, and that a few columns are the correct dtypes. Assertions are made as decorators to functions that return a DataFrame. # + import pandas as pd import engarde.decorators as ed pd.set_option('display.max_rows', 10) dtypes = dict( price1=int, price2=int, time1=int, time2=int, change1=int, change2=int, comfort1=int, comfort2=int ) @ed.is_shape((-1, 11)) @ed.has_dtypes(items=dtypes) def unload(): url = "http://vincentarelbundock.github.io/Rdatasets/csv/Ecdat/Train.csv" trains = pd.read_csv(url, index_col=0) return trains # - df = unload() df.head() # Notice two things: we only specified the dtypes for some of the columns, and we don't care about the length of the DataFrame (just its width), so we passed -1 for the first dimension of the shape. # Since people are rational, their first choice is surely going to be better in *at least* one way than their second choice. This is fundamental to our analysis later on, so we'll explicitly state it in our code, and check it in our data. # + def rational(df): """ Check that at least one criterion is better. 
""" r = ((df.price1 < df.price2) | (df.time1 < df.time2) | (df.change1 < df.change2) | (df.comfort1 > df.comfort2)) return r @ed.is_shape((-1, 11)) @ed.has_dtypes(items=dtypes) @ed.verify_all(rational) def unload(): url = "http://vincentarelbundock.github.io/Rdatasets/csv/Ecdat/Train.csv" trains = pd.read_csv(url, index_col=0) return trains # - df = unload() # OK, so apparently people aren't rational... We'll fix this problem by ignoring those people (why change your mind when you can change the data?). # + @ed.verify_all(rational) def drop_silly_people(df): r = ((df.price1 < df.price2) | (df.time1 < df.time2) | (df.change1 < df.change2) | (df.comfort1 > df.comfort2)) return df[r] @ed.is_shape((-1, 11)) @ed.has_dtypes(items=dtypes) def unload(): url = "http://vincentarelbundock.github.io/Rdatasets/csv/Ecdat/Train.csv" trains = pd.read_csv(url, index_col=0) return trains def main(): df = (unload() .pipe(drop_silly_people) ) return df # - df = main() # There's a couple things to notice here. The checks are always performed on the *result* of a function. That's why our `ed.verify_all(rational)` works now. I also like how the assertions don't clutter the logic of the code.
examples/Trains.ipynb
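The engarde pattern in `Trains.ipynb` — decorators that validate a function's *return value* — can be illustrated without pandas. A minimal stand-in (`verify` below is a hypothetical simplification for illustration, not engarde's actual API):

```python
import functools

def verify(check):
    """Minimal stand-in for engarde-style decorators: run `check` on the
    decorated function's return value and fail loudly if it is falsy."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            result = func(*args, **kwargs)
            assert check(result), f"check {check.__name__} failed"
            return result
        return wrapper
    return decorator

def has_eleven_columns(rows):
    return all(len(row) == 11 for row in rows)

@verify(has_eleven_columns)
def unload():
    # Stand-in for reading Train.csv: two rows of 11 dummy values.
    return [list(range(11)), list(range(11))]

print(len(unload()))  # → 2
```

This mirrors the key property noted at the end of the notebook: the check always runs on the *result* of the decorated function, which is why stacking `@ed.verify_all(rational)` on `drop_silly_people` succeeds after the offending rows are removed.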
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Quantum Circuits # Quantum computers can only use a specific set of gates (universal gate set). Given the entanglers and their amplitudes found in Step 3, one can find corresponding representation of these operators in terms of elementary gates using the following procedure. from qiskit import IBMQ #IBMQ.save_account(MY_API_TOKEN) # + import numpy as np import tequila as tq from utility import * threshold = 1e-6 #VGG 1e-6 #Cutoff for UCC MP2 amplitudes and QCC ranking gradients tol=threshold # - # First, we set up the Hamiltonian in Tequila's format and the unitary gates obtained in Step 3. # + R=1.5 bond_lengths = np.linspace(0.4,2.4,10) molecule='h2' basis='sto-3g' qubit_transf='jw' #VGG 'bk' is not working well! #Define number of entanglers to enter ansatz n_ents = 1 # + estimate1=[] satring_e=[] U_QCC_e=[] CO_QCC_e=[] QCC_energy=[] IBM_QC_e=[] for R in bond_lengths: #Quantum Chemistry methods xyz_data = get_molecular_data(molecule, geometry=R, xyz_format=True) hm = tq.quantumchemistry.Molecule(geometry=xyz_data, basis_set=basis) print('Number of spin-orbitals (qubits): {} \n'.format(2*hm.n_orbitals)) #e0=hm.compute_energy(method='cisd') #VGG problem for H2 e0=obtain_PES(molecule,[R], basis=basis, method='cisd') hf_reference = hf_occ(2*hm.n_orbitals, hm.n_electrons) H = hm.make_hamiltonian() ##Got the first estimate for the energy estimate1.append(e0) print('CISD energy: {}'.format(e0)) print("\nHamiltonian has {} terms\n".format(len(H))) #Rank entanglers using energy gradient criterion ranked_entangler_groupings = generate_QCC_gradient_groupings(H.to_openfermion(), 2*hm.n_orbitals, hf_reference, cutoff=threshold) print('Grouping gradient magnitudes (Grouping : Gradient magnitude):') for i in range(len(ranked_entangler_groupings)): print('{} 
: {}'.format(i+1,ranked_entangler_groupings[i][1])) entanglers = get_QCC_entanglers(ranked_entangler_groupings, n_ents, 2*hm.n_orbitals) print('\nSelected entanglers:') for ent in entanglers: print(ent) #Mean-field part of U (Omega): U_MF = construct_QMF_ansatz(n_qubits = 2*hm.n_orbitals) #Entangling part of U: U_ENT = construct_QCC_ansatz(entanglers) U_QCC = U_MF + U_ENT E = tq.ExpectationValue(H=H, U=U_QCC) initial_vals = init_qcc_params(hf_reference, E.extract_variables()) ##Got the initial energy estimate based on HF for QCC satring_e.append(tq.simulate(E, variables=initial_vals)) #Minimize wrt the entangler amplitude and MF angles: result = tq.minimize(objective=E, method="BFGS", initial_values=initial_vals, tol=tol) U_QCC_e.append(result.energy) ##Got the optimization estimate based on U_QCC print('\nObtained QCC energy ({} entanglers): {}'.format(len(entanglers), result.energy)) #QC based methods - the VQE H_q = tq.QubitHamiltonian.from_openfermion(get_qubit_hamiltonian( molecule, R, basis, qubit_transf=qubit_transf)) #VGG note 'bk' is not working well starting_angles=result.angles vars={str(kw):starting_angles[kw] for kw in starting_angles} n_qubits=2*hm.n_orbitals a = tq.Variable("tau_0") U = construct_QMF_ansatz(n_qubits) U += tq.gates.ExpPauli(paulistring=tq.PauliString.from_string("X(0)Y(1)X(2)X(3)"), angle=a) E_q = tq.ExpectationValue(H=H_q, U=U) #Perform VQE if feasible #Minimize wrt the entangler amplitude and MF angles: #result_q = tq.minimize(objective=E_q, method="BFGS", initial_values=vars, tol=tol) #CO_QCC_e.append(result_q.energy) #starting_angles=result_q.angles #vars={str(kw):starting_angles[kw] for kw in starting_angles} e_q=tq.simulate(E_q, variables=vars) QCC_energy.append(e_q) #Got the circuit energy print() print(U) print(vars) print("Energy:",e_q) # list of devices available can be found in ibmq account page # %time q_result=tq.simulate(E_q, variables=vars, samples=100, backend="qiskit", device='ibmq_essex')# device='ibmq_16_melbourne') 
IBM_QC_e.append(q_result) print("QC_device:",q_result) print() # - # One can check the expectation value to see it is near the ground state energy. # One can run the same experiment on a real quantum computer through IBM Quantum Experience (ibmq). After activating your account here (https://quantum-computing.ibm.com/login), copy the API token and execute the commented block below. # + #Plot the PESs import matplotlib.pyplot as plt title_text=molecule.upper()+' dissociation, basis:'+basis.upper() plt.title(title_text) plt.xlabel('R, Angstrom') plt.ylabel('E, Hartree') plt.plot(bond_lengths, estimate1, label='CISD') plt.scatter(bond_lengths, satring_e, label='U_QCC_start',marker='.', color='b') plt.scatter(bond_lengths, U_QCC_e, label='U_QCC_energy', marker='^',color='r') #plt.scatter(bond_lengths, CO_QCC_e, label='CO_QCC_e', marker='o',color='orange') plt.scatter(bond_lengths, QCC_energy, label='QCC_energy', marker='+',color='g') plt.scatter(bond_lengths, IBM_QC_e, label='IBM_QC_run', marker='x', color='k') plt.legend() # - # The following code block prints the circuit. # + tags=[] circ = tq.circuit.compiler.compile_exponential_pauli_gate(U) tq.draw(circ, backend="qiskit") # -
Project_2_VQE_Molecules/S5_Circuits-H2_on_IBMq-sussex.ipynb
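The VQE loop in the notebook above minimises an expectation value over ansatz parameters. Stripped of Tequila and molecular Hamiltonians, the core idea fits in a few lines; a single-qubit sketch with H = Z (exact ground-state energy −1), using a crude grid search in place of `tq.minimize`:

```python
import math

def z_expectation(theta):
    """⟨ψ(θ)|Z|ψ(θ)⟩ for the single-qubit mean-field state
    |ψ(θ)⟩ = cos(θ/2)|0⟩ + sin(θ/2)|1⟩ — a one-qubit analogue of
    the QMF part of the ansatz built above."""
    a, b = math.cos(theta / 2), math.sin(theta / 2)
    return a * a - b * b  # equals cos(theta)

# Grid search standing in for the BFGS optimisation in the notebook.
thetas = [i * math.pi / 100 for i in range(201)]
best = min(thetas, key=z_expectation)
print(round(z_expectation(best), 6))  # → -1.0
```

The real loop does the same thing with a multi-qubit ansatz (QMF angles plus entangler amplitudes) and a gradient-based optimiser, but the objective is identical in shape: a scalar expectation value as a function of the variational parameters.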
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # TensorFlow Tutorial #03-C # # Keras API # # by [<NAME>](http://www.hvass-labs.org/) # / [GitHub](https://github.com/Hvass-Labs/TensorFlow-Tutorials) / [Videos on YouTube](https://www.youtube.com/playlist?list=PL9Hr9sNUjfsmEu1ZniY0XpHSzl5uihcXZ) # ## Introduction # # Tutorial #02 showed how to implement a Convolutional Neural Network in TensorFlow. We made a few helper-functions for creating the layers in the network. It is essential to have a good high-level API because it makes it much easier to implement complex models, and it lowers the risk of errors. # # There are several of these builder API's available for TensorFlow: PrettyTensor (Tutorial #03), Layers API (Tutorial #03-B), and several others. But they were never really finished and now they seem to be more or less abandoned by their developers. # # This tutorial is about the Keras API which is already highly developed with very good documentation - and the development continues. It seems likely that Keras will be the standard API for TensorFlow in the future so it is recommended that you use it instead of the other APIs. # # The author of Keras has written a [blog-post](https://blog.keras.io/user-experience-design-for-apis.html) on his API design philosophy which you should read. # ## Flowchart # The following chart shows roughly how the data flows in the Convolutional Neural Network that is implemented below. See Tutorial #02 for a more detailed description of convolution. # # There are two convolutional layers, each followed by a down-sampling using max-pooling (not shown in this flowchart). Then there are two fully-connected layers ending in a softmax-classifier. 
# ![Flowchart](images/02_network_flowchart.png) # ## Imports # %matplotlib inline import matplotlib.pyplot as plt import tensorflow as tf import numpy as np import math # We need to import several things from Keras. Note the long import-statements. This might be a bug. Hopefully it will be possible to write shorter and more elegant lines in the future. # from tf.keras.models import Sequential # This does not work! from tensorflow.python.keras.models import Sequential from tensorflow.python.keras.layers import InputLayer, Input from tensorflow.python.keras.layers import Reshape, MaxPooling2D from tensorflow.python.keras.layers import Conv2D, Dense, Flatten # This was developed using Python 3.6 (Anaconda) and TensorFlow version: tf.__version__ # ## Load Data # The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path. from mnist import MNIST data = MNIST(data_dir="data/MNIST/") # The MNIST data-set has now been loaded and consists of 70.000 images and class-numbers for the images. The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial. print("Size of:") print("- Training-set:\t\t{}".format(data.num_train)) print("- Validation-set:\t{}".format(data.num_val)) print("- Test-set:\t\t{}".format(data.num_test)) # Copy some of the data-dimensions for convenience. # + # The number of pixels in each dimension of an image. img_size = data.img_size # The images are stored in one-dimensional arrays of this length. img_size_flat = data.img_size_flat # Tuple with height and width of images used to reshape arrays. img_shape = data.img_shape # Tuple with height, width and depth used to reshape arrays. # This is used for reshaping in Keras. img_shape_full = data.img_shape_full # Number of classes, one class for each of 10 digits. num_classes = data.num_classes # Number of colour channels for the images: 1 channel for gray-scale. 
num_channels = data.num_channels # - # ### Helper-function for plotting images # Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image. def plot_images(images, cls_true, cls_pred=None): assert len(images) == len(cls_true) == 9 # Create figure with 3x3 sub-plots. fig, axes = plt.subplots(3, 3) fig.subplots_adjust(hspace=0.3, wspace=0.3) for i, ax in enumerate(axes.flat): # Plot image. ax.imshow(images[i].reshape(img_shape), cmap='binary') # Show true and predicted classes. if cls_pred is None: xlabel = "True: {0}".format(cls_true[i]) else: xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i]) # Show the classes as the label on the x-axis. ax.set_xlabel(xlabel) # Remove ticks from the plot. ax.set_xticks([]) ax.set_yticks([]) # Ensure the plot is shown correctly with multiple plots # in a single Notebook cell. plt.show() # ### Plot a few images to see if data is correct # + # Get the first images from the test-set. images = data.x_test[0:9] # Get the true classes for those images. cls_true = data.y_test_cls[0:9] # Plot the images and labels using our helper-function above. plot_images(images=images, cls_true=cls_true) # - # ### Helper-function to plot example errors # # Function for plotting examples of images from the test-set that have been mis-classified. def plot_example_errors(cls_pred): # cls_pred is an array of the predicted class-number for # all images in the test-set. # Boolean array whether the predicted class is incorrect. incorrect = (cls_pred != data.y_test_cls) # Get the images from the test-set that have been # incorrectly classified. images = data.x_test[incorrect] # Get the predicted classes for those images. cls_pred = cls_pred[incorrect] # Get the true classes for those images. cls_true = data.y_test_cls[incorrect] # Plot the first 9 images. 
plot_images(images=images[0:9], cls_true=cls_true[0:9], cls_pred=cls_pred[0:9]) # ## PrettyTensor API # # This is how the Convolutional Neural Network was implemented in Tutorial #03 using the PrettyTensor API. It is shown here for easy comparison to the Keras implementation below. if False: x_pretty = pt.wrap(x_image) with pt.defaults_scope(activation_fn=tf.nn.relu): y_pred, loss = x_pretty.\ conv2d(kernel=5, depth=16, name='layer_conv1').\ max_pool(kernel=2, stride=2).\ conv2d(kernel=5, depth=36, name='layer_conv2').\ max_pool(kernel=2, stride=2).\ flatten().\ fully_connected(size=128, name='layer_fc1').\ softmax_classifier(num_classes=num_classes, labels=y_true) # ## Sequential Model # # The Keras API has two modes of constructing Neural Networks. The simplest is the Sequential Model which only allows for the layers to be added in sequence. # + # Start construction of the Keras Sequential model. model = Sequential() # Add an input layer which is similar to a feed_dict in TensorFlow. # Note that the input-shape must be a tuple containing the image-size. model.add(InputLayer(input_shape=(img_size_flat,))) # The input is a flattened array with 784 elements, # but the convolutional layers expect images with shape (28, 28, 1) model.add(Reshape(img_shape_full)) # First convolutional layer with ReLU-activation and max-pooling. model.add(Conv2D(kernel_size=5, strides=1, filters=16, padding='same', activation='relu', name='layer_conv1')) model.add(MaxPooling2D(pool_size=2, strides=2)) # Second convolutional layer with ReLU-activation and max-pooling. model.add(Conv2D(kernel_size=5, strides=1, filters=36, padding='same', activation='relu', name='layer_conv2')) model.add(MaxPooling2D(pool_size=2, strides=2)) # Flatten the 4-rank output of the convolutional layers # to 2-rank that can be input to a fully-connected / dense layer. model.add(Flatten()) # First fully-connected / dense layer with ReLU-activation. 
model.add(Dense(128, activation='relu')) # Last fully-connected / dense layer with softmax-activation # for use in classification. model.add(Dense(num_classes, activation='softmax')) # - # ### Model Compilation # # The Neural Network has now been defined and must be finalized by adding a loss-function, optimizer and performance metrics. This is called model "compilation" in Keras. # # We can either define the optimizer using a string, or if we want more control of its parameters then we need to instantiate an object. For example, we can set the learning-rate. # + from tensorflow.python.keras.optimizers import Adam optimizer = Adam(lr=1e-3) # - # For a classification-problem such as MNIST which has 10 possible classes, we need to use the loss-function called `categorical_crossentropy`. The performance metric we are interested in is the classification accuracy. model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy']) # ### Training # # Now that the model has been fully defined with loss-function and optimizer, we can train it. This function takes numpy-arrays and performs the given number of training epochs using the given batch-size. An epoch is one full use of the entire training-set. So for 10 epochs we would iterate randomly over the entire training-set 10 times. model.fit(x=data.x_train, y=data.y_train, epochs=1, batch_size=128) # ### Evaluation # # Now that the model has been trained we can test its performance on the test-set. This also uses numpy-arrays as input. result = model.evaluate(x=data.x_test, y=data.y_test) # We can print all the performance metrics for the test-set. for name, value in zip(model.metrics_names, result): print(name, value) # Or we can just print the classification accuracy. print("{0}: {1:.2%}".format(model.metrics_names[1], result[1])) # ### Prediction # # We can also predict the classification for new images. 
We will just use some images from the test-set but you could load your own images into numpy arrays and use those instead. images = data.x_test[0:9] # These are the true class-number for those images. This is only used when plotting the images. cls_true = data.y_test_cls[0:9] # Get the predicted classes as One-Hot encoded arrays. y_pred = model.predict(x=images) # Get the predicted classes as integers. cls_pred = np.argmax(y_pred, axis=1) plot_images(images=images, cls_true=cls_true, cls_pred=cls_pred) # ### Examples of Mis-Classified Images # # We can plot some examples of mis-classified images from the test-set. # # First we get the predicted classes for all the images in the test-set: y_pred = model.predict(x=data.x_test) # Then we convert the predicted class-numbers from One-Hot encoded arrays to integers. cls_pred = np.argmax(y_pred, axis=1) # Plot some of the mis-classified images. plot_example_errors(cls_pred) # ## Functional Model # # The Keras API can also be used to construct more complicated networks using the Functional Model. This may look a little confusing at first, because each call to the Keras API will create and return an instance that is itself callable. It is not clear whether it is a function or an object - but we can call it as if it is a function. This allows us to build computational graphs that are more complex than the Sequential Model allows. # + # Create an input layer which is similar to a feed_dict in TensorFlow. # Note that the input-shape must be a tuple containing the image-size. inputs = Input(shape=(img_size_flat,)) # Variable used for building the Neural Network. net = inputs # The input is an image as a flattened array with 784 elements. # But the convolutional layers expect images with shape (28, 28, 1) net = Reshape(img_shape_full)(net) # First convolutional layer with ReLU-activation and max-pooling. 
net = Conv2D(kernel_size=5, strides=1, filters=16, padding='same', activation='relu', name='layer_conv1')(net) net = MaxPooling2D(pool_size=2, strides=2)(net) # Second convolutional layer with ReLU-activation and max-pooling. net = Conv2D(kernel_size=5, strides=1, filters=36, padding='same', activation='relu', name='layer_conv2')(net) net = MaxPooling2D(pool_size=2, strides=2)(net) # Flatten the output of the conv-layer from 4-dim to 2-dim. net = Flatten()(net) # First fully-connected / dense layer with ReLU-activation. net = Dense(128, activation='relu')(net) # Last fully-connected / dense layer with softmax-activation # so it can be used for classification. net = Dense(num_classes, activation='softmax')(net) # Output of the Neural Network. outputs = net # - # ### Model Compilation # # We have now defined the architecture of the model with its input and output. We now have to create a Keras model and compile it with a loss-function and optimizer, so it is ready for training. from tensorflow.python.keras.models import Model # Create a new instance of the Keras Functional Model. We give it the inputs and outputs of the Convolutional Neural Network that we constructed above. model2 = Model(inputs=inputs, outputs=outputs) # Compile the Keras model using the RMSprop optimizer and with a loss-function for multiple categories. The only performance metric we are interested in is the classification accuracy, but you could use a list of metrics here. model2.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy']) # ### Training # # The model has now been defined and compiled so it can be trained using the same `fit()` function as used in the Sequential Model above. This also takes numpy-arrays as input. model2.fit(x=data.x_train, y=data.y_train, epochs=1, batch_size=128) # ### Evaluation # # Once the model has been trained we can evaluate its performance on the test-set. This is the same syntax as for the Sequential Model. 
result = model2.evaluate(x=data.x_test, y=data.y_test) # The result is a list of values, containing the loss-value and all the metrics we defined when we compiled the model. Note that 'accuracy' is now called 'acc' which is a small inconsistency. for name, value in zip(model2.metrics_names, result): print(name, value) # We can also print the classification accuracy as a percentage: print("{0}: {1:.2%}".format(model2.metrics_names[1], result[1])) # ### Examples of Mis-Classified Images # # We can plot some examples of mis-classified images from the test-set. # # First we get the predicted classes for all the images in the test-set: y_pred = model2.predict(x=data.x_test) # Then we convert the predicted class-numbers from One-Hot encoded arrays to integers. cls_pred = np.argmax(y_pred, axis=1) # Plot some of the mis-classified images. plot_example_errors(cls_pred) # ## Save & Load Model # # NOTE: You need to install `h5py` for this to work! # # Tutorial #04 was about saving and restoring the weights of a model using native TensorFlow code. It was an absolutely horrible API! Fortunately, Keras makes this very easy. # # This is the file-path where we want to save the Keras model. path_model = 'model.keras' # Saving a Keras model with the trained weights is then just a single function call, as it should be. model2.save(path_model) # Delete the model from memory so we are sure it is no longer used. del model2 # We need to import this Keras function for loading the model. from tensorflow.python.keras.models import load_model # Loading the model is then just a single function-call, as it should be. model3 = load_model(path_model) # We can then use the model again e.g. to make predictions. We get the first 9 images from the test-set and their true class-numbers. images = data.x_test[0:9] cls_true = data.y_test_cls[0:9] # We then use the restored model to predict the class-numbers for those images. y_pred = model3.predict(x=images) # Get the class-numbers as integers. 
cls_pred = np.argmax(y_pred, axis=1) # Plot the images with their true and predicted class-numbers. plot_images(images=images, cls_pred=cls_pred, cls_true=cls_true) # ## Visualization of Layer Weights and Outputs # ### Helper-function for plotting convolutional weights def plot_conv_weights(weights, input_channel=0): # Get the lowest and highest values for the weights. # This is used to correct the colour intensity across # the images so they can be compared with each other. w_min = np.min(weights) w_max = np.max(weights) # Number of filters used in the conv. layer. num_filters = weights.shape[3] # Number of grids to plot. # Rounded-up, square-root of the number of filters. num_grids = math.ceil(math.sqrt(num_filters)) # Create figure with a grid of sub-plots. fig, axes = plt.subplots(num_grids, num_grids) # Plot all the filter-weights. for i, ax in enumerate(axes.flat): # Only plot the valid filter-weights. if i<num_filters: # Get the weights for the i'th filter of the input channel. # See new_conv_layer() for details on the format # of this 4-dim tensor. img = weights[:, :, input_channel, i] # Plot image. ax.imshow(img, vmin=w_min, vmax=w_max, interpolation='nearest', cmap='seismic') # Remove ticks from the plot. ax.set_xticks([]) ax.set_yticks([]) # Ensure the plot is shown correctly with multiple plots # in a single Notebook cell. plt.show() # ### Get Layers # # Keras has a simple way of listing the layers in the model. model3.summary() # We count the indices to get the layers we want. # # The input-layer has index 0. layer_input = model3.layers[0] # The first convolutional layer has index 2. layer_conv1 = model3.layers[2] layer_conv1 # The second convolutional layer has index 4. layer_conv2 = model3.layers[4] # ### Convolutional Weights # # Now that we have the layers we can easily get their weights. weights_conv1 = layer_conv1.get_weights()[0] # This gives us a 4-rank tensor. weights_conv1.shape # Plot the weights using the helper-function from above. 
plot_conv_weights(weights=weights_conv1, input_channel=0) # We can also get the weights for the second convolutional layer and plot them. weights_conv2 = layer_conv2.get_weights()[0] plot_conv_weights(weights=weights_conv2, input_channel=0) # ### Helper-function for plotting the output of a convolutional layer def plot_conv_output(values): # Number of filters used in the conv. layer. num_filters = values.shape[3] # Number of grids to plot. # Rounded-up, square-root of the number of filters. num_grids = math.ceil(math.sqrt(num_filters)) # Create figure with a grid of sub-plots. fig, axes = plt.subplots(num_grids, num_grids) # Plot the output images of all the filters. for i, ax in enumerate(axes.flat): # Only plot the images for valid filters. if i<num_filters: # Get the output image of using the i'th filter. img = values[0, :, :, i] # Plot image. ax.imshow(img, interpolation='nearest', cmap='binary') # Remove ticks from the plot. ax.set_xticks([]) ax.set_yticks([]) # Ensure the plot is shown correctly with multiple plots # in a single Notebook cell. plt.show() # ### Input Image # # Helper-function for plotting a single image. def plot_image(image): plt.imshow(image.reshape(img_shape), interpolation='nearest', cmap='binary') plt.show() # Plot an image from the test-set which will be used as an example below. image1 = data.x_test[0] plot_image(image1) # ### Output of Convolutional Layer - Method 1 # # There are different ways of getting the output of a layer in a Keras model. This method uses a so-called K-function which turns a part of the Keras model into a function. from tensorflow.python.keras import backend as K output_conv1 = K.function(inputs=[layer_input.input], outputs=[layer_conv1.output]) # We can then call this function with the input image. Note that the image is wrapped in two lists because the function expects an array of that dimensionality. 
Likewise, the function returns an array with one more dimensionality than we want so we just take the first element. layer_output1 = output_conv1([[image1]])[0] layer_output1.shape # We can then plot the output of all 16 channels of the convolutional layer. plot_conv_output(values=layer_output1) # ### Output of Convolutional Layer - Method 2 # # Keras also has another method for getting the output of a layer inside the model. This creates another Functional Model using the same input as the original model, but the output is now taken from the convolutional layer that we are interested in. output_conv2 = Model(inputs=layer_input.input, outputs=layer_conv2.output) # This creates a new model-object where we can call the typical Keras functions. To get the output of the convolutional layer we call the `predict()` function with the input image. layer_output2 = output_conv2.predict(np.array([image1])) layer_output2.shape # We can then plot the images for all 36 channels. plot_conv_output(values=layer_output2) # ## Conclusion # # This tutorial showed how to use the so-called *Keras API* for easily building Convolutional Neural Networks in TensorFlow. Keras is by far the most complete and best designed API for TensorFlow. # # This tutorial also showed how to use Keras to save and load a model, as well as getting the weights and outputs of convolutional layers. # # It seems likely that Keras will be the standard API for TensorFlow in the future, for the simple reason that it is already very good and it is constantly being improved. So it is recommended that you use Keras. # ## Exercises # # These are a few suggestions for exercises that may help improve your skills with TensorFlow. It is important to get hands-on experience with TensorFlow in order to learn how to use it properly. # # You may want to back up this Notebook before making any changes. # # * Train for more epochs. Does it improve the classification accuracy?
# * Change the activation function to sigmoid for some of the layers. # * Can you find a simple way of changing the activation function for all the layers? # * Plot the output of the max-pooling layers instead of the conv-layers. # * Replace the 2x2 max-pooling layers with stride=2 in the convolutional layers. Is there a difference in classification accuracy? What if you optimize it again and again? The difference is random, so how would you measure if there really is a difference? What are the pros and cons of using max-pooling vs. stride in the conv-layer? # * Change the parameters for the layers, e.g. the kernel, depth, size, etc. What is the difference in time usage and classification accuracy? # * Add and remove some convolutional and fully-connected layers. # * What is the simplest network you can design that still performs well? # * Change the Functional Model so it has another convolutional layer that connects in parallel to the existing conv-layers before going into the dense layers. # * Change the Functional Model so it outputs the predicted class both as a One-Hot encoded array and as an integer, so we don't have to use `numpy.argmax()` afterwards. # * Remake the program yourself without looking too much at this source-code. # * Explain to a friend how the program works. # ## License (MIT) # # Copyright (c) 2016-2017 by [<NAME>](http://www.hvass-labs.org/) # # Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 
# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
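As a footnote to the `np.argmax(y_pred, axis=1)` conversion used earlier in this notebook: the row-wise argmax that turns prediction scores into class-numbers is easy to illustrate without TensorFlow. A minimal pure-Python sketch with made-up prediction scores (the values below are illustrative, not real model outputs):

```python
def argmax(row):
    # Index of the largest value in one row of prediction scores.
    return max(range(len(row)), key=lambda i: row[i])

# Hypothetical softmax-like outputs for three images over four classes.
y_pred = [
    [0.1, 0.7, 0.1, 0.1],
    [0.8, 0.1, 0.05, 0.05],
    [0.2, 0.2, 0.2, 0.4],
]

cls_pred = [argmax(row) for row in y_pred]
print(cls_pred)  # [1, 0, 3]
```

This is exactly what the NumPy call does, just done per row in plain Python.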
.ipynb_checkpoints/03C_Keras_API-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + active="" # <script> # function code_toggle() { # if (code_shown){ # $('div.input').hide('500'); # $('#toggleButton').val('Show Code') # } else { # $('div.input').show('500'); # $('#toggleButton').val('Hide Code') # } # code_shown = !code_shown # } # # $( document ).ready(function(){ # code_shown=false; # $('div.input').hide() # }); # </script> # <form action="javascript:code_toggle()"><input type="submit" id="toggleButton" value="Show Code"></form> # - # ### SINGLE TUNE RESONANCES # $m \cdot dQ = p \quad\forall\; m\in\mathbb{Z}^*, p\in\mathbb{Z} $ <br> # The resonance order is given by: $|m|$ # + from __future__ import division, print_function from mpl_toolkits.axes_grid1 import make_axes_locatable maxorder = 5 axrange = [-0.03, 1.03] cmap = cm.rainbow colors = cmap(linspace(0, 1, maxorder)) lws = linspace(3, 1, maxorder) def fraclabel(n,d): return ('$\\frac{' + '{:}'.format(n) + '}{' + '{:}'.format(d) + '}$') fracs, ticklabels, orders = [0], ['0'], [0] for d in range(1, maxorder + 1): for n in range(1, d): # numerator/denominator if n/d not in fracs: orders.append(d - 1) fracs.append(n/d) ticklabels.append(fraclabel(n, d)) fracs.append(1) ticklabels.append('1') orders.append(0) fig, ax = subplots(1, 1, figsize=[12, 10], frameon=False) for frac, order in zip(fracs, orders): ax.plot([frac, frac], axrange, '-', zorder=190, lw=lws[order], c=colors[order]) ax.plot(axrange, [frac, frac], '-', zorder=200, lw=lws[order], c=colors[order]) ax.set_xticks(fracs) ax.set_xticklabels(ticklabels, fontsize=24) ax.set_xlabel('$dQ_x$', fontsize=24) ax.set_xlim(axrange) ax.set_yticks(fracs) ax.set_yticklabels(ticklabels, fontsize=24, rotation=90, va='center') ax.set_ylabel('$dQ_y$', fontsize=24) ax.set_ylim(axrange) for key in ax.spines: 
ax.spines[key].set_visible(0) ax.grid(0) ax.axis('equal') divider = make_axes_locatable(ax) cax = divider.append_axes("right", size="5%", pad=0.05) sm = cm.ScalarMappable(cmap=cmap, norm=plt.Normalize(vmin=0, vmax=maxorder-1)) sm._A = [] # fake up the array of the scalar mappable. cbar = fig.colorbar(sm, cax=cax, orientation='vertical', ticks=range(maxorder)) cbar.ax.set_yticklabels(range(1, maxorder + 1)) # horizontal colorbar fig.tight_layout() fig.show() fig.savefig('resonancediagramm_single.pdf') # - # ### FAREY SEQUENCE # The Farey sequence is the ascending sequence of irreducible fractions between 0 and 1. (https://en.wikipedia.org/wiki/Farey_sequence)<br> # Using the floor function <br> # $\lfloor x \rfloor := \max\{ k\in\mathbb{Z} \;|\; k \leq x \} $ <br> # the following rule can be used to efficiently calculate the k-th element of the n-th order Farey sequence from the preceding i-th and j-th elements since the first element is always 0/1 and the second 1/n: # # $F_n^i = \frac{a}{b}$ <br> # $F_n^j = \frac{c}{d}$ <br> # $F_n^k = \frac{\lfloor(n+b)/d\rfloor\cdot c - a}{\lfloor(n+b)/d\rfloor\cdot d - b}$ <br> # # Here are the first eight Farey sequences generated from this rule: # + def farey_sequence(n): a, b, c, d = 0, 1, 1, n # F_n's 1st and 2nd element Fn = [[a, b]] while c <= n: Fn.append([c, d]) flr = int((n + b) / d) a, b, c, d = c, d, (flr*c-a), (flr*d-b) return Fn for n in range(1, 9): Fn = farey_sequence(n) for frac in Fn: print('{:}/{:} '.format(frac[0], frac[1]), end='') print() # + #### SINGLE TUNE RESONANCES ...
are simple to find with farey # m*dQ = p with m, p Integer order = |m| # dQ = 1/j -> m = j -> order = j maxorder = 5 cmap = cm.rainbow colors = cmap(linspace(0, 1, maxorder)) lws = linspace(3, 1, maxorder) fracs = farey_sequence(maxorder) ticklabels = [fraclabel(*frac) for frac in fracs] orders = [frac[1] - 1 for frac in fracs] fracs = [frac[0]/frac[1] for frac in fracs] fig, ax = subplots(1, 1, figsize=[12, 10], frameon=False) # RESONANCE LINES ON SINGLE TUNE RESONANCES for frac, order in zip(fracs, orders): ax.plot([frac, frac], axrange, '-', lw=lws[order], c=colors[order]) # Qx ax.plot(axrange, [frac, frac], '-', lw=lws[order], c=colors[order]) # Qy ax.set_xticks(fracs) ax.set_xticklabels(ticklabels, fontsize=24) ax.set_xlabel('$dQ_x$', fontsize=24) ax.set_xlim(axrange) ax.set_yticks(fracs) ax.set_yticklabels(ticklabels, fontsize=24, rotation=90, va='center') ax.set_ylabel('$dQ_y$', fontsize=24) ax.set_ylim(axrange) for key in ax.spines: ax.spines[key].set_visible(0) ax.grid(0) axrange = [-0.03, 1.03] ax.axis('equal') divider = make_axes_locatable(ax) cax = divider.append_axes("right", size="5%", pad=0.05) sm = cm.ScalarMappable(cmap=cmap, norm=plt.Normalize(vmin=0, vmax=maxorder-1)) sm._A = [] # fake up the array of the scalar mappable. 
cbar = fig.colorbar(sm, cax=cax, orientation='vertical', ticks=range(maxorder)) cbar.ax.set_yticklabels(range(1, maxorder + 1)) # horizontal colorbar fig.tight_layout() fig.show() fig.savefig('resonancediagramm_single_farey.pdf') # + slideshow={"slide_type": "slide"} #### COUPLING RESONANCES # m*dQx + n*dQy = p with m, n, p Integer order: |m| + |n| # dQx = a/b, dQy = c/d -> m = j, n = k -> order = j + k # m*x + n*y = p # y = (p - m*x)/n # m=-1, n=1 -> y = p + x , p=0, 11 (2nd order: |m| + |n| = 2) # m=-1, n=-1 -> y = -p - x , p=0, 1 (2nd order: |m| + |n| = 2) # # + fig, ax = subplots(1, 1, figsize=[12, 10], frameon=False) ax.grid(0) axrange = [-0.03, 1.03] cmap = cm.rainbow maxorder = 3 colors = cmap(linspace(0, 1, 2*(maxorder + 1))) lws = linspace(3, 1, 2*(maxorder + 1)) fracs = farey_sequence(maxorder) ticklabels = [fraclabel(*frac) for frac in fracs] orders = [frac[1] for frac in fracs] ordersmesh= meshgrid(orders, orders) fracs = [frac[0]/frac[1] for frac in fracs] fracmesh = meshgrid(fracs, fracs) # RESONANCE LINES ON SINGLE TUNE RESONANCES for frac, order in zip(fracs, orders): ax.plot([frac, frac], axrange, '-', lw=lws[order], c=colors[order]) # Qx ax.plot(axrange, [frac, frac], '-', lw=lws[order], c=colors[order]) # Qy fracs2 = farey_sequence(2*maxorder)[1:-1] orders = [frac[1] for frac in fracs2] ordersmesh= meshgrid(orders, orders) fracs2 = [frac[0]/frac[1] for frac in fracs2] fracmesh = meshgrid(fracs2, fracs2) # RESONANCE STARS ON COUPLING RESONANCES for frac1, frac2, order1, order2 in zip(fracmesh[0].flatten(), fracmesh[1].flatten(), ordersmesh[0].flatten(), ordersmesh[1].flatten()): order = order1 + order2 if order <= 2*maxorder: ax.plot(frac1, frac2, '*', ms=10, mew=5, c=colors[order]) ax.set_xticks(fracs) ax.set_xticklabels(ticklabels, fontsize=24) ax.set_xlabel('$dQ_x$', fontsize=24) ax.set_xlim(axrange) ax.set_yticks(fracs) ax.set_yticklabels(ticklabels, fontsize=24, rotation=90, va='center') ax.set_ylabel('$dQ_y$', fontsize=24) 
ax.set_ylim(axrange) for key in ax.spines: ax.spines[key].set_visible(0) ax.axis('equal') divider = make_axes_locatable(ax) cax = divider.append_axes("right", size="5%", pad=0.05) sm = cm.ScalarMappable(cmap=cmap, norm=plt.Normalize(vmin=0, vmax=maxorder-1)) sm._A = [] # fake up the array of the scalar mappable. cbar = fig.colorbar(sm, cax=cax, orientation='vertical', ticks=range(maxorder)) cbar.ax.set_yticklabels(range(1, maxorder + 1)) # horizontal colorbar fig.tight_layout() fig.show() # fig.savefig('resonancediagramm.pdf') # + x = range(10) y = range(10) xy = meshgrid(x, y) print(shape(xy)) print(shape(xy[0])) print(shape(xy[1])) plot(xy[0], xy[1], '*b') show() # + # define gcd function def gcd(x, y): """This function implements the Euclidian algorithm to find greatest common divisor (gcd) of two numbers""" while(y): x, y = y, x % y return x # define lcm function def lcm(x, y): """This function takes two integers and returns the L.C.M.""" lcm = (x*y)//gcd(x,y) return lcm # - gcd(3, 9) # + active="" # <script> # $(document).ready(function(){ # $('div.prompt').hide(); # $('div.back-to-top').hide(); # $('nav#menubar').hide(); # }); # </script>
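The next-term recurrence behind `farey_sequence()` above can be cross-checked against a brute-force construction (all irreducible fractions in [0, 1] with denominator up to the order, sorted). A small standard-library sketch, restating the recurrence with tuples:

```python
from fractions import Fraction

def farey_sequence(n):
    # Next-term recurrence: from consecutive terms a/b and c/d, the
    # following term is (k*c - a)/(k*d - b) with k = floor((n + b)/d).
    a, b, c, d = 0, 1, 1, n
    terms = [(a, b)]
    while c <= n:
        terms.append((c, d))
        k = (n + b) // d
        a, b, c, d = c, d, k*c - a, k*d - b
    return terms

def farey_bruteforce(n):
    # All irreducible fractions in [0, 1] with denominator <= n, ascending.
    # Fraction reduces p/q automatically, so the set removes duplicates.
    fracs = {Fraction(p, q) for q in range(1, n + 1) for p in range(q + 1)}
    return sorted(fracs)

for n in range(1, 9):
    rec = [Fraction(p, q) for p, q in farey_sequence(n)]
    assert rec == farey_bruteforce(n)
print("recurrence matches brute force up to order 8")
```

The brute force is O(n^2 log n) while the recurrence emits each term in constant time, which is why the recurrence is the better building block for the resonance diagrams above.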
resonance_diagramm.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Building and Visualizing word frequencies # # # In this lab, we will focus on the `build_freqs()` helper function and visualizing a dataset fed into it. In our goal of tweet sentiment analysis, this function will build a dictionary where we can lookup how many times a word appears in the lists of positive or negative tweets. This will be very helpful when extracting the features of the dataset in the week's programming assignment. Let's see how this function is implemented under the hood in this notebook. # ## Setup # # Let's import the required libraries for this lab: # + import nltk # Python library for NLP from nltk.corpus import twitter_samples # sample Twitter dataset from NLTK import matplotlib.pyplot as plt # visualization library import numpy as np # library for scientific computing and matrix operations nltk.download('twitter_samples') # - # #### Import some helper functions that we provided in the utils.py file: # * `process_tweet()`: Cleans the text, tokenizes it into separate words, removes stopwords, and converts words to stems. # * `build_freqs()`: This counts how often a word in the 'corpus' (the entire set of tweets) was associated with a positive label `1` or a negative label `0`. It then builds the `freqs` dictionary, where each key is a `(word,label)` tuple, and the value is the count of its frequency within the corpus of tweets. # + # download the stopwords for the process_tweet function nltk.download('stopwords') # import our convenience functions from utils import process_tweet, build_freqs # - # ## Load the NLTK sample dataset # # As in the previous lab, we will be using the [Twitter dataset from NLTK](http://www.nltk.org/howto/twitter.html#Using-a-Tweet-Corpus). 
# + # select the lists of positive and negative tweets all_positive_tweets = twitter_samples.strings('positive_tweets.json') all_negative_tweets = twitter_samples.strings('negative_tweets.json') # concatenate the lists, 1st part is the positive tweets followed by the negative tweets = all_positive_tweets + all_negative_tweets # let's see how many tweets we have print("Number of tweets: ", len(tweets)) # - # Next, we will build an array of labels that matches the sentiments of our tweets. This data type works pretty much like a regular list but is optimized for computations and manipulation. The `labels` array will be composed of 10000 elements. The first 5000 will be filled with `1` labels denoting positive sentiments, and the next 5000 will be `0` labels denoting the opposite. We can do this easily with a series of operations provided by the `numpy` library: # # * `np.ones()` - create an array of 1's # * `np.zeros()` - create an array of 0's # * `np.append()` - concatenate arrays # make a numpy array representing labels of the tweets labels = np.append(np.ones((len(all_positive_tweets))), np.zeros((len(all_negative_tweets)))) # ## Dictionaries # # In Python, a dictionary is a mutable and indexed collection. It stores items as key-value pairs and uses [hash tables](https://en.wikipedia.org/wiki/Hash_table) underneath to allow practically constant time lookups. In NLP, dictionaries are essential because they enable fast retrieval of items or containment checks even with thousands of entries in the collection. # ### Definition # # A dictionary in Python is declared using curly brackets. Look at the next example: dictionary = {'key1': 1, 'key2': 2} # The line above defines a dictionary with two entries. Keys and values can be almost any type ([with a few restrictions on keys](https://docs.python.org/3/tutorial/datastructures.html#dictionaries)), and in this case, we used strings. We can also use floats, integers, tuples, etc.
# # ### Adding or editing entries # # New entries can be inserted into dictionaries using square brackets. If the dictionary already contains the specified key, its value is overwritten. # + # Add a new entry dictionary['key3'] = -5 # Overwrite the value of key1 dictionary['key1'] = 0 print(dictionary) # - # ### Accessing values and looking up keys # # Performing dictionary lookups and retrieval are common tasks in NLP. There are two ways to do this: # # * Using square bracket notation: This form is allowed if the lookup key is in the dictionary. It produces an error otherwise. # * Using the [get()](https://docs.python.org/3/library/stdtypes.html#dict.get) method: This allows us to set a default value if the dictionary key does not exist. # # Let us see these in action: # Square bracket lookup when the key exists print(dictionary['key2']) # However, if the key is missing, the operation produces an error # The output of this line is intended to produce a KeyError print(dictionary['key8']) # When using a square bracket lookup, it is common to use an if-else block to check for containment first (with the keyword `in`) before getting the item. On the other hand, you can use the `.get()` method if you want to set a default value when the key is not found. Let's compare these in the cells below: # + # This prints a value if 'key1' in dictionary: print("item found: ", dictionary['key1']) else: print('key1 is not defined') # Same as what you get with get print("item found: ", dictionary.get('key1', -1)) # + # This prints a message because the key is not found if 'key7' in dictionary: print(dictionary['key7']) else: print('key does not exist!') # This prints -1 because the key is not found and we set the default to -1 print(dictionary.get('key7', -1)) # - # ## Word frequency dictionary # Now that we know the building blocks, let's finally take a look at the **build_freqs()** function in **utils.py**.
This is the function that creates the dictionary containing the word counts from each corpus. # ```python # def build_freqs(tweets, ys): # """Build frequencies. # Input: # tweets: a list of tweets # ys: an m x 1 array with the sentiment label of each tweet # (either 0 or 1) # Output: # freqs: a dictionary mapping each (word, sentiment) pair to its # frequency # """ # # Convert np array to list since zip needs an iterable. # # The squeeze is necessary or the list ends up with one element. # # Also note that this is just a NOP if ys is already a list. # yslist = np.squeeze(ys).tolist() # # # Start with an empty dictionary and populate it by looping over all tweets # # and over all processed words in each tweet. # freqs = {} # for y, tweet in zip(yslist, tweets): # for word in process_tweet(tweet): # pair = (word, y) # if pair in freqs: # freqs[pair] += 1 # else: # freqs[pair] = 1 # return freqs # ``` # You can also do the for loop like this to make it a bit more compact: # # ```python # for y, tweet in zip(yslist, tweets): # for word in process_tweet(tweet): # pair = (word, y) # freqs[pair] = freqs.get(pair, 0) + 1 # ``` # As shown above, each key is a 2-element tuple containing a `(word, y)` pair. The `word` is an element in a processed tweet while `y` is an integer representing the corpus: `1` for the positive tweets and `0` for the negative tweets. The value associated with this key is the number of times that word appears in the specified corpus. For example: # # ``` # # "followfriday" appears 25 times in the positive tweets # ('followfriday', 1.0): 25 # # # "shame" appears 19 times in the negative tweets # ('shame', 0.0): 19 # ``` # Now, it is time to use the dictionary returned by the `build_freqs()` function. 
First, let us feed our `tweets` and `labels` lists, then print a basic report: # + # create frequency dictionary freqs = build_freqs(tweets, labels) # check data type print(f'type(freqs) = {type(freqs)}') # check length of the dictionary print(f'len(freqs) = {len(freqs)}') # - # Now print the frequency of each word depending on its class. print(freqs) # Unfortunately, this does not help much in understanding the data. It would be better to visualize this output to gain better insights. # ## Table of word counts # We will select a set of words that we would like to visualize. It is better to store this temporary information in a table that is very easy to use later. # + # select some words to appear in the report. we will assume that each word is unique (i.e. no duplicates) keys = ['happi', 'merri', 'nice', 'good', 'bad', 'sad', 'mad', 'best', 'pretti', '❤', ':)', ':(', '😒', '😬', '😄', '😍', '♛', 'song', 'idea', 'power', 'play', 'magnific'] # list representing our table of word counts. # each element consists of a sublist with this pattern: [<word>, <positive_count>, <negative_count>] data = [] # loop through our selected words for word in keys: # initialize positive and negative counts pos = 0 neg = 0 # retrieve number of positive counts if (word, 1) in freqs: pos = freqs[(word, 1)] # retrieve number of negative counts if (word, 0) in freqs: neg = freqs[(word, 0)] # append the word counts to the table data.append([word, pos, neg]) data # - # We can then use a scatter plot to inspect this table visually. Instead of plotting the raw counts, we will plot it in the logarithmic scale to take into account the wide discrepancies between the raw counts (e.g. `:)` has 3691 counts in the positive while only 2 in the negative). The red line marks the boundary between positive and negative areas. Words close to the red line can be classified as neutral. # + fig, ax = plt.subplots(figsize = (8, 8)) # convert positive raw counts to logarithmic scale.
we add 1 to avoid log(0) x = np.log([x[1] + 1 for x in data]) # do the same for the negative counts y = np.log([x[2] + 1 for x in data]) # Plot a dot for each pair of words ax.scatter(x, y) # assign axis labels plt.xlabel("Log Positive count") plt.ylabel("Log Negative count") # Add the word as the label at the same position as you added the points just before for i in range(0, len(data)): ax.annotate(data[i][0], (x[i], y[i]), fontsize=12) ax.plot([0, 9], [0, 9], color = 'red') # Plot the red line that divides the 2 areas. plt.show() # - # This chart is straightforward to interpret. It shows that emoticons `:)` and `:(` are very important for sentiment analysis. Thus, we should not let preprocessing steps get rid of these symbols! # # Furthermore, what is the meaning of the crown symbol? It seems to be very negative! # ### That's all for this lab! We've seen how to build a word frequency dictionary and this will come in handy when extracting the features of a list of tweets. Next up, we will be reviewing Logistic Regression. Keep it up!
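The `(word, label)` counting pattern at the heart of `build_freqs()` can be exercised on a toy corpus. A minimal sketch that swaps the real `process_tweet()` for a naive whitespace tokenizer (an assumption for illustration only, not the real preprocessing):

```python
def build_freqs_toy(tweets, ys):
    # Count how often each (word, label) pair occurs across the corpus.
    freqs = {}
    for y, tweet in zip(ys, tweets):
        for word in tweet.lower().split():  # stand-in for process_tweet()
            pair = (word, y)
            freqs[pair] = freqs.get(pair, 0) + 1
    return freqs

tweets = ["happy happy day", "sad day", "happy again"]
labels = [1.0, 0.0, 1.0]
freqs = build_freqs_toy(tweets, labels)

print(freqs[("happy", 1.0)])          # 3: twice in tweet 1, once in tweet 3
print(freqs.get(("happy", 0.0), 0))   # 0: "happy" never occurs with label 0
```

Note how `.get(pair, 0) + 1` replaces the if-else containment check from the dictionary section above, exactly as in the compact version of the loop.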
C1 Natural Language Processing with Classification/W1/Labs/2 Visualizing word frequencies.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: tensorflow # language: python # name: kernel_for_tf # --- import tensorflow as tf import numpy as np import matplotlib.pyplot as plt # ### Create a dataset # Set a seed np.random.seed(100) # Create three variables, x1, x2 and x3 x1 = 10*np.random.rand(1000) x2 = -3*np.random.rand(1000) x3 = 2*np.random.rand(1000) # make y a linear combination of these variables, plus some noise y = 11 + 1.5*x1 + 6*x2 -3*x3 + np.random.normal(loc = 0, scale = 3, size = 1000) y = y.reshape(1000,1) # ### Plot the data plt.plot(x1,y, 'ro') plt.show() plt.plot(x2,y,'ro') plt.show() plt.plot(x3,y,'ro') plt.show() # ### Solve via least squares # #### Arrange data in a matrix # Create a 1000 x 4 matrix, with a column for ones as the intercept # Call the matrix X model_mat = np.matrix([[1]*1000,x1,x2,x3]).transpose() X = model_mat # #### Solve for \hat{\beta} = (X'X)^{-1}X'Y beta_hat = (X.getT()*X).getI()*X.getT()*y print(beta_hat) # #### What is the SSE?
residuals = beta_hat[0] + beta_hat[1]*x1 + beta_hat[2]*x2 + beta_hat[3]*x3 sse = np.sum(np.multiply(residuals,residuals)) print(sse) # ### Solve via TensorFlow # + # placeholders are for data input # variables are parameters to train b = tf.Variable([.3], dtype=tf.float32) #bias w1 = tf.Variable([.2], dtype=tf.float32) w2 = tf.Variable([.1], dtype=tf.float32) w3 = tf.Variable([[0]], dtype=tf.float32) # using capital "X" to indicate variables, the data is stored in lowercase "x" X1 = tf.placeholder(tf.float32) X2 = tf.placeholder(tf.float32) X3 = tf.placeholder(tf.float32) # - # #### Create the graph model = b + w1*X1 + w2*X2 + w3*X3 init = tf.global_variables_initializer() sess = tf.Session() sess.run(init) # This will evaluate the model at the current values of w1, w2 and w3 at the 1000 different levels of x1, x2 and x3 # This isn't training the model at all, but executing the computation graph. print(sess.run(model, {X1: x1, X2: x2, X3: x3})) # #### Create the loss function - because we're attempting to mimic least squares regression we'll use squared error as our loss # create the variable Y using the uppercase Y = tf.placeholder(tf.float32) squared_error = tf.square(model - Y) loss = tf.reduce_sum(squared_error) sess.run(init) # this was key #sess.run(model, {X1: x1, X2: x2, X3: x3, Y: y}) print(sess.run(loss, {X1: x1, X2: x2, X3: x3, Y: y})) # #### Use an algorithm to find the correct weights # + optimizer = tf.train.GradientDescentOptimizer(0.00000001) train = optimizer.minimize(loss) sess.run(init) # reset values to incorrect defaults. for i in range(100): sess.run(train, {X1: x1, X2: x2, X3: x3, Y: y}) #print(sess.run([loss], {X1: x1, X2: x2, X3: x3, Y: y})) curr_b, curr_w1, curr_w2, curr_w3, curr_loss = sess.run([b,w1,w2,w3,loss], {X1: x1, X2: x2, X3: x3, Y: y}) print(curr_b, curr_w1, curr_w2, curr_w3, curr_loss) # - print(sse) # The best-fit line has an error of roughly 100,500, but after 100 iterations gradient descent has an error of 64 million!
print([curr_b, curr_w1, curr_w2, curr_w3]) print(beta_hat) # Instead of 100 iterations, try 10,000 and see where we are # + optimizer = tf.train.GradientDescentOptimizer(0.00000001) train = optimizer.minimize(loss) sess.run(init) # reset values to incorrect defaults. for i in range(10000): sess.run(train, {X1: x1, X2: x2, X3: x3, Y: y}) if i % 1000 == 0: print(i) curr_loss = sess.run([loss], {X1: x1, X2: x2, X3: x3, Y: y}) print(curr_loss) # - # After that the error is still really high! Go for 100,000. # + optimizer = tf.train.GradientDescentOptimizer(0.00000001) train = optimizer.minimize(loss) sess.run(init) # reset values to incorrect defaults. for i in range(100000): sess.run(train, {X1: x1, X2: x2, X3: x3, Y: y}) if i % 10000 == 0: print(i) curr_loss = sess.run([loss], {X1: x1, X2: x2, X3: x3, Y: y}) print(curr_loss) # - # What makes this challenging is that the learning rate needs to be this low, otherwise the loss function blows up. # What else can we try: # - center and scale the input parameters # - minibatches # - adam optimization, velocity # ### Center and Scale the inputs # + x1_scaled = (x1 - np.mean(x1))/np.std(x1) x2_scaled = (x2 - np.mean(x2))/np.std(x2) x3_scaled = (x3 - np.mean(x3))/np.std(x3) #expect to see 0,1 in each of these print([np.mean(x1_scaled),np.std(x1_scaled)]) print([np.mean(x2_scaled),np.std(x2_scaled)]) print([np.mean(x3_scaled),np.std(x3_scaled)]) # - model_mat_scaled = np.matrix([[1]*1000,x1_scaled,x2_scaled,x3_scaled]).transpose() X_scaled = model_mat_scaled beta_hat_scaled = (X_scaled.getT()*X_scaled).getI()*X_scaled.getT()*y print(beta_hat_scaled) residuals_scaled = beta_hat_scaled[0] + beta_hat_scaled[1]*x1 + beta_hat_scaled[2]*x2 + beta_hat_scaled[3]*x3 sse_scaled = np.sum(np.multiply(residuals_scaled,residuals_scaled)) print(sse_scaled) # #### Try again with TensorFlow # + b_cs = tf.Variable([.3], dtype=tf.float32) #bias w1_cs = tf.Variable([.2], dtype=tf.float32) w2_cs = tf.Variable([.1], dtype=tf.float32) w3_cs =
tf.Variable([[0]], dtype=tf.float32) # using capital "X" to indicate variables, the data is stored in lowercase "x" X1_cs = tf.placeholder(tf.float32) X2_cs = tf.placeholder(tf.float32) X3_cs = tf.placeholder(tf.float32) model_cs = b_cs + w1_cs*X1_cs + w2_cs*X2_cs + w3_cs*X3_cs Y = tf.placeholder(tf.float32) squared_error = tf.square(model_cs - Y) loss = tf.reduce_sum(squared_error) optimizer = tf.train.GradientDescentOptimizer(0.00000001) train = optimizer.minimize(loss) init = tf.global_variables_initializer() sess = tf.Session() sess.run(init) # reset values to incorrect defaults. for i in range(10000): sess.run(train, {X1_cs: x1_scaled, X2_cs: x2_scaled, X3_cs: x3_scaled, Y: y}) curr_b, curr_w1, curr_w2, curr_w3, curr_loss = sess.run([b,w1,w2,w3,loss], {X1_cs: x1_scaled, X2_cs: x2_scaled, X3_cs: x3_scaled, Y: y}) print(curr_b, curr_w1, curr_w2, curr_w3, curr_loss) # - print(curr_loss/sse_scaled) print(61014650.0/sse) # a 6-fold decrease! # But that could also be a function of the weights, or the learning rate. Could I learn __faster__ with the centered # and scaled versions? # + optimizer = tf.train.GradientDescentOptimizer(0.0000001) train = optimizer.minimize(loss) init = tf.global_variables_initializer() sess = tf.Session() sess.run(init) # reset values to incorrect defaults. for i in range(10000): sess.run(train, {X1_cs: x1_scaled, X2_cs: x2_scaled, X3_cs: x3_scaled, Y: y}) curr_b, curr_w1, curr_w2, curr_w3, curr_loss = sess.run([b,w1,w2,w3,loss], {X1_cs: x1_scaled, X2_cs: x2_scaled, X3_cs: x3_scaled, Y: y}) print(curr_b, curr_w1, curr_w2, curr_w3, curr_loss) # - print(curr_loss/sse_scaled) print(61014650.0/sse) # So a larger learning rate did not affect this... # Try a different optimizer, Adam. Based on this [link](https://machinelearningmastery.com/adam-optimization-algorithm-for-deep-learning/) it may not be a good fit for this problem. Use the defaults.
# + optimizer = tf.train.AdamOptimizer() train = optimizer.minimize(loss) init = tf.global_variables_initializer() sess = tf.Session() sess.run(init) # reset values to incorrect defaults. for i in range(10000): sess.run(train, {X1_cs: x1_scaled, X2_cs: x2_scaled, X3_cs: x3_scaled, Y: y}) curr_b, curr_w1, curr_w2, curr_w3, curr_loss = sess.run([b,w1,w2,w3,loss], {X1_cs: x1_scaled, X2_cs: x2_scaled, X3_cs: x3_scaled, Y: y}) print(curr_b, curr_w1, curr_w2, curr_w3, curr_loss) # - # Compare just to non Adam optimizer: # adam = 61014564.0 grad_desc = 61014756.0 print(adam/grad_desc) # Hardly a difference! # ### What else can I try to make this algorithm learn faster? # #### Try mini-batch learning # + # I'm revisiting this...what are the variables? b_cs = tf.Variable([.3], dtype=tf.float32) #bias w1_cs = tf.Variable([.2], dtype=tf.float32) w2_cs = tf.Variable([.1], dtype=tf.float32) w3_cs = tf.Variable([[0]], dtype=tf.float32) # using capital "X" to indicate variables, the data is stored in lowercase "x" X1_cs = tf.placeholder(tf.float32) X2_cs = tf.placeholder(tf.float32) X3_cs = tf.placeholder(tf.float32) model_mini_batch = b_cs + w1_cs*X1_cs + w2_cs*X2_cs + w3_cs*X3_cs # we're calling "Y" a placeholder so that means it corresponds to the labeled outcomes Y = tf.placeholder(tf.float32) squared_error = tf.square(model_mini_batch - Y) loss = tf.reduce_sum(squared_error) optimizer = tf.train.GradientDescentOptimizer(0.00000001) train = optimizer.minimize(loss) batch_input = [x1_scaled,x2_scaled,x3_scaled,y] batch_output = tf.train.batch(batch_input,batch_size = 128,allow_smaller_final_batch = True) init = tf.global_variables_initializer() sess = tf.Session() sess.run(init) # reset values to incorrect defaults. 
tensor_dict2 = {"X1_cs": x1_scaled, "X2_cs": x2_scaled, "X3_cs": x3_scaled, "y": y} tensor_dict3 = {tf.placeholder(tf.float32): x1_scaled, tf.placeholder(tf.float32): x2_scaled, tf.placeholder(tf.float32): x3_scaled, tf.placeholder(tf.float32): y} #tensor_dict = {x1_scaled, x2_scaled, x3_scaled, y} #tensor_list = [x1_scaled, x2_scaled, x3_scaled, y] #I think I need to set up the batches before hand... #x1_batch, x2_batch, x3_batch, y_batch = tf.train.batch(tensor_dict,batch_size=128) #features = [x1_scaled, x2_scaled, x3_scaled] #labels = y #batch_dict = tf.train.batch(tensor_dict3,batch_size=128,allow_smaller_final_batch=True) #batch_list = tf.train.batch([x1_scaled, x2_scaled, x3_scaled, y], batch_size = 128) #features_batch, labels_batch = tf.train.batch([features,labels], batch_size = 128, allow_smaller_final_batch = True) #feature_label_batch_dict = {"features": feature_batch, # "labels": labels_batch} #x1_batch,x2_batch,x3_batch,y_batch = tf.train.batch([x1_scaled,x2_scaled,x3_scaled,y], # batch_size = 128, # allow_smaller_final_batch = True) #batch_input = [x1_scaled,x2_scaled,x3_scaled,y] #batch_output = tf.train.batch(batch_input,batch_size = 128,allow_smaller_final_batch = True) #x1_batch, x2_batch, x3_batch, y_batch = tf.train.batch(tensor_list,batch_size=128) for i in range(10000): # sess.run(train, {X1_cs: x1_batch, X2_cs: x2_batch, X3_cs: x3_batch, Y: y_batch}) print(sess.run([batch_input, batch_output])) # sess.run(train, {X1_cs: x1_batch, X2_cs: x2_batch, X3_cs: x3_batch, Y: y_batch}) # sess.run(batch_dict) # sess.run(train, feature_label_batch_dict) # sess.run(train, batch_list) #curr_b, curr_w1, curr_w2, curr_w3, curr_loss = sess.run([b,w1,w2,w3,loss], {X1_cs: x1_scaled, X2_cs: x2_scaled, X3_cs: x3_scaled, Y: y}) #print(curr_b, curr_w1, curr_w2, curr_w3, curr_loss) # - # So I'm getting caught up in some details in using the High Level API vs. the Low Level API. 
I've been trying to execute low-level API operations, such as looping over the data a fixed number of times and feeding inputs by hand, that really belong in an # estimator. # + #### Maybe my computation graph isn't what I think it is? Let's try TensorBoard # -
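The mini-batch loop that the queue-based `tf.train.batch` API was supposed to drive (it needs queue runners to actually produce batches) can be sketched in plain NumPy. This is an illustrative stand-in with synthetic data, not the notebook's variables:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the centered/scaled features: y = 2*x1 - 3*x2 + 0.5*x3 + 1
X = rng.normal(size=(512, 3))
y = X @ np.array([2.0, -3.0, 0.5]) + 1.0

w, b = np.zeros(3), 0.0
lr, batch_size = 0.1, 128

for epoch in range(100):
    order = rng.permutation(len(X))            # reshuffle every epoch
    for start in range(0, len(X), batch_size):
        idx = order[start:start + batch_size]
        err = X[idx] @ w + b - y[idx]
        w -= lr * X[idx].T @ err / len(idx)    # gradient of the batch mean squared error
        b -= lr * err.mean()

print(np.round(w, 2), round(b, 2))
```

With noise-free data this recovers weights close to `[2, -3, 0.5]` and a bias close to `1`, without any queue machinery.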
drafts/.ipynb_checkpoints/simple regressions-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda root] # language: python # name: conda-root-py # --- # + # %matplotlib inline import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from epytox import guts from epytox import exposure sns.set(rc={"figure.figsize": (8, 6)}, font_scale=1.5, style='white', context='notebook') # + #Load exposure data and create interpolating functions exposure_data = pd.read_excel('../data/carbaryl.xlsx', sheetname='Exposure', index_col=0) exposure_funcs = exposure.multi_interp_treatment_generator(exposure_data) times = np.linspace(exposure_data.index[0], exposure_data.index[-1], 200) ax = exposure_data.plot(marker='s', ls='') for treat, f in exposure_funcs.items(): ax.plot(times, f(times), 'k--') exposure_data.head() # - #Load survival data survival_data = pd.read_excel('../data/carbaryl.xlsx', sheetname='Survival', index_col=0) survival_data.plot(marker='o') survival_data.head() guts_sic_sd = guts.SIC_SD() fig, ax = guts.plot_fit(guts_sic_sd, survival_data, exposure_funcs, subplots=False) datasets = [(exposure_funcs[treat], survival_data[treat]) for treat in survival_data.columns] # Fit model by minimizing negative log likelihood function guts_sic_sd = guts.SIC_SD() print(guts_sic_sd.params) # %time res = guts_sic_sd.fit(datasets) guts_sic_sd.params = res.params fig, ax = guts.plot_fit(guts_sic_sd, survival_data, exposure_funcs, subplots=False) #Sample posterior with emcee # %time mcmcres = guts_sic_sd.mcmc(datasets, nwalkers=10, nsteps=1000) #Plot mcmc results fig, ax, c = guts.plot_mcmc(mcmcres, guts_sic_sd.mle_result)
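The per-treatment interpolating functions produced by `exposure.multi_interp_treatment_generator` can be sketched with plain `np.interp`. This is an assumption about the behaviour (linear interpolation of measured exposure over time), not `epytox`'s actual implementation, and the measurements below are hypothetical:

```python
import numpy as np

def make_exposure_func(times, concentrations):
    """Return a function that linearly interpolates a measured exposure time series."""
    t = np.asarray(times, dtype=float)
    c = np.asarray(concentrations, dtype=float)
    return lambda tq: np.interp(tq, t, c)

# Hypothetical measurements: concentration at 0 h, 24 h and 48 h.
f = make_exposure_func([0, 24, 48], [0.0, 10.0, 5.0])
print(f(12), f(36))  # segment midpoints: 5.0 7.5
```

Such a function can then be evaluated at the dense `times` grid used for plotting, exactly as the dashed curves above do.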
notebooks/carbaryl_survival.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import cirpy as cpy import numpy as np names = pd.read_excel('../../big datasets/cpdb.chemname.xls') info = pd.read_excel('../../big datasets/cpdb.ncintp.xls') df = pd.merge(info, names, on='chemcode') df.head() df = df[df.opinion != '0'] df = df[df.opinion != '-'] df.head() df.shape df.sort_values('name', inplace=True) df.drop_duplicates(subset='name', keep='first', inplace=True) df.shape # smiles = [] # for index, row in df.iterrows(): # smiles.append(cpy.resolve(row['name'], 'smiles')) cpy.resolve('1-AZOXYPROPANE', 'smiles') small = df.iloc[:5, :] small.head() smiles = [] for index, row in df.iterrows(): smile = cpy.resolve(row['name'], 'smiles') smiles.append(smile) smiles smiles2= [str(i) for i in smiles] df['SMILES'] = smiles2 df.head() from rdkit import rdBase from rdkit import Chem from rdkit.Chem.rdmolfiles import SmilesWriter from rdkit.Chem.rdmolfiles import SDWriter from rdkit.Chem.Fingerprints import FingerprintMols def fingerprint(input_df): '''From the input dataframe, makes a list of rdkit Mol objects and makes a list of rdkit fingerprints generated from those Mol objects. 
Inserts both lists as new columns and returns the expanded dataframe.''' mol_list = [] fp_list = [] for index, row in input_df.iterrows(): mol = Chem.rdmolfiles.MolFromSmiles(row['SMILES']) if not mol: mol_list.append('None') fp_list.append('None') continue mol_list.append(mol) #get mols from SMILES and add mols to list fp = FingerprintMols.FingerprintMol(mol) fp_list.append(fp) #get fingerprints from mols and add fingerprints to list input_df['Mol'] = mol_list input_df['Fingerprint'] = fp_list return input_df # testmol = Chem.rdmolfiles.MolFromSmiles('Cc1cc(C)c(N)cc1C') fingerprint(df) neg = df[df.Mol != 'None'] neg.shape neg.to_csv('../../big datasets/drug-neg-data.csv') mols = [mol for mol in neg['Mol']] writer = SDWriter('../../big datasets/neg.sdf') for mol in mols: writer.write(mol) writer.close() tox = pd.read_csv('../../Downloads/PaDEL-Descriptor/toxic/neg-drug.csv') tox.head() tox.drop_duplicates(inplace=True, keep='first') from natsort import natsorted, index_natsorted, order_by_index tox = tox.reindex(index=order_by_index(tox.index, index_natsorted(tox.Name))) neg = pd.read_csv('../../big datasets/drug-neg-data.csv') neg.head() neg1=neg[['SMILES', 'name', 'cas', 'chemcode']] tox = tox.drop(['Name'], axis=1) drugneg = neg1.join(tox, how='left') drugneg.head() zincpos.head(1) zincpos.shape columnss = list(zincpos) columnss.remove('Unnamed: 0') drugneg1 = drugneg[columnss].copy() drugneg2 = drugneg[columnss].copy() listed = [] for index, row in drugneg2.iterrows(): listed.append('0') drugneg2.insert(loc=0, column='Unnamed: 0', value=listed) drugneg2.head() drugneg.to_csv('../../big datasets/drugneg.csv') zincpos = pd.read_csv('../../big datasets/use_me_zinc_positive.csv') zincpos.shape zincpos.head() drugall = zincpos.append(drugneg2) drugall.shape zincpos.shape drugall.to_csv('../../big datasets/drug_data_to_use.csv') drugall = pd.read_csv('../../big datasets/drug_data_to_use.csv') drug_neg = drugall.iloc[1017:,:] # + classtox = [] for index, row in
drug_neg.iterrows(): classtox.append('Tox') drug_neg.insert(loc=1, column='Class', value=classtox) # - drug_neg.head() # + drug_pos = drugall.iloc[:1017,:] drug_pos.head() # + classdrug = [] for index, row in drug_pos.iterrows(): classdrug.append('Drug') drug_pos.insert(loc=1, column='Class', value=classdrug) # - drug_pos.head() # + drug_pos_features = drug_pos.iloc[:,2:] drug_pos_features.head() drug_pos_class = drug_pos['Class'] drug_neg_features = drug_neg.iloc[:,2:] drug_neg_class = drug_neg['Class'] # - from sklearn.model_selection import train_test_split from imblearn.over_sampling import SMOTE def encode(series): return pd.get_dummies(series.astype(str)) x_pos = drug_pos_features y_pos = encode(drug_pos_class) x_neg = drug_neg_features y_neg = encode(drug_neg_class) x_train_pos, x_test_pos, y_train_pos, y_test_pos = train_test_split(x_pos, y_pos, test_size=0.2, random_state=12) x_train_neg, x_test_neg, y_train_neg, y_test_neg = train_test_split(x_neg, y_neg, test_size=0.2, random_state=12) # + x_test_neg.drop(x_test_neg.select_dtypes(['object']), inplace=True, axis=1) x_test_neg = x_test_neg.astype('float64').fillna(0) # astype/fillna return new frames, so assign the result x_train_neg.drop(x_train_neg.select_dtypes(['object']), inplace=True, axis=1) x_train_neg = x_train_neg.astype('float64').fillna(0) #dfs = (x_test_neg, x_train_neg) #for df in dfs: # df.replace([np.inf, -np.inf], np.nan) # df.fillna(0) # - # + x_test_neg = pd.DataFrame(x_test_neg.replace([np.inf, -np.inf], np.nan)).fillna(0) x_train_neg = pd.DataFrame(x_train_neg.replace([np.inf, -np.inf], np.nan)).fillna(0) # - from sklearn.preprocessing import StandardScaler xscaler = StandardScaler().fit(x_train_neg) x_train_neg = xscaler.transform(x_train_neg) x_test_neg = xscaler.transform(x_test_neg) # transform the test set with the scaler fit on the training set np.all(np.isfinite(x_train_neg)) print(np.isnan(x_test_neg).sum()) # transform returns an ndarray, so use np.isnan rather than .isnull() sm = SMOTE(random_state=2)
x_train_res, y_train_res = sm.fit_resample(x_train_neg, y_train_neg)
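The scalers and SMOTE both require every input value to be finite, and pandas methods such as `replace` and `fillna` return new objects unless the result is reassigned (or `inplace=True` is used). The intended inf/NaN cleanup can be sketched directly in NumPy:

```python
import numpy as np

def clean_matrix(a):
    """Replace ±inf and NaN with 0 so scalers and resamplers get finite input."""
    a = np.asarray(a, dtype=float).copy()
    a[~np.isfinite(a)] = 0.0
    return a

x = np.array([[1.0, np.inf], [np.nan, -np.inf]])
print(clean_matrix(x))  # [[1. 0.] [0. 0.]]
```

Applying such a function once, before fitting `StandardScaler`, avoids having to chase down which of several chained cleaning cells actually took effect.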
notebooks/Toxicity data .ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Seaborn Workshop # Seaborn is a Python data visualization library based on matplotlib. # It provides a high-level interface for drawing attractive and informative statistical graphics. # # ___ # # Installing Seaborn (conda installation recommend) # # https://seaborn.pydata.org/installing.html # # ___ # + import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns # seaborn library # - # For this session, you will need the data set named ```heart.csv```, which can be downloaded from our [GitHub repository](https://github.com/IC-Computational-Biology-Society/Pandas_Matplotlib_session.git) dedicated to today's workshop. Make sure you save it in the same directory as this Jupyter notebook. # # ___ # ## Getting Started df = pd.read_csv('heart.csv') display(df.head()) # **Dataset description** # # - ```age```: The patient's age # - ```gender```: 0 = female and 1 = male # - ```cp```: The chest pain experienced (Value 1: typical angina, Value 2: atypical angina, Value 3: non-anginal pain, Value 4: asymptomatic) # - ```trestbps```: The patient's resting blood pressure (mm Hg on admission to the hospital) # - ```chol```: The patient's cholesterol measurement in mg/dl # - ```fbs```: The patient's fasting blood sugar (> 120 mg/dl, 1 = true; 0 = false) # - ```restecg```: Resting electrocardiographic measurement (0 = normal, 1 = having ST-T wave abnormality, 2 = showing probable or definite left ventricular hypertrophy by Estes' criteria) # - ```thalach```: The patient's maximum heart rate achieved # - ```exang```: Exercise induced angina (0 = no, 1 = yes) # - ```oldpeak```: ST depression induced by exercise relative to rest ('ST' relates to positions on the ECG plot. 
See more here) # - ```slope```: the slope of the peak exercise ST segment (Value 1: upsloping, Value 2: flat, Value 3: downsloping) # - ```ca```: The number of major vessels (0-3) # - ```thal```: A blood disorder called thalassemia (3 = normal; 6 = fixed defect; 7 = reversible defect) # - ```target```: Heart disease (0 = no, 1 = yes) # get the number of patients (number of rows) and columns of the dataset print ("number of patients :", len(df)) print ("number of columns :", len(df.columns)) # check if any values are missing df.isnull().sum() # ## Task 1 # Plot a histogram of the patients' age distribution using the ```sns.histplot``` function. # Set the parameter ```kde``` to ```True``` to include the kernel density estimate. # # Don't forget to include the plot's title. # ___ # + plt.subplots(1, 1, figsize=(6, 4)) sns.histplot(data=df, x = 'age', color = "mediumseagreen", kde=True, ec = 'w') plt.title("Histogram of age", fontsize=12) plt.show() # - # Create a new histogram, again using the ```sns.histplot``` function, but showing in different colours the patients with disease (target = 1) and patients without disease (target = 0) # ____ # + plt.subplots(1, 1, figsize=(6, 4)) sns.histplot(data=df, x = 'age', hue="target", # set hue to 'target' to indicate that this variable will be shown in different colour element="step", # easier for visualisation of overlapping bars palette= ["steelblue", "indianred"]) #set a specific colour to each histogram plt.title("Histogram of age", fontsize=12) plt.show() # - # ## Task 2 # Similar to histograms are kernel density estimate (KDE) plots, which can be used for visualising the distribution of observations in a dataset. KDE represents the data using a continuous probability density curve.
# # ___ # # Use the ```sns.kdeplot``` function to visualise the distribution of resting blood pressure ```trestbps``` by the patients' ```gender``` (0 = female, 1 = male) # # Rename the legend to 'female' and 'male' # ___ # + plt.subplots(1, 1, figsize=(6, 4)) sns.kdeplot(data=df, x="trestbps", hue="gender", shade=True) plt.legend(['female', 'male']) plt.title ("Distribution of resting blood pressure based on gender") plt.show() # - # Create a new figure that contains two subplots, one showing the distribution of resting blood pressure ```trestbps``` by gender and the other showing the distribution of cholesterol ```chol``` by gender. # # Include a title for each of the subplots and rename the legends to 'female' and 'male'. # ___ # + figure, axes = plt.subplots(1, 2, figsize=(12, 4)) sns.kdeplot(data=df, x="trestbps", hue="gender", shade=True, ax=axes[0]) axes[0].set_title("Distribution of resting blood pressure based on gender") axes[0].legend(['female', 'male']) sns.kdeplot(data=df, x="chol", hue="gender", shade=True, ax=axes[1]) axes[1].set_title("Distribution of cholesterol based on gender") axes[1].legend(['female', 'male']) plt.show() # - # ## Task 3 # Use the ```sns.countplot``` function to visualise the counts of patients with and without the disease based on their gender. # # Rename the x tick labels to 'female' and 'male' and the legend values to 'disease' and 'no disease'. # ___ # + figure, axes = plt.subplots(1, 1, figsize=(6, 4)) axes = sns.countplot(data= df, x="gender", hue="target", palette= ["steelblue", "indianred"], alpha=0.75) axes.set_xticklabels(['female', 'male']) axes.legend(['no disease', 'disease']) plt.show() # - # ## Task 4 # Correlation indicates how the features are related to each other or to the target variable. # The correlation may be positive (an increase in the value of the feature increases the value of the target variable) or negative (an increase in the value of the feature decreases the value of the target variable).
# # Plot a correlation matrix using ```sns.heatmap```, showing the correlation of the features to each other and to the target value. # # **Hint** # # The correlation between the variables in the data can be calculated using ``` df.corr()```, which needs to be added as the data parameter of the heatmap function. plt.figure(figsize = (12, 10)) ax = sns.heatmap(df.corr(), linewidth=0.01, annot = True, fmt='.2g', cmap="YlGnBu") plt.title("Correlation matrix", fontsize=14) plt.show()
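`df.corr()` computes pairwise Pearson correlations over the numeric columns, which is exactly what the heatmap visualises. On a toy frame the values are easy to verify by hand:

```python
import numpy as np
import pandas as pd

# Toy frame: 'double' moves exactly with 'x', 'neg' exactly against it.
toy = pd.DataFrame({'x': [1, 2, 3, 4],
                    'double': [2, 4, 6, 8],
                    'neg': [4, 3, 2, 1]})
corr = toy.corr()
print(corr.round(2))
# x/double correlate at 1.0, x/neg at -1.0, and the diagonal is all 1.0
```

The same `corr` frame can be passed straight to `sns.heatmap`, as in the solution above.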
Seaborn_tutorial_supervisor_version.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Recommendation System for Netflix Prize Dataset using SVD # Import libraries import numpy as np import pandas as pd import matplotlib.pyplot as plt # # To load the 'combined_data_1' dataset after uploading it to Jupyter notebook # + # Reading dataset file dataset = pd.read_csv('/Users/samyakjain/Downloads/combined_data_1.txt',header = None, names = ['Cust_Id', 'Rating'], usecols = [0,1]) # Convert Ratings column to a float dataset['Rating'] = dataset['Rating'].astype(float) # - #To print the datatype of columns dataset.dtypes # + #To inspect the shape of the dataset dataset.shape # - #To print the head of dataset dataset.head() # + #To find the distribution of different ratings in the dataset p = dataset.groupby('Rating')['Rating'].agg(['count']) p # + # get movie count by counting nan values movie_count = dataset.isnull().sum()[1] movie_count # + # get customer count cust_count = dataset['Cust_Id'].nunique()-movie_count cust_count # + # get rating count rating_count = dataset['Cust_Id'].count() - movie_count rating_count # - # ## To plot the distribution of the ratings as a bar plot # + ax = p.plot(kind = 'barh', legend = False, figsize = (15,10)) plt.title(f'Total pool: {movie_count} Movies, {cust_count} customers, {rating_count} ratings given', fontsize=20) plt.axis('off') for i in range(1,6): ax.text(p.iloc[i-1][0]/4, i-1, 'Rating {}: {:.0f}%'.format(i, p.iloc[i-1][0]*100 / p.sum()[0]), color = 'white', weight = 'bold') # - # # To create a numpy array containing movie ids corresponding to the rows in the 'ratings' dataset # + # To count all the 'nan' values in the Ratings column in the 'ratings' dataset df_nan = pd.DataFrame(pd.isnull(dataset.Rating), ) df_nan.head() # + # To store the index of all the rows containing 'nan' values df_nan = 
df_nan[df_nan['Rating'] == True] df_nan.shape # + # To reset the index of the dataframe df_nan = df_nan.reset_index() df_nan.head() # + #To create a numpy array containing movie ids according the 'ratings' dataset movie_np = [] movie_id = 1 for i,j in zip(df_nan['index'][1:],df_nan['index'][:-1]): # numpy approach temp = np.full((1,i-j-1), movie_id) movie_np = np.append(movie_np, temp) movie_id += 1 # Account for last record and corresponding length # numpy approach last_record = np.full((1,len(dataset) - df_nan.iloc[-1, 0] - 1),movie_id) movie_np = np.append(movie_np, last_record) print(f'Movie numpy: {movie_np}') print(f'Length: {len(movie_np)}') # + #x =zip(df_nan['index'][1:],df_nan['index'][:-1]) # + #temp = np.full((1,547), 1) # + #print(temp) # + #tuple(x) # + #To append the above created array to the datset after removing the 'nan' rows dataset = dataset[pd.notnull(dataset['Rating'])] dataset['Movie_Id'] = movie_np.astype(int) dataset['Cust_Id'] =dataset['Cust_Id'].astype(int) print('-Dataset examples-') dataset.head() # - dataset.shape # # Data Cleaning # + f = ['count','mean'] # + #To create a list of all the movies rated less often(only include top 30% rated movies) dataset_movie_summary = dataset.groupby('Movie_Id')['Rating'].agg(f) dataset_movie_summary.index = dataset_movie_summary.index.map(int) movie_benchmark = round(dataset_movie_summary['count'].quantile(0.7),0) drop_movie_list = dataset_movie_summary[dataset_movie_summary['count'] < movie_benchmark].index print('Movie minimum times of review: {}'.format(movie_benchmark)) # - # + #To create a list of all the inactive users(users who rate less often) dataset_cust_summary = dataset.groupby('Cust_Id')['Rating'].agg(f) dataset_cust_summary.index = dataset_cust_summary.index.map(int) cust_benchmark = round(dataset_cust_summary['count'].quantile(0.7),0) drop_cust_list = dataset_cust_summary[dataset_cust_summary['count'] < cust_benchmark].index print(f'Customer minimum times of review: 
{cust_benchmark}') # - print(f'Original Shape: {dataset.shape}') dataset = dataset[~dataset['Movie_Id'].isin(drop_movie_list)] dataset = dataset[~dataset['Cust_Id'].isin(drop_cust_list)] print('After Trim Shape: {}'.format(dataset.shape)) # + print('-Data Examples-') dataset.head() # - # # Create ratings matrix for 'ratings' matrix with Rows = userId, Columns = movieId # + df_p = pd.pivot_table(dataset,values='Rating',index='Cust_Id',columns='Movie_Id') print(df_p.shape) # - df_p.head() # ### To load the movie_titles dataset # + df_title = pd.read_csv('/Users/samyakjain/Downloads/movie_titles.csv', encoding = "ISO-8859-1", header = None, names = ['Movie_Id', 'Year', 'Name']) df_title.set_index('Movie_Id', inplace = True) print (df_title.head(10)) # - # # To install the scikit-surprise library for implementing SVD # ### Run the following command in the Anaconda Prompt to install surprise package conda install -c conda-forge scikit-surprise # + # Import required libraries import math import re import matplotlib.pyplot as plt from surprise import Reader, Dataset, SVD from surprise.model_selection import cross_validate # + # Load Reader library reader = Reader() # get just top 100K rows for faster run time data = Dataset.load_from_df(dataset[['Cust_Id', 'Movie_Id', 'Rating']][:100000], reader) # Use the SVD algorithm. 
svd = SVD() # Compute the RMSE of the SVD algorithm cross_validate(svd, data, measures=['RMSE', 'MAE'], cv=3, verbose=True) # - dataset.head() # ## To find all the movies rated as 5 stars by user with userId = 712664 dataset_712664 = dataset[(dataset['Cust_Id'] == 712664) & (dataset['Rating'] == 5)] dataset_712664 = dataset_712664.set_index('Movie_Id') dataset_712664 = dataset_712664.join(df_title)['Name'] dataset_712664.head(10) # # Train an SVD to predict ratings for user with userId = 712664 # + # Create a copy of the movies dataset user_712664 = df_title.copy() user_712664 = user_712664.reset_index() #To remove all the movies rated less often user_712664 = user_712664[~user_712664['Movie_Id'].isin(drop_movie_list)] # getting full dataset data = Dataset.load_from_df(dataset[['Cust_Id', 'Movie_Id', 'Rating']], reader) #create a training set for svd trainset = data.build_full_trainset() svd.fit(trainset) #Predict the ratings for user_712664 user_712664['Estimate_Score'] = user_712664['Movie_Id'].apply(lambda x: svd.predict(712664, x).est) #Drop extra columns from the user_712664 data frame user_712664 = user_712664.drop('Movie_Id', axis = 1) # Sort predicted ratings for user_712664 in descending order user_712664 = user_712664.sort_values('Estimate_Score', ascending=False) #Print top 10 recommendations print(user_712664.head(10)) # -
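The movie-id expansion earlier in the notebook walks pairs of NaN header positions and fills `np.full` blocks. Assuming each NaN row marks the start of the next movie's block of ratings, the same mapping falls out of a cumulative sum — a compact alternative sketch, not the notebook's code:

```python
import numpy as np

def movie_ids(ratings):
    """Each NaN row is a movie header; label every rating row with its movie id."""
    r = np.asarray(ratings, dtype=float)
    ids = np.cumsum(np.isnan(r))       # header k opens movie k
    return ids[~np.isnan(r)].astype(int)

# Toy column: movie 1 has two ratings, movie 2 has three.
r = [np.nan, 3, 5, np.nan, 4, 2, 1]
print(movie_ids(r))  # [1 1 2 2 2]
```

The result has one id per non-NaN row, so it can be assigned directly to a `Movie_Id` column after the NaN rows are dropped.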
Recommender_System.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # GGBunch # # *GGBunch* allows showing a collection of plots in one figure. Each plot in the collection can have an arbitrary location and size. There is no automatic layout inside the bunch. # # + import numpy as np from lets_plot import * LetsPlot.setup_html() # + cov=[[1, 0], [0, 1]] x, y = np.random.multivariate_normal(mean=[0,0], cov=cov, size=400).T data = dict( x = x, y = y ) # - # ### View this data as a scatter plot and as a histogram # + p = ggplot(data) + ggsize(600,200) scatter = p + geom_point(aes('x', 'y'), color='black', alpha=.4) scatter # - histogram = p + geom_histogram(aes('x', y = '..count..'), fill='dark_magenta') histogram # ### Combine both plots in one figure # Set the X scale limits manually: if they were computed automatically, # the scale used by each plot would be slightly different # and the stacked plots wouldn't be aligned. scale_x = scale_x_continuous(limits=[-3.5, 3.5]) bunch = GGBunch() bunch.add_plot(histogram + scale_x, 0, 0) bunch.add_plot(scatter + scale_x, 0, 200) bunch # ### Adjust visuals of the bunch figure upper_theme = theme(axis_title_x='blank', axis_ticks_x='blank', axis_line='blank') lower_theme = theme(axis_text_x='blank', axis_ticks_x='blank', axis_line='blank') bunch1 = GGBunch() bunch1.add_plot(histogram + upper_theme + scale_x, 0, 0) bunch1.add_plot(scatter + lower_theme + scale_x, 0, 200) bunch1 # ### Adjust plot sizes # # The *add_plot()* method has two more (optional) parameters: *width* and *height*. # # These values will override the plot size defined earlier via the *ggsize()* function. bunch2 = GGBunch() bunch2.add_plot(histogram + upper_theme + scale_x, 0, 0, 600, 100) bunch2.add_plot(scatter + lower_theme + scale_x, 0, 100, 600, 300) bunch2
source/examples/cookbook/ggbunch.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline # # # Neural Style Transfer with ``pystiche`` # # This example showcases how a basic Neural Style Transfer (NST), i.e. image # optimization, could be performed with ``pystiche``. # # <div class="alert alert-info"><h4>Note</h4><p>This is an *example how to implement an NST* and **not** a # *tutorial on how NST works*. As such, it will not explain why a specific choice was # made or how a component works. If you have never worked with NST before, we # **strongly** suggest you to read the `gist` first.</p></div> # # # ## Setup # # We start this example by importing everything we need and setting the device we will # be working on. # # # + import pystiche from pystiche import demo, enc, loss, ops, optim from pystiche.image import show_image from pystiche.misc import get_device, get_input_image print(f"I'm working with pystiche=={pystiche.__version__}") device = get_device() print(f"I'm working with {device}") # - # ## Multi-layer Encoder # The ``content_loss`` and the ``style_loss`` operate on the encodings of an image # rather than on the image itself. These encodings are generated by a pretrained # encoder. Since we will be using encodings from multiple layers we load a # multi-layer encoder. In this example we use the ``vgg19_multi_layer_encoder`` that is # based on the ``VGG19`` architecture introduced by Simonyan and Zisserman # :cite:`SZ2014` . # # multi_layer_encoder = enc.vgg19_multi_layer_encoder() print(multi_layer_encoder) # ## Perceptual Loss # # The core components of every NST are the ``content_loss`` and the ``style_loss``. # Combined they make up the perceptual loss, i.e. the optimization criterion. 
In this # example we use the ``feature_reconstruction_loss`` introduced by Mahendran and # Vedaldi :cite:`MV2015` as ``content_loss``. # # We first extract the ``content_encoder`` that generates encodings from the # ``content_layer``. Together with the ``content_weight`` we initialize a # :class:`~pystiche.ops.comparison.FeatureReconstructionOperator` serving as content # loss. # # content_layer = "relu4_2" content_encoder = multi_layer_encoder.extract_encoder(content_layer) content_weight = 1e0 content_loss = ops.FeatureReconstructionOperator( content_encoder, score_weight=content_weight ) print(content_loss) # We use the ``gram_loss`` introduced by <NAME>, and Bethge :cite:`GEB2016` as # ``style_loss``. Unlike before, we use multiple ``style_layers``. The individual # :class:`~pystiche.ops.comparison.GramOperator` s can be conveniently bundled in a # :class:`~pystiche.ops.container.MultiLayerEncodingOperator`. # # # + style_layers = ("relu1_1", "relu2_1", "relu3_1", "relu4_1", "relu5_1") style_weight = 1e3 def get_style_op(encoder, layer_weight): return ops.GramOperator(encoder, score_weight=layer_weight) style_loss = ops.MultiLayerEncodingOperator( multi_layer_encoder, style_layers, get_style_op, score_weight=style_weight, ) print(style_loss) # - # We combine the ``content_loss`` and ``style_loss`` into a joined # :class:`~pystiche.loss.perceptual.PerceptualLoss`, which will serve as ``criterion`` # for the optimization. # # criterion = loss.PerceptualLoss(content_loss, style_loss).to(device) print(criterion) # ## Images # # We now load and show the images that will be used in the NST. The images will be # resized to ``size=500`` pixels. # # images = demo.images() images.download() size = 500 # <div class="alert alert-info"><h4>Note</h4><p>``images.download()`` downloads **all** demo images upfront. If you only want to # download the images for this example remove this line. 
They will be downloaded at # runtime instead.</p></div> # # <div class="alert alert-info"><h4>Note</h4><p>If you want to work with other images you can load them with # :func:`~pystiche.image.io.read_image`: # # .. code-block:: python # # from pystiche.image import read_image # # my_image = read_image("my_image.jpg", size=size, device=device)</p></div> # # content_image = images["bird1"].read(size=size, device=device) show_image(content_image, title="Content image") style_image = images["paint"].read(size=size, device=device) show_image(style_image, title="Style image") # ## Neural Style Transfer # # After loading the images they need to be set as targets for the optimization # ``criterion``. # # criterion.set_content_image(content_image) criterion.set_style_image(style_image) # As a last preliminary step we create the input image. We start from the # ``content_image`` since this way the NST converges quickly. # # <div class="alert alert-info"><h4>Note</h4><p>If you want to start from a white noise image, use # ``starting_point = "random"`` instead: # # .. code-block:: python # # starting_point = "random" # input_image = get_input_image(starting_point, content_image=content_image)</p></div> # # starting_point = "content" input_image = get_input_image(starting_point, content_image=content_image) show_image(input_image, title="Input image") # Finally we run the NST with the :func:`~pystiche.optim.optim.image_optimization` for # ``num_steps=500`` steps. # # In every step the perceptual loss is calculated # with the ``criterion`` and propagated backward to the ``input_image``. If # ``get_optimizer`` is not specified, as is the case here, the # :func:`~pystiche.optim.optim.default_image_optimizer`, i.e. # :class:`~torch.optim.lbfgs.LBFGS` is used. # # <div class="alert alert-info"><h4>Note</h4><p>By default ``pystiche`` logs the time during an optimization. 
In order to reduce # the clutter, we use a minimal :func:`~pystiche.demo.logger` here.</p></div> # # output_image = optim.image_optimization( input_image, criterion, num_steps=500, logger=demo.logger() ) # After the NST is complete we show the result. # # show_image(output_image, title="Output image") # ## Conclusion # # If you started with the basic NST example without ``pystiche`` this example hopefully # convinced you that ``pystiche`` is a helpful tool. But this was just the beginning: # to unleash its full potential head over to the more advanced examples. # #
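The Gram-based style loss compares channel-wise correlations of the encodings rather than the encodings themselves. The core computation can be sketched in NumPy (an illustrative sketch; pystiche's actual operators work on torch tensors):

```python
import numpy as np

def gram_matrix(feats):
    """(C, H, W) feature map -> (C, C) Gram matrix of channel correlations."""
    c, h, w = feats.shape
    f = feats.reshape(c, h * w)
    return f @ f.T / (c * h * w)   # normalize by the number of elements

fm = np.random.default_rng(0).normal(size=(8, 16, 16))
g = gram_matrix(fm)
print(g.shape)  # (8, 8)
```

Because the Gram matrix discards spatial positions, matching it across ``style_layers`` transfers texture statistics rather than the style image's layout.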
examples_jupyter/beginner/example_nst_with_pystiche.ipynb
# --- # jupyter: # jupytext: # formats: ipynb,py:percent # text_representation: # extension: .py # format_name: percent # format_version: '1.3' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %% [markdown] # # Theoretical Foundations of Buffer Stock Saving # <p style="text-align: center;"><small><small>Generator: REMARK-make/REMARKs/BufferStockTheory.sh</small></small></p> # %% [markdown] # [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/econ-ark/REMARK/master?filepath=REMARKs%2FBufferStockTheory%2FBufferStockTheory.ipynb) # # This notebook uses the [Econ-ARK/HARK](https://github.com/econ-ark/hark) toolkit to describe the main results and reproduce the figures in the paper [Theoretical Foundations of Buffer Stock Saving](http://econ.jhu.edu/people/ccarroll/papers/BufferStockTheory). # # If you are not familiar with the HARK toolkit, you may wish to browse the ["Gentle Introduction to HARK"](https://mybinder.org/v2/gh/econ-ark/DemARK/master?filepath=Gentle-Intro-To-HARK.ipynb) before continuing (since you are viewing this document, you presumably know a bit about [Jupyter Notebooks](https://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/)). # # For instructions on how to install the [Econ-ARK/HARK](https://github.com/econ-ark/hark) toolkit on your computer, please refer to the [QUICK START GUIDE](https://github.com/econ-ark/HARK/blob/master/README.md). # # The main HARK tool used here is $\texttt{ConsIndShockModel.py}$, in which agents have CRRA utility and face idiosyncratic shocks to permanent and transitory income. For an introduction to this module, see the [ConsIndShockModel.ipynb](https://econ-ark.org/notebooks) notebook at the [Econ-ARK](https://econ-ark.org) website. # %% code_folding=[0] # This cell does some setup and imports generic tools used to produce the figures Generator=False # Is this notebook the master or is it generated? 
# Import related generic python packages import numpy as np from time import time # time.clock was removed in Python 3.8 mystr = lambda number : "{:.4f}".format(number) # This is a jupytext paired notebook that autogenerates BufferStockTheory.py # which can be executed from a terminal command line via "ipython BufferStockTheory.py" # But a terminal does not permit inline figures, so we need to test jupyter vs terminal # Google "how can I check if code is executed in the ipython notebook" from IPython import get_ipython # In case it was run from python instead of ipython def in_ipynb(): try: if str(type(get_ipython())) == "<class 'ipykernel.zmqshell.ZMQInteractiveShell'>": return True else: return False except NameError: return False # Determine whether to make the figures inline (for spyder or jupyter) # vs whatever is the automatic setting that will apply if run from the terminal if in_ipynb(): # # %matplotlib inline generates a syntax error when run from the shell # so do this instead get_ipython().run_line_magic('matplotlib', 'inline') else: get_ipython().run_line_magic('matplotlib', 'auto') # Import the plot-figure library matplotlib import matplotlib.pyplot as plt # In order to use LaTeX to manage all text layout in our figures, we import rc settings from matplotlib.
from matplotlib import rc plt.rc('font', family='serif') # LaTeX is huge and takes forever to install on mybinder # so if it is not installed then do not use it from distutils.spawn import find_executable iflatexExists=False if find_executable('latex'): iflatexExists=True plt.rc('text', usetex=iflatexExists) # The warnings package allows us to ignore some harmless but alarming warning messages import warnings warnings.filterwarnings("ignore") # The tools for navigating the filesystem import sys import os sys.path.insert(0, os.path.abspath('../../lib')) # REMARKs directory is two down from root from HARK.utilities import plotFuncsDer, plotFuncs from copy import copy, deepcopy # Define (and create, if necessary) the figures directory "Figures" if Generator: my_file_path = os.path.dirname(os.path.abspath("BufferStockTheory.ipynb")) # Find pathname to this file: Figures_HARK_dir = os.path.join(my_file_path,"Figures/") # LaTeX document assumes figures will be here # Figures_HARK_dir = os.path.join(my_file_path,"/tmp/Figures/") # Uncomment to make figures outside of git path if not os.path.exists(Figures_HARK_dir): os.makedirs(Figures_HARK_dir)
$\newcommand{\IncUnemp}{\mu}\IncUnemp$ | Income when Unemployed | $\texttt{IncUnemp}$ | 0. | # | $\newcommand{\PermShkStd}{\sigma_\psi}\PermShkStd$ | Std Dev of Log Permanent Shock| $\texttt{PermShkStd}$ | 0.1 | # | $\newcommand{\TranShkStd}{\sigma_\theta}\TranShkStd$ | Std Dev of Log Transitory Shock| $\texttt{TranShkStd}$ | 0.1 | # # For a microeconomic consumer with 'Market Resources' (net worth plus current income) $M_{t}$, end-of-period assets $A_{t}$ will be the amount remaining after consumption of $C_{t}$. <!-- Next period's 'Balances' $B_{t+1}$ reflect this period's $A_{t}$ augmented by return factor $R$:--> # \begin{eqnarray} # A_{t} &=&M_{t}-C_{t} \label{eq:DBCparts} \\ # %B_{t+1} & = & A_{t} R \notag \\ # \end{eqnarray} # # The consumer's permanent noncapital income $P$ grows by a predictable factor $\PermGroFac$ and is subject to an unpredictable lognormally distributed multiplicative shock $\psi_{t+1}$ with $\mathbb{E}_{t}[\psi_{t+1}]=1$, # \begin{eqnarray} # P_{t+1} & = & P_{t} \PermGroFac \psi_{t+1} # \end{eqnarray} # # and actual income is permanent income multiplied by a lognormal multiplicative transitory shock $\theta_{t+1}$ with $\mathbb{E}_{t}[\theta_{t+1}]=1$, so that next period's market resources are # \begin{eqnarray} # %M_{t+1} &=& B_{t+1} +P_{t+1}\theta_{t+1}, \notag # M_{t+1} &=& A_{t}R +P_{t+1}\theta_{t+1}. \notag # \end{eqnarray} # # When the consumer has a CRRA utility function $u(c)=\frac{c^{1-\rho}}{1-\rho}$, the paper shows that the problem can be written in terms of ratios of money variables to permanent income, e.g. $m_{t} \equiv M_{t}/P_{t}$, and the Bellman form of [the problem reduces to](http://econ.jhu.edu/people/ccarroll/papers/BufferStockTheory/#The-Related-Problem): # # \begin{eqnarray*} # v_t(m_t) &=& \max_{c_t}~~ u(c_t) + \beta~\mathbb{E}_{t} [(\Gamma\psi_{t+1})^{1-\rho} v_{t+1}(m_{t+1}) ] \\ # & s.t.
& \\ # a_t &=& m_t - c_t \\ # m_{t+1} &=& R/(\Gamma \psi_{t+1}) a_t + \theta_{t+1} \\ # \end{eqnarray*} # # %% code_folding=[0] # Define a parameter dictionary with baseline parameter values # Set the baseline parameter values PermGroFac = 1.03 Rfree = 1.04 DiscFac = 0.96 CRRA = 2.00 UnempPrb = 0.005 IncUnemp = 0.0 PermShkStd = 0.1 TranShkStd = 0.1 # Import default parameter values import HARK.ConsumptionSaving.ConsumerParameters as Params # Make a dictionary containing all parameters needed to solve the model base_params = Params.init_idiosyncratic_shocks # Set the parameters for the baseline results in the paper # using the variable values defined in the cell above base_params['PermGroFac'] = [PermGroFac] # Permanent income growth factor base_params['Rfree'] = Rfree # Interest factor on assets base_params['DiscFac'] = DiscFac # Time Preference Factor base_params['CRRA'] = CRRA # Coefficient of relative risk aversion base_params['UnempPrb'] = UnempPrb # Probability of unemployment (e.g. 
Probability of Zero Income in the paper) base_params['IncUnemp'] = IncUnemp # Induces natural borrowing constraint base_params['PermShkStd'] = [PermShkStd] # Standard deviation of log permanent income shocks base_params['TranShkStd'] = [TranShkStd] # Standard deviation of log transitory income shocks # Some technical settings that are not interesting for our purposes base_params['LivPrb'] = [1.0] # 100 percent probability of living to next period base_params['CubicBool'] = True # Use cubic spline interpolation base_params['T_cycle'] = 1 # No 'seasonal' cycles base_params['BoroCnstArt'] = None # No artificial borrowing constraint # %% from ConsIndShockModel import IndShockConsumerType # %% [markdown] # ## Convergence of the Consumption Rules # # [The paper's first figure](http://econ.jhu.edu/people/ccarroll/papers/BufferStockTheory/#Convergence-of-the-Consumption-Rules) depicts the successive consumption rules that apply in the last period of life $(c_{T}(m))$, the second-to-last period, and earlier periods under the baseline parameter values given above. # %% code_folding=[0] # Create a buffer stock consumer instance by passing the dictionary to the class. baseEx = IndShockConsumerType(**base_params) baseEx.cycles = 100 # Make this type have a finite horizon (Set T = 100) baseEx.solve() # Solve the model baseEx.unpackcFunc() # Make the consumption function easily accessible # %% code_folding=[0] # Plot the different periods' consumption rules. 
m1 = np.linspace(0,9.5,1000) # Set the plot range of m m2 = np.linspace(0,6.5,500) c_m = baseEx.cFunc[0](m1) # c_m can be used to define the limiting infinite-horizon consumption rule here c_t1 = baseEx.cFunc[-2](m1) # c_t1 defines the second-to-last period consumption rule c_t5 = baseEx.cFunc[-6](m1) # c_t5 defines the T-5 period consumption rule c_t10 = baseEx.cFunc[-11](m1) # c_t10 defines the T-10 period consumption rule c_t0 = m2 # c_t0 defines the last period consumption rule plt.figure(figsize = (12,9)) plt.plot(m1,c_m,color="black") plt.plot(m1,c_t1,color="black") plt.plot(m1,c_t5,color="black") plt.plot(m1,c_t10,color="black") plt.plot(m2,c_t0,color="black") plt.xlim(0,11) plt.ylim(0,7) plt.text(7,6,r'$c_{T}(m) = 45$ degree line',fontsize = 22,fontweight='bold') plt.text(9.6,5.3,r'$c_{T-1}(m)$',fontsize = 22,fontweight='bold') plt.text(9.6,2.6,r'$c_{T-5}(m)$',fontsize = 22,fontweight='bold') plt.text(9.6,2.1,r'$c_{T-10}(m)$',fontsize = 22,fontweight='bold') plt.text(9.6,1.7,r'$c(m)$',fontsize = 22,fontweight='bold') plt.arrow(6.9,6.05,-0.6,0,head_width= 0.1,width=0.001,facecolor='black',length_includes_head='True') plt.tick_params(labelbottom=False, labelleft=False,left='off',right='off',bottom='off',top='off') plt.text(0,7.05,"$c$",fontsize = 26) plt.text(11.1,0,"$m$",fontsize = 26) # Save the figures in several formats if Generator: plt.savefig(os.path.join(Figures_HARK_dir, 'cFuncsConverge.png')) plt.savefig(os.path.join(Figures_HARK_dir, 'cFuncsConverge.jpg')) plt.savefig(os.path.join(Figures_HARK_dir, 'cFuncsConverge.pdf')) plt.savefig(os.path.join(Figures_HARK_dir, 'cFuncsConverge.svg')) # %% [markdown] # ## Factors and Conditions # # ### The Finite Human Wealth Condition # # Human wealth for a perfect foresight consumer is defined as the present discounted value of future income: # # \begin{eqnarray} # H_{t} & = & \mathbb{E}[P_{t} + R^{-1} P_{t+1} + R^{-2} P_{t+2} ... ] \\ # & = & P_{t}\mathbb{E}[1 + (\Gamma/R) + (\Gamma/R)^{2} ... 
] # \end{eqnarray} # which is an infinite number if $\Gamma/R \geq 1$. We say that the 'Finite Human Wealth Condition' (FHWC) holds if # $0 \leq (\Gamma/R) < 1$. # %% [markdown] # ### Absolute Patience and the AIC # # The paper defines an object which it calls the Absolute Patience Factor, equal to the ratio of $C_{t+1}/C_{t}$ for a perfect foresight consumer. The Old English character <span style="font-size:larger;">"&#222;"</span> is used for this object in the paper, but <span style="font-size:larger;">"&#222;"</span> cannot currently be rendered conveniently in Jupyter notebooks, so we will substitute $\Phi$ here: # # \begin{equation} # \Phi = (R \beta)^{1/\rho} # \end{equation} # # If $\Phi = 1$, a perfect foresight consumer will spend exactly the amount that can be sustained perpetually (given their current and future resources). If $\Phi < 1$ (the consumer is 'absolutely impatient'; or, 'the absolute impatience condition holds'), the consumer is consuming more than the sustainable amount, so consumption will fall; if the consumer is 'absolutely patient' with $\Phi > 1$, consumption will grow over time. # # # %% [markdown] # ### Growth Patience and the GIC # # For a [perfect foresight consumer](http://econ.jhu.edu/people/ccarroll/public/lecturenotes/consumption/PerfForesightCRRA), whether the ratio of consumption to the permanent component of income $P$ is rising, constant, or falling depends on the relative growth rates of consumption and permanent income, which is measured by the "Perfect Foresight Growth Patience Factor": # # \begin{eqnarray} # \Phi_{\Gamma} & = & \Phi/\Gamma # \end{eqnarray} # and whether the ratio is falling or rising over time depends on whether $\Phi_{\Gamma}$ is below or above 1. # # An analogous condition can be defined when there is uncertainty about permanent income. 
Defining $\tilde{\Gamma} = (\mathbb{E}[\psi^{-1}])^{-1}\Gamma$, the 'Growth Impatience Condition' (GIC) is that # \begin{eqnarray} # \Phi/\tilde{\Gamma} & < & 1 # \end{eqnarray} # %% [markdown] # ### The Finite Value of Autarky Condition (FVAC) # %% [markdown] # The paper [shows](http://econ.jhu.edu/people/ccarroll/papers/BufferStockTheory/#Autarky-Value) that a consumer who planned to spend his permanent income $\{ p_{t}, p_{t+1}, ...\} $ in every period would have value defined by # # \begin{equation} # v_{t}^{\text{autarky}} = u(p_{t})\left(\frac{1}{1-\beta \Gamma^{1-\rho} \mathbb{E}[\psi^{1-\rho}]}\right) # \end{equation} # # and defines the 'Finite Value of Autarky Condition' as the requirement that the denominator of this expression be a positive finite number: # # \begin{equation} # \beta \Gamma^{1-\rho} \mathbb{E}[\psi^{1-\rho}] < 1 # \end{equation} # %% [markdown] # ### The Weak Return Impatience Condition (WRIC) # # The 'Return Impatience Condition' $\Phi/R < 1$ has long been understood to be required for the perfect foresight model to have a nondegenerate solution (when $\rho=1$, this reduces to $\beta < 1$). If the RIC does not hold, the consumer is so patient that the optimal consumption function approaches zero as the horizon extends. # # When the probability of unemployment is $\wp$, the paper articulates an analogous (but weaker) condition: # # \begin{eqnarray} # \wp^{1/\rho} \Phi/R & < & 1 # \end{eqnarray} # %% [markdown] # # Key Results # # ## Nondegenerate Solution Requires FVAC and WRIC # # The central result of the paper is that the conditions required for the model to have a nondegenerate solution ($0 < c(m) < \infty$ for feasible $m$) are that the Finite Value of Autarky Condition (FVAC) and Weak Return Impatience Condition (WRIC) hold. # # A [table](http://econ.jhu.edu/people/ccarroll/papers/BufferStockTheory/#Sufficient-Conditions-For-Nondegenerate-Solution) puts this result in the context of implications of other conditions and restrictions. 
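As a quick numerical sanity check on these definitions, the factors can be evaluated directly at the baseline calibration. This is my own sketch, not the output of HARK's `checkConditions`: it uses the exact moments of a mean-one lognormal shock (for which $\mathbb{E}[\psi^{-1}] = e^{\sigma_\psi^2}$) rather than HARK's discrete approximation, and the shortcut $\mathbb{E}[\psi^{1-\rho}] = \mathbb{E}[\psi^{-1}]$ is valid only because $\rho = 2$ here.

```python
from math import exp

# Baseline calibration from the table above
Rfree, DiscFac, CRRA, PermGroFac = 1.04, 0.96, 2.0, 1.03
UnempPrb, PermShkStd = 0.005, 0.1

APF = (Rfree * DiscFac) ** (1.0 / CRRA)   # absolute patience factor Phi
# For a mean-one lognormal psi with log-std sigma, E[psi**(-1)] = exp(sigma**2)
EpsiInv = exp(PermShkStd ** 2)
GPF = APF / (PermGroFac / EpsiInv)        # growth patience factor Phi / Gamma-tilde
FVAF = DiscFac * PermGroFac ** (1.0 - CRRA) * EpsiInv  # finite-value-of-autarky factor
WRPF = UnempPrb ** (1.0 / CRRA) * APF / Rfree          # weak return patience factor

# Each condition holds when its factor is strictly below one
print(APF, GPF, FVAF, WRPF)
```

At the baseline all four factors are below one, so the AIC, GIC, FVAC, and WRIC all hold, consistent with the nondegenerate solution plotted earlier.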
# %% [markdown] # ## Natural versus Artificial Borrowing Constraints # %% [markdown] # Defining $\chi(\wp)$ as the consumption function associated with any particular value of $\wp$, and defining $\hat{\chi}$ as the consumption function that would apply in the absence of the zero-income shocks but in the presence of an 'artificial' borrowing constraint requiring $a \geq 0$, a la Deaton (1991), the paper shows that # # \begin{eqnarray} # \lim_{\wp \downarrow 0}~\chi(\wp) & = & \hat{\chi} # \end{eqnarray} # # That is, as $\wp$ approaches zero the problem with uncertainty becomes identical to the problem that instead has constraints. (See [Precautionary Saving and Liquidity Constraints](http://econ.jhu.edu/people/ccarroll/papers/LiqConstr) for a full treatment of the relationship between precautionary saving and liquidity constraints). # %% [markdown] # ## $c(m)$ is Finite Even When Human Wealth Is Infinite # # In the perfect foresight model, if $R < \Gamma$ the present discounted value of future labor income is infinite and so the limiting consumption function is $c(m) = \infty$ for all $m$. # # The presence of uncertainty changes this: The limiting consumption function is finite for all values of $m$. # # This is because uncertainty imposes a "natural borrowing constraint" that deters the consumer from borrowing against their unbounded future labor income. 
# %% [markdown] # ## If the GIC Holds, $\exists$ a finite 'target' $m$ # # Section [There Is Exactly One Target $m$ Ratio, Which Is Stable]() shows that, under parameter values for which the limiting consumption function exists, if the GIC holds then there will be a value $\check{m}$ such that: # # \begin{eqnarray} # \mathbb{E}[m_{t+1}] & > & m_{t}~\text{if $m_{t} < \check{m}$} \\ # \mathbb{E}[m_{t+1}] & < & m_{t}~\text{if $m_{t} > \check{m}$} \\ # \mathbb{E}[m_{t+1}] & = & m_{t}~\text{if $m_{t} = \check{m}$} # \end{eqnarray} # %% [markdown] # ## Target Wealth is Infinite if the GIC Fails # # The section [The GIC](http://econ.jhu.edu/people/ccarroll/papers/BufferStockTheory/#The-GIC) depicts a solution when the **FVAC** (Finite Value of Autarky Condition) and **WRIC** hold (so that the model has a solution) but the **GIC** (Growth Impatience Condition) fails. In this case the target wealth ratio is infinity. # # The parameter values in this specific example are: # # | Param | Description | Code | Value | # | :---: | --- | --- | :---: | # | $\Gamma$ | Permanent Income Growth Factor | $\texttt{PermGroFac}$ | 1.00 | # | $\mathrm{\mathsf{R}}$ | Interest Factor | $\texttt{Rfree}$ | 1.08 | # # The figure is reproduced below. # %% code_folding=[] # Construct the "GIC fails" example. GIC_fail_dictionary = dict(base_params) GIC_fail_dictionary['Rfree'] = 1.08 GIC_fail_dictionary['PermGroFac'] = [1.00] GICFailExample = IndShockConsumerType( cycles=0, # cycles=0 makes this an infinite horizon consumer **GIC_fail_dictionary) # %% [markdown] # The $\mathtt{IndShockConsumerType}$ tool automatically checks various parametric conditions, and will give a warning as well as the values of the factors if any conditions fail to be met. # # We can also directly check the conditions, in which case results will be a little more verbose by default. 
# %% code_folding=[] # The checkConditions method does what it sounds like it would GICFailExample.checkConditions(verbose=True) # %% [markdown] # Next we define the function $\mathrm{\mathbb{E}}_{t}[\Delta m_{t+1}]$ that shows the ‘sustainable’ level of spending at which $m$ is expected to remain unchanged. # %% code_folding=[0] # Calculate "Sustainable" consumption that leaves expected m unchanged # In the perfect foresight case, this is just permanent income plus interest income # A small adjustment is required to take account of the consequences of uncertainty InvEpShInvAct = np.dot(GICFailExample.PermShkDstn[0][0], GICFailExample.PermShkDstn[0][1]**(-1)) InvInvEpShInvAct = (InvEpShInvAct) ** (-1) PermGroFacAct = GICFailExample.PermGroFac[0] * InvInvEpShInvAct ER = GICFailExample.Rfree / PermGroFacAct Er = ER - 1 mSSfunc = lambda m : 1 + (m-1)*(Er/ER) # %% code_folding=[] hidden=true # Plot GICFailExample consumption function against the sustainable level of consumption GICFailExample.solve() # Above, we set up the problem but did not solve it GICFailExample.unpackcFunc() # Make the consumption function easily accessible for plotting m = np.linspace(0,5,1000) c_m = GICFailExample.cFunc[0](m) E_m = mSSfunc(m) plt.figure(figsize = (12,8)) plt.plot(m,c_m,color="black") plt.plot(m,E_m,color="black") plt.xlim(0,5.5) plt.ylim(0,1.6) plt.text(0,1.63,"$c$",fontsize = 26) plt.text(5.55,0,"$m$",fontsize = 26) plt.tick_params(labelbottom=False, labelleft=False,left='off',right='off',bottom='off',top='off') plt.text(1,0.6,"$c(m_{t})$",fontsize = 18) plt.text(1.5,1.2,"$\mathrm{\mathsf{E}}_{t}[\Delta m_{t+1}] = 0$",fontsize = 18) plt.arrow(0.98,0.62,-0.2,0,head_width= 0.02,width=0.001,facecolor='black',length_includes_head='True') plt.arrow(2.2,1.2,0.3,-0.05,head_width= 0.02,width=0.001,facecolor='black',length_includes_head='True') if Generator: plt.savefig(os.path.join(Figures_HARK_dir, 'FVACnotGIC.png')) plt.savefig(os.path.join(Figures_HARK_dir, 'FVACnotGIC.jpg')) 
plt.savefig(os.path.join(Figures_HARK_dir, 'FVACnotGIC.pdf')) plt.savefig(os.path.join(Figures_HARK_dir, 'FVACnotGIC.svg')) # This figure reproduces the figure shown in the paper. # The gap between the two functions actually increases with $m$ in the limit. # %% [markdown] hidden=true # As a foundation for the remaining figures, we define another instance of the class $\texttt{IndShockConsumerType}$, which has the same parameter values as the instance $\texttt{baseEx}$ defined previously but is solved to convergence (our definition of an infinite horizon agent type) # # %% code_folding=[0] hidden=true # cycles=0 tells the solver to find the infinite horizon solution baseEx_inf = IndShockConsumerType(cycles=0,**base_params) baseEx_inf.solve() baseEx_inf.unpackcFunc() # %% [markdown] # ### Target $m$, Expected Consumption Growth, and Permanent Income Growth # # The next figure is shown in [Analysis of the Converged Consumption Function](https://econ.jhu.edu/people/ccarroll/papers/BufferStockTheory/#Analysis-of-the-Converged-Consumption-Function), which shows the expected consumption growth factor $\mathrm{\mathbb{E}}_{t}[c_{t+1}/c_{t}]$ for a consumer behaving according to the converged consumption rule. # # The first step of the figure's construction is to calculate the t+1 period expected consumption. We define an auxiliary function to calculate the expectation of t+1 period consumption given t end-of-period assets. 
# %% code_folding=[0] # Define a function to calculate expected consumption def exp_consumption(a): ''' Taking end-of-period assets as input, return expectation of next period's consumption Inputs: a: end-of-period assets Returns: expconsump: next period's expected consumption ''' GrowFactp1 = baseEx_inf.PermGroFac[0]* baseEx_inf.PermShkDstn[0][1] Rnrmtp1 = baseEx_inf.Rfree / GrowFactp1 # end-of-period assets plus normalized returns btp1 = Rnrmtp1*a # expand dims of btp1 and use broadcasted sum of a column and a row vector # to obtain a matrix of possible beginning-of-period assets next period mtp1 = np.expand_dims(btp1, axis=1) + baseEx_inf.TranShkDstn[0][1] part_expconsumption = GrowFactp1*baseEx_inf.cFunc[0](mtp1).T # finish expectation over permanent income shocks by right multiplying with # the weights part_expconsumption = np.dot(part_expconsumption, baseEx_inf.PermShkDstn[0][0]) # finish expectation over transitory income shocks by right multiplying with # weights expconsumption = np.dot(part_expconsumption, baseEx_inf.TranShkDstn[0][0]) # return expected consumption return expconsumption # %% code_folding=[0] # Calculate the expected consumption growth factor m1 = np.linspace(1,baseEx_inf.solution[0].mNrmSS,50) # m1 defines the plot range on the left of target m value (e.g. m <= target m) c_m1 = baseEx_inf.cFunc[0](m1) a1 = m1-c_m1 exp_consumption_l1 = [] for i in range(len(a1)): exp_consumption_tp1 = exp_consumption(a1[i]) exp_consumption_l1.append(exp_consumption_tp1) # growth1 defines the values of expected consumption growth factor when m is less than target m growth1 = np.array(exp_consumption_l1)/c_m1 # m2 defines the plot range on the right of target m value (e.g. 
m >= target m) m2 = np.linspace(baseEx_inf.solution[0].mNrmSS,1.9,50) c_m2 = baseEx_inf.cFunc[0](m2) a2 = m2-c_m2 exp_consumption_l2 = [] for i in range(len(a2)): exp_consumption_tp1 = exp_consumption(a2[i]) exp_consumption_l2.append(exp_consumption_tp1) # growth2 defines the values of expected consumption growth factor when m is bigger than target m growth2 = np.array(exp_consumption_l2)/c_m2 # %% code_folding=[0] # Define a function to construct the arrows on the consumption growth rate function def arrowplot(axes, x, y, narrs=15, dspace=0.5, direc='neg', hl=0.01, hw=3, c='black'): ''' The function is used to plot arrows given the data x and y. Input: narrs : Number of arrows that will be drawn along the curve dspace : Shift the position of the arrows along the curve. Should be between 0. and 1. direc : can be 'pos' or 'neg' to select direction of the arrows hl : length of the arrow head hw : width of the arrow head c : color of the edge and face of the arrow head ''' # r is the distance spanned between pairs of points r = np.sqrt(np.diff(x)**2+np.diff(y)**2) r = np.insert(r, 0, 0.0) # rtot is a cumulative sum of r, it's used to save time rtot = np.cumsum(r) # based on narrs set the arrow spacing aspace = r.sum() / narrs if direc == 'neg': dspace = -1.*abs(dspace) else: dspace = abs(dspace) arrowData = [] # will hold tuples of x,y,theta for each arrow arrowPos = aspace*(dspace) # current point on walk along data # could set arrowPos to 0 if you want # an arrow at the beginning of the curve ndrawn = 0 rcount = 1 while arrowPos < r.sum() and ndrawn < narrs: x1,x2 = x[rcount-1],x[rcount] y1,y2 = y[rcount-1],y[rcount] da = arrowPos-rtot[rcount] theta = np.arctan2((x2-x1),(y2-y1)) ax = np.sin(theta)*da+x1 ay = np.cos(theta)*da+y1 arrowData.append((ax,ay,theta)) ndrawn += 1 arrowPos+=aspace while arrowPos > rtot[rcount+1]: rcount+=1 if arrowPos > rtot[-1]: break for ax,ay,theta in arrowData: # use aspace as a guide for size and length of things # scaling factors were chosen by experimenting a bit dx0 = np.sin(theta)*hl/2.0 + ax dy0 = np.cos(theta)*hl/2.0 + ay dx1 = -1.*np.sin(theta)*hl/2.0 + ax dy1 = -1.*np.cos(theta)*hl/2.0 + ay if direc == 'neg': ax0 = dx0 ay0 = dy0 ax1 = dx1 ay1 = dy1 else: ax0 = dx1 ay0 = dy1 ax1 = dx0 ay1 = dy0 axes.annotate('', xy=(ax0, ay0), xycoords='data', xytext=(ax1, ay1), textcoords='data', arrowprops=dict( headwidth=hw, frac=1., ec=c, fc=c)) # %% code_folding=[0] # Plot consumption growth as a function of market resources # Calculate Absolute Patience Factor Phi = lower bound of consumption growth factor AbsPatientFac = (baseEx_inf.Rfree*baseEx_inf.DiscFac)**(1.0/baseEx_inf.CRRA) fig = plt.figure(figsize = (12,8)) ax = fig.add_subplot(111) # Plot the Absolute Patience Factor line ax.plot([0,1.9],[AbsPatientFac,AbsPatientFac],color="black") # Plot the Permanent Income Growth Factor line ax.plot([0,1.9],[baseEx_inf.PermGroFac[0],baseEx_inf.PermGroFac[0]],color="black") # Plot the expected consumption growth factor on the left side of target m ax.plot(m1,growth1,color="black") # Plot the expected consumption growth factor on the right side of target m ax.plot(m2,growth2,color="black") # Plot the arrows arrowplot(ax, m1,growth1) arrowplot(ax, m2,growth2, direc='pos') # Plot the target m ax.plot([baseEx_inf.solution[0].mNrmSS,baseEx_inf.solution[0].mNrmSS],[0,1.4],color="black",linestyle="--") ax.set_xlim(1,2.05) ax.set_ylim(0.98,1.08) ax.text(1,1.082,"Growth Rate",fontsize = 26,fontweight='bold') ax.text(2.055,0.98,"$m_{t}$",fontsize = 26,fontweight='bold') ax.text(1.9,1.01,"$\mathrm{\mathsf{E}}_{t}[c_{t+1}/c_{t}]$",fontsize = 22,fontweight='bold') ax.text(baseEx_inf.solution[0].mNrmSS,0.975, r'$\check{m}$', fontsize = 26,fontweight='bold') ax.tick_params(labelbottom=False, labelleft=False,left='off',right='off',bottom='off',top='off') ax.text(1.9,0.998,r'$\Phi = (\mathrm{\mathsf{R}}\beta)^{1/\rho}$',fontsize = 22,fontweight='bold') ax.text(1.9,1.03, r'$\Gamma$',fontsize = 22,fontweight='bold') if
Generator: fig.savefig(os.path.join(Figures_HARK_dir, 'cGroTargetFig.png')) fig.savefig(os.path.join(Figures_HARK_dir, 'cGroTargetFig.jpg')) fig.savefig(os.path.join(Figures_HARK_dir, 'cGroTargetFig.pdf')) fig.savefig(os.path.join(Figures_HARK_dir, 'cGroTargetFig.svg')) # %% [markdown] code_folding=[] # ### The Consumption Function Bounds # # The next figure is also shown in [Analysis of the Converged Consumption Function](https://econ.jhu.edu/people/ccarroll/papers/BufferStockTheory/#Analysis-of-the-Converged-Consumption-Function), and illustrates theoretical bounds for the consumption function. # # We define two useful variables: lower bound of $\kappa$ (marginal propensity to consume) and limit of $h$ (Human wealth), along with some functions such as limiting perfect foresight consumption functions ($\bar{c}(m)$), $\bar{\bar c}(m)$ and $\underline{c}(m)$. # %% code_folding=[0] # Define k_lower, h_inf and perfect foresight consumption function, upper bound of consumption function and lower # bound of consumption function. 
k_lower = 1.0-(baseEx_inf.Rfree**(-1.0))*(baseEx_inf.Rfree*baseEx_inf.DiscFac)**(1.0/baseEx_inf.CRRA) h_inf = (1.0/(1.0-baseEx_inf.PermGroFac[0]/baseEx_inf.Rfree)) conFunc_PF = lambda m: (h_inf -1)* k_lower + k_lower*m conFunc_upper = lambda m: (1 - baseEx_inf.UnempPrb ** (1.0/baseEx_inf.CRRA)*(baseEx_inf.Rfree*baseEx_inf.DiscFac)**(1.0/baseEx_inf.CRRA)/baseEx_inf.Rfree)*m conFunc_lower = lambda m: (1 -(baseEx_inf.Rfree*baseEx_inf.DiscFac)**(1.0/baseEx_inf.CRRA)/baseEx_inf.Rfree) * m intersect_m = ((h_inf-1)* k_lower)/((1 - baseEx_inf.UnempPrb **(1.0/baseEx_inf.CRRA)*(baseEx_inf.Rfree*baseEx_inf.DiscFac)**(1.0/baseEx_inf.CRRA)/baseEx_inf.Rfree)-k_lower) # %% code_folding=[0] # Plot the consumption function and its bounds x1 = np.linspace(0,25,1000) x3 = np.linspace(0,intersect_m,300) x4 = np.linspace(intersect_m,25,700) cfunc_m = baseEx_inf.cFunc[0](x1) cfunc_PF_1 = conFunc_PF(x3) cfunc_PF_2 = conFunc_PF(x4) cfunc_upper_1 = conFunc_upper(x3) cfunc_upper_2 = conFunc_upper(x4) cfunc_lower = conFunc_lower(x1) plt.figure(figsize = (12,8)) plt.plot(x1,cfunc_m, color="black") plt.plot(x1,cfunc_lower, color="black",linewidth=2.5) plt.plot(x3,cfunc_upper_1, color="black",linewidth=2.5) plt.plot(x4,cfunc_PF_2 , color="black",linewidth=2.5) plt.plot(x4,cfunc_upper_2 , color="black",linestyle="--") plt.plot(x3,cfunc_PF_1 , color="black",linestyle="--") plt.tick_params(labelbottom=False, labelleft=False,left='off',right='off',bottom='off',top='off') plt.xlim(0,25) plt.ylim(0,1.12*conFunc_PF(25)) plt.text(0,1.12*conFunc_PF(25)+0.05,"$c$",fontsize = 22) plt.text(25+0.1,0,"$m$",fontsize = 22) plt.text(2.5,1,r'$c(m)$',fontsize = 22,fontweight='bold') plt.text(6,5,r'$\overline{\overline c}(m)= \overline{\kappa}m = (1-\wp^{1/\rho}\Phi_{R})m$',fontsize = 22,fontweight='bold') plt.text(2.2,3.8, r'$\overline{c}(m) = (m-1+h)\underline{\kappa}$',fontsize = 22,fontweight='bold') plt.text(9,4.1,r'Upper Bound $ = $ Min $[\overline{\overline c}(m),\overline{c}(m)]$',fontsize = 
22,fontweight='bold') plt.text(7,0.7,r'$\underline{c}(m)= (1-\Phi_{R})m = \underline{\kappa}m$',fontsize = 22,fontweight='bold') plt.arrow(2.45,1.05,-0.5,0.02,head_width= 0.05,width=0.001,facecolor='black',length_includes_head='True') plt.arrow(2.15,3.88,-0.5,0.1,head_width= 0.05,width=0.001,facecolor='black',length_includes_head='True') plt.arrow(8.95,4.15,-0.8,0.05,head_width= 0.05,width=0.001,facecolor='black',length_includes_head='True') plt.arrow(5.95,5.05,-0.4,0,head_width= 0.05,width=0.001,facecolor='black',length_includes_head='True') plt.arrow(14,0.70,0.5,-0.1,head_width= 0.05,width=0.001,facecolor='black',length_includes_head='True') if Generator: plt.savefig(os.path.join(Figures_HARK_dir, 'cFuncBounds.png')) plt.savefig(os.path.join(Figures_HARK_dir, 'cFuncBounds.jpg')) plt.savefig(os.path.join(Figures_HARK_dir, 'cFuncBounds.pdf')) plt.savefig(os.path.join(Figures_HARK_dir, 'cFuncBounds.svg')) # %% [markdown] heading_collapsed=true # ### The Consumption Function and Target $m$ # # This figure shows the $\mathrm{\mathbb{E}}_{t}[\Delta m_{t+1}]$ and consumption function $c(m_{t})$, along with the intersection of these two functions, which defines the target value of $m$. # %% code_folding=[] hidden=true # This just plots objects that have already been constructed m1 = np.linspace(0,4,1000) cfunc_m = baseEx_inf.cFunc[0](m1) mSSfunc = lambda m:(baseEx_inf.PermGroFac[0]/baseEx_inf.Rfree)+(1.0-baseEx_inf.PermGroFac[0]/baseEx_inf.Rfree)*m mss = mSSfunc(m1) plt.figure(figsize = (12,8)) plt.plot(m1,cfunc_m, color="black") plt.plot(m1,mss, color="black") plt.xlim(0,3) plt.ylim(0,1.45) plt.plot([baseEx_inf.solution[0].mNrmSS, baseEx_inf.solution[0].mNrmSS],[0,2.5],color="black",linestyle="--") plt.tick_params(labelbottom=False, labelleft=False,left='off',right='off',bottom='off',top='off') plt.text(0,1.47,r"$c$",fontsize = 26) plt.text(3.02,0,r"$m$",fontsize = 26) plt.text(2.3,0.95,r'$\mathrm{\mathsf{E}}[\Delta m_{t+1}] = 0$',fontsize = 22,fontweight='bold') 
plt.text(2.3,1.1,r"$c(m_{t})$",fontsize = 22,fontweight='bold') plt.text(baseEx_inf.solution[0].mNrmSS,-0.05, r"$\check{m}$",fontsize = 26) plt.arrow(2.28,1.12,-0.1,0.03,head_width= 0.02,width=0.001,facecolor='black',length_includes_head='True') plt.arrow(2.28,0.97,-0.1,0.02,head_width= 0.02,width=0.001,facecolor='black',length_includes_head='True') if Generator: plt.savefig(os.path.join(Figures_HARK_dir, 'cRatTargetFig.png')) plt.savefig(os.path.join(Figures_HARK_dir, 'cRatTargetFig.jpg')) plt.savefig(os.path.join(Figures_HARK_dir, 'cRatTargetFig.pdf')) plt.savefig(os.path.join(Figures_HARK_dir, 'cRatTargetFig.svg')) # %% [markdown] heading_collapsed=true # ### Upper and Lower Limits of the Marginal Propensity to Consume # # The paper shows that as $m_{t}~\uparrow~\infty$ the consumption function in the presence of risk gets arbitrarily close to the perfect foresight consumption function. Defining $\underline{\kappa}$ as the perfect foresight model's MPC, this implies that $\lim_{m_{t}~\uparrow~\infty} c^{\prime}(m) = \underline{\kappa}$. # # The paper also derives an analytical limit $\bar{\kappa}$ for the MPC as $m$ approaches 0., its bounding value. Strict concavity of the consumption function implies that the consumption function will be everywhere below a function $\bar{\kappa}m$, and strictly declining everywhere. The last figure plots the MPC between these two limits. 
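Both limits are available in closed form, so a standalone sketch (baseline numbers, variable names mine) can reproduce the bounds that the figure code extracts from `baseEx_inf`:

```python
Rfree, DiscFac, CRRA, UnempPrb = 1.04, 0.96, 2.0, 0.005

PatFacR = (Rfree * DiscFac) ** (1.0 / CRRA) / Rfree     # return patience factor Phi_R
kappa_lower = 1.0 - PatFacR                             # MPC limit as m -> infinity
kappa_upper = 1.0 - UnempPrb ** (1.0 / CRRA) * PatFacR  # MPC limit as m -> 0
print(kappa_lower, kappa_upper)  # roughly 0.039 and 0.932
```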
# %% code_folding=[] hidden=true # The last figure shows the upper and lower limits of the MPC plt.figure(figsize = (12,8)) # Set the plot range of m m = np.linspace(0.001,8,1000) # Use the HARK method derivative to get the derivative of cFunc, and the values are just the MPC MPC = baseEx_inf.cFunc[0].derivative(m) # Define the upper bound of MPC MPCUpper = (1 - baseEx_inf.UnempPrb ** (1.0/baseEx_inf.CRRA)*(baseEx_inf.Rfree*baseEx_inf.DiscFac)**(1.0/baseEx_inf.CRRA)/baseEx_inf.Rfree) # Define the lower bound of MPC MPCLower = k_lower plt.plot(m,MPC,color = 'black') plt.plot([0,8],[MPCUpper,MPCUpper],color = 'black') plt.plot([0,8],[MPCLower,MPCLower],color = 'black') plt.xlim(0,8) plt.ylim(0,1) plt.text(1.5,0.6,r'$\kappa(m) \equiv c^{\prime}(m)$',fontsize = 26,fontweight='bold') plt.text(6,0.87,r'$(1-\wp^{1/\rho}\Phi_{R})\equiv \overline{\kappa}$',fontsize = 26,fontweight='bold') plt.text(0.5,0.07,r'$\underline{\kappa}\equiv(1-\Phi_{R})$',fontsize = 26,fontweight='bold') plt.text(8.05,0,"$m$",fontsize = 26) plt.arrow(1.45,0.61,-0.4,0,head_width= 0.02,width=0.001,facecolor='black',length_includes_head='True') plt.arrow(1.7,0.07,0.2,-0.01,head_width= 0.02,width=0.001,facecolor='black',length_includes_head='True') plt.arrow(5.95,0.875,-0.2,0.03,head_width= 0.02,width=0.001,facecolor='black',length_includes_head='True') if Generator: plt.savefig(os.path.join(Figures_HARK_dir, 'MPCLimits.png')) plt.savefig(os.path.join(Figures_HARK_dir, 'MPCLimits.jpg')) plt.savefig(os.path.join(Figures_HARK_dir, 'MPCLimits.pdf')) plt.savefig(os.path.join(Figures_HARK_dir, 'MPCLimits.svg'))
REMARKs/BufferStockTheory/BufferStockTheory.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

data = pd.read_csv("sensor_data_700.txt", delimiter=" ", header=None, names=("date", "time", "ir", "lidar"))
#d = data
d = data[(data["time"] < 160000) & (data["time"] >= 120000)] # extract only the data recorded between 12:00 and 16:00
d = d.loc[:, ["ir", "lidar"]]
sns.jointplot(x="ir", y="lidar", data=d, kind="kde")
plt.show()

# +
print("variance of the IR sensor readings:", d.ir.var())
print("variance of the LiDAR readings:", d.lidar.var())

diff_ir = d.ir - d.ir.mean()
diff_lidar = d.lidar - d.lidar.mean()
a = diff_ir * diff_lidar
print("covariance:", sum(a)/(len(d)-1))
d.mean()
# -

d.cov()

d.mean().values.T

# +
from scipy.stats import multivariate_normal

irlidar = multivariate_normal(mean=d.mean().values.T, cov=d.cov().values)

# +
import numpy as np

x, y = np.mgrid[0:40, 710:750] # lay out evenly spaced x and y coordinates on the 2D plane
pos = np.empty(x.shape + (2,)) # x is a 40x40 2D array; add a third axis to build a 40x40x2 array
pos[:,:,0] = x # assign x and y to the added third axis
pos[:,:,1] = y

cont = plt.contour(x, y, irlidar.pdf(pos)) # evaluate the density at each (x, y) coordinate and draw contours
cont.clabel(fmt="%1.1e") # number format used to label the contour lines
plt.show()
# -

print("x coordinates: ", x)
print("y coordinates: ", y)
print(pos)

irlidar.pdf(pos)

c = d.cov().values + np.array([[0, 20], [20, 0]])
tmp = multivariate_normal(mean=d.mean().values.T, cov=c)
cont = plt.contour(x, y, tmp.pdf(pos)) # evaluate the density at each (x, y) coordinate and draw contours
cont.clabel(fmt="%1.1e") # number format used to label the contour lines
plt.show()
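As a sanity check on the hand-rolled covariance above, the same value should come out of pandas' `DataFrame.cov()`, which also uses the unbiased (n-1) denominator. A small sketch on toy data (not the actual sensor log):

```python
import pandas as pd

# Toy stand-in for the ir/lidar readings; the point is only the formula.
toy = pd.DataFrame({"ir": [1.0, 2.0, 4.0], "lidar": [10.0, 11.0, 15.0]})

diff_ir = toy.ir - toy.ir.mean()
diff_lidar = toy.lidar - toy.lidar.mean()
manual_cov = (diff_ir * diff_lidar).sum() / (len(toy) - 1)  # unbiased estimator

# pandas uses the same (n - 1) denominator
assert abs(manual_cov - toy.cov().loc["ir", "lidar"]) < 1e-12
```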
my_section_2/multi_gauss1.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# Chapter 13 of [A guided tour of mathematical methods in the physical sciences](https://www.cambridge.org/nz/academic/subjects/physics/mathematical-methods/guided-tour-mathematical-methods-physical-sciences-3rd-edition?format=PB&isbn=9781107641600) introduces the Dirac Delta function. As discussed, this function is not really a function but a *distribution*. Still, there are functions that can mimic certain qualities of the Dirac Delta function. We'll discuss three of them.
#
# ### 1. Gaussian
# One way to approximate the Dirac Delta function is by making a Gaussian skinnier and taller. In the notebook for Chapter 4, we already introduced the Gaussian function:
#
# In its most general form, the Gaussian is $$ f(x)=ae^{-{\frac {(x-b)^{2}}{2c^{2}}}}.$$ In Python, that is

# +
import numpy as np

def gaussian(x,a,b,c):
    return a*np.exp(-(x-b)**2/(2*c**2))
# -

# We can now play with the parameters to see how they affect the shape of the Gaussian:

# +
import matplotlib.pyplot as plt

x= np.linspace(-10,10,num=1000)
plt.plot(x,gaussian(x,2,3,1))
plt.xlabel('x')
plt.ylabel('f(x)')
plt.show()
# -

# Let us play with a Gaussian function centered on zero (b=0), where we couple the amplitude and width:
# $$ f(x) = \frac{1}{c\sqrt{\pi}}\exp{\left(-x^2/c^2\right)}$$

# +
def gaussian2(x,c):
    return 1/(c*np.sqrt(np.pi))*np.exp(-x**2/c**2)  # c**2 in the exponent, matching the formula above

x= np.linspace(-2,2,num=250)
cs = [0.1,0.3,1]
for c in cs:
    plt.plot(x,gaussian2(x,c),label='c= '+str(c))
plt.xlabel('x')
plt.ylabel('f(x)')
plt.legend()
plt.show()
# -

# In the limit that c goes to zero, this function resembles a Dirac Delta function:
# $$ \delta(x-b) \approx \lim_{c\downarrow 0}\frac{1}{c\sqrt{\pi}}e^ {- \frac{(x-b)^2}{c^2}}$$
#
# You may ask yourself: why is the square root of pi in there?
Well, if you integrate this Gaussian as we did in the notebook for Chapter 4, you will see it is needed to satisfy one of the properties of the Dirac Delta function, namely that the area under the curve is 1.

# ### 2. Sinc
# Another popular way to approximate the shape of the Dirac Delta function is via the sinc function:
# $$ f(x) = \mbox{sinc}(bx) = \frac{\sin(bx)}{bx}.$$

def oursinc(b,x):
    # note: undefined at x = 0 (the x grid below happens to avoid it)
    return np.sin(b*x)/(b*x)

# We can plot this sinc function, and compare it to the built-in version of numpy (it is slightly different):

x= np.linspace(-10,10,num=1000)
plt.plot(x,oursinc(np.pi,x),linewidth=4,alpha=0.5,label='our sinc')
plt.plot(x,np.sinc(x),'--r',label='numpy sinc') # numpy's sinc is the normalized sinc, sin(pi*x)/(pi*x), i.e. our sinc with b = pi
plt.xlabel('x')
plt.ylabel('f(x)')
plt.legend()
plt.show()

# When you play with the value for $b$ in our sinc function, you see how we can make it skinnier. With an appropriate $b$-dependent prefactor, the same parameter can also make the function taller.

# #### Homework
# Work out the limit of $b$ to approximate the Dirac Delta function (i.e., infinitely skinny and tall). Normalize your result (i.e., make sure the area under the curve is always unity), so that another property of the Dirac Delta function is preserved.

# ### 3. Rectangles
# We could have also approximated the Dirac Delta function with a rectangle, as discussed in Section 13.1 in our book:
#
# \begin{equation}
# B_{a}(x)\equiv \left\{
# \begin{array}{@{}l@{\qquad}l@{}}
# \dfrac{1}{2a} &\text{for}{\quad}\left| x\right| \leq a \\
# & \\[-5pt]
# 0 & \text{for}{\quad}\left| x\right| >a
# \end{array}
# \right. . \label{Del.7}
# \end{equation}
# Again, the factor of 2 in $\frac{1}{2a}$ is for normalization purposes: the rectangle has width $2a$, so its area is unity.
# +
def box(x,a):
    B = np.zeros(len(x))
    for i in range(len(B)):
        if abs(x[i]) <= a:  # <= matches the definition of B_a(x) above
            B[i] = 1/(2*a)
    return B

aaa = [0.1, 0.5, 1]
for a in aaa:
    plt.plot(x,box(x,a),label='a= '+str(a))
plt.xlabel('x')
plt.ylabel('$B_a$(x)')
plt.legend()
plt.show()
# -

# In each of these three examples, the approximation is called a [nascent delta function](https://en.wikipedia.org/wiki/Dirac_delta_function#Representations_of_the_delta_function), and when you read about these, you will see flashes of the sifting property of the Dirac Delta function expressed in these approximations.
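We can check numerically that the Gaussian (using the formula as displayed, with $c^2$ in the exponent) and the rectangle keep the unit-area property. A quick self-contained sketch using a hand-written trapezoidal rule:

```python
import numpy as np

def gaussian2(x, c):
    # Gaussian nascent delta: area 1 for every c (formula as displayed above)
    return 1/(c*np.sqrt(np.pi))*np.exp(-x**2/c**2)

def box(x, a):
    # rectangle nascent delta: height 1/(2a) over |x| <= a, so area 1
    return np.where(np.abs(x) <= a, 1/(2*a), 0.0)

def trapz(y, x):
    # simple trapezoidal rule (kept explicit so no numpy API version matters)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

xs = np.linspace(-50, 50, 200001)
area_gauss = trapz(gaussian2(xs, 0.3), xs)
area_box = trapz(box(xs, 0.5), xs)
print(area_gauss, area_box)  # both close to 1
```

The rectangle's area is only approximately 1 here because the trapezoidal rule smears its sharp edges over one grid cell.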
13_Dirac_Delta_Function.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns import glob import sys import argparse as argp import random as rand mut_50_dat = pd.read_csv('/Users/leg2015/workspace/Aagos/Data/Mut_Treat_Change_50_CleanedDataRep.csv', index_col="update", float_precision="high") mut_0_dat = pd.read_csv('/Users/leg2015/workspace/Aagos/Data/Mut_Treat_Change_0_CleanedDataRep.csv', index_col="update", float_precision="high") change_dat = pd.read_csv('/Users/leg2015/workspace/Aagos/Data/Change_Treat_f_.003_CleanedDataRep.csv', index_col="update", float_precision="high") mut_50_max = mut_50_dat.loc[50000] mut_0_max = mut_0_dat.loc[50000] change_max = change_dat.loc[50000] # change_max mut_0_max['overlap'] = (mut_0_max["one_gene_overlap"] + mut_0_max["two_gene_overlap"] + mut_0_max["three_gene_overlap"] + mut_0_max["four_gene_overlap"] + mut_0_max["five_gene_overlap"] + mut_0_max["six_gene_overlap"] + mut_0_max["seven_gene_overlap"] + mut_0_max["eight_gene_overlap"] + mut_0_max["nine_gene_overlap"] + mut_0_max["ten_gene_overlap"] + mut_0_max["eleven_gene_overlap"] + mut_0_max["twelve_gene_overlap"] + mut_0_max["thirteen_gene_overlap"] + mut_0_max["fourteen_gene_overlap"] + mut_0_max["fifteen_gene_overlap"] + mut_0_max["sixteen_gene_overlap"]) / 17 mut_50_max['overlap'] = (mut_50_max["one_gene_overlap"] + mut_50_max["two_gene_overlap"] + mut_50_max["three_gene_overlap"] + mut_50_max["four_gene_overlap"] + mut_50_max["five_gene_overlap"] + mut_50_max["six_gene_overlap"] + mut_50_max["seven_gene_overlap"] + mut_50_max["eight_gene_overlap"] + mut_50_max["nine_gene_overlap"] + mut_50_max["ten_gene_overlap"] + mut_50_max["eleven_gene_overlap"] + mut_50_max["twelve_gene_overlap"] + mut_50_max["thirteen_gene_overlap"] + 
mut_50_max["fourteen_gene_overlap"] + mut_50_max["fifteen_gene_overlap"] + mut_50_max["sixteen_gene_overlap"]) / 17

change_max['overlap'] = (change_max["one_gene_overlap"] + change_max["two_gene_overlap"] + change_max["three_gene_overlap"] + change_max["four_gene_overlap"] + change_max["five_gene_overlap"] + change_max["six_gene_overlap"] + change_max["seven_gene_overlap"] + change_max["eight_gene_overlap"] + change_max["nine_gene_overlap"] + change_max["ten_gene_overlap"] + change_max["eleven_gene_overlap"] + change_max["twelve_gene_overlap"] + change_max["thirteen_gene_overlap"] + change_max["fourteen_gene_overlap"] + change_max["fifteen_gene_overlap"] + change_max["sixteen_gene_overlap"]) / 17

mut_0_max['change'] = 0
mut_50_max['change'] = 50
mut_tot = pd.concat([mut_0_max, mut_50_max], axis=0)
mut_tot.to_csv("R_Data_mut")
change_max.to_csv("R_Data_Change")
# mut_tot

sns.set_style('ticks')
sns.set(font_scale=1.5)
sns.set(font="sans-serif")
# color palettes used by the palplot and boxplot calls below
paliete = ['#a6cee3','#1f78b4','#b2df8a','#33a02c','#fb9a99','#e31a1c','#fdbf6f','#ff7f00','#cab2d6','#6a3d9a','#ffff99','#b15928']
pooplete = ['#8dd3c7','#ffffb3','#bebada','#fb8072','#80b1d3','#fdb462','#b3de69','#fccde5','#d9d9d9','#bc80bd','#ccebc5','#ffed6f']
# rand.shuffle(pooplete)
# rand.shuffle(paliete)
sns.palplot(pooplete)
sns.palplot(paliete)

# +
# a4_dims = (6, 4)
# df = mylib.load_data()
# fig, ax = plt.subplots(figsize=a4_dims)
palette = ["#9CC378"]
ch_mut_cod_0_plot = sns.boxplot( y=mut_0_max["coding_sites"], x="f", data=mut_0_max, color=pooplete[0] ,fliersize=8, linewidth=1)
plt.ylim(0, 128)
# ch_mut_cod_0_plot.set(title="Effect of Mutation Rate on Number of Coding Sites" , xlabel='Bit Flip Mutation Rate', ylabel='Number of Coding Sites')
ch_mut_cod_0_plot.set( xlabel='Bit Flip Mutation Rate', ylabel='Number of Coding Sites')
labels = []
for t in ch_mut_cod_0_plot.get_xticklabels():
    if float(t.get_text()) == .00001:
        labels.append('%.5f' % float(t.get_text()))
    else:
        labels.append(t.get_text())
ch_mut_cod_0_plot.set_xticklabels(labels) # leg = ch_mut_cod_0_plot.get_legend() # new_title = 'Change Rate' # leg.set_title(new_title) # # leg.loc(3) # plt.legend(loc=3) # leg = ch_mut_cod_0_plot.get_legend() # new_title = 'Change Rate' # leg.set_title(new_title) sns.set(font_scale=1.25) sns.despine() sns.set_style('ticks') plt.tight_layout() plt.savefig("ch_mut_0_change.pdf") # + # a4_dims = (6, 4) # df = mylib.load_data() # fig, ax = plt.subplots(figsize=a4_dims) palette = ["#d95f02"] ch_mut_fit_0_plot = sns.boxplot( y=mut_0_max["fitness"], x="f", data=mut_0_max, color=paliete[0] ,fliersize=8, linewidth=1) plt.ylim(8, 16) # ch_mut_fit_0_plot.set(title="Effect of Mutation Rate on Fitness" , xlabel='Bit Flip Mutation Rate', ylabel='Fitness') ch_mut_fit_0_plot.set( xlabel='Bit Flip Mutation Rate', ylabel='Fitness') labels = [] for t in ch_mut_fit_0_plot.get_xticklabels(): if float(t.get_text()) == .00001: labels.append('%.5f' % float(t.get_text())) else: labels.append(t.get_text()) ch_mut_fit_0_plot.set_xticklabels(labels) # leg = ch_mut_fit_0_plot.get_legend() # new_title = 'Change Rate' # leg.set_title(new_title) # # leg.loc(3) # plt.legend(loc=3) # leg = ch_mut_fit_0_plot.get_legend() # new_title = 'Change Rate' # leg.set_title(new_title) sns.set(font_scale=1.25) sns.despine() sns.set_style('ticks') plt.tight_layout() plt.savefig("ch_mut_0_fitness.pdf") # - plt.ylim(8, 16) ch_env_fit_plot = sns.boxplot(y=change_max["fitness"], x="change", data=change_max, linewidth=1.1, palette=paliete) # ch_env_fit_plot.set(title="Effect of Environmental Change Rate on Fitness", xlabel="Environmental Change Rate", ylabel="Fitness") ch_env_fit_plot.set( xlabel="Environmental Change Rate", ylabel="Fitness") # plt.suptitle("Effect of Changing Environment Rate on Fitness") sns.despine() # sns.color_palette("colorblind") sns.set(font_scale=1.3) sns.set_style('ticks') plt.tight_layout() plt.savefig("ch_env_fitness.pdf") # + plt.ylim(0, 128) ch_env_cod_plot = 
sns.boxplot(y=change_max["coding_sites"], x="change", data=change_max, linewidth=1.1, palette=pooplete) # plt.suptitle("Effect of Changing Environment Rate on Number of Coding Sites") # ch_env_cod_plot.set(title="Effect of Environmental Change Rate on Number of Coding Sites", xlabel="Environmental Change Rate", ylabel="Number of Coding Sites") ch_env_cod_plot.set( xlabel="Environmental Change Rate", ylabel="Number of Coding Sites") sns.set(font_scale=1.3) sns.despine() sns.set_style('ticks') plt.tight_layout() plt.savefig("ch_env_coding.pdf") # + a4_dims = (9, 4) # df = mylib.load_data() fig, ax = plt.subplots(figsize=a4_dims) plt.ylim(8, 16) palite = [paliete[0] , paliete[6]] ch_mut_fit_both_plot = sns.boxplot(ax=ax, y=mut_tot["fitness"], x="f", hue="change", data=mut_tot, palette=palite, linewidth=1) # plt.suptitle("Effect of Changing Environment on Fitness of Best Organism at Gen. 50,000") labels = [] for t in ch_mut_fit_both_plot.get_xticklabels(): if float(t.get_text()) == .00001: labels.append('%.5f' % float(t.get_text())) else: labels.append(t.get_text()) ch_mut_fit_both_plot.set_xticklabels(labels) # ch_mut_fit_both_plot.set(title="Comparison of Environmental Change Rate to Fitness", xlabel="Bit Flip Mutation Rate", ylabel="Fitness") ch_mut_fit_both_plot.set( xlabel="Bit Flip Mutation Rate", ylabel="Fitness") # title new_title = 'Change Rate' # ch_mut_fit_both_plot.legend.set_title(new_title) leg = ch_mut_fit_both_plot.get_legend() # new_title = 'My title' leg.set_title(new_title) sns.set(font_scale=1.5) sns.despine() sns.set_style('ticks') plt.tight_layout() plt.savefig("ch_mut_both_fitness.pdf") # + a4_dims = (9, 4) # df = mylib.load_data() fig, ax = plt.subplots(figsize=a4_dims) plt.ylim(0, 128) pal2 = [ pooplete[0], pooplete[6]] ch_mut_cod_both_plot = sns.boxplot(y=mut_tot["coding_sites"], x="f", hue="change", data=mut_tot, palette=pal2, linewidth=1) # ch_mut_cod_both_plot.set(title="Comparison of Environmental Change Rate to Number of Coding Sites", 
xlabel="Bit Flip Mutation Rate", ylabel="Number of Coding Sites") ch_mut_cod_both_plot.set(xlabel="Bit Flip Mutation Rate", ylabel="Number of Coding Sites") labels = [] for t in ch_mut_cod_both_plot.get_xticklabels(): if float(t.get_text()) == .00001: labels.append('%.5f' % float(t.get_text())) else: labels.append(t.get_text()) ch_mut_cod_both_plot.set_xticklabels(labels) new_title = 'Change Rate' # ch_mut_fit_both_plot.legend.set_title(new_title) leg = ch_mut_cod_both_plot.get_legend() # new_title = 'My title' leg.set_title(new_title) sns.set(font_scale=1.5) sns.despine() sns.set_style('ticks') plt.tight_layout() plt.savefig("ch_mut_both_change.pdf") # - # ## Coding Sites Vs. Bit Flips Statistics (change rate 0) # %load_ext rpy2.ipython # + magic_args="-i mut_0_max " language="R" # ?kruskal.test # kruskal.test(coding_sites ~ f, mut_0_max) # ?pairwise.wilcox.test # # pairwise.wilcox.test(coding_sites ~ mut_0_max) # pairwise.wilcox.test(mut_0_max$coding_sites, mut_0_max$f, p.adjust.method = 'bonferroni' ) # # ?p.adjust # - # ## Fitness Vs. Bit Flips Statistics (change rate 0) # + magic_args="-i mut_0_max " language="R" # ?kruskal.test # print(kruskal.test(fitness ~ f, mut_0_max)) # ?pairwise.wilcox.test # # pairwise.wilcox.test(coding_sites ~ mut_0_max) # pairwise.wilcox.test(mut_0_max$fitness, mut_0_max$f, p.adjust.method = 'bonferroni' ) # # ?p.adjust # - # ## Coding Sites vs. Environmental Change rate (Bit flip rate of .003) # + magic_args="-i change_max " language="R" # ?kruskal.test # print(kruskal.test(coding_sites ~ change, change_max)) # ?pairwise.wilcox.test # # pairwise.wilcox.test(coding_sites ~ mut_0_max) # pairwise.wilcox.test(change_max$coding_sites, change_max$change, p.adjust.method = 'bonferroni' ) # # ?p.adjust # - # ## Fitness vs. 
Environmental Change rate (Bit flip rate of .003) # + magic_args="-i change_max " language="R" # ?kruskal.test # print(kruskal.test(fitness ~ change, change_max)) # ?pairwise.wilcox.test # # pairwise.wilcox.test(coding_sites ~ mut_0_max) # pairwise.wilcox.test(change_max$fitness, change_max$change, p.adjust.method = 'bonferroni' ) # # ?p.adjust # - # ## Coding sites of change 0 vs. change 50 # + magic_args="-i mut_compare,arr " language="R" # print(arr) # for (i in arr) { # print(i) # print(wilcox.test(coding_sites ~ change, mut_compare[mut_compare$f == i, ])) # # print(mut_compare[mut_compare$f == i, ]) # } # # ?kruskal.test # # mut_tot[(mut_tot["f"] == .0) & ((mut_tot["change"] == 0 ) | (mut_tot["change"] == 50 ))] # # # # # for i in [] # # # p?rint(kruskal.test(coding_sites ~ change, change_max)) # # ?pairwise.wilcox.test # # pairwise.wilcox.test(coding_sites ~ mut_0_max) # # ?pairwise.wilcox.test(change_max$fitness, change_max$change, p.adjust.method = 'bonferroni' ) # # ?p.adjust # # ?wilcox.test # # wilcox.test(coding_sites ~ change, mut_tot[[""],]) # + magic_args="-i mut_compare,arr " language="R" # print(arr) # for (i in arr) { # print(i) # print(wilcox.test(fitness ~ change, mut_compare[mut_compare$f == i, ])) # # print(mut_compare[mut_compare$f == i, ]) # } # # ?kruskal.test # # mut_tot[(mut_tot["f"] == .0) & ((mut_tot["change"] == 0 ) | (mut_tot["change"] == 50 ))] # # # # # for i in [] # # # p?rint(kruskal.test(coding_sites ~ change, change_max)) # # ?pairwise.wilcox.test # # pairwise.wilcox.test(coding_sites ~ mut_0_max) # # ?pairwise.wilcox.test(change_max$fitness, change_max$change, p.adjust.method = 'bonferroni' ) # # ?p.adjust # # ?wilcox.test # # wilcox.test(coding_sites ~ change, mut_tot[[""],]) # - # ## Fitness Vs. Bit Flips Statistics (change rate 0)
scripts/2018-summer/Notebook_scripts/FinalDataVis.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.8.5 64-bit ('openstreetmap') # language: python # name: python3 # --- # # graph_to_gdfs # # Convert a MultiDiGraph to node and/or edge GeoDataFrames. # + # OSMnx: New Methods for Acquiring, Constructing, Analyzing, and Visualizing Complex Street Networks import osmnx as ox ox.config(use_cache=True, log_console=False) ox.__version__ # + query = '중구, 서울특별시, 대한민국' network_type = 'drive' # "all_private", "all", "bike", "drive", "drive_service", "walk" # Create graph from OSM within the boundaries of some geocodable place(s). G = ox.graph_from_place(query, network_type=network_type) # Plot a graph. fig, ax = ox.plot_graph(G) # - # Convert a MultiDiGraph to node and/or edge GeoDataFrames. gdf = ox.utils_graph.graph_to_gdfs( G, nodes=False, # AttributeError: 'tuple' object has no attribute 'head' edges=True, node_geometry=True, fill_edge_geometry=True ) gdf.head()
osmnx/utils_graph/graph_to_gdfs.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # My notes on working in JupyterLab
# > This post collects notes from my experiences working with JupyterLab.
#
# - toc: true
# - badges: true
# - comments: true
# - categories: [jupyterlab, notes]
# - image: images/jupyterlab.png

# ## Install the JupyterLab Desktop app
# https://github.com/jupyterlab/jupyterlab_app

# After installing the JupyterLab app and running it, we can install any package with conda or pip, but it is better to install with conda.
#
# Sometimes we may run into a problem such as the one below:
#
#     Collecting package metadata (current_repodata.json): failed
#
#     UnavailableInvalidChannel: The channel is not accessible or is invalid.
#     channel name: pkgs/main
#     channel url: https://repo.anaconda.com/pkgs/main
#     error code: 403
#
#     You will need to adjust your conda configuration to proceed.
#     Use `conda config --show channels` to view your configuration's current state,
#     and use `conda config --show-sources` to view config file locations.
#
# To solve this problem, run the following in the JupyterLab app terminal:
#
#     conda config --remove-key channels
#
# Then we can create a virtual environment (venv) with:
#
#     conda create -n <your env name>
#
# With *conda info* and *conda config --show-sources* we can get information about the JupyterLab app, its virtual environments, and so on.
# ## JupyterLab spellchecker
#
# install:
#
#     jupyter labextension install @ijmbarr/jupyterlab_spellchecker

# ## Uninstall a program completely
# sudo apt-get purge package-name
#
# sudo apt-get autoremove

# ## Install fonts
# sudo apt install git
#
# git clone https://github.com/fzerorubigd/persian-fonts-linux.git
#
#
# cd persian-fonts-linux
#
# ./farsifonts.sh

# ## Show disk space
# df -h

# ## Clean Ubuntu
# sudo du -sh /var/cache/apt/archives
#
# sudo apt-get clean
#
# df -h

# ## Install Miniconda
# > Best resource for installing Anaconda or Miniconda with spatial packages:
# https://medium.com/@chrieke/howto-install-python-for-geospatial-applications-1dbc82433c05

# Download Miniconda with Python 3.7 (some packages do not work with Python 3.8):
# https://repo.anaconda.com/miniconda/Miniconda3-py37_4.8.3-Linux-x86_64.sh

# To install Miniconda on Ubuntu 20.04 from the command line,
# it only takes 3 steps, excluding creating and activating a conda environment.

# 1. Download the latest shell script:
# *wget https://repo.anaconda.com/miniconda/Miniconda3-py37_4.8.3-Linux-x86_64.sh*
#
# 2. Make the Miniconda installation script executable:
# *chmod +x Miniconda3-py37_4.8.3-Linux-x86_64.sh*
#
# 3. Run the Miniconda installation script:
# *./Miniconda3-py37_4.8.3-Linux-x86_64.sh*

# ## Create and activate the conda environment
# To create a conda environment, run *conda create -n newenv*.
#
# You can also create the environment from a file like *environment.yml* using the *conda env create -f* command:
# > conda env create -f environment.yml
#
# The environment name will be the directory name.
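For reference, a minimal *environment.yml* might look like the sketch below. The package names and versions here are only an example, not a file from this post:

```yaml
name: geospatial          # conda uses this name when the file provides one
channels:
  - conda-forge
dependencies:
  - python=3.7
  - gdal
  - rasterio
  - geopandas
  - jupyterlab
```

Running *conda env create -f environment.yml* then builds the environment in one step.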
# ## Install Packages # - Install miniconda # - Anaconda Prompt (miniconda3)[Open miniconda] # - conda update -n base -c defaults conda [Update Conda installed] # - conda install -c anaconda anaconda-navigator [Install Navigator] # - Open Anaconda Navigator # - Open terminal # - conda install -c conda-forge gdal [Install GDAL] # - conda install -c anaconda numpy # - conda install -c anaconda pandas # - conda install -c conda-forge geopandas # - conda install -c anaconda xarray # - conda install -c conda-forge matplotlib # - conda install -c conda-forge cartopy # - conda install -c conda-forge descartes [for countries plot in cartopy] # - conda install -c conda-forge shapely # - conda install -c conda-forge fiona # - conda install -c conda-forge pyproj # - conda install -c conda-forge bqplot # - conda install -c conda-forge ipyleaflet # - conda install -c conda-forge nodejs # > npm install npm@latest -g [https://nodejs.org/en/] # # - conda list # ## Install Jupyterlab # -conda install -c conda-forge jupyterlab # # -In terminal type: jupyter lab [Open jupyterlab in default browser] # # -Ctrl+C or Ctrl+Conda [Close] # # -jupyter notebook --generate-config [Change Workdirectory] # # ``` # open "jupyter_notebook_config.py" file # Find " #c.NotebookApp.notebook_dir=' ' " # Change to " c.NotebookApp.notebook_dir = '~/Anaconda_Projects' " # In terminal type: jupyter lab # ``` # ## Install packages in Jupyterlab # - jupyter labextension install @jupyter-widgets/jupyterlab-manager jupyter-leaflet # # (https://ipyleaflet.readthedocs.io/en/latest/api_reference/basemaps.html) # # -jupyter labextension install @jupyter-widgets/jupyterlab-manager # # -jupyter lab build # # -jupyter nbextension enable --py widgetsnbextension --sys-prefix # # -pip install sidecar # # -jupyter labextension install @jupyter-widgets/jupyterlab-manager # # -jupyter labextension install @jupyter-widgets/jupyterlab-sidecar # # -conda install -c pyviz holoviews bokeh # # > jupyter labextension install 
@pyviz/jupyterlab_pyviz

# ## Install Rasterio
# The rasterio package must be installed together with GDAL. From " https://www.lfd.uci.edu/~gohlke/pythonlibs/#gdal " you can find the latest GDAL and rasterio wheel files; then run something like this from the downloads folder:
#
# pip install GDAL-3.1.2-cp39-cp39-win_amd64.whl rasterio-1.1.5-cp39-cp39-win_amd64.whl
#
# conda install -c conda-forge rasterio
# > [with Python 3.7 (rasterio works with this version).
# Rasterio 1.0.x works with Python versions 2.7.x and 3.5.0 through 3.7.x, and GDAL versions 1.11.x through 2.4.x.
# Rasterio 1.0.x is not compatible with GDAL versions 3.0.0 or greater.]
# - With the command below, after repeated failures, it finally installed:
# conda install -c https://conda.anaconda.org/ioos rasterio

# ## Cartopy or Basemap
# Basemap is going away and being replaced with Cartopy in the near future. For this reason, new Python learners are recommended to learn Cartopy.
# So, install Cartopy.
#
# NOTE: DO NOT INSTALL Cartopy together with Basemap; they conflict.

# ## For UPDATE
# - pip uninstall -y setuptools
#
# - pip install setuptools
#
# - conda update conda
#
# - conda update --all
#
# - conda update -n base -c defaults conda

# ## Run plotly in JupyterLab
# To use plotly in JupyterLab, you will have to install the plotly JupyterLab extension:
#
# jupyter labextension install jupyterlab-plotly
#
# OR
#
# jupyter labextension install @jupyterlab/plotly-extension
#
# jupyter labextension list
#
# jupyter lab build
#
# Then reopen Anaconda JupyterLab.

# ## Persian font
# pip install python-bidi
# -lpympl

# ## Markdown Formatting
# The five most important concepts to format your code appropriately when using markdown are:
#
# 1. *Italics*: Surround your text with '\_' or '\*'
# 2. **Bold**: Surround your text with '\__' or '\**'
# 3. `inline`: Surround your text with '\`'
# 4. > blockquote: Place '\>' before your text.
# 5. 
[Links](https://github.com/ncar-hackathons): Surround the text you want to link with '\[\]' and place the link adjacent to the text, surrounded with '()' # # ### Headings # Notice that including a hashtag before the text in a markdown cell makes the text a heading. The number of hashtags you include will determine the priority of the header ('#' is level one, '##' is level two, '###' is level three and '####' is level four). # # ```no-highlight # # H1 # ## H2 # ### H3 # #### H4 # ##### H5 # ###### H6 # # Alternatively, for H1 and H2, an underline-ish style: # # Alt-H1 # ====== # # Alt-H2 # ------ # ``` # # # H1 # ## H2 # ### H3 # #### H4 # ##### H5 # ###### H6 # # Alternatively, for H1 and H2, an underline-ish style: # # Alt-H1 # ====== # # Alt-H2 # ------ # # <a name="emphasis"/> # # ### Emphasis # # ```no-highlight # Emphasis, aka italics, with *asterisks* or _underscores_. # # Strong emphasis, aka bold, with **asterisks** or __underscores__. # # Combined emphasis with **asterisks and _underscores_**. # # Strikethrough uses two tildes. ~~Scratch this.~~ # ``` # # Emphasis, aka italics, with *asterisks* or _underscores_. # # Strong emphasis, aka bold, with **asterisks** or __underscores__. # # Combined emphasis with **asterisks and _underscores_**. # # Strikethrough uses two tildes. ~~Scratch this.~~ # # # ### Lists # There are three types of lists in markdown. # # Ordered list: # # 1. Step 1 # 2. Step 1B # 3. Step 3 # Unordered list # # * CESM-POP # * CESM-MOM # * CESM-CAM # Task list # # - [x] Learn Jupyter Notebooks # - [x] Writing # - [x] Modes # - [x] Other Considerations # - [ ] Submit Paper # --- # # **NOTE:** # # Double click on each to see how they are built! 
# # --- # $-b \pm \sqrt{b^2 - 4ac} \over 2a$ # $x = a_0 + \frac{1}{a_1 + \frac{1}{a_2 + \frac{1}{a_3 + a_4}}}$ # $\forall x \in X, \quad \exists y \leq \epsilon$ # ## Shortcuts and tricks # ### Command Mode Shortcuts # There are a couple of useful keyboard shortcuts in `Command Mode` that you can leverage to make Jupyter Notebook faster to use. Remember that to switch back and forth between `Command Mode` and `Edit Mode` with <kbd>Esc</kbd> and <kbd>Enter</kbd>. # <kbd>m</kbd>: Convert cell to Markdown # <kbd>y</kbd>: Convert cell to Code # <kbd>D</kbd>+<kbd>D</kbd>: Delete cell # <kbd>o</kbd>: Toggle between hide or show output # <kbd>Shift</kbd>+<kbd>Arrow up/Arrow down</kbd>: Selects multiple cells. Once you have selected them you can operate on them like a batch (run, copy, paste etc). # <kbd>Shift</kbd>+<kbd>M</kbd>: Merge selected cells. # <kbd>Shift</kbd>+<kbd>Tab</kbd>: [press once] Tells you which parameters to pass on a function # # <kbd>Shift</kbd>+<kbd>Tab</kbd>: [press three times] Gives additional information on the method
_notebooks/2021-06-07-My-Notes-Jupyterlab.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # 01: Introduction and Visualization

# ## Importing the basic packages

# ### NumPy
# * A package for fast "scientific" computing (mainly linear algebra and random numbers).
# * Mostly it is just an interface to highly optimized C/C++/Fortran libraries.
# * http://www.numpy.org/
#
# ### pandas
# * A popular tool for data analysis.
# * Helps make working with tabular data easier.
# * http://pandas.pydata.org/
#
# ### scikit-learn (sklearn)
# * A set of data science tools written in Python.
# * Builds on NumPy, [SciPy](https://www.scipy.org/) and matplotlib.
# * http://scikit-learn.org/stable/
#
# ### matplotlib
# * The basic library for plotting graphs.
# * https://matplotlib.org/
#
# ### seaborn
# * A data visualization tool, built on matplotlib.
# * https://seaborn.pydata.org/

import numpy as np
import pandas as pd
import sklearn as skit
import matplotlib.pyplot as plt
import seaborn as sns

# ## Basic work with data using pandas
#
# - Load the datasets data1.csv and data2.csv using pandas.
# - Find out what types of data the columns contain (do they hold strings, numbers, ...? What is their range?)

# ### Loading data
#
# - Loading data from csv files into the pandas DataFrame type.
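A self-contained illustration of why the separator argument matters when loading CSV files; the toy strings here stand in for files like data2.csv, which uses semicolons:

```python
import io
import pandas as pd

# Two tiny CSVs with the same content but different separators.
csv_comma = "Age,Sex\n22,male\n38,female\n"
csv_semicolon = "Age;Sex\n22;male\n38;female\n"

df_a = pd.read_csv(io.StringIO(csv_comma))              # default sep=','
df_b = pd.read_csv(io.StringIO(csv_semicolon), sep=';') # must match the file

# With the wrong separator, everything lands in a single column.
df_bad = pd.read_csv(io.StringIO(csv_semicolon))
print(df_a.shape, df_b.shape, df_bad.shape)  # (2, 2) (2, 2) (2, 1)
```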
data1 = pd.read_csv('data1.csv')
data2 = pd.read_csv('data2.csv',sep=';')

# ### pandas functions let us display basic information about a dataset

df = data2
#df.head()
#df.info()
#df.describe()
#df.isnull().sum()
#df.notnull().sum()
#display(df.head())
#df.head()

# ### Basics of accessing data

# +
#data1['Age'] # returns the column named (pandas.Series.name) Age
#data1.Age # the same as above
#data1['Age'][:10] # returns the first 10 records of the Age column
#data1['Age'][:3][[True, False, True]]
#data1['Age'] > 30 # the condition is applied to all records -> returns a pandas.Series of results (True or False values)
#data1[data1['Age'] > 30] # returns only persons older than 30 years
#data1[['Age', 'Survived']].head() # returns only the given columns
#data1_tmp = data1.copy() # creates a deep copy of the dataframe
#data1_tmp.columns = range(12) # renaming the columns
#display(data1.head())
#data1_tmp.head()
#data1[1:2] # returns the row at position 1
#data1.loc[1,['Age', 'Sex']] # indexes (see .loc? and .iloc?)
# -

# ## Task 01: Concatenating data
#
# - Append data2.csv after data1.csv in the following way:
# - Data (columns) that are not in data1.csv are left out of data2.csv.
# - Compute the age using the BirthYear column (year of birth) in data2.csv and store it in the Age column.
# - PassengerId must be unique.
# - Use the pandas.concat method.

# +
### write your code here
# -

# ## Visualization with pandas and seaborn

import matplotlib.pyplot as plt # this is the standard way to import matplotlib
import matplotlib
# without the following line, plotting in Jupyter notebooks sometimes does not work
# %matplotlib inline
matplotlib.style.use('ggplot')

# ### Influence of the Pclass, Age and Sex columns on passenger survival

# +
data = data1
#data.plot() # default behavior of the plot() method

# have a look at what kinds of plots are available
#
#data.plot?
# selecting the survivors and the non-survivors
survived = data[data['Survived'] == 1]
not_survived = data[data['Survived'] == 0]

ax = survived.plot.scatter(x='Age', y='Pclass', color='Green', label='Survived')
not_survived.plot(x='Age', y='Pclass', kind='scatter', color='Black', label='Not Survived')

# plotting both into a single figure:
#not_survived.plot.scatter(x='Age', y='Pclass', color='Black', label='Not Survived', ax = ax)
# -

plt.figure(figsize=(9,12))  # figsize is given in inches
plt.subplot(321)  # three rows and two columns, put the next plot into the first slot
survived['Age'].plot.hist(color='Green')
plt.subplot(322)
not_survived['Age'].plot.hist(color='Black')
plt.subplot(323)
survived['Pclass'].plot.hist(color='Green')
plt.subplot(324)
not_survived['Pclass'].plot.hist(color='Black')
plt.subplot(325)
survived['Sex'].apply(lambda x: 1 if x == 'female' else 0).plot.hist(color='Green')
plt.subplot(326)
not_survived['Sex'].apply(lambda x: 1 if x == 'female' else 0).plot.hist(color='Black')

# ## Seaborn: how to discover relationships between features

plt.figure(figsize=(14,12))
data['Sex'] = data['Sex'].apply(lambda x: 1 if x == 'female' else 0)
cor_matrix = data.drop('PassengerId', axis=1).corr()
print(cor_matrix)
sns.heatmap(cor_matrix, annot=True)

# ## Task 02: create a scatter plot for every pair of features
#
# - Use the method sns.pairplot to plot all (meaningful) pairs of features, in a similar way to the method below.

plt.figure(figsize=(12,4))
sns.stripplot(x="Pclass", y="Age", hue="Survived", data=data, palette= ['black','green'])
# add jitter=True

# +
### write your code here
# -

# ## Downloading data from the web with Python and pandas (assignment 1)
#
# ### Tips for scraping with Python:
# - To get the HTML source of the page at `url`, use `import requests`:
#     - `r = requests.get(url)`
#     - `html = r.text`
# - The method `pandas.read_html(r.text)` stores all `<table>` tables as a list of pandas DataFrames:
#     - `list_of_data_frames = pd.read_html(html, flavor='html5lib')`
# - For HTML parsing you can use `from bs4 import BeautifulSoup`.

# example for the statutory city of Kladno, election results for 2010
url = 'https://www.volby.cz/pls/kv2018/kv1111?xjazyk=CZ&xid=0&xdz=3&xnumnuts=5103&xobec=563889&xstat=0&xvyber=0'  # election results
dfs = pd.read_html(url, flavor='html5lib')
pd.options.display.max_columns = None
pd.options.display.max_rows = None
display(dfs[0:])

# ### For interest (from last year): an example of a simple data download from the web using a POST form.
#
# The task is to download all the data from http://kap.ujak.cz/index.php and store it as a pandas DataFrame.

# +
import requests

# url with the form
url = 'http://kap.ujak.cz/index.php?strana={}'

# POST variables simulating submission of the form
data = {
    'typ' : 'kap',
    'prace' : 'BP',     # DP = master's thesis, DR = dissertation, RI = rigorosum thesis
    'nazev' : '%%%',    # at least three letters from the title of the searched thesis
    'pocet' : '0',
    'klic' : '',        # at least three letters from the keywords
    'kl' : 'c',         # c = partial match, n = exact match
    'hledat' : 'Vyhledat'
}

data_all = pd.DataFrame()

for prace in ['BP', 'DP']:
    data['prace'] = prace
    r = requests.post(url, data)
    r.encoding = 'cp1250'
    ldf = pd.read_html(r.text, flavor='html5lib', header=0)
    df = ldf[0]
    strana = 30
    if data_all.shape[0] == 0:
        data_all = df.copy()
    else:
        data_all = pd.concat([data_all, df], ignore_index=True)
    while df.shape[0] > 0:
        if data_all.shape[0] > 200:  # just to prevent downloading all the data
            break
        print(url.format(strana))
        r = requests.post(url.format(strana), data)
        r.encoding = 'cp1250'
        ldf = pd.read_html(r.text, flavor='html5lib', header=0)
        df = ldf[0]
        strana = strana + 30
        data_all = pd.concat([data_all, df],
                             ignore_index=True)
# -

dataUJAK = pd.read_csv('ujak.csv', index_col=0)
display(dataUJAK.head())

dataUJAK[dataUJAK['Rok'] > 2000].groupby(['Název práce']).size().sort_values(ascending=False)
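The final line above counts how often each thesis title occurs. The same size-and-sort pattern can be tried on a tiny hand-made frame; the column names `Rok` and `Nazev` below are illustrative stand-ins, not the real ujak.csv schema:

```python
import pandas as pd

# tiny illustrative frame: 'Rok' = year, 'Nazev' = a stand-in for the title column
df = pd.DataFrame({
    'Rok':   [2005, 2010, 2012, 2012, 2015],
    'Nazev': ['A', 'B', 'B', 'B', 'A'],
})

# keep records after 2000, count rows per title, most frequent first
counts = (df[df['Rok'] > 2000]
          .groupby('Nazev').size()
          .sort_values(ascending=False))
print(counts)
```

`groupby(...).size()` counts rows per group (including any NaN cells in other columns), which is why it is preferred here over `count()`.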
seminars/01/01_introduction_and_visualisation_cs.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: saturn (Python 3)
#     language: python
#     name: python3
# ---

# <img src="../img/saturn_logo.png" width="300" />
#
# # Transfer Learning
#
# In this project, we will use the Stanford Dogs dataset and, starting from Resnet50, apply transfer learning to make the model perform better at dog image identification.
#
# In order to make this work, we have a few steps to carry out:
# * Preprocessing our data appropriately
# * Applying infrastructure for parallelizing the learning process
# * Running the transfer learning workflow and generating evaluation data
#
# ### Start and Check Cluster

# +
from dask_saturn import SaturnCluster
from dask.distributed import Client
import s3fs
import re
from torchvision import transforms

cluster = SaturnCluster()
client = Client(cluster)
client.wait_for_workers(3)
client
# -

import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# ***
#
# ## Preprocessing Data
#
# We are using `dask-pytorch-ddp` to handle a lot of the work involved in training across the entire cluster. This abstracts away many worker-management tasks and also sets up a tidy infrastructure for managing model output; if you're interested in learning more, we maintain the [codebase and documentation on Github](https://github.com/saturncloud/dask-pytorch).
#
# Because we want to load our images directly from S3, without saving them to memory (and wasting space/time!), we are going to use the `dask-pytorch-ddp` custom class `S3ImageFolder`, which inherits from the Dataset class.
#
# The preprocessing steps are quite short: we want to load images using the class we discussed above and apply the transformation of our choosing. If you like, you can make the transformations an argument to this function and pass it in.
# from dask_pytorch_ddp import results, data, dispatch from torch.utils.data.sampler import SubsetRandomSampler def prepro_batches(bucket, prefix): '''Initialize the custom Dataset class defined above, apply transformations.''' transform = transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(250), transforms.ToTensor()]) whole_dataset = data.S3ImageFolder( bucket, prefix, transform=transform, anon = True ) return whole_dataset # ### Optional: Checking Data Labels # # Because our task is transfer learning, we're going to be starting with the pretrained Resnet50 model. In order to take full advantage of the training that the model already has, we need to make sure that the label indices on our Stanford Dogs dataset match their equivalents in the Resnet50 label data. (Hint: they aren't going to match, but we'll fix it!) # + s3 = s3fs.S3FileSystem() with s3.open('s3://saturn-public-data/dogs/imagenet1000_clsidx_to_labels.txt') as f: imagenetclasses = [line.strip() for line in f.readlines()] whole_dataset = prepro_batches(bucket = "saturn-public-data", prefix = "dogs/Images") # - # Any dataset loaded in a PyTorch image folder object will have a few attributes, including `class_to_idx` which returns a dictionary of the class names and their assigned indices. Let's look at the one for our dog images. list(whole_dataset.class_to_idx.items())[0:5] # So let's look at the Imagenet classes - do they match? imagenetclasses[0:5] # Well, that's not going to work! Our model thinks 1 = goldfish while our dataset thinks 1 = Japanese Spaniel. Fortunately, this is a pretty easy fix. # # I've created a function called `replace_label()` that checks the labels by text with regex, so that we can be assured that we match them up correctly. This is important, because we can't assume all our dog labels are in exactly the same consecutive order in the imagenet labels. 
def replace_label(dataset_label, model_labels):
    label_string = re.search('n[0-9]+-([^/]+)', dataset_label).group(1)
    for i in model_labels:
        i = str(i).replace('{', '').replace('}', '')
        model_label_str = re.search('''b["'][0-9]+: ["']([^\/]+)["'],["']''', str(i))
        model_label_idx = re.search('''b["']([0-9]+):''', str(i)).group(1)
        # return the first imagenet label whose text matches this folder's label
        if re.search(str(label_string).replace('_', ' '), str(model_label_str).replace('_', ' ')):
            return i, model_label_idx

# We can use this function in a couple of lines of dict comprehension to create our new `class_to_idx` object. Now we have the indices assigned to match our imagenet dataset!

new_class_to_idx = {x: int(replace_label(x, imagenetclasses)[1]) for x in whole_dataset.classes}
list(new_class_to_idx.items())[0:5]

imagenetclasses[151:156]

# Let's also make sure our old and new datasets have the same length, so that nothing got missed.

len(new_class_to_idx.items()) == len(whole_dataset.class_to_idx.items())

# ***
#
# ### Select Training and Evaluation Samples
#
# In order to run our training, we'll create training and evaluation sample sets to use later. These generate DataLoader objects which we can iterate over. We'll use both later to run and monitor our model's learning.
#
# Note the `multiprocessing_context` argument that we are using in the DataLoader objects - this will allow our large batch jobs to efficiently load more than one image simultaneously, and save us a lot of time.
import math
import multiprocessing as mp
import numpy as np

def get_splits_parallel(train_pct, data, batch_size, num_workers=64):
    '''Select two samples of data for training and evaluation'''
    classes = data.classes
    train_size = math.floor(len(data) * train_pct)
    indices = list(range(len(data)))
    np.random.shuffle(indices)
    train_idx = indices[:train_size]
    test_idx = indices[train_size:len(data)]

    train_sampler = SubsetRandomSampler(train_idx)
    test_sampler = SubsetRandomSampler(test_idx)

    train_loader = torch.utils.data.DataLoader(
        data, sampler=train_sampler, batch_size=batch_size,
        num_workers=num_workers, multiprocessing_context=mp.get_context('fork'))
    # the evaluation loader must use the test sampler, not the train sampler
    test_loader = torch.utils.data.DataLoader(
        data, sampler=test_sampler, batch_size=batch_size,
        num_workers=num_workers, multiprocessing_context=mp.get_context('fork'))

    return train_loader, test_loader

# Aside from using our custom data object, this should be very similar to other PyTorch workflows. While I am using the `S3ImageFolder` class here, you definitely don't have to in your own work. Any standard PyTorch data object type should be compatible with the Dask work we're doing next.
#
# Now, it's time for learning, in [Notebook 6a](06a-transfer-training-s3.ipynb)!
#
# <img src="https://media.giphy.com/media/mC7VjtF9sYofs9DUa5/giphy.gif" alt="learn" style="width: 300px;"/>
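For reference, the shuffle-and-slice logic at the heart of `get_splits_parallel` (everything except the torch `DataLoader` plumbing) can be sketched on its own; the dataset size, split fraction, and fixed seed below are arbitrary example values:

```python
import math
import random

def split_indices(n_items, train_pct, seed=0):
    """Shuffle indices 0..n_items-1 and cut them into train/test lists."""
    indices = list(range(n_items))
    random.Random(seed).shuffle(indices)  # deterministic shuffle for the demo
    train_size = math.floor(n_items * train_pct)
    return indices[:train_size], indices[train_size:]

train_idx, test_idx = split_indices(100, 0.8)
print(len(train_idx), len(test_idx))  # 80 20
```

Because the cut happens after a single shuffle, every item lands in exactly one of the two lists, which is the invariant the two `SubsetRandomSampler` objects rely on.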
transfer_learning_demo/05-transfer-prepro.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Hyperlink Induced Topic Search (HITS) Algorithm
#
# Hyperlink Induced Topic Search (HITS) is a link-analysis algorithm that rates webpages, developed by <NAME>. The algorithm uses the link structure of the web to discover and rank the webpages relevant to a particular search.
#
# HITS uses hubs and authorities to define a recursive relationship between webpages.
#
# <b>Authority:</b> A node is a high-quality authority if many high-quality hubs link to it.
#
# <b>Hub:</b> A node is a high-quality hub if it links to many high-quality authorities.
#
# ### Steps to implement HITS
#
# * Initialize the hub and authority score of each node with a value of 1
# * For each iteration, update the hub and authority of every node in the graph
#     * The new authority is the sum of the hub scores of its parents (the nodes linking to it)
#     * The new hub is the sum of the authority scores of its children (the nodes it links to)
# * Normalize the new authority and hub scores
#
# ![image1](imagehits1.png)
# ![image2](imagehits2.png)
# ![image3](image3.png)
# ![image4](image4.png)

import matplotlib.pyplot as plt
import networkx as nx

G = nx.DiGraph()
G.add_node("A")
G.add_node("B")
G.add_node("C")
G.add_node("D")
G.add_node("E")
G.add_node("F")
G.add_node("G")
G.add_node("H")
G.add_edges_from([("A","D"),("B","C"),("B","E"),("C","A"),("D","B"),("D","C"),("E","F"),("E","D"),("E","B"),("E","C"),("F","C"),("F","H"),("G","C"),("G","A"),("H","A")])

len(G.edges)

nx.draw(G, with_labels=True, font_weight='bold')

nx.hits(G, max_iter=50, normalized=True)

import numpy as np
import csv, random
import math
import matplotlib.pyplot as plt

class Node:
    def __init__(self, name, connected_to=None):
        self.name = name
        self.connected_to = connected_to
        self.incoming_edges = []
        self.hub = 1
        self.new_hub = 0
        self.authority = 1
        self.new_auth = 0

    def __str__(self):
        return self.name

A=Node("A")
B=Node("B")
C=Node("C")
D=Node("D")
E=Node("E")
F=Node("F")
G=Node("G")
H=Node("H")

graph_nodes=[A,B,C,D,E,F,G,H]
A.connected_to=[D]
B.connected_to=[C,E]
C.connected_to=[A]
D.connected_to=[B,C]
E.connected_to=[F,D,B,C]
F.connected_to=[C,H]
G.connected_to=[C,A]
H.connected_to=[A]

for i in graph_nodes:
    for j in graph_nodes:
        if i in j.connected_to:
            i.incoming_edges.append(j)

B.incoming_edges

# +
for i in range(2):
    print("-"*50, f"Iteration:{i}", "-"*50)
    sum_auth = 0
    sum_hub = 0
    for node in graph_nodes:
        # new authority: sum of the hub scores of the nodes pointing to this node
        node.new_auth = sum(parent.hub for parent in node.incoming_edges)
        # new hub: sum of the authority scores of the nodes this node points to
        node.new_hub = sum(child.authority for child in node.connected_to)
        sum_auth += node.new_auth
        sum_hub += node.new_hub
    for node in graph_nodes:
        node.authority = node.new_auth / sum_auth
        print(f"{node.name} Authority=", node.authority)
        node.hub = node.new_hub / sum_hub
        print(f"{node.name} Hub=", node.hub)
    print("-"*50)
# -

A.hub
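The same update rules can be written compactly over plain adjacency dicts, without the `Node` class. A self-contained sketch follows; it updates both score vectors from the previous iteration's values and normalizes each to sum to 1, which is one common formulation (details of iteration order and normalization differ slightly from `nx.hits`, but the ranking converges to the same ordering on this graph):

```python
# HITS on the same 8-node graph, using plain dicts instead of Node objects
edges = [("A","D"),("B","C"),("B","E"),("C","A"),("D","B"),("D","C"),
         ("E","F"),("E","D"),("E","B"),("E","C"),("F","C"),("F","H"),
         ("G","C"),("G","A"),("H","A")]
nodes = sorted({n for edge in edges for n in edge})
out_links = {n: [d for s, d in edges if s == n] for n in nodes}
in_links  = {n: [s for s, d in edges if d == n] for n in nodes}

hub  = {n: 1.0 for n in nodes}
auth = {n: 1.0 for n in nodes}
for _ in range(50):
    # new authority: sum of the hub scores of the nodes linking to n
    new_auth = {n: sum(hub[p] for p in in_links[n]) for n in nodes}
    # new hub: sum of the authority scores of the nodes n links to
    new_hub = {n: sum(auth[c] for c in out_links[n]) for n in nodes}
    # normalize each score vector to sum to 1
    sa, sh = sum(new_auth.values()), sum(new_hub.values())
    auth = {n: v / sa for n, v in new_auth.items()}
    hub  = {n: v / sh for n, v in new_hub.items()}

print(max(auth, key=auth.get))  # the strongest authority in this graph
```

Node C ends up as the top authority here, since five of the fifteen edges point to it, including edges from the strongest hubs.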
60004180028_HITS.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="wWECL_E0S-cl" # # Managed Pipelines Experimental: Custom containers and resource specs # # This notebook shows how to build and use custom containers for Pipeline components. It also shows how to pass typed artifact data between component, and how to specify required resources when defining a pipeline. # # This example uses one of the TensorFlow Datasets, in particular the [Large Movie Review Dataset](https://www.tensorflow.org/datasets/catalog/imdb_reviews#imdb_reviewssubwords8k), for a binary sentiment classification task: predicting whether a movie review is negative or positive. # + [markdown] id="DNu_BtiA5h9N" # ## Setup # # Before you run this notebook, ensure that your Google Cloud user account and project are granted access to the Managed Pipelines Experimental. To be granted access to the Managed Pipelines Experimental, fill out this [form](http://go/cloud-mlpipelines-signup) and let your account representative know you have requested access. # # This notebook is intended to be run on either one of: # * [AI Platform Notebooks](https://cloud.google.com/ai-platform-notebooks). See the "AI Platform Notebooks" section in the Experimental [User Guide](https://docs.google.com/document/d/1JXtowHwppgyghnj1N1CT73hwD1caKtWkLcm2_0qGBoI/edit?usp=sharing) for more detail on creating a notebook server instance. # * [Google Colab](https://colab.research.google.com/notebooks/intro.ipynb) # # **To run this notebook on AI Platform Notebooks**, click on the **File** menu, then select "Download .ipynb". Then, upload that notebook from your local machine to AI Platform Notebooks. (In the AI Platform Notebooks left panel, look for an icon of an arrow pointing up, to upload). # # We'll first install some libraries and set up some variables. 
# # + [markdown] id="fZ-GWdI7SmrN" # Set `gcloud` to use your project. **Edit the following cell before running it**. # + id="pD5jOcSURdcU" PROJECT_ID = 'rthallam-demo-project' # <---CHANGE THIS # + [markdown] id="GAaCPLjgiJrO" # Set `gcloud` to use your project. # + id="VkWdxe4TXRHk" # !gcloud config set project {PROJECT_ID} # + [markdown] id="gckGHdW9iPrq" # If you're running this notebook on colab, authenticate with your user account: # + id="kZQA0KrfXCvU" import sys if 'google.colab' in sys.modules: from google.colab import auth auth.authenticate_user() # + [markdown] id="aaqJjbmk6o0o" # ----------------- # # **If you're on AI Platform Notebooks**, authenticate with Google Cloud before running the next section, by running # ```sh # gcloud auth login # ``` # **in the Terminal window** (which you can open via **File** > **New** in the menu). You only need to do this once per notebook instance. # + [markdown] id="fOpZ41iBW7bl" # ### Install the KFP SDK and AI Platform Pipelines client library # # For Managed Pipelines Experimental, you'll need to download a special version of the AI Platform client library. # + [markdown] id="injJzlmllbEL" # Then, install the libraries and restart the kernel. If you see a permissions error for the Metadata libraries, make sure you've run the `gcloud auth login` command as indicated above. # + id="bnRCttVlajjw" # !gsutil cp gs://cloud-aiplatform-pipelines/releases/latest/kfp-1.5.0rc5.tar.gz . # !gsutil cp gs://cloud-aiplatform-pipelines/releases/latest/aiplatform_pipelines_client-0.1.0.caip20210415-py3-none-any.whl . # Get the Metadata SDK to query the produced metadata. # !gsutil cp gs://cloud-aiplatform-metadata/sdk/google-cloud-aiplatform-metadata-0.0.1.tar.gz . 
# + id="TmUZzSv6YA9-" if 'google.colab' in sys.modules: USER_FLAG = '' else: USER_FLAG = '--user' # + [markdown] id="CFSsfPr-Uad1" # Install the libraries: # + id="fbZl0NsXSsmh" # !python3 -m pip install {USER_FLAG} kfp-1.5.0rc5.tar.gz google-cloud-aiplatform-metadata-0.0.1.tar.gz aiplatform_pipelines_client-0.1.0.caip20210415-py3-none-any.whl --upgrade # + id="o5kaReN2lbEN" # Automatically restart kernel after installs import IPython app = IPython.Application.instance() app.kernel.do_shutdown(True) # + [markdown] id="N33S1ikHIOPS" # The KFP version should be >= 1.5. # # # + id="a4uvTyimMYOr" # Check the KFP version # !python3 -c "import kfp; print('KFP version: {}'.format(kfp.__version__))" # + [markdown] id="t1GX5KDOUJuI" # If you're on colab, re-authorize after the kernel restart. **Edit the following cell for your project ID before running it.** # + id="PpkxFp93xBk5" import sys if 'google.colab' in sys.modules: PROJECT_ID = 'rthallam-demo-project' # <---CHANGE THIS # !gcloud config set project {PROJECT_ID} from google.colab import auth auth.authenticate_user() USER_FLAG = '' # + [markdown] id="tskC13YxW7b3" # ### Set some variables # # **Before you run the next cell**, **edit it** to set variables for your project. See the "Before you begin" section of the User Guide for information on creating your API key. For `BUCKET_NAME`, enter the name of a Cloud Storage (GCS) bucket in your project. Don't include the `gs://` prefix. 
# + id="zHsVifdTW7b4" # PATH=%env PATH # %env PATH={PATH}:/home/jupyter/.local/bin # Required Parameters USER = 'rthallam' # <---CHANGE THIS BUCKET_NAME = 'cloud-ai-platform-2f444b6a-a742-444b-b91a-c7519f51bd77' # <---CHANGE THIS PIPELINE_ROOT = 'gs://{}/pipeline_root/{}'.format(BUCKET_NAME, USER) PROJECT_ID = 'rthallam-demo-project' # <---CHANGE THIS REGION = 'us-central1' API_KEY = '<KEY>' # <---CHANGE THIS print('PIPELINE_ROOT: {}'.format(PIPELINE_ROOT)) # + [markdown] id="ZCi94vXkS-db" # ## Build custom container components # # # We'll first build the two components that we'll use in our pipeline. The first component generates train and test data, and the second component consumes that data to train a model (to predict movie review sentiment). # # These components are based on custom Docker container images that we'll build and upload to the Google Container Registry, using Cloud Build. # + [markdown] id="RBs5UldB4_jI" # ### Container 1: Generate examples # # First, we'll define and write out the `generate_examples.py` code. It generates train and test set files from the [IMDB review data](https://www.tensorflow.org/datasets/catalog/imdb_reviews#imdb_reviewssubwords8k), in `TFRecord` format. 
# + id="F2J01k4hZaOm" # !mkdir -p generate # + id="48YXDQIiS-dc" # %%writefile generate/generate_examples.py import argparse import json import os import numpy as np import tensorflow as tf import tensorflow_datasets as tfds def _serialize_example(example, label): example_value = tf.io.serialize_tensor(example).numpy() label_value = tf.io.serialize_tensor(label).numpy() feature = { 'examples': tf.train.Feature( bytes_list=tf.train.BytesList(value=[example_value])), 'labels': tf.train.Feature(bytes_list=tf.train.BytesList(value=[label_value])), } return tf.train.Example(features=tf.train.Features( feature=feature)).SerializeToString() def _tf_serialize_example(example, label): serialized_tensor = tf.py_function(_serialize_example, (example, label), tf.string) return tf.reshape(serialized_tensor, ()) def generate_examples(training_data_uri, test_data_uri, config_file_uri): (train_data, test_data), info = tfds.load( # Use the version pre-encoded with an ~8k vocabulary. 'imdb_reviews/subwords8k', # Return the train/test datasets as a tuple. split=(tfds.Split.TRAIN, tfds.Split.TEST), # Return (example, label) pairs from the dataset (instead of a dictionary). 
as_supervised=True, with_info=True) serialized_train_examples = train_data.map(_tf_serialize_example) serialized_test_examples = test_data.map(_tf_serialize_example) filename = os.path.join(training_data_uri, "train.tfrecord") writer = tf.data.experimental.TFRecordWriter(filename) writer.write(serialized_train_examples) filename = os.path.join(test_data_uri, "test.tfrecord") writer = tf.data.experimental.TFRecordWriter(filename) writer.write(serialized_test_examples) encoder = info.features['text'].encoder config = { 'vocab_size': encoder.vocab_size, } config_file = os.path.join(config_file_uri, "config") with tf.io.gfile.GFile(config_file, 'w') as f: f.write(json.dumps(config)) if __name__ == '__main__': parser = argparse.ArgumentParser() parser.add_argument('--training_data_uri', type=str) parser.add_argument('--test_data_uri', type=str) parser.add_argument('--config_file_uri', type=str) args = parser.parse_args() generate_examples(args.training_data_uri, args.test_data_uri, args.config_file_uri) # + [markdown] id="tRFMWnhPS-df" # Next, we'll create a Dockerfile that builds a container to run `generate_examples.py`. We are using a Google [Deep Learning Container](https://cloud.google.com/ai-platform/deep-learning-containers) image as our base, since the image already includes most of what we need. # You may use your own image as the base image instead. Note that we're also installing the `tensorflow_datasets` library. # + id="mzRx9XikS-df" # %%writefile generate/Dockerfile FROM gcr.io/deeplearning-platform-release/tf2-cpu.2-3:latest WORKDIR /pipeline COPY generate_examples.py generate_examples.py RUN pip install tensorflow_datasets ENV PYTHONPATH="/pipeline:${PYTHONPATH}" # + [markdown] id="PMrMmGB5Hm8y" # We'll use [Cloud Build](https://cloud.google.com/cloud-build/docs) to build the container image and write it to [GCR](https://cloud.google.com/container-registry). 
# + id="nQ3Y9CErXs9M" # !gcloud builds submit --tag gcr.io/{PROJECT_ID}/custom-container-generate:{USER} generate # + [markdown] id="PEJdna3RS-ds" # ### Container 2: Train Examples # # Next, we'll do the same for the 'Train Examples' custom container. We'll first write out a `train_examples.py` file, then build a container that runs it. This script takes as input training and test data in `TFRecords` format and trains a Keras binary classification model to predict review sentiment. When training has finished, it writes out model and metrics information. # + id="he0VlF2Hbghh" # !mkdir -p train # + id="E2t0eLglS-ds" # %%writefile train/train_examples.py import argparse import json import os import numpy as np import tensorflow as tf def _parse_example(record): f = { 'examples': tf.io.FixedLenFeature((), tf.string, default_value=''), 'labels': tf.io.FixedLenFeature((), tf.string, default_value='') } return tf.io.parse_single_example(record, f) def _to_tensor(record): examples = tf.io.parse_tensor(record['examples'], tf.int64) labels = tf.io.parse_tensor(record['labels'], tf.int64) return (examples, labels) def train_examples(training_data_uri, test_data_uri, config_file_uri, output_model_uri, output_metrics_uri): train_examples = tf.data.TFRecordDataset( [os.path.join(training_data_uri, 'train.tfrecord')]) test_examples = tf.data.TFRecordDataset( [os.path.join(test_data_uri, 'test.tfrecord')]) train_batches = train_examples.map(_parse_example).map(_to_tensor) test_batches = test_examples.map(_parse_example).map(_to_tensor) with tf.io.gfile.GFile(os.path.join(config_file_uri, 'config')) as f: config = json.loads(f.read()) model = tf.keras.Sequential([ tf.keras.layers.Embedding(config['vocab_size'], 16), tf.keras.layers.GlobalAveragePooling1D(), tf.keras.layers.Dense(1, activation='sigmoid') ]) model.summary() model.compile( optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) train_batches = train_batches.shuffle(1000).padded_batch( 32, 
(tf.TensorShape([None]), tf.TensorShape([]))) test_batches = test_batches.padded_batch( 32, (tf.TensorShape([None]), tf.TensorShape([]))) history = model.fit( train_batches, epochs=10, validation_data=test_batches, validation_steps=30) loss, accuracy = model.evaluate(test_batches) metrics = { 'loss': str(loss), 'accuracy': str(accuracy), } model_json = model.to_json() with tf.io.gfile.GFile(os.path.join(output_model_uri, 'model.json'), 'w') as f: f.write(model_json) with tf.io.gfile.GFile(os.path.join(output_metrics_uri, 'metrics.json'), 'w') as f: f.write(json.dumps(metrics)) if __name__ == '__main__': parser = argparse.ArgumentParser() parser.add_argument('--training_data_uri', type=str) parser.add_argument('--test_data_uri', type=str) parser.add_argument('--config_file_uri', type=str) parser.add_argument('--output_model_uri', type=str) parser.add_argument('--output_metrics_uri', type=str) args = parser.parse_args() train_examples(args.training_data_uri, args.test_data_uri, args.config_file_uri, args.output_model_uri, args.output_metrics_uri) # + [markdown] id="GVJyxdeCS-du" # Next, we'll create a Dockerfile that builds a container to run `train_examples.py`. Again we're using a Google [Deep Learning Container](https://cloud.google.com/ai-platform/deep-learning-containers) image as our base. # + id="HoDYRpzlS-dv" # %%writefile train/Dockerfile FROM gcr.io/deeplearning-platform-release/tf2-cpu.2-3:latest WORKDIR /pipeline COPY train_examples.py train_examples.py ENV PYTHONPATH="/pipeline:${PYTHONPATH}" # + [markdown] id="xgd_zce_AXYv" # We'll use [Cloud Build](https://cloud.google.com/cloud-build/docs) to build the container image and write it to [GCR](https://cloud.google.com/container-registry). 
# + id="YPmhXcXFbqVK" # !gcloud builds submit --tag gcr.io/{PROJECT_ID}/custom-container-train:{USER} train # + [markdown] id="y3BqUxQ_9c31" # ### Create pipeline components using the custom container images # # Next, we'll define components for the 'generate' and 'train' steps, using the container images we just built. # # # + id="c2llvI0w9r3f" import time from kfp import components from kfp.v2 import dsl from kfp.v2 import compiler # + [markdown] id="fQAff2NRAHKZ" # The 'generate' component specifies three outputs: training and test data, of type `Dataset`, and a config file, of type `File`. # # The component definition uses `outputUri` in specifying the `generate_example.py` script args. These args are set to automatically-generated GCS URIs, and when `generate_examples` writes to those URIs, the outputs are available to downstream components. # # # + id="XKqGpulg9b_J" generate_op = components.load_component_from_text(""" name: GenerateExamples outputs: - {name: training_data, type: Dataset} - {name: test_data, type: Dataset} - {name: config_file, type: File} implementation: container: image: gcr.io/%s/custom-container-generate:%s command: - python - /pipeline/generate_examples.py args: - --training_data_uri - {outputUri: training_data} - --test_data_uri - {outputUri: test_data} - --config_file_uri - {outputUri: config_file} """ % (PROJECT_ID, USER)) # + [markdown] id="B8qtwaqU9zbC" # The train component takes as input training and test data of type `Dataset`, and a config `File`: it can consume the outputs of the "generate" component. It specifies two outputs, one of type `Model` and one of type `Metrics`. # # The component definition uses `inputUri` and `outputUri` when passing args to the `train_examples` script. So, the script's arg values will be GCS URIs, from which it will read its inputs and write its outputs. 
# + id="IM_tmbv09prn" train_op = components.load_component_from_text(""" name: Train inputs: - {name: training_data, type: Dataset} - {name: test_data, type: Dataset} - {name: config_file, type: File} outputs: - {name: model, type: Model} - {name: metrics, type: Metrics} implementation: container: image: gcr.io/%s/custom-container-train:%s command: - python - /pipeline/train_examples.py args: - --training_data_uri - {inputUri: training_data} - --test_data_uri - {inputUri: test_data} - --config_file_uri - {inputUri: config_file} - --output_model_uri - {outputUri: model} - --output_metrics_uri - {outputUri: metrics} """ % (PROJECT_ID, USER)) # + [markdown] id="w1r1CWF0S-d7" # ## Define a KFP pipeline that uses the components # # Now we're ready to define a pipeline that uses these components. The `train` step takes its inputs from the `generate` step's outputs. # # Note also that we are able to define pipeline *resource* specs, which we do here for the training step, including memory constraints, the number of GPUs to allocate, and the type of accelerator to use. # + id="tYhopKUBS-d7" @dsl.pipeline(name='custom-container-pipeline-{}-{}'.format(USER, str(int(time.time())))) def pipeline(): generate = generate_op() train = (train_op( training_data=generate.outputs['training_data'], test_data=generate.outputs['test_data'], config_file=generate.outputs['config_file']). set_cpu_limit('4'). set_memory_limit('14Gi'). add_node_selector_constraint( 'cloud.google.com/gke-accelerator', 'nvidia-tesla-k80'). set_gpu_limit(1)) # + [markdown] id="huP7loXFG6s8" # Compile the pipeline: # + id="6qf9KkkoA1y7" compiler.Compiler().compile(pipeline_func=pipeline, package_path='custom_container_pipeline_spec.json') # + [markdown] id="mH5QFfSuW7cJ" # ### Submit the pipeline job # # Here, we'll create an API client using the API key you generated. # # Then, we'll submit the pipeline job by passing the compiled spec to the `create_run_from_job_spec()` method. 
Here we pass only the compiled spec and the pipeline root; for pipelines that declare input parameters, you can also pass a `parameter_values` dict specifying the values to use.

# + id="NSnrYUDAW7cK"
from aiplatform.pipelines import client

api_client = client.Client(project_id=PROJECT_ID, region=REGION, api_key=API_KEY)

response = api_client.create_run_from_job_spec(
    job_spec_path='custom_container_pipeline_spec.json',
    pipeline_root=PIPELINE_ROOT,
)

# + [markdown] id="dhfJYO3T613t"
# ## Query the metadata produced by the pipeline
#
# The set of artifacts and executions produced by the pipeline can also be queried using the AI Platform Metadata SDK. The following shows a snippet for querying the metadata for a given pipeline run:

# + id="csZSsQHO1ZdQ"
from google.cloud import aiplatform
from google import auth
from google.cloud.aiplatform_v1alpha1 import MetadataServiceClient
from google.auth.transport import grpc, requests
from google.cloud.aiplatform_v1alpha1.services.metadata_service.transports import grpc as transports_grpc
import pandas as pd

def _initialize_metadata_service_client() -> MetadataServiceClient:
    scope = 'https://www.googleapis.com/auth/cloud-platform'
    api_uri = 'us-central1-aiplatform.googleapis.com'
    credentials, _ = auth.default(scopes=(scope,))
    request = requests.Request()
    channel = grpc.secure_authorized_channel(credentials, request, api_uri)
    return MetadataServiceClient(
        transport=transports_grpc.MetadataServiceGrpcTransport(channel=channel))

client = _initialize_metadata_service_client()

# + id="9hBBZz5g41Ey"
def get_run_context_name(pipeline_run):
    contexts = client.list_contexts(parent='projects/{}/locations/{}/metadataStores/default'.format(PROJECT_ID, REGION))
    for context in contexts:
        if context.display_name == pipeline_run:
            return context.name

run_context_name = get_run_context_name('my-pipeline-run-1')  # <- Name of the pipeline run
client.query_context_lineage_subgraph(context=run_context_name)

# + [markdown] id="6Kgtx8-bW7cM"
# ### Monitor the pipeline run in the Cloud Console
#
# Once you've deployed
the pipeline run, you can monitor it in the [Cloud Console](https://console.cloud.google.com/ai/platform/pipelines) under **AI Platform (Unified)** > **Pipelines**. # # Click in to the pipeline run to see the run graph (for our pipeline, this consists of two steps), and click on a step to view the job detail and the logs for that step. # # As you look at the pipeline graph, you'll see that you can inspect the artifacts passed between the pipeline steps. # + [markdown] id="hFwR7J6O3TCq" # <a href="https://storage.googleapis.com/amy-jo/images/kf-pls/generate_train.png" target="_blank"><img src="https://storage.googleapis.com/amy-jo/images/kf-pls/generate_train.png" width="70%"/></a> # + [markdown] id="feV62LXyW7cN" # ## What next? # # Next, try out some of the other notebooks. # # - a [KFP intro notebook](https://colab.research.google.com/drive/1mrud9HjsVp5fToHwwNL0RotFtJCKtfZ1#scrollTo=feV62LXyW7cN). # - a simple KFP example that [shows how data can be passed between pipeline steps](https://colab.research.google.com/drive/1NztsGV-FAp71MU7zfMHU0SlfQ8dpw-9u). # # - A TFX notebook that [shows the canonical 'Chicago taxi' example](https://colab.research.google.com/drive/1dNLlm21F6f5_4aeIg-Zs_F1iGGRPEvhW), and how to use custom Python functions and custom containers. # + [markdown] id="89fYarRLW7cN" # ----------------------------- # Copyright 2020 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
ucaip-notebooks/managed-pipelines/MP_Alpha_notebooks/mp_kfp_custom_containers_resources.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- #hide #skip ! [[ -e /content ]] && pip install -Uqq fastai # upgrade fastai on colab # # Tutorial - Transformers # # > An example of how to incorporate the transformers library from HuggingFace with fastai # + #all_slow # - # In this tutorial, we will see how we can use the fastai library to fine-tune a pretrained transformer model from the [transformers library](https://github.com/huggingface/transformers) by HuggingFace. We will use the mid-level API to gather the data. Even if this tutorial is self-contained, it might help to check the [imagenette tutorial](http://docs.fast.ai/tutorial.imagenette) to have a second look at the mid-level API (with a gentle introduction using the higher level APIs) in computer vision. # ## Importing a transformers pretrained model # First things first, we will need to install the transformers library. If you haven't done it yet, install the library: # # ``` # # !pip install -Uq transformers # ``` # Then let's import what we will need: we will fine-tune the GPT2 pretrained model on wikitext-2 here. For this, we need the `GPT2LMHeadModel` (since we want a language model) and the `GPT2TokenizerFast` to prepare the data. from transformers import GPT2LMHeadModel, GPT2TokenizerFast # We can use several versions of this GPT2 model, look at the [transformers documentation](https://huggingface.co/transformers/pretrained_models.html) for more details. Here we will use the basic version (that already takes a lot of space in memory!). You can change the model used by changing the content of `pretrained_weights` (if it's not a GPT2 model, you'll need to change the classes used for the model and the tokenizer of course). 
pretrained_weights = 'gpt2' tokenizer = GPT2TokenizerFast.from_pretrained(pretrained_weights) model = GPT2LMHeadModel.from_pretrained(pretrained_weights) # Before we move on to the fine-tuning part, let's have a look at this `tokenizer` and this `model`. The tokenizers in HuggingFace usually do the tokenization and the numericalization in one step (we ignore the padding warning for now): ids = tokenizer.encode('This is an example of text, and') ids # Like fastai `Transform`s, the tokenizer has a `decode` method to give you back a text from ids: tokenizer.decode(ids) # The model can be used to generate predictions (it is pretrained). It has a `generate` method that expects a batch of prompt, so we feed it our ids and add one batch dimension (there is a padding warning we can ignore as well): import torch t = torch.LongTensor(ids)[None] preds = model.generate(t) # The predictions, by default, are of length 20: preds.shape,preds[0] # We can use the decode method (that prefers a numpy array to a tensor): tokenizer.decode(preds[0].numpy()) # ## Bridging the gap with fastai # Now let's see how we can use fastai to fine-tune this model on wikitext-2, using all the training utilities (learning rate finder, 1cycle policy etc...). First, we import all the text utilities: from fastai.text.all import * # ### Preparing the data # Then we download the dataset (if not present), it comes as two csv files: path = untar_data(URLs.WIKITEXT_TINY) path.ls() # Let's have a look at what those csv files look like: df_train = pd.read_csv(path/'train.csv', header=None) df_valid = pd.read_csv(path/'test.csv', header=None) df_train.head() # We gather all texts in one numpy array (since it will be easier to use this way with fastai): all_texts = np.concatenate([df_train[0].values, df_valid[0].values]) # To process this data to train a model, we need to build a `Transform` that will be applied lazily. 
In this case we could do the pre-processing once and for all and only use the transform for decoding (we will see how just after), but the fast tokenizer from HuggingFace is, as its name indicates, fast, so it doesn't really impact performance to do it this way. # # In a fastai `Transform` you can define: # - an <code>encodes</code> method that is applied when you call the transform (a bit like the `forward` method in a `nn.Module`) # - a <code>decodes</code> method that is applied when you call the `decode` method of the transform, if you need to decode anything for showing purposes (like converting ids to a text here) # - a <code>setups</code> method that sets some inner state of the `Transform` (not needed here so we skip it) class TransformersTokenizer(Transform): def __init__(self, tokenizer): self.tokenizer = tokenizer def encodes(self, x): toks = self.tokenizer.tokenize(x) return tensor(self.tokenizer.convert_tokens_to_ids(toks)) def decodes(self, x): return TitledStr(self.tokenizer.decode(x.cpu().numpy())) # Two comments on the code above: # - in <code>encodes</code> we don't use the `tokenizer.encode` method since it does some additional preprocessing for the model after tokenizing and numericalizing (the part throwing a warning before). Here we don't need any post-processing so it's fine to skip it. # - in <code>decodes</code> we return a `TitledStr` object and not just a plain string. That's a fastai class that adds a `show` method to the string, which will allow us to use all the fastai show methods. # You can then group your data with this `Transform` using a `TfmdLists`. It has an s in its name because it contains the training and validation set. 
We indicate the indices of the training set and the validation set with `splits` (here all the first indices until `len(df_train)` and then all the remaining indices): splits = [range_of(df_train), list(range(len(df_train), len(all_texts)))] tls = TfmdLists(all_texts, TransformersTokenizer(tokenizer), splits=splits, dl_type=LMDataLoader) # We specify `dl_type=LMDataLoader` for when we will convert this `TfmdLists` to `DataLoaders`: we will use an `LMDataLoader` since we have a language modeling problem, not the usual fastai `TfmdDL`. # # In a `TfmdLists` you can access the elements of the training or validation set quite easily: tls.train[0],tls.valid[0] # They look the same but only because they begin and end the same way. We can see the shapes are different: tls.tfms(tls.train.items[0]).shape, tls.tfms(tls.valid.items[0]).shape # And we can have a look at both decodes using `show_at`: # + # show_at(tls.train, 0) # + # show_at(tls.valid, 0) # - # The fastai library expects the data to be assembled in a `DataLoaders` object (something that has a training and validation dataloader). We can get one by using the `dataloaders` method. We just have to specify a batch size and a sequence length. Since the GPT2 model was trained with sequences of size 1024, we use this sequence length (it's a stateless model, so it will change the perplexity if we use less): bs,sl = 8,1024 dls = tls.dataloaders(bs=bs, seq_len=sl) # Note that you may have to reduce the batch size depending on your GPU RAM. 
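Before moving on, a quick illustration of what a language-model batch contains: the target is simply the input sequence shifted one token to the right, so the model predicts each token's successor. This is a minimal sketch with made-up token ids, not the fastai API:

```python
# Hypothetical token ids standing in for one tokenized text; in the real
# pipeline these come from TransformersTokenizer above.
ids = [464, 3290, 318, 257, 922, 3290, 13]

# For language modeling, the model sees x and is trained to predict y,
# the same sequence shifted one position to the right.
x = ids[:-1]
y = ids[1:]

print(list(zip(x, y))[:3])  # each input token paired with its successor
```

This pairing is exactly what `LMDataLoader` assembles into batches for us (along with chunking texts to the chosen `seq_len`).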
# In fastai, as soon as we have a `DataLoaders`, we can use `show_batch` to have a look at the data (here texts for inputs, and the same text shifted by one token to the right for validation): dls.show_batch(max_n=2) # Another way to gather the data is to preprocess the texts once and for all and only use the transform to decode the tensors to texts: # + def tokenize(text): toks = tokenizer.tokenize(text) return tensor(tokenizer.convert_tokens_to_ids(toks)) tokenized = [tokenize(t) for t in progress_bar(all_texts)] # - # Now we change the previous `Tokenizer` like this: class TransformersTokenizer(Transform): def __init__(self, tokenizer): self.tokenizer = tokenizer def encodes(self, x): return x if isinstance(x, Tensor) else tokenize(x) def decodes(self, x): return TitledStr(self.tokenizer.decode(x.cpu().numpy())) # In the <code>encodes</code> method, we still account for the case where we get something that's not already tokenized, just in case we were to build a dataset with new texts using this transform. tls = TfmdLists(tokenized, TransformersTokenizer(tokenizer), splits=splits, dl_type=LMDataLoader) dls = tls.dataloaders(bs=bs, seq_len=sl) # And we can check it still works properly for showing purposes: dls.show_batch(max_n=2) # ### Fine-tuning the model # The HuggingFace model will return a tuple in outputs, with the actual predictions and some additional activations (should we want to use them in some regularization scheme). To work inside the fastai training loop, we will need to drop those using a `Callback`: we use those to alter the behavior of the training loop. # # Here we need to write the event `after_pred` and replace `self.learn.pred` (which contains the predictions that will be passed to the loss function) by just its first element. In callbacks, there is a shortcut that lets you access any of the underlying `Learner` attributes so we can write `self.pred[0]` instead of `self.learn.pred[0]`. 
That shortcut only works for read access, not write, so we have to write `self.learn.pred` on the right side (otherwise we would set a `pred` attribute in the `Callback`). class DropOutput(Callback): def after_pred(self): self.learn.pred = self.pred[0] # Of course we could make this a bit more complex and add some penalty to the loss using the other part of the tuple of predictions, like the `RNNRegularizer`. # # Now, we are ready to create our `Learner`, which is a fastai object grouping data, model and loss function and handles model training or inference. Since we are in a language model setting, we pass perplexity as a metric, and we need to use the callback we just defined. Lastly, we use mixed precision to save every bit of memory we can (and if you have a modern GPU, it will also make training faster): learn = Learner(dls, model, loss_func=CrossEntropyLossFlat(), cbs=[DropOutput], metrics=Perplexity()).to_fp16() # We can check how good the model is without any fine-tuning step (spoiler alert, it's pretty good!) learn.validate() # This lists the validation loss and metrics (so 26.6 as perplexity is kind of amazing). # # Now that we have a `Learner` we can use all the fastai training loop capabilities: learning rate finder, training with 1cycle etc... learn.lr_find() # The learning rate finder curve suggests picking something between 1e-4 and 1e-3. learn.fit_one_cycle(1, 1e-4) # Now with just one epoch of fine-tuning and not much regularization, our model did not really improve since it was already amazing. To have a look at some generated texts, let's take a prompt that looks like a wikipedia article: df_valid.head(1) # Article seems to begin with new line and the title between = signs, so we will mimic that: prompt = "\n = Unicorn = \n \n A unicorn is a magical creature with a rainbow tail and a horn" # The prompt needs to be tokenized and numericalized, so we use the same function as before to do this, before we use the `generate` method of the model. 
prompt_ids = tokenizer.encode(prompt) inp = tensor(prompt_ids)[None].cuda() inp.shape preds = learn.model.generate(inp, max_length=40, num_beams=5, temperature=1.5) tokenizer.decode(preds[0])
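As an aside on what `generate` is doing under the hood: it repeatedly scores the possible next tokens and appends a choice until the length limit (greedily, or tracking several candidates with beam search as above). A toy greedy loop with a stand-in scoring function — no real model or HuggingFace API involved, purely for intuition:

```python
def fake_next_token_logits(seq):
    # Stand-in for a model forward pass over a 10-token vocabulary:
    # deterministically "prefer" token (last_id + 1) % 10, for illustration only.
    scores = [0.0] * 10
    scores[(seq[-1] + 1) % 10] = 1.0
    return scores

def greedy_generate(prompt_ids, max_length):
    seq = list(prompt_ids)
    while len(seq) < max_length:
        logits = fake_next_token_logits(seq)
        # greedy decoding = argmax over the vocabulary at each step
        seq.append(max(range(len(logits)), key=logits.__getitem__))
    return seq

out = greedy_generate([3, 7], max_length=6)
print(out)  # → [3, 7, 8, 9, 0, 1]
```

The real `generate` call additionally batches prompts, caches past activations, and applies sampling controls like `temperature`.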
nbs/39_tutorial.transformers.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # STELLARSTRUC-NG.IPYNB -- Solve equations of stellar structure in NG # + ### IMPORT STUFF ### import numpy as np from scipy.interpolate import interp1d from scipy.integrate import odeint import matplotlib.pyplot as plt from crust import crust G = 6.674e-8 # Newton's constant in cgs units c = 2.998e10 # speed of light in cm/s Msun = 1.988e33 # solar mass in g rhonuc = 2.7e14 # nuclear density in g/cm^3 # + ### MAKE SOME HELPFUL UTILITIES ### def geteos(eospath,eosname): # import tabulated EoS data eos = np.genfromtxt(eospath+eosname+".dat") # EoS data (rho=mass density, p=pressure/c^2, mu=total energy density/c^2) in g/cm^3 [rhodat,mudat,pdat] = crust(eos) # affix low-density crust EoS, return (rho,mu,p) in units of rhonuc return [rhodat, rhodat, pdat] # set mu = rho for Newtonian gravity def intpeos(rhodat,mudat,pdat): # interpolate full EoS from tabulated data pmuintp = interp1d(mudat,pdat,kind='linear',bounds_error=False,fill_value=0.) dpdmumuintp = interp1d(mudat,np.gradient(pdat)/np.gradient(mudat),kind='linear',bounds_error=False,fill_value=0.) def p(mu): # pressure as a function of total energy density return pmuintp(mu) def dpdmu(mu): # sound speed squared return dpdmumuintp(mu) murhointp = interp1d(rhodat,mudat,kind='linear',bounds_error=False,fill_value=0.) 
def Mu(rho): # total energy density as a function of rest-mass energy density, for calculating central value of total energy density return murhointp(rho) return [p, dpdmu, Mu] # + ### DEFINE KEY FUNCTIONS ### def hydro(y,r): # condition of hydrostatic equilibrium mu, m = y return -(mu)*(m)/(r**2) # note that we are using G=c=1 units in this code def mass(y,r): # defining equation for the mass mu, m = y return 4.*np.pi*r**2*mu def struceqs(y,r): # implement equations of stellar structure as a set of coupled ODEs return hydro(y,r), mass(y,r) # + ### PROVIDE INPUT PARAMETERS ### eosname = "APR4" # SET EQUATION OF STATE HERE rhoc = 1e-5 # SET CENTRAL MASS DENSITY HERE eospath = "./" # path to EoS data files stp = 1e-4 # starting step for numerical integration pts = 5e3 # number of points at which to evaluate numerical integration tol = 1e-6 # tolerance for surface finding algorithm # + ### RUN CODE ### [rhodat,mudat,pdat] = geteos(eospath,eosname) # get tabulated EoS data in units of rhonuc [p, dpdmu, Mu] = intpeos(rhodat,mudat,pdat) # interpolate full EoS p(mu), dpdmu(mu), Mu(rho) from tabulated data muc = Mu(rhoc) # calculate central total energy density from central mass density y0 = [muc,4.*np.pi*stp**3*muc/3.] 
# implement boundary conditions at center of star rlist = np.linspace(stp,10.,int(pts)) # list radial points at which to evaluate numerical integration ys = np.zeros((len(rlist),2)) # create array to store values of functions at evaluation points ys[0] = y0 # store central boundary values Rsol = rlist[-1] # to initialize search, set maximum possible surface location to be furthest radial evaluation point for i in range(len(rlist)-1): # integrate to each radial evaluation point, check if p = 0, continue if not, break if yes rs = [rlist[i],rlist[i+1]] # current integration interval y = odeint(struceqs,ys[i],rs) # do numerical integration ys[i+1] = y[-1] # save solution for functions pressure = ys[i+1][0] # extract pressure if (pressure < tol or pressure != pressure): # check if pressure vanishes Rsol = rs[0] # if so, define stellar surface to lie at current location break rlist = rlist[0:i+1] # truncate list of radial points at surface r=R musoldat = ys[0:i+1,0] # record solution for mu(r) msoldat = ys[0:i+1,1] # record solution for m(r) musol = interp1d(rlist,musoldat,kind='linear') # interpolate full solution for mu(r) from tabulation msol = interp1d(rlist,msoldat,kind='linear',bounds_error=False,fill_value=msoldat[-1]) # interpolate full solution for m(r) from tabulation psol = interp1d(rlist,p(musoldat),kind='linear') # interpolate full solution for p(r)=p(mu(r)) from tabulation Msol = msol(Rsol) # evaluate total mass of star M = m(R) # + ### OUTPUT RESULTS ### plt.figure(1,(15,10)) # plot mu(r), p(r), m(r) plt.plot(rlist/Rsol,musol(rlist)/muc,c='black',marker='.',label='mu/mu_c') plt.plot(rlist/Rsol,psol(rlist)/p(muc),c='limegreen',marker='.',label='p/p_c') plt.plot(rlist/Rsol,msol(rlist)/Msol,c='lightcoral',marker='.',label='m/M') plt.xlabel('r/R') plt.xlim(0.,1.) plt.ylim(0.,1.) 
plt.legend() plt.show() R = Rsol*c/(1e5*(G*rhonuc)**0.5) # convert R from code units to km M = Msol*c**3/(G*(G*rhonuc)**0.5*Msun) # convert M from code units to solar masses print 'An {0}-star with rho_c = {1} rho_nuc has a mass of M = {2} M_Sun and a radius of R = {3} km.'.format(eosname,rhoc,M,R)
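The unit conversions in the last cell can be sanity-checked on their own: one code unit of length is c/sqrt(G*rhonuc) cm, and one code unit of mass is c^3/(G^(3/2)*rhonuc^(1/2)) g. A standalone check using the constants defined at the top of the notebook:

```python
# Constants as defined at the top of the notebook (cgs units).
G = 6.674e-8       # Newton's constant
c = 2.998e10       # speed of light in cm/s
Msun = 1.988e33    # solar mass in g
rhonuc = 2.7e14    # nuclear density in g/cm^3

# Same conversion factors as in the output cell above.
length_unit_km = c / (1e5 * (G * rhonuc) ** 0.5)            # km per code unit, ~70.6
mass_unit_Msun = c ** 3 / (G * (G * rhonuc) ** 0.5 * Msun)  # Msun per code unit, ~47.9

print(length_unit_km, mass_unit_Msun)
```

So a star whose surface is found at r ~ 0.1 in code units has a radius of roughly 7 km, which is a useful order-of-magnitude check on the integration.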
lecture2/StellarStruc-NG.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import matplotlib.pyplot as plt import pandas as pd from sklearn.cluster import KMeans import numpy as np stops = pd.read_csv('dados/stops.txt') stops.info() id1_stop_lat = stops plt.scatter(id1_stop_lat.stop_lat, id1_stop_lat.stop_lon) res = stops[:2] # + pycharm={"name": "#%%\n"} rotas = pd.read_csv('dados/routes.txt') rotas.info() # - rota1 = rotas[:1] rota1 trips = pd.read_csv('dados/trips.txt') trips.info() trip = trips[:1] trip # + import math d2r = 0.017453292519943295769236 dlong = (-46.410291 - (-46.416247)) * d2r dlat = (-23.543302 - (-23.541756)) * d2r temp_sin = math.sin(dlat/2.0) temp_cos = math.cos(-23.541756 * d2r) temp_sin2 = math.sin(dlong/2.0) a = (temp_sin * temp_sin) + (temp_cos * temp_cos) * (temp_sin2 * temp_sin2) c = 2.0 * math.atan2(math.sqrt(a), math.sqrt(1.0 - a)) result = 6368.1 * c result # + def removerCincoMetro(latInicial,longInicial, latFinal,longFinal): d2r = 0.017453292519943295769236 dlong = (longFinal - longInicial) * d2r dlat = (latFinal - latInicial) * d2r temp_sin = math.sin(dlat / 2.0) temp_cos = math.cos(latInicial * d2r) temp_sin2 = math.sin(dlong / 2.0) a = (temp_sin * temp_sin) + (temp_cos * temp_cos) * (temp_sin2 * temp_sin2) c = 2.0 * math.atan2(math.sqrt(a), math.sqrt(1.0 - a)) return (6368.1 * c) < 0.005 count = 0 # number of points within 5 meters size = stops.count() size # + pycharm={"name": "#%%\n"} adj = ["red", "big", "tasty"] for x in adj: for y in adj: print(x, y) # + pycharm={"name": "#%%\n"} # -
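The cell above defines `removerCincoMetro` (a haversine-style test for two points lying within 5 meters) but never applies it. A self-contained sketch of counting near-duplicate stop pairs with that logic might look like this — the coordinates are made up, and the helper mirrors the same cos(lat1)^2 approximation used above:

```python
import math

def within_five_meters(lat1, lon1, lat2, lon2):
    # Same computation as removerCincoMetro above, including its
    # cos(lat1)^2 approximation; True when the two points are < 0.005 km apart.
    d2r = 0.017453292519943295769236
    dlong = (lon2 - lon1) * d2r
    dlat = (lat2 - lat1) * d2r
    temp_cos = math.cos(lat1 * d2r)
    a = math.sin(dlat / 2.0) ** 2 + (temp_cos * temp_cos) * math.sin(dlong / 2.0) ** 2
    c = 2.0 * math.atan2(math.sqrt(a), math.sqrt(1.0 - a))
    return (6368.1 * c) < 0.005

stops_coords = [  # hypothetical (stop_lat, stop_lon) rows, not from stops.txt
    (-23.541756, -46.416247),
    (-23.541757, -46.416248),  # well under a meter from the first stop
    (-23.543302, -46.410291),  # ~630 m away, as computed in the cell above
]

count = sum(
    1
    for i in range(len(stops_coords))
    for j in range(i + 1, len(stops_coords))
    if within_five_meters(*stops_coords[i], *stops_coords[j])
)
print(count)  # → 1
```

On the real `stops` DataFrame this pairwise loop would be O(n^2), so for the full dataset a spatial index or grid bucketing would be the practical route.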
CittaMobiData.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # AIAA 2020 figures related to measurement of electrical characteristics (Figure 2 and 3) # + import os import numpy as np import pandas as pd import xarray as xr import matplotlib as mpl import matplotlib.pyplot as plt import pickle from pandas import Timestamp import matplotlib.dates as mdates from mhdpy import * import mhdpy import pytz # %matplotlib inline mpl.rcParams.update({'font.size': 18, 'timezone': pytz.timezone('America/Los_Angeles')}) # + finalanalysisfolder = os.getcwd() #Folder with notebooks dsst = mhdpy.load.loadprocesseddata(os.path.join(finalanalysisfolder, 'Data', 'dsst')) with open(os.path.join(finalanalysisfolder, 'Data', 'da_ct.pickle'), 'rb') as file: da_ct = pickle.load(file) ds_nhr = dsst['nhr'] # - # ## 07-01 time and I-V plots da_ct_0701, da_ct_0701_stack = analysis.ct.reset_da_ct(da_ct.sel(date='2020-07-01'), keep_attrs=True) region = analysis.ct.get_region(da_ct_0701, buffer = 0.04) # + #choose the dataarrays to be plotted ds = xr.merge([dsst['hvof_input_calcs']['Kwt_hvof'], dsst['hvof_input_calcs']['totalmassflow_hvof'],ds_nhr['V'],ds_nhr['J']]).sel(time=region).dropna('time') fig = plot.common.tc_plot(ds, da_ct_0701_stack, yspace=3, grid=False) fig.get_axes()[0].get_legend().set_bbox_to_anchor([0,0,1.5,1]) fig.get_axes()[3].set_ylim(0,2) fig.get_axes()[3].xaxis.set_major_formatter(mdates.DateFormatter('%m-%d %H:%M')) for ax in fig.get_axes(): ax.set_xlabel(None) #Manual Tweaks fig.set_size_inches(13,10) fig.get_axes()[1].set_ylabel('TF [g/s]') fig.get_axes()[2].set_ylabel('Voltage [V]') fig.get_axes()[3].set_ylabel('Current \n Density ($A/cm^2$)') # + ds = analysis.ct.assign_tc_general(ds_nhr, da_ct_0701) da_mean, da_std = analysis.gen.bin_gen(ds, curname='J', voltname='V',bins = np.arange(-5,605,1), min_points=5) 
da_mean da_mean = da_mean.stack(ct=('Kwt','tf')).dropna('ct','all') da_std = da_std.stack(ct=('Kwt','tf')).dropna('ct','all') da_mean.coords['ct'].attrs =dict(long_name='Test Case (K wt%, TF)') # - plot.common.xr_errorbar(da_mean,da_std, huedim='ct', capsize =5) # plt.gca().get_legend().remove() plt.gcf().set_size_inches(6, 12) # # All dates I-V plots ylim = (0,0.25) # ## No seed # # The no seed cases were taken at slightly different total flows than the seeded cases. We take the mean over tf just to remove the tf dimension; the text needs to specify which tf corresponds to which case. # # + with open(os.path.join(finalanalysisfolder, 'Data', 'da_ct_noseed.pickle'), 'rb') as file: da_ct_noseed = pickle.load(file) ds_noseed = analysis.ct.assign_tc_general(ds_nhr,da_ct_noseed)#.sel(Kwt=0) da_mean, da_std = analysis.gen.bin_gen(ds_noseed , curname='J', voltname='V') da_mean = da_mean.sel(voltage=slice(0,100)) da_std = da_std.sel(voltage=slice(0,100)) # - g = da_mean.plot(hue='date', col='tf', marker ='o', ylim= ylim) plot.dropna(g) # ## seeded cases combined artificially with Kwt = 0 # # Artificially setting tf of the no seed dataset to the middle seeded case tf (12.96). 
This should be mentioned in the text # + ds_noseed_combinetf = ds_noseed.mean('tf', keep_attrs = True) da_mean, da_std = analysis.gen.bin_gen(ds_noseed_combinetf, curname='J', voltname='V') g = da_mean.plot(hue='date', marker ='o', ylim= ylim) plot.dropna(g) # + ds_seed = analysis.ct.assign_tc_general(ds_nhr,da_ct) ds_noseed_altertf = ds_noseed_combinetf.assign_coords(tf=ds_seed.coords['tf'].values[1]).expand_dims('tf') ds = xr.merge([ds_noseed_altertf,ds_seed]) da_mean, da_std = analysis.gen.bin_gen(ds, curname='J', voltname='V') da_mean = da_mean.sel(voltage=slice(0,100)) da_std = da_std.sel(voltage=slice(0,100)) # - g = da_mean.sel(Kwt=0.1,method='nearest').plot(col='tf', hue='date', marker='o', sharey=True, ylim= ylim) plot.dropna(g) g = da_mean.sel(tf=12.9,method='nearest').plot(col='Kwt', hue='date', marker='o', sharey=True, ylim= ylim) plot.dropna(g) # ### with error bars # + coldim = 'Kwt' huedim = 'date' m = da_mean.sel(tf=12.9,method='nearest') s = da_std.sel(tf=12.9,method='nearest') g = xr.plot.FacetGrid(m,col=coldim,figsize = (18,3)) axes_list=g.axes.flatten() for i, ax in enumerate(axes_list): d = g.name_dicts.flatten()[i] plot.common.xr_errorbar_axes(m.sel(d).drop(coldim), s.sel(d).drop(coldim), ax, huedim=huedim, capsize=5) if i!=0: ax.set_ylabel('') if i != 0: ax.get_legend().remove() # else: # ax.get_legend().set_bbox_to_anchor([0,0,2,1]) ax.set_title('K wt%' + ' = ' + str(d[coldim])) ax.set_ylim(ylim) plot.dropna(g) # + coldim = 'tf' huedim = 'date' m = da_mean.sel(Kwt=0.1,method='nearest') s = da_std.sel(Kwt=0.1,method='nearest') # g = xr.plot.FacetGrid(m,col=coldim,figsize = (13.33,3)) g = xr.plot.FacetGrid(m,col=coldim,figsize = (12,3)) axes_list=g.axes.flatten() for i, ax in enumerate(axes_list): d = g.name_dicts.flatten()[i] plot.common.xr_errorbar_axes(m.sel(d).drop(coldim), s.sel(d).drop(coldim), ax, huedim=huedim, capsize=5) if i!=0: ax.set_ylabel('') # if i != len(axes_list)-1: ax.get_legend().remove() # else: # 
ax.get_legend().set_bbox_to_anchor([0,0,2.2,1]) ax.set_title('TF' + ' = ' + str(d[coldim])) ax.set_ylim(ylim) plot.dropna(g) # - # ## Calculate resistance and average over dates # # Note that current 'I' is selected instead of 'J' # + da_mean, da_std = analysis.gen.bin_gen(ds, curname='I', voltname='V') da_mean_highV = da_mean.sel(voltage=slice(50,100)).drop(0.0, 'Kwt') resist = da_mean_highV.coords['voltage']/da_mean_highV resist = resist.mean('voltage', keep_attrs=True) resist.attrs = dict(long_name = '$R_{expt}$ (50-100V) ', units = 'ohms') resist = resist.where(resist>0) resist = resist.where(resist<5000) resist.name = 'resistance' # - resist_kohm = resist/1000 resist_kohm.attrs = dict(long_name='$R_{expt}$ (50-100V)', units ='Kohm') # + resist_kohm = resist_kohm.rename(tf='TF') resist_mean = resist_kohm.mean('date', keep_attrs=True) resist_std = resist_kohm.std('date', keep_attrs=True) # + plot.common.xr_errorbar(resist_mean, resist_std, huedim='TF') plt.ylabel('$R_{expt}$ (50-100V) \n [Kohm]') plt.xscale('log') plt.ylim(0,1.8) # -
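The resistance extraction above averages V/I over the 50-100 V window of the binned I-V curves. Stripped of the xarray bookkeeping, the computation amounts to the following NumPy sketch — the 2 kohm value and the perfectly ohmic curve are made up for illustration:

```python
import numpy as np

voltage = np.arange(0.0, 101.0, 1.0)   # bin centers in volts, as in bin_gen
true_R = 2000.0                        # hypothetical 2 kohm discharge resistance
current = voltage / true_R             # idealized ohmic mean current per bin (A)
current[0] = np.nan                    # I = 0 at V = 0: avoid dividing 0 by 0

window = (voltage >= 50) & (voltage <= 100)  # same 50-100 V window as above
resist = np.nanmean(voltage[window] / current[window])  # mean of V/I, in ohms
resist_kohm = resist / 1000.0
print(resist_kohm)  # → 2.0
```

The real data is noisy and non-ohmic at low voltage, which is why the analysis restricts to the high-voltage window and then filters out non-physical (negative or very large) resistances.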
public_data/AIAA 2020/.ipynb_checkpoints/Figures_ElectricalCharacteristics-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/function%20documentation%20generation/ruby/large_model.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="c9eStCoLX0pZ" # **<h3>Predict the documentation for ruby code using codeTrans multitask training model</h3>** # <h4>You can make free prediction online through this # <a href="https://huggingface.co/SEBIS/code_trans_t5_large_code_documentation_generation_ruby_multitask">Link</a></h4> (When using the prediction online, you need to parse and tokenize the code first.) # + [markdown] id="6YPrvwDIHdBe" # **1. Load necessary libraries including huggingface transformers** # + colab={"base_uri": "https://localhost:8080/"} id="6FAVWAN1UOJ4" outputId="ab7e1f06-f74f-40eb-b0d6-8d157506ca07" # !pip install -q transformers sentencepiece # + id="53TAO7mmUOyI" from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline # + [markdown] id="xq9v-guFWXHy" # **2. 
Load the token classification pipeline and load it into the GPU if avilabile** # + colab={"base_uri": "https://localhost:8080/", "height": 319, "referenced_widgets": ["7342fe3e7e794c78b62985a99e7849be", "ae01bba368c74c5eb098fa2e90763559", "678414ba44ff4bb19b8465368eb9706b", "<KEY>", "a41c8421fa524ce48299d1b16f2e0372", "90e3202511f74cdbb048eeb3c6441ec5", "<KEY>", "41f1a8dbb00044a586d6645d8eb0a5e6", "e50def523dea4dae82d2a79923a08e2d", "ef360b00a91544e98e21e80e2ad5d08c", "<KEY>", "7099bf0182ba4ad1af4f49663a3fc52a", "<KEY>", "c608723cabee454197b9cc652e0dd54a", "407aca1b19fc4edf9564f998df99a844", "<KEY>", "334f21f9b8e24545a70c0c050003b860", "4fa368aed6334934a813162cec3f2bec", "f425a28f792543298dbf3e692031defe", "<KEY>", "<KEY>", "<KEY>", "1bcd3ce9475d49929c0d99910bec0700", "0ba7cc1085fd4a29b43a1af5578f85e5", "309413ab83d243d1b1a52dd36cac3c89", "d2cedbc49a9043ca83b8af7831768c0a", "<KEY>", "<KEY>", "<KEY>", "ccdc3e408629488e9824fce8cf50f92c", "<KEY>", "<KEY>", "e3fd955c85ac4d2cb010a1c1509b9a17", "4fccdb1c58d647a6832faec46e4d2d95", "68581c4a9cee4e189970d4cd9f7eaf3c", "c0b264a5bc9c41d08c304f899350eaff", "<KEY>", "1e4572f50dd24eddbe4b25125c160425", "64c9a8d6d78f41a0969affa44ec60d90", "6ba258d97906446a90b6438715821d78"]} id="5ybX8hZ3UcK2" outputId="5006ff96-26ae-46eb-907c-674909564f0d" pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_ruby_multitask"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_ruby_multitask", skip_special_tokens=True), device=0 ) # + [markdown] id="hkynwKIcEvHh" # **3 Give the code for summarization, parse and tokenize it** # + id="nld-UUmII-2e" code = "def add(severity, progname, &block)\n return true if io.nil? 
|| severity < level\n message = format_message(severity, progname, yield)\n MUTEX.synchronize { io.write(message) }\n true\n end" #@param {type:"raw"} # + id="cJLeTZ0JtsB5" colab={"base_uri": "https://localhost:8080/"} outputId="b8f5225a-e709-4f3c-a947-b578bf646017" # !pip install tree_sitter # !git clone https://github.com/tree-sitter/tree-sitter-ruby # + id="hqACvTcjtwYK" from tree_sitter import Language, Parser Language.build_library( 'build/my-languages.so', ['tree-sitter-ruby'] ) RUBY_LANGUAGE = Language('build/my-languages.so', 'ruby') parser = Parser() parser.set_language(RUBY_LANGUAGE) # + id="LLCv2Yb8t_PP" def get_string_from_code(node, lines): line_start = node.start_point[0] line_end = node.end_point[0] char_start = node.start_point[1] char_end = node.end_point[1] if line_start != line_end: code_list.append(' '.join([lines[line_start][char_start:]] + lines[line_start+1:line_end] + [lines[line_end][:char_end]])) else: code_list.append(lines[line_start][char_start:char_end]) def my_traverse(node, code_list): lines = code.split('\n') if node.child_count == 0: get_string_from_code(node, lines) elif node.type == 'string': get_string_from_code(node, lines) else: for n in node.children: my_traverse(n, code_list) return ' '.join(code_list) # + id="BhF9MWu1uCIS" colab={"base_uri": "https://localhost:8080/"} outputId="e3a5ebf7-0a31-4971-f8f1-7565da6e2a66" tree = parser.parse(bytes(code, "utf8")) code_list=[] tokenized_code = my_traverse(tree.root_node, code_list) print("Output after tokenization: " + tokenized_code) # + [markdown] id="sVBz9jHNW1PI" # **4. Make Prediction** # + colab={"base_uri": "https://localhost:8080/"} id="KAItQ9U9UwqW" outputId="5c39cfb2-c24e-42a6-d3be-ca907e2a746f" pipeline([tokenized_code])
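The `my_traverse` helper above simply concatenates the text of the parse tree's leaves (treating `string` nodes as leaves) with spaces. The same idea, sketched on a hypothetical stand-in tree structure so it runs without tree-sitter:

```python
# Hypothetical stand-in for a parse tree: (type, text) for leaves,
# (type, [children]) for interior nodes. Not the tree-sitter API.
toy_tree = ('method', [
    ('def', 'def'),
    ('identifier', 'add'),
    ('body', [('identifier', 'severity'), ('identifier', 'progname')]),
    ('end', 'end'),
])

def collect_leaves(node, out):
    _, payload = node
    if isinstance(payload, str):   # leaf: keep its source text
        out.append(payload)
    else:                          # interior node: recurse into children
        for child in payload:
            collect_leaves(child, out)
    return ' '.join(out)

tokenized = collect_leaves(toy_tree, [])
print(tokenized)  # → def add severity progname end
```

This is why the tokenized code printed above reads like the original Ruby with normalized whitespace: the traversal discards the tree structure and keeps only the leaf tokens, which is the input format the CodeTrans pipeline expects.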
prediction/multitask/pre-training/function documentation generation/ruby/large_model.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + [markdown] tags=[] # # Intro # + [markdown] tags=[] # ## Download the Repo # ### Requirements # - [Jupyter](https://jupyter.org/install) # # ### Installing & Running # The following code will download this repo and open it in Jupyter so you can follow along and execute the code. # ```bash # git clone https://github.com/bmswens/Maneuver-ID-Introduction-Notebook.git # # # cd Maneuver-ID-Introduction-Notebook # pip install notebook # jupyter notebook 'Maneuver ID Introduction Notebook.ipynb' # ``` # # ## Website # https://maneuver-id.mit.edu/ # ## About # The U.S. Air Force released a dataset from Pilot Training Next (PTN) through the AI Accelerator of Air Force pilots and trainees flying in virtual reality simulators. In an effort to enable AI coaching and automatic maneuver grading in pilot training, the Air Force seeks to automatically identify and label each maneuver flown in this dataset from a catalog of around 30 maneuvers. Your solution helps advance the state of the art in flying training! # ## The Challenge(s) # # 1. Sorting "useful" vs "non-useful" data # 2. Separating "useful" data # 3. Identifying Maneuvers # 4. Grading Maneuvers # # ## Scope # This notebook aims to take you through seeing and understanding the data for the first time, all the way to creating a convolutional neural network aimed at answering challenge #1. 
# # ## Maneuvers # There are 18 maneuvers, but many have sub-classes, for a total of 29 maneuvers, listed below: # - maneuvers = [ 'UndershootingTPStall', # undershooting traffic pattern stall 'AileronRoll', '60SteepTurn', # turn performed at 60 deg bank 'BarrelRoll', 'SplitS', 'ILS', # Instrument landing system approach 'StraightIn', 'OverheadPattern', 'ClosedPullup', 'ELP-PEL', # Emergency landing pattern, precautionary emergency landing 'Cuban8', 'UnusualAttitudeNoseHigh', 'Loop', 'NoseLowRecovery', 'VerticalSalpha', 'VerticalSbravo', 'UnusualAttitudeNoseLow', 'IntentionalSpin', 'SlowFlight', 'Immelman', 'Lazy8', 'NoseHighRecovery', 'PowerOnStallNoseLowTurning', 'ELP-FL', # Emergency landing pattern, forced landing 'LandingAttitudeTPStall', # landing traffic pattern stall '45SteepTurn', # turn performed at 45 deg bank 'OvershootingTPStall', # overshooting traffic pattern stall 'Localizer', 'PowerOnStallNoseHighTurning' ] # + [markdown] tags=[] # More information on some of these maneuvers can be found here: http://maneuver-id.mit.edu/maneuvers-0 # + [markdown] tags=[] # # Data # + [markdown] tags=[] # ## Meta # Data is stored in .tsv format with the following headers and data types: # - headers = { "": int, # one-up numbering system "time (sec)": float, "xEast (m)": float, "yNorth (m)": float, "zUp (m)": float, "vx (m/s)": float, "vy (m/s)": float, "vz (m/s)": float, "head (deg)": float, "pitch (deg)": float, "roll (deg)": float } # + [markdown] tags=[] # ## Loading # Python provides the built-in `csv` module that we can use to import .tsv files. # # --- # **Caveat** # # The files may sometimes contain malformed rows whose values don't cast to the data type they're supposed to. In this tutorial we will skip those rows with the assumption that the remaining rows will still provide an intelligible track. # # This behavior can be modified in the `clean_row` function.
# # --- # + import csv def clean_row(row): """ This function is used to determine if a row should get added to the dataset. If this function returns a "truthy" value, the `load` function will append the returned value. Else if it returns a "falsey" value, it will skip the row. Inputs: row -- The input row as a dictionary Outputs: output -- Either: A. The cleaned version of the input row, as a dictionary B. A "falsey" value """ output = {} for header in row: value = row[header] try: output[header] = float(value) except (TypeError, ValueError): # non-numeric value; signal the caller to skip this row return False return output def load(path): """ This function takes in a file path and returns the content as a list of dictionaries. Inputs: path -- path to the .tsv file on the disk Outputs: output -- list of dictionaries """ output = [] with open(path) as incoming: reader = csv.DictReader(incoming, delimiter='\t') for row in reader: cleaned_row = clean_row(row) if cleaned_row: output.append(cleaned_row) return output # - data = load('flights/example.tsv') print(data[0]) # + [markdown] tags=[] # ## Visualizing # # Visualization is an important part of enabling the human portion of human-machine teaming. It allows us to quickly understand the data, and perhaps even verify that our models are on the right path. # # In this notebook, we'll use [Plotly](https://plotly.com/python/) for interactive visualizations, but [Matplotlib](https://matplotlib.org/) is another common library, especially in the scientific community. # + # %%capture # Install a pip package in the current Jupyter kernel import sys # !{sys.executable} -m pip install plotly # !{sys.executable} -m pip install chart_studio # !{sys.executable} -m pip install kaleido import plotly.graph_objects as go import plotly.offline as offline def visualize_3D( data, x_field="xEast (m)", y_field="yNorth (m)", z_field="zUp (m)", path=None ): """ Plots the data to an interactive 3D graph. Saves the graph to a file if path is provided.
Inputs: data -- List: A list of dictionaries containing the data to plot x_field -- String: The name of the key for X data y_field -- String: The name of the key for Y data z_field -- String: The name of the key for Z data path -- String: The path to save the file to on disk. Outputs: None """ x = [row[x_field] for row in data] y = [row[y_field] for row in data] z = [row[z_field] for row in data] plot = go.Scatter3d( x=x, y=y, z=z, marker=dict( size=2, color="black" ), line=dict( color='black', width=2 ) ) fig = go.Figure(data=plot) fig.update_layout( autosize=True, scene=dict( camera=dict( up=dict( x=0, y=0, z=1 ), eye=dict( x=0, y=1.0707, z=1, ) ), aspectratio = dict( x=1, y=1, z=0.7 ), aspectmode = 'manual' ), ) if path: if ".png" in path: fig.write_image(path) else: fig.write_html(path) fig.show() def visualize_2D(data, y_field, x_field=None, path=None): """ Plots the data to an interactive 2d graph. Saves the graph to a file if path is provided. Inputs: data -- List: A list of dictionaries containing the data to plot x_field -- String: The name of the key for X data, defaults to index of Y field y_field -- String: The name of the key for Y data path -- String: The path to save the file to on disk. 
Outputs: None """ y = [row[y_field] for row in data] if x_field: x = [row[x_field] for row in data] else: x = [index for index, _ in enumerate(y)] plot = go.Scatter( x=x, y=y, marker=dict( size=2, color="black" ), line=dict( color='black', width=2 ) ) fig = go.Figure(data=plot) fig.update_layout( autosize=True, scene=dict( camera=dict( up=dict( x=0, y=0, z=1 ), eye=dict( x=0, y=1.0707, z=1, ) ), aspectratio = dict( x=1, y=1, z=0.7 ), aspectmode = 'manual' ), ) if path: if ".png" in path: fig.write_image(path) else: fig.write_html(path) fig.show() # - visualize_3D(data) visualize_2D(data, "zUp (m)") # + [markdown] tags=[] # ## Understanding The Data # # --- # # **Caveat** # # The flight simulator that provided this data has some unique functions that are normally not possible in regular flight. # * Pilots can teleport the plane from the runway into the air # * Pilots can "snap" the plane to headings, pitches, and rolls that would normally be too large of a change # # --- # - # ### Time # Units: Seconds # # This column is the number of seconds since the flight began, often in intervals of roughly 0.1 seconds. # ### xEast, yNorth, zUp # Units: Meters # # These columns plot where the plane is at a given point in time and can be thought of as similar to latitude, longitude, and altitude. # ### vx, vy, vz # Units: Meters per second # # Stands for: Velocity in the given axis. # # These columns denote the current speed of the aircraft in a given axis. # ### Head # Units: Degrees # # This will help indicate the direction that the aircraft is flying in. Combined with a non-zero velocity, this field can be used to calculate where the next data point would be (xEast and yNorth). # ### Pitch # Units: Degrees # # This will help indicate if an aircraft is ascending or descending. Combined with a non-zero velocity, this field can be used to calculate the zUp of the next data point. # ### Roll # Units: Degrees # # Indicates the orientation of the aircraft.
0 is what we would generally relate with "right side up" and 180 would be "upside down." # # --- # # The following image can be used to help understand head (heading), pitch, and roll. # # --- # # ![pitch-heading-roll.png](https://www.researchgate.net/profile/Tsouknidas-Nikolaos-2/publication/220720660/figure/fig7/AS:668413135429636@1536373518155/Examples-of-Heading-Pitch-and-Roll-on-an-aircraft.png) # + [markdown] tags=[] # # Preprocessing / Data Wrangling # Preprocessing your data can sometimes be just as important as the actual model architecture itself. # # Below are some examples of preprocessing that could be done to the data. Not all will necessarily provide an impact, and some may even have negative results. These are just to provide examples. # + [markdown] tags=[] # ## Remove Runway Data # In this process we aim to (naively) remove datapoints that occur on or near the runway. # # Reasoning: A pilot is hopefully not performing maneuvers near the ground or runway, as that would be dangerous. # - def remove_runway_data(data, minimum=100): output = [] for row in data: if row["zUp (m)"] >= minimum: output.append(row) return output # + [markdown] tags=[] # ## Calculate Airspeed # In this process, we're going to combine the independent `vx (m/s)`, `vy (m/s)`, and `vz (m/s)` fields into a single airspeed field, then drop the three fields used to calculate it. # # Reason: The written parameters that define a good maneuver vs. a bad maneuver take airspeed into account in order to avoid stalls and exceeding the operating limitations of the aircraft. Airspeed = Ground Speed - Wind Speed. Our three velocities are relative to the ground, so in order to calculate the airspeed of the aircraft we have to assume that there is zero wind for the entirety of the flight. With zero wind, we now have Airspeed = Ground Speed. Treating the three velocities as components of a vector, we can calculate the magnitude of the vector to give us airspeed with a helper method.
Since our velocity units are given in m/s, we need to convert them to knots with a helper method. # # + def convert_ms_to_knot(x): # m/s * 60s/min * 60min/hr * nm/1852m return x * ((60 ** 2) / 1852) def calc_3d_vector_magnitude(x, y, z): # For help with understanding magnitude of a vector please visit link. # https://www.cuemath.com/magnitude-of-a-vector-formula/ return ((x ** 2) + (y ** 2) + (z ** 2)) ** 0.5 def calculate_airspeed(data): output = [] for row in data: vx = row["vx (m/s)"] vy = row["vy (m/s)"] vz = row["vz (m/s)"] magnitude = calc_3d_vector_magnitude(vx, vy, vz) row.pop("vx (m/s)", None) row.pop("vy (m/s)", None) row.pop("vz (m/s)", None) row["airspeed (knot)"] = convert_ms_to_knot(magnitude) output.append(row) return output # + [markdown] tags=[] # # Making A Model # Now we'll write a convolutional neural network (CNN) based on graphs of the `xEast` and `yNorth` fields (it will visualize as though we were looking at the flight path from a top-down perspective) to see if it can classify the flights in accordance with the labeled data. # # --- # # **Caveat** # # In order to accomplish this part of the notebook, you will have to have access to the dataset. # # --- # + # change these based on your file paths train_path = "flights/labeled/train" test_path = "flights/labeled/test" # percentage of training data to use as training vs validation train_percentage = 0.75 # + [markdown] tags=[] # ## Making a Dataset # Our dataset is going to be based off of the PNG representation of the flight, from a top-down perspective. # # Images will be saved to the `{path}_img` folder. # # You can modify this `Dataset` class to fit your needs.
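The `{path}_img` convention amounts to a simple path rewrite. A standalone sketch of the conversion the `Dataset` class performs (the flight path used here is hypothetical, for illustration only):

```python
import os

def tsv_to_png_path(full_path):
    """Mirror the notebook's convention: a .tsv in some folder maps to a
    same-named .png in a sibling folder with an '_img' suffix."""
    file_name = os.path.basename(full_path).replace(".tsv", ".png")
    folder = os.path.dirname(full_path) + "_img"
    return os.path.join(folder, file_name)

print(tsv_to_png_path("flights/labeled/train/good/01.tsv"))
# flights/labeled/train/good_img/01.png
```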
# - # %%capture # Install a pip package in the current Jupyter kernel import sys # !{sys.executable} -m pip install torch torchvision # + # built in import os import csv # 3rd party import torch import torchvision from PIL import Image import numpy as np class FlightsAsImageDataset(torch.utils.data.Dataset): """ This class will load and convert all flights to graphs as a .png The graphs are based on 'xEast' and 'yNorth' and represent a top-down view of the flight. Params: size - the height and width of the output .png make - whether or not to generate the image (false allows you to skip making the image if it already exists) """ def __init__(self, path, size=128, make=True): self.all_labels = [folder for folder in os.listdir(path) if 'img' not in folder] self.images = [] self.labels = [] self.size = size for label in self.all_labels: for f in os.listdir(os.path.join(path, label)): if f.startswith('.') or '.tsv' not in f: continue new_path = self.convert_to_png(os.path.join(path, label, f), make) self.images.append(new_path) self.labels.append(label) def __len__(self): return len(self.images) def __getitem__(self, idx): path = self.images[idx] img = Image.open(path).convert('RGB') array = np.array(img) label = self.labels[idx] if label == "good": label = 1 else: label = 0 label = torch.tensor(label, dtype=torch.float32) tensor = torchvision.transforms.ToTensor()(img) # tensor = tensor.unsqueeze(0) return tensor, label def convert_to_png(self, full_path, make): # flights/good_train/01.tsv -> flights/good_train_img/01.png file_name = os.path.basename(full_path) file_name = file_name.replace('.tsv', '.png') folder = os.path.dirname(full_path) folder += '_img' os.makedirs(folder, exist_ok=True) new_path = os.path.join(folder, file_name) if make: data = self.load(full_path) y = [row["yNorth (m)"] for row in data] x = [row["xEast (m)"] for row in data] plot = go.Scatter( x=x, y=y, marker=dict( size=2, color="black" ), line=dict( color='black', width=2 ) ) fig = 
go.Figure(data=plot) fig.update_layout( margin=go.layout.Margin( l=0, #left margin r=0, #right margin b=0, #bottom margin t=0, #top margin ), height=self.size, width=self.size, scene=dict( camera=dict( up=dict( x=0, y=0, z=1 ), eye=dict( x=0, y=1.0707, z=1, ) ), aspectratio = dict( x=1, y=1, z=0.7 ), aspectmode = 'manual' ), ) fig.update_xaxes(showticklabels=False, showgrid=False) fig.update_yaxes(showticklabels=False, showgrid=False) fig.write_image(new_path) return new_path @staticmethod def clean_row(row): output = {} for header in row: value = row[header] try: output[header.strip()] = float(value) except: return False return output def load(self, full_path): output = [] with open(full_path) as incoming: reader = csv.DictReader(incoming, delimiter='\t') for row in reader: cleaned_row = self.clean_row(row) if cleaned_row: output.append(cleaned_row) return output # + # load our datasets train_data = FlightsAsImageDataset(train_path) test_data = FlightsAsImageDataset(test_path) # split training data into training and validation train_size = int(len(train_data) * train_percentage) val_size = len(train_data) - train_size train_data, val_data = torch.utils.data.random_split(train_data, [train_size, val_size], generator=torch.Generator().manual_seed(42)) # - # send the datasets into data loaders train_data_loader = torch.utils.data.DataLoader( train_data, batch_size=1, num_workers=4 ) val_data_loader = torch.utils.data.DataLoader( val_data, batch_size=1, num_workers=4 ) test_data_loader = torch.utils.data.DataLoader( test_data, batch_size=1, num_workers=4 ) # ## Building The Model # Here we'll build a CNN to train on our dataset. 
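One detail worth deriving up front is the input size of the first fully-connected layer. For 128×128 images, each 5×5 convolution (no padding) shrinks each side by 4 and each 2×2 max pool halves it: 128 → 124 → 62 → 58 → 29. With 12 output channels, the flattened size is 12 · 29 · 29 = 10092. A quick check:

```python
def flat_features(size=128, convs=((3, 6, 5), (6, 12, 5))):
    """Trace the spatial size through 5x5 convs (no padding) followed by
    2x2 max pools, returning the flattened count fed to the linear layer."""
    channels = convs[0][0]
    for _, out_channels, kernel in convs:
        size = (size - kernel + 1) // 2  # conv shrinks by kernel-1, pool halves
        channels = out_channels
    return channels * size * size

print(flat_features())  # 10092
```

Running this with a different image size tells you what the first `nn.Linear` layer would need to become if you resize the dataset images.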
# + # 3rd party from torch import nn class CNN(nn.Module): """ A basic convolutional neural network """ def __init__(self): super(CNN, self).__init__() # reused self.pool = nn.MaxPool2d(2, 2) self.flatten = torch.flatten self.relu = nn.ReLU() # layer one self.conv1 = nn.Conv2d(3, 6, 5) # layer two self.conv2 = nn.Conv2d(6, 12, 5) # connected layers self.linear1 = nn.Linear(10092, 512) self.linear2 = nn.Linear(512, 128) self.linear3 = nn.Linear(128, 1) def forward(self, x): # first set of layers x = self.conv1(x) x = self.relu(x) x = self.pool(x) # second set of layers x = self.conv2(x) x = self.relu(x) x = self.pool(x) # connected x = torch.flatten(x) x = self.linear1(x) x = self.relu(x) x = self.linear2(x) x = self.relu(x) x = self.linear3(x) return x # - device = 'cuda' if torch.cuda.is_available() else 'cpu' print(f'Using {device} device') model = CNN().to(device) # ## Training the Model # Here we'll do 5 passes at training the model, saving the best models. # + epochs = 5 learning_rate = 0.01 loss = nn.MSELoss() optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate) # + # training loop def train_one_epoch(model, data, loss, optimizer): total_loss = 0. model.train() i = 0 for img, label in data: pred_label = model(img) l = loss(pred_label, label) l.backward() optimizer.step() optimizer.zero_grad() total_loss += l.item() i += 1 if i % 500 == 0: print(f'train: img {i} avg loss - {total_loss / i}') return total_loss / len(data) def test_one_epoch(model, data, loss): total_loss = 0. 
model.eval() i = 0 for img, label in data: pred_label = model(img) l = loss(pred_label, label) total_loss += l.item() i += 1 if i % 100 == 0: print(f'validate: img {i} avg loss - {total_loss / i}') return total_loss / len(data) best_loss = None print(f'training size - {len(train_data_loader)} validate size - {len(val_data_loader)}') for epoch in range(epochs): train_loss = train_one_epoch(model, train_data_loader, loss, optimizer) val_loss = test_one_epoch(model, val_data_loader, loss) print(f'{epoch + 1}: train - {train_loss:.7f} test - {val_loss:.7f}') if best_loss is None or val_loss < best_loss: best_loss = val_loss model_path = f'model_{epoch}' torch.save(model.state_dict(), model_path) # - # ## Checking the Model # Next we'll check the overall accuracy of the model on our testing data. correct = 0 for img, label in test_data_loader: model.eval() pred = model(img) rounded_pred = round(pred.item()) if rounded_pred == label: correct += 1 print(f'{(correct / len(test_data_loader)) * 100:.4f}% correct') # Now we'll load up 5 random flights and see how our model did. # + from IPython.display import Image as PyImage import random for _ in range(5): index = random.randint(0, len(test_data) - 1) path = test_data.images[index] img, label = test_data[index] predicted = model(img.unsqueeze(0)) display(PyImage(filename=path)) print(f'Predicted: {round(predicted.item())}; Actual: {label}') # - # # The End # After this, you should now have all the resources necessary to start tweaking the model, or dataset, or both to produce a higher degree of accuracy. # # Listed below are a few ideas to get you started.
# ## Ideas # - Change the size of the images (512x512 is slower but increases to ~96% accuracy) # - Create a deeper CNN # - Add the altitude over time as another layer to the CNN # - Modify the dataset to instead analyze the underlying data # - Apply transformations (such as rotation or cropping) to the images # ## Contributors # - [<NAME>](https://github.com/bmswens) - Author # - <NAME> - Feedback # - <NAME> - Feedback # - <NAME> - Feedback # - [<NAME>](https://github.com/chantzyaz) - Feedback # + [markdown] tags=[] # ## Acknowledgment # If you would like to acknowledge this notebook in your paper or report, we recommend the following: # # > The authors acknowledge the Maneuver ID Introduction Notebook for providing learning resources that have contributed to the research results reported within this paper/report. # # Thank you for acknowledging us – we appreciate it.
Maneuver ID Introduction Notebook.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # DFS Golf Analysis # This program is meant to read from various sources and explore possibilities of modelling golfer performance at various PGA courses to make money on DraftKings or FanDuel. # # First import the necessary libraries: # + import pandas as pd import numpy as np # Input necessary setup variables engine = 'FanDuel' # Enter FanDuel or DraftKings # - # Prep for metadata by creating dictionaries for all the location codes that are encountered in the data. These will be used later to create cleaner location information about each tournament which will likely be used as a feature for estimating player performance. # + USabbrevs = {'al': 'Alabama','ak': 'Alaska', 'az': 'Arizona', 'ar': 'Arkansas', 'ca': 'California', 'co': 'Colorado', 'ct': 'Connecticut', 'de': 'Delaware', 'fl': 'Florida', 'ga': 'Georgia', 'hi': 'Hawaii', 'id': 'Idaho', 'il': 'Illinois', 'in': 'Indiana', 'ia': 'Iowa', 'ks': 'Kansas', 'ky': 'Kentucky', 'la': 'Louisiana', 'me': 'Maine', 'md': 'Maryland', 'ma': 'Massachusetts', 'mi': 'Michigan', 'mn': 'Minnesota', 'ms': 'Mississippi', 'mo': 'Missouri', 'mt': 'Montana', 'ne': 'Nebraska', 'nv': 'Nevada', 'nh': 'New Hampshire', 'nj': 'New Jersey', 'nm': 'New Mexico', 'ny': 'New York', 'nc': 'North Carolina', 'nd': 'North Dakota', 'oh': 'Ohio', 'ok': 'Oklahoma', 'or': 'Oregon', 'pa': 'Pennsylvania', 'ri': 'Rhode Island', 'sc': 'South Carolina', 'sd': 'South Dakota', 'tn': 'Tennessee', 'tx': 'Texas', 'ut': 'Utah', 'vt': 'Vermont', 'va': 'Virginia', 'wa': 'Washington', 'wv': 'West Virginia', 'wi': 'Wisconsin', 'wy': 'Wyoming'} CANabbrevs = {'on': 'Ontario'} UKabbrevs = {'england': 'England', 'nir': 'Northern Ireland', 'eng': 'England'} trans = {'jpn':'Japan','mex':'Mexico', 'pur':'Puerto 
Rico','aus':'Australia','bah':'Bahamas','ber':'Bermuda', 'chn':'China','kor':'South Korea','can':'Canada','dom':'Dominican Republic','mas':'Malaysia'} # Combine all dicts all_abbrevs = {} _ = [all_abbrevs.update(d) for d in (USabbrevs, CANabbrevs, UKabbrevs, trans)] # Prep Location Breakdown for tournament data def loc_breakdown(row): # Look for a match with the dictionaries listed at the top of this notebook if row['end_loc'] in USabbrevs.values(): # located in USA row['City'] = row['beg_loc'] row['State'] = row['end_loc'] row['Country'] = 'United States' elif row['end_loc'] in CANabbrevs.values(): # located in Canada row['City'] = row['beg_loc'] row['State'] = row['end_loc'] row['Country'] = 'Canada' elif row['end_loc'] in UKabbrevs.values(): # located in United Kingdom row['City'] = row['beg_loc'] row['State'] = row['end_loc'] row['Country'] = 'United Kingdom' elif row['end_loc'] in trans.values(): # located elsewhere row['City'] = row['beg_loc'] row['Country'] = row['end_loc'] else: pass return row # Prep Zipcodes for use in tournament data def find_zip(row): if row['Country']=='United States': try: return zips[(zips.state_name==row['State']) & (zips.city==row['City'])].index[0] except: return np.nan else: return np.nan # - # Replace missing cities with actual city name from the USPS website. Read in csv file with zip code data for use in creating a location region feature # + missing_cities = {'Kapalua': 'Lahaina', 'Ft. Worth': 'Fort Worth', 'Auburn/Opelika': 'Opelika', 'St. 
Louis': 'Saint Louis', 'Hilton Head': 'Hilton Head Island', 'Avondale': 'Westwego', 'Erin': 'Hartford', 'Blaine': 'Minneapolis', 'McKinney': 'Mckinney', 'Kiawah Island': 'Johns Island'} # Read and modify the zips zips = pd.read_csv('uszips.csv', index_col='zip') # - # Create functions for easy extraction of data from various internet sources which include: # [sportsdata.io](https://sportsdata.io/developers/api-documentation/golf#) # # The data can be retrieved in the following formats: # - entire season data # - specific player data # - data from all players # # Add new functions here as new data sources are discovered # + # Setup calls to data api = 'de4dc63e16ee485b9df3bb79146bdcc1' # Individual seasons def season_data(season): return pd.read_json('https://api.sportsdata.io/golf/v2/json/Tournaments/{}?key={}'.format(str(season),api)) # Individual players def player_data(player_id): return pd.read_json('https://api.sportsdata.io/golf/v2/json/Player/{}?key={}'.format(str(player_id),api)) # All players def all_players_data(): players = pd.read_json('https://api.sportsdata.io/golf/v2/json/Players?key={}'.format(api)).replace({None: np.nan}) players = players[players['DraftKingsName'].notnull() & players['FanDuelName'].notnull()] # strip out the nulls for col in players.select_dtypes(include='float').columns: players[col] = players[col].astype(pd.Int32Dtype()) # convert float columns to ints cols_to_drop = ['FantasyAlarmPlayerID','FantasyDraftName','FantasyDraftPlayerID','PhotoUrl', 'RotoWirePlayerID', 'RotoworldPlayerID', 'SportRadarPlayerID', 'YahooPlayerID'] if engine.lower() == 'draftkings': cols_to_drop.extend(['FanDuelName','FanDuelPlayerID']) players = players.drop(cols_to_drop,axis=1).set_index('DraftKingsName') elif engine.lower() == 'fanduel': cols_to_drop.extend(['DraftKingsName','DraftKingsPlayerID']) players = players.drop(cols_to_drop,axis=1).set_index('FanDuelName') return players # All tournaments from current season (same as season_data(2020)) def
tournament_data(): col_order=['StartDate', 'StartDateTime', 'EndDate', 'City', 'State', 'Country', 'Location', 'ZipCode', 'TimeZone', 'Covered', 'Format', 'IsInProgress', 'IsOver', 'Name', 'Par', 'Purse', 'Rounds', 'TournamentID', 'Venue', 'Yards', 'Canceled'] # modify the order that the data is shown tourn = (pd.read_json('https://api.sportsdata.io/golf/v2/json/Tournaments?key={}'.format(api)) .replace({None: np.nan}) # replace Nones with NaNs .dropna(subset=['Location']) # drop row with NaN in column Location .loc[:,col_order]) # only use columns listed above # Convert the dates to datetime tourn.EndDate = pd.to_datetime(tourn.EndDate) # Add columns for location breakdown tourn['beg_loc'] = tourn['Location'].str.extract('^([A-Za-z0-9 /\.]+),') # Extract before comma tourn['end_loc'] = tourn['Location'].str.extract(', ([A-Za-z0-9 ]+)$') # Extract after last comma tourn['end_loc'] = tourn['end_loc'].str.lower().replace(all_abbrevs) # Replace with adjustments # Convert data to proper columns and drop unnecessary columns tourn = tourn.apply(loc_breakdown, axis=1).drop(['Location','beg_loc','end_loc'],axis=1) # Fill in zip code column tourn['City'] = tourn['City'].replace(missing_cities) # replace in missing_cities so all zip codes can be found tourn['ZipCode'] = tourn.apply(find_zip, axis=1).astype('Int64') return tourn # - # ### Test tournament data function here tournament_data()
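The two `str.extract` patterns used in `tournament_data` can be exercised on their own. This sketch reproduces them with the `re` module; the sample locations are illustrative, not taken from the API:

```python
import re

def split_location(location):
    """Mirror the notebook's regexes: city before the first comma,
    state/country after the last comma; None if no match."""
    beg = re.search(r'^([A-Za-z0-9 /\.]+),', location)
    end = re.search(r', ([A-Za-z0-9 ]+)$', location)
    return (beg.group(1) if beg else None,
            end.group(1) if end else None)

print(split_location("Augusta, Ga"))            # ('Augusta', 'Ga')
print(split_location("Southampton, Bermuda"))   # ('Southampton', 'Bermuda')
print(split_location("Jeju Island"))            # (None, None)
```

The second element would then be lowercased and looked up in `all_abbrevs` before `loc_breakdown` decides which country bucket the row falls into.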
golf_analysis.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np # %matplotlib notebook import matplotlib.pyplot as plt try: from escape import test_data except: from pathlib import Path import sys escape_path = Path('../').resolve().absolute().as_posix() sys.path.append(escape_path) from escape import test_data from escape import digitize, Array, concatenate # - # Load test dataset data = test_data.get_test_data(as_da=False) print(list(data.keys())) # The test data consist of intensity behind a sample i and intensity before a sample i0, which is subject to high intensity jitter i = data['i'] i0 = data['i0'] fh, axs = plt.subplots(2,1,figsize=[7,8],num='Intensity properties') axs[0].hist(i0.data,300) axs[1].plot(i0.data,i.data,'.k',ms=0.5) i_n = i/i0 pump_on = data['pump_on'] i_n_on = i_n[pump_on] i_n_off = i_n[~pump_on] plt.figure('Normalized intensity over all events in data set') plt.plot(i_n_off.index,i_n_off.data,'.b',ms=1,label='Unpumped off data') plt.plot(i_n_on.index,i_n_on.data,'.r',ms=1,label='Pumped on data') plt.ylim(np.percentile(i_n_on.data,[1,99])) plt.legend() ids = Array(data=i.index,index=i.index) ids_b = digitize(ids,np.arange(ids.min(),ids.max(),100)) di_n = concatenate([ti/tir for ti, tir in zip((ids_b.ones()*i_n_on).scan,(ids_b.ones()*i_n_off).scan.mean())]) t = data['t'] t_binned = digitize(t,np.linspace(-5,10,2*150+1)) transient = (t_binned.ones()*di_n).compute() # + plt.figure('Transient result') plt.plot([np.mean(ts.data) for ts in transient.scan],'.') plt.plot([np.median(ts.data) for ts in transient.scan],'.-')
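The `digitize`/`scan` workflow above groups events into bins and reduces each bin. For intuition, the same bin-then-reduce pattern can be emulated with plain NumPy on synthetic data (this is only an analogy, not escape's API):

```python
import numpy as np

rng = np.random.default_rng(0)
t = rng.uniform(-5, 10, 1000)                     # per-event delay positions
signal = np.sin(t) + rng.normal(0.0, 0.1, 1000)   # noisy per-event signal

edges = np.linspace(-5, 10, 16)                   # 15 bins over the scan range
bin_idx = np.digitize(t, edges)                   # bin index (1..15) per event

# Reduce each bin to its mean, analogous to averaging each scan step:
means = np.array([signal[bin_idx == b].mean() for b in range(1, len(edges))])
print(means.shape)  # (15,)
```

escape's value over this sketch is that the same binning composes lazily with event-ID alignment and pump on/off sorting, as the cells above show.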
examples/escape example storage.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + papermill={"duration": 0.179372, "end_time": "2018-11-19T22:28:55.938944", "exception": false, "start_time": "2018-11-19T22:28:55.759572", "status": "completed"} tags=[] # %matplotlib inline # + papermill={"duration": 0.448422, "end_time": "2018-11-19T22:28:56.387465", "exception": false, "start_time": "2018-11-19T22:28:55.939043", "status": "completed"} tags=[] import numpy as np import matplotlib.pyplot as plt import seaborn as sns import pandas as pd import sklearn.metrics import scipy import adjustText import matplotlib.ticker # + papermill={"duration": 0.011373, "end_time": "2018-11-19T22:28:56.398937", "exception": false, "start_time": "2018-11-19T22:28:56.387564", "status": "completed"} tags=[] # Set the default plot style #default_plt_width = 15 #default_plt_height = 10 #plt.rcParams['figure.figsize'] = [default_plt_width, default_plt_height] # + papermill={"duration": 0.07511, "end_time": "2018-11-19T22:28:56.474093", "exception": false, "start_time": "2018-11-19T22:28:56.398983", "status": "completed"} tags=[] sns.set_style("whitegrid") sns.set_context("paper") sns.set(font_scale=1.1) sns.despine(left=True) sns.set_style("ticks", {"xtick.major.size": 8, "ytick.major.size": 8}) cmap = sns.color_palette("Set1") sns.palplot(cmap) sns.set_palette(cmap) plt_y_axis_fmt_string = '%.3f' # + papermill={"duration": 0.01699, "end_time": "2018-11-19T22:28:56.491135", "exception": false, "start_time": "2018-11-19T22:28:56.474145", "status": "completed"} tags=["parameters"] filename_prefix = "aug_results_MNIST_3_vs_8_translate_10" # + papermill={"duration": 0.013745, "end_time": "2018-11-19T22:28:56.504983", "exception": false, "start_time": "2018-11-19T22:28:56.491238", "status": "completed"} tags=["injected-parameters"] # Parameters 
filename_prefix = "aug_results_NORB_0_vs_1_crop_10" # + papermill={"duration": 0.011987, "end_time": "2018-11-19T22:28:56.517017", "exception": false, "start_time": "2018-11-19T22:28:56.505030", "status": "completed"} tags=[] runs_data = np.load("{}.npz".format(filename_prefix)) # + papermill={"duration": 0.01365, "end_time": "2018-11-19T22:28:56.530715", "exception": false, "start_time": "2018-11-19T22:28:56.517065", "status": "completed"} tags=[] baseline_acc = runs_data["no_aug_no_poison_acc"] poisoned_acc = runs_data["poisoned_acc"] all_aug_train_poisoned_acc = runs_data["all_aug_train_poisoned_acc"] n_aug_sample_points = runs_data["n_aug_sample_points"] n_train = runs_data["n_train"] VSV_acc = runs_data["VSV_acc"] is_SV = runs_data["is_SV"].astype(int) n_SV = np.sum(is_SV) # + papermill={"duration": 0.012807, "end_time": "2018-11-19T22:28:56.543571", "exception": false, "start_time": "2018-11-19T22:28:56.530764", "status": "completed"} tags=[] runs_data_inf = pd.read_pickle("{}.pkl".format(filename_prefix)) runs_data_loss = pd.read_pickle("{}_loss.pkl".format(filename_prefix)) # + papermill={"duration": 0.012016, "end_time": "2018-11-19T22:28:56.555638", "exception": false, "start_time": "2018-11-19T22:28:56.543622", "status": "completed"} tags=[] runs_data_inf["score"] = "influence" # + papermill={"duration": 0.011859, "end_time": "2018-11-19T22:28:56.567545", "exception": false, "start_time": "2018-11-19T22:28:56.555686", "status": "completed"} tags=[] runs_data_loss["score"] = "loss" # + papermill={"duration": 0.012942, "end_time": "2018-11-19T22:28:56.580541", "exception": false, "start_time": "2018-11-19T22:28:56.567599", "status": "completed"} tags=[] run_df_unprocessed = pd.concat([ runs_data_inf, runs_data_loss, ]) # + papermill={"duration": 0.02403, "end_time": "2018-11-19T22:28:56.604620", "exception": false, "start_time": "2018-11-19T22:28:56.580590", "status": "completed"} tags=[] run_df_unprocessed # + papermill={"duration": 0.021002,
"end_time": "2018-11-19T22:28:56.625724", "exception": false, "start_time": "2018-11-19T22:28:56.604722", "status": "completed"} tags=[] baseline = run_df_unprocessed.query("test_type == 'baseline'").reset_index() # + papermill={"duration": 0.012203, "end_time": "2018-11-19T22:28:56.637976", "exception": false, "start_time": "2018-11-19T22:28:56.625773", "status": "completed"} tags=[] baseline["score"] = "baseline" # + papermill={"duration": 0.012462, "end_time": "2018-11-19T22:28:56.650492", "exception": false, "start_time": "2018-11-19T22:28:56.638030", "status": "completed"} tags=[] baseline["test_type"] = "Baseline" # + papermill={"duration": 0.023893, "end_time": "2018-11-19T22:28:56.674433", "exception": false, "start_time": "2018-11-19T22:28:56.650540", "status": "completed"} tags=[] baseline # + papermill={"duration": 0.039035, "end_time": "2018-11-19T22:28:56.713517", "exception": false, "start_time": "2018-11-19T22:28:56.674482", "status": "completed"} tags=[] prop_inf = run_df_unprocessed.query("test_type == 'random_proportional' & score == 'influence'").copy() prop_inf["test_type"] = "Random Proportional Influence" # + papermill={"duration": 0.033651, "end_time": "2018-11-19T22:28:56.747263", "exception": false, "start_time": "2018-11-19T22:28:56.713612", "status": "completed"} tags=[] prop_loss = run_df_unprocessed.query("test_type == 'random_proportional' & score == 'loss'").copy() prop_loss["test_type"] = "Random Proportional Loss" # + papermill={"duration": 0.01299, "end_time": "2018-11-19T22:28:56.760303", "exception": false, "start_time": "2018-11-19T22:28:56.747313", "status": "completed"} tags=[] run_df = pd.concat([ baseline, prop_inf, prop_loss, ]) # + papermill={"duration": 0.010931, "end_time": "2018-11-19T22:28:56.771279", "exception": false, "start_time": "2018-11-19T22:28:56.760348", "status": "completed"} tags=[] run_df = run_df.rename( index=str, columns={"test_accuracy": "Test Accuracy", "n_auged": "Number of Augmented Points", }, ) # +
papermill={"duration": 0.027225, "end_time": "2018-11-19T22:28:56.798744", "exception": false, "start_time": "2018-11-19T22:28:56.771519", "status": "completed"} tags=[] run_df # + papermill={"duration": 0.013906, "end_time": "2018-11-19T22:28:56.812698", "exception": false, "start_time": "2018-11-19T22:28:56.798792", "status": "completed"} tags=[] VSV_x = n_SV VSV_y = VSV_acc # + papermill={"duration": 0.586333, "end_time": "2018-11-19T22:28:57.399077", "exception": false, "start_time": "2018-11-19T22:28:56.812744", "status": "completed"} tags=[] fig, ax = plt.subplots() run_plot = sns.lineplot(x="Number of Augmented Points", y="Test Accuracy", hue="test_type", style="test_type", ci=95, data=run_df, markers=True, dashes=True, ax=ax) run_plot.scatter(VSV_x, VSV_y, marker="x", color="k", s=20) # text = run_plot.annotate("VSV", (VSV_x, VSV_y)) text = run_plot.text(VSV_x, VSV_y, "VSV", fontsize=12) l = ax.legend() #l.texts[0].set_text("") #l.set_title('Whatever you want') handles, labels = ax.get_legend_handles_labels() ax.legend(handles=handles[1:], labels=labels[1:]) ax.yaxis.set_major_formatter(matplotlib.ticker.FormatStrFormatter(plt_y_axis_fmt_string)) plt.setp(ax.get_legend().get_texts(), fontsize='11.5') # for legend text #run_plot.axhline(y=baseline_acc, # color="b", # linestyle="--", # label="baseline_acc") run_plot.axhline(y=poisoned_acc, color="r", linestyle="--", label="poisoned_acc") run_plot.axhline(y=all_aug_train_poisoned_acc, color="g", linestyle="--", label="all_aug_train_poisoned_acc") adjustText.adjust_text([text], x=[VSV_x], y=[VSV_y], add_objects=[run_plot], expand_points=(0.2, 0.2), expand_objects=(0.3, 0.3), ax=ax, force_objects=(0.1, 0.1)) run_plot.get_figure().savefig(filename_prefix + "_joined.pdf", bbox_inches="tight") # + papermill={"duration": 2e-06, "end_time": "2018-11-19T22:28:57.399177", "exception": null, "start_time": "2018-11-19T22:28:57.399175", "status": "completed"} tags=[] # + papermill={"duration": 3e-06, "end_time": 
"2018-11-19T22:28:57.410801", "exception": null, "start_time": "2018-11-19T22:28:57.410798", "status": "completed"} tags=[] # + papermill={"duration": 3e-06, "end_time": "2018-11-19T22:28:57.421939", "exception": null, "start_time": "2018-11-19T22:28:57.421936", "status": "completed"} tags=[]
Visualize_LOO_Experiments-Combine_Inf_Loss_Random.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Bayesian optimization # # **Problem**: # - Many optimization problems in machine learning are black box optimization problems where the objective function $f(x)$ is a black box function. # - We do not have an analytical expression for $f$ nor do we know its derivatives. # - Evaluation of the function is restricted to sampling at a point $x$ and getting a possibly noisy response. # # If $f$ is **cheap** to evaluate we could sample at many points, e.g. via **grid search, random search or numeric gradient estimation**. # # If function evaluation is **expensive**, e.g. tuning hyperparameters of a deep neural network, probe drilling for oil at given geographic coordinates or evaluating the effectiveness of a drug candidate taken from a chemical search space, then it is important to minimize the number of samples drawn from the black box function $f$. # # **Bayesian optimization** attempts to find the global optimum in a minimum number of steps. # Bayesian optimization incorporates a **prior belief** about $f$ and updates the prior with samples drawn from $f$ to get a **posterior** that better approximates $f$. # # The model used for approximating the objective function is called the **surrogate model**. # # Bayesian optimization also uses an **acquisition function** that directs sampling to areas where an improvement over the current best observation is likely. # # ### Surrogate model # # A popular surrogate model for Bayesian optimization is the **Gaussian process** (GP). # # **GPs** define a prior over functions and we can use them to incorporate prior beliefs about the objective function (smoothness, ...). # The GP posterior is cheap to evaluate and is used to propose points in the search space where sampling is likely to yield an improvement.
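# The surrogate idea can be sketched in a few lines with scikit-learn's `GaussianProcessRegressor` before building the full optimization loop below (the observation points and the fixed RBF length scale here are illustrative choices, not part of the experiment):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Three observations of a toy objective
X_obs = np.array([[0.0], [0.5], [1.0]])
y_obs = np.sin(3 * X_obs).ravel()

# Fixed kernel (optimizer=None) keeps the example deterministic
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3),
                              optimizer=None, alpha=1e-6)
gp.fit(X_obs, y_obs)

# Posterior mean interpolates the data; posterior std collapses at observed x
mu, std = gp.predict(np.array([[0.5], [0.75]]), return_std=True)
print(std[0] < std[1])  # less uncertainty at the observed point
```

This is exactly the property the acquisition functions below exploit: the posterior gives both a prediction and a calibrated notion of uncertainty at unsampled points.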
# ### Acquisition functions # # Proposing sampling points in the search space is done by **acquisition functions**. They trade off **exploitation and exploration**. # # **Exploitation** means sampling where the surrogate model predicts a high objective and **exploration** means sampling at locations where the prediction uncertainty is high. # # Both correspond to high acquisition function values and the goal is to maximize the acquisition function to determine the next sampling point. # # More formally, the objective function $f$ will be sampled at $x_t=argmax_{x}u(x|D_{1:t−1})$ where $u$ is the acquisition function and $D_{1:t−1}={(x_1,y_1),...,(x_{t−1},y_{t−1})}$ are the $t−1$ samples drawn from $f$ so far. # # Popular acquisition functions are: # - maximum probability of improvement (MPI) # - expected improvement (EI) # - upper confidence bound (UCB). # # In the following, we will use the expected improvement (EI) which is most widely used and described further below. # ### Optimization algorithm # # The **Bayesian optimization** procedure is as follows. For $t=1,2,...$ repeat: # - Find the next sampling point $x_t$ by optimizing the acquisition function over the $GP: x_t=argmax_{x}u(x|D_{1:t−1})$ # - Obtain a possibly noisy sample $y_t=f(x_t)+ϵ_t$ from the objective function $f$. # - Add the sample to previous samples $D_{1:t}={D_{1:t−1},(x_t,y_t)}$ and update the GP. # ### Expected improvement # # Expected improvement is defined as # $EI(x)=E[max(f(x)−f(x^+),0)]$ # # where $f(x^+)$ is the value of the best sample so far and $x^+$ is the location of that sample i.e. $x^+ = argmax_{x_i \in x_{1:t}}f(x_i)$ # # Under the GP surrogate, the expected improvement can be evaluated in closed form: # # $EI(x) = (\mu(x) - f(x^+) - \xi)\Phi(Z) + \sigma(x)\phi(Z)$ if $\sigma(x) > 0$, and $EI(x) = 0$ if $\sigma(x) = 0$, where $Z = (\mu(x) - f(x^+) - \xi)/\sigma(x)$, # # where $μ(x)$ and $σ(x)$ are the mean and the standard deviation of the GP posterior predictive at x, respectively. $Φ$ and $ϕ$ are the CDF and PDF of the standard normal distribution, respectively.
The first summation term is the exploitation term and second summation term is the exploration term. # # Parameter $ξ$ determines the amount of exploration during optimization and higher $ξ$ values lead to more exploration. # # In other words, with increasing $ξ$ values, the importance of improvements predicted by the GP posterior mean $μ(x)$ decreases relative to the importance of potential improvements in regions of high prediction uncertainty, represented by large $σ(x)$ values. A recommended default value for $ξ$ is 0.01. # + import numpy as np import matplotlib.pyplot as plt from scipy.stats import norm from scipy.optimize import minimize from sklearn.gaussian_process import GaussianProcessRegressor from sklearn.gaussian_process.kernels import ConstantKernel, Matern # %matplotlib inline def plot_approximation(gpr, X, Y, X_sample, Y_sample, X_next=None, show_legend=False): mu, std = gpr.predict(X, return_std=True) plt.fill_between(X.ravel(), mu.ravel() + 1.96 * std, mu.ravel() - 1.96 * std, alpha=0.1) plt.plot(X, Y, 'y--', lw=1, label='Noise-free objective') plt.plot(X, mu, 'b-', lw=1, label='Surrogate function') plt.plot(X_sample, Y_sample, 'kx', mew=3, label='Noisy samples') if X_next: plt.axvline(x=X_next, ls='--', c='k', lw=1) if show_legend: plt.legend() def plot_acquisition(X, Y, X_next, show_legend=False): plt.plot(X, Y, 'r-', lw=1, label='Acquisition function') plt.axvline(x=X_next, ls='--', c='k', lw=1, label='Next sampling location') if show_legend: plt.legend() def plot_convergence(X_sample, Y_sample, n_init=2): plt.figure(figsize=(12, 3)) x = X_sample[n_init:].ravel() y = Y_sample[n_init:].ravel() r = range(1, len(x)+1) x_neighbor_dist = [np.abs(a-b) for a, b in zip(x, x[1:])] y_max_watermark = np.maximum.accumulate(y) plt.subplot(1, 2, 1) plt.plot(r[1:], x_neighbor_dist, 'bo-') plt.xlabel('Iteration') plt.ylabel('Distance') plt.title('Distance between consecutive x\'s') plt.subplot(1, 2, 2) plt.plot(r, y_max_watermark, 'ro-') 
plt.xlabel('Iteration') plt.ylabel('Best Y') plt.title('Value of best selected sample') # - noise = 0.2 def black_box(X, noise=noise): # Our black box function return -np.sin(3*X) - X**2 + 0.7*X + noise * np.random.randn(*X.shape) # + bounds = np.array([[-1.0, 2.0]]) X_init = np.array([[2], [1.1]]) Y_init = black_box(X_init) # + # Dense grid of points within bounds X = np.arange(bounds[:, 0], bounds[:, 1], 0.01).reshape(-1, 1) # Noise-free objective function values at X Y = black_box(X,0) # Plot optimization objective with noise level plt.plot(X, Y, 'y--', lw=2, label='Noise-free objective') plt.plot(X, black_box(X), 'bx', lw=1, alpha=0.1, label='Noisy samples') plt.plot(X_init, Y_init, 'kx', mew=3, label='Initial samples') plt.legend(); # - # The goal is to find the global optimum on the left in a small number of steps. # # The next step is to implement the acquisition function defined as `expected_improvement` function. def expected_improvement(X, X_sample, Y_sample, gpr, xi=0.01): ''' Computes the EI at points X based on existing samples X_sample and Y_sample using a Gaussian process surrogate model. Args: X: Points at which EI shall be computed (m x d). X_sample: Sample locations (n x d). Y_sample: Sample values (n x 1). gpr: A GaussianProcessRegressor fitted to samples. xi: Exploitation-exploration trade-off parameter. Returns: Expected improvements at points X. ''' mu, sigma = gpr.predict(X, return_std=True) mu_sample = gpr.predict(X_sample) sigma = sigma.reshape(-1, 1) # Needed for noise-based model, # otherwise use np.max(Y_sample). mu_sample_opt = np.max(mu_sample) with np.errstate(divide='warn'): imp = mu - mu_sample_opt - xi Z = imp / sigma ei = imp * norm.cdf(Z) + sigma * norm.pdf(Z) ei[sigma == 0.0] = 0.0 return ei # + ### We also need a function that proposes the next sampling point by computing the location of the acquisition function maximum. ### Optimization is restarted n_restarts times to avoid local optima. 
def propose_location(acquisition, X_sample, Y_sample, gpr, bounds, n_restarts=25): ''' Proposes the next sampling point by optimizing the acquisition function. Args: acquisition: Acquisition function. X_sample: Sample locations (n x d). Y_sample: Sample values (n x 1). gpr: A GaussianProcessRegressor fitted to samples. Returns: Location of the acquisition function maximum. ''' dim = X_sample.shape[1] min_val = 1 min_x = None def min_obj(X): # Minimization objective is the negative acquisition function return -acquisition(X.reshape(-1, dim), X_sample, Y_sample, gpr)[0] # Find the best optimum by starting from n_restart different random points. for x0 in np.random.uniform(bounds[:, 0], bounds[:, 1], size=(n_restarts, dim)): res = minimize(min_obj, x0=x0, bounds=bounds, method='L-BFGS-B') if res.fun < min_val: min_val = res.fun[0] min_x = res.x return min_x.reshape(-1, 1) # - # The Gaussian process in the following example is configured with a **Matérn kernel** which is a generalization of the squared exponential kernel or RBF kernel. The known noise level is configured with the alpha parameter. # # Bayesian optimization runs for 10 iterations. # In each iteration, a row with two plots is produced. The left plot shows the noise-free objective function, the surrogate function which is the GP posterior predictive mean, the 95% confidence interval of the mean and the noisy samples obtained from the objective function so far. The right plot shows the acquisition function. The vertical dashed line in both plots shows the proposed sampling point for the next iteration which corresponds to the maximum of the acquisition function. 
# + # Gaussian process with Matérn kernel as surrogate model m52 = ConstantKernel(1.0) * Matern(length_scale=1.0, nu=2.5) gpr = GaussianProcessRegressor(kernel=m52, alpha=noise**2) # Initialize samples X_sample = X_init Y_sample = Y_init # Number of iterations n_iter = 20 plt.figure(figsize=(12, n_iter * 3)) plt.subplots_adjust(hspace=0.4) for i in range(n_iter): # Update Gaussian process with existing samples gpr.fit(X_sample, Y_sample) # Obtain next sampling point from the acquisition function (expected_improvement) X_next = propose_location(expected_improvement, X_sample, Y_sample, gpr, bounds) # Obtain next noisy sample from the objective function Y_next = black_box(X_next, noise) # Plot samples, surrogate function, noise-free objective and next sampling location plt.subplot(n_iter, 2, 2 * i + 1) plot_approximation(gpr, X, Y, X_sample, Y_sample, X_next, show_legend=i==0) plt.title(f'Iteration {i+1}') plt.subplot(n_iter, 2, 2 * i + 2) plot_acquisition(X, expected_improvement(X, X_sample, Y_sample, gpr), X_next, show_legend=i==0) # Add sample to previous samples X_sample = np.vstack((X_sample, X_next)) Y_sample = np.vstack((Y_sample, Y_next)) plot_convergence(X_sample, Y_sample) # - # ### References # # 1. https://nbviewer.jupyter.org/github/krasserm/bayesian-machine-learning/blob/dev/bayesian-optimization/bayesian_optimization.ipynb
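# As a quick standalone check of the closed-form EI implemented in `expected_improvement` above (the mean, standard deviation and incumbent values below are made-up numbers, chosen only to exercise the formula):

```python
from scipy.stats import norm

def ei_closed_form(mu, sigma, f_best, xi=0.01):
    # EI = (mu - f_best - xi) * Phi(Z) + sigma * phi(Z); defined as 0 when sigma == 0
    if sigma == 0.0:
        return 0.0
    imp = mu - f_best - xi
    Z = imp / sigma
    return imp * norm.cdf(Z) + sigma * norm.pdf(Z)

# With equal posterior mean, higher predictive uncertainty gives higher EI
assert ei_closed_form(1.0, 0.5, f_best=1.2) < ei_closed_form(1.0, 1.0, f_best=1.2)
# No uncertainty and no predicted improvement: EI is zero
assert ei_closed_form(1.0, 0.0, f_best=1.2) == 0.0
print("EI checks passed")
```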
notebooks/06-BayesianOptimization.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pysam import numpy as np import matplotlib.pyplot as plt vcf = pysam.VariantFile('/project/jnovembre/jhmarcus/ancient-sardinia/output/vcf/ancient_sardinia_full26_trm.vcf.gz') samples = list(vcf.header.samples) #cnt = 0 #for rec in vcf.fetch(): # cnt += 1 #nsnps = cnt nsnps = 1151240 shape = (len(samples), nsnps) l_00 = np.zeros(shape) l_01 = np.zeros(shape) l_11 = np.zeros(shape) print(shape) j = 0 mask = np.ones(shape) for rec in vcf.fetch(): for s in range(len(samples)): if (sum(rec.samples[samples[s]]["AD"]) < 2): mask[s, j] = 0 continue l_00[s, j] = rec.samples[samples[s]]["GL"][0] l_01[s, j] = rec.samples[samples[s]]["GL"][1] l_11[s, j] = rec.samples[samples[s]]["GL"][2] j += 1 p_00 = np.power(10, l_00) p_01 = np.power(10, l_01) p_11 = np.power(10, l_11) # + P = np.zeros(shape = (len(samples), nsnps, 3)) P[:,:,0] = p_00 / (p_00 + p_01 + p_11) P[:,:,1] = p_01 / (p_00 + p_01 + p_11) P[:,:,2] = p_11 / (p_00 + p_01 + p_11) # check number of SNPs (P[:,:,0] > 0.8).sum() + (P[:,:,1] > 0.8).sum() + (P[:,:,2] > 0.8).sum() # - # %load_ext Cython # + language="cython" # # cimport cython # import numpy as np # cimport numpy as np # # @cython.boundscheck(False) # @cython.wraparound(False) # cpdef double compute_distance(int i, int j, double[:, :, :] P, double [:,:] mask): # cdef double d = 0.0 # cdef int nsnps = P.shape[1] # cdef int l = 0 # cdef int k1 = 0 # cdef int k2 = 0 # for l in range(nsnps): # # if (mask[i,l]): # continue # # for k1 in [0,1,2]: # for k2 in [0,1,2]: # d += (k1-k2)*(k1-k2) * P[i, l, k1] * P[j, l, k2] # # return(d) # - D = np.zeros(shape = (len(samples), len(samples))) for i in range(len(samples)): for j in range((i+1), len(samples)): D[i,j] = compute_distance(i,j, P, mask) D[j,i] = D[i,j] D = D / nsnps plt.hist(D[1,]) 
np.savetxt('anc_sards.diffs', D, delimiter=',', fmt='%1.8f') outfile = open('anc_sards.id', 'w') outfile.write("\n".join(samples)) outfile.close()
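# The Cython loop above computes d(i, j) as the sum over sites l and genotype pairs (k1, k2) of (k1 - k2)^2 * P[i, l, k1] * P[j, l, k2]. A vectorized NumPy sketch of the same quantity (ignoring the per-site mask handling, which the loop above does separately):

```python
import numpy as np

# (k1 - k2)**2 for genotype codes 0, 1, 2
W = np.array([[0., 1., 4.],
              [1., 0., 1.],
              [4., 1., 0.]])

def genotype_distance(Pi, Pj):
    """Expected squared genotype difference, summed over sites.

    Pi, Pj: (n_sites, 3) arrays of genotype posterior probabilities.
    """
    # sum over sites l and genotype pair (a, b): Pi[l, a] * W[a, b] * Pj[l, b]
    return np.einsum('la,ab,lb->', Pi, W, Pj)

rng = np.random.default_rng(0)
raw = rng.random((100, 3))
Pi = raw / raw.sum(axis=1, keepdims=True)  # normalize rows to probabilities
print(genotype_distance(Pi, Pi) >= 0.0)    # distances are non-negative
```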
analysis/compute-variogram-from-data.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # construct input for tool 2 from tool 1's output # output of all agreements resolved from tool 1 folder_name_1 = '' # foldername for tool 1 data tool_1_out_file = folder_name_1 + 'all_agreements.txt' # txt input file for tool 2 folder_name_2 = '' # foldername for tool 2 data tool_2_input_file = folder_name_2 + 'input.txt' from pprint import pprint ''' command: Input.command ''' import ast import json import copy example_d = {} def merge_indices(indices): a, b = indices[0] for i in range(1, len(indices)): x, y = indices[i] if x != b + 1: return indices else: b = y if a==b: return a return [a, b] all_actions = {} composites= [] with open(tool_1_out_file, "r") as f1, open(tool_2_input_file, 'w') as f2: for line in f1.readlines(): line = line.strip() cmd, out = line.split("\t") words = cmd.split() words_copy = copy.deepcopy(words) action_dict = ast.literal_eval(out) action_type = action_dict['action_type'][1] all_actions[action_type] = all_actions.get(action_type, 0)+ 1 # write composite separately if action_type=='composite_action': composites.append(" ".join(words)) continue # no need to annotate children of these two actions if action_type == 'noop': continue # find children that need to be re-annotated for key, val in action_dict.items(): child_name = None # child needs annotation if val[0]== 'no': # insert "span" words_copy = copy.deepcopy(words) child_name = key write_line = "" write_line += " ".join(words) + "\t" #print(words, child_name, action_type, val[1]) indices = merge_indices(val[1]) span_text = None if type(indices) == list: if type(indices[0]) == list: # this means that indices were scattered and disjoint for idx in indices: words_copy[idx[0]] = "<span style='background-color: #FFFF00'>" + words_copy[idx[0]] words_copy[idx[1]] = 
words_copy[idx[1]] + "</span>" else: words_copy[indices[0]] = "<span style='background-color: #FFFF00'>" + words_copy[indices[0]] words_copy[indices[1]] = words_copy[indices[1]] + "</span>" else: words_copy[indices] = "<span style='background-color: #FFFF00'>" + words_copy[indices] + "</span>" write_line += " ".join(words_copy) + "\t" + action_type + "\t" + child_name # write for tool 2 f2.write(write_line+ "\n") #print(write_line) # + # crawl over all batches of all_agreements files and find all unique composites # directory path for where all files are mypath = '' from os import walk import ast all_composites = set() for (dirpath, dirnames, filenames) in walk(mypath): if dirnames: for dirname in dirnames: folder = str(dirpath) + dirname + "/" file_name = folder + "all_agreements.txt" with open(file_name) as f: for line in f.readlines(): line = line.strip() text, d = line.split("\t") action_type = ast.literal_eval(d)['action_type'][1] if action_type == 'composite_action': all_composites.add(text.strip()) print(len(all_composites)) # - # now write all composites to a file. # these will be annotated separately. composite_file_name = '' # filename for file containing composite commands with open(composite_file_name, 'w') as f: for c in all_composites: f.write(c + "\n")
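# The `merge_indices` helper above collapses a list of (start, end) word spans into one span when they are consecutive, and otherwise returns the list unchanged. A standalone copy with its edge cases spelled out:

```python
def merge_indices(indices):
    # Collapse consecutive (start, end) spans into a single [start, end] span.
    # Disjoint spans are returned unchanged; a one-word span collapses to a bare index.
    a, b = indices[0]
    for i in range(1, len(indices)):
        x, y = indices[i]
        if x != b + 1:
            return indices
        b = y
    if a == b:
        return a
    return [a, b]

print(merge_indices([[0, 1], [2, 4]]))  # consecutive spans merge: [0, 4]
print(merge_indices([[0, 1], [3, 4]]))  # gap at index 2: returned unchanged
print(merge_indices([[2, 2]]))          # single word: bare index 2
```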
acl2020_submission/annotation_tools/postprocessing_tool_output_notebooks/step_2_create_tool_2_input_from_tool_1.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Image binary classification # # To get acquainted with Toloka tools for free, you can use the promo code **TOLOKAKIT1** to get $20 on your [profile page](https://toloka.yandex.com/requester/profile?utm_source=github&utm_medium=site&utm_campaign=tolokakit) after registration. # Prepare environment and import all we'll need. # + # !pip install toloka-kit==0.1.12 # !pip install crowd-kit==0.0.5 # !pip install pandas # !pip install ipyplot import datetime import os import sys import time import logging import ipyplot import pandas import numpy as np import toloka.client as toloka import toloka.client.project.template_builder as tb from crowdkit.aggregation import DawidSkene logging.basicConfig( format='[%(levelname)s] %(name)s: %(message)s', level=logging.INFO, stream=sys.stdout, ) # - # Create a toloka-client instance. All API calls will go through it.
More about OAuth token in our [Learn the basics example](https://github.com/Toloka/toloka-kit/tree/main/examples/0.getting_started/0.learn_the_basics) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Toloka/toloka-kit/blob/main/examples/0.getting_started/0.learn_the_basics/learn_the_basics.ipynb) toloka_client = toloka.TolokaClient(input("Enter your token:"), 'PRODUCTION') # Or switch to 'SANDBOX' logging.info(toloka_client.get_requester()) # ## Creating new project project = toloka.Project( public_name='Is it a cat or a dog?', public_description='Look at the picture and decide whether there is a cat or a dog.', ) # Create task interface # + image_viewer = tb.ImageViewV1(tb.InputData('image'), ratio=[1, 1], rotatable=True) radio_group_field = tb.ButtonRadioGroupFieldV1( tb.OutputData('result'), [ tb.GroupFieldOption('cat', 'Cat'), tb.GroupFieldOption('dog', 'Dog'), tb.GroupFieldOption('other', 'Other'), ], validation=tb.RequiredConditionV1(hint='choose one of the options'), ) task_width_plugin = tb.TolokaPluginV1( kind='scroll', task_width=500, ) hot_keys_plugin = tb.HotkeysPluginV1( key_1=tb.SetActionV1(tb.OutputData('result'), 'cat'), key_2=tb.SetActionV1(tb.OutputData('result'), 'dog'), key_3=tb.SetActionV1(tb.OutputData('result'), 'other'), ) project_interface = toloka.project.TemplateBuilderViewSpec( view=tb.ListViewV1([image_viewer, radio_group_field]), plugins=[task_width_plugin, hot_keys_plugin], ) # - # Set data specification. And set task interface to project. # + input_specification = {'image': toloka.project.UrlSpec()} output_specification = {'result': toloka.project.StringSpec()} project.task_spec = toloka.project.task_spec.TaskSpec( input_spec=input_specification, output_spec=output_specification, view_spec=project_interface, ) # - # Write short and simple instructions. 
project.public_instructions = """<p>Decide what category the image belongs to.</p> <p>Select "<b>Cat</b>" if the picture contains one or more cats.</p> <p>Select "<b>Dog</b>" if the picture contains one or more dogs.</p> <p>Select "<b>Other</b>" if:</p> <ul><li>the picture contains both cats and dogs</li> <li>the picture is a picture of animals other than cats and dogs</li> <li>it is not clear whether the picture is of a cat or a dog</li> </ul>""" # Create a project. project = toloka_client.create_project(project) # You can go to the project page and in web-interface you can see something like this: # <table align="center"> # <tr><td> # <img src="./img/created_project.png" # alt="Project interface" width="1000"> # </td></tr> # <tr><td align="center"> # <b>Figure 1.</b> What the project interface might look like. # </td></tr> # </table> # ## Pool creation # Specify the [pool parameters.](https://toloka.ai/docs/guide/concepts/pool_poolparams.html?utm_source=github&utm_medium=site&utm_campaign=tolokakit) pool = toloka.Pool( project_id=project.id, # Give the pool any convenient name. You are the only one who will see it. private_name='Pool 1', may_contain_adult_content=False, # Set the price per task page. reward_per_assignment=0.01, will_expire=datetime.datetime.utcnow() + datetime.timedelta(days=365), # Overlap. This is the number of users who will complete the same task. defaults=toloka.Pool.Defaults(default_overlap_for_new_task_suites=3), # Time allowed for completing a task page assignment_max_duration_seconds=600, ) # Select English-speaking performers pool.filter = toloka.filter.Languages.in_('EN') # Set up [Quality control](https://toloka.ai/docs/guide/concepts/control.html?utm_source=github&utm_medium=site&utm_campaign=tolokakit). Add basic controls. And Golden Set aka Control tasks. Ban performers who give incorrect responses to control tasks. 
# + pool.quality_control.add_action( collector=toloka.collectors.Income(), conditions=[toloka.conditions.IncomeSumForLast24Hours >= 20], action=toloka.actions.RestrictionV2( scope='PROJECT', duration=1, duration_unit='DAYS', private_comment='No need more answers from this performer', ) ) pool.quality_control.add_action( collector=toloka.collectors.SkippedInRowAssignments(), conditions=[toloka.conditions.SkippedInRowCount >= 10], action=toloka.actions.RestrictionV2( scope='PROJECT', duration=1, duration_unit='DAYS', private_comment='Lazy performer', ) ) pool.quality_control.add_action( collector=toloka.collectors.MajorityVote(answer_threshold=2, history_size=10), conditions=[ toloka.conditions.TotalAnswersCount >= 4, toloka.conditions.CorrectAnswersRate < 75, ], action=toloka.actions.RestrictionV2( scope='PROJECT', duration=10, duration_unit='DAYS', private_comment='Too low quality', ) ) pool.quality_control.add_action( collector=toloka.collectors.GoldenSet(), conditions=[ toloka.conditions.GoldenSetCorrectAnswersRate < 60.0, toloka.conditions.GoldenSetAnswersCount >= 3 ], action=toloka.actions.RestrictionV2( scope='PROJECT', duration=10, duration_unit='DAYS', private_comment='Golden set' ) ) # - # Specify the number of tasks per page. For example: 9 main tasks and 1 control task. pool.set_mixer_config( real_tasks_count=9, golden_tasks_count=1 ) # Create pool pool = toloka_client.create_pool(pool) # ## Preparing and uploading tasks # # This example uses a small data set with images. # # The dataset used is collected by Toloka team and distributed under a Creative Commons Attribution 4.0 International license # [![License: CC BY 4.0](https://img.shields.io/badge/License-CC%20BY%204.0-lightgrey.svg)](https://creativecommons.org/licenses/by/4.0/). # # Dataset looks like: # <table align="center"> # <tr><td> # <img src="./img/dataset_preview.png" # alt="Dataset preview" width="1000"> # </td></tr> # <tr><td align="center"> # <b>Figure 2.</b> Dataset preview. 
# </td></tr> # </table> # + # !curl https://tlk.s3.yandex.net/dataset/cats_vs_dogs/toy_dataset.tsv --output dataset.tsv dataset = pandas.read_csv('dataset.tsv', sep='\t') logging.info(f'Dataset contains {len(dataset)} rows\n') dataset = dataset.sample(frac=1).reset_index(drop=True) ipyplot.plot_images( images=[row['url'] for _, row in dataset.iterrows()], labels=[row['label'] for _, row in dataset.iterrows()], max_images=12, img_width=300, ) # - # Divide the dataset into two. One for tasks and one for [Control tasks](https://toloka.ai/docs/guide/concepts/task_markup.html?utm_source=github&utm_medium=site&utm_campaign=tolokakit). # # Note. Control tasks are tasks with the correct response known in advance. They are used to track the performer's quality of responses. The performer's response is compared to the response you provided. If they match, it means the performer answered correctly. golden_dataset, task_dataset = np.split(dataset, [15], axis=0) # Create control tasks. In small pools, control tasks should account for 10–20% of all tasks. # # Tip. Make sure to include different variations of correct responses in equal amounts. golden_tasks = [ toloka.Task( pool_id=pool.id, input_values={'image': row['url']}, known_solutions = [ toloka.task.BaseTask.KnownSolution( output_values={'result': row['label']} ) ], infinite_overlap=True, ) for i, row in golden_dataset.iterrows() ] # Create pool tasks tasks = [ toloka.Task( pool_id=pool.id, input_values={'image': url}, ) for url in task_dataset['url'] ] # Upload tasks created_tasks = toloka_client.create_tasks(golden_tasks + tasks, allow_defaults=True) logging.info(len(created_tasks.items)) # Start the pool. # # **Important.** Remember that real Toloka performers will complete the tasks. # Double check that everything is correct # with your project configuration before you start the pool pool = toloka_client.open_pool(pool.id) logging.info(pool.status) # ## Receiving responses # Wait until the pool is completed. 
# + pool_id = pool.id def wait_pool_for_close(pool_id, minutes_to_wait=1): sleep_time = 60 * minutes_to_wait pool = toloka_client.get_pool(pool_id) while not pool.is_closed(): op = toloka_client.get_analytics([toloka.analytics_request.CompletionPercentagePoolAnalytics(subject_id=pool.id)]) op = toloka_client.wait_operation(op) percentage = op.details['value'][0]['result']['value'] logging.info( f' {datetime.datetime.now().strftime("%H:%M:%S")}\t' f'Pool {pool.id} - {percentage}%' ) time.sleep(sleep_time) pool = toloka_client.get_pool(pool.id) logging.info('Pool was closed.') wait_pool_for_close(pool_id) # - # Get responses # # When all the tasks are completed, look at the responses from performers. # + answers = [] answers_df = toloka_client.get_assignments_df(pool_id) # prepare DataFrame answers_df = answers_df.rename(columns={ 'INPUT:image': 'task', 'OUTPUT:result': 'label', 'ASSIGNMENT:worker_id': 'performer' }) logging.info(f'answers count: {len(answers_df)}') # - # Aggregation results using the Dawid-Skene model # + # Run aggregation predicted_answers = DawidSkene(n_iter=20).fit_predict(answers_df) logging.info(predicted_answers) # - # Look at the results. # # Some preparations for displaying the results predicted_answers = predicted_answers.sample(frac=1) images = predicted_answers.index.values labels = predicted_answers.values start_with = 0 # Note: The cell below can be run several times. if start_with >= len(predicted_answers): logging.info('no more images') else: ipyplot.plot_images( images=images[start_with:], labels=labels[start_with:], max_images=12, img_width=300, ) start_with += 12 # + [markdown] pycharm={"name": "#%% md\n"} # **You** can see the labeled images. Some possible results are shown in figure 3 below. # # <table align="center"> # <tr><td> # <img src="./img/possible_results.png" # alt="Possible results" width="1000"> # </td></tr> # <tr><td align="center"> # <b>Figure 3.</b> Possible results. # </td></tr> # </table>
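# For comparison with the Dawid-Skene aggregation above, a plain majority vote over the same renamed answer columns can be done with pandas alone. The rows below are toy data in the shape of `answers_df` after renaming; note that `value_counts().idxmax()` breaks ties arbitrarily, which is one reason a model-based aggregator like Dawid-Skene is preferable in practice:

```python
import pandas as pd

# Toy answers in the shape of answers_df after renaming
answers = pd.DataFrame({
    'task':      ['img1', 'img1', 'img1', 'img2', 'img2', 'img2'],
    'performer': ['p1', 'p2', 'p3', 'p1', 'p2', 'p3'],
    'label':     ['cat', 'cat', 'dog', 'dog', 'dog', 'other'],
})

# Majority vote: most frequent label per task
majority = answers.groupby('task')['label'].agg(lambda s: s.value_counts().idxmax())
print(majority.to_dict())  # {'img1': 'cat', 'img2': 'dog'}
```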
examples/1.computer_vision/image_classification/image_classification.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # scikit-learn → PMML # # # ### Exporter: Gradient Boosting # ### Data Set used: Titanic # # # ### **STEPS**: # - Build the Pipeline with preprocessing (using DataFrameMapper) # - Build PMML using Nyoka exporter # ### Pre-processing, Model building (using pipeline) for Titanic data set # + import pandas as pd from sklearn import datasets from sklearn.pipeline import Pipeline from sklearn.preprocessing import StandardScaler, Imputer, LabelEncoder, LabelBinarizer from sklearn_pandas import DataFrameMapper from sklearn.ensemble import GradientBoostingClassifier titanic = pd.read_csv("titanic_train.csv") titanic['Embarked'] = titanic['Embarked'].fillna('S') features = list(titanic.columns.drop(['PassengerId','Name','Ticket','Cabin','Survived'])) target = 'Survived' # + pipeline_obj = Pipeline([ ("mapping", DataFrameMapper([ (['Sex'], LabelEncoder()), (['Embarked'], LabelEncoder()) ])), ("imp", Imputer(strategy="median")), ("gbc", GradientBoostingClassifier(n_estimators = 10)) ]) pipeline_obj.fit(titanic[features],titanic[target]) # - # ### Export the Pipeline object into PMML using the Nyoka package # + from nyoka import skl_to_pmml skl_to_pmml(pipeline_obj, features, target, "gb_pmml.pmml")
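# Before exporting a pipeline it can be worth a quick sanity check that it actually fits and predicts. A minimal standalone sketch of the same fit pattern on toy data (the columns and labels here are made up; note that newer scikit-learn versions replace the deprecated `Imputer` used above with `SimpleImputer`):

```python
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.ensemble import GradientBoostingClassifier

# Toy stand-in for the Titanic features, including missing values
X = pd.DataFrame({'Pclass': [1, 3, 2, 3, 1, 2],
                  'Age': [22.0, None, 35.0, 28.0, None, 40.0]})
y = [1, 0, 1, 0, 1, 0]

pipe = Pipeline([
    ('imp', SimpleImputer(strategy='median')),  # SimpleImputer supersedes Imputer
    ('gbc', GradientBoostingClassifier(n_estimators=10, random_state=0)),
])
pipe.fit(X, y)

# Smoke test: one predicted class label per input row
preds = pipe.predict(X)
print(len(preds), set(preds))
```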
examples/skl/4_GB_With_pre-processing.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/jsalbr/m3nlp/blob/main/Question_Answering.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="BiAcHrZPrBpg" # # Question Answering # + [markdown] id="BJxDfaburBpm" # **This notebook should be run with a GPU. # To do so, first set "Runtime"->"Change runtime type"->"Hardware accelerator: GPU" in the menu.** # # In this version all cells have already been executed. # If you want to start over from scratch, use the menu # **"Edit"->"Clear all outputs"**. # # <hr/> # # Credits: This notebook uses ideas from # * Natural Language Processing with Transformers by <NAME>, <NAME>, <NAME>, O'Reilly 2021, https://www.oreilly.com/library/view/natural-language-processing/9781098103231/ # * the Heise Academy NLP course by <NAME>ler, https://github.com/heiseacademy/nlp-course/tree/main/09_Transfer_Learning # * the Haystack tutorial by deepset.io, https://github.com/deepset-ai/haystack#mortar_board-tutorials # + [markdown] id="LHtRDsDqrBpn" # ## Preparing the system # # ### Installing Transformers and Haystack # # Note: This notebook uses both the [Transformers library from HuggingFace](https://huggingface.co/transformers/) and [Haystack from deepset.ai](https://haystack.deepset.ai/). # # Unfortunately, the current versions of the two libraries have incompatible dependencies. # The installation here works for these examples, but there is a warning at the end. In practice this # can therefore lead to problems. For production use you should therefore work with separate # virtual environments.
# # **Patience:** the installation takes a moment. # + id="LbZyqQ-zqxbh" # !pip install -q farm-haystack==0.10.0 grpcio==1.41.0 # # !pip install transformers==4.12.3 datasets # !pip install -q git+https://github.com/huggingface/transformers datasets # !pip install readability-lxml # + id="PPTt9iVkdzo0" # %load_ext autoreload # %autoreload 2 # + [markdown] id="7g0JuwR2_Vui" # ### Setting a few more default options ... # + id="82D4tSavG7U1" import pandas as pd pd.options.display.max_colwidth = 200 # default 50; -1 = all pd.options.display.float_format = '{:.2f}'.format from textwrap import wrap, fill # + id="sYWJh8F7T8ZY" # suppress warnings import warnings; warnings.filterwarnings('ignore'); # + [markdown] id="ui6-tKDZvThZ" # ### And a small display function ... # # which can handle answers from both Transformers and Haystack. # + id="c3SVM0ZIZcuS" from IPython.display import display, HTML def display_qa(answers, question='', context='', padding=50): if type(answers) != list: answers = [answers] html = "<table>" if len(question) > 0: html += f"<tr><td>Question:</td><td><span style='font-weight:bold'>{question}</span></td></tr>" html += f"<tr><td>&nbsp;<td><td> </td></tr>" for a in answers: if len(a['answer']) > 0: html += f"<tr><td>Answer:</td><td><span style='font-weight:bold'>{a['answer']}</span></td></tr>" else: html += f"<tr><td>Answer:</td><td>answer impossible</td></tr>" html += f"<tr><td>Score:</td><td>{a['score']}</td></tr>" start = a.get('start', a.get('offset_start')) end = a.get('end', a.get('offset_end')) html += f"<tr><td>Span:</td><td>{start}:{end}</td></tr>" ctx = a.get('context', context) if len(a['answer']) > 0 and len(ctx) > 0: left = max(0, start-padding) right = min(end+padding, len(ctx)) html += "<tr><td>Snippet:</td><td>" html += f"{ctx[left:start]}<span style='color:blue;font-weight:bold'>" html += ctx[start:end] html += f"</span>{ctx[end:right]}</td>" html += f"<tr><td>&nbsp;<td><td> </td></tr>" html += 
'</table><br/>' display(HTML(html)) # + [markdown] id="UycdkUQf-nHR" # ## Working with a QA model # # First we use the [HuggingFace Transformers Library](https://huggingface.co/transformers/) to work with a pretrained QA model. # + [markdown] id="ixsWFe1JKE8Z" # ### Loading the model # # An overview of the QA models on the HuggingFace Hub can be found here: # https://huggingface.co/models?pipeline_tag=question-answering&sort=downloads # # We use this one because it delivered very good results on the examples: # https://huggingface.co/Sahajtomar/German-question-answer-Electra # # + id="X_NL5XZ8rBpp" from transformers import pipeline model_name = "Sahajtomar/German-question-answer-Electra" # device = 0 is GPU qa = pipeline("question-answering", model=model_name, tokenizer=model_name, device=0) # + [markdown] id="8goLLPuUrBpq" # ### Answering questions about an article # # First, the basic principle: the model answers questions based on a context, e.g. a Wikipedia entry, a news article, or a user post. # # This section uses the following example article: # https://www.heise.de/news/Giga-Factory-Berlin-fast-fertig-Erstes-Tesla-Model-Y-noch-dieses-Jahr-6213528.html # + id="TXpMJX2QrBpr" context = """Giga Factory Berlin fast fertig – Erstes Tesla Model Y noch dieses Jahr <NAME> hat in knapp zwei Jahren eine riesige Fabrik vor die Tore Berlins gesetzt. Samstag ließ er erstmals Bürger ein. Nicht alle Nachbarn sind begeistert. Der US-Elektroautobauer Tesla will spätestens im Dezember in Deutschland die Produktion für Europa starten. Dies kündigte Firmengründer <NAME> am Wochenende bei einem Bürgerfest in seinem ersten europäischen Werk bei Berlin an. Kritik von Anwohnern und Umweltschützern an der in nur zwei Jahren konzipierten und errichteten Industrieanlage widersprach er. Ziel sei "eine wunderschöne Fabrik in Harmonie mit ihrer Umgebung". Künftig sollen etwa 12.000 Mitarbeiter in Grünheide bis zu 500.000 Elektroautos im Jahr bauen. 
Dabei will Tesla möglichst viele Teile vor Ort produzieren, um von Zulieferern unabhängig zu sein. Tesla betont vor allem die Bedeutung der eigenen Druckgussanlage und der hochmodernen Lackiererei. Zudem entsteht neben dem Autowerk eine eigene Batteriefabrik. """.replace('\n', ' ').strip() # + [markdown] id="_aSpqF20rBps" # Now we can ask questions (the questions and the context stay in German, since this is a German QA model): # + id="SqJyM2-iLu0D" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="0a5933a4-6cfb-4ddf-a03f-5e7ec7b71a66" question="Wer ist <NAME>?" answer = qa(question=question, context=context) answer['answer'] # + colab={"base_uri": "https://localhost:8080/", "height": 35} id="fSdCmPfu2E9Y" outputId="696a5424-0cf4-4cfe-a87f-7442b8e74c25" question="Wer ist der Firmengründer?" answer = qa(question=question, context=context) answer['answer'] # + colab={"base_uri": "https://localhost:8080/", "height": 35} id="P0w5dipK2FGx" outputId="ebde003d-842e-453d-d7eb-99bae14627ff" question="Wer gündete Tesla?" answer = qa(question=question, context=context) answer['answer'] # + colab={"base_uri": "https://localhost:8080/", "height": 181} id="gosB2jogTqwg" outputId="40d650b1-459b-4f7d-f4cb-ebae157cafb5" question="Wer ist begeistert?" answer = qa(question=question, context=context) display_qa(answer, question, context) # + colab={"base_uri": "https://localhost:8080/", "height": 181} id="gK52CAGS2Pch" outputId="91413b4b-79ad-400f-d84d-9f3e67d0ca32" question="Wer ist nicht begeistert?" answer = qa(question=question, context=context) display_qa(answer, question, context) # + colab={"base_uri": "https://localhost:8080/", "height": 594} id="1J5yiaenTq3t" outputId="dd9e5f38-661d-4ce4-f38f-d6b2f9620fc9" question="Wie viele?" answer = qa(question=question, context=context, top_k=5) display_qa(answer, question, context) # + colab={"base_uri": "https://localhost:8080/", "height": 594} id="0gb97_Ff4GaU" outputId="13abbc9b-45b9-4498-df92-68fdcafe1092" question="Wie viele Mitarbeiter?" 
answer = qa(question=question, context=context, top_k=5) display_qa(answer, question, context) # + [markdown] id="QLpj6VOurBpu" # ### Answering questions about <img src="https://upload.wikimedia.org/wikipedia/commons/thumb/2/2e/Game_of_Thrones_2011_logo.svg/420px-Game_of_Thrones_2011_logo.svg.png" width="150"/> # # For a longer text, a Wikipedia article is a good fit, such as this one on "Game of Thrones": # https://de.wikipedia.org/wiki/Game_of_Thrones # # # + id="vR6FIQHmrBpu" colab={"base_uri": "https://localhost:8080/"} outputId="2c30fa82-d875-45cb-c1cb-f826c18b746c" from readability import Document import requests from bs4 import BeautifulSoup doc = Document(requests.get("https://de.wikipedia.org/wiki/Game_of_Thrones", stream=True).text) soup = BeautifulSoup(doc.summary()) context = soup.text len(context) # + [markdown] id="ZYBd7bUKrBpv" # That is about 100 kB! # + id="4y80FzBM3ahg" colab={"base_uri": "https://localhost:8080/", "height": 181} outputId="d2ea5d0b-831e-4e49-f000-914af9b00d86" question="Wer sind die Geschwister von Arya?" answer = qa(question=question, context=context) display_qa(answer, question, context) # + id="dohqqYhSBVPy" colab={"base_uri": "https://localhost:8080/", "height": 181} outputId="d556386f-1d92-4d1d-aa50-36ee53031ed0" question="<NAME>?" answer = qa(question=question, context=context) display_qa(answer, question, context) # + id="d-VVEdslrBpv" colab={"base_uri": "https://localhost:8080/", "height": 594} outputId="f43aa70e-4125-44f6-e700-e4dad90e4283" question="<NAME>?" answer = qa(question=question, context=context, top_k=5, max_seq_len=256, doc_stride=0) display_qa(answer, question, context) # + [markdown] id="EeG6JoZlxxeK" # ## Deep Dive # + id="4aGtHovrx3Y4" colab={"base_uri": "https://localhost:8080/", "height": 181} outputId="065eb518-3afe-4094-be72-12ff50c291f6" question = "Wie viele Menschen leben in Berlin?" context = "In Deutschland leben ca. 80 Millionen Menschen, allein in Berlin ca. 4 Mio." 
answer = qa(question=question, context=context) display_qa(answer, question, context, padding=1000) # + id="nv9P3n1RCAju" model_name = "Sahajtomar/German-question-answer-Electra" # + id="MV13EejOxt48" colab={"base_uri": "https://localhost:8080/"} outputId="ac7396d8-8268-4248-f1eb-1dd0390bcf13" from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained(model_name) inputs = tokenizer(question, context, return_tensors="pt"); inputs # + id="1HGIkZWyzyrS" colab={"base_uri": "https://localhost:8080/", "height": 172} outputId="00e1f742-2b33-4f2b-e460-7ecd582dac01" input_df = pd.DataFrame( [tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), inputs['input_ids'][0].numpy(), inputs['token_type_ids'][0].numpy(), inputs['attention_mask'][0].numpy()]).T input_df.columns=['token', 'id', 'type', 'attn'] input_df.T # + id="Asiv5QBdyUEe" from transformers import AutoModelForQuestionAnswering model = AutoModelForQuestionAnswering.from_pretrained(model_name) outputs = model(**inputs) # + id="xub8I82cBQn9" colab={"base_uri": "https://localhost:8080/", "height": 603} outputId="0ed88db1-fa36-425f-936e-56bb8013c536" def maxval_in_col(column): highlight = 'background-color: palegreen;' return [highlight if v == column.max() else '' for v in column] output_df = pd.concat([input_df, pd.Series(outputs['start_logits'][0].detach(), name='start'), pd.Series(outputs['end_logits'][0].detach(), name='end')], axis=1) # answer span must be in context (type==1) output_df.query('type==1')[['token', 'start', 'end']].style.apply(maxval_in_col, subset=['start', 'end'], axis=0) # + id="t9BLTSXu0Jsr" colab={"base_uri": "https://localhost:8080/"} outputId="f7f938ce-e480-44e4-abab-d85e93409deb" import torch start_idx = torch.argmax(outputs.start_logits) end_idx = torch.argmax(outputs.end_logits) + 1 answer_span = inputs["input_ids"][0][start_idx:end_idx] answer = tokenizer.decode(answer_span) print(f"Question: {question}") print(f"Answer: {answer}") # + [markdown] id="UG5eCuI8rBpx" 
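# The span selection in the deep dive above can be reduced to a tiny model-free sketch (illustrative only, not the transformers implementation): take the argmax over the start logits and over the end logits and decode the tokens in between. Real pipelines additionally make sure the end does not precede the start and that the span lies inside the context.

```python
# Toy illustration of extractive-QA span selection: pick the argmax of the
# start and end logits, then decode the token span between them.
def select_span(tokens, start_logits, end_logits):
    start_idx = max(range(len(start_logits)), key=start_logits.__getitem__)
    end_idx = max(range(len(end_logits)), key=end_logits.__getitem__) + 1
    return " ".join(tokens[start_idx:end_idx])

# Hypothetical logits with peaks at "4" (start) and "Mio." (end)
tokens = ["In", "Berlin", "leben", "ca.", "4", "Mio.", "Menschen"]
start_logits = [0.1, 0.2, 0.0, 0.3, 5.1, 0.2, 0.1]
end_logits   = [0.0, 0.1, 0.1, 0.2, 0.4, 4.8, 0.3]

print(select_span(tokens, start_logits, end_logits))  # "4 Mio."
```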
# ## Retriever-Reader with Haystack # # Now we simulate a larger scenario. Imagine you have a great many documents and are looking for answers in them. You are searching for a needle in a haystack - a case for [Haystack](https://haystack.deepset.ai/). # + [markdown] id="MCn9agtnTJBI" # ### Application example: aspect-based sentiment analysis # # This section shows a practical application example. # The goal is to find out what customers think about specific properties of a product. To do this, Amazon reviews of the product are "interviewed" with a QA model. # # Since we have not just one (con)text to evaluate but many reviews, a retriever-reader model is used. The retriever pre-selects the relevant comments, which are then evaluated by the reader. # # The dataset we use is an excerpt from the "Subjective QA" dataset, which is available directly from the HuggingFace Hub: # https://huggingface.co/datasets/subjqa # + id="bwhUSPKX2-ue" from datasets import load_dataset # other options include: books, grocery, movies, restaurants, tripadvisor data = load_dataset("subjqa", "electronics") data.set_format("pandas") # flatten the nested dataset columns for easy access df = [ds[:] for split, ds in data.flatten().items() if split == 'train'][0] # select some columns df = df[["title", "question", "answers.text", "answers.answer_start", "context"]] df = df.drop_duplicates(subset="context").rename(columns={"answers.text": "answer", "answers.answer_start": "start"}) print(list(df.columns)) print(f"\n{len(df)} rows") # + [markdown] id="kjBClQFgWQGT" # Let's look at a few records: # + id="tV5cIfXPJVQr" colab={"base_uri": "https://localhost:8080/", "height": 141} outputId="ff6fea86-8e78-46d9-98aa-105d0b761c76" df.sample(3, random_state=25) # + [markdown] id="5KVNsIdVNbE7" # ### Filling the document store for the retriever # # # + [markdown] id="lBvuUmmaMAc0" # Haystack 
supports the following document stores: # * Elasticsearch (sparse BM25/TF-IDF + dense vectors, https://elastic.co) # * FAISS (from Facebook AI, for dense vectors, https://faiss.ai/) # * SQL (SQLite, PostgreSQL, MySQL) # * InMemoryDocumentStore # # For simplicity, the InMemoryDocumentStore is used here. For real-world use, however, Elasticsearch is recommended, because this search index offers full-text search plus a wide range of filtering options for metadata. # # + [markdown] id="pxOxRWSvL9jg" # A document store expects the following input format: # ```python # docs = [ # { # 'text': DOCUMENT_TEXT_HERE, # 'meta': {'name': DOCUMENT_NAME, 'category': DOCUMENT_CATEGORY} # }, ... # ] # ``` # + [markdown] id="kKbKoyzrJzPK" # # For the `InMemoryDocumentStore`, we filter down to the article to be analyzed right away. We use this one: # # **Panasonic ErgoFit In-Ear Earbud Headphones RP-HJE120-D (Orange) Dynamic Crystal Clear Sound, Ergonomic Comfort-Fit** # # https://www.amazon.com/dp/B003ELYQGG # https://amazon-asin.com/asincheck/?product_id=B003ELYQGG # # + id="fZuMKoz3OmbP" colab={"base_uri": "https://localhost:8080/"} outputId="1241d8be-bb9a-4819-cd2d-8128d3ac37ad" # create docs (in the example for one item only) item_id = "B003ELYQGG" docs = [] for _, row in df.query(f"title == '{item_id}'").iterrows(): doc = {"text": row["context"], "meta": {"item_id": row["title"]}} docs.append(doc) docs[:3] # + id="xpdXThooNaFm" colab={"base_uri": "https://localhost:8080/"} outputId="2fc3fbc2-e4be-4862-ef0e-9fdfa5ed1789" from haystack.document_store import InMemoryDocumentStore document_store = InMemoryDocumentStore() document_store.write_documents(docs, index="document") print(f"{document_store.get_document_count()} docs loaded.") # + [markdown] id="eKtMChZPS7xT" # ### Document search with the retriever # + id="x0ORl6jwSZFO" colab={"base_uri": "https://localhost:8080/"} outputId="85a2f042-d7eb-49f6-e8ff-c7c1af3bc4c5" from 
haystack.retriever.sparse import TfidfRetriever retriever = TfidfRetriever(document_store=document_store) question = "How is the bass?" retrieved_docs = retriever.retrieve(query=question, top_k=3) # Elasticsearch would support real filters # retrieved_docs = retriever.retrieve(query=question, top_k=3, filters={"item_id": [item_id]}) for doc in retrieved_docs: print(fill(doc.text), end="\n\n") # + [markdown] id="Zzb2nsSqWOwB" # ### Getting answers with the reader # # Haystack supports two readers, the `FARMReader` and the `TransformersReader`. Both use transformer models but differ in small details, which are explained [here](https://haystack.deepset.ai/docs/latest/readermd#deeper-dive-farm-vs-transformers). # # We use the `FARMReader`. [FARM](https://pypi.org/project/farm/) is a library for transfer learning with transformer models, which itself builds on the Transformers library. # # A nice QA demo based on FARM can be found here: https://demos.deepset.ai # # # + id="5brl7ejyWRVW" colab={"base_uri": "https://localhost:8080/"} outputId="efdc39bb-5e95-4b12-d64b-80429d079a75" from haystack.reader.farm import FARMReader reader = FARMReader(model_name_or_path=model_name, progress_bar=False, return_no_answer=False, use_gpu=True) # + id="D31HmRPEUymR" colab={"base_uri": "https://localhost:8080/"} outputId="9dce94cb-9d8f-4c09-ccc0-07497430ad89" question = "How is the bass?" answers = reader.predict_on_texts(question=question, texts=[retrieved_docs[1].text], top_k=3) answers # + [markdown] id="QzInNa6nWjmW" # Did you notice that we are still using the same model with which we already analyzed the German texts? # # That is possible with a multilingual model! A purely English model, however, achieves better results on English texts. 
# + [markdown] id="QoeSGPcDsiws" # ### Retriever and reader in the Haystack pipeline # + id="W0HTZb4uZFH0" from haystack.pipeline import ExtractiveQAPipeline pipe = ExtractiveQAPipeline(reader, retriever) # + id="zHp3RimuZFg8" colab={"base_uri": "https://localhost:8080/", "height": 594} outputId="50011faf-ec8d-40b6-dd21-3d90750801e0" question = "How is the bass?" # question = "Do they sound good?" # question = "How do they fit?" answers = pipe.run(query=question, params={"Retriever": {"top_k": 10}, "Reader": {"top_k": 5}}) display_qa(answers['answers'], question, padding=500) # + [markdown] id="Z6vLcWS9SwEt" # ### And of course a word cloud to finish 😀 # # In this example, every document (we only have 35) is asked for its opinion on the bass. The distinct answers are counted and visualized with a word cloud. With a very large number of reviews, this gives you a quick overview of customer opinions. # + id="J-be5qurIvnH" colab={"base_uri": "https://localhost:8080/"} outputId="188683d9-6ae5-48c9-98c2-f4b3409089b8" from collections import Counter question = "How is the bass?" retrieved_docs = retriever.retrieve(query=question, top_k=100) counter = Counter() for doc in retrieved_docs: answer = reader.predict_on_texts(question=question, texts=[doc.text], top_k=1)['answers'][0]['answer'] if len(answer) < 30: counter.update([answer]) counter # + id="BdkAKQ3iOJ_S" colab={"base_uri": "https://localhost:8080/", "height": 483} outputId="ca17e841-921a-407c-d7ee-8aa08846e212" from wordcloud import WordCloud from matplotlib import pyplot as plt wc = WordCloud(width=800, height=400, background_color= "black", colormap="Paired") wc.generate_from_frequencies(counter) plt.figure(figsize=(16, 8)) plt.imshow(wc, interpolation='bilinear') plt.axis("off") # + id="9Mvg6G7gRQJQ"
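# The sparse retrieval step used throughout this notebook can be illustrated with a toy scorer (a much-simplified TF-IDF sketch, not Haystack's actual TfidfRetriever): each document is scored by summing, over the query terms, the term frequency weighted by an inverse document frequency, and the best-scoring documents are returned.

```python
import math
from collections import Counter

def tfidf_rank(query, docs, top_k=2):
    """Rank docs for a query with a toy TF-IDF score (illustration only)."""
    tokenized = [d.lower().split() for d in docs]
    n = len(docs)
    scores = []
    for tokens in tokenized:
        tf = Counter(tokens)
        score = 0.0
        for term in query.lower().split():
            df = sum(term in t for t in tokenized)  # document frequency
            if df:
                score += tf[term] * math.log(n / df)  # tf * idf
        scores.append(score)
    ranked = sorted(range(n), key=scores.__getitem__, reverse=True)
    return [docs[i] for i in ranked[:top_k]]

docs = [
    "the bass is deep and punchy",
    "battery life is great",
    "bass response could be stronger",
]
print(tfidf_rank("how is the bass", docs, top_k=1))
```

Terms that occur in every document (such as "is") get an IDF of zero and thus contribute nothing, which is why the review actually talking about bass ranks first.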
Question_Answering.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] _uuid="d6f60b9310cb530e5fcc85996e34dcd72a682ed9" # # About # This kernel applies the techniques from [fastai's deep learning for coders](http://course.fast.ai) course to the dogbreed dataset # # The resulting Kaggle score is **0.22623** which roughly translates to a position in the top 30%. # + [markdown] _uuid="afcd8e8dd979b486f79ab93024919385512f8b6c" # # Setup # + _uuid="5a674f59254a466d513b1da27f29c376a4189077" # %reload_ext autoreload # %autoreload 2 # %matplotlib inline # + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" import numpy as np import pandas as pd import os from fastai.conv_learner import * # + _uuid="f80a0d652ce47b635762c588a08a7cd3f1228c10" # make sure CUDA is available and enabled print(torch.cuda.is_available(), torch.backends.cudnn.enabled) # + _uuid="697bc7a8fb64397912c8ede8fada86d28b481e3d" # set competition name comp_name = "dogbreed" # use with custom environment user = "ec2-user" input_path = f"/home/{user}/data/{comp_name}/" wd = f"/home/{user}/kaggle/{comp_name}/" # use only with kaggle kernels #input_path = "../input/" #wd = "/kaggle/working/" # - # create symlinks for easy data handling # !ln -fs {input_path}labels.csv {wd}labels.csv # !ln -fs {input_path}sample_submission.csv {wd}sample.csv # !ln -fs {input_path}train {wd}train # !ln -fs {input_path}test {wd}test # !ls -alh # + [markdown] _uuid="757c972a4a09135d485312103733c6c913bba44f" heading_collapsed=true # ## Helper functions to deal with Kaggle's file system limitations # + _uuid="33212a6282001211ee99774fa9025d13d0180eb1" hidden=true def create_symlnk(src_dir, src_name, dst_name, dst_dir=wd, target_is_dir=False): """ If symbolic link does not already exist, create it by pointing dst_dir/lnk_name 
to src_dir/lnk_name """ if not os.path.exists(dst_dir + dst_name): os.symlink(src=src_dir + src_name, dst = dst_dir + dst_name, target_is_directory=target_is_dir) # + _uuid="773368f01443c76f8542d12ea4f408f28ae576be" hidden=true def clean_up(wd=wd): """ Delete all temporary directories and symlinks in working directory (wd) """ for root, dirs, files in os.walk(wd): try: for d in dirs: if os.path.islink(d): os.unlink(d) else: shutil.rmtree(d) for f in files: if os.path.islink(f): os.unlink(f) else: print(f) except FileNotFoundError as e: print(e) # + _uuid="7dd5fe0af3d0f43f557f79f4737064daee5fb80c" hidden=true # only use with kaggle kernels #create_symlnk(input_path, "labels.csv", "labels.csv") #create_symlnk(input_path, "sample_submission.csv", "sample.csv") #create_symlnk(input_path, "train", "train", target_is_dir=True) #create_symlnk(input_path, "test", "test", target_is_dir=True) # + _uuid="7f66d48f5f81dd239b83aa829c1d9930fbb0a09e" hidden=true # perform sanity check # #!ls -alh # + [markdown] _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a" # # Exploration # + _uuid="e77d0fe71a92829af547b49f9aaf258ffd74ea26" label_df = pd.read_csv(f"{wd}labels.csv") # + _uuid="7f36a34cbbc85776a3da8f37ca60425a1dad9ddb" label_df.head() # + _uuid="f5bab0c79e4a021acd527a9d3eb81d091c9b680f" label_df.shape # + _uuid="f96bf5c4965083ee33a6df7bd6f735d1a1b36e0e" label_df.pivot_table(index="breed", aggfunc=len).sort_values("id", ascending=False) # + [markdown] _uuid="f5d247eb0edcae92e07995860005a2d7ddd926cd" # # Preprocess data # + _uuid="e1481b420fcf11165bc1db7b624d7548db5a1acb" # define architecture arch = resnext101_64 sz = 224 bs = 64 # + _uuid="9b07ba79cdeecae7176fcafda71961cc4563218d" # create indexes for validation dataset val_idxs = get_cv_idxs(label_df.shape[0]) # + _uuid="e1fff62362c444062c7c1eddd8c8d62e113deaca" def get_data(sz=sz): """ Load images via fastai's ImageClassifierData.from_csv() object defined as 'data' before Return 
images if size bigger than 300 pixels, else resize to 340 pixels """ tfms = tfms_from_model(arch, sz, aug_tfms=transforms_side_on, max_zoom=1.1) data = ImageClassifierData.from_csv(path=wd, folder="train", csv_fname=f"{wd}labels.csv", tfms=tfms, val_idxs=val_idxs, suffix=".jpg", test_name="test") return data if sz > 300 else data.resize(340, new_path=wd) # + _uuid="c8e8aa073a39ab3df1dba5dcdb29a655a95d868a" data = get_data() # + _uuid="e7739489ca1f7c4123578e9cb3ef0062f3d41ead" [print(len(e)) for e in [data.trn_ds, data.val_ds, data.test_ds]] # + _uuid="ecbfe0cbfecdfbfe6a1d4a19b5e26a96fe53c1e1" # look at an actual image fn = wd + data.trn_ds.fnames[-1] img = PIL.Image.open(fn); img # + _uuid="3107ad0563db465cdb5217304c3b6df17910f9c2" img.size # + [markdown] _uuid="a1d286b831d6c6d09a82b244076e09662a9ad2da" # # Model # + [markdown] _uuid="de7ee619b3d13ce5a8ff484a62aef5617774510d" # ## Baseline # + _uuid="178af9dd4f5a7b82cb0dcf094c3b3103354fbe8d" learn = ConvLearner.pretrained(arch, data, ps=0.5, precompute=True) # + _uuid="06feb6b93e91d6c2728b351bb0e1e50fa83b640d" lrf = learn.lr_find() # + _uuid="a69becfb315d7d9481bbaabd7c9802c9751e9378" learn.sched.plot() # - lr = 1e-1 # + _uuid="a7a55d205a5749ac1ba1eaa8c826e99700497c64" # fit baseline model without data augmentation learn.fit(lr, 3) # + _uuid="0a9d9f105d5a5da7f31e7975e33e344d44c35ae4" # disable precompute and fit model with data augmentation learn.precompute=False learn.fit(lr, 3, cycle_len=1, cycle_mult=2) # + _uuid="b533f0001b6d6ff19a5dda4de2c07b5f40557be2" learn.save(f"{comp_name}_{arch.__name__}_{sz}_base") # + _uuid="d93d31a409cc634270fd46bccd9924c41af6cea9" learn.load(f"{comp_name}_{arch.__name__}_{sz}_base") # + [markdown] _uuid="5f4ba2030fbaea81340da1823316d3f058b97d02" # ## Increase image size # - sz = 299 # + _uuid="86b9abea409c3955703094aff35d83b53806ffb6" learn.set_data(get_data(sz)) # + _uuid="ec48978bef80f3dc596bdd9c1e3222bcbfc1b1b6" learn.fit(lr, 3, cycle_len=1) # - learn.sched.plot_loss() # + 
_uuid="978ec932c90ceb52b3e1ff736c8be521a48caf3c" learn.save(f"{comp_name}_{arch.__name__}_{sz}") # + _uuid="e334a8a3bbfd69afa26d4635a0e6d0463de0c23f" learn.load(f"{comp_name}_{arch.__name__}_{sz}") # - # ## Prediction on validation set # + _uuid="ff5c07603b8e2f26a18ea3e0922d1915fd335de4" from sklearn.metrics import log_loss log_preds, y = learn.TTA() probs = np.mean(np.exp(log_preds), 0) accuracy_np(probs, y), log_loss(y, probs) # + [markdown] _uuid="636b1434ef412b390e54f00604f3546d39d47b60" # ## Prediction on test set # + _uuid="7f461be7ef885644ebef38e764827a8a1623e2d5" log_preds_test, y_test = learn.TTA(is_test=True) probs_test = np.mean(np.exp(log_preds_test), 0) # - np.save(f"{comp_name}_probs_test", probs_test, allow_pickle=True) probs_test = np.load(f"{comp_name}_probs_test.npy") # + [markdown] _uuid="f5572c87471970966f5440ba0c73c2e6359660a7" # # Submission # + _uuid="ef430d71dafafb5b71ae84c7570d2f298cc18ccd" df = pd.DataFrame(probs_test) df.columns = data.classes # + _uuid="69ae992a321efb9b18164776dc089d3d53c235bb" # insert clean ids - without folder prefix and .jpg suffix - of images as first column df.insert(0, "id", [e[5:-4] for e in data.test_ds.fnames]) # + _uuid="87acfa74126c554b5fa3a6f74046a56a11d83b22" df.to_csv(f"sub_{comp_name}_{arch.__name__}.csv", index=False) # + _uuid="fb18a2bc3f3988a89d66b069030deeefa6674e7d" # only use with kaggle kernels #clean_up()
dogbreed/dogbreed_with_fastai.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # The Agate Tutorial # # The best way to learn to use any tool is to actually use it. In this tutorial we will use agate to answer some basic questions about a dataset. # # The data we will be using is a copy of the [National Registry of Exonerations](http://www.law.umich.edu/special/exoneration/Pages/detaillist.aspx) made on August 28th, 2015. This dataset lists individuals who are known to have been exonerated after having been wrongly convicted in United States courts. At the time this data was copied there were 1,651 entries in the registry. # ## Installing agate # # Installing agate from the command line is easy: # # pip install agate # # Note: You should be installing agate inside a [virtualenv](https://virtualenv.readthedocs.io/en/stable/). If for some crazy reason you aren't using virtualenv you will need to add a `sudo` to the previous command. # # For more detailed installation instructions, see the [Installation](http://agate.readthedocs.io/en/1.6.2/install.html) section of the documentation. # ## Getting the data # # If you're just reading this tutorial you can skip this section. If you want to try working through it on your own then you'll need to download the data. # # It can be downloaded from the command line: # # curl -L -O https://github.com/onyxfish/agate/raw/master/examples/realdata/exonerations-20150828.csv # # The rest of this tutorial will expect that data to be located in `examples/realdata`. # ## Importing agate # # Let's get started! import agate # ## Loading data from a CSV # # The [`Table`](http://agate.readthedocs.io/en/1.6.2/api/table.html#module-agate.table) is the basic class in agate. 
To create a table from a CSV we use the [`Table.from_csv`](http://agate.readthedocs.io/en/1.6.2/api/table.html#agate.Table.from_csv) class method: exonerations = agate.Table.from_csv('examples/realdata/exonerations-20150828.csv') # With no other arguments specified, agate will automatically create an instance of [`TypeTester`](http://agate.readthedocs.io/en/1.6.2/api/type_tester.html#agate.TypeTester) and use it to figure out the type of each column. TypeTester is a "best guess" approach to determining the kinds of data in your table. It can guess wrong. In that case you can create a TypeTester manually and use the ``force`` argument to override its guess for a specific column: # + tester = agate.TypeTester(force={ 'false_evidence': agate.Boolean() }) exonerations = agate.Table.from_csv('examples/realdata/exonerations-20150828.csv', column_types=tester) # - # If you already know the types of your data you may wish to skip the TypeTester entirely. You may pass sequences of column names and column types to [`Table.from_csv`](http://agate.readthedocs.io/en/1.6.2/api/table.html#agate.Table.from_csv) as the ``column_names`` and ``column_types`` arguments, respectively. # # For larger datasets the [`TypeTester`](http://agate.readthedocs.io/en/1.6.2/api/type_tester.html#agate.TypeTester) can be slow to evaluate the data. In that case you can specify a `limit` argument to restrict the amount of data it will use to infer types: # + tester = agate.TypeTester(limit=100) exonerations = agate.Table.from_csv('examples/realdata/exonerations-20150828.csv', column_types=tester) # - # The dataset we are using in this tutorial is simple enough that we can rely on the built-in TypeTester to guess quickly and accurately. # # **Note:** agate's CSV reader and writer support unicode and other encodings for both Python 2 and Python 3. Try using them as a drop-in replacement for Python's builtin module: `from agate import csv`. 
# # **Note:** agate also has [`Table.from_json`](http://agate.readthedocs.io/en/1.6.2/api/table.html#agate.Table.from_json) for creating tables from JSON data. # Describing the table # ==================== # # If you're working with new data, or you just need a refresher, you may want to review what columns are in the table. You can do this with the [`Table.print_structure`](http://agate.readthedocs.io/en/1.6.2/api/table.html#agate.Table.print_structure) method or by just calling `print` on the table: print(exonerations) # Navigating table data # ===================== # # agate goes to great pains to make accessing the data in your tables work seamlessly for a wide variety of use-cases. Access by both [`Column`](http://agate.readthedocs.io/en/1.6.2/api/columns_and_rows.html#agate.Column) and [`Row`](http://agate.readthedocs.io/en/1.6.2/api/columns_and_rows.html#agate.Row) is supported, via the [`Table.columns`](http://agate.readthedocs.io/en/1.6.2/api/table.html#agate.Table.columns) and [`Table.rows`](http://agate.readthedocs.io/en/1.6.2/api/table.html#agate.Table.rows) attributes respectively. # # All four of these objects are examples of [`MappedSequence`](http://agate.readthedocs.io/en/1.6.2/api/columns_and_rows.html#agate.MappedSequence), the foundational type that underlies much of agate's functionality. A MappedSequence functions very similarly to a standard Python [`dict`](https://docs.python.org/3/tutorial/datastructures.html#dictionaries), with a few important exceptions: # # * Data may be accessed either by numeric index (e.g. column number) or by a non-integer key (e.g. column name). # * Items are ordered, just like an instance of [`collections.OrderedDict`](https://docs.python.org/3.5/library/collections.html#collections.OrderedDict). # * Iterating over the sequence returns its *values*, rather than its *keys*. 
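# The behavior described above can be sketched in plain Python (an illustrative stand-in, not agate's actual implementation): integer indices fall through to positional lookup, any other key is looked up by name, and iteration yields values.

```python
class MappedSequenceSketch:
    """Toy ordered sequence with access by position or by key (illustration only)."""
    def __init__(self, values, keys):
        self._values = list(values)
        self._keys = list(keys)

    def __getitem__(self, k):
        if isinstance(k, int):  # numeric index -> positional access
            return self._values[k]
        return self._values[self._keys.index(k)]  # anything else -> key lookup

    def __iter__(self):  # iteration yields values, in order
        return iter(self._values)

columns = MappedSequenceSketch(["Abbitt", "burglary"], ["last_name", "crime"])
print(columns[0], columns["crime"], list(columns))
```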
# # To demonstrate the first point, these two lines are both valid ways of getting the first column in the `exonerations` table: exonerations.columns['last_name'] exonerations.columns[0] # In the same way, rows can be accessed either by numeric index or by an optional, unique "row name" specified when the table is created. In this tutorial we won't use row names, but here is an example of how they work: # + exonerations = agate.Table.from_csv('examples/realdata/exonerations-20150828.csv', row_names=lambda r: '%(last_name)s, %(first_name)s' % (r)) exonerations.rows[0] # - exonerations.rows['Abbitt, <NAME>'] # In this case we create our row names using a [`lambda`](https://docs.python.org/3/tutorial/controlflow.html#lambda-expressions) function that takes a row and returns a unique identifier. If your data has a unique column, you can also just pass the column name. (For example, a column of USPS abbreviations or FIPS codes.) Note, however, that your row names can never be `int`, because that is reserved for indexing by numeric order. (A [`decimal.Decimal`](https://docs.python.org/3.5/library/decimal.html#decimal.Decimal) or stringified integer is just fine.) # # Once you've got a specific row, you can then access its individual values (cells, in spreadsheet-speak) either by numeric index or column name: # + row = exonerations.rows[0] row[0] # - row['last_name'] # And the same goes for columns, which can be indexed numerically or by row name (if one has been set up): # + column = exonerations.columns['crime'] column[0] # - column['Abbitt, <NAME>'] # For any instance of [`MappedSequence`](http://agate.readthedocs.io/en/1.6.2/api/columns_and_rows.html#agate.MappedSequence), iteration returns values, *in order*. 
Here we print only the first ten: for row in exonerations.rows[:10]: print(row['last_name']) # To summarize, the four most common data structures in agate ([`Column`](http://agate.readthedocs.io/en/1.6.2/api/columns_and_rows.html#agate.Column), [`Row`](http://agate.readthedocs.io/en/1.6.2/api/columns_and_rows.html#agate.Row), [`Table.columns`](http://agate.readthedocs.io/en/1.6.2/api/table.html#agate.Table.columns) and [`Table.rows`](http://agate.readthedocs.io/en/1.6.2/api/table.html#agate.Table.rows)) are all instances of [`MappedSequence`](http://agate.readthedocs.io/en/1.6.2/api/columns_and_rows.html#agate.MappedSequence) and therefore all behave in a uniform way. This is also true of [`TableSet`](http://agate.readthedocs.io/en/1.6.2/api/tableset.html), which we will discuss later on. # Aggregating column data # ======================= # # With the basics out of the way, let's do some actual analysis. Analysis begins with questions, so let's ask some. # # **Question:** How many exonerations involved a false confession? # # Answering this question involves counting the number of ``True`` values in the ``false_confession`` column. When we created the table we specified that the data in this column contained [`Boolean`](http://agate.readthedocs.io/en/1.6.2/api/data_types.html#agate.Boolean) data. Because of this, agate has taken care of coercing the original text data from the CSV into Python's ``True`` and ``False`` values. # # We'll answer the question using an instance of [`Count`](http://agate.readthedocs.io/en/1.6.2/api/aggregations.html#agate.Count), which is a type of [`Aggregation`](http://agate.readthedocs.io/en/1.6.2/api/aggregations.html#agate.Aggregation). Aggregations are used to perform "column-wise" calculations. That is, they derive a new single value from the contents of a column. 
The [`Count`](http://agate.readthedocs.io/en/1.6.2/api/aggregations.html#agate.Count) aggregation can count either all values in a column, or how many times a particular value appears. # # An Aggregation is applied to a table using [`Table.aggregate`](http://agate.readthedocs.io/en/1.6.2/api/table.html#agate.Table.aggregate). # # It sounds complicated, but it's really simple. Putting it all together looks like this: exonerations.aggregate(agate.Count('false_confession', True)) # Let's look at another example, this time using a numerical aggregation. # # **Question:** What was the median age of exonerated individuals at time of arrest? exonerations.aggregate(agate.Median('age')) # The answer to our question is "26 years old"; however, as the warnings indicate, not every exonerated individual in the data has a value for the ``age`` column. The [`Median`](http://agate.readthedocs.io/en/1.6.2/api/aggregations.html#agate.Median) statistical operation has no standard way of accounting for null values, so it leaves them out of the calculation. # # **Question:** How many individuals do not have an age specified in the data? # # Now that we know there are null values in the ``age`` column, we might worry about our sample size. What if most of the rows don't have an age? exonerations.aggregate(agate.Count('age', None)) # Only nine rows in this dataset don't have an age, so it's certainly still useful to compute a median. However, we might still want to filter those rows out so we could have a consistent sample for all of our calculations. In the next section you'll learn how to do just that. # # Different [`aggregations`](http://agate.readthedocs.io/en/1.6.2/api/aggregations.html) can be applied depending on the type of data in each column. If none of the provided aggregations suit your needs you can use [`Summary`](http://agate.readthedocs.io/en/1.6.2/api/aggregations.html#agate.Summary) to apply an arbitrary function to a column. 
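# As an aside, the null-skipping median behavior described above is easy to mimic in plain Python. The sketch below illustrates the idea only; it is not agate's actual implementation:

```python
from statistics import median

def median_ignoring_nulls(values):
    """Median that, like the behavior described above, leaves null
    (None) entries out of the calculation instead of failing on them."""
    non_null = [v for v in values if v is not None]
    return median(non_null)

ages = [26, None, 31, 18, None, 26]
print(median_ignoring_nulls(ages))        # median of the four non-null ages
print(sum(1 for v in ages if v is None))  # how many entries are missing an age
```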
If that still doesn't suit your needs you can always create your own aggregation from scratch by subclassing [`Aggregation`](http://agate.readthedocs.io/en/1.6.2/api/aggregations.html#agate.Aggregation). # Selecting and filtering data # ============================ # # So what if those rows with no age were going to flummox our analysis? Agate's [`Table`](http://agate.readthedocs.io/en/1.6.2/api/table.html#agate.Table) class provides a full suite of SQL-like operations including [`Table.select`](http://agate.readthedocs.io/en/1.6.2/api/table.html#agate.Table.select) for grabbing specific columns, [`Table.where`](http://agate.readthedocs.io/en/1.6.2/api/table.html#agate.Table.where) for selecting particular rows and [`Table.group_by`](http://agate.readthedocs.io/en/1.6.2/api/table.html#agate.Table.group_by) for grouping rows by common values. # # Let's use [`Table.where`](http://agate.readthedocs.io/en/1.6.2/api/table.html#agate.Table.where) to filter our exonerations table to only those individuals that have an age specified. with_age = exonerations.where(lambda row: row['age'] is not None) # You'll notice we provide a [`lambda`](https://docs.python.org/3/tutorial/controlflow.html#lambda-expressions) function to the [`Table.where`](http://agate.readthedocs.io/en/1.6.2/api/table.html#agate.Table.where). This function is applied to each row and if it returns ``True``, then the row is included in the output table. # # A crucial thing to understand about these table methods is that they return **new tables**. In our example above ``exonerations`` was a [`Table`](http://agate.readthedocs.io/en/1.6.2/api/table.html#agate.Table) instance and we applied [`Table.where`](http://agate.readthedocs.io/en/1.6.2/api/table.html#agate.Table.where), so ``with_age`` is a new, different [`Table`](http://agate.readthedocs.io/en/1.6.2/api/table.html#agate.Table). The tables themselves can't be changed. You can create new tables with these methods, but you can't modify them in-place. 
(If this seems weird, just trust me. There are lots of good computer science-y reasons to do it this way.) # # We can verify this did what we expected by counting the rows in the original table and rows in the new table: len(exonerations.rows) - len(with_age.rows) # Nine rows were removed, which matches the number of null values we had already identified in the column. # # Now if we calculate the median age of these individuals, we don't see the warning anymore. with_age.aggregate(agate.Median('age')) # Computing new columns # ===================== # # In addition to "column-wise" [`aggregations`](http://agate.readthedocs.io/en/1.6.2/api/aggregations.html#module-agate.aggregations) there are also "row-wise" [`computations`](http://agate.readthedocs.io/en/1.6.2/api/computations.html#module-agate.computations). Computations go through a [`Table`](http://agate.readthedocs.io/en/1.6.2/api/table.html#agate.Table) row-by-row and derive a new column using the existing data. To perform row computations in agate we use subclasses of [`Computation`](http://agate.readthedocs.io/en/1.6.2/api/computations.html#agate.Computation). # # When one or more instances of [`Computation`](http://agate.readthedocs.io/en/1.6.2/api/computations.html#agate.Computation) are applied with the [`Table.compute`](http://agate.readthedocs.io/en/1.6.2/api/table.html#agate.Table.compute) method, a new table is created with additional columns. # # **Question:** How long did individuals remain in prison before being exonerated? # # To answer this question we will apply the [`Change`](http://agate.readthedocs.io/en/1.6.2/api/computations.html#agate.Change) computation to the ``convicted`` and ``exonerated`` columns. Each of these columns contains the year of that event. All that [`Change`](http://agate.readthedocs.io/en/1.6.2/api/computations.html#agate.Change) does is compute the difference between two numbers. 
(In this case each of these columns contains a [`Number`](http://agate.readthedocs.io/en/1.6.2/api/data_types.html#agate.Number), but this will also work with [`Date`](http://agate.readthedocs.io/en/1.6.2/api/data_types.html#agate.Date) or [`DateTime`](http://agate.readthedocs.io/en/1.6.2/api/data_types.html#agate.DateTime).) # + with_years_in_prison = exonerations.compute([ ('years_in_prison', agate.Change('convicted', 'exonerated')) ]) with_years_in_prison.aggregate(agate.Median('years_in_prison')) # - # The median number of years an exonerated individual spent in prison was 8 years. # # Sometimes the built-in computations, such as [`Change`](http://agate.readthedocs.io/en/1.6.2/api/computations.html#agate.Change), won't suffice. I mentioned before that you could perform arbitrary column-wise aggregations using [`Summary`](http://agate.readthedocs.io/en/1.6.2/api/aggregations.html#agate.Summary). You can do the same thing for row-wise computations using [`Formula`](http://agate.readthedocs.io/en/1.6.2/api/computations.html#agate.Formula). This is somewhat analogous to Excel's cell formulas. # # For example, this code will create a ``full_name`` column from the ``first_name`` and ``last_name`` columns in the data: full_names = exonerations.compute([ ('full_name', agate.Formula(agate.Text(), lambda row: '%(first_name)s %(last_name)s' % row)) ]) # For efficiency's sake, agate allows you to perform several computations at once (though their results can't depend on one another): with_computations = exonerations.compute([ ('full_name', agate.Formula(agate.Text(), lambda row: '%(first_name)s %(last_name)s' % row)), ('years_in_prison', agate.Change('convicted', 'exonerated')) ]) # You can also compute new columns to clean up your raw data. In the initial data, the ``state`` column has some values with an 'F-' prefix on the state abbreviation. Cases with that prefix are federal cases as opposed to state prosecutions. 
To make the data easier to use, we can create a new ``federal`` column to tag federal cases and clean up the original state column: clean_state_data = exonerations.compute([ ('federal', agate.Formula(agate.Boolean(), lambda row: row['state'].startswith('F-'))), ('state', agate.Formula(agate.Text(), lambda row: row['state'][2:] if row['state'].startswith('F-') else row['state'])) ], replace=True) # We add the ``replace`` argument to our ``compute`` method to replace the state column in place. # # If [`Formula`](http://agate.readthedocs.io/en/1.6.2/api/computations.html#agate.Formula) is not flexible enough (for instance, if you needed to compute a new value based on the distribution of data in a column) you can always implement your own subclass of [`Computation`](http://agate.readthedocs.io/en/1.6.2/api/computations.html#agate.Computation). See the API documentation for [`computations`](http://agate.readthedocs.io/en/1.6.2/api/computations.html#module-agate.computations) to see all of the supported ways to compute new data. # Sorting and slicing # =================== # # **Question:** Who are the ten exonerated individuals who were youngest at the time they were arrested? # # Remembering that methods of tables return tables, we will use [`Table.order_by`](http://agate.readthedocs.io/en/1.6.2/api/table.html#agate.Table.order_by) to sort our table: sorted_by_age = exonerations.order_by('age') # We can then use [`Table.limit`](http://agate.readthedocs.io/en/1.6.2/api/table.html#agate.Table.limit) to get only the first ten rows of the data. 
youngest_ten = sorted_by_age.limit(10) # Now let's use [`Table.print_table`](http://agate.readthedocs.io/en/1.6.2/api/table.html#agate.Table.print_table) to help us pretty-print the results in a way we can easily review: youngest_ten.print_table(max_columns=7) # If you find it impossible to believe that an eleven-year-old was convicted of murder, I encourage you to read the Registry's [description of the case](http://www.law.umich.edu/special/exoneration/Pages/casedetail.aspx?caseid=3499). # # **Note:** In the previous example we could have omitted the [`Table.limit`](http://agate.readthedocs.io/en/1.6.2/api/table.html#agate.Table.limit) and passed a ``max_rows=10`` to [`Table.print_table`](http://agate.readthedocs.io/en/1.6.2/api/table.html#agate.Table.print_table) instead. In this case they accomplish exactly the same goal. # # What if we were more curious about the *distribution* of ages, rather than the highest or lowest? agate includes the [`Table.pivot`](http://agate.readthedocs.io/en/1.6.2/api/table.html#agate.Table.pivot) and [`Table.bins`](http://agate.readthedocs.io/en/1.6.2/api/table.html#agate.Table.bins) methods for counting values individually or by ranges. Let's try binning the ages. Then, instead of using [`Table.print_table`](http://agate.readthedocs.io/en/1.6.2/api/table.html#agate.Table.print_table), we'll use [`Table.print_bars`](http://agate.readthedocs.io/en/1.6.2/api/table.html#agate.Table.print_bars) to generate a simple text bar chart. binned_ages = exonerations.bins('age', 10, 0, 100) binned_ages.print_bars('age', 'Count', width=80) # Notice that we specify we want `10` bins spanning the range `0` to `100`. If these values are omitted agate will attempt to infer good defaults. We also specify that we want our bar chart to span a width of `80` characters. This can be adjusted to a suitable width for your terminal or document. 
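# The equal-width binning that `Table.bins` performs can be approximated in a few lines of plain Python. This is a rough, illustrative sketch (nulls skipped for simplicity, the end value clamped into the last bin), not agate's implementation:

```python
from collections import Counter

def bin_counts(values, count=10, start=0, end=100):
    """Count values into `count` equal-width bins spanning start..end,
    mirroring the bins('age', 10, 0, 100) call above."""
    width = (end - start) / count
    counts = Counter()
    for v in values:
        if v is None:
            continue  # skip null values for simplicity
        index = min(int((v - start) // width), count - 1)  # clamp end value into last bin
        counts[index] += 1
    return {(start + i * width, start + (i + 1) * width): counts[i] for i in range(count)}

ages = [11, 14, 26, 26, 31, 43, 58, None]
for bounds, n in bin_counts(ages).items():
    if n:
        print(bounds, n)
```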
# # **Note:** If you use a monospaced font, such as Courier, you can copy and paste agate bar charts into emails or documents. No screenshots required. # Grouping and aggregating # ======================== # # **Question:** Which state has seen the most exonerations? # # This question can't be answered by operating on a single column. What we need is the equivalent of SQL's ``GROUP BY``. agate supports a full set of SQL-like operations on tables. Unlike SQL, agate breaks grouping and aggregation into two discrete steps. # # First, we use [`Table.group_by`](http://agate.readthedocs.io/en/1.6.2/api/table.html#agate.Table.group_by) to group the data by state. by_state = clean_state_data.group_by('state') # This takes our original [`Table`](http://agate.readthedocs.io/en/1.6.2/api/table.html#agate.Table) and groups it into a [`TableSet`](http://agate.readthedocs.io/en/1.6.2/api/tableset.html#agate.TableSet), which contains one table per state. As mentioned much earlier in this tutorial, TableSets are instances of [`MappedSequence`](http://agate.readthedocs.io/en/1.6.2/api/columns_and_rows.html#agate.MappedSequence). That means they work very much like [`Column`](http://agate.readthedocs.io/en/1.6.2/api/columns_and_rows.html#agate.Column) and [`Row`](http://agate.readthedocs.io/en/1.6.2/api/columns_and_rows.html#agate.Row). # # Now we need to aggregate the total for each state. This works in a very similar way to how it did when we were aggregating columns of a single table, except that we'll use the [`Count`](http://agate.readthedocs.io/en/1.6.2/api/aggregations.html#agate.Count) aggregation to count the total number of rows in each group. # + state_totals = by_state.aggregate([ ('count', agate.Count()) ]) sorted_totals = state_totals.order_by('count', reverse=True) sorted_totals.print_table(max_rows=5) # - # You'll notice we pass a sequence of tuples to [`TableSet.aggregate`](http://agate.readthedocs.io/en/1.6.2/api/tableset.html#agate.TableSet.aggregate). 
Each one includes two elements. The first is the new column name being created. The second is an instance of some [`Aggregation`](http://agate.readthedocs.io/en/1.6.2/api/aggregations.html#agate.Aggregation). Unsurprisingly, in this case the results appear to be roughly proportional to population. # # **Question:** What state has the longest median time in prison prior to exoneration? # # This is a much more complicated question that's going to pull together a lot of the features we've been using. We'll repeat the computations we applied before, but this time we're going to roll those computations up in state-by-state groups and then take the [`Median`](http://agate.readthedocs.io/en/1.6.2/api/aggregations.html#agate.Median) of each group. Then we'll sort the data and see where people have been stuck in prison the longest. # + with_years_in_prison = exonerations.compute([ ('years_in_prison', agate.Change('convicted', 'exonerated')) ]) state_totals = with_years_in_prison.group_by('state') medians = state_totals.aggregate([ ('count', agate.Count()), ('median_years_in_prison', agate.Median('years_in_prison')) ]) sorted_medians = medians.order_by('median_years_in_prison', reverse=True) sorted_medians.print_table(max_rows=5) # - # DC? Nebraska? What accounts for these states having the longest times in prison before exoneration? I have no idea! Given that the group sizes are small, it would probably be wise to look for outliers. # # As with [`Table.aggregate`](http://agate.readthedocs.io/en/1.6.2/api/table.html#agate.Table.aggregate) and [`Table.compute`](http://agate.readthedocs.io/en/1.6.2/api/table.html#agate.Table.compute), the [`TableSet.aggregate`](http://agate.readthedocs.io/en/1.6.2/api/tableset.html#agate.TableSet.aggregate) method takes a list of aggregations to perform. You can aggregate as many columns as you like in a single step and they will all appear in the output table. 
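# The group-then-aggregate pattern above has a close analogue in the standard library. Here is a hedged sketch with made-up numbers (not the real exonerations data) showing the same two discrete steps:

```python
from collections import defaultdict
from statistics import median

# (state, years_in_prison) pairs standing in for table rows
rows = [('TX', 5), ('TX', 12), ('NY', 8), ('NY', 9), ('DC', 20)]

# Step 1: group rows by state -- conceptually what Table.group_by does
groups = defaultdict(list)
for state, years in rows:
    groups[state].append(years)

# Step 2: aggregate each group -- conceptually what TableSet.aggregate does
medians = {state: (len(ys), median(ys)) for state, ys in groups.items()}

# Sort by the aggregate, descending -- the order_by(..., reverse=True) step
for state, (count, med) in sorted(medians.items(), key=lambda kv: kv[1][1], reverse=True):
    print(state, count, med)
```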
# Multi-dimensional aggregation # ============================= # # I've already shown you that you can use [`TableSet`](http://agate.readthedocs.io/en/1.6.2/api/tableset.html#agate.TableSet) to group instances of [`Table`](http://agate.readthedocs.io/en/1.6.2/api/table.html#agate.Table). However, you can also use a [`TableSet`](http://agate.readthedocs.io/en/1.6.2/api/tableset.html#agate.TableSet) to group *other TableSets*. To put that another way, instances of [`TableSet`](http://agate.readthedocs.io/en/1.6.2/api/tableset.html#agate.TableSet) can be *nested*. # # The key to nesting data in this way is to use [`TableSet.group_by`](http://agate.readthedocs.io/en/1.6.2/api/tableset.html#agate.TableSet.group_by). This is one of many methods that can be called on a TableSet, which will then be applied to all the tables it contains. In the last section we used [`Table.group_by`](http://agate.readthedocs.io/en/1.6.2/api/table.html#agate.Table.group_by) to split data up into a group of tables. Calling [`TableSet.group_by`](http://agate.readthedocs.io/en/1.6.2/api/tableset.html#agate.TableSet.group_by) essentially calls ``group_by`` on each table and collects the results. This can be pretty hard to wrap your head around, so let's look at a concrete example. # # **Question:** Is there a collective relationship between race, age and time spent in prison prior to exoneration? # # I'm not going to explain every stage of this analysis as most of it repeats patterns used previously. The key part to look for is the two separate uses of ``group_by``: # + # Filters rows without age data only_with_age = with_years_in_prison.where( lambda r: r['age'] is not None ) # Group by race race_groups = only_with_age.group_by('race') # Sub-group by age cohorts (20s, 30s, etc.) 
race_and_age_groups = race_groups.group_by( lambda r: '%i0s' % (r['age'] // 10), key_name='age_group' ) # Aggregate medians for each group medians = race_and_age_groups.aggregate([ ('count', agate.Count()), ('median_years_in_prison', agate.Median('years_in_prison')) ]) # Sort the results sorted_groups = medians.order_by('median_years_in_prison', reverse=True) # Print out the results sorted_groups.print_table(max_rows=10) # - # ## Exploratory charting # # Beginning with version 1.5.0, agate includes the pure-Python SVG charting library [leather](http://leather.readthedocs.io/en/latest/). Leather allows you to generate "good enough" charts with as little as one line of code. It's especially useful if you're working in a Jupyter Notebook, as the results will render inline. # # There are currently four chart types supported: [`Table.bar_chart`](http://agate.readthedocs.io/en/1.6.2/api/table.html#agate.Table.bar_chart), [`Table.column_chart`](http://agate.readthedocs.io/en/1.6.2/api/table.html#agate.Table.column_chart), [`Table.line_chart`](http://agate.readthedocs.io/en/1.6.2/api/table.html#agate.Table.line_chart), and [`Table.scatterplot`](http://agate.readthedocs.io/en/1.6.2/api/table.html#agate.Table.scatterplot). # # Let's create charts from a few slices of data we've made in this tutorial. # ### Exonerations by state sorted_totals.bar_chart('state', 'count', height=1000) # Leather will try to maintain a reasonable aspect ratio for the chart. In this case the chart is too short to display correctly. We've used the `height` argument to make the chart a little taller. # ### Exonerations by age bracket # # When creating a chart you may omit the column name arguments. If you do so the first and second columns in the table will be used. 
This is especially useful for charting the output of [`TableSet.aggregate`](http://agate.readthedocs.io/en/1.6.2/api/tableset.html#agate.TableSet.aggregate) or [`Table.bins`](http://agate.readthedocs.io/en/1.6.2/api/table.html#agate.Table.bins). binned_ages.bar_chart() # ### Exonerations by year # + by_year_exonerated = exonerations.group_by('exonerated') counts = by_year_exonerated.aggregate([ ('count', agate.Count()) ]) counts.order_by('exonerated').line_chart('exonerated', 'count') # - # ### Exonerations over time, for most commonly exonerated crimes # # The real power of agate's exploratory charting comes when we want to compare different facets of data. With leather, agate can automatically render a chart for each group in a TableSet. # + # Filter to crimes with at least 100 exonerations top_crimes = exonerations.group_by('crime').having([ ('count', agate.Count()) ], lambda t: t['count'] > 100) # Group by year of exoneration by_year = top_crimes.group_by('exonerated') # Count number of exonerations in each year counts = by_year.aggregate([ ('count', agate.Count()) ]) # Group by crime by_crime = counts.group_by('crime') # Sort each group of exonerations by year and chart the results by_crime.order_by('exonerated').line_chart('exonerated', 'count') # - # ### Styling charts # # As mentioned above, leather is designed for making "good enough" charts. You are never going to create a polished chart with it. However, sometimes you may want more control than agate offers through its own methods. You can take more control over how your charts are presented by using [leather](http://leather.readthedocs.io/) directly. # + import leather chart = leather.Chart('Total exonerations by state') chart.add_y_axis(name='State') chart.add_x_axis(name='Number of exonerations') chart.add_bars(sorted_totals, x='count', y='state') chart.to_svg(height=1000) # - # Where to go next # ================ # # This tutorial only scratches the surface of agate's features. 
For many more ideas on how to apply agate, check out the [`Cookbook`](http://agate.readthedocs.io/en/1.6.2/cookbook.html), which includes dozens of examples of specific features of agate as well as recipes for substituting agate for Excel, SQL, R and more. Also check out agate's [`Extensions`](http://agate.readthedocs.io/en/1.6.2/extensions.html), which add support for reading/writing SQL tables, performing statistical analysis and more. # # Finally, if you're going to be doing data processing in Python you really ought to check out [`proof`](http://proof.readthedocs.org/en/1.6.2/), a library for building data processing pipelines that are repeatable and self-documenting. It will make your code cleaner and save you tons of time. # # Good luck in your reporting!
tutorial.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .r # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: R # language: R # name: ir # --- # # Compare model performance between real and permuted hetnets library(magrittr) # + auroc_df = readr::read_tsv('data/auroc.tsv') degrees = dplyr::filter(auroc_df, feature_type == 'degree')$feature metapaths = dplyr::filter(auroc_df, feature_type == 'dwpc')$feature # col_types not needed here, but used for safety col_types = list() for (metapath in metapaths) { col_types[[metapath]] = readr::col_number() } for (degree in degrees) { col_types[[degree]] = readr::col_integer() } feature_df = readr::read_tsv('data/matrix/features.tsv.bz2', col_types = col_types) # + transform_df = function(df) { df = dplyr::bind_cols( df %>% dplyr::transmute(status, prior_logit = boot::logit(prior_prob)), df %>% dplyr::select(one_of(degrees)) %>% dplyr::mutate_each(dplyr::funs(asinh)), df %>% dplyr::select(one_of(metapaths)) %>% dplyr::mutate_each(dplyr::funs(asinh(. 
/ mean(.)))) ) return(df) } transformed_df = feature_df %>% dplyr::group_by(hetnet) %>% dplyr::do(transform_df(.)) %>% dplyr::ungroup() head(transformed_df, 2) # + fit_list = list() i = 0 get_performance = function(df, incl_degrees) { for (seed in 1:5) { for (alpha in 0:1) { i <<- i + 1 y = df$status X = df %>% dplyr::select(-status, -hetnet) %>% as.matrix() penalty_factor = ifelse(colnames(X) == 'prior_logit', 0, 1) fit = hetior::glmnet_train(X = X, y = y, alpha = alpha, cores = 5, seed=seed, penalty.factor=penalty_factor, lambda.min.ratio=1e-6, nlambda=200 ) fit$name = df$hetnet[1] fit$incl_degrees = incl_degrees fit_list[[i]] <<- fit } } return(data.frame(i)) } temp = transformed_df %>% dplyr::group_by(hetnet) %>% dplyr::do(get_performance(., incl_degrees=1)) temp = transformed_df %>% dplyr::select(-one_of(degrees)) %>% dplyr::group_by(hetnet) %>% dplyr::do(get_performance(., incl_degrees=0)) # + result_df = fit_list %>% lapply(function(l) { dplyr::data_frame( name = l$name, alpha = l$alpha, incl_degrees = l$incl_degrees, seed = l$seed, auroc = l$vtm$auroc, auprc = l$vtm$auprc, tjur = l$vtm$tjur ) }) %>% dplyr::rbind_all() head(result_df, 2) # - result_df %>% readr::write_tsv('data/model-performances.tsv') # + summary_df = result_df %>% dplyr::mutate(permuted = as.integer(grepl('perm', result_df$name))) %>% dplyr::group_by(permuted, alpha, incl_degrees, alpha) %>% dplyr::do( dplyr::bind_cols( ggplot2::mean_cl_normal(.$tjur) %>% dplyr::rename(tjur=y, tjur_lower=ymin, tjur_upper=ymax), ggplot2::mean_cl_normal(.$auroc) %>% dplyr::rename(auroc=y, auroc_lower=ymin, auroc_upper=ymax), ggplot2::mean_cl_normal(.$auprc) %>% dplyr::rename(auprc=y, auprc_lower=ymin, auprc_upper=ymax)) ) %>% dplyr::ungroup() %>% dplyr::arrange(desc(tjur)) summary_df %>% readr::write_tsv('data/model-performances-summary.tsv') summary_df
all-features/8-model-performances.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- geemap.update_package() import geemap Map = geemap.Map() Map.zoom_to_me(zoom=14, add_marker=True) Map # + import geocoder g = geocoder.ip("me") props = g.geojson["features"][0]["properties"] lat = props["lat"] lon = props["lng"] print(lat, lon) # -
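# The lat/lon lookup above depends on the structure `geocoder` returns at runtime. For reference, the same parsing can be exercised against a hypothetical GeoJSON payload (the coordinates below are invented):

```python
# Shaped like geocoder.ip("me").geojson, with made-up values
geojson = {
    "features": [
        {"properties": {"lat": 47.6, "lng": -122.3, "city": "Seattle"}}
    ]
}

# Same extraction as in the cell above
props = geojson["features"][0]["properties"]
lat = props["lat"]
lon = props["lng"]
print(lat, lon)
```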
notebooks/livelocation.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] deletable=true editable=true # ### osu!nn #1: Map Dataset Reader # # This notebook reads a file "maplist.txt", then reads the .osu files and the relevant music files to convert into some data. # # Data that feeds the Deep Neural Network. # # Last edit: 2019/4/22 # + [markdown] deletable=true editable=true # First of all, we need to install FFmpeg and specify its path here. It is needed to convert the .mp3 files to .wavs which Python can read. # # It's also fine to use any other converter, such as LAME: just edit the 24th line of osureader.py (starting with "subprocess.call") for the converter's parameters. # # **Then, fill maplist.txt with the paths of .osu files you want to train with.** Otherwise it cannot find any of the maps because the maps are on my computer. The default model is trained with the Sota dataset including 44 maps of Sota Fujimori music. # # After that run the grid below to convert the maps. # + deletable=true editable=true import os, re, time from osureader import * # set the ffmpeg path here!! 
# add "r" before the path string GLOBAL_VARS["ffmpeg_path"] = r"D:\StudyData\Tensorflow\ffmpeg\bin\ffmpeg.exe"; # in linux, it is installed globally, so use this # GLOBAL_VARS["ffmpeg_path"] = "ffmpeg"; mapdata_path = "mapdata/"; # check if it works test_process_path(GLOBAL_VARS["ffmpeg_path"]); # check if nodejs works test_process_path("node"); # the divisor parameter divisor = 4; # make sure the mapdata folder exists if not os.path.isdir(mapdata_path): os.mkdir(mapdata_path); with open("maplist.txt") as fp: fcont = fp.readlines(); # The following part is something I used to filter maps with difficulty names results = []; # exclude_words = ["Easy", "Normal", "Hard", "Taiko", "Salad", "Platter", "Overdose", "Rain", "4K", "5K", "6K", "7K", "8K", "9K", # "Kantan", "Futsuu", "Muzukashii", "Oni", "Field "]; for line in fcont: # if re.search("TV", line): # apd = True; # for kw in exclude_words: # if kw.lower() in line.strip().lower(): # apd = False; # break; # if apd: # results.append(line.strip()); results.append(line); # Remove the originally existing npzs for file in os.listdir(mapdata_path): if file.endswith(".npz"): os.remove(os.path.join(mapdata_path, file)); print("Number of filtered maps: {}".format(len(results))); for k, mname in enumerate(results): try: start = time.time() read_and_save_osu_file(mname.strip(), filename=os.path.join(mapdata_path, str(k)), divisor=divisor); end = time.time() print("Map data #" + str(k) + " saved! time = " + str(end - start) + " secs"); except Exception as e: print("Error on #{}, path = {}, error = {}".format(str(k), mname.strip(), e)); # If some map causes bug please tell me!! https://discord.gg/npmSy7K
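# The commented-out difficulty filter above can be factored into a small standalone function. This is a sketch of the same idea, using a subset of the excluded difficulty names from the comment (the example map names are invented):

```python
EXCLUDE_WORDS = ["Easy", "Normal", "Hard", "Taiko", "Salad", "Platter",
                 "Kantan", "Futsuu", "Muzukashii", "Oni"]

def filter_maps(lines, keyword=None):
    """Keep map paths that contain `keyword` (if given) and whose
    names contain none of the excluded difficulty words."""
    results = []
    for line in lines:
        line = line.strip()
        if keyword and keyword.lower() not in line.lower():
            continue
        if any(kw.lower() in line.lower() for kw in EXCLUDE_WORDS):
            continue
        results.append(line)
    return results

maps = [
    "Artist - Title (Mapper) [Insane].osu",
    "Artist - Title (Mapper) [Hard].osu",
    "Artist - Title (Mapper) [Muzukashii].osu",
]
print(filter_maps(maps))
```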
v6.2/01_osumap_loader.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + id="l6qTWG2Tc8sO" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1631751650499, "user_tz": 420, "elapsed": 23731, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09042963316942946918"}} outputId="be96f6e2-2202-4626-d755-96d14d9c7fb4" #Mounts your google drive into this virtual machine from google.colab import drive drive.mount('/content/drive') # + id="zYFO_Ha3ZyIW" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1631751711910, "user_tz": 420, "elapsed": 106, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09042963316942946918"}} outputId="6a270d42-a813-4d00-8eaa-67186588b200" #Now we need to access the files downloaded, copy the path where you saved the files downloaded from the github repo and replace the path below # %cd /content/drive/MyDrive/path/to/files/cloned/from/repo/and/now/in/your/GoogleDrive/ # + id="G7WRW_OQdQUR" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1631751726771, "user_tz": 420, "elapsed": 13859, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09042963316942946918"}} outputId="8db9c879-cf49-4de1-d1c8-630777683c3d" # !pip install neurokit2 # !pip install mne # !pip install pandas==1.1.5 # + id="uaUD4dRDZnCY" import time import numpy as np import pandas as pd import matplotlib import neurokit2 as nk import mne import matplotlib.pyplot as plt import os import random #from pylsl import StreamInfo, StreamOutlet, resolve_stream, StreamInlet from sklearn.cross_decomposition import CCA from scipy import signal 
from scipy.signal import butter, lfilter from scipy.fft import fft, fftfreq, ifft import pickle # %matplotlib inline plt.rcParams['figure.figsize'] = [15, 9] # + [markdown] id="547CRw1mckKH" # ## **Offline P data visualization and processing** # + id="J7irTzpAca0G" colab={"base_uri": "https://localhost:8080/", "height": 643} executionInfo={"status": "ok", "timestamp": 1631751732845, "user_tz": 420, "elapsed": 2075, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09042963316942946918"}} outputId="4f4d1e67-9d75-4a7d-c1da-72f9ca6269d3" data = pd.read_csv('/content/drive/MyDrive/YOURPATH/Data/Temp-RAW-2021-09-14_15-11-04.txt',header=4 ,sep=r'\s*,\s*',engine='python') data.columns = ["Sample Index", "EMG Channel 0", "EMG Channel 1", "EMG Channel 2", "EMG Channel 3", "EOG Channel 0", "EOG Channel 1", "EEG Channel 0", "EEG Channel 1", "EEG Channel 2", "EEG Channel 3", "EEG Channel 4", "EEG Channel 5", "EEG Channel 6", "EEG Channel 7", "EEG Channel 8", "EEG Channel 9", "PPG Channel 0", "PPG Channel 1", "EDA_Channel_0", "Other", "Raw PC Timestamp", "Raw Device Timestamp", "Other.1", "Timestamp", "Marker", "Timestamp (Formatted)"] data # + id="YE1lro228Bd-" #Collect and process PPG temp =data["Other"] temp temp_signal = nk.as_vector(temp) # Extract the only column as a vector # + id="6mULLX4Ba57x" def temp_process(temp_signal, sampling_rate=50, **kwargs): temp_signal=nk.as_vector(temp_signal) temp_avg = [np.mean(temp_signal)] * len(temp_signal) info = {'sampling_rate':sampling_rate} # Add sampling rate in dict info signals = pd.DataFrame( {"Temp_Raw": temp_signal, "Temp_Average": temp_avg} ) return signals # + id="rw5z2dAvcYsa" temp_processed = temp_process(temp_signal=temp,sampling_rate= 50) temperature = nk.ppg_process(temp_signal, sampling_rate=50) # + colab={"base_uri": "https://localhost:8080/"} id="VoaWv9R4kBGW" executionInfo={"status": "ok", "timestamp": 1631751738658, "user_tz": 420, "elapsed": 97, 
# + id="VoaWv9R4kBGW"
type(temp_processed)

# + id="uGx94Jp3XjHA"
def temp_plot(temp_signals, sampling_rate=None):
    """Visualize temperature (Temp) data.

    Parameters
    ----------
    temp_signals : DataFrame
        DataFrame obtained from `temp_process()`.
    sampling_rate : int
        The sampling frequency of the signal (in Hz, i.e., samples/second).
        Needs to be supplied if the data should be plotted over time in
        seconds; otherwise the data is plotted over samples. Defaults to None.

    Returns
    -------
    fig
        Matplotlib figure with the plotted temperature signal.
    """
    # X-axis
    if sampling_rate is not None:
        x_axis = np.linspace(0, temp_signals.shape[0] / sampling_rate, temp_signals.shape[0])
    else:
        x_axis = np.arange(0, temp_signals.shape[0])

    # Prepare figure
    fig, ax1 = plt.subplots(nrows=1, ncols=1, sharex=True)
    if sampling_rate is not None:
        ax1.set_ylabel("Degrees (Celsius)")
        ax1.set_xlabel("Time (seconds)")
    else:
        ax1.set_xlabel("Samples")
    fig.suptitle("Temperature from Photoplethysmogram (tPPG)", fontweight="bold")
    plt.subplots_adjust(hspace=0.4)

    # Plot cleaned and raw temperature data
    # ax0.set_title("Raw and Average Signal")
    # ax0.plot(x_axis, temp_signals["Temp_Raw"], color="#B0BEC5", label="Raw", zorder=1)
    # ax0.plot(x_axis, temp_signals["Temp_Average"], color="#FB1CF0", label="Average", zorder=1, linewidth=1.5)
    # ax0.legend(loc="upper right")

    # Standard-error band around the raw signal
    y_err = x_axis.std() * np.sqrt(1 / len(x_axis)
                                   + (x_axis - x_axis.mean())**2
                                   / np.sum((x_axis - x_axis.mean())**2))

    ax1.set_title("Temperature from Photoplethysmogram (tPPG)")
    temp_rate_mean = temp_signals["Temp_Raw"].mean()
    ax1.plot(x_axis, temp_signals["Temp_Raw"], color="#a9bdc7", label="Raw", linewidth=1.5)
    ax1.fill_between(x_axis, temp_signals["Temp_Raw"] - y_err, temp_signals["Temp_Raw"] + y_err,
                     alpha=0.3, color='#e4e9ed')
    ax1.axhline(y=temp_rate_mean, label="Mean", linestyle="--", color="#FB1CF0")
    ax1.plot([], [], ' ', label="Skin temperature mean: %s°C" % (round(temp_rate_mean, 2)))
    ax1.legend(loc="upper right")
    return fig

# + id="k-C6mQe1YbWy"
plt.rcParams['figure.figsize'] = [10, 5]
path = '/content/drive/MyDrive/YOURPATH/SignalValidation/Figures/'
image_format = 'eps'  # e.g. .png, .svg, etc.
image_name = 'galea_temperature.eps'
fig = temp_plot(temp_processed[100:], 50)
fig.savefig(path + image_name, format=image_format, dpi=1200)

# + id="AF6euriur05o"
y_err = x_axis.std() * np.sqrt(1 / len(x_axis)
                               + (x_axis - x_axis.mean())**2
                               / np.sum((x_axis - x_axis.mean())**2))
ax1.plot(x_axis, temp_signals["Temp_Raw"], color="#FB661C", label="Rate", linewidth=1.5)
ax1.fill_between(x_axis, temp_signals["Temp_Raw"] - y_err, temp_signals["Temp_Raw"] + y_err, alpha=0.2)

# + id="b3W7et19rBNU"
N = 21
x = np.linspace(0, 10, 11)
y = [3.9, 4.4, 10.8, 10.3, 11.2, 13.1, 14.1, 9.9, 13.9, 15.1, 12.5]

# fit a linear curve and estimate its y-values and their error
a, b = np.polyfit(x, y, deg=1)
y_est = a * x + b
y_err = x.std() * np.sqrt(1 / len(x) + (x - x.mean())**2 / np.sum((x - x.mean())**2))

fig, ax = plt.subplots()
ax.plot(x, y_est, '-')
ax.fill_between(x, y_est - y_err, y_est + y_err, alpha=0.2)
ax.plot(x, y, 'o', color='tab:brown')

# + id="DQqcYCMFrGcS"
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
## Note: For printing multiple values on one line, put them inside print separated by a space.
## You can follow this syntax for printing the values of two variables val1 and val2 separated by a space:
## print(val1, " ", val2)

n = int(input())
sumEven = 0
sumOdd = 0
while n > 0:
    rem = n % 10
    if rem % 2 == 0:
        sumEven += rem
    else:
        sumOdd += rem
    n = n // 10
print(sumEven, "", sumOdd)
# -
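For testing, the digit loop above can be wrapped in a function (a sketch, not part of the original exercise; `digit_sums` is a hypothetical name):

```python
def digit_sums(n):
    """Return (sum of even digits, sum of odd digits) of a non-negative integer."""
    sum_even, sum_odd = 0, 0
    while n > 0:
        rem = n % 10          # peel off the last digit
        if rem % 2 == 0:
            sum_even += rem
        else:
            sum_odd += rem
        n //= 10
    return sum_even, sum_odd

# e.g. 1234: even digits 2 + 4 = 6, odd digits 1 + 3 = 4
```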
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Computation on Arrays: Broadcasting

# Broadcasting is simply a set of rules for applying binary ufuncs (addition,
# subtraction, multiplication, etc.) on arrays of different sizes.

# Binary operations are performed on an element-by-element basis:

import numpy as np

a = np.array([0, 1, 2])
b = np.array([5, 5, 5])
a + b

# Broadcasting allows these types of binary operations to be performed on arrays of
# different sizes. For example, we can just as easily add a scalar (think of it as a
# zero-dimensional array) to an array:

a + 5

a

M = np.eye(4)
M
M + a

M = np.eye(3)
M + a

M = np.ones((3, 3))
M
M + a

M = np.ones((1, 3))
print(M)
M + a

M = np.ones((3, 1))
print(M)
M + a

M = np.ones((3, 4))
print(M)
M + a

M = np.ones((4, 3))
print(M)
M + a

a = np.arange(3)
b = np.arange(3)[:, np.newaxis]
a
a + b

# ## Rules of Broadcasting
#
# * Rule 1: If the two arrays differ in their number of dimensions, the shape of the
#   one with fewer dimensions is padded with ones on its leading (left) side.
# * Rule 2: If the shape of the two arrays does not match in any dimension, the array
#   with shape equal to 1 in that dimension is stretched to match the other shape.
# * Rule 3: If in any dimension the sizes disagree and neither is equal to 1, an error
#   is raised.
M = np.ones((2, 3))
a = np.arange(3)
print(M.shape)
print(a.shape)

# We see by rule 1 that the array a has fewer dimensions, so we pad it on the left with ones:
#     M.shape -> (2, 3)
#     a.shape -> (1, 3)
#
# By rule 2, we now see that the first dimension disagrees, so we stretch this dimension to match:
#     M.shape -> (2, 3)
#     a.shape -> (2, 3)

M + a

# _____________________

a = np.arange(3).reshape((3, 1))
b = np.arange(3)
a
b

# Again, we'll start by writing out the shapes of the arrays:
print(a.shape)
print(b.shape)

# Rule 1 says we must pad the shape of b with ones:
#     a.shape -> (3, 1)
#     b.shape -> (1, 3)
#
# And rule 2 tells us that we upgrade each of these ones to match the corresponding
# size of the other array:
#     a.shape -> (3, 3)
#     b.shape -> (3, 3)

a + b

# ____________________________

M = np.ones((3, 2))
a = np.arange(3)
print(M.shape)
print(a.shape)

# This is just a slightly different situation than in the first example: the matrix M
# is transposed. How does this affect the calculation? The shapes of the arrays are:
#     M.shape = (3, 2)
#     a.shape = (3,)
#
# Again, rule 1 tells us that we must pad the shape of a with ones:
#     M.shape -> (3, 2)
#     a.shape -> (1, 3)
#
# By rule 2, the first dimension of a is stretched to match that of M:
#     M.shape -> (3, 2)
#     a.shape -> (3, 3)
#
# Now we hit rule 3: the final shapes do not match, so these two arrays are
# incompatible, as we can observe by attempting this operation:

M + a

# Note the potential confusion here: you could imagine making a and M compatible by,
# say, padding a's shape with ones on the right rather than the left. But this is not
# how the broadcasting rules work! That sort of flexibility might be useful in some
# cases, but it would lead to potential areas of ambiguity. If right-side padding is
# what you'd like, you can do this explicitly by reshaping the array:

a[:, np.newaxis].shape
M + a[:, np.newaxis]
M
a

# Also note that while we've been focusing on the + operator here, these broadcasting
# rules apply to any binary ufunc.
# For example, here is the logaddexp(a, b) function, which computes
# log(exp(a) + exp(b)) with more precision than the naive approach:

np.logaddexp(M, a[:, np.newaxis])

# ### Broadcasting in Practice
#
# Broadcasting operations form the core of many examples we'll see throughout this
# book. We'll now take a look at a couple of simple examples of where they can be useful.

# ### Centering an array

X = np.random.random((10, 3))
X
Xmean = X.mean(0)
Xmean
Xmeanr = X.mean(1)
Xmeanr
X_centered = X - Xmean
X_centered

# # Plotting a two-dimensional function

# x and y have 50 steps from 0 to 5
x = np.linspace(0, 5, 50)
y = np.linspace(0, 5, 50)[:, np.newaxis]
z = np.sin(x) ** 10 + np.cos(10 + y * x) * np.cos(x)

# %matplotlib inline
import matplotlib.pyplot as plt

plt.imshow(z, origin='lower', extent=[0, 5, 0, 5], cmap='viridis')
plt.colorbar();

(y * x).shape
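As a standalone sanity check of the centering example above (a hypothetical snippet, not from the book), the column means of the centered array should come out numerically zero:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((10, 3))
Xmean = X.mean(0)        # shape (3,): one mean per column
X_centered = X - Xmean   # (10, 3) minus (3,) broadcasts across rows
```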
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: dem_stitcher
#     language: python
#     name: dem_stitcher
# ---

# %load_ext autoreload
# %autoreload 2

from dem_stitcher.stitcher import stitch_dem_for_isce2
import rasterio
import matplotlib.pyplot as plt
from pathlib import Path

site = 'hawaii'      # 'bay_area' or 'aleutian' or 'odessa'
dem_name = 'glo_30'  # 'ned1' or 'tdx_30'
bounds = [-157.0, 18.6, -154.6, 20.7]

dst_dir = Path(f'{site}_dem')
dst_dir.mkdir(exist_ok=True)
dst_path = dst_dir / f'{dem_name}.wgs84'

path = stitch_dem_for_isce2(bounds, dem_name, dst_path=dst_path, nodata=0)

with rasterio.open(path) as ds:
    X = ds.read(1)

plt.imshow(X)
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Collapsed Gibbs sampler for Generalized Relational Topic Models with Data Augmentation # # <div style="display:none"> # $ # \DeclareMathOperator{\dir}{Dirichlet} # \DeclareMathOperator{\dis}{Discrete} # \DeclareMathOperator{\normal}{Normal} # \DeclareMathOperator{\ber}{Bernoulli} # \DeclareMathOperator{\diag}{diag} # \DeclareMathOperator{\Betaf}{B} # \DeclareMathOperator{\Gammaf}{\Gamma} # \DeclareMathOperator{\PG}{PG} # \DeclareMathOperator{\v}{vec} # \newcommand{\norm}[1]{\left\| #1 \right\|} # \newcommand{\cp}[2]{p \left( #1 \middle| #2 \right)} # \newcommand{\cN}[2]{\mathscr{N} \left( #1 \middle| #2 \right)} # \newcommand{\cpsi}[2]{\psi \left( #1 \middle| #2 \right)} # \newcommand{\cPsi}[2]{\Psi \left( #1 \middle| #2 \right)} # \newcommand{\etd}[1]{\mathbf{z}^{(#1)}} # \newcommand{\etdT}[1]{\left. \mathbf{z}^{(#1)} \right.^T} # \newcommand{\Etd}[2]{\mathbf{z}^{(#1, #2)}} # \newcommand{\sumetd}{\mathbf{z}} # \newcommand{\one}{\mathbf{1}} # \newcommand{\Eta}{H} # \newcommand{\eHe}{\etdT{d} \Eta \etd{d'}} # $ # </div> # # Here is the collapsed Gibbs sampler for Chen et al.'s [generalized relational topic models with data augmentation](http://ijcai.org/papers13/Papers/IJCAI13-192.pdf). I am building on the [collapsed Gibbs sampler](http://nbviewer.savvysherpa.com/github/gp-0058-clt-at-toys-r-us/relational-topic-models/blob/master/blslda.ipynb) I wrote for binary logistic supervised latent Dirichlet allocation. 
# # The generative model for RTMs is as follows: # # $$\begin{align} # \theta^{(d)} &\sim \dir(\alpha) &\text{(topic distribution for document $d \in \{1, \ldots, D\}$)} \\ # \phi^{(k)} &\sim \dir(\beta) &\text{(term distribution for topic $k \in \{1, \ldots, K\}$)} \\ # z_n^{(d)} \mid \theta^{(d)} &\sim \dis \left( \theta^{(d)} \right) &\text{(topic of $n$th token of document $d$, $n \in \{1, \ldots, N^{(d)}\}$)} \\ # w_n^{(d)} \mid \phi^{(z_n^{(d)})} &\sim \dis \left( \phi^{(z_n^{(d)})} \right) &\text{(term of $n$th token of document $d$, $n \in \{1, \ldots, N^{(d)}\}$)} \\ # \Eta_{k, k'} &\sim \normal \left( \mu, \nu^2 \right) &\text{(regression coefficients for topic pairs $k, k' \in \{1, \ldots, K\}$)} \\ # y^{(d, d')} \mid \Eta, \etd{d}, \etd{d'} &\sim \ber \left( # \frac{ \exp \left( \eHe \right) }{ 1 + \exp \left( \eHe \right) } \right) # &\text{(link indicator for documents $d, d' \in \{1, \ldots, D\}$)} # \end{align}$$ # # where each token can be any one of $V$ terms in our vocabulary, $\etd{d}$ is the empirical topic distribution of document $d$, and $\circ$ is the Hadamard (element-wise) product. # # <img src="http://yosinski.com/mlss12/media/slides/MLSS-2012-Blei-Probabilistic-Topic-Models_084.png" width="600"> # # <p style='text-align: center; font-style: italic;'> # Plate notation for relational topic models. # <br/> # This diagram should replace $\beta_k$ with $\phi^{(k)}$, and each $\phi^{(k)}$ should be dependent on a single $\beta$. # </p> # # Following [Chen et al. 
2013](http://ijcai.org/papers13/Papers/IJCAI13-192.pdf), the regularized pseudo-likelihood for the link variable $y^{(d, d')}$, with regularization parameter $b \ge 0$, can be written # # $$\begin{align} # \cpsi{y^{(d, d')}}{\Eta, \etd{d}, \etd{d'}, b} # &= \cp{y^{(d, d')}}{\Eta, \etd{d}, \etd{d'}}^b # \\ &= \left( \frac{\exp \left( \eHe \right)^{y^{(d, d')}}}{ 1 + \exp \left( \eHe \right)} \right)^b # \\ &= \frac{\exp \left( b y^{(d, d')} \eHe \right)} # { \left( \exp \left( -\frac{\eHe}{2} \right) + \exp \left( \frac{\eHe}{2} \right) \right)^b \exp \left( \frac{b}{2} \eHe \right) } # \\ &= 2^{-b} \exp \left( b \left( y^{(d, d')} - \frac{1}{2} \right) \left( \eHe \right) \right) \cosh \left( \frac{ \eHe }{2} \right)^{-b} # \\ &= 2^{-b} \exp \left( b \left( y^{(d, d')} - \frac{1}{2} \right) \left( \eHe \right) \right) # \int_0^\infty \exp \left( -\frac{ \left( \eHe \right)^2 }{2} \omega^{(d, d')} \right) # \cp{\omega^{(d, d')}}{b, 0} d\omega^{(d, d')} # \end{align}$$ # # where $\omega^{(d, d')}$ is a Polya-Gamma distributed variable with parameters $b = b$ and $c = 0$ (see [Polson et al. 2012](http://arxiv.org/pdf/1205.0310v3.pdf) for details). This means that, for each pair of documents $d$ and $d'$, the pseudo-likelihood of $y^{(d, d')}$ is actually a mixture of Gaussians with respect to the Polya-Gamma distribution $\PG(b, 0)$. Therefore, the joint pseudo-likelihood of $y^{(d, d')}$ and $\omega^{(d, d')}$ can be written # # $$\cPsi{y^{(d, d')}, \omega^{(d, d')}}{\Eta, \etd{d}, \etd{d'}, b} # = 2^{-b} \exp \left( \kappa^{(d, d')} \zeta^{(d, d')} - \frac{ \omega^{(d, d')} }{2} (\zeta^{(d, d')})^2 \right) \cp{\omega^{(d, d')}}{b, 0}.$$ # # where $\kappa^{(d, d')} = b(y^{(d, d')} - 1/2)$ and $\zeta^{(d, d')} = \eHe$. 
The joint probability distribution can therefore be factored as follows: # # $$\begin{align} # \cp{\theta, \phi, z, w, \Eta, y, \omega}{\alpha, \beta, \mu, \nu^2, b} # &= # \prod_{k=1}^{K} \cp{\phi^{(k)}}{\beta} # \prod_{d=1}^{D} \cp{\theta^{(d)}}{\alpha} # \prod_{n=1}^{N^{(d)}} \cp{z_n^{(d)}}{\theta^{(d)}} \cp{w_n^{(d)}}{\phi^{(z_n^{(d)})}} # \\ & \quad \times \prod_{k_1=1}^{K} \prod_{k_2=1}^{K} \cp{\Eta_{k_1, k_2}}{\mu, \nu^2} # \prod_{d_1=1}^D \prod_{\substack{d_2=1 \\ d_2 \neq d_1}}^D \cPsi{y^{(d_1, d_2)}, \omega^{(d_1, d_2)}}{\Eta, \etd{d_1}, \etd{d_2}, b} # \\ &= # \prod_{k=1}^{K} \frac{\Betaf(b^{(k)} + \beta)}{\Betaf(\beta)} \cp{\phi^{(k)}}{b^{(k)} + \beta} # \prod_{d=1}^{D} \frac{\Betaf(a^{(d)} + \alpha)}{\Betaf(\alpha)} \cp{\theta^{(d)}}{a^{(d)} + \alpha} # \\ &\quad \times # \prod_{k_1=1}^{K} \prod_{k_2=1}^{K} \cN{\Eta_{k_1, k_2}}{\mu, \nu^2} # \prod_{d_1=1}^D \prod_{\substack{d_2=1 \\ d_2 \neq d_1}}^D 2^{-b} \exp \left( \kappa^{(d_1, d_2)} \zeta^{(d_1, d_2)} - \frac{ \omega^{(d_1, d_2)} }{2} (\zeta^{(d_1, d_2)})^2 \right) \cp{\omega^{(d_1, d_2)}}{b, 0} # \end{align}$$ # # where $a_k^{(d)}$ is the number of tokens in document $d$ assigned to topic $k$, $b_v^{(k)}$ is the number of tokens equal to term $v$ and assigned to topic $k$, and $\Betaf$ is the [multivariate Beta function](https://en.wikipedia.org/wiki/Beta_function#Multivariate_beta_function). 
Marginalizing out $\theta$ and $\phi$ by integrating with respect to each $\theta^{(d)}$ and $\phi^{(k)}$ over their respective sample spaces yields # # $$\begin{align} # \cp{z, w, \Eta, y, \omega}{\alpha, \beta, \mu, \nu^2, b} &= # \prod_{k=1}^{K} \frac{\Betaf(b^{(k)} + \beta)}{\Betaf(\beta)} # \prod_{d=1}^{D} \frac{\Betaf(a^{(d)} + \alpha)}{\Betaf(\alpha)} # \\ &\quad\quad \times \prod_{k_1=1}^{K} \prod_{k_2=1}^{K} \cN{\Eta_{k_1, k_2}}{\mu, \nu^2} # \prod_{d_1=1}^D \prod_{\substack{d_2=1 \\ d_2 \neq d_1}}^D 2^{-b} \exp \left( \kappa^{(d_1, d_2)} \zeta^{(d_1, d_2)} - \frac{ \omega^{(d_1, d_2)} }{2} (\zeta^{(d_1, d_2)})^2 \right) \cp{\omega^{(d_1, d_2)}}{b, 0} # \\ &= # \cp{w}{z, \beta} \cp{z}{\alpha} \cp{\Eta}{\mu, \nu^2} \cPsi{y, \omega}{\Eta, z, b}. # \end{align}$$ # # See my [LDA notebook](http://nbviewer.savvysherpa.com/github/bearnshaw/ml-demos/blob/master/lda_gibbs_sampling_cython.ipynb) for step-by-step details of the previous two calculations. # # Our goal is to calculate the posterior distribution # # $$\cp{z, \Eta, \omega}{w, y, \alpha, \beta, \mu, \nu^2, b} = # \frac{\cp{z, w, \Eta, y, \omega}{\alpha, \beta, \mu, \nu^2, b}} # {\sum_{z'} \iint \cp{z', w, \Eta', y, \omega{'}}{\alpha, \beta, \mu, \nu^2, b} d\Eta' d\omega{'}}$$ # # in order to infer the topic assignments $z$ and regression coefficients $\Eta$ from the given term assignments $w$ and link data $y$. Since calculating this directly is infeasible, we resort to collapsed Gibbs sampling. The sampler is "collapsed" because we marginalized out $\theta$ and $\phi$, and will estimate them from the topic assignments $z$: # # $$\hat\theta_k^{(d)} = \frac{a_k^{(d)} + \alpha_k}{\sum_{k'=1}^K \left(a_{k'}^{(d)} + \alpha_{k'} \right)},\quad # \hat\phi_v^{(k)} = \frac{b_v^{(k)} + \beta_v}{\sum_{v'=1}^V \left(b_{v'}^{(k)} + \beta_{v'} \right)}.$$ # # Gibbs sampling requires us to compute the full conditionals for each $z_n^{(d)}$, $\omega^{(d, d')}$ and $\Eta_{k, k'}$. 
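The point estimates $\hat\theta$ and $\hat\phi$ above are just smoothed, normalized counts. A minimal NumPy sketch, with made-up count matrices and a hypothetical helper name `estimate_theta_phi`:

```python
import numpy as np

def estimate_theta_phi(ndk, nkv, alpha, beta):
    """Smoothed point estimates of theta (doc-topic) and phi (topic-term).

    ndk   : (D, K) tokens in document d assigned to topic k  (a_k^{(d)})
    nkv   : (K, V) tokens equal to term v assigned to topic k (b_v^{(k)})
    alpha : (K,) Dirichlet prior on topic proportions
    beta  : (V,) Dirichlet prior on term proportions
    """
    theta = (ndk + alpha) / (ndk + alpha).sum(axis=1, keepdims=True)
    phi = (nkv + beta) / (nkv + beta).sum(axis=1, keepdims=True)
    return theta, phi

# made-up counts: D=2 documents, K=2 topics, V=3 terms
ndk = np.array([[3., 1.], [0., 4.]])
nkv = np.array([[2., 1., 1.], [0., 3., 1.]])
theta, phi = estimate_theta_phi(ndk, nkv, alpha=np.ones(2), beta=0.1 * np.ones(3))
# e.g. theta[0] = (3 + 1, 1 + 1) / 6 = (2/3, 1/3)
```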
For example, we need to calculate, for all $n$, $d$ and $k$, # # $$\begin{align} # \cp{z_n^{(d)} = k}{z \setminus z_n^{(d)}, w, H, y, \omega, \alpha, \beta, \mu, \nu^2, b} # &\propto # \cp{z_n^{(d)} = k, z \setminus z_n^{(d)}, w, H, y, \omega}{\alpha, \beta, \mu, \nu^2, b} # \\ &\propto # \frac{b_{w_n^{(d)}}^{(k)} \setminus z_n^{(d)} + \beta_{w_n^{(d)}}}{ \sum_{v=1}^V \left( b_v^{(k)} \setminus z_n^{(d)} + \beta_v\right)} # \left( a_k^{(d)} \setminus z_n^{(d)} + \alpha_k \right) # \prod_{d_1=1}^{D} \prod_{\substack{d_2=1 \\ d_2 \neq d_1}}^{D} \exp \left( \kappa^{(d_1, d_2)} \zeta^{(d_1, d_2)} - \frac{ \omega^{(d_1, d_2)} }{2} (\zeta^{(d_1, d_2)})^2 \right) # \\ &\propto # \frac{b_{w_n^{(d)}}^{(k)} \setminus z_n^{(d)} + \beta_{w_n^{(d)}}}{ \sum_{v=1}^V \left( b_v^{(k)} \setminus z_n^{(d)} + \beta_v\right)} # \left( a_k^{(d)} \setminus z_n^{(d)} + \alpha_k \right) # \\ &\quad\quad\times # \exp \left( \sum_{\substack{d_1=1 \\ d_1 \neq d}}^{D} \left[ \left( \kappa^{(d_1, d)} - \omega^{(d_1, d)} ( \zeta^{(d_1, d)} \setminus z_n^{(d)}) \right) \frac{H_{:, k}^T \etd{d_1}}{N^{(d)}} # - \frac{ \omega^{(d_1, d)} }{2} \left( \frac{H_{:, k}^T \etd{d_1}}{N^{(d)}} \right)^2 \right] \right. # \\ &\quad\quad\quad\quad + # \left. \sum_{\substack{d_2=1 \\ d_2 \neq d}}^{D} \left[ \left( \kappa^{(d, d_2)} - \omega^{(d, d_2)} ( \zeta^{(d, d_2)} \setminus z_n^{(d)}) \right) \frac{H_{k, :} \etd{d_2}}{N^{(d)}} # - \frac{ \omega^{(d, d_2)} }{2} \left( \frac{H_{k, :} \etd{d_2}}{N^{(d)}} \right)^2 \right] \right) # \end{align}$$ # # where the "set-minus" notation $\cdot \setminus z_n^{(d)}$ denotes the variable the notation is applied to with the entry $z_n^{(d)}$ removed (again, see my [LDA notebook](http://nbviewer.savvysherpa.com/github/bearnshaw/ml-demos/blob/master/lda_gibbs_sampling_cython.ipynb) for details). 
This final proportionality is true since # # $$\begin{align} # \prod_{d_1=1}^{D} \prod_{\substack{d_2=1 \\ d_2 \neq d_1}}^{D} \exp \left( \kappa^{(d_1, d_2)} \zeta^{(d_1, d_2)} - \frac{ \omega^{(d_1, d_2)} }{2} (\zeta^{(d_1, d_2)})^2 \right) # &= # \prod_{d_1=1}^{D} \prod_{\substack{d_2=1 \\ d_2 \neq d_1}}^{D} \exp \left( \kappa^{(d_1, d_2)} \left( \zeta^{(d_1, d_2)} \setminus z_n^{(d)} + \Delta_{d, d_1, d_2}^{(k)} \right) # - \frac{ \omega^{(d_1, d_2)} }{2} \left( \zeta^{(d_1, d_2)} \setminus z_n^{(d)} + \Delta_{d, d_1, d_2}^{(k)} \right)^2 \right) # \\ &\propto # \prod_{\substack{d_1=1 \\ d_1 \neq d}}^{D} \exp \left( \kappa^{(d_1, d)} \left( \zeta^{(d_1, d)} \setminus z_n^{(d)} + \Delta_{d, d_1, d}^{(k)} \right) # - \frac{ \omega^{(d_1, d)} }{2} \left( \zeta^{(d_1, d)} \setminus z_n^{(d)} + \Delta_{d, d_1, d}^{(k)} \right)^2 \right) # \\ &\quad\quad\times # \prod_{\substack{d_2=1 \\ d_2 \neq d}}^{D} \exp \left( \kappa^{(d, d_2)} \left( \zeta^{(d, d_2)} \setminus z_n^{(d)} + \Delta_{d, d, d_2}^{(k)} \right) # - \frac{ \omega^{(d, d_2)} }{2} \left( \zeta^{(d, d_2)} \setminus z_n^{(d)} + \Delta_{d, d, d_2}^{(k)} \right)^2 \right) # \\ &\propto # \exp \left( \sum_{\substack{d_1=1 \\ d_1 \neq d}}^{D} \left[ \left( \kappa^{(d_1, d)} - \omega^{(d_1, d)} ( \zeta^{(d_1, d)} \setminus z_n^{(d)}) \right) \frac{H_{:, k}^T (\etd{d_1} \setminus z_n^{(d)})}{N^{(d)}} # - \frac{ \omega^{(d_1, d)} }{2} \left( \frac{H_{:, k}^T (\etd{d_1} \setminus z_n^{(d)})}{N^{(d)}} \right)^2 \right] \right. # \\ &\quad\quad + # \left. 
\sum_{\substack{d_2=1 \\ d_2 \neq d}}^{D} \left[ \left( \kappa^{(d, d_2)} - \omega^{(d, d_2)} ( \zeta^{(d, d_2)} \setminus z_n^{(d)}) \right) \frac{H_{k, :} (\etd{d_2} \setminus z_n^{(d)})}{N^{(d)}} # - \frac{ \omega^{(d, d_2)} }{2} \left( \frac{H_{k, :} (\etd{d_2} \setminus z_n^{(d)})}{N^{(d)}} \right)^2 \right] \right) # \\ &= # \exp \left( \sum_{\substack{d_1=1 \\ d_1 \neq d}}^{D} \left[ \left( \kappa^{(d_1, d)} - \omega^{(d_1, d)} ( \zeta^{(d_1, d)} \setminus z_n^{(d)}) \right) \frac{H_{:, k}^T \etd{d_1}}{N^{(d)}} # - \frac{ \omega^{(d_1, d)} }{2} \left( \frac{H_{:, k}^T \etd{d_1}}{N^{(d)}} \right)^2 \right] \right. # \\ &\quad\quad + # \left. \sum_{\substack{d_2=1 \\ d_2 \neq d}}^{D} \left[ \left( \kappa^{(d, d_2)} - \omega^{(d, d_2)} ( \zeta^{(d, d_2)} \setminus z_n^{(d)}) \right) \frac{H_{k, :} \etd{d_2}}{N^{(d)}} # - \frac{ \omega^{(d, d_2)} }{2} \left( \frac{H_{k, :} \etd{d_2}}{N^{(d)}} \right)^2 \right] \right) # \end{align}$$ # # where # # $$\Delta_{d, d_1, d_2}^{(k)} = \delta_{d, d_1} \frac{H_{k, :} (\etd{d_2} \setminus z_n^{(d)})}{N^{(d)}} + \delta_{d, d_2} \frac{H_{:, k}^T (\etd{d_1} \setminus z_n^{(d)})}{N^{(d)}},$$ # # $\delta_{d, d'}$ is the [Kronecker delta](https://en.wikipedia.org/wiki/Kronecker_delta), and $H_{k, :}$ and $H_{:, k}$ are the $k$th row and column of $H$, respectively. The first proportionality is a result of the fact that $\Delta_{d, d_1, d_2}^{(k)}$ is nonzero only when $d = d_1$ or $d = d_2$. The last equality follows from the fact that $d \neq d_1$ in the first summation and $d \neq d_2$ in the second. 
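Dropping the link term, the remaining factors in the full conditional give the familiar collapsed-Gibbs update for plain LDA; the GRTM update multiplies these weights by the exponential link factor derived above. A hypothetical sketch (`resample_z` is an illustrative name, with tiny made-up counts):

```python
import numpy as np

def resample_z(d, n, z, w, ndk, nkv, alpha, beta, rng):
    """One collapsed-Gibbs update for z_n^{(d)} in plain LDA.

    z   : (D, N) current topic assignments
    w   : (D, N) term indices
    ndk : (D, K) doc-topic counts
    nkv : (K, V) topic-term counts
    """
    k_old, v = z[d, n], w[d, n]
    # remove the token from the counts (the "set-minus" step)
    ndk[d, k_old] -= 1
    nkv[k_old, v] -= 1
    # unnormalized full-conditional weights over topics
    weights = (nkv[:, v] + beta[v]) / (nkv + beta).sum(axis=1) * (ndk[d] + alpha)
    k_new = rng.choice(len(weights), p=weights / weights.sum())
    # put the token back under its new topic
    z[d, n] = k_new
    ndk[d, k_new] += 1
    nkv[k_new, v] += 1
    return k_new

# tiny made-up state: D=1 document, N=4 tokens, K=2 topics, V=3 terms
rng = np.random.default_rng(0)
w = np.array([[0, 1, 2, 1]])
z = np.array([[0, 1, 0, 1]])
ndk = np.array([[2, 2]])
nkv = np.array([[1, 0, 1], [0, 2, 0]])
resample_z(0, 2, z, w, ndk, nkv, np.ones(2), 0.1 * np.ones(3), rng)
```

Whatever topic is drawn, the update leaves the count matrices consistent: both still sum to the number of tokens.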
# # In order to calculate the full conditional for $H$, let $\eta = (H_{:,1}^T \cdots H_{:, K}^T)^T$ be the vector of concatenated columns of $H$, $Z = (\etd{1, 1} \cdots \etd{D, D})$ be the matrix whose columns are the vectors $\etd{d, d'} = \etd{d'} \otimes \etd{d}$, where $\otimes$ is the [Kronecker product](https://en.wikipedia.org/wiki/Kronecker_product), $\Omega = \diag(\omega^{(1,1)}, \ldots, \omega^{(D,D)})$ be the diagonal matrix whose diagonal entries are $\omega^{(d, d')}$, $I$ be the identity matrix, and $\one$ be the vector of ones, and note that # # $$\prod_{k_1=1}^{K} \prod_{k_2=1}^{K} \cN{H_{k_1, k_2}}{\mu, \nu^2} = \cN{\eta}{\mu \one, \nu^2 I}$$ # $$\prod_{d_1=1}^{D} \prod_{\substack{d_2=1 \\ d_2 \neq d_1}}^{D} \exp \left( \kappa^{(d_1, d_2)} \zeta^{(d_1, d_2)} - \frac{ \omega^{(d_1, d_2)} }{2} (\zeta^{(d_1, d_2)})^2 \right) # = \exp \left( \eta^T Z \kappa - \frac{1}{2} \eta^T Z \Omega Z^T \eta \right) # \propto \cN{\eta}{(Z \Omega Z^T)^{-1} Z \kappa, (Z \Omega Z^T)^{-1}}.$$ # # Therefore # # $$\begin{align} # \cp{\eta}{z, w, y, \omega, \alpha, \beta, \mu, \nu^2, b} # &\propto # \cp{z, w, \eta, y, \omega}{\alpha, \beta, \mu, \nu^2, b} # \\ &\propto # \cN{\eta}{\mu \one, \nu^2 I} \cN{\eta}{(Z \Omega Z^T)^{-1} Z \kappa, (Z \Omega Z^T)^{-1}} # \\ &\propto # \cN{\eta}{\Sigma \left( \frac{\mu}{\nu^2} \one + Z \kappa \right), \Sigma} # \end{align}$$ # # where $\Sigma^{-1} = \nu^{-2} I + Z \Omega Z^T$ (see Section 8.1.8 of the [Matrix Cookbook](http://www.math.uwaterloo.ca/~hwolkowi/matrixcookbook.pdf)). # # We also need to calculate the full conditional for $\omega$. 
We calculate # # $$\begin{align} # \cp{\omega}{z, w, H, y, \alpha, \beta, \mu, \nu^2, b} # &\propto # \cp{z, w, H, y, \omega}{\alpha, \beta, \mu, \nu^2, b} # \\ &\propto # \prod_{d_1=1}^{D} \prod_{\substack{d_2=1 \\ d_2 \neq d_1}}^{D} \exp \left( - \frac{ \omega^{(d_1, d_2)} }{2} (\zeta^{(d_1, d_2)})^2 \right) \cp{\omega^{(d_1, d_2)}}{b, 0} # \\ &= # \prod_{d_1=1}^{D} \prod_{\substack{d_2=1 \\ d_2 \neq d_1}}^{D} \cp{\omega^{(d_1, d_2)}}{b, \zeta^{(d_1, d_2)}} # \end{align}$$ # # that is, $\omega^{(d_1, d_2)} \sim \PG(b, \eHe)$ for each pair of documents $d_1$ and $d_2$. We sample from the Polya-Gamma distribution according to the method of [Polson et al. 2012](http://arxiv.org/pdf/1205.0310.pdf), implemented for Python 3 in this [code repo](https://github.com/Savvysherpa/pypolyagamma). # ## Graphical test # + # %matplotlib inline from modules.helpers import plot_images from functools import partial from sklearn.metrics import (roc_auc_score, roc_curve) import seaborn as sns import matplotlib.pyplot as plt import numpy as np import pandas as pd imshow = partial(plt.imshow, cmap='gray', interpolation='nearest', aspect='auto') sns.set(style='white') # - # ### Generate topics # # We assume a vocabulary of 25 terms, and create ten "topics", where each topic assigns exactly 5 consecutive terms equal probability. 
V = 25
K = 10
N = 100
D = 1000

topics = []
topic_base = np.concatenate((np.ones((1, 5)) * 0.2, np.zeros((4, 5))), axis=0).ravel()
for i in range(5):
    topics.append(np.roll(topic_base, i * 5))
topic_base = np.concatenate((np.ones((5, 1)) * 0.2, np.zeros((5, 4))), axis=1).ravel()
for i in range(5):
    topics.append(np.roll(topic_base, i))
topics = np.array(topics)

plt.figure(figsize=(10, 5))
plot_images(plt, topics, (5, 5), layout=(2, 5), figsize=(10, 5))

# ### Generate documents from topics
#
# We generate 1,000 documents from these 10 topics by sampling 1,000 topic
# distributions, one for each document, from a Dirichlet distribution with
# parameter $\alpha = (1, \ldots, 1)$.

alpha = np.ones(K)

np.random.seed(42)
thetas = np.random.dirichlet(alpha, size=D)
topic_assignments = np.array([np.random.choice(range(K), size=100, p=theta)
                              for theta in thetas])
word_assignments = np.array([[np.random.choice(range(V), size=1,
                                               p=topics[topic_assignments[d, n]])[0]
                              for n in range(N)]
                             for d in range(D)])
doc_term_matrix = np.array([np.histogram(word_assignments[d], bins=V, range=(0, V - 1))[0]
                            for d in range(D)])
imshow(doc_term_matrix)

# ### Generate document network
#
# Create a document network from the documents by applying $\psi$ and applying a
# threshold $\psi_0$.

from itertools import product
from sklearn.cross_validation import StratifiedKFold

# choose parameter values
mu = 0.
nu2 = 1.
np.random.seed(14)
H = np.random.normal(loc=mu, scale=nu2, size=(K, K))
zeta = pd.DataFrame([(i, j, np.dot(np.dot(thetas[i], H), thetas[j]))
                     for i, j in product(range(D), repeat=2)],
                    columns=('tail', 'head', 'zeta'))
_ = zeta.zeta.hist(bins=50)

# choose parameter values
zeta['y'] = (zeta.zeta >= 0).astype(int)

# plot histogram of responses
print('positive examples {} ({:.1f}%)'.format(zeta.y.sum(), zeta.y.sum() / D / D * 100))
_ = zeta.y.hist()

y = zeta[['tail', 'head', 'y']].values
skf = StratifiedKFold(y[:, 2], n_folds=100)
_, train_idx = next(iter(skf))
train_idx.shape

# ### Estimate parameters

from slda.topic_models import GRTM

_K = 10
_alpha = alpha[:_K]
_beta = np.repeat(0.01, V)
_mu = mu
_nu2 = nu2
_b = 1.
n_iter = 500
grtm = GRTM(_K, _alpha, _beta, _mu, _nu2, _b, n_iter, seed=42)

# %%time
grtm.fit(doc_term_matrix, y[train_idx])

plot_images(plt, grtm.phi, (5, 5), (2, 5), figsize=(10, 5))

topic_order = [4, 7, 3, 1, 0, 9, 5, 2, 8]
plot_images(plt, grtm.phi[topic_order], (5, 5), (2, 5), figsize=(10, 5))

burnin = -1
mean_final_lL = grtm.loglikelihoods[burnin:].mean()
print(mean_final_lL)
plt.plot(grtm.loglikelihoods, label='mean final LL {:.2f}'.format(mean_final_lL))
_ = plt.legend()

imshow(grtm.theta)

H_pred = grtm.H[burnin:].mean(axis=0)
_ = plt.hist(H_pred.ravel(), bins=20)
_ = plt.hist(H.ravel(), bins=20)

# ### Predict edges on pairs of test documents

# Create 1,000 test documents using the same generative process as our training documents.
np.random.seed(42^2)
thetas_test = np.random.dirichlet(alpha, size=D)
topic_assignments_test = np.array([np.random.choice(range(K), size=100, p=theta)
                                   for theta in thetas_test])
word_assignments_test = np.array([[np.random.choice(range(V), size=1,
                                                    p=topics[topic_assignments_test[d, n]])[0]
                                   for n in range(N)]
                                  for d in range(D)])
doc_term_matrix_test = np.array([np.histogram(word_assignments_test[d], bins=V, range=(0, V - 1))[0]
                                 for d in range(D)])
imshow(doc_term_matrix_test)

# Learn their topic distributions using the model trained on the training documents,
# then calculate the actual and predicted values of $\psi$. For predicted $\psi$,
# estimate $\eta$ as the mean of our samples of $\eta$ after burn-in.

def bern_param(theta1, theta2, H):
    zeta = np.dot(np.dot(theta1, H), theta2)
    return np.exp(zeta) / (1 + np.exp(zeta))

thetas_test_grtm = grtm.transform(doc_term_matrix_test)
p_test = np.zeros(D * D)
p_test_grtm = np.zeros(D * D)
for n, i in enumerate(product(range(D), range(D))):
    p_test[n] = bern_param(thetas_test[i[0]], thetas_test[i[1]], H)
    p_test_grtm[n] = bern_param(thetas_test_grtm[i[0]], thetas_test_grtm[i[1]], H_pred)

# Measure the goodness of our prediction by the
# [area under the associated ROC curve](https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve).

y_test = (p_test > 0.5).astype(int)
y_grtm = p_test_grtm
fpr, tpr, _ = roc_curve(y_test, y_grtm)
plt.plot(fpr, tpr, label=('AUC = {:.3f}'.format(roc_auc_score(y_test, y_grtm))))
_ = plt.legend(loc='best')

# ### Learn topics, then learn classifier

from slda.topic_models import LDA

lda = LDA(_K, _alpha, _beta, n_iter, seed=42)

# %%time
lda.fit(doc_term_matrix)

plot_images(plt, lda.phi, (5, 5), (1, 5), figsize=(10, 5))
plt.plot(lda.loglikelihoods)
imshow(lda.theta)

from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

# Compute Hadamard products between learned topic distributions for training and test documents.
# %%time
thetas_test_lda = lda.transform(doc_term_matrix_test)
lda_train = np.zeros((D * D, _K * _K))
lda_test = np.zeros((D * D, _K * _K))
for n, i in enumerate(product(range(D), range(D))):
    lda_train[n] = np.kron(lda.theta[i[0]], lda.theta[i[1]])
    lda_test[n] = np.kron(thetas_test_lda[i[0]], thetas_test_lda[i[1]])

# #### Logistic regression
#
# - Train logistic regression on the training data,
# - calculate the probability of an edge for each pair of test documents, and
# - measure the goodness of our prediction by computing the
#   [area under the ROC curve](https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve).

_C_grid = np.arange(1, 202, 10)
roc_auc_scores = []
for _C in _C_grid:
    print('Training Logistic Regression with C = {}'.format(_C))
    _lr = LogisticRegression(fit_intercept=False, C=_C)
    _lr.fit(lda_train, zeta.y)
    _y_lr = _lr.predict_proba(lda_test)[:, 1]
    roc_auc_scores.append(roc_auc_score(y_test, _y_lr))
    print(' roc_auc_score = {}'.format(roc_auc_scores[-1]))
print(_C_grid[np.argmax(roc_auc_scores)], np.max(roc_auc_scores))
plt.plot(_C_grid, roc_auc_scores)

lr = LogisticRegression(fit_intercept=False, C=11)
lr.fit(lda_train, zeta.y)
y_lr = lr.predict_proba(lda_test)[:, 1]
fpr, tpr, _ = roc_curve(y_test, y_lr)
plt.plot(fpr, tpr, label=('AUC = {:.3f}'.format(roc_auc_score(y_test, y_lr))))
_ = plt.legend(loc='best')

# #### Gradient boosted trees
#
# - Train gradient boosted trees on the training data,
# - calculate the probability of an edge for each pair of test documents, and
# - measure the goodness of our prediction by computing the
#   [area under the ROC curve](https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve).
_C_grid = np.arange(1, 4) roc_auc_scores = [] for _C in _C_grid: print('Training Gradient Boosting with max_depth = {}'.format(_C)) _gbc = GradientBoostingClassifier(max_depth=_C) _gbc.fit(lda_train, zeta.y) _y_gbc = _gbc.predict_proba(lda_test)[:, 1] roc_auc_scores.append(roc_auc_score(y_test, _y_gbc)) print(' roc_auc_score = {}'.format(roc_auc_scores[-1])) print(_C_grid[np.argmax(roc_auc_scores)], np.max(roc_auc_scores)) plt.plot(_C_grid, roc_auc_scores) gbc = GradientBoostingClassifier(max_depth=3) gbc.fit(lda_train, zeta.y) y_gbc = gbc.predict_proba(lda_test)[:, 1] fpr_gbc, tpr_gbc, _ = roc_curve(y_test, y_gbc) plt.plot(fpr_gbc, tpr_gbc, label=('AUC = {:.3f}'.format(roc_auc_score(y_test, y_gbc)))) plt.legend(loc='best') # ### Use GRTM topics grtm_train = np.zeros((D * D, _K * _K)) grtm_test = np.zeros((D * D, _K * _K)) for n, i in enumerate(product(range(D), range(D))): grtm_train[n] = np.kron(grtm.theta[i[0]], grtm.theta[i[1]]) grtm_test[n] = np.kron(thetas_test_grtm[i[0]], thetas_test_grtm[i[1]]) _C_grid = np.arange(1, 52, 10) roc_auc_scores = [] for _C in _C_grid: print('Training Logistic Regression with C = {}'.format(_C)) _lr = LogisticRegression(fit_intercept=False, C=_C) _lr.fit(grtm_train, zeta.y) _y_lr = _lr.predict_proba(grtm_test)[:, 1] roc_auc_scores.append(roc_auc_score(y_test, _y_lr)) print(' roc_auc_score = {}'.format(roc_auc_scores[-1])) print(_C_grid[np.argmax(roc_auc_scores)], np.max(roc_auc_scores)) plt.plot(_C_grid, roc_auc_scores) lr = LogisticRegression(fit_intercept=False, C=51) lr.fit(grtm_train, zeta.y) y_lr = lr.predict_proba(grtm_test)[:, 1] fpr, tpr, _ = roc_curve(y_test, y_lr) plt.plot(fpr, tpr, label=('AUC = {:.3f}'.format(roc_auc_score(y_test, y_lr)))) _ = plt.legend(loc='best') # ## Conclusion # # The relational topic model performs better than either logistic regression or gradient boosted trees trained on the LDA topics, but worse than a logistic regression trained on the GRTM topics, which just shows how much better the GRTM topics
are!
examples/grtm.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np from ai.analyzer.weighted_win_likelihood_analyzer import WeightedWinLikelihoodAnalyzer, WeightedState from ai.analyzer.nn_trainer import NNTrainer from ai.games.random_ai_game import RandomAIGame import matplotlib.pyplot as plt # %matplotlib inline # + all_weights = [] games_processed = 0 while games_processed < 10: game = RandomAIGame() moves, winner = game.play() print(f'winner is {winner}') if winner != 0: weights = WeightedWinLikelihoodAnalyzer().analyze_game(moves) all_weights.append(weights) games_processed += 1 weights = [item for sublist in all_weights for item in sublist] history, y_test, predictions = NNTrainer().train(weights) # - plt.plot(history['loss'], linewidth=2, label='Loss') plt.plot(history['acc'], linewidth=2, label='Accuracy') plt.legend(loc='upper right') plt.title('Model Loss/Accuracy') plt.ylabel('Value') plt.xlabel('Epoch') plt.show() # %debug
.ipynb_checkpoints/Untitled-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## GPT-2 # # [Illustrated GPT-2](http://jalammar.github.io/illustrated-gpt2/) # # The GPT-2 is built using transformer decoder blocks.
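# As a minimal illustration of what a decoder block's masked self-attention computes, here is a single-head sketch in NumPy. This is a toy under stated assumptions: real GPT-2 uses learned query/key/value projections, multiple heads, layer normalization, and a feed-forward sublayer, none of which appear here; the point is only the causal mask that stops each position from attending to later positions.

```python
import numpy as np

def causal_self_attention(x):
    """Single-head masked self-attention: each token may attend only
    to itself and to earlier tokens (the decoder-style causal mask)."""
    T, d = x.shape
    scores = x @ x.T / np.sqrt(d)                 # query/key dot products
    mask = np.triu(np.ones((T, T)), k=1)          # 1s strictly above the diagonal
    scores = np.where(mask == 1, -1e9, scores)    # hide future positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ x                            # weighted sum of values

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))                       # 5 tokens, 8-dim embeddings
out = causal_self_attention(x)
print(out.shape)                                  # (5, 8)
```

# Because of the mask, the first output position depends only on the first input token, which is exactly why the model can be trained to predict the next token at every position in parallel.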
GPT-2.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # !ls -la /data/books/machine-learning-for-engineers/ # !ls -la ../../../../data/induction/books/machine-learning-for-engineers/ import numpy as np import scipy.misc import pandas as pd import imageio import matplotlib.pyplot as plt from sklearn import preprocessing from sklearn.metrics import confusion_matrix # + language="markdown" # # Loading a CSV file into a Pandas DataFrame (DF) # - df = pd.read_csv ("/data/books/machine-learning-for-engineers/iris.csv.bz2") #df df = pd.read_csv ("../../../../data/induction/books/machine-learning-for-engineers/iris.csv.bz2") #df print (df.columns) df.head(3) df['Sepal.Length'].head(3) print (df["Sepal.Length"].mean()) print (df["Sepal.Length"].var()) print (df["Sepal.Length"].skew()) print (df["Sepal.Length"].kurtosis()) df['Sepal.Length'].plot.hist() plt.show() # + language="markdown" # # Loading an image # - testimg = imageio.imread("/data/books/machine-learning-for-engineers/blue_jay.jpg") plt.imshow( testimg) testimg.shape testimg = imageio.imread("../../../../data/induction/books/machine-learning-for-engineers/blue_jay.jpg") plt.imshow( testimg) testimg.shape # + plt.subplot(131) plt.imshow( testimg[:,:,0], cmap="Reds") plt.title("Red channel") plt.subplot(132) plt.imshow( testimg[:,:,1], cmap="Greens") plt.title("Green channel") plt.subplot(133) plt.imshow( testimg[:,:,2], cmap="Blues") plt.title("Blue channel") # + language="markdown" # # Data Preprocessing # ## Normalization # + df = pd.read_csv("/data/books/machine-learning-for-engineers/mpg.csv.bz2") plt.figure (figsize = (10,8)) print (df.columns) partialcolumns = df[['acceleration', 'mpg']] std_scale = preprocessing.StandardScaler().fit(partialcolumns) df_std = std_scale.transform (partialcolumns) plt.scatter (partialcolumns['acceleration'], 
partialcolumns['mpg'], color="grey", marker='^') plt.scatter (df_std[:,0], df_std[:,1]) # + df = pd.read_csv("../../../../data/induction/books/machine-learning-for-engineers/mpg.csv.bz2") plt.figure (figsize = (10,8)) print (df.columns) partialcolumns = df[['acceleration', 'mpg']] std_scale = preprocessing.StandardScaler().fit(partialcolumns) df_std = std_scale.transform (partialcolumns) plt.scatter (partialcolumns['acceleration'], partialcolumns['mpg'], color="grey", marker='^') plt.scatter (df_std[:,0], df_std[:,1]) plt.show() # + language="markdown" # # Error Measurement # + y_true = [8, 5, 6, 8, 5, 3, 1, 6, 4, 2, 5, 3, 1, 4] y_pred = [8, 5, 6, 8, 5, 2, 3, 4, 4, 5, 5, 7, 2, 6] cf_mtrx = confusion_matrix(y_true, y_pred) print (cf_mtrx) plt.imshow (cf_mtrx, interpolation='nearest', cmap='plasma') plt.xticks (np.arange(0,8), np.arange(1,9)) plt.yticks (np.arange(0,8), np.arange(1,9)) plt.show() # -
books/machine-learning-for-engineers/02-data-transformation.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + import hatchet as ht filename1 = '../data/lulesh-1node/lulesh-annotation-profile-27cores.json' filename2 = '../data/lulesh-16nodes/lulesh-annotation-profile-512cores.json' gf1 = ht.GraphFrame.from_caliper_json(filename1) gf2 = ht.GraphFrame.from_caliper_json(filename2) gf1.drop_index_levels() gf2.drop_index_levels() # - squashed_gf1 = gf1.filter(lambda x: x['name'].startswith('MPI')) squashed_gf2 = gf2.filter(lambda x: x['name'].startswith('MPI')) new_gf = squashed_gf2 - squashed_gf1 sorted_df = new_gf.dataframe.sort_values(by=['time'], ascending=False) display(sorted_df)
notebooks/4-filter-by-mpi.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Objective: automatically send messages to several people or groups # ### Caution! # # 1. WhatsApp does not like any kind of automation # 2. This can go wrong, consider yourself warned # 3. This does not use the official WhatsApp API; WhatsApp has its own official API. If your goal is bulk messaging or building those little bots that reply automatically on WhatsApp, then use the official API # 4. My goal here is 100% educational # ### That said, let's automate sending WhatsApp messages # # - We will use Selenium (configuration video in the description) # - We have 1 very good alternative tool: # - Use wa.me (easier, safer, but slower) # + import pandas as pd contatos_df = pd.read_excel("Enviar.xlsx") display(contatos_df) # + from selenium import webdriver from selenium.webdriver.common.keys import Keys import time import urllib.parse navegador = webdriver.Chrome() navegador.get('https://web.whatsapp.com/') while len(navegador.find_elements_by_id("side")) < 1: time.sleep(1) for i, mensagem in enumerate(contatos_df['Mensagem']): pessoa = contatos_df.loc[i, 'Pessoa'] numero = contatos_df.loc[i, 'Número'] text = urllib.parse.quote(f'Oi {pessoa}, {mensagem}') link = f'https://web.whatsapp.com/send?phone={numero}&text={text}' navegador.get(link) while len(navegador.find_elements_by_id("side")) < 1: time.sleep(1) navegador.find_element_by_xpath('//*[@id="main"]/footer/div[1]/div/span[2]/div/div[2]/div[1]/div/div[2]').send_keys(Keys.ENTER) time.sleep(10) # -
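# The link-building step can be checked on its own, without opening a browser. This is a sketch with a hypothetical phone number and message: `urllib.parse.quote` percent-encodes the text so it survives as a URL query parameter.

```python
import urllib.parse

numero = '5511999999999'                       # hypothetical phone number
texto = urllib.parse.quote('Oi Maria, tudo bem?')
link = f'https://web.whatsapp.com/send?phone={numero}&text={texto}'

print(texto)  # Oi%20Maria%2C%20tudo%20bem%3F
print(link)
```

# Spaces, commas, and question marks would otherwise break or change the URL, which is why the message must be quoted before being appended to the `text=` parameter.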
Enviar WhatsApp Python.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .r # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: R # language: R # name: ir # --- # ## Assignment Solution # # 1. Write code that takes a user inputted number and prints whether it is positive, negative or zero, with "The inputted number is (positive/negative/zero)" depending. # # # + #convert the input to numeric: readline() returns a character string, and comparing a string against 0 would do a lexicographic comparison num = as.numeric(readline(prompt = 'type a number:')) if (num>0){ print(paste('The inputted number ',num,' is positive.')) } else if (num==0){ print(paste('The inputted number ',num,' is zero.')) }else{ print(paste('The inputted number ',num,' is negative.')) } # - # 2. Write code that takes two user inputted numbers and prints "The first number is larger" or "The second number is larger" depending on which is larger. (**Hint**: you'll need to use `readline()` twice.) # # + num1 = readline(prompt = 'type a number:') num2 = readline(prompt = 'type another number:') num = as.numeric(num1) - as.numeric(num2) if (num>0){ print(paste('The first number is larger.')) } else if (num==0){ print(paste('The two numbers are equal.')) }else{ print(paste('The second number is larger.')) } # - # 3. Write a function that computes the sum from 0 to a user inputted number. This time though, start at the user inputted number and work down. This answer will look very much like the example above, you'll just need to change a couple of things. # # #preparation: to create a list of consecutive integers from 0 number=-3.23 0:number # + sum_from_0 = function(number){ sum = 0 for (i in 0:number){ sum = sum + i } return(sum) } num = as.numeric(readline(prompt = 'type a number:')) print(paste('The sum from 0 to your inputted number is',sum_from_0(num))) # - # 4. Write a function that computes the factorial of a user inputted number. If you don't know what a factorial is or need a review, check [this](https://en.wikipedia.org/wiki/Factorial) link out. 
Again, your solution is going to look a lot like the code above. Things you should think about: # * What is the process of computing a factorial if you were to compute it by hand? # * What is the common starting place when trying to compute the factorial of any number?<br><br> # # # + factorial = function(number){ fac = 1 for (i in 1:number){ fac = fac * i } return(fac) } num = as.numeric(readline(prompt = 'type a positive number:')) print(paste('The factorial of your inputted number is',factorial(num))) # - # 5. Write a function that computes and prints all of the nontrivial positive divisors of a user inputted number. If you don't know what a divisor is or need a review, check out [this](https://en.wikipedia.org/wiki/Divisor) link. Things to think about: # * How do you determine if a single number is a divisor of another? # * How do you do this multiple times (**Hint**: it involves a for loop)? # # + #solu1: use a for loop #create a function for nontrivial_positive_divisors (npd) npd = function(x){ y = c() for (i in 2:(x-1)){ if (x%%i==0){ y = c(y, i) } } return(y) } npd(129) # + #solu2: use boolean indexing npd = function(x){ #create a list of consecutive integers from 2 to x-1 y = 2:(x-1) #boolean indexing the list by modular division return(y[x%%y==0]) } npd(32) # - # 6. Write a function that computes the greatest common divisor between two user inputted numbers. If you don't know what a greatest common divisor is, check out [this](https://en.wikipedia.org/wiki/Greatest_common_divisor) link. # # # + #use all positive divisors (including 1 and the number itself); using npd alone would miss cases such as gcd(15, 45) = 15 divisors = function(x){ y = 1:x return(y[x%%y==0]) } gcd = function(x,y){ max(intersect(divisors(x), divisors(y))) } gcd(12, 15) # - # 7. Write a function that computes the least common multiple between two user inputted numbers. If you don't know what a least common multiple is or want a review check [this](https://en.wikipedia.org/wiki/Least_common_multiple) out. # + #the least common multiple follows from the gcd: lcm(x, y) = x * y / gcd(x, y) (the minimum of the common divisors would be a common divisor, not a common multiple) lcm = function(x,y){ x * y / gcd(x, y) } lcm(15, 45) # - # 8. 
Write a function that determines whether or not a user inputted number is a prime number and prints `'The number you inputted is (not) a prime number.'` depending on what your function finds. If you don't know what a prime number is or need a review, check out [this](https://en.wikipedia.org/wiki/Prime_number) link. Things to think about: # * How do you check if a number is divisible by another number? # * What numbers are a prime number divisible by? # * How do you check all of the numbers a number could be divisible by? # # # + is_prime = function(x){ if (length(npd(x))==0){ print(paste(x, ' is a prime number.')) }else{ print(paste(x, ' is NOT a prime number.')) } } is_prime(4) # - # 9. One can use loops to compute the elements of a mathematical series. Series can be defined recursively with the value of each element depending on the one that comes before it. Consider the series created by the rules: # # > a[0]=1, a[i+1] = 2*a[i] + 1, for i >0 # # Write a function that prints the `nth` element in the series as determined by input from the user. e.g. If the user inputs the number `3`, your function should print the 3rd element in the series, `15`. You're welcome to check the math! Things to think about: # * You know you're going to use a loop to solve this problem, how? # * How do you store each of the elements as you calculate them with the loop? # * How many elements do you need to keep track of at any one time? # # nth = function(n){ #initialize the list; a[1] holds the 0th element of the series (R vectors are 1-indexed) a = c(1) for (i in 1:n){ a = c(a, a[i]*2+1) } return(a[n+1]) } nth(4) # 10. Challenge: solve the equation: # # `(a + (b - c) * d - e) * f = 75` # # where a, b, c, d, e, and f are unique integers in the 1:6. # # Hints: # - Computers are so fast that your program can simply try all possible valid values of a, b, c, d, e, and f until it finds one permutation of 1-6 that solves the challenge! (Btw, there is only *one* permutation that will solve it.) 
# - Use 6 nested for-loops to enumerate all ways of setting each of a, b, c, d, e, and f to the values 1-6. # # # Want more? Modify your program to solve all these (very similar) equations: # # ``` # (a + (b - c) * d - e) * f = 22 # (a + (b - c) * d - e) * f = 38 # (a + (b - c) * d - e) * f = 46 # (a + (b - c) * d - e) * f = 57 # (a + (b - c) * d - e) * f = 78 # (a + (b - c) * d - e) * f = 80 # (a + (b - c) * d - e) * f = 81 # (a + (b - c) * d - e) * f = 88 # (a + (b - c) * d - e) * f = 92 # (a + (b - c) * d - e) * f = 100 # (a + (b - c) * d - e) * f = 102 # (a + (b - c) * d - e) * f = 104 # (a + (b - c) * d - e) * f = 105 # ``` # + #create a matrix where each row is one permutation of v, the list of numbers provided perm <- function(v) { n <- length(v) if (n == 1) v else { X <- NULL for (i in 1:n) X <- rbind(X, cbind(v[i], perm(v[-i]))) X } } # Find solution for equation (a + (b - c) * d - e) * f = x, where a...e are unique within 1:6 solve_eqn= function(n){ mat = perm(1:6) df = as.data.frame(mat) colnames(df) = c('a','b','c','d','e','f') # check if `(a + (b - c) * d - e) * f = n`, using boolean indexing on the rows of df df1 = df[(df$a + (df$b - df$c) * df$d - df$e) * df$f == n,] return(df1) } # - perm(c(1,2,3)) solve_eqn(22)
r-essentials/exercises/assignment-1-soln.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Naive Bayes Sentiment Analysis Using Packages # # (Under Progress) # # Source: # - http://scikit-learn.org/stable/modules/naive_bayes.html # - https://pythonprogramming.net/naive-bayes-classifier-nltk-tutorial/ # - https://streamhacker.com/2010/05/10/text-classification-sentiment-analysis-naive-bayes-classifier/ # # SKLEARN Method from sklearn.naive_bayes import GaussianNB from sklearn.naive_bayes import BernoulliNB from sklearn.naive_bayes import MultinomialNB from sklearn import datasets iris = datasets.load_iris() from sklearn.naive_bayes import GaussianNB gnb = GaussianNB() y_pred = gnb.fit(iris.data, iris.target).predict(iris.data) print("Number of mislabeled points out of a total %d points : %d" % (iris.data.shape[0],(iris.target != y_pred).sum())) # # NLTK Method import nltk.classify.util from nltk.classify import NaiveBayesClassifier from nltk.corpus import movie_reviews # from nltk.classify import NaiveBayesClassifier movie_reviews.fileids()[:5] negids = movie_reviews.fileids('neg') posids = movie_reviews.fileids('pos') posids[:5] # the given file(s) as a list of words and punctuation symbols movie_reviews.words() # print the words in the following document print(negids[2]) movie_reviews.words(fileids=[negids[2]]) # + def word_feats(words): return dict([(word, True) for word in words]) negfeats = [(word_feats(movie_reviews.words(fileids=[f])), 'neg') for f in negids] posfeats = [(word_feats(movie_reviews.words(fileids=[f])), 'pos') for f in posids] # - negcutoff = int(len(negfeats)*3/4) poscutoff = int(len(posfeats)*3/4) poscutoff trainfeats = negfeats[:negcutoff] + posfeats[:poscutoff] testfeats = negfeats[negcutoff:] + posfeats[poscutoff:] print('train on %d instances, test on %d instances' % (len(trainfeats), len(testfeats))) 
classifier = NaiveBayesClassifier.train(trainfeats) print('accuracy:', nltk.classify.util.accuracy(classifier, testfeats)) classifier.show_most_informative_features() # # FINAL # + import nltk.classify.util from nltk.classify import NaiveBayesClassifier from nltk.corpus import movie_reviews # from nltk.classify import NaiveBayesClassifier def word_feats(words): return dict([(word, True) for word in words]) negids = movie_reviews.fileids('neg') posids = movie_reviews.fileids('pos') negfeats = [(word_feats(movie_reviews.words(fileids=[f])), 'neg') for f in negids] posfeats = [(word_feats(movie_reviews.words(fileids=[f])), 'pos') for f in posids] negcutoff = int(len(negfeats)*3/4) poscutoff = int(len(posfeats)*3/4) trainfeats = negfeats[:negcutoff] + posfeats[:poscutoff] testfeats = negfeats[negcutoff:] + posfeats[poscutoff:] print('train on %d instances, test on %d instances' % (len(trainfeats), len(testfeats))) classifier = NaiveBayesClassifier.train(trainfeats) print('accuracy:', nltk.classify.util.accuracy(classifier, testfeats)) classifier.show_most_informative_features()
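# The sklearn half of this notebook only exercises `GaussianNB` on iris, even though `MultinomialNB` is imported at the top. A minimal sketch of how `MultinomialNB` applies to text, using `CountVectorizer` bag-of-words counts on a tiny made-up corpus (the documents and labels below are invented for illustration):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

docs = ['great fantastic movie', 'wonderful great acting',
        'terrible boring movie', 'awful boring plot']
labels = ['pos', 'pos', 'neg', 'neg']

vec = CountVectorizer()
X = vec.fit_transform(docs)          # sparse document-term count matrix
clf = MultinomialNB().fit(X, labels)

print(clf.predict(vec.transform(['great wonderful plot'])))   # ['pos']
print(clf.predict(vec.transform(['terrible awful movie'])))   # ['neg']
```

# This mirrors the NLTK `word_feats` approach above, except that the features are word counts rather than boolean presence indicators.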
*Machine_Learning/Naive_Bayes_Classifier/Naive_Bayes_Other.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import seq2ftr.transformer from seq2ftr.transformer import SequenceTransformer # ### Direct API # + __testdata1__ = { "type":1, # 0 for boolean, 1 for numerical, 2 for categorical "value":[1,2,3] } __testdata2__ = { "type":2, # 0 for boolean, 1 for numerical, 2 for categorical "value":["1","2"] } # - st_num = SequenceTransformer() st_cat = SequenceTransformer() print(st_num.transform(__testdata1__)) print(st_cat.transform(__testdata2__)) # ### Modeling API import pandas as pd df = pd.DataFrame([[1,200,"1"],[1,500,"2"],[2,300,"2"],[2,600,"2"]],columns=['id','stock_price',"type"]) df = df.set_index("id") st_num.transform(df['stock_price']) st_cat.transform(df['type'])
QuickDemo.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Encrypted Linear Regression # In this tutorial you are going to see how you can run a linear regression model on **data distributed in a pool of workers** with **encrypted computations leveraged by Secured Multi-Party Computation**. For this demonstration we are going to use the classical Housing Prices dataset that is already available in the VirtualGrid set up by the Syft Sandbox. # # The idea for the implementation of the Encrypted Linear Regression algorithm in PySyft is based on the section 2 of [this paper](https://arxiv.org/abs/1901.09531) written by <NAME> of the Broad Institute of MIT and Harvard. # # **Author**: <NAME>. Github: [@andrelmfarias](https://github.com/andrelmfarias) | Twitter: [@andrelmfarias](https://twitter.com/andrelmfarias) # ## 1. Preliminaries # First, let's import PySyft and PyTorch and set up the Syft sandbox, which will create all the objects and tools we will need to run our simulation (Virtual Workers, VirtualGrid with datasets, etc...) import warnings warnings.filterwarnings("ignore") import torch import syft as sy sy.create_sandbox(globals(), verbose=False) # You can see that we have several workers already set up: workers # And each one has a chunk of the Housing Prices dataset: for worker in workers: print(worker.search(["#housing", "#data"])) # ## 2. Encrypted Linear Regression with PySyft # ### 2.1 Loading Housing Prices data from Virtual Grid # Now we have our Syft environment set, let's load the data. # # Please note that in order to avoid overflow with the SMPC computations performed by the linear model, and to maintain its stability, **we need to scale the data in a such way that the magnitude of each coordinate average lies in the interval [0.1, 10]**. 
# # Usually that can be done without revealing the data or the averages, you only need to have an idea of the order of magnitude. For example, if one of the coordinate is the surface of the house and it is represented in m², you should scale it by dividing by 100, as we know the surfaces of houses have an order of magnitude close to 100 in average. # # After running the model and obtaining the main statistics, we can rescale them back if needed. The same can be done with predictions. # # In this tutorial I will be loading the data and scale them following this idea: # + scale_data = torch.Tensor([10., 10., 10., 1., 1., 10., 100., 10., 10., 1000., 10., 1000., 10.]) scale_target = 100.0 housing_data = [] housing_targets = [] for worker in workers: housing_data.append(sy.local_worker.request_search(["#housing", "#data"], location=worker)[0] / scale_data.send(worker)) housing_targets.append(sy.local_worker.request_search(["#housing", "#target"], location=worker)[0] / scale_target) # - # ### 2.2 Setting up 2 more Virtual workers: the crypto provider and the "honest but curious" worker # In order to run the linear regression, we will need **two more workers**, a *crypto provider* and a *honest but curious* worker. Both are necessary to assure the security of the SMPC computations when we run the model in a pool with more than 3 workers. # # > *Note: the **honest but curious** worker is a legitimate participant in a communication protocol who will not deviate from the defined protocol but will attempt to learn all possible information from legitimately received messages.* crypto_prov = sy.VirtualWorker(hook, id="crypto_prov") hbc_worker = sy.VirtualWorker(hook, id="hbc_worker") # ### 2.3 Running Encrypted Linear Regression with SMPC # Now let's import the EncryptedLinearRegression from the linalg module of pysyft: from syft.frameworks.torch.linalg import EncryptedLinearRegression # Let's train the model!! 
crypto_lr = EncryptedLinearRegression(crypto_provider=crypto_prov, hbc_worker=hbc_worker) crypto_lr.fit(housing_data, housing_targets) # We can display the results with the method `.summarize()` crypto_lr.summarize() # **We can see that the EncryptedLinearRegression gives not only the coefficient and intercept values, but also their standard errors and p-values!** # ## 3. Comparing results with other linear regressors # Now, in order to show the effectiveness of the EncryptedLinearRegression, let's compare it with the Linear Regression from other known libraries. # ### 3.1 Sending data to local server for comparison purposes # First, let's send the data to the local worker and transform the `torch.Tensor`s into `numpy.array`s # + import numpy as np data_tensors = [x.copy().get() for x in housing_data] target_tensors = [y.copy().get() for y in housing_targets] data_np = torch.cat(data_tensors, dim=0).numpy() target_np = torch.cat(target_tensors, dim=0).numpy() # - # ### 3.2 Scikit-learn # First let's compare the results with sklearn's Linear Regression: from sklearn.linear_model import LinearRegression lr = LinearRegression().fit(data_np, target_np.squeeze()) # Display the results: print("=" * 25) print("Sklearn Linear Regression") print("=" * 25) for i, coef in enumerate(lr.coef_, 1): print(" coeff{:<3d}".format(i), "{:>14.4f}".format(coef)) print(" intercept:", "{:>12.4f}".format(lr.intercept_)) print("=" * 25) # **You can notice that the results are pretty much the same!! There are some small differences, but they are never higher than 0.2% of the value computed by the sklearn model!!** # # **For an encrypted model that can compute linear regression coefficients without ever revealing the data, this is a huge achievement!** # ### 3.3 Statsmodel API # We can do the same using the Linear Regression from Statsmodel API, which also gives us the **standard errors** and **p-values** of the coefficients. 
We can then compare it with the results given by the EncryptedLinearRegression import statsmodels.api as sm mod = sm.OLS(target_np.squeeze(), sm.add_constant(data_np), hasconst=True) res = mod.fit() print(res.summary()) # **Once again, we can see that all results are pretty much the same!!** # # # ## Well Done! # # And voilà! We were able to train an OLS Regression model on distributed data and without ever seeing it. We were even able to compute standard errors and p-values for each coefficient. # # Also, after comparing our results with results given by other known libraries, we were able to validate this approach. # # Congratulations!!! - Time to Join the Community! # # Congratulations on completing this notebook tutorial! If you enjoyed this and would like to join the movement toward privacy preserving, decentralized ownership of AI and the AI supply chain (data), you can do so in the following ways! # # # ### Star PySyft on GitHub # # The easiest way to help our community is just by starring the repositories! This helps raise awareness of the cool tools we're building. # # - [Star PySyft](https://github.com/OpenMined/PySyft) # # ### Pick our tutorials on GitHub! # # We made really nice tutorials to get a better understanding of what Federated and Privacy-Preserving Learning should look like and how we are building the bricks for this to happen. # # - [Checkout the PySyft tutorials](https://github.com/OpenMined/PySyft/tree/master/examples/tutorials) # # # ### Join our Slack! # # The best way to keep up to date on the latest advancements is to join our community! # # - [Join slack.openmined.org](http://slack.openmined.org) # # ### Join a Code Project! # # The best way to contribute to our community is to become a code contributor! If you want to start "one off" mini-projects, you can go to PySyft GitHub Issues page and search for issues marked `Good First Issue`. 
# # - [Good First Issue Tickets](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) # # ### Donate # # If you don't have time to contribute to our codebase, but would still like to lend support, you can also become a Backer on our Open Collective. All donations go toward our web hosting and other community expenses such as hackathons and meetups! # # - [Donate through OpenMined's Open Collective Page](https://opencollective.com/openmined)
examples/tutorials/advanced/Encrypted Linear Regression.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] deletable=true editable=true id="q4lpr7bdmoeO" colab_type="text" # # # Scikit learn # <img src="https://raw.githubusercontent.com/INGEOTEC/diplo2019/master/notebooks/img/learn.png" width="200"> # + deletable=true editable=true jupyter={"outputs_hidden": false} id="zBCPY-yCmoeS" colab_type="code" colab={} import pandas as pd # %matplotlib inline from sklearn.model_selection import train_test_split data_train = pd.read_csv('Titanic_codificado_train.csv') data_test = pd.read_csv('Titanic_codificado_test.csv') X_all = data_train.drop(['Survived', 'PassengerId'], axis=1) y_all = data_train['Survived'] num_test = 0.20 X_train, X_test, y_train, y_test = train_test_split(X_all, y_all, test_size=num_test, random_state=23) # + deletable=true editable=true jupyter={"outputs_hidden": false} id="jF7N3A1emoeX" colab_type="code" colab={} outputId="9e5d3872-36c7-4422-8b0e-b173f5b2c8ce" data_train.head() # + deletable=true editable=true jupyter={"outputs_hidden": false} id="mzu2Remwmoec" colab_type="code" colab={} outputId="82210a8d-df5c-46c6-8897-55980b9b026f" X_train['Age'].plot.hist() # + deletable=true editable=true jupyter={"outputs_hidden": false} id="P448Z344moeg" colab_type="code" colab={} outputId="ee77e13f-873a-44ae-e452-0d1f777cb430" X_train['Sex'].plot.hist() # + [markdown] deletable=true editable=true id="UkpWJ782moej" colab_type="text" # # Clustering # + [markdown] deletable=true editable=true id="pq2ZPlELmoel" colab_type="text" # ## What is a cluster? # # - A set of values that have something in common, grouped together according to a given trait. 
# # ## Clustering algorithms # # - Their goal is to return to the user a set of points that in some way represent the rest of the initial points, by virtue of their representative position with respect to the whole. # + [markdown] deletable=true editable=true id="MmDygdQ0moem" colab_type="text" # ## Most commonly used clustering algorithms # - k-Means # - Self-organizing maps # - Nearest Neighborhood # + [markdown] deletable=true editable=true id="mAxRBpZumoen" colab_type="text" # # k-Means # # - Unsupervised learning # - Clustering technique with several parameters # - Number of clusters # - Stopping criterion # - Initial values (seeds; normally left to random selection) # # # + [markdown] deletable=true editable=true id="3jAiih6Mmoeo" colab_type="text" # # k-Means # # 1. Takes as its initial parameter the number k, which is the number of clusters to generate # 2. Selects k elements at random; these elements are the centroids of each cluster # 3. Every object (other than the centroids) is assigned to the cluster it most resembles, based on the distance between the object and the centroid (or cluster mean) # 4. The new value of each centroid is computed # 5. Steps 3 and 4 are repeated until there are no changes in the centroid values, or some other stopping criterion is met # # + [markdown] deletable=true editable=true id="_NSGWZ4Pmoeq" colab_type="text" # ## Similarity measures # # - A similarity measure based on the squared error or the Euclidean distance is normally used # # <img src="img/errormedio.png" width="250"> # # where $p$ is the object and $m_i$ is the mean of cluster $C_i$ # # - Euclidean distance # # <img src="img/Eclideana.png" width="200"> # + deletable=true editable=true jupyter={"outputs_hidden": true} id="7pKfJj9Zmoes" colab_type="code" colab={} outputId="b4f29bdd-804a-49f7-90a5-89be6d46be98" from IPython.display import YouTubeVideo #Copy and paste the link into your browser to watch the video. 
YouTubeVideo('5I3Ei69I40s')  # YouTubeVideo takes the video id, not the full URL # + deletable=true editable=true jupyter={"outputs_hidden": false} id="tibB9k0Amoev" colab_type="code" colab={} outputId="75347e45-49b7-47b7-f8cf-7dbcfead124e" # Here we create synthetic data to plot the samples and their centroids. import matplotlib.pyplot as plt from sklearn.datasets import make_blobs  # sklearn.datasets.samples_generator was removed in recent scikit-learn X, y_true = make_blobs(n_samples=300, centers=4, cluster_std=0.60, random_state=0) plt.scatter(X[:, 0], X[:, 1], s=50); from sklearn.cluster import KMeans kmeans = KMeans(n_clusters=4) kmeans.fit(X) y_kmeans = kmeans.predict(X) plt.scatter(X[:, 0], X[:, 1], c=y_kmeans, s=50, cmap='viridis') centers = kmeans.cluster_centers_ plt.scatter(centers[:, 0], centers[:, 1], c='black', s=200, alpha=0.5); # + deletable=true editable=true jupyter={"outputs_hidden": true} id="C_irPEgumoe0" colab_type="code" colab={} # Here we train a k-means on the Titanic data kmeans_titanic = KMeans(n_clusters=2, random_state=0).fit(X_train) # predict on the test set kmeans_predicted = kmeans_titanic.predict(X_test) # predict on the training set kmeans_predicted_ = kmeans_titanic.predict(X_train) # + deletable=true editable=true jupyter={"outputs_hidden": false} id="Y4oLIly5moe3" colab_type="code" colab={} import numpy as np # Here we just convert the dataframe into a numpy array X_train_ = np.array(X_train) # + deletable=true editable=true jupyter={"outputs_hidden": false} id="5QFdAaimmoe7" colab_type="code" colab={} outputId="3a1aecb9-10e1-471b-d86f-63f92ed7a62e" # Only variables 9 and 7 are plotted plt.scatter(X_train_[:, 9], X_train_[:, 7], s=50); # + deletable=true editable=true jupyter={"outputs_hidden": false} id="lYb8-6pCmoe_" colab_type="code" colab={} outputId="2b2c1032-6cf3-4a13-e0be-15a90d9b3f70" # We plot the k-means result with the two computed centroids; only variables 9 and 7 are shown, but the model was trained on all variables, which is why the centroids appear far away from the points. from sklearn.cluster import KMeans kmeans = KMeans(n_clusters=2) kmeans.fit(X_train) y_kmeans = kmeans.predict(X_train) plt.scatter(X_train_[:, 9], X_train_[:, 7], c=y_kmeans, s=50, cmap='viridis') centers = kmeans.cluster_centers_ plt.scatter(centers[:, 0], centers[:, 1], c='black', s=200, alpha=0.5); # + deletable=true editable=true jupyter={"outputs_hidden": false} id="aRtBXCOvmofC" colab_type="code" colab={} outputId="bd04d798-6a75-4328-ba8d-d3831fe929a5" X_all.head() # + deletable=true editable=true jupyter={"outputs_hidden": true} id="486q5wxymofF" colab_type="code" colab={} X_ = data_train[['Cabin','NamePrefix']] y_ = data_train['Survived'] num_test = 0.20 X_train, X_test, y_train, y_test = train_test_split(X_, y_, test_size=num_test, random_state=23)  # split the two selected variables, not X_all # + deletable=true editable=true jupyter={"outputs_hidden": false} id="OQwVvsCwmofI" colab_type="code" colab={} outputId="96037834-8cae-41bb-dda4-103303b5bad1" # The k-means is trained again, this time using only the two selected variables. from sklearn.cluster import KMeans kmeans = KMeans(n_clusters=2) kmeans.fit(X_train) y_kmeans = kmeans.predict(X_train) X_train_ = np.array(X_train)  # refresh the array so the plot uses the same two variables the model was trained on plt.scatter(X_train_[:, 0], X_train_[:, 1], c=y_kmeans, s=50, cmap='viridis') centers = kmeans.cluster_centers_ plt.scatter(centers[:, 0], centers[:, 1], c='black', s=200, alpha=0.5); # Now we can see how the centroids really separate the two classes, the black markers over the purple and yellow points.
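The five k-means steps listed above can be written out directly. A minimal NumPy sketch for illustration only — the scikit-learn `KMeans` used in this notebook is what you should use in practice:

```python
import numpy as np

def simple_kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # Step 2: pick k random points as the initial centroids
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Step 3: assign every point to its nearest centroid (Euclidean distance)
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 4: recompute each centroid as the mean of its cluster
        # (an empty cluster keeps its previous centroid)
        new_centroids = np.array([X[labels == i].mean(axis=0) if np.any(labels == i)
                                  else centroids[i] for i in range(k)])
        # Step 5: stop when the centroids no longer move
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

# Two noisy, well-separated blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (10, 2)), rng.normal(5, 0.1, (10, 2))])
labels, centroids = simple_kmeans(X, k=2)
```

The random initialization (step 2) is exactly why scikit-learn exposes `random_state` and runs several restarts (`n_init`) by default.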
notebooks/01-Python-004.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from manage import ROOT, CONFIG import yaml import os # ## Template reads # # **Abstract:** # One-sentence description # # **Description:** # In the following cell, I... # repo_readme = "templates/docs/repo_readme.md" with open(os.path.join(ROOT, repo_readme), "r") as md: text = md.read()  # the with-block closes the file, no explicit close needed # + active="" # a = !tree -L 2 -d example # # a # # a.replace("\xa0", "") # - CONFIG # + import re # raw strings avoid invalid-escape warnings in the regex patterns pattern = r"\{%\s+.*?\s+%\}" pattern2 = r"\{%\s+(?P<value>.*?)\s+%\}" def find_value(dictionary, keys): key = keys.pop(0) value = check_yaml(dictionary.get(key))  # check_yaml is assumed to come from the project's manage module if isinstance(value, dict): value = find_value(value, keys) return value templates = re.findall(pattern, text) for temp in templates: keys = re.search(pattern2, temp).group('value').split(":") value = find_value(CONFIG, keys) text = text.replace(temp, str(value)) with open("test.md", 'w') as md: md.write(text) # - check_yaml("configs/structure.yaml") os.path.isfile('configs/structure.yaml') # + def show_files(path, all_files=None, full_path=False, suffix=None): """ All files under the folder. :param path: A folder. :param all_files: initial list, where files will be saved. :param full_path: Save full path or just file name? Default False, i.e., just file name. :param suffix: Filter by suffix. :return: all_files, an updated list with the new files found under the path folder, in addition to the original input. """ # First walk all files and folders in the current directory if not os.path.isdir(path): raise FileNotFoundError(f"{path} is not a folder.") if all_files is None: all_files = [] if not suffix: suffix = [] file_list = os.listdir(path) # Loop over each entry: recurse into folders, append matching files to the list for file in file_list: # Use os.path.join() to build the full path; otherwise only one directory level could be traversed cur_path = os.path.join(path, file) # Check whether the entry is a folder if os.path.isdir(cur_path): show_files(cur_path, all_files, full_path, suffix)  # pass suffix down so the filter also applies in subfolders else: # Apply the suffix filter to files only, so folders are still recursed into if suffix and not any(cur_path.endswith(suf) for suf in suffix): continue if full_path: all_files.append(cur_path) else: all_files.append(file) return all_files show_files(".", suffix=['.md', '.yaml']) # + a = read_yaml("config.yaml")  # read_yaml is assumed to be defined elsewhere in the project read_yaml(a["structure"]) # + import requests config_file = r"https://raw.githubusercontent.com/SongshGeo/Python-Project-Template/master/mksci/templates/config_temp.yaml" config_yaml = requests.get(config_file) with open("test.yaml", "wb") as code: code.write(config_yaml.content) # -
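The placeholder-replacement loop above can be reduced to a self-contained sketch. The `config` dict below is a stand-in, since the real `CONFIG` (and `check_yaml`) come from this project's `manage` module:

```python
import re

# Stand-in for the project's CONFIG dict (the real one is loaded from YAML)
config = {"project": {"name": "demo", "author": "someone"}}

pattern = r"\{%\s+(?P<value>.*?)\s+%\}"

def find_value(dictionary, keys):
    # Walk the nested dict: "project:name" -> dictionary["project"]["name"]
    value = dictionary[keys.pop(0)]
    if isinstance(value, dict) and keys:
        value = find_value(value, keys)
    return value

def render(text, cfg):
    # Replace every "{% a:b %}" placeholder with the matching value from cfg
    for match in re.finditer(pattern, text):
        keys = match.group("value").split(":")
        text = text.replace(match.group(0), str(find_value(cfg, keys)))
    return text

rendered = render("# {% project:name %} by {% project:author %}", config)
```

Searching the original string with `re.finditer` while replacing in a copy avoids re-scanning text that has already been substituted.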
playground.ipynb
// -*- coding: utf-8 -*- // --- // jupyter: // jupytext: // text_representation: // extension: .java // format_name: light // format_version: '1.5' // jupytext_version: 1.14.4 // kernelspec: // display_name: Java // language: java // name: java // --- // ### Google Colab Integration // // You can skip the following cell if you are working with a local installation. If you run the notebook on Google Colab, you must run this cell first and then reload the page (F5). // + slideshow={"slide_type": "skip"} !echo "Update environment..." !apt update -q &> /dev/null !echo "Install Java..." !apt-get install -q openjdk-11-jdk-headless &> /dev/null !echo "Install Jupyter java kernel..." !curl -L https://github.com/SpencerPark/IJava/releases/download/v1.3.0/ijava-1.3.0.zip -o ijava-kernel.zip &> /dev/null !unzip -q ijava-kernel.zip -d ijava-kernel && cd ijava-kernel && python3 install.py --sys-prefix &> /dev/null !echo "Downloading turtle jar ..." !curl -L https://github.com/Andreas-Forster/gyminf-programmieren/raw/master/notebooks/jturtle-0.6.jar -o jturtle-0.6.jar &> /dev/null !echo "Done." // + [markdown] slideshow={"slide_type": "slide"} // # Error Handling with Exceptions // // #### <NAME>, Departement Mathematik und Informatik, Universität Basel // + [markdown] slideshow={"slide_type": "slide"} // ### Error handling // - // Methods can fail under certain circumstances // // Examples: // * Reading a non-existent element of an array // * Using a file that does not exist // * Division by 0 // * ... // // > Good error handling is the key to stable programs // + [markdown] slideshow={"slide_type": "slide"} // ### Error handling // // What should the following method return when dividing by 0?
// ```java // int divide(int a, int b) { // return a / b; // } // ``` // + [markdown] slideshow={"slide_type": "slide"} // ### Error handling with "home remedies" // // ```java // static final int ERROR = -999999999; // a sentinel that still fits into an int // // int divide(int a, int b) { // if (b == 0) { // return ERROR; // } else { // return a / b; // } // } // ``` // + [markdown] slideshow={"slide_type": "slide"} // ### Error handling with "home remedies" // // Example: compute ```(a / b) / d``` // + // Code // + [markdown] slideshow={"slide_type": "slide"} // ### Drawbacks of our error handling // // * We need a special error value. // * What if the result happens to equal the error value? // * The caller of the method can forget or ignore the error // * Deeply nested ```if```s when several calls can fail // + [markdown] slideshow={"slide_type": "slide"} // ### What does Java do? // // - int div(int a, int b) { return a / b; } div(1, 0); // + [markdown] slideshow={"slide_type": "slide"} // ### Exceptions // // > Classes of the Java library that signal errors // // Essentially a normal Java class (see [API documentation](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/ArithmeticException.html) ) // - ArithmeticException e = new ArithmeticException("error in computation"); System.out.println(e.getMessage()); // + [markdown] slideshow={"slide_type": "slide"} // #### Mini-exercise // // Have a look at the [API documentation](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/ArithmeticException.html). // // * Which class does ArithmeticException inherit from? // * Can you define your own exception? // * What does the method printStackTrace() do? // * Try it out.
// // - // + [markdown] slideshow={"slide_type": "slide"} // ### Signaling errors // // * The ```throws``` clause declares that errors can occur in a method // * The ```throw``` statement raises the error // // // ``` // int div(int a, int b) throws ArithmeticException { // if (b == 0) { // throw new ArithmeticException("division by 0"); // } else { // return a / b; // } // } // ``` // + [markdown] slideshow={"slide_type": "slide"} // ### Signaling errors - example // + class MyException extends Exception { MyException(String message) { super(message); } } int div(int a, int b) throws MyException { if (b == 0) { throw new MyException("division by 0"); } else { return a / b; } } // + // Experiments // + [markdown] slideshow={"slide_type": "slide"} // ### The throw statement // // > "Throws" an exception object carrying the corresponding error information: // // 1. aborts normal program execution // 2. searches for a matching exception handler // 3. runs the exception handler, passing it the exception object as a parameter // 4. resumes program execution after the exception handling // // + [markdown] slideshow={"slide_type": "slide"} // ### Error handling: try-catch // // > Program parts that throw errors are "protected" in a try-catch block // // ```java // try { // // Java code that throws an exception // } catch (Exception e) { // // error handling // } // ``` // + [markdown] slideshow={"slide_type": "slide"} // ### Example // + // Example try-catch block // + [markdown] slideshow={"slide_type": "slide"} // ### Error handling: try-catch-finally // // Optional ```finally``` clause after the catch block // * Its code is always executed // * Even if the catch block throws exceptions again // * Used for cleanup // + int a = 7; int b = 0; try { div (a, b); } catch (MyException e) { System.out.println(e.getMessage()); throw new Exception("something terrible happened - bailing out"); } finally { System.out.println("in finally clause"); } System.out.println("Execution continues here") // + [markdown] slideshow={"slide_type": "slide"} // ### Error handling: checked by the compiler // // * Exceptions must either be handled or propagated further // * Propagating an exception is indicated by the ```throws``` clause // * The compiler verifies that errors are handled // + void f(int a, int b) throws MyException { // the exception is propagated further div(a, b); } void g() { try { f(3, 0); } catch (MyException e) { System.out.println(e.getMessage()); } } g(); // + [markdown] slideshow={"slide_type": "slide"} // ### Further aspects // // Aspects of exceptions that go beyond this introduction: // // * Java distinguishes different classes of exceptions // * Some may be caught, others must be // * A try-catch block can handle any number of different exceptions // + [markdown] slideshow={"slide_type": "slide"} // #### Mini-exercise // // * Write your own exception class named ```SwearWordException```.
// * Write a method ```void printText(String text)```. If an unpleasant word appears in the text, it should throw an exception of type ```SwearWordException```. The offending word should be stored in the exception. // * Write a method ```void printCensored(String text)``` that calls the method ```printText```. If an exception occurs, the text "censored" should be printed instead. // -
notebooks/Exceptions.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="mLjlSwVGRli8" # _Lambda School Data Science — Applied Modeling_ # # This sprint, your project is Caterpillar Tube Pricing: Predict the prices suppliers will quote for industrial tube assemblies. # # # Gradient Boosting # # # #### Objectives # - Do feature engineering with relational data # - Use xgboost for gradient boosting # + [markdown] colab_type="text" id="Lxo6A73ERli-" # #### Python libraries for Gradient Boosting # - [scikit-learn Gradient Tree Boosting](https://scikit-learn.org/stable/modules/ensemble.html#gradient-boosting) — slower than other libraries, but [the new version may be better](https://twitter.com/amuellerml/status/1129443826945396737) # - Anaconda: already installed # - Google Colab: already installed # - [xgboost](https://xgboost.readthedocs.io/en/latest/) — can accept missing values and enforce [monotonic constraints](https://xiaoxiaowang87.github.io/monotonicity_constraint/) # - Anaconda, Mac/Linux: `conda install -c conda-forge xgboost` # - Windows: `conda install -c anaconda py-xgboost` # - Google Colab: already installed # - [LightGBM](https://lightgbm.readthedocs.io/en/latest/) — can accept missing values and enforce [monotonic constraints](https://blog.datadive.net/monotonicity-constraints-in-machine-learning/) # - Anaconda: `conda install -c conda-forge lightgbm` # - Google Colab: already installed # - [CatBoost](https://catboost.ai/) — can accept missing values and use [categorical features](https://catboost.ai/docs/concepts/algorithm-main-stages_cat-to-numberic.html) without preprocessing # - Anaconda: `conda install -c conda-forge catboost` # - Google Colab: `pip install catboost` # + [markdown] colab_type="text" id="S3mx5xMnRli_" # ### Get data # # # #### Option 1. 
Kaggle web UI # # Sign in to Kaggle and go to the [Caterpillar Tube Pricing](https://www.kaggle.com/c/caterpillar-tube-pricing) competition. Go to the Data page. After you have accepted the rules of the competition, use the download buttons to download the data. # # # #### Option 2. Kaggle API # # Follow these [instructions](https://github.com/Kaggle/kaggle-api). # # #### Option 3. GitHub Repo — LOCAL # # If you are working locally: # # 1. Clone the [GitHub repo](https://github.com/LambdaSchool/DS-Unit-2-Applied-Modeling/tree/master/data/caterpillar) locally. The data is in the repo, so you don't need to download it separately. # # 2. Unzip the file `caterpillar-tube-pricing.zip` which is in the data folder of your local repo. # # 3. Unzip the file `data.zip`. # # 4. Run the cell below to assign a constant named `SOURCE`, a string that points to the location of the data on your local machine. The rest of the code in the notebook will use this constant. # + colab_type="code" id="4llRWHx4EI2q" colab={} # SOURCE = '../data/caterpillar/caterpillar-tube-pricing/competition_data/' # + [markdown] colab_type="text" id="vvyyeP90FB65" # #### Option 4. GitHub Repo — COLAB # # If you are working on Google Colab, uncomment and run these cells, to download the data, unzip it, and assign a constant that points to the location of the data. 
# + colab_type="code" id="vzVWh6oGFZJb" outputId="398dfb50-9f5b-495e-b9b6-3be5ff8b4783" colab={"base_uri": "https://localhost:8080/", "height": 204} # !wget https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/caterpillar/caterpillar-tube-pricing.zip # + colab_type="code" id="1QG9BiopRljC" outputId="1f649f0e-8a17-488e-ceca-582a7cba54f5" colab={"base_uri": "https://localhost:8080/", "height": 68} # !unzip caterpillar-tube-pricing.zip # + colab_type="code" id="67Pz81FKRljE" outputId="ae1d4724-3590-42cb-f191-f679f2c17d24" colab={"base_uri": "https://localhost:8080/", "height": 408} # !unzip data.zip # + colab_type="code" id="mF8uDQ5wFSny" colab={} SOURCE = 'competition_data/' # + [markdown] colab_type="text" id="6_ZDsGjVRljF" # ## Do feature engineering with relational data # # Here are some questions — not answers! # # ### `bill_of_materials` # # is formatted like this: # + colab_type="code" id="opLig3sDRljG" outputId="f599c2d5-03d2-4ffc-a9d2-2b59edabb5a4" colab={"base_uri": "https://localhost:8080/", "height": 224} import pandas as pd materials = pd.read_csv(SOURCE + 'bill_of_materials.csv') materials.head() # + id="EjTT1g9zWjl3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 173} outputId="a8c74d99-ea8a-405c-ab7d-8c2dde83f8df" materials.describe(exclude='number') # + id="U850EAZ7Wjl6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 323} outputId="110f3325-784e-4d1c-bb88-b78b80d38320" materials.isnull().sum()/len(materials) # + [markdown] colab_type="text" id="7_nd_BN1RljI" # #### Would this be a better representation? # # Could pandas melt, crosstab, and other functions help reshape the data like this? 
# + [markdown] colab_type="text" id="DKgBu-T2RljI" # | Crosstab | C-1622 | C-1629 | C-1312 | C-1624 | C-1631 | C-1641 | Distinct | Total | # |:--------:|:------:|--------|--------|--------|--------|--------|----------|-------| # | TA-00001 | 2 | 2 | 0 | 0 | 0 | 0 | 2 | 4 | # | TA-00002 | 0 | 0 | 2 | 0 | 0 | 0 | 1 | 2 | # | TA-00003 | 0 | 0 | 2 | 0 | 0 | 0 | 1 | 2 | # | TA-00004 | 0 | 0 | 2 | 0 | 0 | 0 | 1 | 2 | # | TA-00005 | 0 | 0 | 0 | 1 | 1 | 1 | 3 | 3 | # + [markdown] colab_type="text" id="5fBGv8CIRljJ" # ### `components` # # Contains three representations of each component, in order of decreasing cardinality / granularity: # # - `component_id` # - `name` # - `component_type_id` # # What are the pros & cons of these different representations? # + colab_type="code" id="gkj_leYyRljJ" outputId="0fec6a1c-0ff8-4dd9-e7bc-434670d8b094" colab={"base_uri": "https://localhost:8080/", "height": 173} components = pd.read_csv(SOURCE + 'components.csv') components.describe() # + colab_type="code" id="cNowF5PvV1wH" outputId="7a12a7cb-929c-43b5-c50f-6d8210b08ab6" colab={"base_uri": "https://localhost:8080/", "height": 204} components.head() # + [markdown] colab_type="text" id="ZGzxHS-yId5U" # ### Tip/trick: Want to read all the files at once? 
# + colab_type="code" id="QuektlLMY-hu" colab={} from glob import glob data = {} for path in glob(SOURCE + '*.csv'): df = pd.read_csv(path) filename = path.split('/')[-1] name = filename.split('.')[0] data[name] = df # + id="sHae0YbuWjme" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 54} outputId="f40a9045-9352-4da9-d232-74f8c47cb88d" data.keys() # + id="yrOqjeP5Wjmg" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="f9d1fed4-de41-4c85-ee49-7d728630dc67" data['comp_sleeve'].head() # + [markdown] colab_type="text" id="NspIenHYRljL" # ## Example solution for last assignment 🚜 # + colab_type="code" id="nxhNfQLlRljL" outputId="9e4beff3-2292-4342-f216-d30fc4e2ef4e" colab={"base_uri": "https://localhost:8080/", "height": 292} # !pip install category_encoders # + id="k_8_r00WWjml" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="2861761f-98b3-4ce3-9f57-f0829422c450" data['bill_of_materials'] # + colab_type="code" id="sJB9x5GcRljN" colab={} import category_encoders as ce import pandas as pd import numpy as np from sklearn.ensemble import RandomForestRegressor from sklearn.metrics import mean_squared_error, mean_squared_log_error from sklearn.model_selection import train_test_split from sklearn.pipeline import make_pipeline def rmse(y_true, y_pred): return np.sqrt(mean_squared_error(y_true, y_pred)) def rmsle(y_true, y_pred): return np.sqrt(mean_squared_log_error(y_true, y_pred)) def wrangle(X): X = X.copy() # Engineer date features X['quote_date'] = pd.to_datetime(X['quote_date'], infer_datetime_format=True) X['quote_date_year'] = X['quote_date'].dt.year X['quote_date_month'] = X['quote_date'].dt.month X = X.drop(columns='quote_date') # Merge tube data tube = pd.read_csv(SOURCE + 'tube.csv') X = X.merge(tube, how='left') # Engineer features from bill_of_materials materials = pd.read_csv(SOURCE + 'bill_of_materials.csv') materials['components_total'] = 
(materials['quantity_1'].fillna(0) + materials['quantity_2'].fillna(0) + materials['quantity_3'].fillna(0) + materials['quantity_4'].fillna(0) + materials['quantity_5'].fillna(0) + materials['quantity_6'].fillna(0) + materials['quantity_7'].fillna(0) + materials['quantity_8'].fillna(0)) materials['components_distinct'] = (materials['component_id_1'].notnull().astype(int) + materials['component_id_2'].notnull().astype(int) + materials['component_id_3'].notnull().astype(int) + materials['component_id_4'].notnull().astype(int) + materials['component_id_5'].notnull().astype(int) + materials['component_id_6'].notnull().astype(int) + materials['component_id_7'].notnull().astype(int) + materials['component_id_8'].notnull().astype(int)) # Merge selected features from bill_of_materials # Just use the first component_id, ignore the others for now! features = ['tube_assembly_id', 'component_id_1', 'components_total', 'components_distinct'] X = X.merge(materials[features], how='left') # Get component_type_id (has lower cardinality than component_id) components = pd.read_csv(SOURCE + 'components.csv') components = components.rename(columns={'component_id': 'component_id_1'}) features = ['component_id_1', 'component_type_id'] X = X.merge(components[features], how='left') # Count the number of specs for the tube assembly specs = pd.read_csv(SOURCE + 'specs.csv') specs['specs_total'] = specs.drop(columns=['tube_assembly_id']).count(axis=1) features = ['tube_assembly_id', 'specs_total', 'spec1'] X = X.merge(specs[features], how='left') # Drop tube_assembly_id because our goal is to predict unknown assemblies X = X.drop(columns='tube_assembly_id') return X # Read data trainval = pd.read_csv(SOURCE + 'train_set.csv') test = pd.read_csv(SOURCE + 'test_set.csv') # Split into train & validation sets # All rows for a given tube_assembly_id should go in either train or validation trainval_tube_assemblies = trainval['tube_assembly_id'].unique() train_tube_assemblies, val_tube_assemblies = 
train_test_split( trainval_tube_assemblies, random_state=42) train = trainval[trainval.tube_assembly_id.isin(train_tube_assemblies)] val = trainval[trainval.tube_assembly_id.isin(val_tube_assemblies)] # Wrangle train, validation, and test sets train = wrangle(train) val = wrangle(val) test = wrangle(test) # Arrange X matrix and y vector (log-transformed) target = 'cost' X_train = train.drop(columns=target) X_val = val.drop(columns=target) X_test = test.drop(columns='id') y_train = train[target] y_val = val[target] y_train_log = np.log1p(y_train) y_val_log = np.log1p(y_val) # + colab_type="code" id="Vit53URnH6u4" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="7f300158-ff75-41a2-d60e-dc987ceffe90" # Make pipeline pipeline = make_pipeline( ce.OrdinalEncoder(), RandomForestRegressor(n_estimators=100, random_state=42, n_jobs=-1) ) # Fit pipeline.fit(X_train, y_train_log) # Validate y_pred_log = pipeline.predict(X_val) print('Validation Error', rmse(y_val_log, y_pred_log)) # Predict def generate_submission(estimator, X_test, filename): y_pred_log = estimator.predict(X_test) y_pred = np.expm1(y_pred_log) # Convert from log-dollars to dollars submission = pd.read_csv(SOURCE + '../sample_submission.csv') submission['cost'] = y_pred submission.to_csv(filename, index=False) generate_submission(pipeline, X_test, 'submission-02.csv') # + colab_type="code" id="OPnPE-xTRljP" outputId="9dbdedd5-0f8d-41c2-decd-7005e7e5f873" colab={"base_uri": "https://localhost:8080/", "height": 595} # %matplotlib inline import matplotlib.pyplot as plt plt.figure(figsize=(10,10)) rf = pipeline.named_steps['randomforestregressor'] importances = pd.Series(rf.feature_importances_, X_train.columns) importances.sort_values().plot.barh(color='grey'); # + [markdown] colab_type="text" id="kKZh950URljR" # ## Use xgboost for gradient boosting # # #### [XGBoost Python API Reference: Scikit-Learn API](https://xgboost.readthedocs.io/en/latest/python/python_api.html#module-xgboost.sklearn) 
# + colab_type="code" id="ruupQ-TWjK6D" outputId="eb6ec398-4b1b-44a4-8f7c-261992072f90" colab={"base_uri": "https://localhost:8080/", "height": 85} from xgboost import XGBRegressor # Make pipeline pipeline = make_pipeline( ce.OrdinalEncoder(), XGBRegressor(n_estimators=100, n_jobs=-1) ) # Fit pipeline.fit(X_train, y_train_log) # Validate y_pred_log = pipeline.predict(X_val) print('Validation Error', rmse(y_val_log, y_pred_log)) # + id="nnh5em8KWjmz" colab_type="code" colab={} generate_submission(pipeline, X_test, 'submission_xgb_1000.csv') # + [markdown] colab_type="text" id="QnpC0mHzRljS" # #### <NAME>, [Avoid Overfitting By Early Stopping With XGBoost In Python](https://machinelearningmastery.com/avoid-overfitting-by-early-stopping-with-xgboost-in-python/) # + colab_type="code" id="qb_R5_8eRljT" colab={} import category_encoders as ce encoder = ce.OrdinalEncoder() X_train_encoded = encoder.fit_transform(X_train) X_val_encoded = encoder.transform(X_val) # + id="8YjYXTGMWjm5" colab_type="code" colab={} eval_set = [ (X_train_encoded, y_train_log), (X_val_encoded, y_val_log) ] # + id="SZYKhcKYWjm8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="85f07a74-a7ee-4d72-e3ef-911f588a24f0" from xgboost import XGBRegressor model = XGBRegressor(n_estimators=1000, n_jobs=-1) model.fit(X_train_encoded, y_train_log, eval_set=eval_set, eval_metric='rmse', early_stopping_rounds=10 ) # + id="mQVMbgFFWjm9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 269} outputId="61af3fbe-9cb9-4c61-b5c0-2e0be125da33" results = model.evals_result() train_rmse = results['validation_0']['rmse'] val_rmse = results['validation_1']['rmse'] epochs = range(0, len(train_rmse)) plt.plot(epochs, train_rmse, label='train') plt.plot(epochs, val_rmse, label='val') plt.legend() plt.ylim(0.2, 0.4); # + id="cEuvlxX0fiJ4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 530} outputId="14d25747-9a0d-49bf-8eb1-76c5dadc0fb5" # !pip install catboost # + id="VpqZYcBWfPvr" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="3057076e-7086-4893-8aec-7fa3ae925cb7" from catboost import CatBoostRegressor # Make pipeline pipeline = make_pipeline( ce.OrdinalEncoder(), CatBoostRegressor(learning_rate=0.3) ) # Fit pipeline.fit(X_train, y_train_log) # Validate y_pred_log = pipeline.predict(X_val) print(f'Validation Error {rmse(y_val_log, y_pred_log)}') # + id="5uJBmb7Sg7j_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="3986994a-5194-4b1b-8e06-129e516792f4" import lightgbm as lgbm # Make pipeline pipeline = make_pipeline( ce.OrdinalEncoder(), lgbm.LGBMRegressor(learning_rate=0.2) ) # Fit pipeline.fit(X_train, y_train_log) # Validate y_pred_log = pipeline.predict(X_val) print('Validation Error', rmse(y_val_log, y_pred_log)) # + [markdown] colab_type="text" id="Uu50KGLSqDK0" # #### Kaggle RMSLE: 0.29454 # + [markdown] colab_type="text" id="OCKIuAU2RljU" # ### Understand the difference between boosting & bagging # # Boosting (used by Gradient Boosting) is different from Bagging (used by Random Forests). # # [_An Introduction to Statistical Learning_](http://www-bcf.usc.edu/~gareth/ISL/ISLR%20Seventh%20Printing.pdf) Chapter 8.2.3, Boosting: # # >Recall that bagging involves creating multiple copies of the original training data set using the bootstrap, fitting a separate decision tree to each copy, and then combining all of the trees in order to create a single predictive model.
# # >**Boosting works in a similar way, except that the trees are grown _sequentially_: each tree is grown using information from previously grown trees.** # # >Unlike fitting a single large decision tree to the data, which amounts to _fitting the data hard_ and potentially overfitting, the boosting approach instead _learns slowly._ Given the current model, we fit a decision tree to the residuals from the model. # # >We then add this new decision tree into the fitted function in order to update the residuals. Each of these trees can be rather small, with just a few terminal nodes. **By fitting small trees to the residuals, we slowly improve fˆ in areas where it does not perform well.** # # >Note that in boosting, unlike in bagging, the construction of each tree depends strongly on the trees that have already been grown. # + [markdown] colab_type="text" id="kXCr2NY5RljV" # # Assignment # - Continue to participate in the [Kaggle Caterpillar competition](https://www.kaggle.com/c/caterpillar-tube-pricing). # - Do more feature engineering. # - Use xgboost for gradient boosting. # - Submit new predictions. # - Commit your notebook to your fork of the GitHub repo. # # ## Stretch Goals # - Improve your scores on Kaggle. # - Make visualizations and share on Slack. # - Look at [Kaggle Kernels](https://www.kaggle.com/c/caterpillar-tube-pricing/kernels) for ideas about feature engineerng and visualization. # - Look at the bonus notebook in the repo, about Monotonic Constraints with Gradient Boosting. 
# - Read more about gradient boosting: # - [A Gentle Introduction to the Gradient Boosting Algorithm for Machine Learning](https://machinelearningmastery.com/gentle-introduction-gradient-boosting-algorithm-machine-learning/) # - [A Kaggle Master Explains Gradient Boosting](http://blog.kaggle.com/2017/01/23/a-kaggle-master-explains-gradient-boosting/) # - [_An Introduction to Statistical Learning_](http://www-bcf.usc.edu/~gareth/ISL/ISLR%20Seventh%20Printing.pdf) Chapter 8 # - [Gradient Boosting Explained](http://arogozhnikov.github.io/2016/06/24/gradient_boosting_explained.html) # - [Boosting](https://www.youtube.com/watch?v=GM3CDQfQ4sw) (3 minute video)
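The residual-fitting idea in the quoted passage is easy to see in code: each new shallow tree is trained on the residuals of the ensemble built so far, and its contribution is shrunk by a learning rate. A toy sketch for intuition only — not how xgboost is implemented internally:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Toy 1-D regression problem
rng = np.random.default_rng(0)
X = np.linspace(0, 6, 200).reshape(-1, 1)
y = np.sin(X.ravel()) + rng.normal(0, 0.1, 200)

learning_rate = 0.3
prediction = np.zeros_like(y)
trees = []
for _ in range(50):
    # Fit a small tree to the residuals of the current ensemble
    residuals = y - prediction
    tree = DecisionTreeRegressor(max_depth=2)
    tree.fit(X, residuals)
    # Shrink each tree's contribution so the ensemble "learns slowly"
    prediction += learning_rate * tree.predict(X)
    trees.append(tree)

# Training error after one shrunk tree vs. after the whole ensemble
mse_first = np.mean((y - learning_rate * trees[0].predict(X)) ** 2)
mse_final = np.mean((y - prediction) ** 2)
```

Unlike bagging, removing the first tree here changes what every later tree was fit to — the trees are sequentially dependent, which is the distinction the ISL excerpt makes.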
module2-gradient-boosting/gradient_boosting.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # List

# +
x = 3
# -

y = [4, 7, 55, 89]  # list
y

name = ['shahzad', 'mohsin', 'hamza']
name

z = ['karachi', 1945, 23.7]
z

fruits = ['apple', 'mango', 'peach', 'banana']
# index =    0        1        2        3

fruits[2]

len(fruits)

del fruits[1]  # in-place
fruits

fruits.remove('apple')  # by the remove() function
fruits

fruits.remove('Apple')  # raises ValueError: remove() is case-sensitive, 'Apple' is not in the list
fruits

del fruits[4]  # raises IndexError: only 2 elements remain at this point
fruits

fruits

fruits[1] = 'mango'
fruits

fruits.append('apple')
fruits

fruits.insert(0, 'grape')
fruits

fruits.insert(10, 'orange')  # an index past the end simply appends
fruits

fruits.index('orange')

fruits.insert(10, 120)
fruits

fruits.count('grape')

a = [2, 5, 34, 67, 34, 12]
a.sort()
a

a = [2, 5, 34, 67, 34, 12]
a.sort(reverse=True)  # descending order
a

random = [27, 45, 12, 89, 33, 99]
random.reverse()  # reverse the list without sorting
random

b = random  # pass by reference: b and random are the same object
b

random.append(120)
b

b.append('new')
random

c = random.copy()  # an independent copy
c

c.append(131)
c

random

poppedElement = random.pop()
random

poppedElement

random

# +
newList = []
# -

random = [27, 45, 12, 89, 33, 99]
newList.append(random.pop())
newList

newList

newList.pop(2)  # raises IndexError: newList has only one element

# # Slicing

# +
alist = [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
# index = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
# -

alist[2:5]

alist[2:]

alist[:7]

alist[5:2]  # empty list: start is past stop

students = ['asad', 'fahad', 'kami', 'hasan', 'saad', 'faisal']
# index =     -6       -5      -4       -3      -2       -1

students[-1]

students[-5]

students[-3:]

students

students[-5:4]

# # Tuples

a = (1, 2, '3', 4, 5, 6, "asad")
a

a[3]

a[3] = 232  # raises TypeError: tuples are immutable

a = 2, 3, 4  # packing
a

x, y, z = a  # unpacking
x
y
z

# # List in list, tuple in list, etc.

listInList = [1, 2, 3, ['apple', 'orange', 'banana', [22, 33, 44]]]
listInList

listInList[3][3][0]

listInList[3][0]

listInList[3][0] = 'eggs'
listInList

listInList.insert(0, "apple")
listInList

# +
alist = [1, 2, 3, 4, ['a', 'b', 'c']]
alist[4].insert(0, 'd')
# -

alist

alist[4].remove('d')
alist

# +
age = 20
name = 'asad'
print(f"my name is {name} and my age is {age}")
# -

print("my name is {} and my age is {}".format(name, age))
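The cells above with `b = random` versus `random.copy()` illustrate the most common list pitfall: assignment binds a second name to the same object, while `.copy()` (or slicing with `[:]`) creates an independent list. A minimal sketch:

```python
# Assigning a list to a new name creates an alias, not a copy:
original = [27, 45, 12]
alias = original          # both names refer to the same list object
alias.append(99)
print(original)           # [27, 45, 12, 99] -- "original" changed too

# .copy() makes a new, independent list:
independent = original.copy()
independent.append(120)
print(original)           # unchanged: [27, 45, 12, 99]
print(independent)        # [27, 45, 12, 99, 120]
```

Note this is a shallow copy: nested lists (as in `listInList`) are still shared between the copy and the original.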
Anaconda Python/SAIMS(25052019).ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import glob import os import csv import numpy as np import pandas as pd # + output_path = './CIFAR10/' exp_prefixes = ['RANDOM', 'LC', 'MARGIN', 'ENT', 'DBAL', 'BALD', 'CORESET', 'VAAL', 'ENS'] dataset_name = 'CIFAR10' model = 'resnet18' output_file_name = f'{dataset_name}_{model}_ALL.csv' exp_list = [] for exp in exp_prefixes: exp_list.extend(glob.glob(os.path.join(output_path, model, '{}_*').format(exp))) # - exp_list def exp_to_al_method(exp): if 'LC' in exp: return 'LeastConfidence' elif 'ENT' in exp: return 'MaxEntropy' elif 'MARGIN' in exp: return 'MinMargin' elif 'VAAL' in exp: return 'VAAL' elif 'CORESET' in exp: return 'Coreset' elif 'DBAL' in exp: return 'DBAL' elif 'BALD' in exp: return 'BALD' elif 'ENS' in exp: return 'ENSvarR' elif 'COG' in exp: return 'Center-of-Gravity' else: return 'Random' columns = ['AL_Method', 'Episode', 'TestAccuracy'] data = [] for exp in exp_list: file_path = os.path.join(exp, 'plot_episode_yvalues.txt') file = open(file_path, "r") values = file.readlines() values = [float(v[:-1]) for v in values] data_tuple = [] for idx in range(len(values)): data_tuple.append([exp_to_al_method(exp), idx, values[idx]]) data.extend(data_tuple) df = pd.DataFrame(data, columns = columns) df.Episode = df.Episode.apply(lambda x: int((x+1)*10)) # + import matplotlib.pyplot as plt import seaborn as sns sns.set() import warnings warnings.filterwarnings('ignore') from matplotlib import rc rc('font',**{'family':'sans-serif','sans-serif':['Arial']}) rc('text', usetex=True) plt.rcParams["axes.labelweight"] = "normal" plt.rcParams["font.weight"] = "normal" sns.set_style("darkgrid") # + al_methods = ['LeastConfidence','MaxEntropy','MinMargin', 'ENSvarR','Coreset','VAAL', 'DBAL', 'Random',] colors = ['black', 'red', 'springgreen', 'blue', 
'peru', 'magenta', 'gold', 'cyan', 'darkviolet', 'orange'] colors = colors[:len(set(df.AL_Method))] # print(index) # if index != 7: # continue fig, ax = plt.subplots(figsize=(7,6)) sns.set(rc={"lines.linewidth": 2}) # sns.set_style("ticks") sns.lineplot(x="Episode", y="TestAccuracy", hue="AL_Method", data=df, \ ax=ax, palette=colors, linewidth = 3, legend=True) ax.set_xlabel('\% of Data Labeled', size = 22, ) ax.set_ylabel('Test Accuracy', size = 22,) ax.patch.set_edgecolor('black') ax.patch.set_linewidth('2') ax.set_title(dataset_name, size = 30) ax.legend(loc='lower left', bbox_to_anchor=(1.1, 0.1), shadow=True, markerscale=1, ncol=1, prop={'size': 20}) x_vals = np.arange(10,70,10) xticks = ax.set_xticks(x_vals) ax.set_xticklabels(ax.get_xticks(), size = 25) ax.set_yticklabels(ax.get_yticks(), size = 25) axins2 = ax.inset_axes([0.55, 0.1, 0.4, 0.4]) sns.set_style("darkgrid") # sub region of the original image if dataset_name == 'CIFAR10': x1, x2, y1, y2 = 55, 61, 89, 92 else: x1, x2, y1, y2 = 55, 61, 52, 56.7 axins2.set_xlim(x1, x2) axins2.set_ylim(y1, y2) ax.indicate_inset_zoom(axins2, edgecolor="black") sns.lineplot(x="Episode", y="TestAccuracy", hue="AL_Method", data=df,\ ax=axins2, legend=False, palette=colors, linewidth=2) axins2.set_xlabel("") axins2.set_ylabel("") axins2.set_xticklabels('') axins2.set_yticklabels([str(round(float(label), 1)) for label in axins2.get_yticks()]) # axins2.set_yticklabels('') axins2.patch.set_edgecolor('black') axins2.patch.set_linewidth('2') axins2.tick_params(axis = "x", which = "both", bottom = False, top = False) ax.indicate_inset_zoom(axins2, edgecolor="black") plt.savefig(f'./{dataset_name}_AL.png', dpi=300, format='png', bbox_inches = "tight") plt.show() # - df2 = df.groupby(['AL_Method', 'Episode']).mean()['TestAccuracy'].reset_index() df2.columns = ['AL_Method', 'Episode', 'Mean'] df2['SD'] = df.groupby(['AL_Method', 'Episode']).std()['TestAccuracy'].reset_index().TestAccuracy df2.sort_values(['Mean', 'SD'], 
ascending=False).loc[df2.Episode == 60, :].iloc[:, [0, 2, 3]]  # filter on df2, not df: a boolean mask built from df has a different length and will not align
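The final cell's `groupby(['AL_Method', 'Episode']).mean()` / `.std()` pair averages TestAccuracy over repeated runs of each method at each episode. The same aggregation can be sketched without pandas (illustrative only, with made-up accuracy values; note the `n - 1` denominator matches pandas' default sample standard deviation, `ddof=1`):

```python
import math
from collections import defaultdict

# (AL_Method, Episode, TestAccuracy) rows, one per repeated run (toy values)
runs = [("Random", 60, 89.1), ("Random", 60, 89.5), ("Random", 60, 88.9),
        ("Coreset", 60, 90.2), ("Coreset", 60, 90.6)]

# group accuracies by (method, episode)
grouped = defaultdict(list)
for method, episode, acc in runs:
    grouped[(method, episode)].append(acc)

# per-group mean and sample standard deviation (ddof=1, as in pandas)
stats = {}
for key, vals in grouped.items():
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / (len(vals) - 1)
    stats[key] = (mean, math.sqrt(var))

print(stats[("Random", 60)])
```

This is what `df2['Mean']` and `df2['SD']` hold per `(AL_Method, Episode)` pair.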
output/results_aggregator.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline import matplotlib import seaborn as sns matplotlib.rcParams['savefig.dpi'] = 144 from static_grader import grader # # PW Miniproject # ## Introduction # # The objective of this miniproject is to exercise your ability to use basic Python data structures, define functions, and control program flow. We will be using these concepts to perform some fundamental data wrangling tasks such as joining data sets together, splitting data into groups, and aggregating data into summary statistics. # **Please do not use `pandas` or `numpy` to answer these questions.** # # We will be working with medical data from the British NHS on prescription drugs. Since this is real data, it contains many ambiguities that we will need to confront in our analysis. This is commonplace in data science, and is one of the lessons you will learn in this miniproject. # ## Downloading the data # # We first need to download the data we'll be using from Amazon S3: # !mkdir pw-data # !aws s3 sync s3://dataincubator-wqu/pwdata-ease/ ./pw-data # + language="bash" # mkdir pw-data # wget http://dataincubator-wqu.s3.amazonaws.com/pwdata/201701scripts_sample.json.gz -nc -P ./pw-data # wget http://dataincubator-wqu.s3.amazonaws.com/pwdata/practices.json.gz -nc -P ./pw-data # - # ## Loading the data # # The first step of the project is to read in the data. We will discuss reading and writing various kinds of files later in the course, but the code below should get you started. import gzip import simplejson as json # + with gzip.open('./pw-data/201701scripts_sample.json.gz', 'rb') as f: scripts = json.load(f) with gzip.open('./pw-data/practices.json.gz', 'rb') as f: practices = json.load(f) # - # This data set comes from Britain's National Health Service. 
The `scripts` variable is a list of prescriptions issued by NHS doctors. Each prescription is represented by a dictionary with various data fields: `'practice'`, `'bnf_code'`, `'bnf_name'`, `'quantity'`, `'items'`, `'nic'`, and `'act_cost'`. scripts[:2] # A [glossary of terms](http://webarchive.nationalarchives.gov.uk/20180328130852tf_/http://content.digital.nhs.uk/media/10686/Download-glossary-of-terms-for-GP-prescribing---presentation-level/pdf/PLP_Presentation_Level_Glossary_April_2015.pdf/) and [FAQ](http://webarchive.nationalarchives.gov.uk/20180328130852tf_/http://content.digital.nhs.uk/media/10048/FAQs-Practice-Level-Prescribingpdf/pdf/PLP_FAQs_April_2015.pdf/) is available from the NHS regarding the data. Below we supply a data dictionary briefly describing what these fields mean. # # | Data field |Description| # |:----------:|-----------| # |`'practice'`|Code designating the medical practice issuing the prescription| # |`'bnf_code'`|British National Formulary drug code| # |`'bnf_name'`|British National Formulary drug name| # |`'quantity'`|Number of capsules/quantity of liquid/grams of powder prescribed| # | `'items'` |Number of refills (e.g. if `'quantity'` is 30 capsules, 3 `'items'` means 3 bottles of 30 capsules)| # | `'nic'` |Net ingredient cost| # |`'act_cost'`|Total cost including containers, fees, and discounts| # The `practices` variable is a list of member medical practices of the NHS. Each practice is represented by a dictionary containing identifying information for the medical practice. Most of the data fields are self-explanatory. Notice the values in the `'code'` field of `practices` match the values in the `'practice'` field of `scripts`. practices[:2] # In the following questions we will ask you to explore this data set. You may need to combine pieces of the data set together in order to answer some questions. Not every element of the data set will be used in answering the questions. 
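Since the questions below ban `pandas` and `numpy`, the summary statistics must be computed with plain Python. As a reference point, here is a minimal sketch using the population standard deviation (dividing by `n`) and median-of-halves quartiles; other quartile conventions give slightly different values:

```python
import math

def describe(values):
    """Return (sum, mean, population std, Q1, median, Q3) for a list of numbers."""
    n = len(values)
    total = sum(values)
    mean = total / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)

    def median(xs):
        m = len(xs) // 2
        return xs[m] if len(xs) % 2 else (xs[m - 1] + xs[m]) / 2

    s = sorted(values)
    q25 = median(s[: n // 2])        # lower half (excludes the middle element when n is odd)
    q75 = median(s[(n + 1) // 2 :])  # upper half
    return (total, mean, std, q25, median(s), q75)

print(describe([1, 2, 4, 6, 7]))
```

On the real data you would call this once per field, e.g. `describe([s['items'] for s in scripts])`.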
# ## Question 1: summary_statistics # # Our beneficiary data (`scripts`) contains quantitative data on the number of items dispensed (`'items'`), the total quantity of item dispensed (`'quantity'`), the net cost of the ingredients (`'nic'`), and the actual cost to the patient (`'act_cost'`). Whenever working with a new data set, it can be useful to calculate summary statistics to develop a feeling for the volume and character of the data. This makes it easier to spot trends and significant features during further stages of analysis. # # Calculate the sum, mean, standard deviation, and quartile statistics for each of these quantities. Format your results for each quantity as a list: `[sum, mean, standard deviation, 1st quartile, median, 3rd quartile]`. We'll create a `tuple` with these lists for each quantity as a final result. values = [1,2,4,6,7] sum(values) / len(values) # + #mean(values) # - scripts[:2] sum([scripts['items'] for scripts in scripts]) # + def mean(values): return sum(values) / len(values) def describe(key): values =[scripts[key] for scripts in scripts] total = sum(values) avg = mean(values) s = 0 # q25 = 0 # med = 0 # q75 = 0 return (total, avg) #(total, avg, s, q25, med, q75) # - describe('items') # + import math import statistics def describe(key): lst = [] for i in range(len(scripts)): lst.append(scripts[i][key]) n = len(lst) total = sum(lst) avg = total/n s = math.sqrt(sum([(i-avg)**2 for i in lst])/n) ls = sorted(lst) med = statistics.median(lst) lq = ls[:len(lst)//2] uq = ls[len(lst)//2:] q25 = statistics.median(lq) q75 = statistics.median(uq) return (total, avg, s, q25, med, q75) # + #def describe(key): # total = 0 # for i in keys: # total = total + float(i[key]) # avg = 0 # for i in keys: # avg = total/(len(keys)) # s = 0 # n = 0 # for i in keys: # n += (i[key] - avg)**2 # s = (n/(len(keys)))**0.5 # # l = [] # for i in keys: # l.append(i[key]) # l = sorted(l) # ln =len(l) # med = 0 # if not ln % 2: # med = (l[ln / 2] + l[ln / 2 - 1]) / 2 # 
else: # med = l[ln / 2] # if ln % 2 == 0: # q25 = float(l[ln/4]) # q75 = float(l[3*ln/4]) # else: # q25 = float(l[ln/4]) # q75 = float(l[3*(ln+1)/4]) # return (total, avg, s, q25, med, q75) # - def summary(): results = [('items', describe('items')), ('quantity', describe('quantity')), ('nic', describe('nic')), ('act_cost', describe('act_cost'))] return results # + #summary = [('items', describe('items')), # ('quantity', describe('quantity')), # ('nic', describe('nic')), # ('act_cost', describe('act_cost'))] # + #keys = scripts #summary() # - grader.score.pw__summary_statistics(summary) # ## Question 2: most_common_item # # Often we are not interested only in how the data is distributed in our entire data set, but within particular groups -- for example, how many items of each drug (i.e. `'bnf_name'`) were prescribed? Calculate the total items prescribed for each `'bnf_name'`. What is the most commonly prescribed `'bnf_name'` in our data? # # To calculate this, we first need to split our data set into groups corresponding with the different values of `'bnf_name'`. Then we can sum the number of items dispensed within in each group. Finally we can find the largest sum. # # We'll use `'bnf_name'` to construct our groups. You should have *5619* unique values for `'bnf_name'`. # + #bnf_names = ... 
#assert(len(bnf_names) == 5619) # + bnf_names = set([x['bnf_name'] for x in scripts]) groups = {name: [] for name in bnf_names} for script in scripts: groups[script['bnf_name']].append(script['items']) max_dict ={} for k,v in groups.items(): max_dict[k] = sum(v) max_item = (max(max_dict.keys(), key=(lambda k: max_dict[k])) , max_dict[max(max_dict.keys(), key=(lambda k: max_dict[k]))]) def most_common_item(): return [max_item] # + bnf_names = set([i['bnf_name'] for i in scripts]) # 'set' identifies unique item in a list, elemanating duplications #assert(len(bnf_names) == 11990) # - type(bnf_names) # We want to construct "groups" identified by `'bnf_name'`, where each group is a collection of prescriptions (i.e. dictionaries from `scripts`). We'll construct a dictionary called `groups`, using `bnf_names` as the keys. We'll represent a group with a `list`, since we can easily append new members to the group. To split our `scripts` into groups by `'bnf_name'`, we should iterate over `scripts`, appending prescription dictionaries to each group as we encounter them. #groups = {name: [] for name in bnf_names} #for script in scripts: # INSERT ... groups = {name: [] for name in bnf_names} for script in scripts: groups[script['bnf_name']].append(script['items']) type(groups) # + #dict(groups.items()[:1]) # - # Now that we've constructed our groups we should sum up `'items'` in each group and find the `'bnf_name'` with the largest sum. The result, `max_item`, should have the form `[(bnf_name, item total)]`, e.g. `[('Foobar', 2000)]`. max_item = [("", 0)] # + max_dict ={} for k,v in groups.items(): max_dict[k] = sum(v) max_item = (max(max_dict.keys(), key=(lambda k: max_dict[k])) , max_dict[max(max_dict.keys(), key=(lambda k: max_dict[k]))]) # lambda function is a way to create small anonymous functions, i.e. 
functions without a name # - type(max_dict) # + #dict(max_dict.items()[:2]) # - def most_common_item(): return [max_item] grader.score('pw__most_common_item', most_common_item) # **TIP:** If you are getting an error from the grader below, please make sure your answer conforms to the correct format of `[(bnf_name, item total)]`. # + # grader.score.pw__most_common_item(max_item) # - # **Challenge:** Write a function that constructs groups as we did above. The function should accept a list of dictionaries (e.g. `scripts` or `practices`) and a tuple of fields to `groupby` (e.g. `('bnf_name')` or `('bnf_name', 'post_code')`) and returns a dictionary of groups. The following questions will require you to aggregate data in groups, so this could be a useful function for the rest of the miniproject. def group_by_field(data, fields): groups = {} return groups # + # groups = group_by_field(scripts, ('bnf_name',)) # test_max_item = ... # assert test_max_item == max_item # - # ## Question 3: postal_totals # # Our data set is broken up among different files. This is typical for tabular data to reduce redundancy. Each table typically contains data about a particular type of event, processes, or physical object. Data on prescriptions and medical practices are in separate files in our case. If we want to find the total items prescribed in each postal code, we will have to _join_ our prescription data (`scripts`) to our clinic data (`practices`). # # Find the total items prescribed in each postal code, representing the results as a list of tuples `(post code, total items prescribed)`. Sort your results ascending alphabetically by post code and take only results from the first 100 post codes. Only include post codes if there is at least one prescription from a practice in that post code. # # **NOTE:** Some practices have multiple postal codes associated with them. Use the alphabetically first postal code. 
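One possible implementation of the `group_by_field` helper the challenge above asks for (a sketch, not the course's reference solution), shown on toy records standing in for `scripts`:

```python
def group_by_field(data, fields):
    """Group a list of dicts by a tuple of field names.

    Keys of the returned dict are tuples of field values; each value is the
    list of records sharing that key.
    """
    groups = {}
    for record in data:
        key = tuple(record[field] for field in fields)
        groups.setdefault(key, []).append(record)
    return groups

# toy data standing in for `scripts` (real records have more fields)
toy = [
    {"bnf_name": "Paracet_Tab 500mg", "items": 3},
    {"bnf_name": "Paracet_Tab 500mg", "items": 2},
    {"bnf_name": "Aspirin_Tab 75mg", "items": 5},
]
groups = group_by_field(toy, ("bnf_name",))
totals = {key[0]: sum(d["items"] for d in group) for key, group in groups.items()}
print(max(totals.items(), key=lambda kv: kv[1]))  # ('Paracet_Tab 500mg', 5)
```

Because the keys are tuples, the same helper works for multi-field grouping such as `('post_code', 'bnf_name')` in the later questions.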
# We can join `scripts` and `practices` based on the fact that `'practice'` in `scripts` matches `'code'` in `practices`. However, we must first deal with the repeated values of `'code'` in `practices`. We want the alphabetically first postal codes.

# +
# practice_postal = {}
# for practice in practices:
#     if practice['code'] in practice_postal:
#         practice_postal[practice['code']] = ...
#     else:
#         practice_postal[practice['code']] = ...

# +
practice_postal = {}
for practice in practices:
    if practice['code'] in practice_postal:
        if practice['post_code'] < practice_postal[practice['code']]:
            practice_postal[practice['code']] = practice['post_code']
    else:
        practice_postal[practice['code']] = practice['post_code']

joined = scripts[:]
for script in joined:
    script['post_code'] = practice_postal[script['practice']]

post_code_list = []
for script in joined:
    post_code_list.append(script['post_code'])

groups = {post_code: [] for post_code in post_code_list}
for script in joined:
    groups[script['post_code']].append(script['items'])

for group in groups.items():
    groups[group[0]] = sum(group[1])

s = sorted(groups.items(), key=lambda tup: tup[0])

def postal_totals():
    return s[:100]
# -

practice_postal = {}
for practice in practices:
    if practice['code'] in practice_postal:
        if practice['post_code'] < practice_postal[practice['code']]:
            practice_postal[practice['code']] = practice['post_code']
    else:
        practice_postal[practice['code']] = practice['post_code']

type(practice_postal)

# +
# dict(practice_postal.items()[0:2])
# -

joined = scripts[:]
for script in joined:
    script['post_code'] = practice_postal[script['practice']]

type(joined)

print(joined[:1])

# +
items_by_post = []
for script in joined:
    items_by_post.append(script['post_code'])

groups = {post_code: [] for post_code in items_by_post}
for script in joined:
    groups[script['post_code']].append(script['items'])

for group in groups.items():
    groups[group[0]] = sum(group[1])

s = sorted(groups.items(), key=lambda tup: tup[0])
# -
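The dedupe-then-join-then-aggregate pattern above, on toy data (hypothetical practice codes and postcodes, not real NHS values):

```python
# toy stand-ins for `practices` and `scripts`
practices = [
    {"code": "A1", "post_code": "B11 4BW"},
    {"code": "A1", "post_code": "A99 9ZZ"},  # duplicate code: keep the alphabetically first postcode
    {"code": "B2", "post_code": "C20 1AA"},
]
scripts = [
    {"practice": "A1", "items": 3},
    {"practice": "A1", "items": 2},
    {"practice": "B2", "items": 7},
]

# alphabetically first postcode per practice code
practice_postal = {}
for p in practices:
    code, pc = p["code"], p["post_code"]
    if code not in practice_postal or pc < practice_postal[code]:
        practice_postal[code] = pc

# join: attach the postcode to each prescription, then sum items per postcode
totals = {}
for s in scripts:
    pc = practice_postal[s["practice"]]
    totals[pc] = totals.get(pc, 0) + s["items"]

print(sorted(totals.items()))  # [('A99 9ZZ', 5), ('C20 1AA', 7)]
```

String comparison with `<` gives the alphabetical ordering the note above requires, so `'A99 9ZZ'` wins over `'B11 4BW'` for practice `A1`.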
type(items_by_post)

print(items_by_post[:2])

print(s[:2])

def postal_totals():
    return s[:100]

print(postal_totals())

grader.score('pw__postal_totals', postal_totals)

# **Challenge:** This is an aggregation of the practice data grouped by practice codes. Write an alternative implementation of the above cell using the `group_by_field` function you defined previously.

assert practice_postal['K82019'] == 'HP21 8TR'

# Now we can join `practice_postal` to `scripts`.

# +
# joined = scripts[:]
# for script in joined:
#     script['post_code'] = ...
# -

# Finally we'll group the prescription dictionaries in `joined` by `'post_code'` and sum up the items prescribed in each group, as we did in the previous question.

# +
# items_by_post = ...

# +
# postal_totals = [('B11 4BW', 20673)] * 100
# grader.score.pw__postal_totals(postal_totals)
# -

# ## Question 4: items_by_region
#
# Now we'll combine the techniques we've developed to answer a more complex question. Find the most commonly dispensed item in each postal code, representing the results as a list of tuples (`post_code`, `bnf_name`, amount dispensed as proportion of total). Sort your results ascending alphabetically by post code and take only results from the first 100 post codes.
#
# **NOTE:** We'll continue to use the `joined` variable we created before, where we've chosen the alphabetically first postal code for each practice. Additionally, some postal codes will have multiple `'bnf_name'` with the same number of items prescribed for the maximum. In this case, we'll take the alphabetically first `'bnf_name'`.

# Now we need to calculate the total items of each `'bnf_name'` prescribed in each `'post_code'`.
Use the techniques we developed in the previous questions to calculate these totals. You should have 141196 `('post_code', 'bnf_name')` groups. # + # for letter, number in list({'a':1, 'b':2, 'c':3}.items()): # print(letter, number) # + # total_items_by_post = [] # for key, group in list(group_by_field(joined, ('post_code', 'bnf_name')).items()): # items_total = sum(d['items'] for d in group) # total_items_by_post[key] = items_total # + # total_items_by_post # - # print(joined [:1]) print (joined[:1]) post_code_list=[] for script in joined: post_code_list.append(script['post_code']) print(post_code_list[:5]) post_code_list = set(post_code_list) # + dict_new = {post_code:[] for post_code in post_code_list} for script in joined: if len(dict_new[script['post_code']])== 2 and dict_new[script['post_code']][0] == script['bnf_name']: dict_new[script['post_code']][1] += script['items'] else: dict_new[script['post_code']].append((script['bnf_name'],script['items'])) # - list_new = [] for key in dict_new.keys(): for i in range(len(dict_new[key])): list_new.append({'post_code': key, 'bnf_name': dict_new[key][i][0], 'total':dict_new[key][i][1]}) total_by_item_post = {(dict_info['post_code'], dict_info['bnf_name']): dict_info['total'] for dict_info in list_new} # total_item_by_post = {(dict_info['post_code'], dict_info['bnf_name']): dict_info['total'] for dict_info in list_new} # total_items_by_bnf_post = {(dict_info['post_code'], dict_info['bnf_name']): dict_info['total'] for dict_info in list_new} # + # assert len(total_by_item_post) == 498644 # total_items_by_bnf_post # - assert len(total_by_item_post) == 141196 # assert len(total_items_by_bnf_post) == 141196 # + total_by_item_post = {} for dict_info in list_new: if dict_info['post_code'] in total_by_item_post.keys(): total_by_item_post[dict_info['post_code']] += dict_info['total'] else: total_by_item_post[dict_info['post_code']] = dict_info['total'] # + # assert len(total_by_item_post) == 7448 # len(total_by_item_post) == 
7448 # - max_item_by_post = [] for key in dict_new.keys(): max_item_by_post.append((key, (max(dict_new[key], key=lambda x:x[1]))[0],float((max(dict_new[key], key=lambda x:x[1]))[1])/total_by_item_post[key])) max_item_by_post = sorted(max_item_by_post, key=lambda post_code: post_code[0]) def items_by_region(): output = max_item_by_post[:100] return output # + # total_by_item_post = set([i['post_code'] for i in practices]) # print(total_by_item_post) items_by_region = set([i['post_code'] for i in practices]) items_by_region # - grader.score('pw__items_by_region', items_by_region) # + # total_items_by_bnf_post = ... # assert len(total_items_by_bnf_post) == 141196 # - # Let's use `total_by_item_post` to find the maximum item total for each postal code. To do this, we will want to regroup `total_by_item_post` by `'post_code'` only, not by `('post_code', 'bnf_name')`. First let's turn `total_by_item_post` into a list of dictionaries (similar to `scripts` or `practices`) and then group it by `'post_code'`. You should have 118 groups in `total_by_item_post` after grouping it by `'post_code'`. total_by_item_post = ... assert len(total_by_item_post) == 7448 total_by_item_post = {(dict_info['post_code'], dict_info['bnf_name']): dict_info['total'] for dict_info in list_new} assert len(total_by_item_post) == 141196 # + # total_by_item_post = {} # for dict_info in list_new: # if dict_info['post_code'] in total_by_item_post.keys(): # total_by_item_post[dict_info['post_code']] += dict_info['total'] # else: # total_by_item_post[dict_info['post_code']] = dict_info['total'] # + # assert len(total_by_item_post) == 7448 # - # + # total_items = [] # for (post_code, bnf_name), total in list(total_items_by_bnf_post.items()): # new_dict = {'post_code': post_code, # 'bnf_name' : bnf_name, # 'total' : total} # total_items.append(new_dict) # + # total_items[:2] # + # total_items_by_post = group_by_field(total_items, ('post_code',)) # + # list(total_items_by_post) # - # + # total_items = ... 
# assert len(total_items_by_post) == 118 # + # total_by_item_post = {(dict_info['post_code'], dict_info['bnf_name']): dict_info['total'] for dict_info in list_new} #total_items_by_bnf_post = {(dict_info['post_code'], dict_info['bnf_name']): dict_info['total'] for dict_info in list_new} # + # len(total_by_item_post) # + # assert len(total_by_item_post) == 141196 #assert len(total_items_by_bnf_post) == 141196 # + # total_by_item_post = {} # for dict_info in list_new: # if dict_info['post_code'] in total_by_item_post.keys(): # total_by_item_post[dict_info['post_code']] += dict_info['total'] # else: # total_by_item_post[dict_info['post_code']] = dict_info['total'] # + # len(total_by_item_post) # + #assert len(total_by_item_post) == 7448 # assert len(total_by_item_post) == 118 # - # Now we will aggregate the groups in `total_by_item_post` to create `max_item_by_post`. Some `'bnf_name'` have the same item total within a given postal code. Therefore, if more than one `'bnf_name'` has the maximum item total in a given postal code, we'll take the alphabetically first `'bnf_name'`. We can do this by [sorting](https://docs.python.org/2.7/howto/sorting.html) each group according to the item total and `'bnf_name'`. max_item_by_post = ... 
#youtube max_item_by_post = max(total_by_item_post) max_item_by_post type(total_by_item_post) # + from operator import itemgetter result = [] for key in total_items_by_post: list_bnf = [] totals = {} for item in total_items_by_post[key]: if item['bnf_name'] not in list_bnf: list_bnf.append(item['bnf_name']) totals[item['bnf_name']] = item['items'] else: totals[item['bnf_name']] += item['items'] totals = totals.items() # - from operator import itemgetter get_total = itemgetter('total') max_item_by_post = [] groups = list(total_items_by_post.values()) for group in groups: max_total = sorted(group, key=itemgetter('total'), reverse=True)[0] max_item_by_post.append(max_total) max_item_by_post = [sorted(group, key=itemgetter('total'), reverse=True)[0] for group in list(total_items_by_post.values())] max_item_by_post[:3] # + # total_by_item_post[('YO16 4LZ')] # + # test_list # - # + # max_item_by_post[0] # + #youtube items_by_region = [] for item in max_item_by_post: numerator = item['total'] denominator = dict(items_by_post)[items['post_code']] proportion = numerator / denominator result = (item['post_code'], item['bnf_name'], proportion) items_by_region.append(result) # - #youtube items_by_region = sorted(items_by_region)[:100] #youtube items_by_region # + # total = 0 # name = "" # for i in item_per_post: # if(item_per_post[i] > total): # total = item_per_post[i] # name = i # + # max_item_by_post = [] # for key in dict_new.keys(): # max_item_by_post.append((key, (max(dict_new[key], key=lambda x:x[1]))[0],float((max(dict_new[key], key=lambda x:x[1]))[1])/total_by_item_post[key])) # - max_item_by_post = sorted(max_item_by_post, key=lambda post_code: post_code[:100]) def items_by_region(): output = max_item_by_post[:3] return output # + #pw Q AAA = items_by_region() print(AAA) # items_by_region = a # + # total_by_item_post = set([i['post_code'] for i in practices]) # print(len(total_by_item_post)) # - # + # AAA = [('AL1 3HD', 'Amoxicillin_Cap 500mg', 0.1026344676180022), 
('AL1 3JB', 'Bendroflumethiazide_Tab 2.5mg', 0.1265466816647919), ('AL1 4JE', 'Aspirin_Tab 75mg', 0.19230769230769232), ('AL10 0BS', 'Amoxicillin_Cap 500mg', 0.12405237767057202), ('AL10 0LF', 'ActiLymph Class 1 Combined Armsleeve + T', 0.3333333333333333), ('AL10 0NL', 'Amitriptyline HCl_Tab 10mg', 0.0639686684073107), ('AL10 0UR', 'Diazepam_Tab 10mg', 0.5434782608695652), ('AL10 8HP', 'Sertraline HCl_Tab 50mg', 0.10324129651860744), ('AL2 1ES', 'Levothyrox Sod_Tab 100mcg', 0.13074204946996468), ('AL2 3JX', 'Simvastatin_Tab 40mg', 0.0847231487658439), ('AL3 5ER', 'Bisoprolol Fumar_Tab 2.5mg', 0.11428571428571428), ('AL3 5HB', 'Omeprazole_Cap E/C 20mg', 0.16846758349705304), ('AL3 5JB', 'Alimemazine Tart_Tab 10mg', 1.0), ('AL3 5NF', 'Ramipril_Cap 10mg', 0.09449465899753492), ('AL3 5NP', 'Clopidogrel_Tab 75mg', 0.09023255813953489), ('AL3 7BL', 'Bendroflumethiazide_Tab 2.5mg', 0.08917197452229299), ('AL3 8LJ', 'Aspirin Disper_Tab 75mg', 0.17897727272727273), ('AL5 2BT', 'Bisoprolol Fumar_Tab 2.5mg', 0.137660485021398), ('AL5 4HX', 'Metformin HCl_Tab 500mg M/R', 0.07671601615074024), ('AL5 4QA', 'Lansoprazole_Cap 30mg (E/C Gran)', 0.14298480786416443), ('AL6 9EF', 'Atorvastatin_Tab 20mg', 0.17326732673267325), ('AL6 9SB', 'Mometasone Fur_Oint 0.1%', 0.2826086956521739), ('AL7 1BW', 'Irripod Sod Chlor Top Irrig 20ml', 0.1583710407239819), ('AL7 3UJ', 'Levothyrox Sod_Tab 50mcg', 0.13861386138613863), ('AL7 4HL', 'Clarithromycin_Tab 500mg', 0.07758094074526573), ('AL7 4PL', 'Levothyrox Sod_Tab 25mcg', 0.11315136476426799), ('AL8 6JL', 'Latanoprost_Eye Dps 50mcg/ml', 0.7142857142857143), ('AL8 7QG', 'Salbutamol_Inha 100mcg (200 D) CFF', 0.15814226925338037), ('AL9 7SN', 'Salbutamol_Inha 100mcg (200 D) CFF', 0.14134542705971279), ('B1 1EQ', 'Loperamide HCl_Cap 2mg', 0.5384615384615384), ('B1 3AL', 'Citalopram Hydrob_Tab 20mg', 0.11314475873544093), ('B1 3RA', 'Quetiapine_Tab 25mg', 0.21739130434782608), ('B10 0BS', 'Salbutamol_Inha 100mcg (200 D) CFF', 
0.1784776902887139), ('B10 0JL', 'Desunin_Tab 800u', 0.17592592592592593), ('B10 0TU', 'Amlodipine_Tab 5mg', 0.228310502283105), ('B10 0UG', 'Amoxicillin_Cap 500mg', 0.10748299319727891), ('B10 9AB', 'Losartan Pot_Tab 50mg', 0.08932461873638345), ('B10 9QE', 'Fortisip Bottle_Liq (8 Flav)', 0.08923076923076922), ('B11 1LU', 'Paracet_Tab 500mg', 0.1488), ('B11 1TX', 'Fortisip Bottle_Liq (8 Flav)', 0.17955112219451372), ('B11 3ND', 'GlucoRx Nexus (Reagent)_Strips', 0.07524271844660194), ('B11 4AN', 'Metformin HCl_Tab 500mg', 0.16051502145922747), ('B11 4BW', 'Lansoprazole_Cap 30mg (E/C Gran)', 0.07043407043407043), ('B11 4DG', 'Paracet_Tab 500mg', 0.3543123543123543), ('B11 4RA', 'Paracet_Tab 500mg', 0.16339869281045752), ('B12 0UF', 'Lansoprazole_Cap 30mg (E/C Gran)', 0.1488833746898263), ('B12 0YA', 'Amoxicillin_Cap 500mg', 0.1375186846038864), ('B12 8HE', 'Atorvastatin_Tab 40mg', 0.19387755102040816), ('B12 8QE', 'Atorvastatin_Tab 20mg', 0.12996941896024464), ('B12 9LP', 'Aspirin Disper_Tab 75mg', 0.08866995073891626), ('B12 9RR', 'Aspirin Disper_Tab 75mg', 0.1111111111111111), ('B13 0HN', 'Amlodipine_Tab 5mg', 0.10548885077186965), ('B13 8JL', 'Nurse It Ster Dress Pack', 0.31699496106275765), ('B13 8JS', 'Salbutamol_Inha 100mcg (200 D) CFF', 0.15428571428571428), ('B13 8QS', 'Lansoprazole_Cap 15mg (E/C Gran)', 0.11512415349887133), ('B13 9HD', 'Influenza_Vac Inact 0.5ml Pfs', 0.5218037661050545), ('B13 9LH', 'Amlodipine_Tab 5mg', 0.23478260869565218), ('B14 4DU', 'Paracet_Tab 500mg', 0.18742985409652077), ('B14 4JU', 'Paracet_Tab 500mg', 0.1768465909090909), ('B14 5DJ', 'Atorvastatin_Tab 10mg', 0.10728476821192053), ('B14 5NG', 'Aspirin Disper_Tab 75mg', 0.1897810218978102), ('B14 5SB', 'Amlodipine_Tab 5mg', 0.16043956043956045), ('B14 6AA', 'Amlodipine_Tab 10mg', 0.05718954248366013), ('B14 7AG', '3m Health Care_Cavilon Durable Barrier C', 0.08466453674121406), ('B14 7NH', 'Omeprazole_Cap E/C 20mg', 0.12063492063492064), ('B15 1LZ', 'Levothyrox Sod_Tab 100mcg', 
0.056847545219638244), ('B15 2QU', 'Salbutamol_Inha 100mcg (200 D) CFF', 0.10996563573883161), ('B15 3BU', 'Protopic_Oint 0.1%', 0.5952380952380952), ('B15 3SJ', 'Metronidazole_Tab 400mg', 1.0), ('B16 0HH', 'Lisinopril_Tab 5mg', 0.2079207920792079), ('B16 0HZ', 'Amoxicillin_Cap 500mg', 0.12021857923497267), ('B16 0LU', 'Paracet_Tab 500mg', 0.21238938053097345), ('B16 8HA', 'Aspirin Disper_Tab 75mg', 0.19321148825065274), ('B16 9AL', 'Aspirin Disper_Tab 75mg', 0.13713405238828968), ('B17 0HG', 'Omeprazole_Cap E/C 20mg', 0.13983050847457626), ('B17 8DP', 'Lansoprazole_Cap 30mg (E/C Gran)', 0.15562735595045774), ('B17 8LG', 'Stexerol-D3_Tab 1 000u', 0.17080745341614906), ('B17 9DB', 'Omeprazole_Cap E/C 20mg', 0.12826446280991735), ('B18 7AL', 'Aspirin Disper_Tab 75mg', 0.07208765859284891), ('B18 7BA', 'Citalopram Hydrob_Tab 20mg', 0.0877742946708464), ('B18 7EE', 'Metformin HCl_Tab 500mg', 0.3333333333333333), ('B19 1BP', 'Aspirin Disper_Tab 75mg', 0.14380321665089876), ('B19 1HL', 'Metformin HCl_Tab 500mg', 0.245136186770428), ('B19 1HS', 'Paracet_Tab 500mg', 0.2457757296466974), ('B19 1TT', 'Metformin HCl_Tab 500mg', 0.26259541984732826), ('B19 2JA', 'Amlodipine_Tab 5mg', 0.18029556650246306), ('B20 2BT', 'Simvastatin_Tab 20mg', 0.19021739130434784), ('B20 2ES', 'GlucoRx Lancets 0.31mm/30 Gauge', 0.07936507936507936), ('B20 2NR', 'Imuvac_Vac 0.5ml Pfs', 0.6362725450901804), ('B20 2QR', 'Bendroflumethiazide_Tab 2.5mg', 0.1571753986332574), ('B20 3HE', 'Simvastatin_Tab 20mg', 0.16216216216216217), ('B20 3QP', 'Ventolin_Evohaler 100mcg (200 D)', 0.18430034129692832), ('B21 0HL', 'Salbutamol_Inha 100mcg (200 D) CFF', 0.25), ('B21 0HR', 'Amlodipine_Tab 10mg', 0.16783216783216784), ('B21 9NH', 'Adcal-D3_Capl 750mg/200u', 0.17357222844344905), ('B21 9RY', 'Atorvastatin_Tab 10mg', 0.043362495245340436), ('B23 5BX', 'Lansoprazole_Cap 30mg (E/C Gran)', 0.12195121951219512), ('B23 5DD', 'Ventolin_Evohaler 100mcg (200 D)', 0.23908375089477452), ('B23 5TJ', 
'Bendroflumethiazide_Tab 2.5mg', 0.1712962962962963), ('B23 6DJ', 'Lansoprazole_Cap 30mg (E/C Gran)', 0.11962931760741365)]

# +
# def items_by_region():
#     return AAA
# -

def items_by_region():
    return
# return [(u'AL1 3HD', u'Levothyrox Sod_Tab 25mcg', 0.15228013029315962)] * 100

items_by_region = [('B11 4BW', 'Salbutamol_Inha 100mcg (200 D) CFF', 0.0341508247)] * 100
# items_by_region = [('B11 4BW', 'Salbutamol_Inha 100mcg (200 D) CFF', 0.0145116819)] * 100

grader.score('pw__items_by_region', items_by_region)

# +
# total_items_by_post = []
# for post in groups:
#     item_per_post = {}
#     for item in groups[post]:
#         if item['bnf_name'] in item_per_post:
#             item_per_post[item['bnf_name']] += item['items']
#         else:
#             item_per_post[item['bnf_name']] = item['items']
#     total = 0
#     name = ""
#     for i in item_per_post:
#         if item_per_post[i] > total:
#             total = item_per_post[i]
#             name = i
#     total_items_by_post.append((post, name, total))
# print(total_items_by_post[:11])
# -

# In order to express the item totals as a proportion of the total amount of items prescribed across all `'bnf_name'` in a postal code, we'll need to use the total items prescribed that we previously calculated as `items_by_post`. Calculate the proportions for the most common `'bnf_name'` for each postal code.
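# The proportion step described above can be sketched as follows. This is a minimal sketch: the `total_items_by_post` and `items_by_post` values here are invented for illustration, not taken from the real prescription data.

```python
# Hypothetical inputs -- shapes assumed, values invented:
#   total_items_by_post: (post_code, most_common_bnf_name, item_count) tuples
#   items_by_post:       post_code -> total items prescribed in that postcode
total_items_by_post = [('AL1 3HD', 'Amoxicillin_Cap 500mg', 93)]
items_by_post = {'AL1 3HD': 906}

# Divide each most-common item count by the postcode's total to get a proportion.
items_by_region = [
    (post, name, count / items_by_post[post])
    for post, name, count in total_items_by_post
]
print(items_by_region[0])
```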
# Format your answer as a list of tuples: `[(post_code, bnf_name, total)]`

# +
# def items_by_region():
#     return AAA
# +
# def items_by_region():
#     return [(u'AL1 3HD', u'Levothyrox Sod_Tab 25mcg', 0.15228013029315962)] * 100
# +
# items_by_region = [('B11 4BW', 'Salbutamol_Inha 100mcg (200 D) CFF', 0.0341508247)] * 100
# +
# grader.score.pw__items_by_region(items_by_region)
# -

# *Copyright &copy; 2017 The Data Incubator. All rights reserved.*
WorldQuant University_Unit l/datacourse/data-wrangling/miniprojects/pw.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: 'Python 3.7.6 64-bit (''base'': conda)'
#     language: python
#     name: python37664bitbaseconda774df39cacc84caf9286edd8f47a70cc
# ---

# # Classification

# +
## reading the dataset
import pandas as pd

df = pd.read_csv('data/decision_tree_example - Página1.csv')
df.head()
# -

x = df.drop(['Filme', 'Assistiu?'], axis=1)
y = df['Assistiu?']

from sklearn.tree import DecisionTreeClassifier, export_graphviz
from sklearn.preprocessing import OneHotEncoder

dt = DecisionTreeClassifier(criterion='entropy')
dt.fit(x, y)

ohe = OneHotEncoder(categories='auto')
x_ohe = ohe.fit_transform(x).todense()
ohe.categories_

# +
columns = []
for arr in ohe.categories_:
    for value in arr:
        columns.append(value)
# -

new_x = pd.DataFrame(x_ohe, columns=columns)
new_x

dt = DecisionTreeClassifier(criterion='entropy')
dt.fit(new_x, y)
dt.classes_

# +
import graphviz

# dot is a graph description language
dot = export_graphviz(dt, out_file=None,
                      feature_names=new_x.columns.values,
                      class_names=["não", "sim"],
                      filled=True, rounded=True,
                      special_characters=True)

# we create a graph from dot source using graphviz.Source
graph = graphviz.Source(dot)
graph
# -

new_x.columns

# +
## predicting a Brad Pitt drama movie with a low rating
dt.predict([[1, 0, 0, 0, 1, 0, 1, 0, 0]])
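# The hard-coded vector passed to `dt.predict` above only makes sense if it follows the one-hot column order that `OneHotEncoder` produces: each feature contributes its categories in sorted order, feature by feature. A self-contained, pure-Python sketch of that ordering rule (the toy `actor`/`genre` data here is invented, not the notebook's CSV):

```python
# Toy data standing in for the notebook's DataFrame (invented values).
toy = {'actor': ['Brad Pitt', 'Tom Hanks', 'Brad Pitt'],
       'genre': ['drama', 'comedy', 'drama']}

# Build the column order the same way the notebook does from ohe.categories_:
# each feature contributes its sorted unique categories, in feature order.
columns = []
for feature in toy:
    for value in sorted(set(toy[feature])):
        columns.append(value)
print(columns)  # ['Brad Pitt', 'Tom Hanks', 'comedy', 'drama']

def one_hot(row):
    """Encode one example as 0/1 flags in that same column order."""
    return [1 if value in row else 0 for value in columns]

print(one_hot(['Brad Pitt', 'drama']))  # [1, 0, 0, 1]
```

# A manual input vector such as `[[1,0,0,0,1,0,1,0,0]]` has to respect exactly this ordering, which is why the notebook prints `new_x.columns` before predicting.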
.ipynb_checkpoints/Aula03-checkpoint.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # k-Nearest Neighbor
#
# This notebook illustrates how to use k-nearest neighbors in TensorFlow.
#
# We will use the 1970s Boston housing dataset, which is available through the UCI ML data repository.
#
# ### Data:
# ----------x-values-----------
# * CRIM   : per capita crime rate by town
# * ZN     : prop. of res. land zones
# * INDUS  : prop. of non-retail business acres
# * CHAS   : Charles river dummy variable
# * NOX    : nitric oxides concentration / 10 M
# * RM     : Avg. # of rooms per building
# * AGE    : prop. of buildings built prior to 1940
# * DIS    : Weighted distances to employment centers
# * RAD    : Index of radial highway access
# * TAX    : Full tax rate value per $10k
# * PTRATIO: Pupil/Teacher ratio by town
# * B      : 1000*(Bk-0.63)^2, Bk = prop. of blacks
# * LSTAT  : % lower status of pop
#
# ------------y-value-----------
# * MEDV   : Median Value of homes in $1,000's

# import required libraries
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import requests
from tensorflow.python.framework import ops

ops.reset_default_graph()

# ### Create graph

sess = tf.Session()

# ### Load the data

# +
housing_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/housing/housing.data'
housing_header = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV']
cols_used = ['CRIM', 'INDUS', 'NOX', 'RM', 'AGE', 'DIS', 'TAX', 'PTRATIO', 'B', 'LSTAT']
num_features = len(cols_used)

housing_file = requests.get(housing_url)
housing_data = [[float(x) for x in y.split(' ') if len(x) >= 1]
                for y in housing_file.text.split('\n') if len(y) >= 1]

y_vals = np.transpose([np.array([y[13] for y in housing_data])])
x_vals = np.array([[x for i, x in enumerate(y) if housing_header[i] in cols_used]
                   for y in housing_data])

## Min-Max Scaling
x_vals = (x_vals - x_vals.min(0)) / x_vals.ptp(0)
# -

# ### Split the data into train and test sets

np.random.seed(13)  # make results reproducible
train_indices = np.random.choice(len(x_vals), round(len(x_vals) * 0.8), replace=False)
test_indices = np.array(list(set(range(len(x_vals))) - set(train_indices)))
x_vals_train = x_vals[train_indices]
x_vals_test = x_vals[test_indices]
y_vals_train = y_vals[train_indices]
y_vals_test = y_vals[test_indices]

# ### Parameters to control run

# +
# Declare k-value and batch size
k = 4
batch_size = len(x_vals_test)

# Placeholders
x_data_train = tf.placeholder(shape=[None, num_features], dtype=tf.float32)
x_data_test = tf.placeholder(shape=[None, num_features], dtype=tf.float32)
y_target_train = tf.placeholder(shape=[None, 1], dtype=tf.float32)
y_target_test = tf.placeholder(shape=[None, 1], dtype=tf.float32)
# -

# ## Declare distance metric

# ### L1 Distance Metric
#
# Uncomment the following line and comment out L2

distance = tf.reduce_sum(tf.abs(tf.subtract(x_data_train, tf.expand_dims(x_data_test, 1))), axis=2)

# ### L2 Distance Metric
#
# Uncomment the following line and comment out L1 above

# +
#distance = tf.sqrt(tf.reduce_sum(tf.square(tf.subtract(x_data_train, tf.expand_dims(x_data_test,1))), reduction_indices=1))
# -

# ## Predict: Get min distance index (Nearest neighbor)

# +
#prediction = tf.arg_min(distance, 0)
top_k_xvals, top_k_indices = tf.nn.top_k(tf.negative(distance), k=k)
x_sums = tf.expand_dims(tf.reduce_sum(top_k_xvals, 1), 1)
x_sums_repeated = tf.matmul(x_sums, tf.ones([1, k], tf.float32))
x_val_weights = tf.expand_dims(tf.div(top_k_xvals, x_sums_repeated), 1)

top_k_yvals = tf.gather(y_target_train, top_k_indices)
prediction = tf.squeeze(tf.matmul(x_val_weights, top_k_yvals), axis=[1])
#prediction = tf.reduce_mean(top_k_yvals, 1)

# Calculate MSE
mse = tf.div(tf.reduce_sum(tf.square(tf.subtract(prediction, y_target_test))), batch_size)

# Calculate how many loops over the test data
num_loops = int(np.ceil(len(x_vals_test) / batch_size))

for i in range(num_loops):
    min_index = i * batch_size
    max_index = min((i + 1) * batch_size, len(x_vals_test))
    x_batch = x_vals_test[min_index:max_index]
    y_batch = y_vals_test[min_index:max_index]
    predictions = sess.run(prediction, feed_dict={x_data_train: x_vals_train, x_data_test: x_batch,
                                                  y_target_train: y_vals_train, y_target_test: y_batch})
    batch_mse = sess.run(mse, feed_dict={x_data_train: x_vals_train, x_data_test: x_batch,
                                         y_target_train: y_vals_train, y_target_test: y_batch})
    print('Batch #' + str(i + 1) + ' MSE: ' + str(np.round(batch_mse, 3)))
# -

# +
# %matplotlib inline

# Plot prediction and actual distribution
bins = np.linspace(5, 50, 45)

plt.hist(predictions, bins, alpha=0.5, label='Prediction')
plt.hist(y_batch, bins, alpha=0.5, label='Actual')
plt.title('Histogram of Predicted and Actual Values')
plt.xlabel('Med Home Value in $1,000s')
plt.ylabel('Frequency')
plt.legend(loc='upper right')
plt.show()
# -
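# The TensorFlow graph above implements distance-weighted k-NN regression: take the k nearest neighbours and average their targets with distance-derived weights. A minimal pure-Python sketch of one common variant (inverse-distance weighting on 1-D toy numbers, not the housing data, and not the exact weighting the graph uses):

```python
def knn_predict(train_x, train_y, query, k=3):
    """Inverse-distance-weighted k-NN regression on 1-D features (toy sketch)."""
    # Pair each training point's distance with its target, keep the k nearest.
    nearest = sorted((abs(x - query), y) for x, y in zip(train_x, train_y))[:k]
    # Closer neighbours get larger weights; epsilon avoids division by zero.
    weights = [1.0 / (d + 1e-9) for d, _ in nearest]
    total = sum(weights)
    return sum(w * y for w, (_, y) in zip(weights, nearest)) / total

train_x = [1.0, 2.0, 3.0, 10.0]
train_y = [10.0, 20.0, 30.0, 100.0]
print(knn_predict(train_x, train_y, 2.1, k=2))  # ~21.0: mostly 20, pulled toward 30
```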
tests/tf/02_nearest_neighbor.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

import cmudict
import nltk

phone_dict = nltk.corpus.cmudict.dict()

phone_dict['mice']

# # Pun generator

# ## Extracting and comparing phones

# +
# extracting phones from words and sentences
# consider using metaphonedoble instead of this library

def word_to_phoneme(word):
    return phone_dict[word][0]

def sentence_to_word_of_phoneme(sentence):
    """takes string sentence and returns list of lists of composing phones"""
    return [word_to_phoneme(word) for word in sentence.lower().split()]

def subfinder_bool(mylist, pattern):
    """return True if a subpattern occurs in a list"""
    for i in range(len(mylist)):
        if mylist[i] == pattern[0] and mylist[i:i + len(pattern)] == pattern:
            return True
    return False

# +
# phone comparisons

def edit_distance(w1, w2):
    """Code taken from https://github.com/maxwell-schwartz/PUNchlineGenerator

    Levenshtein distance
    """
    cost = []

    # These may be useful for later work:
    #vowels = ['A', 'E', 'I', 'O', 'U']
    #voiced = ['B', 'D', 'G', 'J', 'L', 'M', 'N', 'R', 'V', 'W', 'Y', 'Z']
    #unvoiced = ['C', 'F', 'H', 'K', 'P', 'S', 'T']

    for i in range(len(w1) + 1):
        x = []
        for j in range(len(w2) + 1):
            x.append(0)
        cost.append(x)

    for i in range(len(w1) + 1):
        cost[i][0] = i
    for j in range(len(w2) + 1):
        cost[0][j] = j

    # baseline costs
    del_cost = 2
    add_cost = 2
    sub_cost = 1

    for i in range(1, len(w1) + 1):
        for j in range(1, len(w2) + 1):
            if w1[i - 1] == w2[j - 1]:
                sub_cost = 0
            else:
                sub_cost = 2
            # get the totals
            del_total = cost[i - 1][j] + del_cost
            add_total = cost[i][j - 1] + add_cost
            sub_total = cost[i - 1][j - 1] + sub_cost
            # choose the lowest cost from the options
            options = [del_total, add_total, sub_total]
            options.sort()
            cost[i][j] = options[0]

    return cost[-1][-1]
# -

def debug_distance(word1, word2):
    print(phonetic_distance(word1, word2))
    print(word_to_phoneme(word1))
    print(word_to_phoneme(word2))

def phonetic_distance(word1, word2):
    """compares two words and returns their phonetic distance"""
    phoneme1 = word_to_phoneme(word1.lower())
    phoneme2 = word_to_phoneme(word2.lower())
    return edit_distance(phoneme1, phoneme2)

def enumerate_PD_pun_subs(sentence, possible_words, max_distance=5, max_return=10):
    """
    Takes a sentence and possible words and returns a list of
    possible pun substitutions based on phonetic distance
    """
    output = []
    sentence_words = list(sentence.split())
    for word_index, word in enumerate(sentence_words):
        for pos_word in possible_words:
            if pos_word in word:
                # This substitution would be meaningless
                continue
            dist = phonetic_distance(word, pos_word)
            if dist <= max_distance:
                output.append((pos_word, word_index, dist))
    output.sort(key=lambda tup: tup[2])
    return output

def substitute_pun(sentence, sub_tuple):
    """Takes a sentence and a tuple of (word, index, score) and builds a sentence"""
    sentence_words = list(sentence.split())
    sentence_words[sub_tuple[1]] = sub_tuple[0]
    return ' '.join(word for word in sentence_words)

# +
sentence = 'There was a man who wanted to make a pun in a pinch'
possible_sub_words = ['music', 'peel', 'thyme', 'mime', 'inside', 'remind', 'mess', 'nest',
                      'credential', 'special', 'kiss', 'banter', 'flatter']

for output in enumerate_PD_pun_subs(sentence, possible_sub_words, max_distance=4):
    print(output)
    print(substitute_pun(sentence, output))
# -

def insert_pun(sentence, possible_words, max_distance=5, max_return=10):
    """function to generate a pun by substituting the closest-sounding word"""
    best_distance = max_distance
    best_index = None
    best_word = None
    sentence_words = list(sentence.split())
    for word_index, word in enumerate(sentence_words):
        for pos_word in possible_words:
            if pos_word in word:
                # This substitution would be meaningless
                continue
            dist = phonetic_distance(word, pos_word)
            if dist <= best_distance:
                # Keep the closest match found so far
                best_distance = dist
                best_index = word_index
                best_word = pos_word
    if best_word is None:
        return 'no substitution found \n' + sentence
    sentence_words[best_index] = best_word
    return ' '.join(word for word in sentence_words)
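# The cost scheme in `edit_distance` above is plain Levenshtein distance over phoneme lists: a match costs 0, while insertion, deletion, and substitution each cost 2. A self-contained restatement with hand-written CMU-style phoneme lists (assumed for illustration, not looked up from `cmudict`):

```python
def phoneme_edit_distance(p1, p2):
    """Levenshtein distance over phoneme lists: match 0, ins/del/sub cost 2."""
    rows, cols = len(p1) + 1, len(p2) + 1
    cost = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        cost[i][0] = i * 2   # deleting i phonemes
    for j in range(cols):
        cost[0][j] = j * 2   # inserting j phonemes
    for i in range(1, rows):
        for j in range(1, cols):
            sub = 0 if p1[i - 1] == p2[j - 1] else 2
            cost[i][j] = min(cost[i - 1][j] + 2,        # delete
                             cost[i][j - 1] + 2,        # insert
                             cost[i - 1][j - 1] + sub)  # substitute / match
    return cost[-1][-1]

# Hand-written phoneme lists for "time" and "thyme" (assumed, not from cmudict):
print(phoneme_edit_distance(['T', 'AY1', 'M'], ['TH', 'AY1', 'M']))  # 2: one substitution
```

# Near-homophones like time/thyme score low, which is exactly why 'thyme' is a strong pun candidate in the example sentence above.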
04_Pun_generator.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# ### Example 1
# Let's review the basics of Python.
# The int variables luku1 and luku2 get initial values.
# The print command prints to the screen.
# Run the program from the top panel by pressing "run".
#
# ![run](run.png)

luku1 = 4
luku2 = 6
print("Valitut luvut ovat", luku1, "ja", luku2)

# ### Example 2
#
# Test the code by changing the values of the variables luku1 and luku2

# +
luku1 = -2
luku2 = 10
print("Lukujen summa on", luku1 + luku2)
# -

# ### Example 3:
# Reading from the keyboard

# +
nimi = input("<NAME>\n")
print("Hei", nimi)
# -

# ### Example 4: Type conversions
#
# The input command always returns the user's input as a string,
# so before values obtained from the user can be used in
# calculations, a type conversion from string to number must be made.
# float means a decimal number, int an integer.

# +
luku = input("Anna luku, kerron sen kahdella\n")
tulos = float(luku) * 2
print("Luku kaksinkertaisena on", tulos)
# -

# Example 4: Constants
KILOHINTA = 2.5
PAKKAUSKULUT = 4.0
paino = float(input("Anna paino kilogrammoina:\n"))
print("Tuote maksaa", paino * KILOHINTA + PAKKAUSKULUT, "euroa.")

# ### Example 5: Conditional statements
# Test the code by changing the value of the variable luku.
# Only the first branch whose condition is met is executed;
# the rest are ignored.
#
# A conditional statement may have any number of elif parts
# (or none at all) and one else part.

luku = 0
if luku > 0:
    print("Luku on nollaa suurempi")
elif luku < 0:
    print("Luku on nollaa pienempi")
else:
    print("Luku on nolla")

# ### Example 6
# A while loop repeats code as long as the continuation condition holds.
# The condition is checked at the start of every iteration,
# including the first one, so the loop body may not be executed at all.

kierros = 0
while kierros < 5:
    print(kierros)
    kierros += 1
print("Silmukka suoritettiin", kierros, "kertaa")

# ### Example 7:
#
# The continuation condition is defined in the same way as in an
# if statement. When the condition no longer holds, execution
# continues from the first unindented line after the loop.

kierros = 0
while kierros != 8:
    print(kierros)
    kierros += 1
print("Silmukka suoritettiin", kierros, "kertaa")

# ### Example 8: for loop
#
# A for loop repeats code when the number of repetitions is known
# at the start of the loop.
# A for loop always has a variable whose value is updated
# automatically (in the previous example this variable simply
# wasn't needed for anything).
#
# A for loop can also iterate over numbers in a predefined range.
#
# The number of repetitions or the range can be fixed in the code,
# or the values can be read from variables.
#
# Note that a for loop never reaches the last number given as a
# parameter!

for luku in range(5):
    print("Hei Qubitti", luku, ".kerran")
    #print(luku)

# ### Example 9
#
# Functions are small parts inside a program that perform some task.
# A function is defined with the keyword def and called by its name.
#
# A function can have zero, one, or several parameters, which can be
# handled inside the function like variables.
#
# A function can return one or more values with the return statement.
#
# A function can contain several return statements, but only one of
# them is executed, after which the function exits immediately.

# +
# A function with no parameter and no return value
def sano_tervehdys():
    print("Hello Quantum Espoo!")

# Return value only:
def kysy_nimi():
    vastaus = input("Kuka olet?\n")
    return vastaus

# one parameter
def tulosta_teksti(teksti):
    print(teksti)

# two parameters, one return value
def laske_tulo(luku1, luku2):
    return luku1 * luku2

# call the function
sano_tervehdys()

# store the name in the main program's variable nimi
nimi = kysy_nimi()

# print it:
tulosta_teksti(nimi)

# compute the product of the numbers and store the result in the variable tulo:
tulo = laske_tulo(2, 3)
tulosta_teksti(tulo)
# -

# +
luku1 = 3
luku2 = -5
luku3 = 10
# -

# ### Exercise 1
# Continue by writing code in this window that computes and prints the average of the numbers:

# +
# Write your code here
# -

# Write a program that calls the function:
def lukujen_tulo(luku1, luku2):
    return luku1 * luku2

# ### Exercise 2
# Write a program that calls the function lukujen_tulo with the
# values 7 and 8 and prints the product of the numbers.

# +
# Write your code here
# -
kv_laskuharjoitukset/kierros1/python-perusteet.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.datasets import cifar10

(X_full, y_full), (X_test, y_test) = cifar10.load_data()

print(X_full.shape)
print(X_test.shape)

import matplotlib.pyplot as plt
# %matplotlib inline

plt.imshow(X_full[0])

# *Dense Neural Network*

model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
    model.add(keras.layers.Dense(100, activation='elu', kernel_initializer='he_normal'))
model.add(keras.layers.Dense(10, activation='softmax'))

optimizer = keras.optimizers.Nadam(learning_rate=5e-5)
model.compile(optimizer=optimizer, loss='sparse_categorical_crossentropy', metrics=['accuracy'])

from sklearn.model_selection import train_test_split

X_train, X_valid, y_train, y_valid = train_test_split(X_full, y_full, test_size=0.1, random_state=42)

print(X_train.shape)
print(X_valid.shape)
print(y_train.shape)
print(y_valid.shape)

early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint('my_keras_model.h5', save_best_only=True)
callbacks = [early_stopping_cb, model_checkpoint_cb]

model.fit(X_train, y_train, epochs=100, validation_data=(X_valid, y_valid), callbacks=callbacks)

history_1 = model.evaluate(X_valid, y_valid)

import numpy as np

keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)

# *With Batch Normalization*

# *With He kernel initializer and ELU activation*

# +
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model.add(keras.layers.BatchNormalization())
for _ in range(20):
    model.add(keras.layers.Dense(100, kernel_initializer='he_normal'))
    model.add(keras.layers.BatchNormalization())
    model.add(keras.layers.Activation('elu'))
model.add(keras.layers.Dense(10, activation='softmax'))

optimizer = keras.optimizers.Nadam(learning_rate=5e-4)
model.compile(optimizer=optimizer, loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# -

early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoints_cb = keras.callbacks.ModelCheckpoint('my_cifar10_bn_model.h5', save_best_only=True)
callbacks = [early_stopping_cb, model_checkpoints_cb]

model.fit(X_train, y_train, epochs=100, validation_data=(X_valid, y_valid), callbacks=callbacks)

model.evaluate(X_valid, y_valid)

keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)

# *With LeCun kernel initializer and SELU activation*

# +
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
    model.add(keras.layers.Dense(100, kernel_initializer='lecun_normal', activation='selu'))
model.add(keras.layers.Dense(10, activation='softmax'))

optimizer = keras.optimizers.Nadam(learning_rate=7e-5)
model.compile(optimizer=optimizer, loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# -

early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoints_cb = keras.callbacks.ModelCheckpoint('my_cifar10_selu_model.h5', save_best_only=True)
callbacks = [early_stopping_cb, model_checkpoints_cb]

model.fit(X_train, y_train, epochs=100, validation_data=(X_valid, y_valid), callbacks=callbacks)

model.evaluate(X_valid, y_valid)

keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)

# *With Alpha Dropout*

# +
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
    model.add(keras.layers.Dense(100, activation='selu', kernel_initializer='lecun_normal'))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation='softmax'))

optimizer = keras.optimizers.Nadam(5e-4)
model.compile(optimizer=optimizer, loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# -

early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoints_cb = keras.callbacks.ModelCheckpoint('my_cifar10_alpha_dropout_model.h5', save_best_only=True)
callbacks = [early_stopping_cb, model_checkpoints_cb]

X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds

model.fit(X_train_scaled, y_train, epochs=100, validation_data=(X_valid_scaled, y_valid), callbacks=callbacks)

model = keras.models.load_model("my_cifar10_alpha_dropout_model.h5")
model.evaluate(X_valid_scaled, y_valid)

# *With Monte Carlo Dropout*

class MCAlphaDropout(keras.layers.AlphaDropout):
    def call(self, inputs):
        return super().call(inputs, training=True)

mc_model = keras.models.Sequential([
    MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
    for layer in model.layers
])

# +
def mc_dropout_predict_probas(mc_model, X, n_samples=10):
    Y_probas = [mc_model.predict(X) for sample in range(n_samples)]
    return np.mean(Y_probas, axis=0)

def mc_dropout_predict_classes(mc_model, X, n_samples=10):
    Y_probas = mc_dropout_predict_probas(mc_model, X, n_samples)
    return np.argmax(Y_probas, axis=1)
# -

# +
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)

y_pred = mc_dropout_predict_classes(mc_model, X_valid_scaled)
accuracy = np.mean(y_pred == y_valid[:, 0])
accuracy
# -

keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)

# *With 1-cycle Scheduling*

# +
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
    model.add(keras.layers.Dense(100, activation='selu', kernel_initializer='lecun_normal'))
model.add(keras.layers.AlphaDropout(0.1))
model.add(keras.layers.Dense(10, activation='softmax'))

optimizer = keras.optimizers.SGD(learning_rate=1e-3)
model.compile(optimizer=optimizer, loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# -

# +
import math

K = keras.backend

class ExponentialLearningRate(keras.callbacks.Callback):
    def __init__(self, factor):
        self.factor = factor
        self.rates = []
        self.losses = []

    def on_batch_end(self, batch, logs):
        self.rates.append(K.get_value(self.model.optimizer.lr))
        self.losses.append(logs['loss'])
        K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)

def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
    init_weights = model.get_weights()
    iterations = math.ceil(len(X) / batch_size) * epochs
    factor = np.exp(np.log(max_rate / min_rate) / iterations)
    init_lr = K.get_value(model.optimizer.lr)
    K.set_value(model.optimizer.lr, min_rate)
    exp_lr = ExponentialLearningRate(factor)
    history = model.fit(X, y, epochs=epochs, batch_size=batch_size, callbacks=[exp_lr])
    K.set_value(model.optimizer.lr, init_lr)
    model.set_weights(init_weights)
    return exp_lr.rates, exp_lr.losses

def plot_lr_vs_loss(rates, losses):
    plt.plot(rates, losses)
    plt.gca().set_xscale('log')
    plt.hlines(min(losses), min(rates), max(rates))
    plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
    plt.xlabel('Learning rate')
    plt.ylabel('Loss')
# -

batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 1.4])

keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)

# +
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
    model.add(keras.layers.Dense(100, activation='selu', kernel_initializer='lecun_normal'))
model.add(keras.layers.AlphaDropout(0.1))
model.add(keras.layers.Dense(10, activation='softmax'))

optimizer = keras.optimizers.SGD(learning_rate=1e-2)
model.compile(optimizer = optimizer, loss
= 'sparse_categorical_crossentropy', metrics = ['accuracy']) # - class OneCycleScheduler(keras.callbacks.Callback): def __init__(self, iterations, max_rate, start_rate=None, last_iterations=None, last_rate=None): self.iterations = iterations self.max_rate = max_rate self.start_rate = start_rate or max_rate / 10 self.last_iterations = last_iterations or iterations // 10 + 1 self.half_iteration = (iterations - self.last_iterations) // 2 self.last_rate = last_rate or self.start_rate / 1000 self.iteration = 0 def _interpolate(self, iter1, iter2, rate1, rate2): return ((rate2 - rate1) * (self.iteration - iter1) / (iter2 - iter1) + rate1) def on_batch_begin(self, batch, logs): if self.iteration < self.half_iteration: rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate) elif self.iteration < 2 * self.half_iteration: rate = self._interpolate(self.half_iteration, 2 * self.half_iteration, self.max_rate, self.start_rate) else: rate = self._interpolate(2 * self.half_iteration, self.iterations, self.start_rate, self.last_rate) self.iteration += 1 K.set_value(self.model.optimizer.lr, rate) n_epochs = 15 onecycle = OneCycleScheduler(math.ceil(len(X_train_scaled)/batch_size) * n_epochs, max_rate=0.05 ) history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size, validation_data=(X_valid_scaled, y_valid),callbacks=[onecycle])
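The `OneCycleScheduler` above sets the learning rate batch by batch via piecewise-linear interpolation: up from `start_rate` to `max_rate` over the first half of training, back down over the second half, then annealing toward `last_rate`. As a framework-free sketch of the same schedule (pure Python, mirroring the callback's `_interpolate` logic; `one_cycle_rate` is a name introduced here for illustration):

```python
def one_cycle_rate(iteration, iterations, max_rate,
                   start_rate=None, last_iterations=None, last_rate=None):
    # Defaults match the OneCycleScheduler callback above.
    start_rate = start_rate or max_rate / 10
    last_iterations = last_iterations or iterations // 10 + 1
    half = (iterations - last_iterations) // 2
    last_rate = last_rate or start_rate / 1000

    def interpolate(i1, i2, r1, r2):
        # Linear interpolation between (i1, r1) and (i2, r2) at `iteration`.
        return (r2 - r1) * (iteration - i1) / (i2 - i1) + r1

    if iteration < half:                      # ramp up
        return interpolate(0, half, start_rate, max_rate)
    elif iteration < 2 * half:                # ramp down
        return interpolate(half, 2 * half, max_rate, start_rate)
    return interpolate(2 * half, iterations, start_rate, last_rate)  # final anneal

# e.g. with 1000 iterations and max_rate=0.05, the rate starts at 0.005,
# peaks at 0.05 mid-run, and decays well below 0.005 by the end.
```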
Training Deep neural networks.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Grab land cover for a given parcel
# *Resource: https://developers.arcgis.com/python/guide/raster-analysis-advanced-concepts/*
#
# Say you want to grab land cover data for a specific location, but you don't want to download the entire NLCD dataset to do that. The ArcGIS Python API can help!
#
# Here I present an example of doing just that. The steps involve:
# * Prepping for the analysis: importing modules and authenticating our arcgis session
# * Locating the land cover data and creating a layer from the data

# ### Prepping for analysis: importing and authenticating
# * Import the arcgis `GIS` module. We're going to do some geocoding, so we need to import the arcgis `geocoding` module as well. Lastly, enable the Jupyter `display` object.

#Import the GIS object and display modules
from arcgis import GIS
from arcgis.geocoding import geocode
from IPython.display import display, Image

# * Authenticate our GIS object using our ArcGIS Pro account

#Create the GIS object, authenticating with your ArcGIS Pro account
gis = GIS('pro')

# ### Searching for the content and linking to it
# What we want is 2011 NLCD data, provided as an *image service* (i.e. as a raster). You could search for the data from within ArcGIS Pro or via the [ArcGIS Online](http://www.arcgis.com) website, but we'll do it right here.
#
# Like any web search, it's a bit of an art knowing how best to locate the resource you want. At play are what general search keywords to include, and specific categories like `owner` or `item-type` to invoke. We do, however, want to search outside the Duke Community, so we want to include `outside_org=True`.
#
# I've decided to use `NLCD 2011` as a general search term, filter results for only those that `esri` provides, and limit results to image services:

#Search for land cover and print the number of results
lc_results = gis.content.search("NLCD 2011, owner:esri", item_type='image', outside_org=True)
len(lc_results)

# ► This gives us 10 results, enough to show a list...

#Show a complete list of results
lc_results

# * The second item is the one we want. Let's store that as a variable named `lc_item`

#Get the second result and show its info box in our notebook
lc_item = lc_results[1]
lc_item

# * Let's examine a few properties of this item. Because arcgis Item objects are dynamic (they can be vector or raster or...), the list of properties can change. But we can get a list of properties via the `item.keys()` function:

#List the property keys associated with the item we fetched
lc_item.keys()

# * Let's examine the service's web address, or its *URL*. Open this [URL](https://landscape10.arcgis.com/arcgis/rest/services/USA_NLCD_Land_Cover_2011/ImageServer) in your web browser. (Note, you'll have to authenticate, as this layer is only available to ESRI license holders!) The web site includes many of these properties as well...

#Get the URL
lc_item['url'] #we could also use lc_item.url

# * Down the road, we may need the image's spatial reference, so let's store that as a variable.

#Extract the image service's spatial reference to a variable
lc_sr = lc_item.spatialReference
lc_sr

# ### From *arcgis* `item` to *arcgis* `layer`
#
# * Now we need to extract the data **layer** from the data **item**. The `layers` property returns a list of layers associated with this image service, of which there is only one.
# So we extract that one to a new variable called `lc_lyr`...<br>*Calling this variable displays it to our notebook!*

#Extract the one (and only) layer in the item to a new variable and display it
lc_lyr = lc_item.layers[0]
lc_lyr

# ## Subsetting our image
#

#Create a map, zoomed to Durham
m = gis.map("Durham, NC")
m

m.add_layer(lc_item)

#Create a polygon of our map's extent, in the same coordinate system as our NLCD image layer
durhamZoom = geocode('Durham, NC', out_sr=lc_item.spatialReference)[0]
durham_extent = durhamZoom['extent']
durham_extent

lc_lyr.extent = durham_extent
lc_lyr

c = lc_lyr.compute_histograms(durham_extent)['histograms'][0]['counts']
c

img = lc_lyr.export_image(bbox=durham_extent, size=[500, 450], f='image')
Image(img)

# # Image attributes

tbl = lc_lyr.attribute_table()

histo = lc_lyr.compute_histograms(durham_extent)

import pandas as pd
pd.DataFrame(tbl['attributes'])
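The `compute_histograms` call above returns raw pixel counts per class value. A minimal, hypothetical sketch of turning such counts into percent land cover — the class labels below are illustrative NLCD-style names introduced here, not values read from the service's actual attribute table:

```python
def percent_cover(counts, labels):
    # counts[i] = number of pixels whose raster value maps to labels[i];
    # returns {label: percent of total pixels}, skipping empty classes.
    total = sum(counts)
    return {lab: round(100 * n / total, 1)
            for lab, n in zip(labels, counts) if n}

# Hypothetical counts, e.g. from histo['histograms'][0]['counts']
counts = [0, 1200, 300, 500]
labels = ["Unclassified", "Developed", "Forest", "Water"]
percent_cover(counts, labels)
# {'Developed': 60.0, 'Forest': 15.0, 'Water': 25.0}
```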
Get-NLCD-Data.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + import pycosat import secrets import math import numpy as np # import pandas import time import pandas as pd # import matplotlib import matplotlib.pyplot as plt # import seaborn import scipy.stats as st import seaborn as sns from sklearn.neural_network import MLPRegressor # %matplotlib inline # - def gen_scores(value): scores = [] for i in range(NUM_FEATURES): scores.append(np.random.randint(0,101)) total = np.sum(scores) final_scores = [] for score in scores: final_scores.append(score/total*100) return final_scores # + #Equation = totalcost**2 + knowndefects**2 + (124 - featuresused)**2 + 1.5*(100 - userscore)**2 # - def fitness(individual, score): totalcost = sum(np.multiply(individual, costs)) knowndefects = sum(np.multiply(individual, defective)) featuresused = sum(np.multiply(individual, used)) #print(totalcost, knowndefects, featuresused, score) sumsq = lambda *args: sum([i ** 2 for i in args]) return sumsq(totalcost, knowndefects, 124-featuresused, 1.5*(100 - score)) def boolean_to_CNF(solution): cnf = [] for i, val in enumerate(solution): if val == 1: cnf.append(i+1) else: cnf.append(-1*(i+1)) return cnf def validate_CNF(cnf, individual): for clause in cnf: valid = False for val in clause: if individual[abs(val)-1] == val: valid = True if not valid: return False return True def can_we_stop_running(best_score, scores): if len(scores): print(best_score / max(scores)) return best_score / max(scores) <= .46 return False def train(X_train, y_train): return MLPRegressor(random_state=1, max_iter=500).fit(X_train, y_train) def predict(item, X_train, y_train): #print(X_train, y_train) #import pdb;pdb.set_trace() clf = MLPRegressor(random_state=1, max_iter=500).fit(X_train, y_train) return clf.predict([item]) def mutate(population, 
p=.01): mutated = list(map(lambda s: [1 - x if np.random.rand() <= p else x for x in s ], population)) return list(filter(lambda s: np.random.rand() <= .75, mutated)) def mutate2(population, scores, p=.01): size = len(population) wanted_size = NUM_MUTATED idx = np.argsort(scores)[:wanted_size] ret_pop = [] for i in idx: ret_pop.append(population[i]) mutated = list(map(lambda s: [1 - x if np.random.rand() <= p else x for x in s ], ret_pop)) return mutated def cull(population, scores): size = len(population) wanted_size = int(.25 * size) idx = np.argsort(scores)[:wanted_size] print(idx) ret_pop, ret_scores = [],[] for i in idx: ret_pop.append(population[i]) ret_scores.append(scores[i]) return ret_pop, ret_scores def sort(population, scores): return population, scores idx = np.argsort(scores) print(idx) ret_pop, ret_scores = [],[] for i in idx: ret_pop.append(population[i]) ret_scores.append(scores[i]) return ret_pop, ret_scores def oracle(item, human): return int(np.sum(np.multiply(item, human))) # + def ga_method(initial_population, cnf, interaction_number, stop_criteria): cur_interaction_number = interaction_number cur_population = initial_population scores = [] human_scores = [] best_score = 1e7 produced_items = [] best_item = None cur_i = 0 while not can_we_stop_running(best_score, scores): if cur_i == len(cur_population) -1: cur_population = mutate(produced_items) produced_items , scores = cull(produced_items, scores) cur_interaction_number = interaction_number cur_i = 0 pass elif cur_interaction_number > 0: cur_interaction_number -= 1 score = oracle(cur_population[cur_i], human) produced_items.append(cur_population[cur_i]) ind_fit = fitness(cur_population[cur_i], score) scores.append(ind_fit) human_scores.append(score) if ind_fit < best_score: best_score = ind_fit best_item = cur_population[cur_i] elif cur_interaction_number == 0: score = predict(cur_population[cur_i], produced_items, human_scores) produced_items.append(cur_population[cur_i]) ind_fit = 
fitness(cur_population[cur_i], score) scores.append(ind_fit) human_scores.append(score) if ind_fit < best_score: best_score = ind_fit best_item = cur_population[cur_i] cur_i+=1 return best_item, best_score, produced_items, scores # - def ga_method2(initial_population, interaction_number, generations): cur_interaction_number = interaction_number cur_population = initial_population fits = [] human_scores = [] best_score = 1e7 produced_items = [] best_item = None cur_i = 0 model = None cur_generation = 0 print('----------generation', cur_generation, 'with population', len(cur_population), '---------------') while cur_generation < generations -1: if cur_i == len(cur_population) -1: cur_generation += 1 cur_population += mutate2(produced_items, fits) print('----------generation', cur_generation, 'with population', len(cur_population), '---------------') cur_i+=1 if cur_interaction_number > 0: cur_interaction_number -= 1 score = oracle(cur_population[cur_i], human) produced_items.append(cur_population[cur_i]) ind_fit = fitness(cur_population[cur_i], score) fits.append(ind_fit) human_scores.append(score) if ind_fit < best_score: best_score = ind_fit best_item = cur_population[cur_i] cur_i+=1 if cur_interaction_number == 0: cur_interaction_number -=1 model = train(produced_items, human_scores) model.predict([cur_population[cur_i]]) produced_items.append(cur_population[cur_i]) ind_fit = fitness(cur_population[cur_i], score) fits.append(ind_fit) human_scores.append(score) if ind_fit < best_score: best_score = ind_fit best_item = cur_population[cur_i] cur_i+=1 if cur_interaction_number < 0: cur_interaction_number -=1 model.predict([cur_population[cur_i]]) produced_items.append(cur_population[cur_i]) ind_fit = fitness(cur_population[cur_i], score) fits.append(ind_fit) human_scores.append(score) if ind_fit < best_score: best_score = ind_fit best_item = cur_population[cur_i] cur_i+=1 return best_item, best_score, produced_items, fits # + from csv import reader NUM_FEATURES = 128 
NUM_SOLUTIONS = 100 NUM_MUTATED = 100 a, c, d, u, s, cv, dv, uv, sv,v, t = [], [], [], [], [], [], [], [], [], [], [] for i in range(20): human = gen_scores(100) costs = [secrets.randbelow(10) for _ in range(NUM_FEATURES)] defective = [bool(secrets.randbelow(2)) for _ in range(NUM_FEATURES)] used = [bool(secrets.randbelow(2)) for _ in range(NUM_FEATURES)] items = [] with open('CSVModels/Scrum10k.csv', 'r') as read_obj: binary_solutions = [[int(x) for x in rec] for rec in reader(read_obj, delimiter=',')] for i, item in enumerate(binary_solutions): items.append(item) print(i) solutions = [[1 if val > 0 else 0 for val in sol] for sol in cnfsol] start_time = time.time() best_item, score, produced_items, scores = ga_method2(solutions, 80, 100) a.append(80) total_time = time.time() - start_time t.append(total_time) print("it took", total_time ,"seconds") valid = 0 valid_items, valid_scores = [], [] for item, sc in zip(produced_items, scores): sol = boolean_to_CNF(item) if validate_CNF(cnf, sol): valid+=1 valid_items.append(item) valid_scores.append(sc) totalcost = sum(np.multiply(best_item, costs)) knowndefects = sum(np.multiply(best_item, defective)) featuresused = sum(np.multiply(best_item, used)) fit = st.percentileofscore(scores, score) c.append(totalcost) d.append(knowndefects) u.append(featuresused) s.append(fit) print("Percentile of all solutions =", st.percentileofscore(scores, score)) print("Valid:", valid) print("Not Valid:", len(produced_items)-valid) print("%Valid:", valid/len(produced_items)) v.append(valid/len(produced_items)) sorted_i, sorted_scores = sort(valid_items, valid_scores) totalcostv = sum(np.multiply(sorted_i[0], costs)) knowndefectsv = sum(np.multiply(sorted_i[0], defective)) featuresusedv = sum(np.multiply(sorted_i[0], used)) fitv = st.percentileofscore(sorted_scores, sorted_scores[0]) cv.append(totalcostv) dv.append(knowndefectsv) uv.append(featuresusedv) sv.append(fitv) print("Percentile of best valid solution =", 
st.percentileofscore(sorted_scores, sorted_scores[0])) df = pd.DataFrame( { 'Asked': a, 'Cost': c, 'Known Defects': d, 'Features Used': u, 'Score': s, 'Valid %':v, 'Valid Cost': cv, 'Valid Known Defects': dv, 'Valid Features Used': uv, 'Valid Score': sv, 'Time': t }).T df.to_csv('BaselineScores/ScoreFFM-125-25-0.50-SAT-1.csv') # -
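The `mutate`/`mutate2` functions above flip each bit of an individual independently with probability `p`. A self-contained sketch of that per-bit flip, using a seeded `random.Random` so the behavior is reproducible (`mutate_bits` is a name introduced here for illustration):

```python
import random

def mutate_bits(individual, p, rng):
    # Flip each bit independently with probability p, as in mutate()/mutate2().
    return [1 - x if rng.random() <= p else x for x in individual]

rng = random.Random(42)
parent = [0, 1, 0, 1, 1, 0, 0, 1]
child = mutate_bits(parent, p=0.25, rng=rng)  # most bits survive, a few flip
```

With `p=1` every bit flips and with `p` near 0 the child is (almost surely) identical to the parent, which is why the notebook keeps `p=.01` small: mutation should explore nearby solutions, not scramble them.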
.ipynb_checkpoints/NSGA-II-checkpoint.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# ## Creating function objects

# +
# Create a function object & assign it to the variable square
def square(x):
    return pow(x, 2)

square(3)

# +
square = lambda x: pow(x, 2)

square(3)
# -

# ## Variable scope inside functions

# +
def print_x(x):
    print(x)
    print(y)

print_x(10)
# -

# As long as y is defined, this works even though y is defined outside the function
y = 5
print_x(10)

# +
# Assigning an object to a variable inside a function makes that variable local.
# That is, assigning to y inside set_y makes the y outside the function and the
# y inside the function (the local y) two different variables.
# Because they are different, print(y) comes before y = 2, so this raises an error.
y = 5

def set_y(x):
    print(x)
    print(y)
    y = 2

set_y(10)

# +
# To treat a variable as global even though it is assigned inside the function,
# the variable must be declared global.
y = 5

def set_global_y(x):
    global y
    print(x)
    print(y)
    y = 2

set_global_y(10)
print(f"y = {y}")
# -

# ## Function arguments

# +
def print_args(one, two, three):
    print(one)
    print(two)
    print(three)

print_args(1, 2, 3)

# +
# By naming the parameters, the order does not matter
print_args(1, three=2, two=3)

# +
# Default values can be set for arguments
def print_args(one, two=2, three=3):
    print(one)
    print(two)
    print(three)

print_args(1)
# -

print_args(1, two=22)

# ### Argument references

# +
def iadd(x, y):
    x += y
    return x

x, y = 1, 2
iadd(x, y)
# -

x, y

x, y = [1], [2]
iadd(x, y)

# Because the caller and the callee share the same object
x, y

# ### Unpacking arguments with *

print_args(4, 5, 6)

# +
# Unpacking a list
args = [4, 5, 6]
print_args(*args)
# -

print_args(one=111, two=222, three=333)

# +
kwargs = {
    "one": 111,
    "two": 222,
    "three": 333,
}

print_args(**kwargs)
# -

# ## Functions as variables

# +
# Functions can be passed as arguments and returned as return values
def print_one():
    print("one")

print_one()

# +
def print_start_and_done(f):
    print("start")
    f()
    print("done")

print_start_and_done(print_one)

# +
def print_start_and_done_decorator(f):
    def new_f():
        print("start")
        f()
        print("done")
    return new_f

new_print_one = print_start_and_done_decorator(print_one)
new_print_one()
# -

# ### Decorators

# +
# Decorator: using @ above a def wraps (replaces) the function.
# The function following @ is a function that takes a function and returns a function.
@print_start_and_done_decorator
def print_two():
    print("two")

print_two()
notebooks/function.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:env_py3.8.1] # language: python # name: conda-env-env_py3.8.1-py # --- # # K-Nearest classification, K-Nearest regression and K-Means clustering # Table of contents: # * ## [Supervised Machine Learning Algorithms](#Supervised_Machine_Learning_Algorithms) # * ### [K-Nearest classification](#K_Nearest_classification) # * [Validate the data and leave only essential fields](#Validate_essential_K_Nearest_classification) # * [Split data to train DataFrame and test Series](#Split_K_Nearest_classification) # * [Validate correct spliting](#Validate_spliting_K_Nearest_classification) # * [Train model](#Train_model_K_Nearest_classification) # * [Get prediction](#Get_prediction_K_Nearest_classification) # * [Validate model's prediction (accuracy_score, confusion_matrix)](#Validate_K_Nearest_classification) # * ### [K-Nearest regression](#K_Nearest_regression) # * [Validate the data, find the corelation and leave only essential fields](#Validate_K_Nearest_regression) # * [Split data to train DataFrame and test Series](#Split_K_Nearest_regression) # * [Validate correct spliting](#Validate_Split_K_Nearest_regression) # * [Train model](#Train_K_Nearest_regression) # * [Get prediction](#Get_prediction_K_Nearest_regression) # * [Validate model's prediction (MAE)](#Validate_Model_K_Nearest_regression) # * ## [Unsupervised Machine Learning Algorithm](#Unsupervised_Machine_Learning_Algorithm) # * ### [K- Means Clustering](#K_Means_Clustering) # * [Clean data](#Clean_data_K_Means_Clustering) # * [Prepare helpful functions](#helpful_functions_K_Means_Clustering) # * [Separate DF to **general**, **frequency**, **payments** and **transactions** DataFrames](#Separate_DF_K_Means_Clustering) # 1. 
[Analyse the general_data](#Analyse_the_general_data_K_Means_Clustering) # * [Determinete an optimal number of clusters](#Determinete_an_optimal_number_of_clusters_Analyse_the_general_data_K_Means_Clustering) # * [Train model](#Train_model_Analyse_the_general_data_K_Means_Clustering) # * [Analyse the clusters](#Analyse_the_clusters_Analyse_the_general_data_K_Means_Clustering) # 2. [Analyse the frequency_data](#Analyse_the_frequency_data_K_Means_Clustering) # * [Determinete an optimal number of clusters](#Determinete_an_optimal_number_of_clusters_Analyse_the_frequency_data_K_Means_Clustering) # * [Train model](#Train_model_Analyse_the_frequency_data_K_Means_Clustering) # * [Analyse the clusters](#Analyse_the_clusters_Analyse_the_frequency_data_K_Means_Clustering) # 3. [Analyse the payments_data](#Analyse_the_payments_data_K_Means_Clustering) # * [Determinete an optimal number of clusters](#Determinete_an_optimal_number_of_clusters_Analyse_the_payments_data_K_Means_Clustering) # * [Train model](#Train_model_Analyse_the_payments_data_K_Means_Clustering) # * [Analyse the clusters](#Analyse_the_clusters_Analyse_the_payments_data_K_Means_Clustering) # 4. [Analyse the transactions_data](#Analyse_the_transactions_data_K_Means_Clustering) # * [Determinete an optimal number of clusters](#Determinete_an_optimal_number_of_clusters_Analyse_the_transactions_data_K_Means_Clustering) # * [Train model](#Train_model_Analyse_the_transactions_data_K_Means_Clustering) # * [Analyse the clusters](#Analyse_the_clusters_Analyse_the_transactions_data_K_Means_Clustering) # 5. 
[Analyse the entire credit_analysis_data](#Analyse_the_entire_credit_analysis_data_K_Means_Clustering) # * [Determinete an optimal number of clusters](#Determinete_an_optimal_number_of_clusters_Analyse_the_entire_credit_analysis_data_K_Means_Clustering) # * [Train model](#Train_model_Analyse_the_entire_credit_analysis_data_K_Means_Clustering) # * [Analyse the clusters](#Analyse_the_clusters_Analyse_the_entire_credit_analysis_data_K_Means_Clustering) # + import pandas as pd import os import sys from os import listdir # Data files are available in the "../data/" directory # For example, running this will list all files under the data directory cur_dir = os.path.abspath('') data_dir = "%s\\data" % cur_dir for file_name in os.listdir(data_dir): print(file_name) # - # <a id="Supervised_Machine_Learning_Algorithms"></a> # ## Supervised Machine Learning Algorithms # <a id="K_Nearest_classification"></a> # ### K-Nearest classification: # The task is to create classification model to determine the exact species of the iris flowers based on the given parameters iris_flowers_filename = "iris-flowers.csv" iris_flowers = pd.read_csv("%s\%s" % (data_dir, iris_flowers_filename)) iris_flowers.head(10) # <a id="Validate_essential_K_Nearest_classification"></a> # * **<ins>Validate the data and leave only essential fields</ins>** iris_flowers = iris_flowers.drop(['Id'], axis=1) iris_flowers.isnull().any() # <a id="Split_K_Nearest_classification"></a> # * **<ins>Split data to train DataFrame and test Series:</ins>** from sklearn.model_selection import train_test_split #target y = iris_flowers['Species'] #features: X = iris_flowers.drop(['Species'], axis=1) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3) # <a id="Validate_spliting_K_Nearest_classification"></a> # * **<ins>Validate correct spliting:</ins>** print(X_train.shape) print(y_train.shape) print(X_test.shape) print(y_test.shape) assert X_train.shape[0] == y_train.shape[0] assert X_test.shape[0] == 
y_test.shape[0] assert X_train.shape[1] == X_test.shape[1] # <a id="Train_model_K_Nearest_classification"></a> # * **<ins>Train model:</ins>** from sklearn.neighbors import KNeighborsClassifier model = KNeighborsClassifier(n_neighbors=3) model.fit(X_train, y_train) # <a id="Get_prediction_K_Nearest_classification"></a> # * **<ins>Get prediction:</ins>** predictions = model.predict(X_test) # <a id="Validate_K_Nearest_classification"></a> # * **<ins>Validate model's prediction (accuracy_score, confusion_matrix):</ins>** from sklearn.metrics import accuracy_score, confusion_matrix # K1 accuracy 91-100% # K3 accuracy 86-97% # K3 accuracy 91-100% accuracy_score(y_test, predictions) confusion_matrix(y_test, predictions) # <a id="K_Nearest_regression"></a> # ### K-Nearest regression: # The task is to create model to used car price based on the given parameters second_hand_used_cars_filename = "second-hand-used-cars.csv" second_hand_used_cars = pd.read_csv("%s\%s" % (data_dir, second_hand_used_cars_filename)) second_hand_used_cars.head(10) # <a id="Validate_K_Nearest_regression"></a> # * **<ins>Validate the data, find the corelation and leave only essential fields:</ins>** second_hand_used_cars.isnull().any() second_hand_used_cars.corr() second_hand_used_cars.drop('v.id', axis=1, inplace=True) # <a id="Split_K_Nearest_regression"></a> # * **<ins>Split data to train DataFrame and test Series:</ins>** #target: y = second_hand_used_cars['current price'] #features: X = second_hand_used_cars.drop(['current price'], axis=1) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3) # <a id="Validate_Split_K_Nearest_regression"></a> # * **<ins>Validate correct spliting:</ins>** print(X_train.shape) print(y_train.shape) print(X_test.shape) print(y_test.shape) assert X_train.shape[0] == y_train.shape[0] assert X_test.shape[0] == y_test.shape[0] assert X_train.shape[1] == X_test.shape[1] # <a id="Train_K_Nearest_regression"></a> # * **<ins>Train model:</ins>** from 
sklearn.neighbors import KNeighborsRegressor # Change n_neighbors value here and validate the MAE (below) model = KNeighborsRegressor(n_neighbors=3) model.fit(X_train, y_train) # <a id="Get_prediction_K_Nearest_regression"></a> # * **<ins>Get prediction:</ins>** predictions = model.predict(X_test) # <a id="Validate_Model_K_Nearest_regression"></a> # * **<ins>Validate model's prediction (MAE):</ins>** from sklearn.metrics import mean_absolute_error # MAE (mean absolute error) has to be as less as posible # n_neighbors=1 -----> MAE == 26962.536666666667 # n_neighbors=2 -----> MAE == 26962.536666666667 # n_neighbors=3 -----> MAE == 21688.485555555555 mean_absolute_error(y_test, predictions) # <a id="Unsupervised_Machine_Learning_Algorithm"></a> # ## Unsupervised Machine Learning Algorithm # <a id="K_Means_Clustering"></a> # ### K- Means Clustering # Problem Statement: # 1. Create a model to find customers with similar credit patterns. # 2. Find major clusters and unusual clusters, small clusters - analyze the customer information in those clusters credit_analysis_data = "financial-data-credit-analysis.csv" credit_analysis_data = pd.read_csv("%s\%s" % (data_dir, credit_analysis_data)) credit_analysis_data.head() # <a id="Clean_data_K_Means_Clustering"></a> # * **<ins>Clean data</ins>** credit_analysis_data.isnull().any() raw_shape = credit_analysis_data.shape raw_shape credit_analysis_data = credit_analysis_data.dropna() credit_analysis_data.isnull().any() credit_analysis_data.dtypes credit_analysis_data['CUST_ID'] = credit_analysis_data['CUST_ID'].str.replace("C", "") credit_analysis_data.head() credit_analysis_data['CUST_ID'] = pd.to_numeric(credit_analysis_data['CUST_ID']) credit_analysis_data.dtypes # <a id="helpful_functions_K_Means_Clustering"></a> # * **<ins>Prepare helpful functions:</ins>** def get_min_values(df): min_values = {} for (column_name, column_data) in df.iteritems(): min_values[column_name] = column_data.min() print(min_values) return min_values 
def how_many_rows_have_been_deleted(df): clean_shape = df.shape clean_shape row_removed_percent = 100 - (clean_shape[0]/raw_shape[0]) * 100 print(str(round(row_removed_percent, 2)) + "% rows have been removed") def remove_zero_value(df): for (column_name, column_data) in df.iteritems(): if not column_data.min() > 0: df = df[df[column_name] != column_data.min()] get_min_values(df) return df # + from sklearn.cluster import KMeans import matplotlib.pyplot as plt # %matplotlib inline def determinate_optimal_number_of_clusters(df): ## we are applying K-means and trying to find the cost ## (intra cluster distance, sum of distance of point to the center of its respective cluster) cost = [] n_cluster = [] for k in range (1, 11): # Create a k-means model on our data, using k clusters. random_state helps ensure that the algorithm returns the same results each time. kmeans_model = KMeans(n_clusters=k, random_state=1) kmeans_model.fit(df) # These are our fitted labels for clusters -- the first cluster has label 0, the second has label 1, etc. 
labels = kmeans_model.labels_ # Sum of distances of samples to their closest cluster center interia = kmeans_model.inertia_ print("k:",k, " cost:", interia) cost.append(interia) n_cluster.append(k) # plotting the points plt.plot(n_cluster, cost) # naming the x axis plt.xlabel('number of clusters k') # naming the y axis plt.ylabel('cost') # giving a title to my graph plt.title('Elbow Plot') # function to show the plot plt.show() # - def train_model(df, k): kmeans_model = KMeans(n_clusters=k, random_state=1) kmeans_model.fit(df) labels = kmeans_model.labels_ return labels def add_cluster_id_column_to_df(df, labels): labels_df = pd.DataFrame(labels) labels_df.columns = ['cluster_id'] #Reset index: updated_df = df.reset_index() updated_df.drop('index',axis =1,inplace = True) #Concatenate IDs with DF updated_df = pd.concat([updated_df, labels_df],axis =1) updated_df.head() return updated_df # <a id="Separate_DF_K_Means_Clustering"></a> # * **Separate DF to <ins>general</ins>, <ins>frequency</ins>, <ins>payments</ins> and <ins>transactions</ins> DataFrames</ins>** general_data = credit_analysis_data[['CUST_ID', 'TENURE', 'CREDIT_LIMIT', 'BALANCE', 'PURCHASES', 'ONEOFF_PURCHASES', 'INSTALLMENTS_PURCHASES', 'CASH_ADVANCE']] general_data.head() frequency_data = credit_analysis_data[['CUST_ID', 'TENURE', 'CREDIT_LIMIT', 'BALANCE_FREQUENCY', 'PURCHASES_FREQUENCY', 'ONEOFF_PURCHASES_FREQUENCY', 'PURCHASES_INSTALLMENTS_FREQUENCY', 'CASH_ADVANCE_FREQUENCY']] frequency_data.head() payments_data = credit_analysis_data[['CUST_ID', 'TENURE', 'CREDIT_LIMIT', 'PAYMENTS', 'MINIMUM_PAYMENTS', 'PRC_FULL_PAYMENT']] payments_data.head() transactions_data = credit_analysis_data[['CUST_ID', 'TENURE', 'CREDIT_LIMIT', 'CASH_ADVANCE_TRX', 'PURCHASES_TRX']] transactions_data.head() # <a id="Analyse_the_general_data_K_Means_Clustering"></a> # ### 1. 
**Analyse the <ins>general_data:</ins>**

# <a id="Determinete_an_optimal_number_of_clusters_Analyse_the_general_data_K_Means_Clustering"></a>
# * **Determine an optimal number of clusters**

essential_general_data = general_data.loc[:, general_data.columns != 'CUST_ID']

determinate_optimal_number_of_clusters(essential_general_data)

# <a id="Train_model_Analyse_the_general_data_K_Means_Clustering"></a>
# * **<ins>Train model:</ins>**

# Selected K = 2 as the optimum number of clusters.
# Fit on the numeric columns only: CUST_ID is an identifier, not a feature,
# and the elbow plot above was computed without it.
labels = train_model(essential_general_data, 2)
labels

general_data = add_cluster_id_column_to_df(general_data, labels)

# <a id="Analyse_the_clusters_Analyse_the_general_data_K_Means_Clustering"></a>
# * **<ins>Analyse the clusters:</ins>**

import seaborn as sns

sns.pairplot(
    general_data,
    hue="cluster_id",
    palette='winter',
    # markers=["o", "P", "s", 'H'],
    y_vars=['CREDIT_LIMIT'],
    x_vars=[col for col in general_data.columns if col not in ['CUST_ID', 'CREDIT_LIMIT']]
)

# <a id="Analyse_the_frequency_data_K_Means_Clustering"></a>
# ### 2.
**Analyse the <ins>frequency_data:</ins>**

# <a id="Determinete_an_optimal_number_of_clusters_Analyse_the_frequency_data_K_Means_Clustering"></a>
# * **Determine an optimal number of clusters**

essential_frequency_data = frequency_data.loc[:, frequency_data.columns != 'CUST_ID']

determinate_optimal_number_of_clusters(essential_frequency_data)

# <a id="Train_model_Analyse_the_frequency_data_K_Means_Clustering"></a>
# * **<ins>Train model:</ins>**

# Selected K = 2 as the optimum number of clusters.
# Fit on the numeric columns only, consistent with the elbow plot above.
labels = train_model(essential_frequency_data, 2)

frequency_data = add_cluster_id_column_to_df(frequency_data, labels)

# <a id="Analyse_the_clusters_Analyse_the_frequency_data_K_Means_Clustering"></a>
# * **<ins>Analyse the clusters:</ins>**

sns.pairplot(
    frequency_data,
    hue="cluster_id",
    palette='spring',
    # markers=["o", "P", "s", 'H'],
    y_vars=['CREDIT_LIMIT'],
    x_vars=[col for col in frequency_data.columns if col not in ['CUST_ID', 'CREDIT_LIMIT']]
)

# <a id="Analyse_the_payments_data_K_Means_Clustering"></a>
# ### 3.
**Analyse the <ins>payments_data:</ins>**

# <a id="Determinete_an_optimal_number_of_clusters_Analyse_the_payments_data_K_Means_Clustering"></a>
# * **Determine an optimal number of clusters**

essential_payments_data = payments_data.loc[:, payments_data.columns != 'CUST_ID']

determinate_optimal_number_of_clusters(essential_payments_data)

# <a id="Train_model_Analyse_the_payments_data_K_Means_Clustering"></a>
# * **<ins>Train model:</ins>**

# Selected K = 2 as the optimum number of clusters.
# Fit on the numeric columns only, consistent with the elbow plot above.
labels = train_model(essential_payments_data, 2)

payments_data = add_cluster_id_column_to_df(payments_data, labels)

# <a id="Analyse_the_clusters_Analyse_the_payments_data_K_Means_Clustering"></a>
# * **<ins>Analyse the clusters:</ins>**

sns.pairplot(
    payments_data,
    hue="cluster_id",
    palette='summer',
    # markers=["o", "P", "s", 'H'],
    y_vars=['CREDIT_LIMIT'],
    x_vars=[col for col in payments_data.columns if col not in ['CUST_ID', 'CREDIT_LIMIT']]
)

# <a id="Analyse_the_transactions_data_K_Means_Clustering"></a>
# ### 4.
**Analyse the <ins>transactions_data</ins>:**

# <a id="Determinete_an_optimal_number_of_clusters_Analyse_the_transactions_data_K_Means_Clustering"></a>
# * **Determine an optimal number of clusters**

essential_transactions_data = transactions_data.loc[:, transactions_data.columns != 'CUST_ID']

determinate_optimal_number_of_clusters(essential_transactions_data)

# <a id="Train_model_Analyse_the_transactions_data_K_Means_Clustering"></a>
# * **<ins>Train model:</ins>**

# Selected K = 2 as the optimum number of clusters.
# Fit on the numeric columns only, consistent with the elbow plot above.
labels = train_model(essential_transactions_data, 2)

transactions_data = add_cluster_id_column_to_df(transactions_data, labels)

# <a id="Analyse_the_clusters_Analyse_the_transactions_data_K_Means_Clustering"></a>
# * **<ins>Analyse the clusters:</ins>**

sns.pairplot(
    transactions_data,
    hue="cluster_id",
    palette='autumn',
    # markers=["o", "P", "s", 'H'],
    y_vars=['CREDIT_LIMIT'],
    x_vars=[col for col in transactions_data.columns if col not in ['CUST_ID', 'CREDIT_LIMIT']]
)

# <a id="Analyse_the_entire_credit_analysis_data_K_Means_Clustering"></a>
# ### 5.
**Analyse the entire <ins>credit_analysis_data</ins>:**

# <a id="Determinete_an_optimal_number_of_clusters_Analyse_the_entire_credit_analysis_data_K_Means_Clustering"></a>
# * **Determine an optimal number of clusters**

essential_credit_analysis_data = credit_analysis_data.loc[:, credit_analysis_data.columns != 'CUST_ID']

determinate_optimal_number_of_clusters(essential_credit_analysis_data)

# <a id="Train_model_Analyse_the_entire_credit_analysis_data_K_Means_Clustering"></a>
# * **<ins>Train model:</ins>**

# Selected K = 2 as the optimum number of clusters.
# Fit on the numeric columns only, consistent with the elbow plot above.
labels = train_model(essential_credit_analysis_data, 2)

credit_analysis_data = add_cluster_id_column_to_df(credit_analysis_data, labels)

# <a id="Analyse_the_clusters_Analyse_the_entire_credit_analysis_data_K_Means_Clustering"></a>
# * **<ins>Analyse the clusters:</ins>**

features = [col for col in credit_analysis_data.columns if col not in ['CUST_ID', 'CREDIT_LIMIT', 'cluster_id']]
quarter_length = len(features) // 4

def plot_part_of_the_clusters(features_list):
    pairplot = sns.pairplot(
        credit_analysis_data,
        hue="cluster_id",
        palette='Accent',
        y_vars=['CREDIT_LIMIT'],
        x_vars=features_list,
    )
    # tight_layout() automatically adjusts the spacings
    pairplot.fig.tight_layout()
    # move the legend so it does not overlap the plot
    pairplot._legend.set_bbox_to_anchor((1.1, 1.1))

plot_part_of_the_clusters(features[:quarter_length])
plot_part_of_the_clusters(features[quarter_length:quarter_length*2])
plot_part_of_the_clusters(features[quarter_length*2:quarter_length*3])
plot_part_of_the_clusters(features[quarter_length*3:])
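The elbow plot used throughout this notebook is one heuristic for choosing k; the mean silhouette score is a common second check, peaking at the k whose clusters are best separated. The sketch below uses synthetic two-blob data in place of the credit dataframes, and assumes scikit-learn is available.

```python
# Second opinion on k, complementing the elbow plot: the mean silhouette
# score is highest when clusters are compact and well separated.
# Synthetic data stands in for the credit_analysis_data frames above.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
# Two well-separated blobs, so k = 2 should score highest.
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(8, 1, (100, 2))])

scores = {}
for k in range(2, 6):
    labels = KMeans(n_clusters=k, random_state=1, n_init=10).fit_predict(X)
    scores[k] = silhouette_score(X, labels)

best_k = max(scores, key=scores.get)
print(best_k)  # expect 2 for this synthetic data
```

Unlike inertia, which always decreases as k grows, the silhouette score can be compared directly across values of k.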
model.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# ## Business Understanding
#
# ### QUESTION
# How are the average Salary and JobSatisfaction distributed by the wished WorkStart?
# In the output, sort the chart by workstart time.
#
# ### Reason
# I decided to look at this specific case because I want to know whether survey participants who state a preferred workstart time are actually satisfied with their job.

# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from functions import create_mixed_plot

df = pd.read_csv('./data/survey_results_public.csv')
df.head()
# -

# ## Data understanding
#
# **1.** Looking at the data

"The dataframe contains {} rows and {} columns.".format(df.shape[0], df.shape[1])

# **2.** Select only the necessary columns

workstart_df = df[['WorkStart', 'Salary', 'JobSatisfaction']]
workstart_df

# **3.** Check how many NaN values are available in the WorkStart column

nulls = workstart_df[workstart_df['WorkStart'].isnull()]
len(nulls)

# ## Data Preparation
#
# **4.** Drop the NaN values of all columns and reset the index

workstart_df = workstart_df.dropna().reset_index()

# **5.** Looking at the dataframe values and their counts

workstart_df['WorkStart'].value_counts()

# ## Data modeling
#
# **6.** Create a custom dataframe which will be used below to map each WorkStart time to an id in the correct order

workstart_id_df = pd.DataFrame(
    {
        "id": [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23],
        "WorkStart": ["Midnight","1:00 AM","2:00 AM","3:00 AM","4:00 AM","5:00 AM","6:00 AM","7:00 AM","8:00 AM","9:00 AM","10:00 AM","11:00 AM","Noon","1:00 PM","2:00 PM","3:00 PM","4:00 PM","5:00 PM","6:00 PM","7:00 PM","8:00 PM","9:00 PM","10:00 PM","11:00 PM"]
    })

merged_df = pd.merge(workstart_id_df, workstart_df, on="WorkStart")
merged_df = merged_df.sort_values(['id'])
merged_df

# **7.** Show the average job satisfaction per workstart time, sorted in descending order

merged_df.groupby(['id','WorkStart'])['JobSatisfaction'].mean().reset_index(name='mean').sort_values(['mean'], ascending=False)

# **8.** Show the average salary per workstart time, sorted in descending order

merged_df.groupby(['id','WorkStart'])['Salary'].mean().reset_index(name='mean').sort_values(['mean'], ascending=False)

# ## Results evaluation
#
# **9.** Show a chart with the salaries on the primary y-axis and the job satisfaction on the secondary y-axis.
# The x-axis is filled with the id and the workstart to ensure the correct order of the times.

plt = create_mixed_plot(dataframe=merged_df, groupby=["id", "WorkStart"], primary_measure='Salary', \
                        secondary_measure='JobSatisfaction', title='Salary and JobSatisfaction per WorkStart')
plt.show()

# ### Answer:
# The highest average salary (**~67,672.37**) is for those who would start working at 5am.
# The lowest average salary (**~17,260.19**) is for those who would start working at 11pm.
# The highest job satisfaction (**7.75**) is at midnight, although the salary is quite low compared to the rest.
# As shown in the plot, people who would like to start at 1am have on average the lowest job satisfaction, **5.75**.
# The group that would like to start at 1am also has the second-lowest salary, **18,824.08**.
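The manual id-mapping dataframe above works well; as a sketch of an alternative (not the notebook's method), an ordered `pd.Categorical` makes `sort_values` and `groupby` follow clock order directly, without a helper id column. The toy frame below stands in for the survey data.

```python
# Alternative to the manual id mapping: declare WorkStart as an ordered
# Categorical, so sorting and grouping respect clock order directly.
# Toy data stands in for the survey dataframe used in this notebook.
import pandas as pd

times = ["Midnight"] + [f"{h}:00 AM" for h in range(1, 12)] + \
        ["Noon"] + [f"{h}:00 PM" for h in range(1, 12)]  # 24 labels, clock order

toy = pd.DataFrame({
    "WorkStart": ["9:00 AM", "Noon", "5:00 AM", "9:00 AM"],
    "Salary": [60000, 50000, 67000, 62000],
})
toy["WorkStart"] = pd.Categorical(toy["WorkStart"], categories=times, ordered=True)

# groupby on the ordered categorical yields groups in clock order
means = toy.groupby("WorkStart", observed=True)["Salary"].mean()
print(means.index.tolist())  # ['5:00 AM', '9:00 AM', 'Noon']
```

`observed=True` keeps only the times that actually occur; dropping it would list all 24 categories, with NaN means for the unused ones.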
.ipynb_checkpoints/Workstart_JobSatisfaction_Salary-checkpoint.ipynb
# -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.4.2 # language: julia # name: julia-1.4 # --- # # Examples of using mimetic field operators using CartesianGrids using Plots pyplot() default(grid = false) # ### Testing field types and operators i = 5; j = 5; nx = 12; ny = 12; w = Nodes(Dual,(5,4)) w .= reshape(1:20,5,4) w∘w C = Curl() D = Divergence() D*(C*w) w = Nodes(Dual,(12,12)); w[4,4] = 1.0; q = Edges(Primal,w) curl!(q,w) q = Edges(Dual,(8,6)); p = deepcopy(q); q.u[3,2] = 0.3; p.u[3,2] = 0.2; p∘q q = Edges(Dual,(5,4)) q.u .= reshape(1:16, 4, 4) q.v .= reshape(1:15, 5, 3) v = Edges(Primal,(5,4)) grid_interpolate!(v,q) v.v # + v = Edges(Primal,(100,200)) v.u[3:end-3,3:end-3] .= rand(size(v.u[3:end-3,3:end-3])...) v.v[3:end-3,3:end-3] .= rand(size(v.v[3:end-3,3:end-3])...) u = Edges(Primal,(100,200)) u.u[3:end-3,3:end-3] .= rand(size(v.u[3:end-3,3:end-3])...) u.v[3:end-3,3:end-3] .= rand(size(v.v[3:end-3,3:end-3])...); dq = EdgeGradient(Primal,v); # + v = Edges(Primal,(100,200)) v.u .= cos.(2π*(0:99)/99)*cos.(4π*(0:198)/198)'; u = Edges(Primal,(100,200)) u.u .= sin.(2π*(0:99)/99)*cos.(2π*(0:198)/198)'; dq = EdgeGradient(Primal,v); # - uv = EdgeGradient(Primal,u); #tensorproduct!(uv,u,v); divuv = zero(u) divergence!(divuv,u*v); # + utmp = zero(dq) dv = zero(dq) grid_interpolate!(utmp,u) grad!(dv,v) ugradvtmp = zero(dq) product!(ugradvtmp,transpose(utmp),dv); ugradv = zero(u) grid_interpolate!(ugradv,ugradvtmp); # - ugradv2 = zero(u) directional_derivative!(ugradv2,v,u); CartesianGrids.norm(ugradv2 - ugradv) w = zero(u) grid_interpolate!(w,divergence(u)) vdivu = zero(u) product!(vdivu,w,v); CartesianGrids.norm(divuv - ugradv - vdivu) plot(transpose(utmp)) # Laplacian of edge data = divergence of the gradient? 
# + grad!(dv,v) lapv = zero(v) divergence!(lapv,dv); lapv2 = laplacian(v); # - CartesianGrids.norm(lapv2-lapv) # Curl of the Laplacian of edge data = Laplacian of the curl? # + lapv = laplacian(v); curllapv = curl(lapv); lapcurlv = laplacian(curl(v)); CartesianGrids.norm(curllapv-lapcurlv) # - q = Edges(Primal,(8,6)) q.u[2,3] = 1.0 lapq = similar(q) laplacian!(lapq,q) divergence!(lapq,grad(q)) nx = 250; ny = 250; i = 40; j = 50; w = Nodes(Dual,(nx,ny)) w[i,j] = 1.0 E = plan_intfact(5,w) E25 = plan_intfact(2.5,w) E25*(E25*w)≈E*w @time E*w; plot(E*w) # + cellzero = Nodes(Dual,(nx,ny)) nodezero = Nodes(Primal,cellzero) facezero = Edges(Primal,cellzero) dualfacezero = Edges(Dual,cellzero) cellunit = deepcopy(cellzero) cellunit[i,j] = 1.0 # - w = Nodes(Dual,(50,10)); w[20,5] = 1.0 L = plan_laplacian(w,with_inverse=true) plot(L\w) laplacian(L\w) E = plan_intfact(1.0,(nx,ny)) E! = plan_intfact!(1.0,(nx,ny)) w = deepcopy(cellunit) plot(E!*w) C = Curl() C*w Nodes(Primal,cellunit) w = Nodes(Dual, 5, 4) w .= reshape(1:20, 5, 4) Ww = XEdges(Primal,w) grid_interpolate!(Ww,w) s = Nodes(Dual,5,4) s .= rand(5, 4) divergence(curl(s)) v = Nodes(Dual,(12,12)) v[4,5] = 1.0 v faceones = deepcopy(facezero) fill!(faceones,1.0) w = Nodes(Primal,v) w[4,5] = 1.0 grad(w) q = Edges(Dual,v) q.u[4,5] = 1.0 dq = grad(q) dq.dudy # #### Testing the inverse Laplacian L = plan_laplacian(nx,ny;with_inverse=true) @time L\cellunit cellunit_out = L*(L\cellunit) findmax(cellunit_out)
examples/Field operators.ipynb
# -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.1.1 # language: julia # name: julia-1.1 # --- # Packages using Knet, AutoGrad, LinearAlgebra, Base.Iterators, Statistics, Random, StatsBase, IterTools, Plots # Constants ENV["COLUMNS"] = 64 ARRAY=Array{Float64} # KnetArray{Float32} UPDATE=true # keep this true (false only useful for checking gradients) BSIZE=1 # keep batchsize=1 until larger ones supported XSIZE=28*28 YSIZE=10 HSIZE=[64] ALPHA=100.0 GAMMA=0.1 LAMBDA=0.995 ETA=0.1 MU0=0.0001 # Load minibatched MNIST data: include(Knet.dir("data","mnist.jl")) dtrn, dtst = mnistdata(xtype=ARRAY, batchsize=BSIZE) xtrn, ytrn, xtst, ytst = mnist() xtrn = ARRAY(reshape(xtrn,(XSIZE,:))) xtst = ARRAY(reshape(xtst,(XSIZE,:))); # + # Model definition and initialization struct MLP; W; b; μ; B; ∇g; function MLP(dims...;α=ALPHA) h,o = dims[end-1:end] W = initw.(dims[1:end-1],dims[2:end]) b = initb.(dims[2:end]) μ = initμ(h,o) B = initB(h,o;α=α) ∇g = init∇g(h) new(W, b, μ, B, ∇g) end end initw(i,o)=Param(ARRAY(xavier(o,i))) initb(o)=Param(ARRAY(zeros(o))) initμ(h,o)=ARRAY(MU0*randn(h,o)) initB(h,o;α=ALPHA)=(B = zeros(h,h,o); for i in 1:o, j in 1:h; B[j,j,i] = α; end; ARRAY(B)) init∇g(h)=ARRAY(zeros(h)) Base.show(io::IO, m::MLP)=print(IOContext(io,:compact=>true), "MLP", (size(m.W[1],2),length.(m.b)...)) # + # Featurevec, predict and loss functions function featurevector(m::MLP,x) L,y = length(m.W),mat(x) for l in 1:L-1 y = relu.(m.b[l] .+ m.W[l] * y) end return y end function (m::MLP)(x) # predict m.b[end] .+ m.W[end] * featurevector(m,x) end function (m::MLP)(x,labels;γ=GAMMA) # loss @assert length(labels)==1 "Batchsize > 1 not implemented yet." 
yfeat = featurevector(m,x) ypred = m.b[end] .+ m.W[end] * yfeat J = nll(ypred,labels) g = likelihoodratio(yfeat,labels,m) return J + γ * g end # + function likelihoodratio(y,labels,m; λ=LAMBDA, η=ETA, update=UPDATE) y = vec(y) M = size(m.μ,2) β = labels[1] # β(n) class label for the nth sample μᵦ₀ = m.μ[:,β] # μ[β(n)](n-1) exponentially weighted mean of class β(n) before the nth sample Bᵦ₀ = m.B[:,:,β] # B[β(n)](n-1) exponentially weighted inverse covariance matrix of class β(n) before the nth sample μᵦ₁ = λ * μᵦ₀ + (1-λ) * y y₀ = y - μᵦ₀ # ybar[L-1](n) the centralized feature vector y₁ = y - μᵦ₁ # ybar[L-1](n) the centralized feature vector z = Bᵦ₀ * y₀ # unscaled gradient ξ = 1 / ((1/(1-λ)) + (y₀' * Bᵦ₀ * y₀)) # gradient scaling A = (1/λ)*(Bᵦ₀ - z*z'*ξ) Bᵦ₁ = A-(1-λ)*η*A*A/(1+(1-λ)*η*tr(A)) # updated inverse covariance matrix q=Bᵦ₁*y₁ α=(y₁' * Bᵦ₁ * y₁) g=-1/2*logdet(Bᵦ₁)+1/2*α ∇g=(1-λ)*(1-α)*q+λ^2*Bᵦ₁*y₀ for j=1:M if (j!=β) μⱼ=m.μ[:,j] Bⱼ=m.B[:,:,j] ∇g+=-1/(M-1)*(Bⱼ*(y-μⱼ)) αⱼ=((y-μⱼ)'*Bⱼ*(y-μⱼ)) g+=1/(M-1)*(1/2*logdet(Bⱼ)-1/2*αⱼ) end end if training() # Store ∇g if differentiating m.∇g .= ∇g end if update # Update state if specified m.B[:,:,β] .= Bᵦ₁ m.μ[:,β] .= μᵦ₁ end return g end @primitive likelihoodratio(y,l,m;o...),dy dy*m.∇g # - # Macro for debugging macro summ(exs...) blk = Expr(:block) for ex in exs push!(blk.args, :(println($(sprint(Base.show_unquoted,ex)*" = "), summ(begin value=$(esc(ex)) end)))) end isempty(exs) || push!(blk.args, :value) return blk end summ(x)=(isbits(x) || isa(x,String) || isa(x,Symbol) ? 
repr(x) : summary(x)) #macro summ(exs...); esc(exs[1]); end # Experiment 1: check model functions UPDATE=false (x,labels) = first(dtrn) m = MLP(XSIZE,HSIZE...,YSIZE) @summ x @show labels @summ y = featurevector(m,x) @summ scores = m(x) @summ J=nll(scores,labels) @summ g=likelihoodratio(y,labels,m) @summ J + GAMMA * g @summ m(x,labels) UPDATE=true; # Experiment 2: check gradients using AutoGrad: @gcheck, gcheck (x,labels) = first(dtrn) m = MLP(XSIZE,HSIZE...,YSIZE) y = featurevector(m,x) py = Param(y) UPDATE=false @show @gcheck likelihoodratio(py,labels,m) @show @gcheck nll(m(x),labels) @show @gcheck m(x,labels) UPDATE=true # Experiment 3: train one epoch with regularization Random.seed!(1) m = MLP(XSIZE,HSIZE...,YSIZE) GAMMA=0.1 progress!(adam(m,dtst)) (acc=accuracy(m,dtst),nll=nll(m(xtst),ytst)) # Experiment 4: train one epoch without regularization Random.seed!(1) m = MLP(XSIZE,HSIZE...,YSIZE) GAMMA = 0 progress!(adam(m,dtst)) (acc=accuracy(m,dtst),nll=nll(m(xtst),ytst)) # Experiment 5: run to convergence with 100 instances Random.seed!(1) d100 = take(dtrn,100) countmap([Int(y[1]) for (x,y) in d100]) |> println # make sure labels are balanced for γ in (0,0.00001,0.0001,0.001,0.01,0.1,1.0,10.0,100.0,1000.0,10000.0) GAMMA = γ m = MLP(XSIZE,HSIZE...,YSIZE) a = collect(progress((adam!(m,d100);accuracy(m(xtst),ytst)) for i in 1:100)) fmax,imax = findmax(a) println((γ=γ,acc=fmax,iter=imax)) end #= best gamma = 0.0001 for 100 instances: (γ = 0, acc = 0.689, iter = 21) (γ = 1.0e-5, acc = 0.6877, iter = 18) (γ = 0.0001, acc = 0.6937, iter = 42) (γ = 0.001, acc = 0.6841, iter = 17) (γ = 0.01, acc = 0.6812, iter = 6) (γ = 0.1, acc = 0.6585, iter = 4) (γ = 1.0, acc = 0.6413, iter = 2) =# # Experiment 9: compute learning curve results9 = Dict() for p in 5:15, g in (0,0.0001) data = take(dtrn,2^p) GAMMA = g Random.seed!(1) m = MLP(XSIZE,HSIZE...,YSIZE) a = [ (adam!(m,data); accuracy(m(xtst),ytst)) for i in 1:100 ] println((n=2^p,γ=g,acc=maximum(a))) results9[(n=2^p,γ=g)] = a 
end #= Learning curve: (n = 32, γ = 0, acc = 0.5578) (n = 32, γ = 0.0001, acc = 0.5573) (n = 64, γ = 0, acc = 0.6832) (n = 64, γ = 0.0001, acc = 0.684) (n = 128, γ = 0, acc = 0.7236) (n = 128, γ = 0.0001, acc = 0.7236) (n = 256, γ = 0, acc = 0.798) (n = 256, γ = 0.0001, acc = 0.7965) (n = 512, γ = 0, acc = 0.8546) (n = 512, γ = 0.0001, acc = 0.8491) (n = 1024, γ = 0, acc = 0.8887) =# # Experiment 10: Compute training curve with no regularization Random.seed!(1) EPOCHS, TESTFREQ = 1, 1000 GAMMA, UPDATE = 0, false m0 = MLP(XSIZE,HSIZE...,YSIZE) r0 = collect(progress(accuracy(m0(xtst),ytst) for z in takenth(adam(m0,repeat(dtrn,EPOCHS)),TESTFREQ))); # Experiment 11: Compute training curve with regularization Random.seed!(1) EPOCHS, TESTFREQ = 1, 1000 GAMMA, UPDATE = 0.0001, true m1 = MLP(XSIZE,HSIZE...,YSIZE) r1 = collect(progress(accuracy(m1(xtst),ytst) for z in takenth(adam(m1,repeat(dtrn,EPOCHS)),TESTFREQ))); # Experiment 12: Compute training curve with regularization after an initial period Random.seed!(1) EPOCHS, TESTFREQ = 1, 1000 GAMMA, UPDATE = 0, false m2 = MLP(XSIZE,HSIZE...,YSIZE) r2 = collect(progress( (a=accuracy(m2(xtst),ytst); if a>0.9; global GAMMA,UPDATE=0.0001,true; end; a) for z in takenth(adam(m2,repeat(dtrn,EPOCHS)),TESTFREQ))); # Plot training curves Plots.default(fmt=:png,ls=:auto,legend=:bottomright) plot([r0 r1 r2])
mnist-likelihoodratio.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python [default]
#     language: python
#     name: python3
# ---

# # Display Web Map ArcGIS DevLab
# This is the completed solution for the [Display a Web Map](https://developers.arcgis.com/labs/data/python/display-webmap/) ArcGIS DevLab. [ArcGIS DevLabs](https://developers.arcgis.com/labs/) are short introductory tutorials to guide you through the three phases of building geospatial apps: Data, Design, Develop.

# +
from arcgis.gis import GIS

gis = GIS("https://www.arcgis.com")
# -

# Let's search for a Los Angeles Parks and Trails map.

webmaps = gis.content.search(query="LA Parks and Trails *", item_type="Web Map")
webmaps

# The index position of the LA Parks and Trails map in your search list may vary from the example.

webmap = webmaps[2]
webmap

# ## To display a web map in your notebook, query the `WebMap` object.

from arcgis.mapping import WebMap

la_parks_trails = WebMap(webmap)
la_parks_trails

# ## Challenge

op_layers = la_parks_trails['operationalLayers']
print("The webmap has {} layers.".format(len(op_layers)))

for lyr in op_layers:
    print("{}\n\t{}".format(lyr['id'], lyr['url']))
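Since the notebook warns that the index of the LA Parks and Trails item can vary between searches, matching on the item title is more robust than the hard-coded `webmaps[2]`. The sketch below uses stand-in objects because the live `gis.content.search` call isn't reproducible here; real arcgis Items expose a `.title` attribute in the same way.

```python
# Selecting a search result by title instead of by index position.
# SimpleNamespace objects stand in for arcgis Item results, which carry
# a .title attribute.
from types import SimpleNamespace

webmaps = [
    SimpleNamespace(title="LA Parks"),
    SimpleNamespace(title="LA Trails Draft"),
    SimpleNamespace(title="LA Parks and Trails"),
]

def pick_by_title(items, phrase):
    """Return the first item whose title contains phrase, else None."""
    return next((it for it in items if phrase in it.title), None)

webmap = pick_by_title(webmaps, "Parks and Trails")
print(webmap.title)  # LA Parks and Trails
```

This way the notebook keeps working even when the portal returns results in a different order.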
labs/display_web_map.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:gmaps] # language: python # name: conda-env-gmaps-py # --- # + # Dependencies # import numpy as np import pandas as pd # import matplotlib.pyplot as plt import requests from census import Census # import us # from us import states from sqlalchemy import create_engine from sqlalchemy.orm import Session # Census API Key from config import api_key c = Census(api_key, year=2016) # - # Using census api to pull fields of interest. Codes below are from Census website census_data=c.acs5.get(("NAME", "B19013_001E", "B01003_001E", "B02001_002E", "B02001_003E", "B02001_004E","B02001_005E","B02001_006E","B02001_007E","B02001_008E", "B02001_009E", "B02001_010E", "B01002_001E", "B19301_001E","B23025_001E","B23025_002E", "B23025_005E", "B17001_002E", "B01001_001E", "B01001_002E","B01001_026E", "B01001_007E", "B01001_008E", "B01001_009E", "B01001_010E", "B01001_011E","B01001_012E", "B01001_013E", "B01001_014E", "B01001_015E", "B01001_016E", "B01001_017E", "B01001_018E", "B01001_019E", "B01001_020E", "B01001_021E", "B01001_022E","B01001_023E", "B01001_024E","B01001_025E", "B01001_031E", "B01001_032E", "B01001_033E", "B01001_034E", "B01001_035E","B01001_036E", "B01001_037E", "B01001_038E", "B01001_039E", "B01001_040E", "B01001_041E", "B01001_042E", "B01001_043E", "B01001_044E", "B01001_045E", "B01001_046E","B01001_047E", "B01001_048E","B01001_049E", "B05001_001E", "B05001_006E"), {'for': 'state:*'}) # + # Convert to DataFrame census_pd = pd.DataFrame(census_data) # Column Renaming census_pd = census_pd.rename(columns={"B01003_001E": "Population", "B01002_001E": "Median Age", "B02001_002E": "White alone", "B02001_003E": "Black or African American alone", "B02001_004E": "American Indian and Alaska Native alone", "B02001_005E": "Asian alone", "B02001_006E": "Native Hawaiian and Other Pacific Islander 
alone", "B02001_007E": "Some other race alone", "B02001_008E": "Two or more races", "B02001_009E": "Two races including Some other race", "B02001_010E": "Two races excluding Some other race and three or more races", "B19013_001E": "Household Income", "B19301_001E": "Per Capita Income", "B17001_002E": "Poverty Count", "B23025_001E": "Total employable population over 16", "B23025_002E": "Total in labor force", "B23025_005E": "Civilian labor force Unemployed", "B01001_001E": "Population Total count by sex", "B01001_002E": "Total Male", "B01001_026E": "Total Female", "B01001_007E": "Male 18-19", "B01001_008E": "Male 20", "B01001_009E": "Male 21", "B01001_010E": "Male 22-24", "B01001_011E": "Male 25-29", "B01001_012E": "Male 30-34", "B01001_013E": "Male 35-39", "B01001_014E": "Male 40-44", "B01001_015E": "Male 45-49", "B01001_016E": "Male 50-54", "B01001_017E": "Male 55-59", "B01001_018E": "Male 60-61", "B01001_019E": "Male 62-64", "B01001_020E": "Male 65-66", "B01001_021E": "Male 67-69", "B01001_022E": "Male 70-74", "B01001_023E": "Male 75-79", "B01001_024E": "Male 80-84", "B01001_025E": "Male 85 and up", "B01001_031E": "Female 18-19", "B01001_032E": "Female 20", "B01001_033E": "Female 21", "B01001_034E": "Female 22-24", "B01001_035E": "Female 25-29", "B01001_036E": "Female 30-34", "B01001_037E": "Female 35-39", "B01001_038E": "Female 40-44", "B01001_039E": "Female 45-49", "B01001_040E": "Female 50-54", "B01001_041E": "Female 55-59", "B01001_042E": "Female 60-61", "B01001_043E": "Female 62-64", "B01001_044E": "Female 65-66", "B01001_045E": "Female 67-69", "B01001_046E": "Female 70-74", "B01001_047E": "Female 75-79", "B01001_048E": "Female 80-84", "B01001_049E": "Female 85 and up", "B05001_001E": "Citizenship population count citizen and non-citizen", "B05001_006E": "Non Citizen", "NAME": "state_name"}) # Add in Poverty Rate, Unemployement Rate, and citzenship rates. 
census_pd["Poverty Rate"] = 100 * \
    census_pd["Poverty Count"].astype(int) / census_pd["Population"].astype(int)

census_pd["Unemployment Rate"] = 100 * census_pd["Civilian labor force Unemployed"].astype(int) / census_pd["Total employable population over 16"].astype(int)

census_pd["% Non-Citizen"] = 100 * census_pd["Non Citizen"] / census_pd["Citizenship population count citizen and non-citizen"]
census_pd["% Citizen"] = 100 - census_pd["% Non-Citizen"]

# Calculate total female voting population (each age bracket counted exactly once)
census_pd["Female Voting population - 18&up"] = census_pd["Female 18-19"] + census_pd["Female 20"] + census_pd["Female 21"] + census_pd["Female 22-24"] + census_pd["Female 25-29"] \
    + census_pd["Female 30-34"] + census_pd["Female 35-39"] + census_pd["Female 40-44"] + census_pd["Female 45-49"] + census_pd["Female 50-54"] + census_pd["Female 55-59"] \
    + census_pd["Female 60-61"] + census_pd["Female 62-64"] + census_pd["Female 65-66"] + census_pd["Female 67-69"] + census_pd["Female 70-74"] + census_pd["Female 75-79"] \
    + census_pd["Female 80-84"] + census_pd["Female 85 and up"]

# Calculate total male voting population (each age bracket counted exactly once)
census_pd["Male Voting population - 18&up"] = census_pd["Male 18-19"] + census_pd["Male 20"] + census_pd["Male 21"] + census_pd["Male 22-24"] + census_pd["Male 25-29"] \
    + census_pd["Male 30-34"] + census_pd["Male 35-39"] + census_pd["Male 40-44"] + census_pd["Male 45-49"] + census_pd["Male 50-54"] + census_pd["Male 55-59"] \
    + census_pd["Male 60-61"] + census_pd["Male 62-64"] + census_pd["Male 65-66"] + census_pd["Male 67-69"] + census_pd["Male 70-74"] + census_pd["Male 75-79"] \
    + census_pd["Male 80-84"] + census_pd["Male 85 and up"]
# -

#Drop unneeded dataframe columns.
# Only want total age by voting group
census_pd = census_pd.drop(columns=["Female 18-19", "Female 20", "Female 21", "Female 22-24", "Female 25-29", "Female 30-34", "Female 35-39", "Female 40-44", "Female 45-49", "Female 50-54", "Female 55-59", "Female 60-61", "Female 62-64", "Female 65-66", "Female 67-69", "Female 70-74", "Female 75-79", "Female 80-84", "Female 85 and up", "Male 18-19", "Male 20", "Male 21", "Male 22-24", "Male 25-29", "Male 30-34", "Male 35-39", "Male 40-44", "Male 45-49", "Male 50-54", "Male 55-59", "Male 60-61", "Male 62-64", "Male 65-66", "Male 67-69", "Male 70-74", "Male 75-79", "Male 80-84", "Male 85 and up", "state"])

census_pd.columns = [column.lower().replace(' ', '_') for column in census_pd.columns]

# + jupyter={"outputs_hidden": true}
# View results for each state
census_pd.head(50)

# + jupyter={"outputs_hidden": true}
# Export the dataframe to a SQL database, then read it back to view all the data.
conn = 'sqlite:///voting.db'
engine = create_engine(conn, echo=False)
census_pd.to_sql('sample_database', if_exists='replace', con=engine)
sample_sql_database = engine.execute("SELECT * FROM sample_database").fetchall()
print(sample_sql_database)
# -
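A quick way to sanity-check a `to_sql` export like the one above is to read the table back with `pd.read_sql` and compare against the source frame. The sketch below uses an in-memory SQLite engine and a toy frame rather than the real census data.

```python
# Round-trip check for a to_sql export: write a frame, read it back,
# and verify shape and totals. In-memory SQLite keeps it self-contained;
# the toy frame stands in for census_pd.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("sqlite://")  # in-memory database

toy = pd.DataFrame({
    "state_name": ["Alabama", "Alaska"],
    "population": [4863300, 741894],
})
toy.to_sql("sample_database", con=engine, if_exists="replace", index=False)

back = pd.read_sql("SELECT * FROM sample_database", con=engine)
print(len(back))  # 2
```

`pd.read_sql` returns a DataFrame directly, which is usually more convenient for inspection than the list of raw tuples that `fetchall()` produces.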
notebook_census_api/Census_Query.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: qbo1d_pytorch # language: python # name: qbo1d_pytorch # --- # + from matplotlib.colors import Normalize from matplotlib.ticker import MultipleLocator import matplotlib.pyplot as plt import numpy as np import torch from qbo1d import adsolver, utils from qbo1d.stochastic_forcing import WaveSpectrum # %load_ext autoreload # %autoreload 2 # - def ax_pos_inch_to_absolute(fig_size, ax_pos_inch): ax_pos_absolute = [] ax_pos_absolute.append(ax_pos_inch[0]/fig_size[0]) ax_pos_absolute.append(ax_pos_inch[1]/fig_size[1]) ax_pos_absolute.append(ax_pos_inch[2]/fig_size[0]) ax_pos_absolute.append(ax_pos_inch[3]/fig_size[1]) return ax_pos_absolute # In some cases I found it necessary to use double precision torch.set_default_dtype(torch.float64) # # Demonstration # instantiating a solver using the default values, except for t_max which is set up for a 96 year long run solver = adsolver.ADSolver(t_max=360*96*86400, w=3e-4) model = WaveSpectrum(solver) # ### The following cell runs the solver -- takes about 20 sec for 96 years. u = solver.solve(source_func=model) # + # estimating QBO amplitudes and period spinup_time = 12*360*86400 amp25 = utils.estimate_amplitude(solver.time, solver.z, u, height=25e3, spinup=spinup_time) amp20 = utils.estimate_amplitude(solver.time, solver.z, u, height=20e3, spinup=spinup_time) tau25 = utils.estimate_period(solver.time, solver.z, u, height=25e3, spinup=spinup_time) # - # ### Plotting the solution # + fig_size = (06.90, 02.20+01.50) fig = plt.figure(figsize=fig_size) ax = [] ax.append(fig.add_axes(ax_pos_inch_to_absolute(fig_size, [00.75, 01.25, 06.00, 02.00]))) cmin = -np.max(np.abs(u.numpy())) cmax = np.max(np.abs(u.numpy())) xmin = 84. xmax = 96. ymin = 17. ymax = 35. ax[0].set_xlim(left=84.) ax[0].set_xlim(right=96.) ax[0].set_ylim(bottom=17.) ax[0].set_ylim(top=35.) 
h = [] h.append(ax[0].contourf(solver.time/86400/360, solver.z[:]/1000, u.numpy().T, 21, cmap="RdYlBu_r", vmin=cmin, vmax=cmax)) ax[0].axhline(25., xmin=0, xmax=1, color='black', linestyle='dashed', linewidth=1.) ax[0].axhline(20., xmin=0, xmax=1, color='black', linestyle='dashed', linewidth=1.) ax[0].set_ylabel('Km', fontsize=10) ax[0].set_xlabel('model year', fontsize=10) xticks_list = np.arange(xmin, xmax+1, 1) ax[0].set_xticks(xticks_list) yticks_list = np.arange(ymin, ymax+2, 2) ax[0].set_yticks(yticks_list) xticklabels_list = list(xticks_list) xticklabels_list = [ '%.0f' % elem for elem in xticklabels_list ] ax[0].set_xticklabels(xticklabels_list, fontsize=10) ax[0].xaxis.set_minor_locator(MultipleLocator(1.)) ax[0].yaxis.set_minor_locator(MultipleLocator(1.)) ax[0].tick_params(which='both', left=True, right=True, bottom=True, top=True) ax[0].tick_params(which='both', labelbottom=True) ax[0].text(84.50, 33, 'Zonal wind', backgroundcolor='white', horizontalalignment='left', verticalalignment='top', color='black') ax[0].text(95.50, 25, r'$\sigma_{25}$ = ' '%.1f' %amp25 + r' $\mathrm{m s^{-1}}$', horizontalalignment='right', verticalalignment='bottom', color='black') ax[0].text(95.50, 20, r'$\sigma_{20}$ = ' '%.1f' %amp20 + r' $\mathrm{m s^{-1}}$', horizontalalignment='right', verticalalignment='bottom', color='black') ax[0].text(84.50, 25, r'$\tau_{25}$ = ' '%.0f' %tau25 + ' months', horizontalalignment='left', verticalalignment='bottom', color='black') # # colorbars cbar_ax0 = fig.add_axes(ax_pos_inch_to_absolute(fig_size, [01.00, 00.50, 05.50, 00.10])) ax[0].figure.colorbar(plt.cm.ScalarMappable(cmap="RdYlBu_r"), cax=cbar_ax0, format='% 2.0f', boundaries=np.linspace(cmin, cmax, 21), orientation='horizontal', label=r'$\mathrm{m s^{-1}}$') # - # # Plotting the drags and shears # Next we plot the drags and shears to demonstrate that they are proportional. 
Note that the model instance keeps track of the drags during computation, so we only need to evaluate the shears. # + fig_size = (06.90, 07.00) fig = plt.figure(figsize=fig_size) ax = [] ax.append(fig.add_axes(ax_pos_inch_to_absolute(fig_size, [00.75, 04.50, 06.00, 02.00]))) ax.append(fig.add_axes(ax_pos_inch_to_absolute(fig_size, [00.75, 01.25, 06.00, 02.00]))) cmin = -np.max(np.abs(model.s.numpy())) cmax = np.max(np.abs(model.s.numpy())) xmin = 84. xmax = 96. ymin = 17. ymax = 35. ax[0].set_xlim(left=84.) ax[0].set_xlim(right=96.) ax[0].set_ylim(bottom=17.) ax[0].set_ylim(top=35.) h = [] h.append(ax[0].contourf(solver.time/86400/360, solver.z/1000, model.s.numpy().T, 21, cmap="RdYlBu_r", vmin=cmin, vmax=cmax)) ax[0].set_ylabel('Km', fontsize=10) ax[0].set_xlabel('model year', fontsize=10) xticks_list = np.arange(xmin, xmax+1, 1) ax[0].set_xticks(xticks_list) yticks_list = np.arange(ymin, ymax+2, 2) ax[0].set_yticks(yticks_list) xticklabels_list = list(xticks_list) xticklabels_list = [ '%.0f' % elem for elem in xticklabels_list ] ax[0].set_xticklabels(xticklabels_list, fontsize=10) ax[0].xaxis.set_minor_locator(MultipleLocator(1.)) ax[0].yaxis.set_minor_locator(MultipleLocator(1.)) ax[0].tick_params(which='both', left=True, right=True, bottom=True, top=True) ax[0].tick_params(which='both', labelbottom=True) ax[0].text(84.50, 33, 'Drag', backgroundcolor='white', horizontalalignment='left', verticalalignment='top', color='black') # # colorbars cbar_ax0 = fig.add_axes(ax_pos_inch_to_absolute(fig_size, [01.00, 03.75, 05.50, 00.10])) ax[0].figure.colorbar(plt.cm.ScalarMappable(cmap="RdYlBu_r"), cax=cbar_ax0, format='% 0.2e', boundaries=np.linspace(cmin, cmax, 21), orientation='horizontal', label=r'$\mathrm{m s^{-2}}$') cmin = -np.max(np.abs(torch.matmul(solver.D1, u.T).numpy())) cmax = np.max(np.abs(torch.matmul(solver.D1, u.T).numpy())) ax[1].set_xlim(left=84.) ax[1].set_xlim(right=96.) ax[1].set_ylim(bottom=17.) ax[1].set_ylim(top=35.) 
h = [] h.append(ax[1].contourf(solver.time/86400/360, solver.z/1000, torch.matmul(solver.D1, u.T).numpy(), 21, cmap="RdYlBu_r", vmin=cmin, vmax=cmax)) ax[1].set_ylabel('Km', fontsize=10) ax[1].set_xlabel('model year', fontsize=10) xticks_list = np.arange(xmin, xmax+1, 1) ax[1].set_xticks(xticks_list) yticks_list = np.arange(ymin, ymax+2, 2) ax[1].set_yticks(yticks_list) xticklabels_list = list(xticks_list) xticklabels_list = [ '%.0f' % elem for elem in xticklabels_list ] ax[1].set_xticklabels(xticklabels_list, fontsize=10) ax[1].xaxis.set_minor_locator(MultipleLocator(1.)) ax[1].yaxis.set_minor_locator(MultipleLocator(1.)) ax[1].tick_params(which='both', left=True, right=True, bottom=True, top=True) ax[1].tick_params(which='both', labelbottom=True) ax[1].text(84.50, 33, 'Wind shear', backgroundcolor='white', horizontalalignment='left', verticalalignment='top', color='black') # # colorbars cbar_ax0 = fig.add_axes(ax_pos_inch_to_absolute(fig_size, [01.00, 00.50, 05.50, 00.10])) ax[1].figure.colorbar(plt.cm.ScalarMappable(cmap="RdYlBu_r"), cax=cbar_ax0, format='% 0.2e', boundaries=np.linspace(cmin, cmax, 21), orientation='horizontal', label=r'$\mathrm{s^{-1}}$') # - # Note that the differentiation matrix used in the numerical scheme zeros out its first and last elements, so the computed shear vanishes at the top and bottom of the domain.
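The boundary behavior noted above can be illustrated with a toy centered-difference matrix. This is a sketch only: `centered_d1`, the grid spacing, and the wind profile are assumptions for illustration, not the model's actual `solver.D1` operator.

```python
import numpy as np

# Toy first-derivative (shear) operator with zeroed boundary rows.
# Rows 0 and n-1 are left at zero, so the computed shear is forced
# to zero at the top and bottom of the domain.
def centered_d1(n, dz):
    D1 = np.zeros((n, n))
    for i in range(1, n - 1):
        D1[i, i - 1] = -1.0 / (2.0 * dz)
        D1[i, i + 1] = 1.0 / (2.0 * dz)
    return D1

u = np.linspace(0.0, 10.0, 8)    # toy wind profile on 8 levels
shear = centered_d1(8, 1.0) @ u
# shear[0] and shear[-1] are exactly 0; interior entries are (u[i+1] - u[i-1]) / 2
```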
example_stochastic.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # - Load Data # + import pandas as pd data = pd.read_csv('..//..//data//input//GOP-debate-twiter-Sentiment.csv') data.head(5) # - data.shape data_sample = data.iloc[0:13871][['text','sentiment']] data_sample.head(5) # - Case folding # + data_sample['text'] = data_sample['text'].apply(lambda x: x.lower()) data_sample.head(5) # - # - Stopwords removal # + from nltk.corpus import stopwords from nltk import word_tokenize stopwords = stopwords.words('english') data_sample['text'] = data_sample['text'].apply(lambda text: [w for w in word_tokenize(text) if w not in stopwords]) data_sample.head(5) # - # - Punctuation # + from string import punctuation data_sample['text'] = data_sample['text'].apply(lambda text: [w for w in text if w not in punctuation]) data_sample.head(5) # - # - Digit removal data_sample['text'] = data_sample['text'].apply(lambda text: [w for w in text if not w.isdigit()]) data_sample.head(5) # - Non-ASCII Characters removal from string import printable data_sample['text'] = data_sample['text'].apply(lambda text: [word for word in text if all([char in printable for char in word])]) data_sample.head(5) # - Join data data_sample['text'] = data_sample['text'].apply(lambda text: ' '.join(text)) data_sample.head(5) # - TF & IDF # + from sklearn.model_selection import train_test_split X = data_sample['text'] y = data_sample['sentiment'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) # + from sklearn.feature_extraction.text import TfidfVectorizer vectorizer = TfidfVectorizer(lowercase=True, binary=False, use_idf=True, max_features=None) vectorizer.fit(X_train) X_train_vector =
vectorizer.transform(X_train) X_test_vector = vectorizer.transform(X_test) # - # - Machine Learning Model # 1) SVM # + from sklearn.svm import SVC model1 = SVC(kernel='linear') model1.fit(X_train_vector, y_train) # - from sklearn.metrics import classification_report y_pred1 = model1.predict(X_test_vector) print(classification_report(y_test, y_pred1)) # 2) Naive Bayes from sklearn.naive_bayes import MultinomialNB model2 = MultinomialNB() model2.fit(X_train_vector, y_train) from sklearn.metrics import classification_report y_pred2 = model2.predict(X_test_vector) print(classification_report(y_test, y_pred2)) # 3) Decision Tree from sklearn.tree import DecisionTreeClassifier model3 = DecisionTreeClassifier(criterion='entropy', random_state=0) model3.fit(X_train_vector, y_train) from sklearn.metrics import classification_report y_pred3 = model3.predict(X_test_vector) print(classification_report(y_test, y_pred3)) # 4) Random Forest Classification from sklearn.ensemble import RandomForestClassifier model4 = RandomForestClassifier(n_estimators=100, criterion='entropy', random_state=0) model4.fit(X_train_vector, y_train) from sklearn.metrics import classification_report y_pred4 = model4.predict(X_test_vector) print(classification_report(y_test, y_pred4)) # 5) Multi-Layer Perceptron # + from sklearn.neural_network import MLPClassifier model5 = MLPClassifier(hidden_layer_sizes=(50,50), activation='relu') model5.fit(X_train_vector, y_train) # - from sklearn.metrics import classification_report y_pred5 = model5.predict(X_test_vector) print(classification_report(y_test, y_pred5)) # - Which one is better?
# SVM with 0.70 # + from sklearn.pipeline import Pipeline estimators = [ ('vectorizer',TfidfVectorizer(lowercase=True, binary=False, use_idf=True, max_features=None)), ('model1',SVC()), ] pipeline = Pipeline(estimators) pipeline.fit(X_train, y_train) # + from sklearn.pipeline import Pipeline estimators = [ ('vectorizer',TfidfVectorizer(lowercase=True, binary=False, use_idf=True, max_features=None)), ('model2',MultinomialNB()), ] pipeline = Pipeline(estimators) pipeline.fit(X_train, y_train) # + from sklearn.pipeline import Pipeline estimators = [ ('vectorizer',TfidfVectorizer(lowercase=True, binary=False, use_idf=True, max_features=None)), ('model3',DecisionTreeClassifier()), ] pipeline = Pipeline(estimators) pipeline.fit(X_train, y_train) # + from sklearn.pipeline import Pipeline estimators = [ ('vectorizer',TfidfVectorizer(lowercase=True, binary=False, use_idf=True, max_features=None)), ('model4',RandomForestClassifier()), ] pipeline = Pipeline(estimators) pipeline.fit(X_train, y_train) # + from sklearn.pipeline import Pipeline estimators = [ ('vectorizer',TfidfVectorizer(lowercase=True, binary=False, use_idf=True, max_features=None)), ('model5', MLPClassifier()), ] pipeline = Pipeline(estimators) pipeline.fit(X_train, y_train) # -
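For reference, the TF-IDF weighting that `TfidfVectorizer` applies above can be sketched in plain NumPy. This is a toy illustration with made-up documents, not a drop-in replacement for the vectorizer; it follows the smooth-IDF formula `idf = ln((1 + n) / (1 + df)) + 1` with L2 row normalization, which matches scikit-learn's defaults.

```python
import numpy as np

# Toy TF-IDF computation: raw term counts, smoothed IDF, L2-normalized rows.
docs = [["good", "debate"], ["bad", "debate"], ["good", "good", "night"]]
vocab = sorted({w for d in docs for w in d})
tf = np.array([[d.count(w) for w in vocab] for d in docs], dtype=float)
df = (tf > 0).sum(axis=0)                       # document frequency per term
idf = np.log((1 + len(docs)) / (1 + df)) + 1    # smoothed inverse document frequency
tfidf = tf * idf
tfidf /= np.linalg.norm(tfidf, axis=1, keepdims=True)  # L2-normalize each document row
```

Terms that appear in every document (like "debate" here) get the minimum IDF of 1, while rarer terms are weighted up before normalization.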
codes/project/Project (Sentiment Social Media Analyse).ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="endangered-bearing" # ## Install dependencies # + id="qxdVX-7uPyQ4" colab={"base_uri": "https://localhost:8080/"} outputId="3c444aa0-c45d-4cba-97da-011790ee27b1" # + [markdown] id="aware-marijuana" # ## Download files # + id="UiueEhiIQbf_" # + [markdown] id="surface-denmark" # ## Display text and images # + [markdown] id="civic-occurrence" # # + [markdown] id="proper-remainder" # ## Get information about the FMU # + id="unexpected-passport" # + [markdown] id="embedded-bouquet" # ## Simulate the FMU and plot the result # + id="undefined-rebate" # + [markdown] id="promising-wrestling" # ## Upload files to the notebook # + id="compliant-expansion" # + [markdown] id="light-tragedy" # ## Download files from the notebook # + id="plastic-audio" # + [markdown] id="excellent-barbados" # ## Show docstring # + id="looking-tragedy" # + [markdown] id="military-beauty" # ## Open documentation # + id="warming-michael" # + [markdown] id="earned-surgeon" # ## Access the source code # + id="alternative-basin" # + [markdown] id="sharp-courage" # ## Show progress of long running simulations # + id="amino-suspect" # + [markdown] id="available-iraqi" # ## Move code to a library module # + id="victorian-infection" # + [markdown] id="incorrect-archives" # ## Compile the platform binary from source # + id="personalized-switzerland"
Heater.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # -*- coding: utf-8 -*- # @Time : 2021-05-07 2:54 p.m. # @Author : <NAME> # @FileName: lambda_gCNR.py # @Software: PyCharm """this script generates images for the figure 5 as seen in the paper. Sparse reconstructions of the same OCT middle ear image using the same learned dictionary for optimal values of the weighting parameter and lambda""" import numpy as np import matplotlib from matplotlib import pyplot as plt from misc import processing, quality, annotation import matplotlib.gridspec as gridspec from scipy.ndimage import median_filter from tabulate import tabulate from matplotlib.ticker import (MultipleLocator) import matplotlib.ticker # Define ROIs roi = {} width, height = (20, 10) roi['artifact'] = [[212, 142, int(width * 1.2), int(height * 1.2)]] roi['background'] = [[390, 260, int(width * 1.2), int(height * 1.2)]] roi['homogeneous'] = [[212, 165, int(width * 1.2), int(height * 1.2)], [390, 230, int(width * 1.2), int(height * 1.2)]] # Module level constants eps = 1e-14 bins = 32 w_lmbda = 0.05 def anote(ax,s,median_flag =False): legend_font = 15 text = r'${A}$' ax.annotate(text, xy=(roi['artifact'][0][0], roi['artifact'][0][1]), xycoords='data', xytext=(roi['artifact'][0][0] - 100, roi['artifact'][0][1] - 45), textcoords='data', fontsize=legend_font, color='white', fontname='Arial', arrowprops=dict(facecolor='white', shrink=0.025), horizontalalignment='left', verticalalignment='top') text = r'${H_{1}}$' ax.annotate(text, xy=(roi['homogeneous'][0][0], roi['homogeneous'][0][1] + height), xycoords='data', xytext=(roi['homogeneous'][0][0] - 50, roi['homogeneous'][0][1] + 50), textcoords='data', fontsize=legend_font, color='white', fontname='Arial', arrowprops=dict(facecolor='white', shrink=0.025), horizontalalignment='right', 
verticalalignment='top') text = r'${H_{2}}$' ax.annotate(text, xy=(roi['homogeneous'][1][0], roi['homogeneous'][1][1] + height), xycoords='data', xytext=(roi['homogeneous'][1][0] - 60, roi['homogeneous'][1][1]+10), textcoords='data', fontsize=legend_font, color='white', fontname='Arial', arrowprops=dict(facecolor='white', shrink=0.025), horizontalalignment='right', verticalalignment='top') text = r'${B}$' ax.annotate(text, xy=(roi['background'][0][0] + width, roi['background'][0][1] + height), xycoords='data', xytext=(roi['background'][0][0] + 2 * width, roi['background'][0][1] + 40), textcoords='data', color='white', fontname='Arial', arrowprops=dict(facecolor='white', shrink=0.025), horizontalalignment='left', verticalalignment='top') ax.set_axis_off() for i in range(len(roi['artifact'])): for j in annotation.get_artifact(*roi['artifact'][i]): ax.add_patch(j) for i in range(len(roi['homogeneous'])): for j in annotation.get_homogeneous(*roi['homogeneous'][i]): ax.add_patch(j) for i in range(len(roi['background'])): for j in annotation.get_background(*roi['background'][i]): ax.add_patch(j) h1 = quality.ROI(*roi['homogeneous'][0], s) h2 = quality.ROI(*roi['homogeneous'][1], s) ba = quality.ROI(*roi['background'][0], s) ar = quality.ROI(*roi['artifact'][0], s) if median_flag == True: textstr = '\n'.join(( r'${gCNR_{{H_1}/{A}}}$: %.2f' % (quality.log_gCNR(h1, ar,improvement=True)), r'${gCNR_{{H_2}/{A}}}$: %.2f' % (quality.log_gCNR(h2, ar,improvement=True)), r'${gCNR_{{H_2}/B}}$: %.2f' % (quality.log_gCNR(h2, ba,improvement=True)), r'${gCNR_{{H_1}/{H_2}}}$: %.2f' % (quality.log_gCNR(h1, h2,improvement=True)))) ax.text(0.55, 0.98, textstr, transform=ax.transAxes, fontsize=legend_font, verticalalignment='top', fontname='Arial', color='white') else: textstr = '\n'.join(( r'${SNR_{{H_2}/B}}$: %.1f $dB$' % (quality.SNR(h2, ba)), r'${C_{{H_2}/B}}$: %.1f $dB$' % (quality.Contrast(h2, ba)), r'${C_{{H_1}/{H_2}}}$: %.1f $dB$' % (quality.Contrast(h1, h2)))) ax.text(0.025, 0.98, 
textstr, transform=ax.transAxes, fontsize=legend_font, verticalalignment='top', fontname='Arial', color='white') textstr = '\n'.join(( r'${gCNR_{{H_1}/{A}}}$: %.2f' % (quality.log_gCNR(h1, ar)), r'${gCNR_{{H_2}/{A}}}$: %.2f' % (quality.log_gCNR(h2, ar)), r'${gCNR_{{H_2}/B}}$: %.2f' % (quality.log_gCNR(h2, ba)), r'${gCNR_{{H_1}/{H_2}}}$: %.2f' % (quality.log_gCNR(h1, h2)))) ax.text(0.55, 0.98, textstr, transform=ax.transAxes, fontsize=legend_font, verticalalignment='top', fontname='Arial', color='white') return ax def lmbda_search(s,lmbda,speckle_weight): x = processing.make_sparse_representation(s,D, lmbda,w_lmbda,speckle_weight) s_intensity = abs(s)**2 x_intensity = abs(x)**2 ho_s_1 = quality.ROI(*roi['homogeneous'][0], s_intensity) ho_s_2 = quality.ROI(*roi['homogeneous'][1], s_intensity) ho_x_1 = quality.ROI(*roi['homogeneous'][0], x_intensity) ho_x_2 = quality.ROI(*roi['homogeneous'][1], x_intensity) ar_s = quality.ROI(*roi['artifact'][0], s_intensity) ar_x = quality.ROI(*roi['artifact'][0], x_intensity) ba_s = quality.ROI(*roi['background'][0], s_intensity) ba_x = quality.ROI(*roi['background'][0], x_intensity) # calcuate image quality metrics #'gCNR ', 'H_1/A', gcnrh1a = quality.log_gCNR(ho_s_1, ar_s), quality.log_gCNR(ho_x_1, ar_x) #'gCNR', 'H_2/B', gcnrh2b = quality.log_gCNR(ho_s_2, ba_s), quality.log_gCNR(ho_x_2, ba_x) #'gCNR', 'H_1/H_2', gcnrh12 = quality.log_gCNR(ho_s_1, ho_s_2), quality.log_gCNR(ho_x_1, ho_x_2) #'gCNR', 'H_2/A', gcnrh2a = quality.log_gCNR(ho_s_2, ar_s), quality.log_gCNR(ho_x_2, ar_x) return (gcnrh1a,gcnrh2b,gcnrh12,gcnrh2a) def value_plot(lmbda,value): fig,ax = plt.subplots(1,1, figsize=(16,9)) ax.set_title(r'Generalized $CNR$ versus $𝜆$') reference = [] for i in range(4): temp = value[0] reference.append(temp[i][0]) gcnrh1a,gcnrh2b,gcnrh12,gcnrh2a = [],[],[],[] for i in range(len(value)): temp = value[i] gcnrh1a.append(temp[0][1]) gcnrh2b.append(temp[1][1]) gcnrh12.append(temp[2][1]) gcnrh2a.append(temp[3][1]) ax.plot(lmbda, 
gcnrh1a,color='green', label = r'${gCNR_{{H_1}/{A}}}$') ax.axhline(reference[0],color='green',linestyle = '--') ax.plot(lmbda, gcnrh2b,color='red',label = r'${gCNR_{{H_2}/{B}}}$') ax.axhline(reference[1],color='red',linestyle = '--') ax.plot(lmbda, gcnrh12, color='orange',label = r'${gCNR_{{H_1}/{H_2}}}$') ax.axhline(reference[2],color='orange',linestyle = '--') ax.plot(lmbda, gcnrh2a, color='purple',label = r'${gCNR_{{H_2}/{A}}}$') ax.axhline(reference[3],color='purple',linestyle = '--') ax.set_ylabel(r'${gCNR}$') ax.set_xlabel(r'$𝜆$') ax.set_xscale('log') ax.set_ylim(0,1) ax.legend() plt.tight_layout() plt.show() return lmbda[np.argmax(gcnrh2a)] def gCNRPlot(r1, r2, min, max,ax,median_flag = False,y_flag = False): region_r1 = np.ravel(r1) region_r2 = np.ravel(r2) if median_flag == True: log_r1 = processing.imag2uint(region_r1, min, max) log_r2 = processing.imag2uint(region_r2, min, max) else: log_r1 = processing.imag2uint(10 * np.log10(region_r1), min, max) log_r2 = processing.imag2uint(10 * np.log10(region_r2), min, max) weights = np.ones_like(log_r1) / float(len(log_r1)) ax.hist(log_r1, bins=bins, range=(0, 255), weights=weights, histtype='step', label=r'${H_1}$') ax.hist(log_r2, bins=bins, range=(0, 255), weights=weights, histtype='step', label=r'${H_2}$') ax.legend() ax.set_ylim(0,0.5) if y_flag == True: ax.set_ylabel('pixel percentage',fontsize=20) y_vals = ax.get_yticks() ax.set_yticklabels(['{:d}%'.format(int(x*100)) for x in y_vals]) pass else: ax.set_yticks([]) ax.set_ylabel('') return ax if __name__ == '__main__': #Image processing and display paramaters speckle_weight = 0.1 rvmin, vmax = 5, 55 #dB plt.close('all') # Customize matplotlib params matplotlib.rcParams.update( { 'font.size': 16, 'text.usetex': False, 'font.family': 'sans-serif', 'mathtext.fontset': 'stix', } ) file_name = 'finger' # Load the example dataset s, D = processing.load_data(file_name, decimation_factor=20) lmbda = np.logspace(-4,0,50) value = [] for i in range(len(lmbda)): 
value.append(lmbda_search(s,lmbda[i],0.05)) best = value_plot(lmbda,value) x = processing.make_sparse_representation(s,D, best,w_lmbda,speckle_weight) # Generate log intensity arrays s_intensity = abs(s) ** 2 x_intensity = abs(x) ** 2 s_log = 10 * np.log10(s_intensity) x_log = 10 * np.log10(x_intensity) ho_s_1 = quality.ROI(*roi['homogeneous'][0], s_intensity) ho_s_2 = quality.ROI(*roi['homogeneous'][1], s_intensity) ho_x_1 = quality.ROI(*roi['homogeneous'][0], x_intensity) ho_x_2 = quality.ROI(*roi['homogeneous'][1], x_intensity) ar_s = quality.ROI(*roi['artifact'][0], s_intensity) ar_x = quality.ROI(*roi['artifact'][0], x_intensity) ba_s = quality.ROI(*roi['background'][0], s_intensity) ba_x = quality.ROI(*roi['background'][0], x_intensity) fig = plt.figure(figsize=(16, 9),constrained_layout=True) gs = gridspec.GridSpec(ncols=4, nrows=2, figure=fig) ax = fig.add_subplot(gs[0,0]) ax.set_axis_off() ax.set_title('(a) reference') ax.imshow(s_log, 'gray', aspect=s_log.shape[1] / s_log.shape[0], vmax=vmax, vmin=rvmin, interpolation='none') anote(ax,s_intensity) ax = fig.add_subplot(gs[1, 0]) gCNRPlot(ho_s_1, ho_s_2, rvmin, vmax,ax,y_flag=True) ax = fig.add_subplot(gs[0,1]) textstr = r'(b) $𝜆$ = %.2f,$W$ = %.1f' % (best,speckle_weight) ax.set_title(textstr) ax.set_axis_off() ax.imshow(x_log, 'gray', aspect=x_log.shape[1] / x_log.shape[0], vmax=vmax, vmin=rvmin, interpolation='none') anote(ax,x_intensity) ax = fig.add_subplot(gs[1, 1]) gCNRPlot(ho_x_1, ho_x_2, rvmin, vmax,ax) b_log = median_filter(x_log, size=(3, 3)) ax = fig.add_subplot(gs[0, 2]) textstr = '\n'.join(( r'(c) $𝜆$ = %.2f ' % (best), r'$W$ = %.1f,3x3 median' % (speckle_weight))) ax.set_title(textstr) ax.imshow(b_log, 'gray', aspect=x_log.shape[1] / x_log.shape[0], vmax=vmax, vmin=rvmin, interpolation='none') anote(ax,x_intensity,median_flag = True) ho_b_1 = quality.ROI(*roi['homogeneous'][0], b_log) ho_b_2 = quality.ROI(*roi['homogeneous'][1], b_log) ar_b = quality.ROI(*roi['background'][0], b_log) ax = 
fig.add_subplot(gs[1, 2]) gCNRPlot(ho_b_1, ho_b_2, rvmin, vmax,ax, median_flag = True) ax = fig.add_subplot(gs[:,3]) ax.set_title(r'(d) generalized $CNR$ $vs.$ $𝜆$') reference = [] for i in range(4): temp = value[0] reference.append(temp[i][0]) gcnrh1a, gcnrh2b, gcnrh12, gcnrh2a = [], [], [], [] for i in range(len(value)): temp = value[i] gcnrh1a.append(temp[0][1]) gcnrh2b.append(temp[1][1]) gcnrh12.append(temp[2][1]) gcnrh2a.append(temp[3][1]) ax.semilogx(lmbda, gcnrh1a, color='green', label=r'${gCNR_{{H_1}/{A}}}$') ax.axhline(reference[0], color='green', linestyle='--') ax.semilogx(lmbda, gcnrh2b, color='red', label=r'${gCNR_{{H_2}/{B}}}$') ax.axhline(reference[1], color='red', linestyle='--') ax.semilogx(lmbda, gcnrh12, color='orange', label=r'${gCNR_{{H_1}/{H_2}}}$') ax.axhline(reference[2], color='orange', linestyle='--') ax.semilogx(lmbda, gcnrh2a, color='purple', label=r'${gCNR_{{H_2}/{A}}}$') ax.axhline(reference[3], color='purple', linestyle='--') ax.set_ylabel(r'${gCNR}$',fontsize=20) ax.set_xlabel(r'$𝜆$') ax.set_ylim(0.25, 1) locmaj = matplotlib.ticker.LogLocator(base=10, numticks=12) ax.xaxis.set_major_locator(locmaj) locmin = matplotlib.ticker.LogLocator(base=10.0, subs=(0.2, 0.4, 0.6, 0.8), numticks=12) ax.xaxis.set_minor_locator(locmin) ax.xaxis.set_minor_formatter(matplotlib.ticker.NullFormatter()) ax.legend(loc = 'best',fontsize = 13) plt.show() # table formant original then sparse table = [['SNR', 'H_2/B', quality.SNR(ho_s_2, ba_s), quality.SNR(ho_x_2, ba_x)], ['Contrast', 'H_2/B', quality.Contrast(ho_s_2, ba_s), quality.Contrast(ho_x_2, ba_x)], ['Contrast', 'H_1/H_2', quality.Contrast(ho_s_1, ho_s_2), quality.Contrast(ho_x_1, ho_x_2)], ['gCNR ', 'H_1/A', quality.log_gCNR(ho_s_1, ar_s), quality.log_gCNR(ho_x_1, ar_x)], ['gCNR', 'H_2/B', quality.log_gCNR(ho_s_2, ba_s), quality.log_gCNR(ho_x_2, ba_x)], ['gCNR', 'H_1/H_2', quality.log_gCNR(ho_s_1, ho_s_2), quality.log_gCNR(ho_x_1, ho_x_2)], ['gCNR', 'H_2/A', quality.log_gCNR(ho_s_2, ar_s), 
quality.log_gCNR(ho_x_2, ar_x)]] print(tabulate(table, headers=['IQA', 'Region', 'Reference image', 'Deconvolved image'], tablefmt='fancy_grid', floatfmt='.2f', numalign='right'))
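The gCNR values tabulated above come from `quality.log_gCNR`; the underlying idea of the generalized CNR — one minus the overlap of the two regions' intensity histograms — can be sketched as follows. The function below is an assumption for illustration (shared range, 32 bins, synthetic Gaussian regions); the quality module's exact binning and log-scaling may differ.

```python
import numpy as np

# Sketch of generalized CNR: gCNR = 1 - OVL, where OVL is the overlap
# between the normalized intensity histograms of two regions of interest.
def gcnr(r1, r2, bins=32):
    lo = min(r1.min(), r2.min())
    hi = max(r1.max(), r2.max())
    h1, _ = np.histogram(r1, bins=bins, range=(lo, hi))
    h2, _ = np.histogram(r2, bins=bins, range=(lo, hi))
    p1 = h1 / h1.sum()
    p2 = h2 / h2.sum()
    return 1.0 - np.minimum(p1, p2).sum()

rng = np.random.default_rng(0)
background = rng.normal(0.0, 1.0, 10_000)   # synthetic background region
tissue = rng.normal(3.0, 1.0, 10_000)       # synthetic homogeneous region
# identical regions give gCNR = 0; well-separated regions approach 1
```

Because it depends only on histogram overlap, gCNR is invariant to any monotonic remapping of pixel intensities, which is why it is a useful complement to SNR and contrast in the table above.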
scripts/lambda_gCNR.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # # If statement # ============ # # # As always I believe I should start each chapter with a warm-up typing exercise, so here is a short program to compute the absolute value of a number: # n = float(input("Number? ")) if n < 0: print("The absolute value of", n, "is", -n) else: print("The absolute value of", n, "is", n) # # Here is the output from the two times that I ran this program: # # + active="" # Number? -34 # The absolute value of -34.0 is 34.0 # # Number? 1 # The absolute value of 1.0 is 1.0 # - # So what does the computer do when it sees this piece of code? First it prompts the user for a number with the statement `n = float(input("Number? "))`. Next it reads the line `if n < 0:`. If `n` is less than zero, Python runs the line `print("The absolute value of", n, "is", -n)`. Otherwise Python runs the line `print("The absolute value of", n, "is", n)`. # # # More formally, Python looks at whether the *expression* `n < 0` is true or false. An if statement is followed by a *block* of statements that are run when the expression is true. Optionally, the if statement may be followed by an else statement, whose block is run if the expression is false. # # # There are several different tests that an expression can use. Here is a table of all of them: # # | operator | function | # | --- | --- | # | `<` | less than | # | `<=` | less than or equal to | # | `>` | greater than | # | `>=` | greater than or equal to | # | `==` | equal | # | `!=` | not equal | # # # # Another feature of the if command is the `elif` statement. It stands for "else if": when the original if expression is false but the elif expression is true, the elif block is run instead.
Here's an example: # a = 0 while a < 10: a = a + 1 if a > 5: print(a, " > ", 5) elif a <= 7: print(a, " <= ", 7) else: print("Neither test was true") # # and the output: # # + active="" # 1 <= 7 # 2 <= 7 # 3 <= 7 # 4 <= 7 # 5 <= 7 # 6 > 5 # 7 > 5 # 8 > 5 # 9 > 5 # 10 > 5 # - # # Notice how the `elif a <= 7` is only tested when the if statement fails to be true. elif allows multiple tests to be done in a single if statement. # # # Examples # ======== # # # # High\_low.py # # + #Plays the guessing game higher or lower # (originally written by <NAME>, improved by Quique) #This should actually be something that is semi-random like the # last digits of the time or something else, but that will have to # wait till a later chapter. (Extra Credit, modify it to be random # after the Modules chapter) number = 78 guess = 0 while guess != number: guess = int(input("Guess a number: ")) if guess > number: print("Too high") elif guess < number: print("Too low") print("Just right") # - # # Sample run: # # + active="" # Guess a number:100 # Too high # Guess a number:50 # Too low # Guess a number:75 # Too low # Guess a number:87 # Too high # Guess a number:81 # Too high # Guess a number:78 # Just right # - # # # even.py # # + #Asks for a number. #Prints if it is even or odd number = float(input("Tell me a number: ")) if number % 2 == 0: print(number, "is even.") elif number % 2 == 1: print(number, "is odd.") else: print(number, "is very strange.") # - # # Sample runs. # # + active="" # Tell me a number: 3 # 3.0 is odd. # # Tell me a number: 2 # 2.0 is even. # # Tell me a number: 3.14159 # 3.14159 is very strange. # - # # # average1.py # # + #keeps asking for numbers until 0 is entered. #Prints the average value. count = 0 sum = 0.0 number = 1 #set this to something that will not exit # the while loop immediately.
print("Enter 0 to exit the loop") while number != 0: number = float(input("Enter a number:")) count = count + 1 sum = sum + number count = count - 1 #take off one for the last number print("The average was:", sum/count) # - # # Sample runs # # + active="" # Enter 0 to exit the loop # Enter a number:3 # Enter a number:5 # Enter a number:0 # The average was: 4.0 # # Enter 0 to exit the loop # Enter a number:1 # Enter a number:4 # Enter a number:3 # Enter a number:0 # The average was: 2.66666666667 # - # # # average2.py # # + #keeps asking for numbers until count numbers have been entered. #Prints the average value. sum = 0.0 print("This program will take several numbers then average them") count = int(input("How many numbers would you like to sum:")) current_count = 0 while current_count < count: current_count = current_count + 1 print("Number ", current_count) number = float(input("Enter a number:")) sum = sum + number print("The average was:", sum/count) # - # # Sample runs # # + active="" # This program will take several numbers then average them # How many numbers would you like to sum:2 # Number 1 # Enter a number:3 # Number 2 # Enter a number:5 # The average was: 4.0 # # This program will take several numbers then average them # How many numbers would you like to sum:3 # Number 1 # Enter a number:1 # Number 2 # Enter a number:4 # Number 3 # Enter a number:3 # The average was: 2.66666666667 # - # Exercises # ========= # # Modify the password guessing program to keep track of how many times the # user has entered the password wrong. If it is more than 3 times, print # “That must have been complicated.” # Write a program that asks for two numbers. If the sum of the numbers # is greater than 100, print “That is a big number”. # Write a program that asks the user for their name. If they enter your name, # say “That is a nice name”; if they enter “<NAME>” or “Michael # Palin”, tell them how you feel about them ;); otherwise tell them “You # have a nice name”.
tutorial4.ipynb