# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Measuring similarity between categorical data records
#
# We are interested in the problem of detecting similar records within categorical data. These may be records which differ slightly due to data misentry, missing data fields, slight variations in reporting or coding, or a variety of other factors. Alternatively, they may just be natural groupings of records within a particular data set.
#
# We will look at this problem from the perspective of inducing a similarity or distance function over the set of all records and then clustering the results into piles of near-duplicate candidates for a human to either explore or to follow up and triage.
#
# There are a variety of interesting ways to measure similarity between categorical records. Many of them involve building a complex function testing matches and non-matches between rows. Unfortunately, these approaches don't easily scale to very large amounts of data since they typically require $O(n^2)$ similarity comparisons. For the purposes of this notebook we will make use of some recent advances in nearest neighbour search in order to reduce that complexity. That may initially force us to restrict ourselves to simpler models. In the case that these aren't sufficient, one could take an iterative approach where a coarse similarity measure is used to estimate which record pairs are likely near duplicates, and a refined (and more expensive) similarity measure is applied to only those candidate records.

# ## Representation

# In order to make use of modern techniques we need to convert our data from columns of categorical values into numerical vectors. There are a variety of ways to do this, but by far the most common is to use a **one hot encoding**. That is, to build a new column for each categorical variable name and value, and then place a one in that new column if that particular value is present within a particular row.
#
# Two of the most common tools for this are pandas' [get_dummies](https://pandas.pydata.org/pandas-docs/version/0.23.3/generated/pandas.get_dummies.html) function and scikit-learn's [OneHotEncoder](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html). Both are fine functions, but for large amounts of data (particularly if some of your columns have many values) you really want to ensure that you keep good track of your column names and maintain a sparse matrix representation. A sparse matrix representation is one where you never actually write down any of the zero values. This is important in a one-hot encoded representation because you have one column for every single categorical value in your data set. If, for example, one of your columns contained a name or phone number then you could easily have added hundreds of thousands of (mostly zero) columns to every row within your data matrix. For even medium sized data sets this can quickly exceed your computer's memory limit.
#
# For the purposes of this tutorial I'll make use of our Vectorizers library for performing this task.

# ## Import some libraries
#
# We'll need CategoricalColumnTransformer and NgramVectorizer from our vectorizers library along with a few helper functions for dealing with our data and plotting it.
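To make the sparse one-hot idea concrete before we move on, here is a minimal sketch using pandas' `get_dummies` on a tiny made-up frame. The column names and values here are hypothetical, not from the Health Canada data used below.

```python
import pandas as pd

# A tiny hypothetical categorical table standing in for real records.
df = pd.DataFrame({
    "gender": ["M", "F", "F"],
    "outcome": ["Recovered", "Death", "Recovered"],
})

# sparse=True keeps the zeros implicit; prefix_sep=":" records the originating
# column in each new column name so values from different columns can't collide.
one_hot = pd.get_dummies(df, prefix_sep=":", sparse=True)

# Columns come out as 'gender:F', 'gender:M', 'outcome:Death', 'outcome:Recovered'.
print(list(one_hot.columns))
print(one_hot.shape)
```

Each row stores only its two non-zero entries; with thousands of distinct values the memory savings become substantial.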
# + jupyter={"outputs_hidden": false}
from vectorizers import NgramVectorizer
from vectorizers.transformers import CategoricalColumnTransformer, InformationWeightTransformer
from sklearn.decomposition import TruncatedSVD
from random import sample
import pandas as pd
import numpy as np
import umap
import umap.plot
# -

# We'll add some bokeh imports for easy interactive plots

import bokeh.io
bokeh.io.output_notebook()

# ## Let's fetch some data
#
# Health Canada has made some of its reporting available to the public, so this is a nice, real world data set that we can use for examining potential near-duplicate entries amongst categorical records. The data set is [Canada Vigilance Adverse Reaction Online Database - Data Structure](https://www.canada.ca/en/health-canada/services/drugs-health-products/medeffect-canada/adverse-reaction-database/canada-vigilance-adverse-reaction-online-database-data-structures.html#a1). When there are adverse reactions to drugs within Canada, hospitals and manufacturers must report these problems to Health Canada. We'll make use of this data to see if we can find interesting structure contained within categorical record data.
#
# For ease of reproducibility, we'll download and extract the files directly in this notebook.

import requests
import zipfile
import os

# Download the file only if it isn't already on our computer.
if not os.path.isfile('extract_extrait.zip'):
    print("Failed to find the data file, downloading... ")
    results = requests.get('https://www.canada.ca/content/dam/hc-sc/migration/hc-sc/dhp-mps/alt_formats/zip/medeff/databasdon/extract_extrait.zip')
    with open("extract_extrait.zip", "wb") as code:
        code.write(results.content)

zipped = zipfile.ZipFile('extract_extrait.zip')
zipped.namelist()

# Extract the file if it isn't already uncompressed.
if not os.path.isfile('cvponline_extract_20210331/reactions.txt'):
    print("Failed to find the uncompressed input file, extracting... ")
    zipped.extractall()

# Now let's load up one of the more useful text files into a pandas data frame for manipulation.
#
# It should be noted that these are, unfortunately, only text dumps of the contents of a database and as such don't contain any reference to their column names. To get those column names we're forced to go to the web page linked above and cut and paste the column names directly into our code.

input_file = 'cvponline_extract_20210331/reports.txt'
data = pd.read_csv(input_file, sep='$', header=None, low_memory=False)
column_names = 'REPORT_ID,REPORT_NO,VERSION_NO,DATRECEIVED,DATINTRECEIVED,MAH_NO,REPORT_TYPE_CODE,REPORT_TYPE_ENG,REPORT_TYPE_FR,GENDER_CODE,GENDER_ENG,GENDER_FR,AGE,AGE_Y,AGE_UNIT_ENG,AGE_UNIT_FR,OUTCOME_CODE,OUTCOME_ENG,OUTCOME_FR,WEIGHT,WEIGHT_UNIT_ENG,WEIGHT_UNIT_FR,HEIGHT,HEIGHT_UNIT_ENG,HEIGHT_UNIT_FR,SERIOUSNESS_CODE,SERIOUSNESS_ENG,SERIOUSNESS_FR,DEATH,DISABILITY,CONGENITAL_ANOMALY,LIFE_THREATENING,HOSP_REQUIRED,OTHER_MEDICALLY_IMP_COND,REPORTER_TYPE_ENG,REPORTER_TYPE_FR,SOURCE_CODE,SOURCE_ENG,SOURCE_FR,E2B_IMP_SAFETYREPORT_ID,AUTHORITY_NUMB,COMPANY_NUMB'.split(",")
data.columns = column_names
data.shape

data.head(3)

# ## Restructure our data
#
# Now we need to restructure this data into a format for easy consumption by our Vectorizers library. This format will make it very efficient to measure the similarity between records that are made up of multiple categorical variables.
#
# To do this we must first decide which object from our data we are interested in examining and measuring similarity between. We are going to refer to these as our objects. In the case of this data our objects are our adverse reaction reports.
#
# We also need to decide which categorical columns in our data frame we are going to use as descriptors for these objects.
#
# We will use our <code>CategoricalColumnTransformer</code> from our Vectorizers library to construct a new pandas Series with the index being our objects and the values being the corresponding list of categorical values associated with each object. Notionally, we've decided to say that two objects are similar if they match on many of these categorical values.
#
# Since we're combining categorical values from multiple columns we should also append the column name to each categorical value in order to be sure not to confuse things. We'll set <code>include_column_name=True</code> in order to have our categorical descriptors take the form of the string <code>f'{column_name}:{value}'</code>.

# %%time
report_list = CategoricalColumnTransformer(object_column_name='REPORT_ID',
                                           descriptor_column_name=['GENDER_ENG', 'OUTCOME_ENG'],
                                           include_column_name=True).fit_transform(data)
len(report_list)

# Since we had one <code>REPORT_ID</code> for every row in our data frame we now have a series of the same length, with a list of at most two descriptors associated with each.

report_list.head(3)

report_list.apply(len).value_counts()

# We implicitly filter out all categorical values that are Null. As we see here, this can result in smaller or empty lists. Around 46,000 records only had one of our fields present and 2 of our records had neither field present.

# ### Enrich with reactions
#
# In order to find out what the reports we are interested in are actually about we need to enrich this data with the adverse reaction being reported. This information comes from the reactions table, which contains just over 3 million reactions associated with our reports. <code>REPORT_ID</code> is also contained in this table so it can also be used to describe our objects of interest, but now we will describe our reports as a list of the categorical values contained within the <code>PT_NAME_ENG</code> column. This is the Adverse Reaction Term column.
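# For intuition, the collect-into-lists step that CategoricalColumnTransformer performs can be approximated with a plain pandas groupby. This is only a conceptual sketch on a hypothetical toy table, not the transformer's actual implementation:

```python
import pandas as pd

# Hypothetical miniature version of a reactions-style table.
toy = pd.DataFrame({
    "REPORT_ID": [1, 1, 2],
    "PT_NAME_ENG": ["Nausea", "Headache", "Rash"],
})

# Prefix each value with its column name (mimicking include_column_name=True),
# then collect the values for each object into a list.
descriptors = (
    toy.dropna(subset=["PT_NAME_ENG"])
       .assign(token="PT_NAME_ENG:" + toy["PT_NAME_ENG"])
       .groupby("REPORT_ID")["token"]
       .apply(list)
)
print(descriptors.loc[1])  # ['PT_NAME_ENG:Nausea', 'PT_NAME_ENG:Headache']
```

# The result is a Series indexed by object id with list-of-descriptor values, which is exactly the shape the transformer hands back.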
data = pd.read_csv('cvponline_extract_20210331/reactions.txt', sep='$', header=None, low_memory=False)
column_names = 'REACTION_ID,REPORT_ID,DURATION,DURATION_UNIT_ENG,DURATION_UNIT_FR,PT_NAME_ENG,PT_NAME_FR,SOC_NAME_ENG,SOC_NAME_FR,MEDDRA_VERSION'.split(',')
data.columns = column_names
data.shape

# Of note here, we have more rows than objects of interest. That is because a single report can be responsible for multiple reactions. That is going to necessitate grouping up the rows with the same <code>REPORT_ID</code>. We could do this manually with a pandas groupby, but this is exactly what our CategoricalColumnTransformer is doing under the hood. As such, just as before, we simply specify our <code>object_column_name</code> and the columns from this table that we'd like to use as descriptors, and apply our transformer.

# %%time
reaction_list = CategoricalColumnTransformer(object_column_name='REPORT_ID',
                                             descriptor_column_name='PT_NAME_ENG',
                                             include_column_name=True).fit_transform(data)
len(reaction_list)

# Now, just as before, we have a series with an index that is <code>REPORT_ID</code> and values that are lists of descriptors (strings).

reaction_list.head(3)

# ## Drug ingredient data

# Lastly, we will include some categorical columns to describe the drug being reported on. This will give us the opportunity to see how we might make use of a pair of linked tables to enrich our data further. It will also give us enough descriptors to induce an interesting notion of similarity on our data. Similarity measures based on very few categorical values will end up with a tremendous number of records at exactly the same distance, making for an uninteresting metric space that is also difficult to work with.
#
# To do this we will enrich our reports with the name of the drug involved in the report and the active ingredients which make up that drug.
# We've included the active ingredients in this notion as well because there are a wide variety of drugs with different names that contain the same active ingredient. For example, there are over three hundred different drugs whose only active ingredient is listed as cannabis. We'd like our notion of distance to treat reports about these drugs as being similar.
#
# As before, we load the table that contains our active ingredients for each drug we know about.

input_file = 'cvponline_extract_20210331/drug_product_ingredients.txt'
data = pd.read_csv(input_file, sep='$', header=None, low_memory=False)
column_names = 'DRUG_PRODUCT_INGREDIENT_ID,DRUG_PRODUCT_ID,DRUGNAME,ACTIVE_INGREDIENT_ID,ACTIVE_INGREDIENT_NAME'.split(",")
data.columns = column_names
data.shape

# The first thing we notice is that this table has no concept of reports or a <code>REPORT_ID</code>. In order to get this information we will have to load another table which contains this linking information. Since we only want to use this table to link our drug ids with our report ids, we will only load the two columns of interest from our file.

input_file = 'cvponline_extract_20210331/report_drug.txt'
drug2report_joining = pd.read_csv(input_file, sep='$', header=None, low_memory=False, usecols=[1, 2])
drug2report_joining.columns = 'REPORT_ID,DRUG_PRODUCT_ID'.split(',')
drug2report_joining.shape

# Now we make use of pandas' excellent merge function to perform an inner join between these tables. This will create a new DataFrame that contains both our ingredient lists as well as the reports in which their corresponding drugs were contained.
#
# It should be noted that this is a much larger table due to the fact that any given drug may have many reports associated with it.
merged_data = data.merge(drug2report_joining, how='inner', on='DRUG_PRODUCT_ID')
merged_data.shape

# Lastly, we deploy our <code>CategoricalColumnTransformer</code> in order to reduce this data to a pandas Series containing lists of our categorical column values of interest, indexed by our column of interest, <code>REPORT_ID</code>. We've included <code>unique_values=True</code> in this transformer to ensure that any report that potentially contains multiple drugs with overlapping ingredients is only represented by the unique set of ingredients across all the drugs involved.

# %%time
report_drug_ingredient_list = CategoricalColumnTransformer(object_column_name='REPORT_ID',
                                                           descriptor_column_name=['DRUGNAME', 'ACTIVE_INGREDIENT_NAME'],
                                                           include_column_name=True,
                                                           unique_values=True).fit_transform(merged_data)
len(report_drug_ingredient_list)

report_drug_ingredient_list.head(3)

report_drug_ingredient_list.apply(len).value_counts().sort_index()

# We see that no report contains fewer than two tokens in our new representation: one from its DRUGNAME and one from its ACTIVE_INGREDIENT. In other words, all drugs have both a name and at least one active ingredient. On the other end of the spectrum there is a drug that has been reported on that contains closer to 814 active ingredients.
#
# That last statement should be taken with a grain of salt, since in this particular data set various synonyms for drug ingredients are sometimes recorded multiple times for a given drug.

# ## Combine our series
#
# Now we have three pandas Series, each indexed by <code>REPORT_ID</code> and containing a list of strings representing our categorical values.
#
# Since these series all share an index we can use pandas <code>concat</code> to join them together. For those familiar with database terminology, this can be done via either an outer or inner join. An outer join says that if an object is present in either series index it will be included in our results.
# An inner join requires the index to be present in all of our series.
#
# Outer joins may introduce NaN into our newly joined_df if some of our index values aren't present in all series. Since pandas doesn't allow us to fillna with lists, we would have to do this manually ourselves via the commented code included below. In this case we are performing an inner join, since we only care about reports that have both symptoms and drugs associated with them.

joined_df = pd.concat([report_list, reaction_list, report_drug_ingredient_list], axis='columns', join='inner')

# Use this code if you are doing an outer join
#for column in joined_df.columns:
#    joined_df[column] = joined_df[column].apply(lambda x: x if isinstance(x, list) else [])

print(f'{joined_df.shape[0]} records joined')

joined_df.head(3)

# We see that this has resulted in a pandas dataframe with one column derived from each of our series and an index that is our <code>REPORT_ID</code>.

# We'd now like to combine all the lists in the columns of this data frame into one big list per <code>REPORT_ID</code>. To do this we will take advantage of the fact that Python lists have overloaded the addition operator with concatenation. Adding two lists together just merges them in exactly the way we'd like. As such, what we'd like to do with our dataframe is just sum up all the values (i.e. lists) in every row. Adding two Series (i.e. columns) together just performs elementwise addition on all their elements.

joined_report = joined_df[0] + joined_df[1] + joined_df[2]
joined_report.head(3)

# Here is a quick summary of the distribution of how many reports have however many tokens. You see that many of our reports have between 4 and 20 tokens, while one of them has as many as 825 tokens associated with it. As we saw earlier, this large list is mostly made up of active drug ingredients.
joined_report.apply(len).value_counts().sort_index()

# Now we have a series with one entry for each <code>REPORT_ID</code> and a value that is a list of all the categorical descriptors we've derived from multiple columns across multiple tables.
#
# In the above analysis we have constructed our representation by joining multiple separate CategoricalColumnTransformer operations. Alternatively, we could have done this by building one large table containing all the columns of interest to us and then applying our CategoricalColumnTransformer to that single larger table. It is generally preferable to reduce the size of our tables via grouping and filtering before joining them together, which is why we went with the approach we have here. That said, depending on your database and compute access you could easily take a different approach.

# ## Ngram Vectorizer
#
# In order to quickly and efficiently compare variable length sequences of categorical tokens we will convert them into a sparse matrix format. This is a common practice in natural language processing, where documents are often described as a 'bag of words'. In essence we have a single column for each unique categorical value in our lists. That column has a one in it if that categorical value is present in our list. The sparse representation simply means that we don't encode any of the multitude of zeroes that would otherwise be contained in this matrix.
#
# We will do this with our NgramVectorizer. This is the simplest use case for our Ngram vectorizer, with the default value of N=1.
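# As an aside, if the vectorizers library isn't available, a comparable sparse bag-of-words matrix can be sketched with scikit-learn's CountVectorizer by passing an identity analyzer, since our records are already tokenized. The token lists below are made up for illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical token lists in the same shape as joined_report's values.
records = [
    ["GENDER_ENG:Male", "PT_NAME_ENG:Nausea"],
    ["GENDER_ENG:Female", "PT_NAME_ENG:Nausea", "PT_NAME_ENG:Rash"],
]

# An identity analyzer skips string splitting: each list is already a document.
vectorizer = CountVectorizer(analyzer=lambda tokens: tokens)
sparse_matrix = vectorizer.fit_transform(records)

print(sparse_matrix.shape)  # (2, 4): one column per distinct token
print(sparse_matrix.nnz)    # 5 stored non-zero entries
```

# The result is a scipy CSR matrix, the same kind of sparse structure the NgramVectorizer below produces.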
record_id_labels = list(joined_report.index)

# %%time
ngram_vectorizer = NgramVectorizer()
sparse_reports = ngram_vectorizer.fit_transform(list(joined_report))

print(f"ngram is now a {type(sparse_reports)} object with {sparse_reports.shape[0]} rows and {sparse_reports.shape[1]} columns\n"
      f"The important thing to note is that it only contains {sparse_reports.nnz} values representing its non-zero entries.\n"
      f"That is significantly sparser than its dense matrix representation which would contain {sparse_reports.shape[0]*sparse_reports.shape[1]} entries.\n"
      f"That is only about {100*sparse_reports.nnz/(sparse_reports.shape[0]*sparse_reports.shape[1]):.4f} percent of the entries.")

# ## Count the matches
#
# The simplest similarity score one can think of is to simply count the number of categorical values two records have in common. You might be tempted to normalize this score via something like Jaccard similarity, and there are certainly cases where that might be a good idea. That said, if we were looking for likely duplicates in the presence of missing data we might like two records that share 15 out of 20 existing categorical fields to be considered far more similar than two records that share only 3 out of their 4 existing categorical fields. In the case of a constant number of fields and no missing data this is, of course, a non-issue. In that case a metric like Hamming distance is probably more appropriate.

# Let's quickly look at the names of our first five columns.

for column_index in range(5):
    print(f"column {column_index} is called {ngram_vectorizer.column_index_dictionary_[column_index]}")

# Now a quick bit of code to look at how many non-missing entries we had in each row of our sparse matrix.

row_counts = pd.Series(np.array(sparse_reports.sum(1)).flatten())
row_counts.hist(bins=20);

# It's also often informative to look at the distribution of column sums for your sparse matrix. This is often referred to in mathematical circles as your column marginals.
# Because these are often exponentially distributed we'll look at them on a logarithmic scale with a base of 10.

pd.Series(np.log10(np.array(sparse_reports.sum(0)).flatten())).hist(bins=19);

pd.Series((np.array(sparse_reports.sum(0)).flatten())).value_counts().sort_index()

# We quickly see that over a fifth of our columns only showed up a single time (remember log10(1) = 0). This is likely due to the fine level of resolution being used to describe the active ingredients. A coarser description of these active ingredients would combine more ingredients and reduce the number of columns with only a single entry. Alternatively, you could spend some time clustering the ingredients themselves together or embed them into their own vector space based on some other data.
#
# You also have columns which showed up in about 60% of our reports. That shouldn't be surprising considering that two of our columns are associated with gender.

# ## Embed our data into a metric space and visualize
#
# Now that we have a simple numeric sparse matrix representation of our data we can apply a variety of dimension reduction techniques for reducing these approximately fifty-five thousand dimensional vectors down to something more manageable.
#
# For the purposes of this notebook we are going to simply combine our InformationWeightTransformer with the UMAP algorithm. InformationWeightTransformer will re-weight our columns to account for their marginal distributions across our data. It can be thought of as performing a similar function to the popular Term Frequency Inverse Document Frequency (TF-IDF) weighting. UMAP will attempt to preserve the local neighbourhood structure within our data and represent it within two dimensions for easy visualization and exploration.
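# For readers more familiar with text processing, the analogous scikit-learn re-weighting looks like the following. This is only an analogy to illustrate the idea of down-weighting ubiquitous columns on a tiny made-up count matrix; it is not a drop-in replacement for InformationWeightTransformer:

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.feature_extraction.text import TfidfTransformer

# Tiny hypothetical count matrix: 3 records, 3 token columns.
# The first column appears in every record; the others are rarer.
counts = csr_matrix(np.array([[1, 1, 0],
                              [1, 0, 1],
                              [1, 0, 0]]))

# TF-IDF down-weights columns that occur in many records relative to rare ones.
weighted = TfidfTransformer().fit_transform(counts)
row = weighted.toarray()[0]
print(row[0] < row[1])  # the ubiquitous column gets the smaller weight
```

# Columns shared by almost every record (like our gender fields) contribute little to telling records apart, which is exactly why this style of re-weighting helps before dimension reduction.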
info_weight_matrix = InformationWeightTransformer().fit_transform(sparse_reports)
report_matrix = TruncatedSVD(n_components=64).fit_transform(info_weight_matrix)

# Now let's map these down into a two dimensional Euclidean space for visualization. One should note that since we are embedding over 800,000 records this last step can be quite memory and time intensive. For the sake of efficiency we will sample our matrix down to a smaller number of records and only visualize those.

# %%time
selected = sample(range(report_matrix.shape[0]), 40000)
reducer = umap.UMAP(metric='cosine')
embedding = reducer.fit_transform(report_matrix[selected])

# Here is a scatter plot of our sampled reports. We've shaded each point by the log of the number of categorical values used to describe each report. There is quite a lot of interesting structure contained within even this small sample of our data. A more detailed follow up analysis with a larger sample and an interactive plot would be a good next step for an interested data scientist.

umap.plot.points(reducer, theme='fire', values=np.log10(row_counts[selected]));
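# To close the loop on the near-duplicate idea from the introduction, a natural next step is to query nearest neighbours in the reduced space and flag the closest pairs as duplicate candidates. Here is a sketch using scikit-learn's NearestNeighbors on synthetic stand-in vectors; for the full 800,000-record matrix an approximate index such as pynndescent would be the more scalable choice:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Synthetic stand-in for report_matrix: random vectors plus one planted near-duplicate.
rng = np.random.default_rng(42)
vectors = rng.normal(size=(100, 8))
vectors[1] = vectors[0] + 1e-3  # record 1 is an almost exact copy of record 0

nn = NearestNeighbors(n_neighbors=2, metric="cosine").fit(vectors)
distances, indices = nn.kneighbors(vectors)

# Column 0 of the results is each point itself; column 1 is its closest other record.
# Pairs below a small distance threshold become candidates for human triage.
candidate_pairs = [(i, j)
                   for i, (d, j) in enumerate(zip(distances[:, 1], indices[:, 1]))
                   if d < 0.01]
print(candidate_pairs)  # includes the planted pair (0, 1)
```

# The threshold here is arbitrary; in practice one would inspect the distance distribution first and pick a cutoff that yields a reviewable number of candidate pairs.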
doc/categorical_column_transformer_example.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # How to complete this assignment
# First, download [this Kaggle dataset](https://www.kaggle.com/hugomathien/soccer) and extract the *sqlite* database. You may need to register at https://www.kaggle.com/ beforehand. Then complete the 15 graded tasks below; the score for each is given in brackets. Finally, submit the resulting `.ipynb` file to rs-app Auto-test.
#
# - Do not delete or rename the variables given before the inscription `#your code here`; they are needed for the correct verification.
# - Do not change the code in the last Notebook cell; it is required for the server check.
# - Your Notebook must run completely without errors to be graded! Please check everything before submission by going *Cell -> Run All*

# ## Some important notes
# - If you need to **calculate the number of "something"** that means we expect you to assign an Integer to the given variable
# - If you need to **make a list of "something"** we expect you to assign a Python list with appropriate values to the given variable
# - If you need to find a **specific player, day of the week, team, etc.** we expect you to assign a String with the full name of the entity to the given variable (`player_name`, day of week full name, `team_name`, etc.)

# +
import sqlite3
import pandas as pd
import os
import numpy as np

pd.set_option('display.max_column', None)
# -

# Leave that code unchanged, it is required for the server check!
db = sqlite3.connect(os.environ.get("DB_PATH") or 'database.sqlite')

# You may load the data from a SQL table directly into a pandas dataframe:

player_data = pd.read_sql("SELECT * FROM Player;", db)

# We also load the other tables that we will use in these tasks

player_stats_data = pd.read_sql("SELECT * FROM Player_Attributes;", db)
teams_data = pd.read_sql("SELECT * FROM Team;", db)
match_data = pd.read_sql("SELECT * FROM Match;", db)
league_data = pd.read_sql("SELECT * FROM League;", db)

player_data.head()

# **Task 1 (0.25 point).** Calculate the number of players with a height between 180 and 190 inclusive

# +
# this is my solution
# players_180_190 = int(player_data['height'][(player_data['height'] >= 180.00) & (player_data['height'] <= 190.00)].count())

# this is the right solution
players_180_190 = player_data[(player_data['height'] >= 180) & (player_data['height'] <= 190)].shape[0]
# -

assert(isinstance(players_180_190, int))

# **Task 2 (0.25 point).** Calculate the number of players born in 1980. <br>
# **Hint:** you may want to cast your 'birthday' column to DateTime type by [pandas.to_datetime](https://pandas.pydata.org/docs/reference/api/pandas.to_datetime.html)

# +
# this is my solution
# players_1980 = int(pd.to_datetime(player_data['birthday'])[(player_data['birthday'] >= "1980-01-01") & (player_data['birthday'] <= "1980-12-31")].count())

# this is the right solution
player_data['birthday'] = pd.to_datetime(player_data['birthday'])
players_1980 = player_data[player_data['birthday'].dt.year == 1980].shape[0]
# -

assert(isinstance(players_1980, int))

# **Task 3 (0.25 point).** Make a list of the top 10 players with the highest weight sorted in descending order. If there are several players with the same weight put them in the lexicographic order by name.
# +
# this is my solution
# players = player_data[['player_name', 'weight']].sort_values(by=['weight'], ascending=False).head(10)
# highest_players = players['player_name'].values.tolist()

# this is the right solution
highest_players = player_data.sort_values(by=['weight', 'player_name'], ascending=[False, True])[:10]['player_name'].tolist()
# -

assert(len(highest_players) == 10)
assert(isinstance(highest_players, list))
for i in range(10):
    assert(isinstance(highest_players[i], str))

# **Task 4 (0.5 point).** Make a list of tuples containing years along with the number of players born in that year from 1980 up to 1990. <br>
# **Structure example**: [(1980, 123), (1981, 140) ..., (1990, 83)] -> There were 123 players born in 1980, 140 players born in 1981, etc.

# +
# this is my solution
# data_list = list()
# dataset = pd.to_datetime(player_data['birthday'])
# data_1980 = int(pd.to_datetime(player_data['birthday'])[(player_data['birthday'] >= "1980-01-01") & (player_data['birthday'] <= "1980-12-31")].count())
# data_1981 = int(pd.to_datetime(player_data['birthday'])[(player_data['birthday'] >= "1981-01-01") & (player_data['birthday'] <= "1981-12-31")].count())
# data_1982 = int(pd.to_datetime(player_data['birthday'])[(player_data['birthday'] >= "1982-01-01") & (player_data['birthday'] <= "1982-12-31")].count())
# data_1983 = int(pd.to_datetime(player_data['birthday'])[(player_data['birthday'] >= "1983-01-01") & (player_data['birthday'] <= "1983-12-31")].count())
# data_1984 = int(pd.to_datetime(player_data['birthday'])[(player_data['birthday'] >= "1984-01-01") & (player_data['birthday'] <= "1984-12-31")].count())
# data_1985 = int(pd.to_datetime(player_data['birthday'])[(player_data['birthday'] >= "1985-01-01") & (player_data['birthday'] <= "1985-12-31")].count())
# data_1986 = int(pd.to_datetime(player_data['birthday'])[(player_data['birthday'] >= "1986-01-01") & (player_data['birthday'] <= "1986-12-31")].count())
# data_1987 = int(pd.to_datetime(player_data['birthday'])[(player_data['birthday'] >= "1987-01-01") & (player_data['birthday'] <= "1987-12-31")].count())
# data_1988 = int(pd.to_datetime(player_data['birthday'])[(player_data['birthday'] >= "1988-01-01") & (player_data['birthday'] <= "1988-12-31")].count())
# data_1989 = int(pd.to_datetime(player_data['birthday'])[(player_data['birthday'] >= "1989-01-01") & (player_data['birthday'] <= "1989-12-31")].count())
# data_1990 = int(pd.to_datetime(player_data['birthday'])[(player_data['birthday'] >= "1990-01-01") & (player_data['birthday'] <= "1990-12-31")].count())
# data_list.append((1980, data_1980))
# data_list.append((1981, data_1981))
# data_list.append((1982, data_1982))
# data_list.append((1983, data_1983))
# data_list.append((1984, data_1984))
# data_list.append((1985, data_1985))
# data_list.append((1986, data_1986))
# data_list.append((1987, data_1987))
# data_list.append((1988, data_1988))
# data_list.append((1989, data_1989))
# data_list.append((1990, data_1990))
# years_born_players = data_list

# this is the right solution
player_data['birthday_year'] = player_data['birthday'].dt.year
born_years = player_data.groupby('birthday_year').size().loc[lambda x: (1980 <= x.index) & (x.index <= 1990)]
years_born_players = list(born_years.items())
# -

assert(len(years_born_players) == 11)
assert(isinstance(years_born_players, list))
for i in range(10):
    assert(isinstance(years_born_players[i], tuple))
    assert(isinstance(years_born_players[i][0], int))
    assert(isinstance(years_born_players[i][1], int))

# **Task 5 (0.5 point).** Calculate the mean and the standard deviation of the players' **height** with the name **Adriano**. <br>
# **Note:** Name is represented by the first part of `player_name`.
# +
# this is my solution
# adriano = player_data[['player_name', 'height']][(player_data['player_name'].str.match('^Adriano'))]
# adriano_mean, adriano_std = float(adriano.mean()), float(adriano.std())

# this is the right solution
mask = player_data['player_name'].str.startswith('Adriano')
adriano_mean, adriano_std = player_data[mask]['height'].mean(), player_data[mask]['height'].std()
# -

assert(isinstance(adriano_mean, float))
assert(isinstance(adriano_std, float))

# **Task 6 (0.75 point).** How many players were born on each day of the week? Find the day of the week with the minimum number of players born.

dow_with_min_players_born = player_data.groupby(player_data['birthday'].dt.day_name()).size().sort_values().keys()[0]

assert(isinstance(dow_with_min_players_born, str))

# **Task 7 (0.75 point).** Find a league with the most matches in total. If there are several leagues with the same amount of matches, take the first in the lexical order.

# +
league = league_data[['id', 'name']]
league.columns = ['league_id', 'league_name']

league_most_matches = match_data \
    .merge(league, 'left') \
    .groupby('league_name').size() \
    .reset_index() \
    .sort_values(by=[0, 'league_name'], ascending=[False, True]).iloc[0]['league_name']
# -

assert(isinstance(league_most_matches, str))

# **Task 8 (1.25 point).** Find a player who participated in the largest number of matches during the whole match history. Assign a `player_name` to the given variable

# +
# this is the right solution
import re

home_player_cols = [c for c in match_data.columns if re.match(r'home_player_\d\d?', c)]
away_player_cols = [c for c in match_data.columns if re.match(r'away_player_\d\d?', c)]
player_cols = home_player_cols + away_player_cols

max_matches_player_id = match_data \
    .melt(value_vars=player_cols) \
    .groupby('value').size() \
    .sort_values(ascending=False) \
    .keys()[0]

max_matches_player = player_data[player_data['player_api_id'] == max_matches_player_id]['player_name'].values[0]
# -

assert(isinstance(max_matches_player, str))

# **Task 9 (1.5 point).** List top-5 tuples of most correlated **player's characteristics** in the descending order of the absolute [Pearson's coefficient](https://en.wikipedia.org/wiki/Pearson_correlation_coefficient) value.
#
# **Note 1:** Players characteristics are all the columns in `Player_Attributes` table except `[id, player_fifa_api_id, player_api_id, date, preferred_foot, attacking_work_rate, defensive_work_rate]`. <br>
# **Note 2:** Exclude duplicated pairs from the list. E.g. ('gk_handling', 'gk_reflexes') and ('gk_reflexes', 'gk_handling') are duplicates, leave just one of them in the resulting list.
#
# **Hint:** You may use [dataframe.corr()](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.corr.html) for calculating pairwise Pearson correlation.
# +
# this is the right solution
# note: player_api_id must be excluded too, per Note 1 of the task
other_cols = ['id', 'player_fifa_api_id', 'player_api_id', 'date',
              'preferred_foot', 'attacking_work_rate', 'defensive_work_rate']
char_cols = [c for c in player_stats_data if c not in other_cols]

# corr below builds the correlation matrix pair by pair
# corr = player_stats_data[char_cols] \
#     .corr().abs().unstack().to_frame().reset_index()
# corr = corr[corr['level_0'] != corr['level_1']]
# corr['feature_pair'] = corr.apply(lambda x: (x['level_0'], x['level_1']), axis=1)
# corr['feature_pair'] = corr.apply(lambda x: (tuple(sorted(x['feature_pair']))), axis=1)
# top_correlated_features = corr[['feature_pair', 0]] \
#     .drop_duplicates().sort_values(by=0, ascending=False)[:5]['feature_pair'].values.tolist()

# an alternative solution using numpy
player_attributes = pd.read_sql('SELECT * FROM Player_Attributes;', db)
df = player_attributes.copy().drop(['id', 'player_fifa_api_id', 'player_api_id', 'date',
                                    'preferred_foot', 'attacking_work_rate', 'defensive_work_rate'], axis=1) \
                      .corr(method='pearson').abs()
val = df.values
i, j = np.tril_indices_from(val, -1)  # lower triangle only, so each pair appears once
top_correlated_features = pd.Series(val[i, j], [df.index[i], df.columns[j]]).nlargest(5).index.tolist()
# -

assert(len(top_correlated_features) == 5)
assert(isinstance(top_correlated_features, list))
for i in range(5):
    assert(isinstance(top_correlated_features[i], tuple))
    assert(isinstance(top_correlated_features[i][0], str))
    assert(isinstance(top_correlated_features[i][1], str))

# **Task 10 (2 points).** Find the top-5 players most similar to **Neymar** and return their names. The similarity is measured as the [Euclidean distance](https://en.wikipedia.org/wiki/Euclidean_distance) between vectors of players' characteristics (described in the task above). Put their names in a vector in ascending order by Euclidean distance, sorted by `player_name` if the distance is the same.<br>
# **Note 1:** There are many records for some players in the `Player_Attributes` table.
You need to take the freshest data (characteristics with the most recent `date`). <br>
# **Note 2:** Use pure values of the characteristics even if you are aware of such preprocessing techniques as normalization. <br>
# **Note 3:** Please avoid using any built-in methods for calculating the Euclidean distance between vectors, think about implementing your own.

# +
players_extended = player_stats_data \
    .merge(player_data[['player_api_id', 'player_name']], 'inner', 'player_api_id') \
    .sort_values('date').drop_duplicates(['player_api_id'], keep='last')

name = 'Neymar'
# select only the characteristic columns, so both operands have the same shape
neymar_vector = players_extended[players_extended['player_name'] == name][char_cols].values
players_vector = players_extended[char_cols].values

players_extended['neymar_similar'] = np.sqrt(np.sum((players_vector - neymar_vector) ** 2, axis=1))
neymar_similarities = players_extended \
    .sort_values(by=['neymar_similar', 'player_name'], ascending=True)['player_name'][1:6] \
    .values.tolist()
# -

assert(len(neymar_similarities) == 5)
assert(isinstance(neymar_similarities, list))
for i in range(5):
    assert(isinstance(neymar_similarities[i], str))

# **Task 11 (1 point).** Calculate the number of home matches played by the **Borussia Dortmund** team in **Germany 1.
Bundesliga** in season **2008/2009**.

# +
# this is the right solution
league = league_data[['id', 'name']]
league.columns = ['league_id', 'league_name']

match_league = match_data.merge(league, on=['league_id'])

home_teams = teams_data[['team_api_id', 'team_long_name']] \
    .rename(columns={'team_api_id': 'home_team_api_id', 'team_long_name': 'home_team_name'})
away_teams = teams_data[['team_api_id', 'team_long_name']] \
    .rename(columns={'team_api_id': 'away_team_api_id', 'team_long_name': 'away_team_name'})

match_team_league = match_league \
    .merge(home_teams, 'left', ['home_team_api_id']) \
    .merge(away_teams, 'left', ['away_team_api_id'])

borussia_bundesliga_2008_2009_matches = match_team_league[(match_team_league['league_name'] == 'Germany 1. Bundesliga') &
                                                          (match_team_league['season'] == '2008/2009') &
                                                          (match_team_league['home_team_name'] == 'Borussia Dortmund')].shape[0]
# -

assert(isinstance(borussia_bundesliga_2008_2009_matches, int))

# **Task 12 (1 point).** Find the team having the most matches (both home and away!) in the **Germany 1. Bundesliga** in the **2008/2009** season. Return the number of matches.

# +
# this is the right solution
match_team_target_league_season = match_team_league[(match_team_league['league_name'] == 'Germany 1. Bundesliga') &
                                                    (match_team_league['season'] == '2008/2009')]

team_most_matches_bundesliga_2008_2009 = match_team_target_league_season.groupby('home_team_name') \
    .size() \
    .add(match_team_target_league_season.groupby('away_team_name').size()) \
    .sort_values(ascending=False).iloc[0].item()
team_most_matches_bundesliga_2008_2009
# -

assert(isinstance(team_most_matches_bundesliga_2008_2009, int))

# **Task 13 (1 point).** Count the total number of **Arsenal** matches (both home and away!) in the **2015/2016** season that they won. <br><br>
# **Note:** Winning a game means scoring **more** goals than the opponent.
# +
# this is the right solution
def team_won(row):
    if row['home_team_goal'] > row['away_team_goal']:
        return row['home_team_name']
    elif row['home_team_goal'] < row['away_team_goal']:
        return row['away_team_name']
    else:
        return None

match_team_league['team_won'] = match_team_league.apply(team_won, axis=1)

arsenal_won_matches_2015_2016 = \
    match_team_league[(match_team_league['team_won'] == 'Arsenal') &
                      (match_team_league['season'] == '2015/2016')].shape[0]
# -

assert(isinstance(arsenal_won_matches_2015_2016, int))

# **Task 14 (2 points).** Find the team with the highest win rate in the **2015/2016** season. Win rate means won matches / all matches. If there are several teams with the highest win rate, return the first by name in lexical order.

team_highest_winrate_2015_2016 = ''

assert(isinstance(team_highest_winrate_2015_2016, str))

# **Task 15 (2 points).** Determine the team with the maximum days' gap between matches in the **England Premier League 2010/2011 season**. Return the number of days in that gap. <br>
# **Note**: a *gap* means the number of days between two consecutive matches of the same team.

highest_gap_england_2010_2011 = 0

assert(isinstance(highest_gap_england_2010_2011, int))

# ### Warning!
Do not change anything in the area below with open('student_answers.txt', 'w') as file: file.write(f"{players_180_190}\n") file.write(f"{players_1980}\n") file.write(f"{highest_players}\n") file.write(f"{years_born_players}\n") file.write(f"{round(adriano_mean, 3)} {round(adriano_std, 3)}\n") file.write(f"{dow_with_min_players_born}\n") file.write(f"{league_most_matches}\n") file.write(f"{max_matches_player}\n") file.write(f"{';'.join(['%s,%s' % tup for tup in top_correlated_features])};\n") file.write(f"{neymar_similarities}\n") file.write(f"{borussia_bundesliga_2008_2009_matches}\n") file.write(f"{team_most_matches_bundesliga_2008_2009}\n") file.write(f"{arsenal_won_matches_2015_2016}\n") file.write(f"{team_highest_winrate_2015_2016}\n") file.write(f"{highest_gap_england_2010_2011}\n")
Pandas_data_manipulations_tasks.ipynb
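Tasks 14 and 15 in the notebook above are left as empty stubs. As a hedged sketch only (not the notebook's graded answer), here is one way to approach both: a per-team win rate with a lexical tie-break, and a maximum gap in days between consecutive matches. The tiny `matches` frame below is hypothetical; its column names mirror the `match_team_league` frame built in Task 11.

```python
import pandas as pd

# Hypothetical miniature stand-in for match_team_league (same column names)
matches = pd.DataFrame({
    'date': pd.to_datetime(['2015-08-01', '2015-08-09', '2015-08-20', '2015-09-01']),
    'home_team_name': ['Arsenal', 'Chelsea', 'Arsenal', 'Chelsea'],
    'away_team_name': ['Chelsea', 'Arsenal', 'Chelsea', 'Arsenal'],
    'home_team_goal': [2, 0, 2, 3],
    'away_team_goal': [1, 1, 1, 0],
})

def win_rates(df):
    # won matches / all matches, counting both home and away appearances
    home_wins = df.loc[df['home_team_goal'] > df['away_team_goal'], 'home_team_name']
    away_wins = df.loc[df['away_team_goal'] > df['home_team_goal'], 'away_team_name']
    wins = pd.concat([home_wins, away_wins]).value_counts()
    played = pd.concat([df['home_team_name'], df['away_team_name']]).value_counts()
    return (wins.reindex(played.index, fill_value=0) / played).sort_index()

def max_gap_days(df, team):
    # largest number of days between two consecutive matches of one team
    dates = df.loc[(df['home_team_name'] == team) | (df['away_team_name'] == team), 'date']
    return int(dates.sort_values().diff().dt.days.max())

rates = win_rates(matches)
best_team = rates[rates == rates.max()].index.min()  # lexical tie-break
```

The real tasks would first filter `match_team_league` by `season == '2015/2016'` (Task 14) or by league and season (Task 15) before applying helpers like these.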
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Take Home Challenge: # ## Data Analysis and Visualization # # *Note: this notebook is included just as an example; feel free to use the tools with which you are most comfortable!* # # Analytics is about more than just numbers! At ICX, we take a holistic view of analysis, which includes gathering and interacting with data, performing statistical analyses, creating visualizations, and telling stories. For this challenge, you'll get to show off your end-to-end analytics chops! # # # ## Imports # + # Imports # - # ## Data Loading # + # Load the data # - # ## Statistical Analysis # + # Compute summary statistics # - # ## Visualization # + # Create a visualization # - # ## Storytelling # What are some takeaways from your analysis?
analytics/ghostbusters/ghostbusters_analysis.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="vN99YjPTDena" # # Polynomial Regression # + [markdown] colab_type="text" id="ZIx_naXnDyHd" # ## Importing the libraries # + colab={} colab_type="code" id="FjnmdyPLD2tS" import numpy as np import matplotlib.pyplot as plt import pandas as pd # + [markdown] colab_type="text" id="6c8YExmOD5x5" # ## Importing the dataset # + colab={} colab_type="code" id="nQOdXhjXD_AE" dataset = pd.read_csv('CA_housing.csv') dataset = dataset.dropna(axis=0) # - from sklearn.compose import ColumnTransformer from sklearn.preprocessing import OneHotEncoder ct = ColumnTransformer(transformers=[('encoder', OneHotEncoder(), [-1])], remainder='passthrough') X = pd.concat([dataset.iloc[:, :-2], dataset.iloc[:, -1]], axis=1).values X = np.array(ct.fit_transform(X)) y = dataset.iloc[:, -2:-1].values from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0) # + [markdown] colab_type="text" id="Le8SEL-YEOLb" # ## Training the Linear Regression model on the whole dataset # + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" id="2eZ4xxbKEcBk" outputId="41074f6d-44c7-4a04-fd49-14bda9fb2885" from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X_train, y_train) # + [markdown] colab_type="text" id="Rb5nWuSHEfBV" # ## Training the Polynomial Regression model on the whole dataset # + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" id="HYplp4pTEm0O" outputId="4c3c03dd-0def-4584-a893-aa2e72629e8f" from sklearn.preprocessing import PolynomialFeatures poly_reg = PolynomialFeatures(degree = 4) X_train_poly = poly_reg.fit_transform(X_train) lin_reg_2 = LinearRegression() 
lin_reg_2.fit(X_train_poly, y_train) # + [markdown] colab_type="text" id="0O8R0tzbEpvy" # ## Visualising the Linear Regression results # + colab={"base_uri": "https://localhost:8080/", "height": 295} colab_type="code" id="dcTIBAEdEyve" outputId="c242f259-d9e6-442a-f026-79dffab85972" # plt.scatter(X, y, color = 'red') # plt.plot(X, lin_reg.predict(X), color = 'blue') # plt.title('Truth or Bluff (Linear Regression)') # plt.xlabel('Position Level') # plt.ylabel('Salary') # plt.show() # + [markdown] colab_type="text" id="stOnSo74E52m" # ## Visualising the Polynomial Regression results # + colab={"base_uri": "https://localhost:8080/", "height": 295} colab_type="code" id="UCOcurIQE7Zv" outputId="93927499-de98-4a31-a619-c373926cbe56" # plt.scatter(X, y, color = 'red') # plt.plot(X, lin_reg_2.predict(poly_reg.fit_transform(X)), color = 'blue') # plt.title('Truth or Bluff (Polynomial Regression)') # plt.xlabel('Position level') # plt.ylabel('Salary') # plt.show() # + [markdown] colab_type="text" id="U_qsAMKnE-PJ" # ## Visualising the Polynomial Regression results (for higher resolution and smoother curve) # + colab={"base_uri": "https://localhost:8080/", "height": 295} colab_type="code" id="iE6EnC3fFClE" outputId="6ecb5687-3c8a-4b46-db4a-c4955c24b9de" # X_grid = np.arange(min(X), max(X), 0.1) # X_grid = X_grid.reshape((len(X_grid), 1)) # plt.scatter(X, y, color = 'red') # plt.plot(X_grid, lin_reg_2.predict(poly_reg.fit_transform(X_grid)), color = 'blue') # plt.title('Truth or Bluff (Polynomial Regression)') # plt.xlabel('Position level') # plt.ylabel('Salary') # plt.show() # + [markdown] colab_type="text" id="diyJFZHhFFeK" # ## Predicting a new result with Linear Regression # + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" id="Blmp6Hn7FJW6" outputId="f01610bc-b077-4df0-cae4-ea37c8b0037f" lin_reg.predict([[0.0, 1.0, 0.0, 0.0, 0.0, -121.24, 39.37, 16.0, 2785.0, 616.0, 1387.0, 530.0, 2.3886]]) # + [markdown] colab_type="text" id="DW7I7ZVDFNkk" # 
## Predicting a new result with Polynomial Regression

# + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" id="uQmtnyTHFRGG" outputId="2739bf8a-6dfb-4226-b200-252ee8857097"
# the transformer was already fitted on X_train, so transform() is enough here
lin_reg_2.predict(poly_reg.transform([[0.0, 1.0, 0.0, 0.0, 0.0, -121.24, 39.37, 16.0, 2785.0, 616.0, 1387.0, 530.0, 2.3886]]))
# -

y[-1]

y_pred = lin_reg_2.predict(poly_reg.transform(X_test))
np.set_printoptions(precision=2)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))

df = pd.DataFrame(data=np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1),
                  columns=['Predicted ($)', 'Actual ($)'])
df

# +
# import dataframe_image as dfi
# dfi.export(df, 'act_pred_poly_4.png', max_rows=5)
# -

px = np.linspace(0, max(y_test), int(max(y_test)))
py = np.linspace(0, max(y_test), int(max(y_test)))
plt.figure(figsize=(10,6))
import seaborn as sns
sns.set()
plt.scatter(y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1), color = 'red')
plt.plot(px, py, color='blue')
plt.title('True vs Predicted Median Home Values (Polynomial Regression)')
plt.xlabel('Predicted Values')
plt.ylabel('True Values')
plt.show()

print(lin_reg_2.coef_)
print(lin_reg_2.intercept_)

from sklearn.metrics import r2_score
print('R2: ', r2_score(y_test, y_pred))
print('Adjusted R2: ', 1-(1-r2_score(y_test, y_pred))*((len(X_test)-1)/(len(X_test)-len(X_test[0])-1)))

from sklearn.metrics import mean_squared_error
mean_squared_error(y_test, y_pred, squared=False)
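The adjusted R² printed above inlines the whole formula; a small pure-Python helper makes the pieces explicit. The toy numbers are illustrative only, not the housing predictions:

```python
def r2_simple(y_true, y_pred):
    # R^2 = 1 - SS_res / SS_tot
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

def adjusted_r2(r2, n, p):
    # n = number of samples, p = number of predictors
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5, 0.0, 2.0, 8.0]
r2 = r2_simple(y_true, y_pred)
adj = adjusted_r2(r2, n=100, p=13)  # pretend 100 samples, 13 predictors
```

Because the penalty factor `(n - 1) / (n - p - 1)` is greater than one, the adjusted value is always below the raw R² whenever the fit is imperfect.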
Regression/.ipynb_checkpoints/Polynomial_Regression-checkpoint.ipynb
# -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.7.0 # language: julia # name: julia-1.7 # --- # # AR(1) + GARCH(1,1) Model # # This notebook estimates an AR(1) where the residuals follow a GARCH(1,1) model. The results are used to calculate a time-varying (daily) value at risk. # # For learning purposes the code specifies the likelihood function, calls on a optimization function and calculates (numerical) derivatives. For a more automated approach, consider the [ARCHModels.jl](https://github.com/s-broda/ARCHModels.jl) package. # ## Load Packages and Extra Functions # + using Printf, Dates, DelimitedFiles, Statistics, LinearAlgebra, Optim using FiniteDiff: finite_difference_hessian as hessian, finite_difference_jacobian as jacobian include("jlFiles/printmat.jl") # + using Plots #pyplot(size=(600,400)) #pyplot() or gr() gr(size=(480,320)) default(fmt = :png) # - # ## Loading Data # + xx = readdlm("Data/FFdSizePs.csv",',',skipstart=1) ymd = round.(Int,xx[:,1]) #YearMonthDay, like 20121231 R = xx[:,2] #returns for the smallest size portfolio xx = nothing y = R[2:end] #dependent variable, y(t) x = [ones(size(y)) R[1:end-1]] #regressors, [1, y(t-1)] dN = Date.(string.(ymd),"yyyymmdd"); #to Julia dates # - # ## The Likelihood Function # Consider a regression equation, where the residual follows a GARCH(1,1) process # # $ # y_{t} =x_{t}^{\prime}b+u_{t} \: \text{ with }\: u_{t}=v_{t}\sigma_{t} \: \text{ and } # $ # # $ # \sigma_{t}^{2} =\omega+\alpha u_{t-1}^{2}+\beta\sigma_{t-1}^{2}. # $ # # Notice that we require $(\omega,\alpha,\beta)$ to all be positive and $\alpha + \beta < 1$. # # If $v_{t}\sim N(0,1)$, then the likelihood function is # # $ # \ln L=-\frac{T}{2}\ln(2\pi) # -\frac{1}{2}\sum_{t=1}^{T}\ln\sigma_{t}^{2}- # \frac{1}{2}\sum_{t=1}^{T}\frac{u_{t}^{2}}{\sigma_{t} # ^{2}}. 
# $ # # The likelihood function of a GARCH(1,1) model is in `garch11LL`. # + function garch11LL(par,y,x) (T,k) = (size(x,1),size(x,2)) b = par[1:k] #mean equation, y = x'*b (omega,alpha,beta1) = par[k+1:k+3] #GARCH(1,1) equation: #s2(t) = omega + alpha*u(t-1)^2 + beta1*s2(t-1) yhat = x*b u = y - yhat s2_0 = var(u) s2 = zeros(typeof(alpha),T) #works with ForwardDiff s2[1] = omega + alpha*s2_0 + beta1*s2_0 #simple, but slow approach for t = 2:T #using filter() is perhaps quicker s2[t] = omega + alpha*u[t-1]^2 + beta1*s2[t-1] end LL_t = -(1/2)*log(2*pi) .- (1/2)*log.(s2) .- (1/2)*(u.^2)./s2 LL_t[1] = 0.0 #effectively skip the first observation LL = sum(LL_t) return LL, LL_t, s2, yhat, u end # - # ## Try the Likelihood Function # + par0 = [mean(y),0,var(y)*0.05,0.05,0.90] #initial parameter guess (LL,) = garch11LL(par0,y,x) #testing the log lik printlnPs("Value of log-likelihood fn at starting guess of the parameters: ",LL) # - # ## Maximize the Likelihood Function # + lower = [-Inf,-0.9999,0,0,0] #upper and lower bounds on the parameters: b0,b1,omega,alpha,beta upper = [Inf,0.9999,Inf,1,1] Sol = optimize(par->-garch11LL(par,y,x)[1],lower,upper,par0) #minimize -LL parHat = Optim.minimizer(Sol) #extract the optimal solution printblue("Parameter estimates:\n") coefNames = ["b₀","b₁","ω","α","β"] printmat(parHat,colNames=["coef"],rowNames=coefNames) # - # ## Standard Errors of the Estimates # # MLE is typically asymptotically normally distributed # # $ # \sqrt{T}(\hat{\theta}-\theta) \rightarrow^{d}N(0,V) \: \text{, where } \: V=I(\theta)^{-1}\text{ with } # $ # # $ # I(\theta) =-\text{E}\frac{\partial^{2}\ln L_t}{\partial\theta\partial\theta^{\prime}} # $ # # where $\ln L_t$ is the contribution of period $t$ to the likelihood function and $I(\theta)$ is the information matrix. # # The code below calculates numerical derivatives. 
# # # Alternatively, we can use the outer product of the gradients to calculate the # information matrix as # # $ # J(\theta)=\text{E}\left[ \frac{\partial\ln L_t}{\partial\theta # }\frac{\partial\ln L_t}{\partial\theta^{\prime}}\right] # $ # # We could also use the "sandwich" estimator # # $ # V=I(\theta)^{-1}J(\theta)I(\theta)^{-1}. # $ # ### Std from Hessian # + T = size(y,1) #finding std(coefs) by inverse of information matrix Ia = -hessian(par->mean(garch11LL(par,y,x)[2]),parHat) Ia = (Ia+Ia')/2 #to guarantee symmetry vcv = inv(Ia)/T std_parHat = sqrt.(diag(vcv)) printblue("std from Hessian:\n") printmat(std_parHat,colNames=["Std"],rowNames=coefNames) # - # ### Std from Gradient and Sandwich # + LLgrad = jacobian(par->garch11LL(par,y,x)[2],parHat) #T x length(par) matrix, T gradients J = LLgrad'LLgrad/T vcv = inv(J)/T stdb_parHat = sqrt.(diag(vcv)) #std from gradients vcv = inv(Ia) * J * inv(Ia)/T stdc_parHat = sqrt.(diag(vcv)) #std from sandwich printblue("GARCH parameter estimates and 3 different standard errors:\n") xx = [parHat std_parHat stdb_parHat stdc_parHat] printmat(xx,colNames=["coef","Std hessian","Std gradient","Std sandwich"],rowNames=coefNames,width=14) # - # # Value at Risk # # calculated by assuming conditional (time-varying) normality, # # $ # \text{VaR} = -(\mu_t - 1.645 \sigma_t), # $ # # where # $\mu_t$ are the predictions from the estimated mean equation ($x_t'b$) and $\sigma_t$ from the GARCH(1,1) model. # + (σ²,μ) = garch11LL(parHat,y,x)[3:4] #get the fitted values of the σ²- and μ-vectors VaR95 = -(μ - 1.645*sqrt.(σ²)) xTicksLoc = [Date(1990),Date(2000),Date(2010)] xTicksLab = Dates.format.(xTicksLoc,"Y") p1 = plot( dN[2:end],VaR95, xticks = (xTicksLoc,xTicksLab), legend = false, title = "1-day VaR (95%)" ) display(p1) # - CovRatio = mean((-y) .>= VaR95) #coverage ratio for VaR printlnPs("Coverage ratio for VaR(95%): ",CovRatio) # # eGARCH (extra) # # This section estimates eGARCH models for 2 return series. 
include("jlFiles/Garch.jl") # ## Load Data # + xx = readdlm("Data/FFdSizePs.csv",',',skipstart=1) ymd = round.(Int,xx[:,1]) #YearMonthDay, like 20121231 R = xx[:,2:end] #returns for 10 different portfolios xx = nothing R9 = R[2:end,9] #returns, 2nd largest firms x9 = [ones(size(R9)) R[1:end-1,9]] #regressors, [1, R1(t-1)] R10 = R[2:end,10] #returns, largest firms x10 = [ones(size(R10)) R[1:end-1,10]] #regressors, [1, R10(t-1)] dN = Date.(string.(ymd),"yyyymmdd"); #to Julia dates # - # ## Estimate eGARCH(1,1) models # # for each of the two return series. Also, calculate the standardized residuals as # $v_t = u_t/\sigma_t$ # + par0 = [mean(R9),0,var(R9)*0.05,0.05,0.90,0] #initial parameter guess: b0,b1,ω,α,β,γ lower = [-Inf,-0.9999,0,0,0,-Inf] #upper and lower bounds on the parameters upper = [Inf,0.9999,Inf,1,1,Inf] Sol = optimize(par->-egarch11LL(par,R9,x9)[1],lower,upper,par0) parHat = Optim.minimizer(Sol) (_,_,s2_9,_,u) = egarch11LL(parHat,R9,x9) v9 = u./sqrt.(s2_9) #standardized residuals (used below) printblue("eGARCH(1,1), parameter estimates for R9:\n") coefNames = ["b₀","b₁","ω","α","β","γ"] printmat(parHat,colNames=["coef"],rowNames=coefNames) # + Sol = optimize(par->-egarch11LL(par,R10,x10)[1],lower,upper,par0) parHat = Optim.minimizer(Sol) (_,_,s2_10,_,u) = egarch11LL(parHat,R10,x10) v10 = u./sqrt.(s2_10) printblue("eGARCH(1,1), parameter estimates for R10:\n") printmat(parHat,colNames=["coef"],rowNames=coefNames) # - # # DCC (extra) # # This section estimates a DCC model from the two series of standardized residuals (from the eGARCH estimation above). 
# # To impose the necessary restrictions, we estimate `(a,b)` but they imply the following `(α,β)` via the `DccParTrans()` function: # # $ # \alpha = \exp(a)/[1+\exp(a)+\exp(b)] # $ # # $ # \beta = \exp(b)/[1+\exp(a)+\exp(b)] # $ # + par0 = [0.2,1.5] (α,β) = DccParTrans(par0) #we estimate (a,b), but they imply (α,β) println("Initial guess of parameters") printmat([α,β],colNames=["coef"],rowNames=["α","β"]) v = [v9 v10] s2 = [s2_9 s2_10] Qbar = cov(v) LL, = DccLL(par0,v,s2,Qbar) #testing the log likelihood fn println(LL) # + Sol = optimize(par->-DccLL(par,v,s2,Qbar)[1],par0) #estimate (a,b) parHat = Optim.minimizer(Sol) (α,β) = DccParTrans(parHat) #(a,b) -> (α,β) println("Estimated parameters") printmat([α,β],colNames=["coef"],rowNames=["α","β"]) # - """ CovToCor(covMat) Compute correlation matrix from covariance matrix. """ function CovToCor(covMat) d = diag(covMat) #variances corMat = covMat./sqrt.(d*d') return corMat end # + Σ = DccLL(parHat,v,s2,Qbar)[3] #nxnxT covariance arrays ρ = fill(NaN,T) for t = 1:T ρ[t] = CovToCor(Σ[:,:,t])[1,2] end # - p1 = plot( dN[2:end],ρ, xticks = (xTicksLoc,xTicksLab), legend = false, title = "Fitted correlation" ) display(p1)
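For readers who want to port this notebook from Julia to Python, the σ² recursion at the heart of `garch11LL` and the VaR formula above can be sketched as follows. The parameter and residual values are illustrative only, not the estimates obtained above:

```python
import math

def garch11_filter(u, omega, alpha, beta):
    # s2[t] = omega + alpha*u[t-1]^2 + beta*s2[t-1], seeded with the sample variance
    s2_0 = sum(x * x for x in u) / len(u)
    s2 = [omega + alpha * s2_0 + beta * s2_0]       # mirrors s2[1] in the Julia code
    for t in range(1, len(u)):
        s2.append(omega + alpha * u[t - 1] ** 2 + beta * s2[t - 1])
    return s2

def gaussian_loglik(u, s2):
    # sum over t of -0.5*(log(2*pi) + log(s2_t) + u_t^2/s2_t)
    return sum(-0.5 * (math.log(2 * math.pi) + math.log(v) + x * x / v)
               for x, v in zip(u, s2))

def var95(mu, s2):
    # 1-day 95% VaR under conditional normality: -(mu_t - 1.645*sigma_t)
    return [-(m - 1.645 * math.sqrt(v)) for m, v in zip(mu, s2)]

u = [0.1, -0.2, 0.3, -0.1]                          # residuals (illustrative)
s2 = garch11_filter(u, omega=0.01, alpha=0.05, beta=0.90)
risk = var95([0.0] * len(u), s2)
```

As in the Julia version, positivity of `(omega, alpha, beta)` keeps every filtered variance positive, and a positive mean return lowers the VaR one-for-one.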
Ch17_Garch.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# %matplotlib inline

# # Conditional Independence of Views on Multiview KMeans Clustering
#
# The co-Expectation Maximization framework (and co-training) relies on the
# fundamental assumption that data views are conditionally independent.
# In this tutorial we test that assumption by examining the multiview
# k-means clustering algorithm on synthetic multiview
# datasets under different conditions related to independence
# between the views conditioned on true class labels.
#
# In the following experiments, we will perform singleview kmeans clustering
# on two views separately and on them concatenated together. We also
# perform multiview clustering using the multiview algorithm. We will also
# compare the performance of multiview and singleview versions of kmeans
# clustering. We will evaluate the purity of the resulting clusters from each
# algorithm with respect to the class labels using the normalized mutual
# information metric.

# +
# License: MIT

import warnings
import numpy as np
from scipy import special
import matplotlib.pyplot as plt
from sklearn.metrics import normalized_mutual_info_score as nmi_score
from sklearn.cluster import KMeans
from sklearn.datasets import fetch_covtype
from sklearn.manifold import TSNE
from mvlearn.cluster import MultiviewKMeans

warnings.filterwarnings("ignore")
RANDOM_SEED = 10
# -

# ## Artificial dataset with conditionally independent views
#
# Here, we create an artificial dataset where the conditional independence
# assumption between views, given the true labels, is enforced. Our artificial
# dataset is derived from the forest covertypes dataset from the scikit-learn
# package. This dataset comprises 7 different classes, with 54 different
# numerical features per sample.
To create our artificial data, we will select 500 samples from each of the first 6 classes in the dataset, and from these, construct 3 artificial classes with 2 views each. <br>

# +
def get_ci_data(num_samples=500):
    # Load the forest covertype data from the scikit-learn package
    cov = fetch_covtype()
    all_data = np.array(cov.data)
    all_targets = np.array(cov.target)

    # Set class pairings as described in the multiview clustering paper
    view1_classes = [1, 2, 3]
    view2_classes = [4, 5, 6]

    # Create lists to hold data and labels for each of the classes across
    # 2 different views
    labels = [num for num in range(len(view1_classes))
              for _ in range(num_samples)]
    labels = np.array(labels)
    view1_data = list()
    view2_data = list()

    # Randomly sample items from each of the selected classes in view1
    for class_num in view1_classes:
        class_data = all_data[(all_targets == class_num)]
        indices = np.random.choice(class_data.shape[0], num_samples)
        view1_data.append(class_data[indices])
    view1_data = np.concatenate(view1_data)

    # Randomly sample items from each of the selected classes in view2
    for class_num in view2_classes:
        class_data = all_data[(all_targets == class_num)]
        indices = np.random.choice(class_data.shape[0], num_samples)
        view2_data.append(class_data[indices])
    view2_data = np.concatenate(view2_data)

    # Shuffle and normalize vectors
    shuffled_inds = np.random.permutation(num_samples * len(view1_classes))
    view1_data = np.vstack(view1_data)
    view2_data = np.vstack(view2_data)
    view1_data = view1_data[shuffled_inds]
    view2_data = view2_data[shuffled_inds]
    magnitudes1 = np.linalg.norm(view1_data, axis=0)
    magnitudes2 = np.linalg.norm(view2_data, axis=0)
    magnitudes1[magnitudes1 == 0] = 1
    magnitudes2[magnitudes2 == 0] = 1
    magnitudes1 = magnitudes1.reshape((1, -1))
    magnitudes2 = magnitudes2.reshape((1, -1))
    view1_data /= magnitudes1
    view2_data /= magnitudes2
    labels = labels[shuffled_inds]
    return [view1_data, view2_data], labels


def perform_clustering(seed, m_data, labels, n_clusters):
    # Singleview kmeans clustering

    # Cluster each view separately
    s_kmeans = KMeans(n_clusters=n_clusters, random_state=seed, n_init=100)
    s_clusters_v1 = s_kmeans.fit_predict(m_data[0])
    s_clusters_v2 = s_kmeans.fit_predict(m_data[1])

    # Concatenate the multiple views into a single view
    s_data = np.hstack(m_data)
    s_clusters = s_kmeans.fit_predict(s_data)

    # Compute nmi between true class labels and singleview cluster labels
    s_nmi_v1 = nmi_score(labels, s_clusters_v1)
    s_nmi_v2 = nmi_score(labels, s_clusters_v2)
    s_nmi = nmi_score(labels, s_clusters)
    print('Singleview View 1 NMI Score: {0:.3f}\n'.format(s_nmi_v1))
    print('Singleview View 2 NMI Score: {0:.3f}\n'.format(s_nmi_v2))
    print('Singleview Concatenated NMI Score: {0:.3f}\n'.format(s_nmi))

    # Multiview kmeans clustering

    # Use the MultiviewKMeans instance to cluster the data
    m_kmeans = MultiviewKMeans(n_clusters=n_clusters,
                               n_init=100, random_state=seed)
    m_clusters = m_kmeans.fit_predict(m_data)

    # Compute nmi between true class labels and multiview cluster labels
    m_nmi = nmi_score(labels, m_clusters)
    print('Multiview NMI Score: {0:.3f}\n'.format(m_nmi))

    return m_clusters


def display_plots(pre_title, data, labels):
    # plot the views; use the `data` argument rather than a global variable
    fig, ax = plt.subplots(1, 2, figsize=(14, 5))
    dot_size = 10
    ax[0].scatter(data[0][:, 0], data[0][:, 1], c=labels, s=dot_size)
    ax[0].set_title(pre_title + ' View 1')
    ax[0].axes.get_xaxis().set_visible(False)
    ax[0].axes.get_yaxis().set_visible(False)

    ax[1].scatter(data[1][:, 0], data[1][:, 1], c=labels, s=dot_size)
    ax[1].set_title(pre_title + ' View 2')
    ax[1].axes.get_xaxis().set_visible(False)
    ax[1].axes.get_yaxis().set_visible(False)

    plt.show()
# -

# ## Comparing the performance with conditionally independent views
#
# The co-Expectation Maximization framework (and co-training) relies on the
# fundamental assumption that data views are conditionally independent.
If all views are informative and conditionally independent, then Multiview
# KMeans is expected to produce higher quality clusters than Singleview
# KMeans, for either view or for both views concatenated together. Here, we
# will evaluate the quality of clusters by using the normalized mutual
# information metric, which is essentially a measure of the purity of clusters
# with respect to the true underlying class labels.
#
# As we see below, Multiview KMeans produces clusters with higher purity than
# Singleview KMeans across a range of values for the n_clusters parameter for
# data with complex and informative views, which is consistent with some of the
# results from the original Multiview clustering paper.

# +
data, labels = get_ci_data()
m_clusters = perform_clustering(RANDOM_SEED, data, labels, 3)

# Running TSNE to display clustering results via low dimensional embedding
tsne = TSNE()
new_data = list()
new_data.append(tsne.fit_transform(data[0]))
new_data.append(tsne.fit_transform(data[1]))

display_plots('True Labels', new_data, labels)
display_plots('Multiview Clustering Results', new_data, m_clusters)
# -

# ## Artificial dataset with conditionally dependent views
#
# Here, we create an artificial dataset where the conditional independence
# assumption between views, given the true labels, is violated. We again
# derive our dataset from the forest covertypes dataset from sklearn. However,
# this time, we use only the first 3 classes of the dataset, which will
# correspond to the 3 clusters for view 1. To produce view 2, we will apply a
# simple nonlinear transformation to view 1 using the logistic function, and
# we will apply a negligible amount of noise to the second view to avoid
# convergence issues. This will result in a dataset where the correspondence
# between views is very high.
def get_cd_data(num_samples=500):
    # Load the forest covertype data from the scikit-learn package
    cov = fetch_covtype()
    all_data = np.array(cov.data)
    all_targets = np.array(cov.target)

    # Set class pairings as described in the multiview clustering paper
    view1_classes = [1, 2, 3]

    # Create lists to hold data and labels for each of the classes across
    # 2 different views
    labels = [num for num in range(len(view1_classes))
              for _ in range(num_samples)]
    labels = np.array(labels)
    view1_data = list()
    view2_data = list()

    # Randomly sample items from each of the selected classes in view1
    for class_num in view1_classes:
        class_data = all_data[(all_targets == class_num)]
        indices = np.random.choice(class_data.shape[0], num_samples)
        view1_data.append(class_data[indices])
    view1_data = np.concatenate(view1_data)

    # Construct view 2 by applying a nonlinear transformation
    # to data from view 1 comprised of a linear transformation
    # and a logistic nonlinearity
    t_mat = np.random.random((view1_data.shape[1], 50))
    noise = 0.005 - 0.01*np.random.random((view1_data.shape[1], 50))
    t_mat *= noise
    transformed = view1_data @ t_mat
    view2_data = special.expit(transformed)

    # Shuffle and normalize vectors
    shuffled_inds = np.random.permutation(num_samples * len(view1_classes))
    view1_data = np.vstack(view1_data)
    view2_data = np.vstack(view2_data)
    view1_data = view1_data[shuffled_inds]
    view2_data = view2_data[shuffled_inds]
    magnitudes1 = np.linalg.norm(view1_data, axis=0)
    magnitudes2 = np.linalg.norm(view2_data, axis=0)
    magnitudes1[magnitudes1 == 0] = 1
    magnitudes2[magnitudes2 == 0] = 1
    magnitudes1 = magnitudes1.reshape((1, -1))
    magnitudes2 = magnitudes2.reshape((1, -1))
    view1_data /= magnitudes1
    view2_data /= magnitudes2
    labels = labels[shuffled_inds]
    return [view1_data, view2_data], labels

# ## Comparing the performance with conditionally dependent views
#
# As mentioned before, the co-Expectation Maximization framework (and
# co-training) relies on the fundamental assumption that data views are
conditionally # independent. Here, we will again compare the performance of singleview and # multiview kmeans clustering using the same methods as before, but on our # conditionally dependent dataset. # # As we see below, Multiview KMeans does not beat the best Singleview # clustering performance with respect to purity, since the views are # conditionally dependent. # # # + data, labels = get_cd_data() m_clusters = perform_clustering(RANDOM_SEED, data, labels, 3) # Running TSNE to display clustering results via low dimensional embedding tsne = TSNE() new_data = list() new_data.append(tsne.fit_transform(data[0])) new_data.append(tsne.fit_transform(data[1])) display_plots('True Labels', new_data, labels) display_plots('Multiview Clustering Results', new_data, m_clusters)
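The comparison above is stated in terms of cluster purity. For readers following along, here is a minimal sketch of how purity can be computed (assuming integer-coded label arrays; `purity_score` is an illustrative helper, not part of the notebook's imports):

```python
import numpy as np

def purity_score(true_labels, cluster_labels):
    """Fraction of samples that fall in the majority true class of their cluster."""
    true_labels = np.asarray(true_labels)
    cluster_labels = np.asarray(cluster_labels)
    majority_total = 0
    for cluster in np.unique(cluster_labels):
        members = true_labels[cluster_labels == cluster]
        # The best any relabeling can do is match the most common true label
        _, counts = np.unique(members, return_counts=True)
        majority_total += counts.max()
    return majority_total / len(true_labels)
```

A clustering that recovers the true classes up to a label permutation scores 1.0, while a clustering uncorrelated with the true labels drifts toward the proportion of the largest class.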
_downloads/01bcb790f39913260f352d58d970ecde/plot_mv_kmeans_validation_complex.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Intermediate Net in Keras # Build an intermediate neural network in Keras to classify MNIST digits # #### Set seed for reproducibility import numpy as np np.random.seed(42) # #### Load dependencies import keras from keras.datasets import mnist from keras.models import Sequential from keras.layers import Dense from keras.layers import Dropout from keras.layers.normalization import BatchNormalization from keras import regularizers from keras.optimizers import SGD # #### Load dataset (X_train, y_train), (X_test, y_test) = mnist.load_data() # #### Preprocess data X_train = X_train.reshape(60000, 784).astype("float32") X_test = X_test.reshape(10000, 784).astype("float32") X_train /= 255 X_test /= 255 n_classes = 10 y_train = keras.utils.to_categorical(y_train, n_classes) y_test = keras.utils.to_categorical(y_test, n_classes) # #### Design neural network model = Sequential() # model.add(Dense(64, activation='relu', input_shape=(784, ), kernel_regularizer=regularizers.l2(0.01))) model.add(Dense(64, activation='relu', input_shape=(784, ), kernel_regularizer=regularizers.l2(0.01))) model.add(Dropout(0.5)) model.add(Dense(64, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(10, activation='softmax')) model.summary() model.compile(loss='categorical_crossentropy', optimizer=SGD(lr=0.1), metrics=['accuracy']) # #### Train the model model.fit(X_train, y_train, batch_size=128, epochs=5, verbose=1, validation_data=(X_test, y_test))
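Since the labels above are one-hot encoded, the reported accuracy metric amounts to comparing the argmax of the softmax output against the argmax of the target vector. A small NumPy sketch of that equivalence (illustrative only; Keras computes this internally during `fit`):

```python
import numpy as np

def accuracy_from_probs(probs, one_hot_targets):
    """Accuracy as categorical accuracy computes it: argmax vs argmax."""
    pred_classes = np.argmax(probs, axis=1)
    true_classes = np.argmax(one_hot_targets, axis=1)
    return float(np.mean(pred_classes == true_classes))
```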
Deep Learning/Tutorials/.ipynb_checkpoints/Deep-net_in_keras-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Hamiltonian simulation with one-hot encoding - QASM w/tomography and measurement error mitigation # + import warnings warnings.filterwarnings(action='once') import numpy as np np.warnings.filterwarnings('ignore') import pickle from scipy.linalg import expm from pprint import pprint from tqdm import tqdm # Everything we need from Qiskit from qiskit import ClassicalRegister, QuantumRegister, QuantumCircuit from qiskit import execute, Aer from qiskit.quantum_info import Pauli from qiskit.compiler import transpile from qiskit.ignis.verification import tomography from qiskit.aqua.operators import WeightedPauliOperator from qiskit.aqua.components.initial_states import Custom import matplotlib.pyplot as plt import seaborn as sns sns.set_style('whitegrid') sns.set(rc={'figure.figsize':(16,8)}) import qutip as qt import sys sys.path.append("../src/") from hamiltonian import * from utils import * from qiskit_circuits import * from device import Device # + # Perform unitary evolution def unitary_evolution(ham, t): return expm(-1j * ham * t) states_gc = ['00', '10', '11', '01'] states_oh = ['1000', '0100', '0010', '0001'] colours = ['tab:blue', 'tab:orange', 'tab:green', 'tab:red'] backend = Aer.get_backend('qasm_simulator') device = Device('ibmq_vigo', N_qubits=5) # - # ## QASM Simulation with quantum state tomography # + # Arbitrary simulation time T = 1 # Range of Trotter steps to use min_trotter_steps = 1 max_trotter_steps = 100 # + oh = SparseEncodingHamiltonian(N_states=4) oh_ham_rep = reduce(lambda x, y: x + y, [p[1] * get_pauli_matrix(p[0]) for p in oh.pauli_coeffs.items()]) uniform_oh = 0.5 * np.array([[0,1,1,0,1,0,0,0,1,0,0,0,0,0,0,0]]).reshape((16, 1)) exact_wavefunction_oh = unitary_evolution(oh_ham_rep, t=T) @ uniform_oh # + # NOTE: output of 
this cell has been cleared for readability, due to cvxpy warnings produced in every step H_oh = SparseEncodingHamiltonian(N_states=4, qiskit_order=True) weighted_paulis = [(v, Pauli.from_label(k)) for (k, v) in list(H_oh.pauli_coeffs.items())] my_pauli = WeightedPauliOperator(weighted_paulis) trace_dists_oh = np.zeros((4, max_trotter_steps)) # Generate measurement calibration data; same for all circuits import qiskit.ignis.mitigation.measurement as mc meas_calibs, state_labels = mc.complete_meas_cal(qubit_list=[2, 1, 3, 4]) job_cal = execute(meas_calibs, backend=backend, shots=10000, noise_model=device.noise_model) meas_fitter = mc.CompleteMeasFitter(job_cal.result(), state_labels) # To prepare the uniform superposition, run the sparse variational ansatz with the following parameters θ_1 = 2 * np.pi / 3 θ_2 = 2 * np.arccos(1/np.sqrt(3)) θ_3 = 2 * np.arccos(1/(np.sqrt(3) * np.sin(θ_2 / 2))) params = [θ_1, θ_2, θ_3] for opt_level in range(4): for n_trotter_steps in tqdm(range(min_trotter_steps, max_trotter_steps+1)): # Construct uniform superposition over spherical coordinates q = QuantumRegister(H_oh.N_qubits) circuit = QuantumCircuit(q) circuit.x(q[0]) circuit.ry(θ_1, q[1]) circuit.cx(q[1], q[0]) circuit.cry(θ_2, q[1], q[2]) circuit.cx(q[2], q[1]) circuit.cry(θ_3, q[2], q[3]) circuit.cx(q[3], q[2]) circuit += my_pauli.evolve( None, evo_time=T, num_time_slices=n_trotter_steps, quantum_registers=q, expansion_mode='trotter' ) # Gate counts for optimized circuit optimized_circuit = transpile(circuit, backend=backend, basis_gates=device.noise_model.basis_gates, coupling_map=device.coupling_map, initial_layout=[2, 1, 3, 4], optimization_level=opt_level) ###################### # Tomography # ###################### tomography_circuits = tomography.state_tomography_circuits(optimized_circuit, [2, 1, 3, 4]) tomo_job = execute(tomography_circuits, backend=backend, shots=10000, noise_model=device.noise_model) # Correct with measurement error mitigation and fit results 
correct_tomo_results = meas_fitter.filter.apply(tomo_job.result(), method='least_squares') fitter = tomography.StateTomographyFitter(correct_tomo_results, tomography_circuits) density_mat = fitter.fit() trace_dists_oh[opt_level, n_trotter_steps-1] = qt.tracedist(qt.Qobj(exact_wavefunction_oh), qt.Qobj(density_mat)) # - with open('one-hot-tomo-noise-measmit-alloptlevels.npy', 'wb') as f: np.save(f, trace_dists_oh) for opt_level in range(4): plt.plot(range(min_trotter_steps, max_trotter_steps+1), trace_dists_oh[opt_level,:], color=colours[opt_level], label=f"Opt level {opt_level}") plt.xlabel("Trotter steps", fontsize=15) plt.ylabel("Trace distance", fontsize=15) plt.xticks(fontsize=13) plt.yticks(fontsize=13) plt.title(f"One hot, N=4, Hamiltonian evolution time t = {T}", fontsize=15) #plt.savefig("gc-trotter-probs.pdf") trace_dists_oh_nomeasmit = np.load("one-hot-tomo-noise-alloptlevels.npy") # + fig, ax = plt.subplots(4, 1, sharex=True, sharey=True, figsize=(12, 16)) for opt_level in range(4): ax[opt_level].scatter(range(min_trotter_steps, max_trotter_steps+1), trace_dists_oh[opt_level,:], color=colours[0], marker='o', label=f"Meas mit.") ax[opt_level].scatter(range(min_trotter_steps, max_trotter_steps+1), trace_dists_oh_nomeasmit[opt_level,:], color=colours[1], marker='x', label=f"No meas. mit") ax[opt_level].set_xlabel("Trotter steps", fontsize=14) ax[opt_level].set_ylabel("Trace distance", fontsize=14) ax[opt_level].set_title(f"Opt. level {opt_level}", fontsize=14) ax[opt_level].legend(fontsize=15) plt.suptitle(f"One-hot, N=4, Hamiltonian evolution time t = {T}", fontsize=15) plt.tight_layout() plt.savefig("FIG-REDONE-OH-tomography-hw-noise.pdf") # -
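The trace distances plotted above come from `qt.tracedist`. As a sanity check on the quantity itself, T(rho, sigma) = (1/2) * sum_i |lambda_i(rho - sigma)|, where lambda_i are the eigenvalues of the (Hermitian) difference. A hedged NumPy sketch, which promotes kets to projectors the way a pure exact state is compared against a fitted density matrix:

```python
import numpy as np

def trace_distance(a, b):
    """0.5 * sum |eigenvalues| of (rho - sigma); kets are converted to projectors."""
    def to_dm(state):
        state = np.asarray(state, dtype=complex)
        if state.ndim == 1 or 1 in state.shape:
            ket = state.reshape(-1, 1)
            return ket @ ket.conj().T  # pure state |psi><psi|
        return state
    delta = to_dm(a) - to_dm(b)
    eigvals = np.linalg.eigvalsh(delta)  # difference of Hermitian matrices is Hermitian
    return 0.5 * float(np.sum(np.abs(eigvals)))
```

Orthogonal pure states sit at the maximal distance 1, identical states at 0, which is why the Trotter curves above are bounded in [0, 1].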
notebooks/Time-Evolution-QASM-Tomography-Noise-OH-MeasMit.ipynb
# --- # jupyter: # jupytext: # formats: ipynb,py # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:generic_expression] * # language: python # name: conda-env-generic_expression-py # --- # # Prepare reference gene file # # This notebook creates a reference gene ranking file to use to compare SOPHIE generated gene ranking. The reference ranking information is obtained from [this repository](https://github.com/DartmouthStantonLab/GAPE). This [RDS object](https://github.com/DartmouthStantonLab/GAPE/blob/main/Pa_GPL84_refine_ANOVA_List_unzip.rds) contains 73 experiments. For each experiment, we will identify DEGs using log2FC > 1 and FDR < 0.05. We will rank genes by the proportion that they appeared as DE. # + # %load_ext autoreload # %autoreload 2 # %load_ext rpy2.ipython import os import pandas as pd import rpy2.robjects as ro from rpy2.robjects import pandas2ri from rpy2.robjects.conversion import localconverter from ponyo import utils pandas2ri.activate() # + # Read in config variables base_dir = os.path.abspath(os.path.join(os.getcwd(), "../")) config_filename = os.path.abspath( os.path.join(base_dir, "configs", "config_pseudomonas_33245.tsv") ) params = utils.read_config(config_filename) # - # Load params local_dir = params["local_dir"] reference_gene_filename = os.path.join( local_dir, "Pa_GPL84_refine_ANOVA_List_unzip.rds" ) readRDS = ro.r["readRDS"] reference_Robject = readRDS(reference_gene_filename) # + # For each experiment get df # For each df, if label gene as DEGs based on log2FC>1 and FDR<0.05 # Concatenate series num_experiments = len(reference_Robject) reference_stats_df = pd.DataFrame() for i in range(num_experiments): print(i) # Get df for experiment reference_df = ro.conversion.rpy2py(reference_Robject[i]) reference_df = reference_df.set_index("ProbeID") print(reference_df.head()) # Find DEGs degs_ids = list( reference_df[ 
(abs(reference_df["Log2FC"]) > 1) & (reference_df["FDR"] < 0.05) ].index ) reference_df["DEG"] = 0 reference_df.loc[degs_ids, "DEG"] = 1 print(reference_df.head()) # Create df with labels for whether each gene is DE or not if i == 0: reference_stats_df = reference_df["DEG"].to_frame("experiment_0") else: reference_stats_df = pd.merge( reference_stats_df, reference_df["DEG"].to_frame(f"experiment_{i}"), left_index=True, right_index=True, how="left", ) reference_stats_df # + # Map `ProbeID` to `IntergenicSpotID` that contains PA#### IDs example_reference_df = ro.conversion.rpy2py(reference_Robject[0]) example_reference_df.set_index("ProbeID", inplace=True) merged_df = pd.merge( reference_stats_df, example_reference_df, left_index=True, right_index=True ) # Keep only the `experiment_*` columns (`ProbeID` stays in the index) experiment_cols = [col for col in merged_df.columns if "experiment_" in col] merged_df = merged_df[experiment_cols] merged_df.sum(axis=1) # - # Aggregate to get ranking of genes merged_df["prop DEGs"] = merged_df.sum(axis=1) / num_experiments # Extract PA#### ids from `ProbeID` # This will be used to compare against SOPHIE ranked genes pao1_ids = [str_ls[0] for str_ls in merged_df.index.str.split("_")] merged_df["gene id"] = pao1_ids # Save file # Here are the names that we will use for the comparison in notebook 2_identify_generic_genes_pathways.ipynb # DE_prior_filename = output_filename # ref_gene_col = "gene id" # ref_rank_col = "prop DEGs" merged_df.to_csv("GAPE_proportions.txt", sep="\t")
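The label-then-aggregate logic above can be illustrated on toy data (the values and the `label_degs` helper are hypothetical; the thresholds are the same |log2FC| > 1 and FDR < 0.05 criteria used in the notebook):

```python
import pandas as pd

def label_degs(df, lfc=1, fdr=0.05):
    """Binary DE call per gene using the same thresholds as above."""
    return ((df["Log2FC"].abs() > lfc) & (df["FDR"] < fdr)).astype(int)

# Two hypothetical experiments over the same three probes
exp0 = pd.DataFrame({"Log2FC": [2.0, -1.5, 0.3], "FDR": [0.01, 0.2, 0.01]},
                    index=["gA", "gB", "gC"])
exp1 = pd.DataFrame({"Log2FC": [1.2, -2.1, 0.1], "FDR": [0.04, 0.03, 0.5]},
                    index=["gA", "gB", "gC"])

deg_matrix = pd.concat(
    [label_degs(exp0).rename("experiment_0"), label_degs(exp1).rename("experiment_1")],
    axis=1,
)
# Proportion of experiments in which each gene was called DE -- the ranking statistic
prop_degs = deg_matrix.mean(axis=1)
```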
pseudomonas_analysis/0_prepare_reference_gene_file.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/ftp1/deeplearning/blob/master/sentiment_Emb_NN.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + id="_nm5_43Ef02A" colab_type="code" colab={} # Import libraries from __future__ import print_function import numpy as np import pandas as pd #import tensorflow as tf #from tensorflow import keras #from keras.preprocessing import sequence #import sys #sys.setrecursionlimit(100000) import os # Initialize variables and load data from labelled dataset TRAIN_DATA_URL = "https://raw.githubusercontent.com/ftp1/deeplearning/master/sentiment-train.csv" TEST_DATA_URL = "https://raw.githubusercontent.com/ftp1/deeplearning/master/sentiment-eval.csv" train_data = pd.read_csv(TRAIN_DATA_URL) test_data = pd.read_csv(TEST_DATA_URL) # Convert label to 0/1 num_label = {'Positive': 1,'Negative': 0} train_data.label = [num_label[item] for item in train_data.label] maxlen_train = train_data.text.str.len().max() maxlen_test = test_data.text.str.len().max() max_length = max(maxlen_train, maxlen_test) X_train = train_data.loc[:, 'text'].values Y_train = train_data.loc[:, 'label'].values X_test = test_data.loc[:, 'text'].values #print(X_test) # + id="9uxywVuhid4-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="e55ef11f-94ed-4daf-f03e-e7da83f28ead" from tensorflow.python.keras.preprocessing.text import Tokenizer from tensorflow.python.keras.preprocessing.sequence import pad_sequences total_reviews = np.concatenate((X_train, X_test)) tokenizer_obj = Tokenizer() tokenizer_obj.fit_on_texts(total_reviews) vocab_size = len(tokenizer_obj.word_index) + 1 print(vocab_size) X_train_tokens =
tokenizer_obj.texts_to_sequences(X_train) X_test_tokens = tokenizer_obj.texts_to_sequences(X_test) max_length = 500 X_train_pad = pad_sequences(X_train_tokens, maxlen=max_length, padding='post') X_test_pad = pad_sequences(X_test_tokens, maxlen=max_length, padding='post') # + id="3Q79tt2ZiXNk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 357} outputId="89bc7984-144f-4911-d8de-6632761ff922" from keras.models import Sequential from keras.layers import Dense, Embedding, LSTM, GRU, Flatten, Dropout EMBEDDING_DIM = 100 print('Build model') max_length = 500 model = Sequential() model.add(Embedding(vocab_size, EMBEDDING_DIM, input_length=max_length)) model.add(Flatten()) #model.add(GRU(units=32, dropout=0.2, recurrent_dropout=0.2)) model.add(Dense(250, activation='relu')) model.add(Dropout(0.2)) #model.add(Dense(128, activation='relu')) #model.add(Dropout(0.2)) model.add(Dense(1, activation='sigmoid')) print(model.summary()) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) # + id="aS8kIppriiQ_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 445} outputId="7e41bcef-2b2b-4068-cbef-74fcfeb5eecb" print('Train') model.fit(X_train_pad, Y_train, batch_size=128, epochs=10, verbose=2) # + id="vOqSDPwCqJmE" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="16653458-c15f-4069-b6fd-dc1cb6367ee4" Y_test = model.predict_classes(X_test_pad) print(type(Y_test)) predicted = Y_test.flatten() #print(predicted) mylabel = {1:'Positive',0:'Negative'} predicted_label = [mylabel[item] for item in predicted] #print(predicted_label) df = pd.DataFrame(predicted_label) print(type(df)) csv_data = df.to_csv(index_label='id', header=['label']) print(csv_data) # + id="qSn3L23hqdHe" colab_type="code" colab={} cellView="form" #@title mylabel = {1:'Positive',0:'Negative'} predicted_label = [mylabel[item] for item in Y_test.flatten()] df =
pd.DataFrame(predicted_label) print(type(df)) #csv_data = df.to_csv(index_label='id', header=['label']) #print(csv_data) #with open('results_NB.csv', 'w') as csv_file: # df.to_csv(index_label='id', header=['label'],path_or_buf=csv_file)
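To make the tokenize-then-pad preprocessing above concrete, here is a pure-Python sketch of what `Tokenizer` plus `pad_sequences(..., padding='post')` do (illustrative only; the notebook uses the Keras utilities, which additionally handle punctuation filtering and other options):

```python
from collections import Counter

def fit_vocab(texts):
    """Most frequent words get the lowest indices; index 0 is reserved for padding."""
    counts = Counter(word for text in texts for word in text.lower().split())
    return {word: i + 1 for i, (word, _) in enumerate(counts.most_common())}

def texts_to_padded(texts, vocab, maxlen):
    """Map words to indices, then append zeros up to maxlen ('post' padding)."""
    seqs = [[vocab[w] for w in t.lower().split() if w in vocab] for t in texts]
    return [s[:maxlen] + [0] * (maxlen - len(s)) for s in seqs]
```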
sentiment_Emb_NN.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: conda_python3 # language: python # name: conda_python3 # --- import boto3 import pandas as pd import numpy as np from io import StringIO # Data posted by <NAME> on Data World # https://data.world/michaelaroy/ncaa-tournament-results raw_df = pd.read_csv('https://query.data.world/s/gp7t4y3y3bcmazezh3hyfa64hjed6j') # ### Transform tournament data into a long format # #### Old format raw_df.head() base_columns = ['Year','Round','Region Number','Region Name'] team0_columns = ['Team','Seed','Score'] team1_columns = ['Team.1','Seed.1','Score.1'] # + high_seed_df = raw_df[base_columns + team0_columns].copy() low_seed_df = raw_df[base_columns + team1_columns].rename(columns=dict(zip(team1_columns,team0_columns))).copy() high_seed_df['Team_Position'] = 'high' low_seed_df['Team_Position'] = 'low' raw_long_df = pd.concat([high_seed_df, low_seed_df]) # - # #### New format raw_long_df.head() # ### Fix data issue with 2019 regions that are flipped # In this data, the 2019 rounds 2 - final four East and South regions are flipped. I checked the rest of the data and this is the only error of this kind. You can see in round 1 Duke is in the East and Virginia is in the South. And in rounds 2-4 that is flipped. is2019 = raw_long_df.Year == 2019 seed1 = raw_long_df.Seed == 1 bad_regions = raw_long_df['Region Name'].isin(['East','South']) raw_long_df[is2019 & seed1 & bad_regions] # Here we look at how bad that problem is. But it is limited to 2019 East and South regions. # This table shows the number of regions that each team is in during each year. 
regional_rounds = raw_long_df.Round <= 4 # Filter out where region name is Final Four or Championship raw_long_df[regional_rounds] \ .groupby(['Year','Region Name','Team']) \ .count() \ .reset_index() \ .groupby(['Year','Team'])['Region Name'].count() \ .reset_index() \ .sort_values('Region Name', ascending=False) \ .head(20) # Get a boolean mask for the rows that need to be changed # + is2019 = (raw_long_df.Year == 2019) isBadRound = (raw_long_df.Round > 1) isEast = (raw_long_df['Region Name'] == 'East') isSouth = (raw_long_df['Region Name'] == 'South') badEastRows = (is2019 & isBadRound & isEast) badSouthRows = (is2019 & isBadRound & isSouth) # - # Correct the bad data raw_long_df.loc[badEastRows,'Region Name'] = 'South' raw_long_df.loc[badSouthRows,'Region Name'] = 'East' # ### Create some standardization on region names # The region names change year to year. I wanted to have some standardization in region names... # I hope this doesn't make things confusing later. # Below shows the relative number of years each region shows up. raw_long_df.groupby('Region Name').count() # East and West are there every year. 
# Change Southwest --> Midwest # and Southeast --> South # and then Region Names will be consistent across years # I guess not that it matters since there is also a region number isSoutheast = raw_long_df['Region Name'] == 'Southeast' isSouthwest = raw_long_df['Region Name'] == 'Southwest' raw_long_df.loc[isSoutheast,'Region Name'] = 'South' raw_long_df.loc[isSouthwest,'Region Name'] = 'Midwest' raw_long_df.groupby('Region Name').count() tournament_df = raw_long_df.reset_index().copy() # #### Figure out which regions play each other in the final four because it changes from year to year final_four_games = \ tournament_df[tournament_df['Round'] == 4] \ .assign(game_num = lambda x: (x['Region Number'] <= 2)+1) \ .groupby(['Year','game_num']) \ .agg({'Region Name' : lambda x: dict(zip(['team1','team2'],x))}) \ .groupby('Year') \ .agg({'Region Name' : lambda x: dict(zip(['game1','game2'],x))}) \ .to_dict().get('Region Name') #{1985: {'game1': {'team1': 'Midwest', 'team2': 'South'}, 'game2': {'team1': 'West', 'team2': 'East'}}, # 1986: {'game1': {'team1': 'East', 'team2': 'Midwest'}, 'game2': {'team1': 'South', 'team2': 'West'}}, # 1987: {'game1': {'team1': 'South', 'team2': 'East'}, 'game2': {'team1': 'West', 'team2': 'Midwest'}}, # ... 
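The chained `groupby`/`agg` that builds `final_four_games` above is dense; a toy version on hypothetical round-4 rows for a single year shows how the nested dict is assembled:

```python
import pandas as pd

# Hypothetical round-4 (final four) rows for one year
ff = pd.DataFrame({
    "Year": [1985, 1985, 1985, 1985],
    "Region Number": [1, 2, 3, 4],
    "Region Name": ["Midwest", "South", "West", "East"],
})

games = (
    ff.assign(game_num=lambda x: (x["Region Number"] <= 2) + 1)
    # First pass: pair the two regions within each final-four game
    .groupby(["Year", "game_num"])
    .agg({"Region Name": lambda x: dict(zip(["team1", "team2"], x))})
    # Second pass: pair the two games within each year
    .groupby("Year")
    .agg({"Region Name": lambda x: dict(zip(["game1", "game2"], x))})
    .to_dict()["Region Name"]
)
```

Here regions 3 and 4 get `game_num` 1 (the `<= 2` test is False, so False + 1 = 1), and regions 1 and 2 get `game_num` 2.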
# #### Create dictionary file with all data in easy format for Bracket object # + years = tournament_df.Year.unique().tolist() brackets_dict = {} for year in years: that_year_df = tournament_df[tournament_df['Year'] == year].copy() round1 = that_year_df[that_year_df['Round'] == 1].sort_values('Seed') round2 = that_year_df[that_year_df['Round'] == 2] round3 = that_year_df[that_year_df['Round'] == 3] round4 = that_year_df[that_year_df['Round'] == 4] round5 = that_year_df[that_year_df['Round'] == 5] round6 = that_year_df[that_year_df['Round'] == 6] winner = round6.sort_values('Score', ascending=False).reset_index().loc[0,].to_dict() east1 = round1[round1['Region Name'] == 'East'] west1 = round1[round1['Region Name'] == 'West'] midwest1 = round1[round1['Region Name'] == 'Midwest'] south1 = round1[round1['Region Name'] == 'South'] results = { 'first' : round1[['Team','Seed']].to_dict('records'), 'second' : round2[['Team','Seed']].to_dict('records'), 'sweet16' : round3[['Team','Seed']].to_dict('records'), 'elite8' : round4[['Team','Seed']].to_dict('records'), 'final4' : round5[['Team','Seed']].to_dict('records'), 'championship' : round6[['Team','Seed']].to_dict('records'), 'winner' : {'Team' : winner['Team'], 'Seed' : int(winner['Seed'])} } regions = { 'East' : east1[['Team','Seed']].to_dict('records'), 'West' : west1[['Team','Seed']].to_dict('records'), 'Midwest' : midwest1[['Team','Seed']].to_dict('records'), 'South' : south1[['Team','Seed']].to_dict('records') } brackets_dict[year] = {'Region' : regions, 'Results': results, 'Finals': final_four_games[year]} # - import json with open('brackets.json' , 'w') as f: json.dump(brackets_dict, f) import json with open('brackets.json', 'r') as f: brackets_dict = json.load(f) brackets_dict['2019']['Results']['winner'] # + # Write the cleaned data to S3 import boto3 from io import StringIO bucket_name = 'ncaabb-data' file_name = 'tournament.csv'
boto3.resource('s3').Bucket(bucket_name).Object(file_name).put(Body=tournament_df.to_csv(index=False)) # + # Check to make sure that the data is there byte_str = boto3.resource('s3').Bucket(bucket_name).Object(file_name).get()['Body'].read().decode('utf-8') clean_df = pd.read_csv(StringIO(str(byte_str))) (tournament_df==clean_df).mean().mean() # - bucket_name = 'ncaabb-data' tournament_filename = 'big_dance.csv' season19_filename = 'team19.csv' s3 = boto3.resource('s3') s3_bucket = s3.Bucket(bucket_name) byte_str = s3_bucket.Object(tournament_filename).get()['Body'].read().decode('utf-8') tournament_raw_df = pd.read_csv(StringIO(str(byte_str))) byte_str = s3_bucket.Object(season19_filename).get()['Body'].read().decode('utf-8') season19_df = pd.read_csv(StringIO(str(byte_str))) base_columns = ['Year','Round','Region Number','Region Name'] team0_columns = ['Team','Seed','Score'] team1_columns = ['Team.1','Seed.1','Score.1'] # + high_seed_df = tournament_raw_df[base_columns + team0_columns].copy() low_seed_df = tournament_raw_df[base_columns + team1_columns].rename(columns=dict(zip(team1_columns,team0_columns))).copy() high_seed_df['Team_Position'] = 'high' low_seed_df['Team_Position'] = 'low' tournament_df = pd.concat([high_seed_df, low_seed_df]) # - final_four_games teams = pd.concat([tourney19['Team'], tourney19['Team.1']]).sort_values().unique().tolist() # + # Filter on schools that made it to the tourney season19_df = season19_df[season19_df.School.str.contains('NCAA')].copy() # Take out the NCAA tag on the team name season19_df['School'] = season19_df.School.str.replace(' NCAA', '') # Manually rename teams with more colloquial acronyms season19_df.loc[season19_df['School'] == 'Virginia Commonwealth', 'School'] = 'VCU' season19_df.loc[season19_df['School'] == 'Louisiana State', 'School'] = 'LSU' season19_df.loc[season19_df['School'] == 'Mississippi', 'School'] = 'Ole Miss' # Take out play in game losers play_in_losers = ['Temple','North Carolina Central',"St. 
John's (NY)",'Prairie View'] lost_before_64 = season19_df.School.isin(play_in_losers) season19_df = season19_df[~lost_before_64].copy() # + # #! conda install -c conda-forge fuzzywuzzy -y # - season19_df.head(64) teams_from_season_data = season19_df.School.values.tolist() len(teams_from_season_data) from fuzzywuzzy import fuzz, process def get_match(string, potential_matches): best_match, score = process.extractOne(string, potential_matches, scorer=fuzz.token_sort_ratio) if score < 80: print(string, best_match, score)  # raise Exception("No good match with score above 80") return best_match team_name_map = {school:get_match(school, teams) for school in teams_from_season_data} season19_df['School'] = season19_df['School'].map(team_name_map) expected_wins_for_seed = { # seed : expected wins 16 : 0, 15 : 0, 14 : 0, 13 : 0, 12 : 0, 11 : 0, 10 : 0, 9 : 0, 8 : 1, 7 : 1, 6 : 1, 5 : 1, 4 : 2, 3 : 2, 2 : 3, 1 : 4 } # how to structure what happened in tournament # how did each seed in each region (or each team) perform ? the_team = 'Duke' tourney19[(tourney19['Team'] == the_team) | (tourney19['Team.1'] == the_team)] tourney19[tourney19['Region Name'] == 'East'] tournament_df.groupby('Region Name').count()
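`get_match` above leans on fuzzywuzzy's `token_sort_ratio`. For readers without that dependency, a rough stdlib stand-in can be sketched with `difflib` (note: `SequenceMatcher.ratio` is character-based and scores in [0, 1], so it only approximates token_sort_ratio's behavior):

```python
import difflib

def best_match(name, candidates):
    """Return the candidate most similar to `name` under difflib's ratio.

    A stdlib stand-in for fuzzywuzzy's process.extractOne; not a drop-in
    replacement, since token order is not normalized before scoring.
    """
    scored = [(difflib.SequenceMatcher(None, name.lower(), c.lower()).ratio(), c)
              for c in candidates]
    return max(scored)[1]
```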
notebooks/Data Cleaning.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python3 # --- # + [markdown] slideshow={"slide_type": "skip"} # # Lesson 0: Introduction # # ##### Navigation Links # # - [Workshop description](https://jenfly.github.io/eoas-python/) # - [Computer setup instructions](https://jenfly.github.io/eoas-python/SETUP) # - Lessons: # - **0 - Introduction** # - [1 - Python Basics](1-python-basics.ipynb) # - [2 - Reading & Summarizing CSV Data](2-reading-summarizing-csv-data.ipynb) # - [3 - Basic Calculations & Plots](3-basic-calculations-plots.ipynb) # - [4 - Sorting, Filtering & Aggregation](4-sorting-filtering-aggregation.ipynb) # - [5 - Indexing & Subsets](5-indexing-subsets.ipynb) # - [6 - Visualization & Geographic Maps](6-visualization-geographic-maps.ipynb) # - [Pandas cheatsheet](pandas-cheatsheet.ipynb) # - [Solutions to exercises](solutions/solutions-1.ipynb) # - [Additional resources](resources.ipynb) # - [Post-workshop survey](http://bit.ly/eoas-python-survey) # + [markdown] slideshow={"slide_type": "slide"} # ## Welcome to EOAS Python Workshop! # # 1. View slides in your browser: [bit.ly/eoas-python-slides](http://bit.ly/eoas-python-slides) # # 2. Download files: [bit.ly/eoas-python-download](http://bit.ly/eoas-python-download) # - Unzip / extract the *eoas-python-master* folder and note where it is located (e.g. Downloads, Desktop, etc.). If desired, move it to another location on your hard drive. # # 3. Open JupyterLab &mdash; 3 options, depending on your setup: # 1. Open Anaconda Navigator and click "Launch" underneath the "lab" icon, or # 2. Open Terminal (Mac/Linux) or Anaconda Prompt (Windows) and run `jupyter lab` at the command line, or # 3. Go to [ubc.syzygy.ca](https://ubc.syzygy.ca/), click on "Sign-In" in the top right corner, and sign in with your CWL credentials. 
# + [markdown] slideshow={"slide_type": "slide"} # # Agenda # # ### Today # # - Lesson 0: Introduction (45 min) # - Overview of scientific computing in Python # - JupyterLab and Jupyter notebooks # - Lesson 1: Python Basics (1 hr 45 min) # - Lesson 2: Reading & Summarizing CSV Data (30 min) # + [markdown] slideshow={"slide_type": "fragment"} # ### Tomorrow # # - Lesson 3: Basic Calculations & Plots (45 min) # - Lesson 4: Sorting, Filtering & Aggregation (45 min) # - Lesson 5: Indexing & Subsets (45 min) # - Lesson 6: Visualization & Geographic Maps (45 min) # + [markdown] slideshow={"slide_type": "slide"} # # Why Python for science? # # - Free + open source # - Transparency + reproducibility # - Equal access # - Sharing + collaboration # # > *<NAME> - Project Jupyter: From Interactive Python to Open Science* [(video)](https://youtu.be/xuNj5paMuow) [(slides)](https://conferences.oreilly.com/jupyter/jup-ny-2017/public/schedule/detail/62419) # + [markdown] slideshow={"slide_type": "fragment"} # - Powerhouse for scientific computing + data analysis # - High-level, user friendly interface # - Efficient computation with Fortran, C/C++ etc. 
behind the scenes # + [markdown] slideshow={"slide_type": "fragment"} # - General purpose programming language # # ![python_uses](img/python_uses.png) # # From: [Python Developers Survey 2017](https://www.jetbrains.com/research/python-developers-survey-2017) # + [markdown] slideshow={"slide_type": "fragment"} # - Python as "glue" # # > *<NAME> - The Unexpected Effectiveness of Python in Science* [(video)](https://www.youtube.com/watch?v=ZyjCqQEUa8o) [(slides)](https://speakerdeck.com/jakevdp/the-unexpected-effectiveness-of-python-in-science) # + [markdown] slideshow={"slide_type": "fragment"} # Example: download and merge tens/hundreds of [Environment Canada weather data files](http://climate.weather.gc.ca/climate_data/daily_data_e.html?hlyRange=2013-06-11%7C2018-10-02&dlyRange=2013-06-13%7C2018-10-02&mlyRange=%7C&StationID=51442&Prov=BC&urlExtension=_e.html&searchType=stnName&optLimit=yearRange&StartYear=1840&EndYear=2018&selRowPerPage=100&Line=39&searchMethod=contains&Month=10&Day=2&txtStationName=vancouver&timeframe=2&Year=2018) # + [markdown] slideshow={"slide_type": "skip"} # > If you'd like to learn more about downloading and merging data with Python, you can work through the following bonus exercises, which progress through the data wrangling techniques needed for the Environment Canada weather data. # > # >- 1.1(d) # >- 1.3(d) # >- 2(e), (f) # >- 3(h) # >- 4(g) # + [markdown] slideshow={"slide_type": "slide"} # # Jupyter # # - **JupyterLab**: Development environment for scientific computing + data analysis # - Within JupyterLab, we'll be using Jupyter **notebooks**: # - Code, plots, formatted text, equations, etc. in a single document # - Uses an IPython kernel to run Python code (IPython = "Interactive Python") # - Also supports R, Julia, Perl, and over 100 other languages (and counting!) 
# + [markdown] slideshow={"slide_type": "fragment"} # ## Example Notebook # # [Demo](example-notebook.ipynb) # + [markdown] slideshow={"slide_type": "fragment"} # - Notebooks are great for exploration and for documenting your workflow # - Many options for sharing notebooks in human readable format: # - Share online with [nbviewer.jupyter.org](http://nbviewer.jupyter.org/) # - If you use Github, any notebooks you upload are automatically rendered on the site # - Convert to HTML, PDF, etc. with [nbconvert](https://nbconvert.readthedocs.io/en/latest/) # + [markdown] slideshow={"slide_type": "slide"} # # Other Workflows & Environments # # For repetitive tasks, you can re-use code by creating a **library** or automate a workflow with a Python **script**. # - Code is saved in `.py` text files # - There are many, many options for development environments: # - Command line + text editor (e.g. Atom, Sublime, Emacs, etc.) # - Integrated development environment (e.g. PyCharm, Spyder, Visual Studio Code, etc.) # + [markdown] slideshow={"slide_type": "fragment"} # # In this workshop, we will focus only on Jupyter notebooks. # - For a great example of a typical workflow of interactive exploration in Jupyter $\rightarrow$ automation with libraries/scripts, check out <NAME>' blog post [Reproducible Data Analysis in Jupyter](https://jakevdp.github.io/blog/2017/03/03/reproducible-data-analysis-in-jupyter/). # + [markdown] slideshow={"slide_type": "slide"} # # Getting Started with JupyterLab # # Let's take a tour of the main features of JupyterLab and create our first Jupyter notebook! 
# # If you're running JupyterLab locally on your computer: # - Navigate to your `eoas-python-master` folder # # If you're working online at [ubc.syzygy.ca](https://ubc.syzygy.ca/): # - To switch from the older Jupyter notebook app to JupyterLab, edit the url to replace `/tree` with `/lab` # - For example, if your CWL username is `bsmith00`, change the url from https://ubc.syzygy.ca/jupyter/user/bsmith00/tree to https://ubc.syzygy.ca/jupyter/user/bsmith00/lab # # # [This quick-start tutorial](http://bit.ly/jupyter-quickstart) may be a helpful reference as you navigate around JupyterLab. # + [markdown] slideshow={"slide_type": "fragment"} # ### Jupyter Tour # # - Files Sidebar - show/hide # - Demo CSV viewer, text editor # - Create a new notebook # - Note the .ipynb extension (comes from "IPython notebook", the previous name before it was changed to Jupyter to reflect multi-language support) # - Rename the notebook to "workshop.ipynb" # - Notebooks auto-save periodically, or you can manually save # - Next time you open JupyterLab, you can open your "workshop.ipynb" notebook by double-clicking it in the Files Sidebar # + [markdown] slideshow={"slide_type": "slide"} # # Working with Notebooks # # A notebook consists of a series of "cells": # - **Code cells**: execute snippets of code and display the output # - **Markdown cells**: formatted text, equations, images, and more # # By default, a new cell is always a code cell. 
# + [markdown] slideshow={"slide_type": "fragment"}
# ## Code Cells
#
# ### Python as a Calculator
#
# - We can use mathematical operators such as `+`, `-`, `*`, `/`, `**`
# - To run a cell, press `Shift-Enter` or press the Run button on the Notebook Toolbar

# + slideshow={"slide_type": "fragment"}
2 + 2

# + slideshow={"slide_type": "fragment"}
3 / 4

# + slideshow={"slide_type": "fragment"}
5 * 6

# + slideshow={"slide_type": "fragment"}
7 ** 2

# + [markdown] slideshow={"slide_type": "fragment"}
# Combining mathematical operators:

# + slideshow={"slide_type": "-"}
5 + 30 / 3

# + slideshow={"slide_type": "fragment"}
(5 + 30) / 3

# + [markdown] slideshow={"slide_type": "slide"}
# ## Markdown Cells
#
# In Markdown cells, you can write plain text or add formatting and other elements with [Markdown](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet). These include headers, **bold text**, *italic text*, hyperlinks, equations $A=\pi r^2$, inline code `print('Hello world!')`, bulleted lists, and more.

# + [markdown] slideshow={"slide_type": "fragment"}
# - To create a Markdown cell, select an empty cell and change the cell type from "Code" to "Markdown" in the dropdown menu of the Notebook Toolbar
# - To run a Markdown cell, press `Shift-Enter` or the Run button from the Notebook Toolbar
# - To edit a Markdown cell, you need to double-click inside it

# + [markdown] slideshow={"slide_type": "slide"}
# # Other Notebook Basics
#
# - Organizing cells &mdash; insert, delete, cut/copy/paste, move up/down, split, merge
# - Closing vs.
shutting down a notebook &mdash; kernel process in background # - Re-opening a notebook after shutdown # - All the code output is maintained from the previous kernel session # - Clear output of all cells or selected cell(s) # - Running all cells or selected cell(s) # - Restarting and interrupting the kernel # + [markdown] slideshow={"slide_type": "slide"} # # Python Scientific Ecosystem # # A **library** is a collection of pre-written code. It can consist of: # - A single file with Python code (a *module*), or # - A collection of multiple files bundled together (a *package*) # + [markdown] slideshow={"slide_type": "fragment"} # Some libraries come built-in with core Python but most are developed and maintained by external "3rd party" development teams # - Python core + 3rd party libraries = **ecosystem** # - To install and manage 3rd party libraries, you need to use a package manager such as `conda` (which comes with Anaconda/Miniconda) # + [markdown] slideshow={"slide_type": "fragment"} # Over-simplified view of the ecosystem we'll be using in this workshop: # # ![ecosystem](img/ecosystem.png) # + [markdown] slideshow={"slide_type": "fragment"} # Some of the main libraries in the Python scientific ecosystem: # # ![ecosystem_big](img/ecosystem_big.png) # # From [The Unexpected Effectiveness of Python in Science](https://speakerdeck.com/jakevdp/the-unexpected-effectiveness-of-python-in-science) (<NAME>) # + [markdown] slideshow={"slide_type": "slide"} # In this workshop, we'll be focusing on Pandas, with a brief introduction to Matplotlib and Cartopy. 
# # Other useful libraries for scientific computing include: # # - [Numpy + Scipy](https://www.scipy.org/getting-started.html) - Numeric and scientific tools including: # - Linear algebra # - Statistics and random numbers # - Numerical integration # - Differential equations # - Interpolation, optimization and curve fitting # - Special functions # - Fast Fourier transforms # - Signal processing # - [Matplotlib](https://matplotlib.org/) for visualization # - [Seaborn](https://seaborn.pydata.org/) for statistical data visualization # - [Statsmodels](https://www.statsmodels.org/stable/index.html) for statistical analysis and modelling # - [Sympy](https://docs.sympy.org/latest/tutorial/intro.html) for symbolic computation # # See the [additional resources page](resources.ipynb) for links to tutorials and examples for these libraries. # + [markdown] slideshow={"slide_type": "skip"} # --- # # Go to: [next lesson](1-python-basics.ipynb)
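To give a concrete flavour of the numeric tasks these libraries automate, here is a straight-line least-squares fit written in core Python alone; in practice you would reach for NumPy's `polyfit` or SciPy's `curve_fit`, which handle this (and far more) in one call. The data points below are made up for illustration:

```python
# Least-squares fit of y = slope * x + intercept using the closed-form
# solution. Toy data only; NumPy/SciPy are the right tools for real work.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

xs = [0, 1, 2, 3]
ys = [1, 3, 5, 7]        # exactly y = 2x + 1
print(fit_line(xs, ys))  # -> (2.0, 1.0)
```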
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
from numpy import arange
import pandas as pd
from matplotlib import pyplot as plt

df = pd.read_csv("Documents/registration.csv")
del df['state']  # drop the non-numeric column; correlations need numeric data

# Print the cross-correlation matrix
print(df.corr())

# Plot the same matrix as a heatmap
fig = plt.gcf()
fig.set_size_inches(10, 10)

# df.corr() correlates columns; numpy's corrcoef(df) would have
# treated each *row* as a variable, which is not what we want here
R = df.corr()
plt.pcolor(R)
plt.colorbar()
n = len(R.columns)
plt.yticks(arange(0.5, n + 0.5), range(n))
plt.xticks(arange(0.5, n + 0.5), range(n))
plt.show()

# +
import statsmodels.formula.api as sm

# 'voted ~ registered' models voted as the response, registered as the predictor
model = sm.ols(formula='voted ~ registered', data=df)
fitted = model.fit()
print(fitted.summary())

# -
# Plot the predictor (registered) on x and the response (voted) on y,
# so the fitted line overlays the scatter correctly
plt.plot(df['registered'], df['voted'], 'ro')
plt.plot(df['registered'], fitted.fittedvalues, 'b')
plt.legend(['Data', 'Fitted model'])
plt.xlabel('Registered')
plt.ylabel('Voted')
plt.title('Voted vs Registered')
plt.show()

from statsmodels.graphics.regressionplots import plot_leverage_resid2, influence_plot
plot_leverage_resid2(fitted)
plt.show()
influence_plot(fitted)
plt.show()
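The correlation matrix printed by `df.corr()` is built from pairwise Pearson coefficients. As a sanity check on what each entry means, here is the computation spelled out in plain Python on hypothetical registration/turnout counts (the real `registration.csv` data is not reproduced here):

```python
import math

# Pearson correlation: covariance normalized by both standard deviations.
# The toy numbers below are invented for illustration only.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

registered = [100, 200, 300, 400]
voted = [60, 110, 180, 230]
print(round(pearson_r(registered, voted), 3))  # -> 0.998
```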
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/activeloopai/examples/blob/main/colabs/Getting_Started_with_Hub.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# + [markdown] id="lKU8kmSs65xv"
# # **Step 1**: _Hello World_

# + [markdown] id="ZrjGQON37lk2"
# ## Installing Hub

# + [markdown] id="9pcfYcPu7KxY"
# Hub can be installed via `pip`.

# + id="oC_N5qOx6o0d"
from IPython.display import clear_output
# !pip3 install hub
clear_output()

# + [markdown] id="z4_rfJ_GVxLz"
# **By default, Hub does not install dependencies for audio, video, and google-cloud (GCS) support. They can be installed using:**

# + id="AwpEic3jV2nV"
#pip install hub[audio] -> Audio support via miniaudio
#pip install hub[video] -> Video support via pyav
#pip install hub[gcp] -> GCS support via google-* dependencies
#pip install hub[all] -> Installs everything - audio, video and GCS support

# + [markdown] id="0N-f2SYU7OjQ"
# ## Fetching your first Hub dataset

# + [markdown] id="9aNFn7rZ7qxP"
# Begin by loading in [MNIST](https://en.wikipedia.org/wiki/MNIST_database), the hello world dataset of machine learning.
#
# First, load the `Dataset` by pointing to its storage location. Datasets hosted on the Activeloop Platform are typically identified by the namespace of the organization followed by the dataset name: `activeloop/mnist-train`.
# + id="izccjS4k7NvX"
import hub

dataset_path = 'hub://activeloop/mnist-train'
ds = hub.load(dataset_path)  # Returns a Hub Dataset but does not download data locally

# + [markdown] id="bR5n8yYg-0Wu"
# ## Reading Samples From a Hub Dataset

# + [markdown] id="0XdaAKaS-3NO"
# Data is not immediately read into memory because Hub operates [lazily](https://en.wikipedia.org/wiki/Lazy_evaluation). You can fetch data by calling the `.numpy()` method, which reads data into a NumPy array.

# + id="6qpQeNoq-xfo"
# Indexing
W = ds.images[0].numpy()  # Fetch an image and return a NumPy array
X = ds.labels[0].numpy(aslist=True)  # Fetch a label and store it as a list of NumPy arrays

# Slicing
Y = ds.images[0:100].numpy()  # Fetch 100 images and return a NumPy array;
                              # raises an exception if the images do not all
                              # have the same shape
Z = ds.labels[0:100].numpy(aslist=True)  # Fetch 100 labels and store them as a
                                         # list of NumPy arrays

# + id="eNGHXfdKwJ7W"
print('X is {}'.format(X))

# + [markdown] id="tmi2w0_e_LtH"
# Congratulations, you've got Hub working on your local machine! 🤓

# + [markdown] id="G-DM6PKq_di2"
# # **Step 2**: _Creating Hub Datasets_
# *Creating and storing Hub Datasets manually.*

# + [markdown] id="FEzK8LTe_gJW"
# Creating Hub datasets is simple, and you have full control over connecting your source data (files, images, etc.) to specific tensors in the Hub Dataset.

# + [markdown] id="EGXGvKU1qsp1"
# ## Manual Creation

# + [markdown] id="CQk29Mnhqn1V"
# Let's follow along with the example below to create our first dataset. First, download and unzip the small classification dataset below called the *animals dataset*.
# + id="QDJRrlDP_DsW" # Download dataset from IPython.display import clear_output # !wget https://github.com/activeloopai/examples/raw/main/colabs/starting_data/animals.zip clear_output() # + id="SIQf9cY6_vyn" # Unzip to './animals' folder # !unzip -qq /content/animals.zip # + [markdown] id="IIz-MYImAfCg" # The dataset has the following folder structure: # + [markdown] id="IuhZZqVIAqj_" # animals # - cats # - image_1.jpg # - image_2.jpg # - dogs # - image_3.jpg # - image_4.jpg # + [markdown] id="6Lez5uCJAto4" # Now that you have the data, you can **create a Hub `Dataset`** and initialize its tensors. Running the following code will create a Hub dataset inside of the `./animals_hub` folder. # # + id="qtzmT0iBNV23" import hub from PIL import Image import numpy as np import os ds = hub.empty('./animals_hub') # Creates the dataset # + [markdown] id="PQ5yt0aaNeP5" # Next, let's inspect the folder structure for the source dataset `'./animals'` to find the class names and the files that need to be uploaded to the Hub dataset. # + id="ubGLkgG8Njbb" # Find the class_names and list of files that need to be uploaded dataset_folder = './animals' class_names = os.listdir(dataset_folder) files_list = [] for dirpath, dirnames, filenames in os.walk(dataset_folder): for filename in filenames: files_list.append(os.path.join(dirpath, filename)) # + [markdown] id="CtVSh0FnNmyI" # Next, let's **create the dataset tensors and upload metadata**. Check out our page on [Storage Synchronization](https://docs.activeloop.ai/how-hub-works/storage-synchronization) for details about the `with` syntax below. # # + id="a6QDC6caNpiH" with ds: # Create the tensors with names of your choice. 
    ds.create_tensor('images', htype = 'image', sample_compression = 'jpeg')
    ds.create_tensor('labels', htype = 'class_label', class_names = class_names)

    # Add arbitrary metadata - Optional
    ds.info.update(description = 'My first Hub dataset')
    ds.images.info.update(camera_type = 'SLR')

# + [markdown] id="TD-hCSBKBA_m"
# **Note:** Specifying `htype` and `dtype` is not required, but it is highly recommended in order to optimize performance, especially for large datasets. Use `dtype` to specify the numeric type of tensor data, and use `htype` to specify the underlying data structure. More information on `htype` can be found [here](https://api-docs.activeloop.ai/htypes.html).

# + [markdown] id="HR4kLo6YBOhO"
# Finally, let's **populate the data** in the tensors.

# + id="0QRAyS-HA-Fp"
with ds:
    # Iterate through the files and append to hub dataset
    for file in files_list:
        label_text = os.path.basename(os.path.dirname(file))
        label_num = class_names.index(label_text)

        # Append data to the tensors
        ds.append({'images': hub.read(file), 'labels': np.uint32(label_num)})

# + [markdown] id="lWqYzfI1DCPG"
# **Note:** `ds.append({'images': hub.read(path)})` is functionally equivalent to decoding the file into an array and appending that, e.g. `ds.append({'images': np.array(Image.open(path))})`. However, the `hub.read()` method is significantly faster because it does not decompress and recompress the image if the compression matches the `sample_compression` for that tensor. Further details are available in the next section.
#
# **Note:** In order to maintain proper indexing across tensors, `ds.append({...})` requires that you append to all tensors in the dataset. If you wish to skip tensors during appending, please use `ds.append({...}, skip_ok = True)` or append to a single tensor using `ds.tensor_name.append(...)`.

# + [markdown] id="WzHVb521XSud"
# Check out the first image from this dataset. More details about Accessing Data are available in **Step 4**.
# + id="OMG2oif0XSDZ"
Image.fromarray(ds.images[0].numpy())

# + [markdown] id="g8E_f-eXqy1c"
# ## Automatic Creation

# + [markdown] id="MCjy5dH9q3Gi"
# The above animals dataset can also be converted to Hub format automatically using 1 line of code:

# + id="CUtOL7F8q1xB"
src = "./animals"
dest = './animals_hub_auto'

ds = hub.ingest(src, dest)

# + id="o6xboPUKrs1l"
Image.fromarray(ds.images[0].numpy())

# + [markdown] id="03b3r7owq7o8"
# **Note**: Automatic creation currently only supports image classification datasets, though support for other dataset types is continually being added. A full list of supported datasets is available [here](https://api-docs.activeloop.ai/#hub.ingest).

# + [markdown] id="PK_wpkYsDdH2"
# ## Creating Tensor Hierarchies

# + [markdown] id="1btlOtBDDe4G"
# Often it's important to create tensors hierarchically, because information between tensors may be inherently coupled, such as bounding boxes and their corresponding labels. Hierarchy can be created using tensor `groups`:

# + id="ICg3Z1z8CRGN"
ds = hub.empty('./groups_test')  # Creates the dataset

# Create tensor hierarchies
ds.create_group('my_group')
ds.my_group.create_tensor('my_tensor')

# Alternatively, a group can also be created using create_tensor with '/'
ds.create_tensor('my_group_2/my_tensor')  # Automatically creates the group 'my_group_2'

# + [markdown] id="wE-rWBCkpI9T"
# Tensors in groups are accessed via:

# + id="78s3Oa_jpKXV"
ds.my_group.my_tensor
#OR
ds['my_group/my_tensor']

# + [markdown] id="3fhjWZ9hDvKe"
# For more detailed information regarding accessing datasets and their tensors, check out **Step 4**.

# + [markdown] id="46H4nEnZDv5m"
# # **Step 3**: _Understanding Compression_
#
# *Using compression to achieve optimal performance.*

# + [markdown] id="_ajldDggEp8O"
# **Data in Hub can be stored in raw uncompressed format.
However, compression is highly recommended for achieving optimal performance in terms of speed and storage.** # # # Compression is specified separately for each tensor, and it can occur at the `sample` or `chunk` level. For example, when creating a tensor for storing images, you can choose the compression technique for the image samples using the `sample_compression` input: # + id="uOw9hc0jDpQY" import hub # Set overwrite = True for re-runability ds = hub.empty('./compression_test', overwrite = True) ds.create_tensor('images', htype = 'image', sample_compression = 'jpeg') # + [markdown] id="nv4ktXoCE2K2" # In this example, every image added in subsequent `.append(...)` calls is compressed using the specified `sample_compression` method. # + [markdown] id="8WaFBxrEE9GI" # ### **Choosing the Right Compression** # + [markdown] id="yM8VtZ98FCUu" # There is no single answer for choosing the right compression, and the tradeoffs are described in detail in the next section. However, good rules of thumb are: # # # # 1. For data that has application-specific compressors (`image`, `audio`, `video`,...), choose the sample_compression technique that is native to the application such as `jpg`, `mp3`, `mp4`,... # 2. For other data containing large samples (i.e. large arrays with >100 values), `lz4` is a generic compressor that works well in most applications. `lz4` can be used as a `sample_compression` or `chunk_compression`. In most cases, `sample_compression` is sufficient, but in theory, `chunk_compression` produces slightly smaller data. # 3. For other data containing small samples (i.e. labels with <100 values), it is not necessary to use compression. # + [markdown] id="hotuAwslFbAu" # ### **Compression Tradeoffs** # + [markdown] id="QrWvN558v4xn" # **Lossiness -** Certain compression techniques are lossy, meaning that there is irreversible information loss when compressing the data. 
# Lossless compression is less important for data such as images and videos, but it is critical for label data such as numerical labels, binary masks, and segmentation data.
#
# **Memory -** Different compression techniques have substantially different memory footprints. For instance, png vs jpeg compression may result in a 10X difference in the size of a Hub dataset.
#
# **Runtime -** The primary variables affecting download and upload speeds for generating usable data are the network speed and the available compute power for processing the data. In most cases, the network speed is the limiting factor. Therefore, the highest end-to-end throughput for non-local applications is achieved by maximizing compression and utilizing compute power to decompress/convert the data to formats that are consumed by deep learning models (i.e. arrays).
#
# **Upload Considerations -** When applicable, the highest upload speeds can be achieved when the `sample_compression` input matches the compression of the source data, such as:

# + id="qkJKv00UFexo"
# sample_compression is "jpg" and appended image is "jpeg"
ds.create_tensor('images_jpg', htype = 'image', sample_compression = 'jpg')
ds.images_jpg.append(hub.read('./animals/dogs/image_3.jpg'))

# + [markdown] id="3LMsd3K9GJJ9"
# In this case, the input data is a `.jpg`, and the hub `sample_compression` is `jpg`.
#
# However, a mismatch between the compression of the source data and `sample_compression` in Hub results in significantly slower upload speeds, because Hub must decompress the source data and recompress it using the specified `sample_compression` before saving.
# + id="c5MaC1lBwa3a"
# sample_compression is "png" and appended image is "jpeg"
ds.create_tensor('images_png', htype = 'image', sample_compression = 'png')
ds.images_png.append(hub.read('./animals/dogs/image_3.jpg'))

# + [markdown] id="EXGCWxu7wjHX"
# **NOTE:** Due to the computational costs associated with decompressing and recompressing data, it is important that you consider the runtime implications of uploading source data that is compressed differently than the specified `sample_compression`.

# + [markdown] id="JGo-E8Z8Ho6F"
# # **Step 4**: _Accessing Data_
# _Accessing and loading Hub Datasets._

# + [markdown] id="A8Mye_Z5Htut"
# ## Loading Datasets

# + [markdown] id="0DI_D7flHvEN"
# Hub Datasets can be loaded and created in a variety of storage locations with minimal configuration.

# + id="I9dl3mfENulO"
import hub

# + id="sltdan65HmRN"
# Local Filepath
ds = hub.load('./animals_hub')  # Dataset created in Step 2 in this Colab Notebook

# + id="41FBvx25NWMN"
# S3
# ds = hub.load('s3://my_dataset_bucket', creds={...})

# + id="PuacdMOgNNmT"
# Public Dataset hosted by Activeloop
## Activeloop Storage - See Step 6
ds = hub.load('hub://activeloop/k49-train')

# + id="ocs18sNqNQfG"
# Dataset in another workspace on Activeloop Platform
# ds = hub.load('hub://workspace_name/dataset_name')

# + [markdown] id="ZD60qFaAH2qg"
# **Note:** Since `ds = hub.dataset(path)` can be used to both create and load datasets, you may accidentally create a new dataset if there is a typo in the path you provided while intending to load a dataset. If that occurs, simply use `ds.delete()` to remove the unintended dataset permanently.

# + [markdown] id="1Kb9q_ZqIARN"
# ## Referencing Tensors

# + [markdown] id="bq5WSI5LIClV"
# Hub allows you to reference specific tensors using keys or via the `.` notation outlined below.
#
# **Note:** data is still not loaded by these commands.
# + id="jr_ZEtBnN1Wp" ds = hub.dataset('hub://activeloop/k49-train') # + id="24trRqlLH0Tl" ### NO HIERARCHY ### ds.images # is equivalent to ds['images'] ds.labels # is equivalent to ds['labels'] ### WITH HIERARCHY ### # ds.localization.boxes # is equivalent to # ds['localization/boxes'] # ds.localization.labels # is equivalent to # ds['localization/labels'] # + [markdown] id="bjmnRLWHINXG" # ## Accessing Data # + [markdown] id="js3jsmBHIPqu" # Data within the tensors is loaded and accessed using the `.numpy()` command: # + id="6QUWjQNGILWQ" # Indexing ds = hub.dataset('hub://activeloop/k49-train') W = ds.images[0].numpy() # Fetch an image and return a NumPy array X = ds.labels[0].numpy(aslist=True) # Fetch a label and store it as a # list of NumPy arrays # Slicing Y = ds.images[0:100].numpy() # Fetch 100 images and return a NumPy array # The method above produces an exception if # the images are not all the same size Z = ds.labels[0:100].numpy(aslist=True) # Fetch 100 labels and store # them as a list of NumPy arrays # + [markdown] id="DykgrsBEIfk1" # **Note:** The `.numpy()` method will produce an exception if all samples in the requested tensor do not have a uniform shape. If that's the case, running `.numpy(aslist=True)` solves the problem by returning a list of NumPy arrays, where the indices of the list correspond to different samples. # + [markdown] id="K385Fpvmqc0l" # #**Step 5**: *Visualizing Datasets* # + [markdown] id="uSIK-TCAqqQF" # One of Hub's core features is to enable users to visualize and interpret large amounts of data. Let's load the COCO dataset, which is one of the most popular datasets in computer vision. 
# + id="_YRBC6ehqpgz" import hub ds = hub.load('hub://activeloop/coco-train') # + [markdown] id="o5TW4f4Zqzlw" # The tensor layout for this dataset can be inspected using: # + id="YU10NNvNqz54" ds.summary() # + [markdown] id="StDTRjIJq3qI" # The dataset can be [visualized in Platform](https://app.activeloop.ai/activeloop/coco-train), or using an iframe in a jupyter notebook: # + id="7G3X22Tdq5tn" ds.visualize() # + [markdown] id="L713rdfJtVKS" # **Note:** Visualizing datasets in [Activeloop Platform](https://app.activeloop.ai/) will unlock more features and faster performance compared to visualization in Jupyter notebooks. # + [markdown] id="ul8Q0CK6rH50" # ##Visualizing your own datasets # + [markdown] id="DQzmJazJrOaF" # Any hub dataset can be visualized using the methods above as long as it follows the conventions necessary for the visualization engine to interpret and parse the data. These conventions [are explained here](https://docs.activeloop.ai/dataset-visualization). # + [markdown] id="NQipSo2OF_lB" # # **Step 6**: _Using Activeloop Storage_ # # _Storing and loading datasets from Activeloop Platform Storage._ # + [markdown] id="2TJfXx2pgG7P" # ## Register # + [markdown] id="bA39G647GHX4" # You can store your Hub Datasets with Activeloop by first creating an account in [Activeloop Platform](https://app.activeloop.ai/) or in the CLI using: # + id="PCDC-5dmGFdJ" # !activeloop register # + [markdown] id="o-nf5Sb-gMED" # ## Login # + [markdown] id="o1iZpxtOGJ0N" # In order for the Python API to authenticate with the Activeloop Platform, you should log in from the CLI using: # + id="Z0OUCCMGGLv0" # !activeloop login # prompts for inputting username and password will follow ... # Alternatively, you can directly input your username and password in the same line: # # !activeloop login -u my_username -p my_password # + [markdown] id="FvBxhaAYGNOi" # You can then access or create Hub Datasets by passing the Activeloop Platform path to `hub.dataset()`. 
# + id="FeL0a2zwGXeU"
import hub

# platform_path = 'hub://workspace_name/dataset_name'
# e.g. 'hub://jane_smith/my_awesome_dataset'
# ds = hub.dataset(platform_path)

# + [markdown] id="huQQ1M8kGcyL"
# **Note**: When you create an account in Activeloop Platform, a default workspace is created that has the same name as your username. You are also able to create other workspaces that represent organizations, teams, or other collections of multiple users.

# + [markdown] id="vUdVLQUGGnsA"
# Public datasets such as `hub://activeloop/mnist-train` can be accessed without logging in.

# + [markdown] id="vgfj-ldqgZa_"
# ## Tokens

# + [markdown] id="HhguQ8IxgeBd"
# Once you have an Activeloop account, you can create tokens in [Activeloop Platform](https://app.activeloop.ai/) (Organization Details -> API Tokens) and pass them to python commands that require authentication using:

# + id="sxETFtMlgw0E"
#ds = hub.load(platform_path, token = 'xyz')

# + [markdown] id="LVma__gxGq97"
# # **Step 7**: _Connecting Hub Datasets to ML Frameworks_
#
# _Connecting Hub Datasets to machine learning frameworks such as PyTorch and TensorFlow._

# + [markdown] id="8r-AkeJMGwxB"
# You can connect Hub Datasets to popular ML frameworks such as PyTorch and TensorFlow using minimal boilerplate code, and Hub takes care of the parallel processing!

# + [markdown] id="Bnr9ItdkGzDk"
# ## PyTorch

# + [markdown] id="wKkrCv2NG1GG"
# You can train a model by creating a PyTorch DataLoader from a Hub Dataset using `ds.pytorch()`.

# + id="HP3C2uoAGnNK"
import hub
from torchvision import datasets, transforms, models

ds = hub.dataset('hub://activeloop/cifar100-train')  # Hub Dataset

# + [markdown] id="F3J24ptPAyTw"
# The transform parameter in `ds.pytorch()` is a dictionary where the `key` is the tensor name and the `value` is the transformation function that should be applied to that tensor. If a specific tensor's data does not need to be returned, it should be omitted from the keys.
If a tensor's data does not need to be modified during preprocessing, the transformation function is set as `None`. # + id="AvqbqsCnA4P3" tform = transforms.Compose([ transforms.ToPILImage(), # Must convert to PIL image for subsequent operations to run transforms.RandomRotation(20), # Image augmentation transforms.ToTensor(), # Must convert to pytorch tensor for subsequent operations to run transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]), ]) #PyTorch Dataloader dataloader= ds.pytorch(batch_size = 16, num_workers = 2, transform = {'images': tform, 'labels': None}, shuffle = True) # + [markdown] id="GX_MIn_rA70v" # You can iterate through the Hub DataLoader just like you would for a Pytorch DataLoader. Loading the first batch of data takes the longest time because the shuffle buffer is filled before any data is returned. # + id="NuowobdbA96I" for data in dataloader: print(data) break # Training Loop # + [markdown] id="-EQ2LUPydfPo" # **Note:** Some datasets such as imagenet contain both grayscale and color images, which can cause errors when the transformed images are passed to the model. To convert only the grayscale images to color format, you can add this Torchvision transform to your pipeline: # + id="3Wi0K1aVdjMr" # transforms.Lambda(lambda x: x.repeat(int(3/x.shape[0]), 1, 1)) # + [markdown] id="x5bX92ZUG_2F" # ## TensorFlow # + [markdown] id="jeRUG-arHP1F" # Similarly, you can convert a Hub Dataset to a TensorFlow Dataset via the `tf.Data` API. # + id="I1bma0HSHOAO" ds # Hub Dataset object, to be used for training ds_tf = ds.tensorflow() # A TensorFlow Dataset # + [markdown] id="guao84xTb4Zg" # # **Step 8**: _Parallel Computing_ # # _Running computations and processing data in parallel._ # + [markdown] id="BVcZ28epcKRc" # Hub enables you to easily run computations in parallel and significantly accelerate your data processing workflows. 
# This example primarily focuses on parallel dataset uploading, and other use cases such as dataset transformations can be found in [this tutorial](https://docs.activeloop.ai/tutorials/data-processing-using-parallel-computing).
#
# Parallel compute using Hub has two core elements: #1. defining a function or pipeline that will run in parallel and #2. evaluating it using the appropriate inputs and outputs. Let's start with #1 by defining a function that processes files and appends their data to the labels and images tensors.

# + [markdown] id="ZWNxzF1pcWxn"
# **Defining the parallel computing function**
#
# The first step for running parallel computations is to define a function that will run in parallel by decorating it using `@hub.compute`. In the example below, `file_to_hub` converts data from files into hub format, just like in **Step 2: Creating Hub Datasets Manually**. If you have not completed Step 2, please complete the section that downloads and unzips the *animals* dataset.

# + id="JMjMF_-LcHtl"
import hub
from PIL import Image
import numpy as np
import os

@hub.compute
def file_to_hub(file_name, sample_out, class_names):
    ## First two arguments are always default arguments containing:
    #    1st argument is an element of the input iterable (list, dataset, array,...)
    #    2nd argument is a dataset sample
    # Other arguments are optional

    # Find the label number corresponding to the file
    label_text = os.path.basename(os.path.dirname(file_name))
    label_num = class_names.index(label_text)

    # Append the label and image to the output sample
    sample_out.labels.append(np.uint32(label_num))
    sample_out.images.append(hub.read(file_name))

    return sample_out

# + [markdown] id="d-ZhXH-pcgT8"
# In all functions decorated using `@hub.compute`, the first argument must be a single element of any input iterable that is being processed in parallel. In this case, that is a filename `file_name`, because `file_to_hub` reads image files and populates data in the dataset's tensors.
#
# The second argument is a dataset sample `sample_out`, which can be operated on using similar syntax to dataset objects, such as `sample_out.append(...)`, `sample_out.extend(...)`, etc.
#
# The function decorated using `@hub.compute` must return `sample_out`, which represents the data that is added or modified by that function.

# + [markdown] id="TIUiNuQqchnH"
# **Executing the transform**
#
# To execute the transform, you must define the dataset that will be modified by the parallel computation.

# + id="TZfEn1g_cno_"
ds = hub.empty('./animals_hub_transform')  # Creates the dataset

# + [markdown] id="u7FIReeLcpka"
# Next, you define the input iterable that describes the information that will be operated on in parallel. In this case, that is a list of files `files_list` from the animals dataset in Step 2.

# + id="8CwypbTxcrx0"
# Find the class_names and list of files that need to be uploaded
dataset_folder = './animals'

class_names = os.listdir(dataset_folder)

files_list = []
for dirpath, dirnames, filenames in os.walk(dataset_folder):
    for filename in filenames:
        files_list.append(os.path.join(dirpath, filename))

# + [markdown] id="5IC-VRKVcuRI"
# You can now create the tensors for the dataset and **run the parallel computation** using the `.eval` syntax. Optional input arguments are passed to `file_to_hub`, while the first two default arguments `file_name` and `sample_out` are skipped.
#
# The input iterable `files_list` and the output dataset `ds` are passed to the `.eval` method as the first and second arguments, respectively.

# + id="p4H4Fug0cxJG"
with ds:
    ds.create_tensor('images', htype = 'image', sample_compression = 'jpeg')
    ds.create_tensor('labels', htype = 'class_label', class_names = class_names)

    file_to_hub(class_names=class_names).eval(files_list, ds, num_workers = 2)

# + id="BfWc3_fkhr0W"
Image.fromarray(ds.images[0].numpy())

# + [markdown] id="5xTj7kt0jrd3"
# Congrats! You just created a dataset using parallel computing!
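The shape of this workflow, a single function applied independently to each element of an input list, is classic map-style parallelism. Outside of Hub, the same pattern can be sketched with Python's standard library; the file paths below are hypothetical and no Hub API is involved:

```python
from concurrent.futures import ThreadPoolExecutor
import os

# Same label-from-path logic as file_to_hub: the parent folder name
# is the class label. Paths are made-up examples.
def label_from_path(path):
    return os.path.basename(os.path.dirname(path))

files_list = ['animals/cats/image_1.jpg', 'animals/dogs/image_3.jpg']
with ThreadPoolExecutor(max_workers=2) as pool:
    labels = list(pool.map(label_from_path, files_list))
print(labels)  # -> ['cats', 'dogs']
```

Hub's `.eval(files_list, ds, num_workers = 2)` plays the role of the executor here, with the added benefit of writing the results into the dataset's tensors.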
🎈 # + [markdown] id="iXRCphquSFs3" # # **Step 9**: _Dataset Version Control_ # # *Managing changes to your datasets using Version Control.* # + [markdown] id="4y_V53L8SCuB" # Hub dataset version control allows you to manage changes to datasets with commands very similar to Git. It provides critical insights into how your data is evolving, and it works with datasets of any size! # # Let's check out how dataset version control works in Hub! If you haven't done so already, please download and unzip the *animals* dataset from **Step 2**. # # First let's create a hub dataset in the `./version_control_hub` folder. # + id="YgEWowxySUDL" import hub import numpy as np from PIL import Image # Set overwrite = True for re-runability ds = hub.dataset('./version_control_hub', overwrite = True) # Create a tensor and add an image with ds: ds.create_tensor('images', htype = 'image', sample_compression = 'jpeg') ds.images.append(hub.read('./animals/cats/image_1.jpg')) # + [markdown] id="DNLh-JE5pkS_" # The first image in this dataset is a picture of a cat: # + id="w4hVjQaVpksW" Image.fromarray(ds.images[0].numpy()) # + [markdown] id="_CEF-kjySdLp" # ##Commit # + [markdown] id="joKq3VV0SdEW" # To commit the data added above, simply run `ds.commit`: # # + id="pj9uTZeSTGwT" first_commit_id = ds.commit('Added image of a cat') print('Dataset in commit {} has {} samples'.format(first_commit_id, len(ds))) # + [markdown] id="Tc2-MRmaSc4x" # Next, let's add another image and commit the update: # + id="zArtG0phTZRv" with ds: ds.images.append(hub.read('./animals/dogs/image_3.jpg')) second_commit_id = ds.commit('Added an image of a dog') print('Dataset in commit {} has {} samples'.format(second_commit_id, len(ds))) # + [markdown] id="fYjnY_1RTcjM" # The second image in this dataset is a picture of a dog: # + id="gbPu9JoFp0ap" Image.fromarray(ds.images[1].numpy()) # + [markdown] id="kWvgUH25Tj8V" # ##Log # + [markdown] id="CiqOb8POTkb4" # The commit history starting from the current commit can 
be shown using `ds.log`: # # + id="XQSxvzIcTuU-" log = ds.log() # + [markdown] id="TgefyAuATwi4" # This command prints the log to the console and also assigns it to the variable `log`. The author of the commit is the username of the [Activeloop account](https://docs.activeloop.ai/getting-started/using-activeloop-storage) that logged in on the machine. # + [markdown] id="2JRpqeYqV-oT" # ## Branch # + [markdown] id="4TWcOT4RV-d4" # Branching takes place by running the `ds.checkout` command with the parameter `create = True`. Let's create a new branch `dog_flipped`, flip the second image (dog), and create a new commit on that branch. # + id="eY-CZmzrXr0X" ds.checkout('dog_flipped', create = True) with ds: ds.images[1] = np.transpose(ds.images[1], axes=[1,0,2]) flipped_commit_id = ds.commit('Flipped the dog image') # + [markdown] id="VUUMXFKEXuIq" # The dog image is now flipped and the log shows a commit on the `dog_flipped` branch as well as the previous commits on `main`: # + id="DIP6V3VFqPKS" Image.fromarray(ds.images[1].numpy()) # + id="O3-UgHZPX_0u" ds.log() # + [markdown] id="HCrKgp6FYDG9" # ## Checkout # + [markdown] id="07nHcIIiYFtW" # A previous commit or branch can be checked out using `ds.checkout`: # + id="YZe8iXjlYEdf" ds.checkout('main') Image.fromarray(ds.images[1].numpy()) # + [markdown] id="7AZXuEVYYVHm" # As expected, the dog image on `main` is not flipped. # + [markdown] id="gmydIxas3XsV" # ## Diff # + [markdown] id="dTEuB-4C3a-B" # Understanding changes between commits is critical for managing the evolution of datasets. Hub's `ds.diff` function enables users to determine the number of samples that were added, removed, or updated for each tensor.
The function can be used in 3 ways: # + id="XhlPmK9E37Do" ds.diff() # Diff between the current state and the last commit # + id="tCa8-nlJ4Dxg" ds.diff(first_commit_id) # Diff between the current state and a specific commit # + id="Bj2Yez624Ecb" ds.diff(second_commit_id, first_commit_id) # Diff between two specific commits # + [markdown] id="i1GqH1JvYkNP" # ## HEAD Commit # # + [markdown] id="RbiRZ0eGiBrz" # Unlike Git, Hub's version control does not have a staging area because changes to datasets are not stored locally before they are committed. All changes are automatically reflected in the dataset's permanent storage (local or cloud). **Therefore, any changes to a dataset are automatically stored in a HEAD commit on the current branch**. This means that the uncommitted changes do not appear on other branches. Let's see how this works: # # You should currently be on the `main` branch, which has 2 samples. Let's add another image: # # + id="FwuzyJUViZC6" print('Dataset on {} branch has {} samples'.format('main', len(ds))) with ds: ds.images.append(hub.read('./animals/dogs/image_4.jpg')) print('After updating, the HEAD commit on {} branch has {} samples'.format('main', len(ds))) # + [markdown] id="p3qePpVFqkG9" # The 3rd sample is also an image of a dog: # + id="tDfKKuhLqlMM" Image.fromarray(ds.images[2].numpy()) # + [markdown] id="4brOnBdyiq6p" # Next, if you check out the `dog_flipped` branch, the dataset contains 2 samples, which is the sample count from when that branch was created. Therefore, the additional uncommitted third sample that was added to the `main` branch above is not reflected when other branches or commits are checked out.
# + id="cvG-X9VqipM3" ds.checkout('dog_flipped') print('Dataset in {} branch has {} samples'.format('dog_flipped', len(ds))) # + [markdown] id="7aoAeA7vixsC" # Finally, when checking out the `main` branch again, the prior uncommitted changes are visible, and they are stored in the `HEAD` commit on `main`: # + id="6DnXiwTmi6G9" ds.checkout('main') print('Dataset in {} branch has {} samples'.format('main', len(ds))) # + [markdown] id="ztVUV_BDqyHR" # The dataset now contains 3 samples and the uncommitted dog image is visible: # + id="ci0IHCP9q0In" Image.fromarray(ds.images[2].numpy()) # + [markdown] id="uinXs4r1i7Zz" # ## Merge - Coming Soon # # + [markdown] id="kQOGilvkjG2c" # Merging is a critical feature for collaborating on datasets, and Activeloop is currently working on an implementation. # + [markdown] id="Fz15ukH5jiIm" # Congrats! You are now an expert in dataset version control! 🎓 # + [markdown] id="vBM1DKntaXwS" # # **Step 10:** *Dataset Filtering* # + [markdown] id="W8K5WREdaf--" # Filtering and querying are important aspects of data engineering because they enable users to focus on subsets of their datasets in order to obtain important insights, perform quality control, and train models on parts of their data. # # Hub enables you to perform queries using user-defined functions or Hub's Pythonic query language, all of which can be parallelized using our simple multi-processing API. # + [markdown] id="Rp9i4Flsai0p" # ## Filtering with user-defined functions # + [markdown] id="ZtwDPVwmapjL" # The first step for querying using UDFs is to define a function that returns a boolean depending on whether an input sample in a dataset meets the user-defined condition. In this example, we define a function that returns `True` if the labels for a tensor are in the desired `labels_list`. If there are inputs to the filtering function other than `sample_in`, it must be decorated with `@hub.compute`.
# + id="2lWTjNbUamyp" @hub.compute def filter_labels(sample_in, labels_list, class_names): text_label = class_names[sample_in.labels.numpy()[0]] return text_label in labels_list # + [markdown] id="zwITkEcmvdNo" # Let's load a dataset and specify the `labels_list` that we want to filter for. # + id="FLb_JgfxbYGA" import hub from PIL import Image ds = hub.load('hub://activeloop/cifar10-test') labels_list = ['automobile', 'ship'] # Desired labels for filtering class_names = ds.labels.info.class_names # Mapping from numeric to text labels # + [markdown] id="ccAa1QuZazC-" # The filtering function is executed using the `ds.filter()` command below, and it returns a virtual view of the dataset (`dataset_view`) that only contains the indices that met the filtering condition. Just like in the Parallel Computing API, the `sample_in` parameter does not need to be passed into the filter function when evaluating it, and multi-processing can be specified using the `scheduler` and `num_workers` parameters. # + id="mq4e5gRIbZ74" ds_view = ds.filter(filter_labels(labels_list, class_names), scheduler = 'threaded', num_workers = 0) # + [markdown] id="G2DowEHfbb2m" # The data in the returned `ds_view` can be accessed just like a regular dataset. # + id="4FCxLH0Nbec8" Image.fromarray(ds_view.images[0].numpy()) # + [markdown] id="5ibjQKAEbs9F" # **Note:** in most cases, multi-processing is not necessary for queries that involve simple data such as labels or bounding boxes. However, multi-processing significantly accelerates queries that must load rich data types such as images and videos. # + [markdown] id="-OjmRFFVb1pn" # ## Filtering using our pythonic query language # + [markdown] id="fgIOy-aJcOBF" # Queries can also be executed using hub's Pythonic query language. This UX is primarily intended for use in [Activeloop Platform](https://app.activeloop.ai/), but it can also be applied programmatically in Python. 
# + id="YS4CD0Wncb99" ds_view = ds.filter("labels == 'automobile' or labels == 'ship'", scheduler = 'threaded', num_workers = 0) # + [markdown] id="FtaZboftcemF" # Tensors can be referred to by name; the language supports common logical operations (`in, ==, !=, >, <, >=, <=`), and numpy-like operators and indexing can be applied such as `'images.min > 5'`, `'images.shape[2]==1'`, and others. # + [markdown] id="KEi-DxIOcpHp" # Congrats! You just learned to filter data with hub! 🎈
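Stripped of the Hub machinery, the UDF in the filtering section boils down to a membership test from numeric label to class name. A standalone sketch of that logic (the class-name ordering below is the standard CIFAR-10 one and stands in for `ds.labels.info.class_names`):

```python
# Minimal sketch of the filtering UDF's core logic, independent of Hub:
# map a numeric label to its text name and keep samples whose name is in labels_list.
class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
               'dog', 'frog', 'horse', 'ship', 'truck']  # standard CIFAR-10 order
labels_list = ['automobile', 'ship']

def keep(numeric_label):
    return class_names[numeric_label] in labels_list

samples = [0, 1, 8, 3, 1]  # numeric labels, as stored in the labels tensor
kept = [i for i, lab in enumerate(samples) if keep(lab)]
print(kept)  # indices that survive the filter
```

The `ds.filter()` call above applies exactly this kind of predicate per sample and returns the surviving indices as a dataset view.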
colabs/Getting_Started_with_Hub.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/magdachmiel/dw_matrix/blob/master/Transformacja%233/Day5.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + id="FGaxxH9FH1Vc" colab_type="code" outputId="baa7535a-dfff-405a-f3f9-f7bc9aeb0020" colab={"base_uri": "https://localhost:8080/", "height": 202} # !pip install hyperopt # + [markdown] id="SC-ucZ-hIVAx" colab_type="text" # ## Hyperparameter optimization with hyperopt # # + id="ZAQm3gC1ICmd" colab_type="code" colab={} import pandas as pd import numpy as np import os import datetime import tensorflow as tf from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Conv2D, MaxPool2D, Dense, Flatten, Dropout from tensorflow.keras.utils import to_categorical # %load_ext tensorboard import matplotlib.pyplot as plt from skimage import color, exposure from sklearn.metrics import accuracy_score from hyperopt import hp, STATUS_OK, tpe, Trials, fmin # + id="4FmdxV4JIS9C" colab_type="code" outputId="641c3eb9-073a-4791-d3bd-4461476f61b9" colab={"base_uri": "https://localhost:8080/", "height": 35} # cd '/content/drive/My Drive/Colab Notebooks/dw_matrix/Transformacja#3' # + id="Mj4xWBd7IS_9" colab_type="code" colab={} train = pd.read_pickle('data/train.p') X_train, y_train = train['features'], train['labels'] test = pd.read_pickle('data/test.p') X_test, y_test = test['features'], test['labels'] # + id="a2tE_7HKITCo" colab_type="code" colab={} if y_train.ndim == 1: y_train = to_categorical(y_train) if y_test.ndim == 1: y_test = to_categorical(y_test) # + id="RoApBN8rITFd" colab_type="code" colab={} input_shape = X_train.shape[1:] num_classes = y_train.shape[1] # + id="JubFjoegIsFQ"
colab_type="code" colab={} def train_model(model, X_train, y_train, params_fit={}): model.compile(loss='categorical_crossentropy', optimizer='Adam', metrics=['accuracy']) logdir = os.path.join('logs', datetime.datetime.now().strftime('%Y%m%d-%H%M%S')) tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1) model.fit( X_train, y_train, batch_size = params_fit.get('batch_size', 128), epochs = params_fit.get('epochs', 5), verbose = params_fit.get('verbose', 1), validation_data = params_fit.get('validation_data', (X_train, y_train)), callbacks = [tensorboard_callback] ) return model # + id="kQwpjWPkIsIg" colab_type="code" colab={} def predict(model_trained, X_test, y_test, scoring = accuracy_score): y_pred_prob = model_trained.predict(X_test) y_pred = np.argmax(y_pred_prob, axis=1) y_test_norm = np.argmax(y_test, axis=1) # normalized label vector from the test set return scoring(y_test_norm, y_pred) # + id="slOIqN55IsLG" colab_type="code" colab={} def train_and_predict(model): model_trained = train_model(model, X_train, y_train) return predict(model_trained, X_test, y_test) # + id="zAbtn7AVIsOr" colab_type="code" colab={} def get_cnn_v5(input_shape, num_classes): return Sequential([ Conv2D(filters=32, kernel_size=(3,3), activation='relu', input_shape=input_shape), Conv2D(filters=32, kernel_size=(3,3), activation='relu', padding='same'), MaxPool2D(), Dropout(0.3), Conv2D(filters=64, kernel_size=(3,3), activation='relu', padding='same'), Conv2D(filters=64, kernel_size=(3,3), activation='relu'), MaxPool2D(), Dropout(0.3), Conv2D(filters=128, kernel_size=(3,3), activation='relu', padding='same'), Conv2D(filters=128, kernel_size=(3,3), activation='relu'), MaxPool2D(), Dropout(0.3), Flatten(), Dense(1024, activation='relu'), Dropout(0.3), Dense(1024, activation='relu'), Dropout(0.3), Dense(num_classes, activation='softmax') ]) # + id="DVBazJSyJ28n" colab_type="code" outputId="237756f9-af71-4c9b-e205-6f4e534b0495" colab={"base_uri":
"https://localhost:8080/", "height": 221} model_trained = train_model(get_cnn_v5(input_shape, num_classes), X_train, y_train) predict_results = predict(model_trained, X_test, y_test) # + id="Zu5xaBSxJlLR" colab_type="code" outputId="46b029fb-6d10-42d5-bd38-3b711e956bd6" colab={"base_uri": "https://localhost:8080/", "height": 54} model_trained.evaluate(X_test, y_test) # + [markdown] id="dKK66JVEJL1k" colab_type="text" # ## Creating the objective function for optimization # + id="5lxaP7JKNhzg" colab_type="code" colab={} def get_model(params): return Sequential([ Conv2D(filters=32, kernel_size=(3,3), activation='relu', input_shape=input_shape), Conv2D(filters=32, kernel_size=(3,3), activation='relu', padding='same'), MaxPool2D(), Dropout(params['dropout_cnn_block_1']), Conv2D(filters=64, kernel_size=(3,3), activation='relu', padding='same'), Conv2D(filters=64, kernel_size=(3,3), activation='relu'), MaxPool2D(), Dropout(params['dropout_cnn_block_2']), Conv2D(filters=128, kernel_size=(3,3), activation='relu', padding='same'), Conv2D(filters=128, kernel_size=(3,3), activation='relu'), MaxPool2D(), Dropout(params['dropout_cnn_block_3']), Flatten(), Dense(1024, activation='relu'), Dropout(params['dropout_dense_block_4']), Dense(1024, activation='relu'), Dropout(params['dropout_dense_block_5']), Dense(num_classes, activation='softmax') ]) # + id="GNTSoTLBJIgm" colab_type="code" colab={} def func_obj(params): #model = get_cnn_v5(input_shape, num_classes) model = get_model(params) model.compile(loss='categorical_crossentropy', optimizer='Adam', metrics=['accuracy']) model.fit( X_train, y_train, batch_size = int(params.get('batch_size')), epochs = 10, verbose = 0 ) accuracy = model.evaluate(X_test, y_test)[1] print(' For params = ', params, ' ACC = {}'.format(accuracy)) return {'loss' : -accuracy, 'status' : STATUS_OK, 'model' : model} # + id="SH0SkVjpJIkC" colab_type="code" outputId="9f4774b9-217d-4782-c7b0-4c4c61195744" colab={"base_uri": "https://localhost:8080/", "height": 1000}
space = { 'batch_size' : hp.quniform('batch_size', 50, 200, 10), 'dropout_cnn_block_1' : hp.quniform('dropout_cnn_block_1', 0.2, 0.5, 0.1), 'dropout_cnn_block_2' : hp.quniform('dropout_cnn_block_2', 0.2, 0.5, 0.1), 'dropout_cnn_block_3' : hp.quniform('dropout_cnn_block_3', 0.2, 0.5, 0.1), 'dropout_dense_block_4' : hp.quniform('dropout_dense_block_4', 0.2, 0.7, 0.1), 'dropout_dense_block_5' : hp.quniform('dropout_dense_block_5', 0.2, 0.7, 0.1), } best = fmin( func_obj, space, tpe.suggest, 30, Trials() ) # + id="MCBA65wxxFry" colab_type="code" outputId="2b100df8-25b4-4b00-a6bd-889a8468c0d6" colab={"base_uri": "https://localhost:8080/", "height": 127} best # + [markdown] id="4Gq3Dwn2woMp" colab_type="text" # ## Best results for the given parameter space # + [markdown] id="hLLPz4K4QzeN" colab_type="text" # Without parameter optimization - loss: 0.2339 - accuracy: 0.9635 # + [markdown] id="YlfS5yx1xAQX" colab_type="text" # With optimization - loss: 0.0894 - accuracy: 0.9785 # # + [markdown] id="U9fNO1wXo6N4" colab_type="text" # ## Verifying the model with the tuned parameters # + id="2QNEz558o6eO" colab_type="code" colab={} params = best # + id="uNmaQ57ho6mf" colab_type="code" outputId="0b82960b-2e4c-44d4-a4fd-2e2f1b0f06a7" colab={"base_uri": "https://localhost:8080/", "height": 35} model_ver = get_model(params) model_ver.compile(loss='categorical_crossentropy', optimizer='Adam', metrics=['accuracy']) model_ver.fit( X_train, y_train, batch_size = int(params.get('batch_size', 128)), epochs = 7, verbose = 0 ) accuracy = model_ver.evaluate(X_test, y_test)[1] # evaluate returns [loss, accuracy]; index 0 would be the loss # + id="lQR_tN8x_7mN" colab_type="code" colab={}
Transformacja#3/Day5.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.9.7 64-bit (''ml'': conda)' # language: python # name: python3 # --- # + import cv2 import numpy as np from importlib import reload import matplotlib.pyplot as plt from IPython.display import Video import torch from torchvision import transforms from torchvision.io import read_video, read_video_timestamps import kornia as K import kornia.feature as KF from kornia_moons.feature import * from kornia.contrib import ImageStitcher from kornia.geometry.transform import warp_perspective, get_perspective_transform import utils def load_torch_image(fname): img = K.image_to_tensor(cv2.imread(fname), False).float() /255. img = K.color.bgr_to_rgb(img) return img # - fname = "../deep-stabilization/dvs/video/s_114_outdoor_running_trail_daytime/ControlCam_20200930_104820.mp4" video_frames, audio_frames, meta = read_video(fname, end_pts=100, pts_unit="sec") print(meta) print("video size: ", video_frames.shape) print("audio size: ", audio_frames.shape) # + # utils.show_frames(video_frames[:100:10], 2, 5, (30,16)) # + img1 = video_frames[0:1].permute(0,3,1,2).float() / 255 img2 = video_frames[100:101].permute(0,3,1,2).float() / 255 print(img1.shape) feature1 = transforms.CenterCrop((270*3,480*3))(img1) feature2 = transforms.CenterCrop((270*3,480*3))(img2) feature1 = torch.cat(transforms.FiveCrop(256)(feature1)) feature2 = torch.cat(transforms.FiveCrop(256)(feature2)) print(feature1.shape) # K.color.rgb_to_grayscale(img1).shape utils.show_frame(feature1[3].permute(1,2,0)) # + matcher2 = KF.LocalFeatureMatcher( KF.SIFTFeature(2000, device="cuda"), KF.DescriptorMatcher('smnn', 0.9) ) input_dict = {"image0": K.color.rgb_to_grayscale(feature1).cuda(), # LofTR works on grayscale images only "image1": K.color.rgb_to_grayscale(feature2).cuda()} with torch.no_grad(): correspondences = matcher2(input_dict) del 
input_dict["image0"], input_dict["image1"] for k,v in correspondences.items(): print (k) print(len(correspondences["keypoints0"])) # - # for x in range(5): # idx = torch.topk(correspondences["confidence"][correspondences["batch_indexes"]==x], 100).indices # print((correspondences["keypoints0"][correspondences["batch_indexes"]==x][idx] - correspondences["keypoints1"][correspondences["batch_indexes"]==x][idx]).mean(dim=0)) # print("\n\n\n") # for x in range(5): # idx = torch.topk(correspondences["confidence"][correspondences["batch_indexes"]==x], 150).indices # print((correspondences["keypoints0"][correspondences["batch_indexes"]==x][idx] - correspondences["keypoints1"][correspondences["batch_indexes"]==x][idx]).mean(dim=0)) # print("\n\n\n") tmp = [] for x in range(5): tmp.append((correspondences["keypoints0"][correspondences["batch_indexes"]==x] - correspondences["keypoints1"][correspondences["batch_indexes"]==x]).median(dim=0)[0]) print(tmp[-1]) # + src = torch.Tensor([ [135*1+128, 240*1+128],# top-left [135*1+128, 240*7-128],# top-right [135*7-128, 240*1+128],# bottom-left [135*7-128, 240*7-128] # bottom-right ]).cuda() dst = torch.vstack(tmp[:4]) + src # - img1[0].permute(1,2,0).shape res = cv2.warpAffine(img1[0].permute(1,2,0).numpy(), H[:2], (1080, 1920)) utils.show_frame(torch.from_numpy(res)) H, inliers = cv2.findFundamentalMat(mkpts0, mkpts1, cv2.USAC_MAGSAC, 0.5, 0.999, 100000) # + print(src) print(dst) b = get_perspective_transform(src.unsqueeze(0), dst.unsqueeze(0)) out = warp_perspective(img1.cuda(), b, (1080,1920)).cpu() outt = torch.where(out == 0.0, img2, out) utils.show_frame(outt[0].permute(1,2,0)) # - out = warp_perspective(img1.cuda(), torch.from_numpy(H).cuda().unsqueeze(0).float(), (1080,1920)).cpu() outtt = torch.where(out == 0.0, img2, out) utils.show_frame(outtt[0].permute(1,2,0)) th = torch.quantile(correspondences["confidence"], 0.0) idx = correspondences["confidence"] > th print(idx.sum()) mkpts0 = correspondences['keypoints0'][idx].cpu().numpy() mkpts1 =
correspondences['keypoints1'][idx].cpu().numpy() H, inliers = cv2.findFundamentalMat(mkpts0, mkpts1, cv2.USAC_MAGSAC, 0.5, 0.999, 100000) inliers = inliers > 0 draw_LAF_matches( KF.laf_from_center_scale_ori(torch.from_numpy(mkpts0).view(1,-1, 2), torch.ones(mkpts0.shape[0]).view(1,-1, 1, 1), torch.ones(mkpts0.shape[0]).view(1,-1, 1)), KF.laf_from_center_scale_ori(torch.from_numpy(mkpts1).view(1,-1, 2), torch.ones(mkpts1.shape[0]).view(1,-1, 1, 1), torch.ones(mkpts1.shape[0]).view(1,-1, 1)), torch.arange(mkpts0.shape[0]).view(-1,1).repeat(1,2), K.tensor_to_image(img1), K.tensor_to_image(img2), inliers, draw_dict={'inlier_color': (0.2, 1, 0.2), 'tentative_color': None, 'feature_color': (0.2, 0.5, 1), 'vertical': False}) # + from kornia.geometry.transform import get_perspective_transform, warp_perspective idx = torch.topk(correspondences["confidence"], 12).indices # idx = torch.randperm(20) src = correspondences["keypoints0"][idx[:4]].unsqueeze(0) dst = correspondences["keypoints1"][idx[:4]].unsqueeze(0) a = get_perspective_transform(src, dst) src = correspondences["keypoints0"][idx[2:6]].unsqueeze(0) dst = correspondences["keypoints1"][idx[2:6]].unsqueeze(0) b = get_perspective_transform(src, dst) out = warp_perspective(img1.cuda(), (a+b)/2, (1080//4,1920//4)).cpu() outt = torch.where(out < 0.0, img2, out) utils.show_frame(outt[0].permute(1,2,0)) # + # Import numpy and OpenCV import numpy as np import cv2# Read input video fname = "../deep-stabilization/dvs/video/s_114_outdoor_running_trail_daytime/ControlCam_20200930_104820.mp4" cap = cv2.VideoCapture(fname) # Get frame count n_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT)) # Get width and height of video stream w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)) h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) # Define the codec for output video # Set up output video fps = 30 print(w, h) # Read first frame _, prev = cap.read() # Convert frame to grayscale prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY) # prev_gray = 
(prev_gray&192)|((prev_gray&32)<<1) # Pre-define transformation-store array transforms = np.zeros((n_frames-1, 3), np.float32) log = [] homo = [] for i in range(n_frames-2): log.append([]) # Detect feature points in previous frame prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400, qualityLevel=0.3, minDistance=20, blockSize=9) criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001) prev_pts = cv2.cornerSubPix( prev_gray, prev_pts, (5,5), (-1,-1), criteria ) # zeroZone of (-1,-1) disables the dead region # Read next frame success, curr = cap.read() if not success: break # Convert to grayscale curr_gray = cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY) # Calculate optical flow (i.e. track feature points) curr_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, prev_pts, None) # Sanity check assert prev_pts.shape == curr_pts.shape # Filter only valid points idx = np.where(status==1)[0] prev_pts = prev_pts[idx] curr_pts = curr_pts[idx] # Find transformation matrix retval, inliers = cv2.estimateAffine2D(prev_pts, curr_pts) retval = cv2.findHomography(prev_pts, curr_pts)[0] homo.append(cv2.findHomography(prev_pts, curr_pts)[0]) # """ # adding # """ # cv2.find(prev_pts, curr_pts) # arr = np.arange(prev_pts.shape[0]) # tests = [] # for x in range(100): # index = np.random.choice(prev_pts.shape[0], size=(4,), replace=False) # tests.append(cv2.getPerspectiveTransform(prev_pts[index], curr_pts[index])) # test = np.stack(tests) # Extract translation dx = retval[0][2] dy = retval[1][2] # Extract rotation angle da = np.arctan2(retval[1,0], retval[0,0]) log[-1].append(len(inliers)) log[-1].append(np.arctan2(retval[0,1], retval[1,1])) # Store transformation transforms[i] = [dx,dy,da] # Move to next frame prev_gray = curr_gray print("Frame: {:03d}/{:3d} - Tracked points : {:3d}".format(i, n_frames, len(prev_pts)), end="\r", flush=True) # Compute trajectory using cumulative sum of transformations print("transforms: ", len(transforms)) trajectory = np.cumsum(transforms, axis=0) # - intrinsic =
np.array([ [1920/1.27, 0.0, 0.5*(1920-1)], [0.0, 1920/1.27, 0.5*(1080-1)], [0.0, 0.0, 1.0] ]) tests = [] arr = np.arange(prev_pts.shape[0]) for x in range(100): index = np.random.choice(prev_pts.shape[0], size=(10,), replace=False) tests.append(cv2.findFundamentalMat(prev_pts[index], curr_pts[index])) import kornia p_pts = torch.from_numpy(prev_pts).permute(1,0,2) c_pts = torch.from_numpy(curr_pts).permute(1,0,2) pts, tran = kornia.geometry.epipolar.normalize_points(torch.cat([p_pts, c_pts], dim=1)) p_pts, c_pts = pts.narrow(1,0,128), pts.narrow(1,128,128) fund1 = kornia.geometry.epipolar.find_fundamental(p_pts, c_pts, weights=torch.ones((1,128))) kornia.geometry.epipolar.projections_from_fundamental(fund1)[0, :, : ,1] fund2 = cv2.findFundamentalMat(prev_pts, curr_pts)[0] fund2 = torch.from_numpy(fund2).view(1,3,3) kornia.geometry.epipolar.projections_from_fundamental(fund2)[0, :, : ,1] # + from scipy.signal import savgol_filter def movingAverage(curve, window_size): # Define the filter f = np.ones(window_size)/window_size # Add padding to the boundaries curve_pad = np.lib.pad(curve, (window_size-3, 2), 'edge') # Apply convolution curve_smoothed = np.convolve(curve_pad, f, mode='valid') # Remove padding curve_smoothed = curve_smoothed # return smoothed curve return savgol_filter(curve, window_size, 3) # return curve_smoothed # def movingAverage(curve, radius): # window_size = 2 * radius + 1 # # Define the filter # f = np.ones(window_size)/window_size # # Add padding to the boundaries # curve_pad = np.lib.pad(curve, (radius, radius), 'edge') # # Apply convolution # curve_smoothed = np.convolve(curve_pad, f, mode='same') # # Remove padding # curve_smoothed = curve_smoothed[radius:-radius] # # return smoothed curve # return savgol_filter(curve, window_size, 3) # # return curve_smoothed def fixBorder(frame): s = frame.shape # Scale the image 4% without moving the center T = cv2.getRotationMatrix2D((s[1]/2, s[0]/2), 0, 1.04) frame = cv2.warpAffine(frame, T, (s[1], 
s[0])) return frame def smooth(trajectory, SMOOTHING_RADIUS=31): smoothed_trajectory = np.copy(trajectory) # Filter the x, y and angle curves for i in range(3): smoothed_trajectory[:,i] = movingAverage(trajectory[:,i], SMOOTHING_RADIUS) return smoothed_trajectory # + fps, w, h = 30, 1920, 1080 # Calculate difference in smoothed_trajectory and trajectory smoothed_trajectory = smooth(trajectory) difference = smoothed_trajectory - trajectory transforms_smooth = transforms + difference # Reset stream to first frame cap.set(cv2.CAP_PROP_POS_FRAMES, 0) frames=[] # Write n_frames-1 transformed frames fourcc = cv2.VideoWriter_fourcc(*'mp4v') out = cv2.VideoWriter('../video_out.mp4', fourcc, fps, (w, h)) for i in range(n_frames-2): # Read next frame success, frame = cap.read() if not success: break # Extract transformations from the new transformation array dx = transforms_smooth[i,0] dy = transforms_smooth[i,1] da = transforms_smooth[i,2] # Reconstruct transformation matrix accordingly to new values m = np.zeros((3,3), np.float32) m[0,0] = np.cos(da) m[0,1] = -np.sin(da) m[1,0] = np.sin(da) m[1,1] = np.cos(da) m[0,2] = dx m[1,2] = dy m[2] = homo[i][2] # Apply affine wrapping to the given frame # frame_stabilized = cv2.warpAffine(frame.astype(np.float64)/255, m, (w,h)) # tmp = sqrtm(sqrtm(sqrtm(sqrtm(acc_homo[i]@np.linalg.inv(acc_homo[max(0,i-16)]))))).real # tmp = homo[i]@tmp@np.linalg.inv(acc_homo[i]) # tmp[2] = homo[i][2] frame_stabilized = cv2.warpPerspective(frame.astype(np.float64)/255, m, (w,h)) # Fix border artifacts # frame_stabilized = fixBorder(frame_stabilized) # Write the frame to the file frame_out = cv2.hconcat([frame.astype(np.float64)/255, frame_stabilized]) # If the image is too big, resize it. 
if frame_out.shape[1] > 1920: frame_out = cv2.resize(frame_out, (frame_out.shape[1]//2, frame_out.shape[0])); frames.append(frame_out) out.write((frame_out*255).astype(np.uint8)) out.release() # + import numpy.linalg as LA from torch.autograd import Variable def norm_quat(quat): norm_quat = LA.norm(quat) if norm_quat > 1e-6: quat = quat / norm_quat # [0 norm_quat norm_quat - 1e-6] else: # print('bad len for Reciprocal') quat = np.array([0,0,0,1]) return quat def torch_norm_quat(quat, USE_CUDA = True): # Method 1: batch_size = quat.size()[0] quat_out = Variable(torch.zeros((batch_size, 4), requires_grad=True)) if USE_CUDA == True: quat_out = quat_out.cuda() for i in range(batch_size): norm_quat = torch.norm(quat[i]) if norm_quat > 1e-6: quat_out[i] = quat[i] / norm_quat # [0 norm_quat norm_quat - 1e-6] else: quat_out[i,:3] = quat[i,:3] * 0 quat_out[i,3] = quat[i,3] / quat[i,3] # Method 2: # quat = quat / (torch.unsqueeze(torch.norm(quat, dim = 1), 1) + 1e-6) # check norm return quat_out def QuaternionReciprocal(q): quat = np.array([-q[0], -q[1], -q[2], q[3]]) return norm_quat(quat) def torch_QuaternionReciprocal(q, USE_CUDA = True): quat = torch.cat((-q[:,0:1], -q[:,1:2], -q[:,2:3], q[:,3:]), dim = 1) batch_size = quat.size()[0] quat = torch_norm_quat(quat) return quat def QuaternionProduct(q1, q2): x1 = q1[0] y1 = q1[1] z1 = q1[2] w1 = q1[3] x2 = q2[0] y2 = q2[1] z2 = q2[2] w2 = q2[3] quat = np.zeros(4) quat[3] = w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2 quat[0] = w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2 quat[1] = w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2 quat[2] = w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2 return norm_quat(quat) def torch_QuaternionProduct(q1, q2, USE_CUDA = True): x1 = q1[:,0] y1 = q1[:,1] z1 = q1[:,2] w1 = q1[:,3] x2 = q2[:,0] y2 = q2[:,1] z2 = q2[:,2] w2 = q2[:,3] batch_size = q1.size()[0] quat = Variable(torch.zeros((batch_size, 4), requires_grad=True)) if USE_CUDA == True: quat = quat.cuda() quat[:,3] = w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2 quat[:,0] = w1 * 
x2 + x1 * w2 + y1 * z2 - z1 * y2 quat[:,1] = w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2 quat[:,2] = w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2 quat = torch_norm_quat(quat) return quat def get_data_at_timestamp(self, gyro_data, ois_data, time_stamp, quat_t_1): quat_t = GetGyroAtTimeStamp(gyro_data, time_stamp) quat_dif = QuaternionProduct(quat_t, QuaternionReciprocal(quat_t_1)) return quat_dif def get_relative_quats(quat): num_inputs = quat.shape[0] quats = np.zeros((num_inputs, 4)) quats[0,:] = np.array([0, 0, 0, 1]) for i in range(1, num_inputs): quats[i,:] = QuaternionProduct(quat[i], quats[i-1,:]) # R_t = delta R_t * R_t-1 quats[i,:] = quats[i,:] / LA.norm(quats[i,:]) return quats # - quats.shape import torch import numpy as np from kornia.geometry.conversions import rotation_matrix_to_quaternion, QuaternionCoeffOrder, normalize_homography n_homo = normalize_homography(torch.from_numpy(np.array(homo)), (1080,1920), (1080,1920)) quats = rotation_matrix_to_quaternion(n_homo, order=QuaternionCoeffOrder.WXYZ)[10:-2,[1,2,3,0]] get_relative_quats(quats.numpy()) # + x = sqrtm(sqrtm(sqrtm(sqrtm(sqrtm(acc_homo[i]))))).real print(x) print(acc_homo[i]) dx = transforms_smooth[i,0] dy = transforms_smooth[i,1] da = transforms_smooth[i,2] # Reconstruct transformation matrix accordingly to new values m = np.zeros((2,3), np.float32) m[0,0] = np.cos(da) m[0,1] = -np.sin(da) m[1,0] = np.sin(da) m[1,1] = np.cos(da) m[0,2] = dx m[1,2] = dy print(m) # - from torchvision.io import read_video, read_video_timestamps from IPython.display import Video Video("../video_out.mp4", width=960, height=540) Video("../stable_video.avi", width=960, height=540)
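The quaternion helpers above can be sanity-checked numerically: a rotation multiplied by its reciprocal must come back as the identity quaternion `[0, 0, 0, 1]` (x, y, z, w order). A self-contained restatement of the NumPy versions:

```python
import numpy as np
import numpy.linalg as LA

def norm_quat(quat):
    # Normalize, falling back to the identity for degenerate input,
    # mirroring the norm_quat helper above.
    n = LA.norm(quat)
    return quat / n if n > 1e-6 else np.array([0.0, 0.0, 0.0, 1.0])

def quat_reciprocal(q):
    # Conjugate of a unit quaternion is its inverse.
    return norm_quat(np.array([-q[0], -q[1], -q[2], q[3]]))

def quat_product(q1, q2):
    x1, y1, z1, w1 = q1
    x2, y2, z2, w2 = q2
    quat = np.array([
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
    ])
    return norm_quat(quat)

# A 90-degree rotation about z in [x, y, z, w] order:
q = np.array([0.0, 0.0, np.sin(np.pi / 4), np.cos(np.pi / 4)])
ident = quat_product(q, quat_reciprocal(q))
print(ident)  # ~[0, 0, 0, 1]
```

The same identity check also guards against the coefficient-order pitfall visible in the cell above, where `rotation_matrix_to_quaternion` emits WXYZ and must be reindexed to XYZW before feeding `get_relative_quats`.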
src/test.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.9.10 ('cuticulus') # language: python # name: python3 # --- # + from pathlib import Path import torch import numpy as np from torchvision import models from sklearn.manifold import TSNE from cuticulus.datasets import RoughSmoothFull from cuticulus.console import console # + import matplotlib.pyplot as plt plt.style.use('classic') # %matplotlib inline import seaborn as sns sns.set() # - device = torch.device("cuda:4") class ResNet101(models.ResNet): def __init__(self, pretrained=True): super().__init__( block=models.resnet.Bottleneck, layers = [3, 4, 23, 3], num_classes=2, ) if pretrained: model = torch.load( Path('./output/models/model.pt'), map_location=torch.device("cuda:4") ) self.load_state_dict(model.state_dict()) def _forward_impl(self, x): x = self.conv1(x) x = self.bn1(x) x = self.relu(x) x = self.maxpool(x) x = self.layer1(x) x = self.layer2(x) x = self.layer3(x) x = self.layer4(x) # Notice there is no forward pass through the original classifier. x = self.avgpool(x) x = torch.flatten(x, 1) return x class TorchDS(torch.utils.data.Dataset): """Torch dataset class for ant image dataset.""" def __init__(self, imgs: np.ndarray, labels: np.ndarray): """Initialize dataset. Args: imgs (np.ndarray): List of data. labels (np.ndarray): List of labels. """ self.imgs = imgs self.labels = labels def __len__(self) -> int: """Return length of dataset. Returns: int: Length of dataset. """ return len(self.imgs) def __getitem__(self, idx) -> tuple: """Return item at index idx. Returns: tuple: Tuple of image and label. 
""" return self.imgs[idx], self.labels[idx] ds = RoughSmoothFull((256, 256)) ds.images = ds.images.transpose(0, 3, 1, 2) ds.images = ds.images.astype(np.float32) print(ds.images.shape, ds.labels.shape) model = ResNet101() device = torch.device("cuda:4") model.to(device) # + data_loader = torch.utils.data.DataLoader( dataset=TorchDS(ds.images, ds.labels), batch_size=16, shuffle=False ) res_images = [] res_labels = [] for inputs, labels in data_loader: inputs = inputs.to(device) labels = labels.to(device) outputs = model(inputs) res_images.append(outputs.cpu().detach().numpy()) res_labels.append(labels.cpu().detach().numpy()) res_images = np.concatenate(res_images, axis=0) res_labels = np.concatenate(res_labels, axis=0) # + tsne = TSNE(n_components=2).fit_transform(res_images) # normalize to [0, 1] tsne = (tsne - tsne.min()) / (tsne.max() - tsne.min()) # plot the simplified dataset fig, ax = plt.subplots() label_map = { 0: 'rough', 1: 'smooth', } # randomly subsample to avoid overplotting idx = np.random.choice(tsne.shape[0], size=1000, replace=False) tsne = tsne[idx] tsne_labels = res_labels[idx] for label in [0, 1]: idx = np.where(tsne_labels == label)[0] ax.scatter(tsne[idx, 0], tsne[idx, 1], label=label_map[label], color=sns.color_palette()[label]) ax.set_title('t-SNE visualization of ant dataset') ax.legend(loc='best') plt.show()
scripts/visualization.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- from tnia.deconvolution.psfs import gibson_lanni_3D from tnia.plotting.projections import show_xyz_max from tnia.io.tifffile_helper import open_ij3D from clij2fft.richardson_lucy import richardson_lucy, richardson_lucy_nc import matplotlib.pyplot as plt from skimage.io import imread import numpy as np input_name="D:\\images\\ABRF LMRG Image Analysis Study\\nuclei\\nuclei1_out_c00_dr90_image.tif" input_name="D:\\images\\ABRF LMRG Image Analysis Study\\nuclei\\nuclei4_out_c90_dr10_image.tif" input_name="D:\\images\\ABRF LMRG Image Analysis Study\\nuclei\\nuclei2_out_c90_dr90_image.tif" decon_name="D:\\images\\ABRF LMRG Image Analysis Study\\nuclei\\nuclei2_out_c90_dr90_image_decon.tif" im, x_voxel_size, y_voxel_size, z_voxel_size = open_ij3D(input_name) print(x_voxel_size, z_voxel_size) print(im.shape) # + x_voxel_size=.3 z_voxel_size=1.5 xy_psf_dim=64 z_psf_dim=100 NA=0.75 ni=1 ns=1 psf = gibson_lanni_3D(NA, ni, ns, x_voxel_size, z_voxel_size, xy_psf_dim, z_psf_dim, 0, 0.45) psf = psf.astype('float32') #psf = gibson_lanni_3D(NA, ni, ns, xy_pixel_size, xy, zv, 0, 0.65) #plt.imshow(psf_xyz[int(size[0]/2),:,:]) print('PSF using given parameters') fig=show_xyz_max(psf) # + x_voxel_size=.12 z_voxel_size=.85 psf2 = gibson_lanni_3D(NA, ni, ns, x_voxel_size, z_voxel_size, xy_psf_dim, z_psf_dim, 0, 0.45) psf2 = psf2.astype('float32') #psf = gibson_lanni_3D(NA, ni, ns, xy_pixel_size, xy, zv, 0, 0.65) #plt.imshow(psf_xyz[int(size[0]/2),:,:]) print('PSF assuming spacing is 0.8') fig=show_xyz_max(psf2) # - from skimage.io import imsave imsave('D:\\images\\ABRF LMRG Image Analysis Study\\nuclei\\psf.tif', psf) psf.dtype # + decon1=richardson_lucy_nc(im, psf, 200, 0) decon2=richardson_lucy_nc(im, psf2, 200, 0) #decon2=richardson_lucy(im, psf, 200, 0) # 
- print('Original Image') fig=show_xyz_max(im) print('deconvolved assuming spacing is 0.8') fig=show_xyz_max(decon2) print('deconvolved original parameters') fig=show_xyz_max(decon1) imsave(decon_name, decon1) from tnia.viewing.napari_helper import show_image viewer=show_image(decon1, 'deconvolved') viewer=show_image(im, 'image', viewer=viewer)
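The `richardson_lucy_nc` call above comes from `clij2fft`, a GPU implementation with non-circulant edge handling. The core Richardson–Lucy update itself is compact; here is an illustrative NumPy sketch using plain circular FFT convolution — a toy re-implementation for readers without a GPU, not the non-circulant variant the notebook uses:

```python
import numpy as np

def richardson_lucy_np(image, psf, n_iter=50, eps=1e-12):
    """Plain Richardson-Lucy with circular convolution via the FFT."""
    image = np.maximum(image, 0)                 # guard against FFT round-off negatives
    otf = np.fft.fft2(np.fft.ifftshift(psf))     # PSF must be centered in its array
    conv = lambda x, k: np.real(np.fft.ifft2(np.fft.fft2(x) * k))
    est = np.full_like(image, image.mean())
    for _ in range(n_iter):
        ratio = image / (conv(est, otf) + eps)
        est = est * conv(ratio, np.conj(otf))    # conj(otf) corresponds to the flipped PSF
    return est

# blur a single bright point with a Gaussian PSF, then deconvolve it
yy, xx = np.mgrid[-16:16, -16:16]
psf = np.exp(-(xx**2 + yy**2) / (2 * 1.5**2))
psf /= psf.sum()
truth = np.zeros((32, 32)); truth[10, 20] = 100.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(truth) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = richardson_lucy_np(blurred, psf)

assert np.unravel_index(restored.argmax(), restored.shape) == (10, 20)
assert restored.max() > blurred.max()            # energy re-concentrates at the point
```

The same sensitivity explored above applies here: deconvolving with a PSF computed for the wrong voxel spacing still "sharpens" the image, but toward the wrong solution, which is why the two PSFs are compared.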
notebooks/Deconvolution/Deconvolve_ABRF_Nuclei.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # The Effect of Time Spent Studying on Exam Scores

# ## Data was gathered in a poll from students asking how long they studied before taking an exam

import pandas as pd
import collections
import statistics
import matplotlib.pyplot as plt

# Imports data from an Excel sheet
timeGradeData = pd.read_excel(r'TimeVsGrade.xlsx', sheet_name='Sheet1')

# Frequency plot code
freq = collections.Counter(list(timeGradeData['Hours Spent Studying']))
fig = plt.figure()
axes = fig.add_axes([0,0,1,1])
axes.bar(list(freq.keys()), list(freq.values()))
plt.xlabel("Number of Hours Studied")
plt.ylabel("Number of Students")
plt.title("Frequency Plot of Study Hours and Number of Students")
plt.show()

# + HorizontalStatus={"numCells": 1}
# Line graph code
# The teacher decides to take the mean of each group and plot it
xvalues = list(set(timeGradeData['Hours Spent Studying']))
yvalues = []
for val in xvalues:
    hour_df = timeGradeData.loc[timeGradeData['Hours Spent Studying'] == val]
    grade_lst = list(hour_df['Grade'])
    yvalues.append(statistics.mean(grade_lst))

hoursVsScore = {'Hours Studied': xvalues, 'Mean Exam Score': yvalues}
hoursScoreDf = pd.DataFrame(data=hoursVsScore)
print(hoursScoreDf)
# -

# Line graph code
# The teacher decides to take the median of each group and plot it
xvalues = list(set(timeGradeData['Hours Spent Studying']))
yvalues = []
for val in xvalues:
    hour_df = timeGradeData.loc[timeGradeData['Hours Spent Studying'] == val]
    grade_lst = list(hour_df['Grade'])
    yvalues.append(statistics.median(grade_lst))

hoursVsScore = {'Hours Studied': xvalues, 'Median Exam Score': yvalues}
hoursScoreDf = pd.DataFrame(data=hoursVsScore)
print(hoursScoreDf)

plt.plot(xvalues, yvalues, '--bo')
plt.xlabel("Hours Studied")
plt.ylabel("Exam Score")
plt.title("Hours Studied vs Median Exam Score")
plt.show()
src/jupyter_contrib_nbextensions/ThesisExampleUseCase.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# -

# <a target="_blank" href="https://colab.research.google.com/github/GoogleCloudPlatform/keras-idiomatic-programmer/blob/master/books/deep-learning-design-patterns/Workshops/Novice/Deep%20Learning%20Design%20Patterns%20-%20Workshop%20-%20Chapter%20II.ipynb">
# <img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>

# # Deep Learning Design Patterns - Code Labs
#
# ## Lab Exercise #2 - Get Familiar with Convolutional Neural Networks (CNN)
#
# ## Prerequisites:
#
# 1. Familiar with Python
# 2. Completed Chapter II: Convolutional Neural Networks
#
# ## Objectives:
#
# 1. Create a basic CNN.
# 2. Create a VGG class CNN.
# 3. Create a CNN with an identity link (Residual CNN).

# ## Basic CNN as Sequential API
#
# Let's create a basic CNN. We will build it as two convolutional layers, each followed by a max pooling layer.
#
# We will use these approaches:
#
# 1. We will double the number of filters with each subsequent layer.
# 2. We will reduce the size of the feature maps by using a stride > 1.
#
# You fill in the blanks (replace the ??), make sure it passes the Python interpreter, and then verify its correctness with the summary output.
# # You will need to: # # 1. Set the number of channels on the input vector (i.e., input shape). # 2. Set the number of filters and stride on the convolutional layers. # 3. Set the max pooling window size and stride. # + from tensorflow.keras import Sequential from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dense # Let's start with a Sequential model model = Sequential() # Let's assume we are building a model for CIFAR-10, which are 32x32 RGB images # HINT: how many channels are in an RGB image input_shape=(32, 32, ??) # Let's add a first convolution layer with 16 filters of size 3x3 and stride of 2 # HINT: first parameter is the number of filters and the second is the filter (kernel) size model.add(Conv2D(??, ??, strides=2, activation='relu', input_shape=input_shape)) # Let's reduce the feature maps by 75% # HINT: 2x2 window and move 2 pixels at a time model.add(MaxPooling2D(??, strides=??)) # Let's add a second convolution layer with 3x3 filter and strides=2 and double the filters # HINT: double the number of filters you specified in the first Conv2D model.add(Conv2D(??, ??, strides=2, activation='relu')) # Let's reduce the feature maps by 75% model.add(MaxPooling2D(??, strides=??)) model.add(Dense(10, activation='softmax')) # - # ### Verify the model architecture using summary method # # It should look like below: # # ``` # Layer (type) Output Shape Param # # ================================================================= # conv2d_4 (Conv2D) (None, 15, 15, 16) 448 # _________________________________________________________________ # max_pooling2d_4 (MaxPooling2 (None, 7, 7, 16) 0 # _________________________________________________________________ # conv2d_5 (Conv2D) (None, 3, 3, 32) 4640 # _________________________________________________________________ # max_pooling2d_5 (MaxPooling2 (None, 1, 1, 32) 0 # _________________________________________________________________ # dense_3 (Dense) (None, 1, 1, 10) 330 # 
=================================================================
# Total params: 5,418
# Trainable params: 5,418
# Non-trainable params: 0
# ```

model.summary()

# ## VGG16 as Sequential API
#
# Next, we will create a VGG convolutional network. VGG networks are sequential, but they add the concept of convolutional groups. The basic elements of a VGG are:
#
# 1. Each convolutional group consists of two or more convolutional layers.
# 2. Max pooling is deferred to the end of the convolutional group.
# 3. Each convolutional group has the same or double the number of filters as the last group.
# 4. Multiple dense layers are used for the classifier.
#
# You will need to:
#
# 1. Set the number of filters, filter size and padding on the stem convolutional group.
# 2. Set the number of filters for the convolutional blocks.
# 3. Add the flattening layer between the feature learning and classifier groups.
# 4. Set the number of nodes in the dense layers of the classifier.

# +
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

def conv_block(n_layers, n_filters):
    """
    n_layers : number of convolutional layers
    n_filters: number of filters
    """
    for n in range(n_layers):
        model.add(Conv2D(n_filters, (3, 3), strides=(1, 1), padding="same", activation="relu"))
    model.add(MaxPooling2D(2, strides=2))

# Create a Sequential Model
model = Sequential()

# Add Convolutional Frontend with 64 3x3 filters of stride 1
# Set the padding so when the filter is slid over the edges of the image, the "imaginary" pixels have the same
# value as the pixels on the edge.
model.add(Conv2D(??, ??, strides=(1, 1), padding=??, activation="relu", input_shape=(224, 224, 3)))

# These are the convolutional groups - double the number of filters on each progressive group
conv_block(1, 64)
conv_block(2, ??)
conv_block(3, ??)

# The last two groups in a VGG16 double the number of filters of the previous group, but both of these groups are the same size as each other.
# HINT: the number should be the same for both conv_block(3, ??) conv_block(3, ??) # Add layer to transistion from final 2D feature maps (bottleneck layer) to 1D vector for DNN. # HINT: think of what you need to do to the 2D feature maps from the convolutional layers before passing to dense layers. model.add(??) # Add DNN Backend with two layers of 4096 nodes # HINT: model.add(Dense(??, activation='relu')) model.add(Dense(??, activation='relu')) # Output layer for classification (1000 classes) model.add(Dense(1000, activation=??)) # - # ### Verify the model architecture using summary method # # It should look like below: # # ``` # _________________________________________________________________ # Layer (type) Output Shape Param # # ================================================================= # conv2d_14 (Conv2D) (None, 224, 224, 64) 1792 # _________________________________________________________________ # conv2d_15 (Conv2D) (None, 224, 224, 64) 36928 # _________________________________________________________________ # max_pooling2d_6 (MaxPooling2 (None, 112, 112, 64) 0 # _________________________________________________________________ # conv2d_16 (Conv2D) (None, 112, 112, 128) 73856 # _________________________________________________________________ # conv2d_17 (Conv2D) (None, 112, 112, 128) 147584 # _________________________________________________________________ # max_pooling2d_7 (MaxPooling2 (None, 56, 56, 128) 0 # _________________________________________________________________ # conv2d_18 (Conv2D) (None, 56, 56, 256) 295168 # _________________________________________________________________ # conv2d_19 (Conv2D) (None, 56, 56, 256) 590080 # _________________________________________________________________ # conv2d_20 (Conv2D) (None, 56, 56, 256) 590080 # _________________________________________________________________ # max_pooling2d_8 (MaxPooling2 (None, 28, 28, 256) 0 # _________________________________________________________________ # conv2d_21 
(Conv2D) (None, 28, 28, 512) 1180160 # _________________________________________________________________ # conv2d_22 (Conv2D) (None, 28, 28, 512) 2359808 # _________________________________________________________________ # conv2d_23 (Conv2D) (None, 28, 28, 512) 2359808 # _________________________________________________________________ # max_pooling2d_9 (MaxPooling2 (None, 14, 14, 512) 0 # _________________________________________________________________ # conv2d_24 (Conv2D) (None, 14, 14, 512) 2359808 # _________________________________________________________________ # conv2d_25 (Conv2D) (None, 14, 14, 512) 2359808 # _________________________________________________________________ # conv2d_26 (Conv2D) (None, 14, 14, 512) 2359808 # _________________________________________________________________ # max_pooling2d_10 (MaxPooling (None, 7, 7, 512) 0 # _________________________________________________________________ # flatten_2 (Flatten) (None, 25088) 0 # _________________________________________________________________ # dense_4 (Dense) (None, 4096) 102764544 # _________________________________________________________________ # dense_5 (Dense) (None, 4096) 16781312 # _________________________________________________________________ # dense_6 (Dense) (None, 1000) 4097000 # ================================================================= # Total params: 138,357,544 # Trainable params: 138,357,544 # Non-trainable params: 0 # __________________________ # ``` model.summary() # ## Residual CNN as Functional API # # Finally, we will create a residual convolutional network (ResNet). The basic elements of a ResNet are: # # 1. A stem convolutional group of 7x7 filter size. # 2. A sequence of residual blocks, where each doubles the number of filters. # A. Each residual block consists of two 3x3 filters, w/o max pooling. # B. The input to the residual block is added to the output. # 3. 
Between residual blocks is a convolutional block that doubles the number of # filters from the previous block, so the number of filters coming in and going # out are the same for the identity link matrix add operation. # A. Each convolutional block consists of two 3x3 filters, but uses stride=2 # to downsample the size of the feature maps. # # You will need to: # 1. Save the input to the residual block for the identity link. # 2. Complete the matrix add of the identity link to the output of the residual block. # 3. Set (double) the filters for the convolutional block between residual block groups to match filter sizes for matrix add operations. # 4. Add the global averaging layer between the feature learning groups and the classifier. # # + from tensorflow.keras import Model import tensorflow.keras.layers as layers def residual_block(n_filters, x): """ Create a Residual Block of Convolutions n_filters: number of filters x : input into the block """ # Save the input as the shortcut for the identity link # Hint: read the comment on the params to the function. shortcut = ?? x = layers.Conv2D(n_filters, (3, 3), strides=(1, 1), padding="same", activation="relu")(x) x = layers.Conv2D(n_filters, (3, 3), strides=(1, 1), padding="same", activation="relu")(x) # Add the saved input (identity link) to the output. # HINT: the name of the variable you used above to save the input. 
x = layers.Add()([??, x]) return x def conv_block(n_filters, x): """ Create Block of Convolutions without Pooling n_filters: number of filters x : input into the block """ x = layers.Conv2D(n_filters, (3, 3), strides=(2, 2), padding="same", activation="relu")(x) x = layers.Conv2D(n_filters, (3, 3), strides=(2, 2), padding="same", activation="relu")(x) return x # The input tensor inputs = layers.Input(shape=(224, 224, 3)) # First Convolutional layer, where pooled feature maps will be reduced by 75% x = layers.Conv2D(64, kernel_size=(7, 7), strides=(2, 2), padding='same', activation='relu')(inputs) x = layers.MaxPool2D(pool_size=(3, 3), strides=(2, 2), padding='same')(x) # First Residual Block Group of 64 filters for _ in range(3): x = residual_block(64, x) # Double the size of filters and reduce feature maps by 75% (strides=2, 2) to fit the next Residual Group # HINT: number should be twice as big as the number of filters in prior residual_blocks. x = conv_block(??, x) # Second Residual Block Group of 128 filters for _ in range(3): x = residual_block(128, x) # Double the size of filters and reduce feature maps by 75% (strides=2, 2) to fit the next Residual Group # HINT: number should be twice as big as the number of filters in prior residual_blocks. x = conv_block(??, x) # Third Residual Block Group of 256 filters for _ in range(5): x = residual_block(256, x) # Double the size of filters and reduce feature maps by 75% (strides=2, 2) to fit the next Residual Group # HINT: number should be twice as big as the number of filters in prior residual_blocks. 
x = conv_block(??, x) # Fourth Residual Block Group of 512 filters for _ in range(2): x = residual_block(??, x) # Add a Global Averaging Pooling (inplace of a Flatten) at the end of all the convolutional residual blocks x = layers.??()(x) # Final Dense Outputting Layer for 1000 outputs outputs = layers.Dense(1000, activation='softmax')(x) model = Model(inputs, outputs) # - # ### Verify the model architecture using summary method # # It should look like below: # # ``` # __________________________________________________________________________________________________ # Layer (type) Output Shape Param # Connected to # ================================================================================================== # input_1 (InputLayer) (None, 224, 224, 3) 0 # __________________________________________________________________________________________________ # conv2d_1 (Conv2D) (None, 112, 112, 64) 9472 input_1[0][0] # __________________________________________________________________________________________________ # max_pooling2d_1 (MaxPooling2D) (None, 56, 56, 64) 0 conv2d_1[0][0] # __________________________________________________________________________________________________ # conv2d_2 (Conv2D) (None, 56, 56, 64) 36928 max_pooling2d_1[0][0] # __________________________________________________________________________________________________ # conv2d_3 (Conv2D) (None, 56, 56, 64) 36928 conv2d_2[0][0] # __________________________________________________________________________________________________ # add_1 (Add) (None, 56, 56, 64) 0 max_pooling2d_1[0][0] # conv2d_3[0][0] # __________________________________________________________________________________________________ # conv2d_4 (Conv2D) (None, 56, 56, 64) 36928 add_1[0][0] # __________________________________________________________________________________________________ # conv2d_5 (Conv2D) (None, 56, 56, 64) 36928 conv2d_4[0][0] # 
__________________________________________________________________________________________________ # add_2 (Add) (None, 56, 56, 64) 0 add_1[0][0] # conv2d_5[0][0] # __________________________________________________________________________________________________ # conv2d_6 (Conv2D) (None, 56, 56, 64) 36928 add_2[0][0] # __________________________________________________________________________________________________ # conv2d_7 (Conv2D) (None, 56, 56, 64) 36928 conv2d_6[0][0] # __________________________________________________________________________________________________ # add_3 (Add) (None, 56, 56, 64) 0 add_2[0][0] # conv2d_7[0][0] # __________________________________________________________________________________________________ # conv2d_8 (Conv2D) (None, 28, 28, 128) 73856 add_3[0][0] # __________________________________________________________________________________________________ # conv2d_9 (Conv2D) (None, 14, 14, 128) 147584 conv2d_8[0][0] # __________________________________________________________________________________________________ # conv2d_10 (Conv2D) (None, 14, 14, 128) 147584 conv2d_9[0][0] # __________________________________________________________________________________________________ # conv2d_11 (Conv2D) (None, 14, 14, 128) 147584 conv2d_10[0][0] # __________________________________________________________________________________________________ # add_4 (Add) (None, 14, 14, 128) 0 conv2d_9[0][0] # conv2d_11[0][0] # __________________________________________________________________________________________________ # conv2d_12 (Conv2D) (None, 14, 14, 128) 147584 add_4[0][0] # __________________________________________________________________________________________________ # conv2d_13 (Conv2D) (None, 14, 14, 128) 147584 conv2d_12[0][0] # __________________________________________________________________________________________________ # add_5 (Add) (None, 14, 14, 128) 0 add_4[0][0] # conv2d_13[0][0] # 
__________________________________________________________________________________________________ # conv2d_14 (Conv2D) (None, 14, 14, 128) 147584 add_5[0][0] # __________________________________________________________________________________________________ # conv2d_15 (Conv2D) (None, 14, 14, 128) 147584 conv2d_14[0][0] # __________________________________________________________________________________________________ # add_6 (Add) (None, 14, 14, 128) 0 add_5[0][0] # conv2d_15[0][0] # __________________________________________________________________________________________________ # conv2d_16 (Conv2D) (None, 7, 7, 256) 295168 add_6[0][0] # __________________________________________________________________________________________________ # conv2d_17 (Conv2D) (None, 4, 4, 256) 590080 conv2d_16[0][0] # __________________________________________________________________________________________________ # conv2d_18 (Conv2D) (None, 4, 4, 256) 590080 conv2d_17[0][0] # __________________________________________________________________________________________________ # conv2d_19 (Conv2D) (None, 4, 4, 256) 590080 conv2d_18[0][0] # __________________________________________________________________________________________________ # add_7 (Add) (None, 4, 4, 256) 0 conv2d_17[0][0] # conv2d_19[0][0] # __________________________________________________________________________________________________ # conv2d_20 (Conv2D) (None, 4, 4, 256) 590080 add_7[0][0] # __________________________________________________________________________________________________ # conv2d_21 (Conv2D) (None, 4, 4, 256) 590080 conv2d_20[0][0] # __________________________________________________________________________________________________ # add_8 (Add) (None, 4, 4, 256) 0 add_7[0][0] # conv2d_21[0][0] # __________________________________________________________________________________________________ # conv2d_22 (Conv2D) (None, 4, 4, 256) 590080 add_8[0][0] # 
__________________________________________________________________________________________________ # conv2d_23 (Conv2D) (None, 4, 4, 256) 590080 conv2d_22[0][0] # __________________________________________________________________________________________________ # add_9 (Add) (None, 4, 4, 256) 0 add_8[0][0] # conv2d_23[0][0] # __________________________________________________________________________________________________ # conv2d_24 (Conv2D) (None, 4, 4, 256) 590080 add_9[0][0] # __________________________________________________________________________________________________ # conv2d_25 (Conv2D) (None, 4, 4, 256) 590080 conv2d_24[0][0] # __________________________________________________________________________________________________ # add_10 (Add) (None, 4, 4, 256) 0 add_9[0][0] # conv2d_25[0][0] # __________________________________________________________________________________________________ # conv2d_26 (Conv2D) (None, 4, 4, 256) 590080 add_10[0][0] # __________________________________________________________________________________________________ # conv2d_27 (Conv2D) (None, 4, 4, 256) 590080 conv2d_26[0][0] # __________________________________________________________________________________________________ # add_11 (Add) (None, 4, 4, 256) 0 add_10[0][0] # conv2d_27[0][0] # __________________________________________________________________________________________________ # conv2d_28 (Conv2D) (None, 2, 2, 512) 1180160 add_11[0][0] # __________________________________________________________________________________________________ # conv2d_29 (Conv2D) (None, 1, 1, 512) 2359808 conv2d_28[0][0] # __________________________________________________________________________________________________ # conv2d_30 (Conv2D) (None, 1, 1, 512) 2359808 conv2d_29[0][0] # __________________________________________________________________________________________________ # conv2d_31 (Conv2D) (None, 1, 1, 512) 2359808 conv2d_30[0][0] # 
__________________________________________________________________________________________________
# add_12 (Add)                    (None, 1, 1, 512)    0           conv2d_29[0][0]
#                                                                  conv2d_31[0][0]
# __________________________________________________________________________________________________
# conv2d_32 (Conv2D)              (None, 1, 1, 512)    2359808     add_12[0][0]
# __________________________________________________________________________________________________
# conv2d_33 (Conv2D)              (None, 1, 1, 512)    2359808     conv2d_32[0][0]
# __________________________________________________________________________________________________
# add_13 (Add)                    (None, 1, 1, 512)    0           add_12[0][0]
#                                                                  conv2d_33[0][0]
# __________________________________________________________________________________________________
# global_average_pooling2d_1 (Glo (None, 512)          0           add_13[0][0]
# __________________________________________________________________________________________________
# dense_1 (Dense)                 (None, 1000)         513000      global_average_pooling2d_1[0][0]
# ==================================================================================================
# Total params: 21,616,232
# Trainable params: 21,616,232
# Non-trainable params: 0
# ```

model.summary()

# ## Training
#
# Next, we will train two mini-VGGs (6 and 10 layers) on the CIFAR-10 dataset and compare the results. As we have not covered data preprocessing or training, just follow the steps.

# ### VGG (6)
#
# Let's make a 6 layer VGG.
# +
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

def makeVGG6():
    def conv_block(n_layers, n_filters):
        """
        n_layers : number of convolutional layers
        n_filters: number of filters
        """
        for n in range(n_layers):
            model.add(Conv2D(n_filters, (3, 3), strides=(1, 1), padding="same", activation="relu"))
        model.add(MaxPooling2D(2, strides=2))

    model = Sequential()
    model.add(Conv2D(64, (3, 3), strides=(1, 1), padding="same", activation="relu", input_shape=(32, 32, 3)))

    # These are the convolutional groups
    conv_block(1, 64)
    conv_block(2, 128)

    model.add(Flatten())
    model.add(Dense(4096, activation='relu'))
    model.add(Dense(10, activation='softmax'))
    model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['acc'])
    return model

vgg6 = makeVGG6()
# -

# Let's now check the summary(). You should see 34 million parameters.

vgg6.summary()

# ### VGG(10)
#
# Let's now make a 10 layer VGG.

# +
def makeVGG10():
    def conv_block(n_layers, n_filters):
        """
        n_layers : number of convolutional layers
        n_filters: number of filters
        """
        for n in range(n_layers):
            model.add(Conv2D(n_filters, (3, 3), strides=(1, 1), padding="same", activation="relu"))
        model.add(MaxPooling2D(2, strides=2))

    model = Sequential()
    model.add(Conv2D(64, (3, 3), strides=(1, 1), padding="same", activation="relu", input_shape=(32, 32, 3)))

    # These are the convolutional groups
    conv_block(1, 64)
    conv_block(2, 128)
    conv_block(3, 256)

    model.add(Flatten())
    model.add(Dense(4096, activation='relu'))
    model.add(Dense(4096, activation='relu'))
    model.add(Dense(10, activation='softmax'))
    model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['acc'])
    return model

vgg10 = makeVGG10()
# -

# Let's now check the summary(). You should see 35 million parameters. Note how they have nearly the same number of parameters, but the 10 layer VGG is deeper.

vgg10.summary()

# ### Dataset
#
# Let's get the tf.Keras builtin dataset for CIFAR-10.
These are 32x32 color images (3 channels) of 10 classes (airplanes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks). We will preprocess the image data (not covered yet).

# +
from tensorflow.keras.datasets import cifar10
import numpy as np

(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train = (x_train / 255.0).astype(np.float32)
# -

# ### Results
#
# Let's train both the 6 and 10 layer VGG on CIFAR-10 for 3 epochs and compare the results.

vgg6.fit(x_train, y_train, epochs=3, batch_size=32, validation_split=0.1, verbose=1)

vgg10.fit(x_train, y_train, epochs=3, batch_size=32, validation_split=0.1, verbose=1)

# ### Observation
#
# Notice how the shallower VGG (6 layers) increases in accuracy across all three epochs (??), but the deeper VGG (10) does not and in fact learns nothing (10% is the same as random guessing).
#
# While this is not a vanishing gradient (we do not see a NaN on the loss), it does show how early CNN architectures became less reliable at converging as they were made deeper - not covered yet.
#
# If we use a larger image size, like 224x224, we can go deeper because we have more pixel data, but eventually we hit the same problem again.

# ## End of Lab Exercise
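The convergence failure described above — a deeper plain stack learning nothing while a shallower one trains fine — is classically tied to gradient signal shrinking as it propagates back through many layers, which is exactly what residual links mitigate. A toy NumPy sketch of the mechanism (the random matrices and the 0.05 scale are arbitrary stand-ins for contractive layer Jacobians, chosen only to make the decay visible):

```python
import numpy as np

rng = np.random.default_rng(0)
grad = np.ones(64)            # pretend gradient arriving at the top layer
norms = []
for depth in range(30):       # backpropagate through 30 "layers"
    J = rng.normal(scale=0.05, size=(64, 64))  # a contractive layer Jacobian
    grad = J @ grad
    norms.append(np.linalg.norm(grad))

# the signal reaching the early layers has shrunk by many orders of magnitude
assert norms[-1] < 1e-6 * norms[0]
```

With an identity link, each layer's effective Jacobian is `I + J` instead of `J`, so the product no longer collapses toward zero — the intuition behind the Residual CNN built in this lab.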
books/deep-learning-design-patterns/Workshops/Novice/Deep Learning Design Patterns - Workshop - Chapter II.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- import pandas as pd import numpy as np import json import os import tqdm import re import xgboost as xgb from sklearn import preprocessing from sklearn import cross_validation # + def extract_date(row): return row['postedDate']['$date'] def get_df_from_json(filename, mode): with open(filename) as json_data: data = json_data.readlines() data = map(lambda x: x.rstrip(), data) data_json_str = "[" + ','.join(data) + "]" df = pd.read_json(data_json_str) if mode != 'test': df = df.drop('_id', axis = 1) df.postedDate = df.apply(lambda row: extract_date(row), axis = 1) return df df = get_df_from_json('jobs_huge.json', mode = 'train') # + new_df = pd.DataFrame(columns=['string', 'class']) columns = df.columns #wanted to name it extract def extra_string_n_class(row, new_df): for column in tqdm.tqdm(columns): new_df.loc[len(new_df)]=[row[column], column] df.apply(lambda row: extra_string_n_class(row, new_df), axis = 1) # - # to save time if my ipython starts again. 
new_df.to_csv('transformed_jobs.csv', encoding='utf-8', index=False) new_df = pd.read_csv('transformed_jobs.csv', encoding='utf-8') # + ######Feature Engineering####### def all_digits(key): try: x = int(key) except ValueError: return 0 return 1 def num_digits(key): try: count = sum(c.isdigit() for c in key) return count except TypeError: print "error while counting digits in", key return 10 def has_html_tags(key): try: pat = re.compile('<.*?>') match = re.search(pat, key) if match: return 1 else: return 0 except TypeError: print "error while has_html_tags in", key return 10 def len_str(key): return len(key) def occurance_count(df, key, keyname): return len(df[df[keyname] == key]) # - #save occurrence count as a feature and then drop duplicates new_df['occurance_count'] = new_df.apply(lambda row: occurance_count(new_df, row['string'], 'string'), axis = 1) new_df = new_df.drop_duplicates() # New feature columns 'all_digits', 'num_digits', 'has_html_tags', 'len_str', 'is_known_country', 'occurance_count' new_df['all_digits'] = new_df.apply(lambda row: all_digits(row['string']), axis = 1) new_df['num_digits'] = new_df.apply(lambda row: num_digits(row['string']), axis = 1) new_df['has_html_tags'] = new_df.apply(lambda row: has_html_tags(row['string']), axis = 1) new_df['len_str'] = new_df.apply(lambda row: len_str(row['string']), axis = 1) ###########Classification############ le_class = preprocessing.LabelEncoder() le_class.fit(new_df['class']) print le_class.classes_ new_df['en_class'] = le_class.transform(new_df['class']) new_df Y = new_df.en_class X = new_df.drop(['string','class', 'en_class'], axis = 1) Y = Y.reshape(6048, 1) # + clf = xgb.XGBClassifier(objective='reg:logistic', nthread=4, seed=0) clf.fit(X,Y) # - Y = Y.reshape(6048,) #by default 3 fold cross_validation scores = cross_validation.cross_val_score(clf, X, Y) print scores clf.score(X,Y) #accuracy 99% on training set test_df = get_df_from_json('test/test_tiny_1.txt', mode = 'test') test_new_df =
test_df[test_df['key']!=''] #5112 non-empty keys, 882 empty keys. # + # to save time if my ipython starts again. test_new_df.to_csv('transformed_test.csv', encoding='utf-8', index=False) test_new_df = pd.read_csv('transformed_test.csv', encoding='utf-8') test_new_df['occurance_count'] = test_new_df.apply(lambda row: occurance_count(test_new_df, row['key'], 'key'), axis = 1) test_new_df = test_new_df.drop_duplicates() strings = test_new_df['key'] # New feature columns 'all_digits', 'num_digits', 'has_html_tags', 'len_str', 'is_known_country', 'occurance_count' test_new_df['all_digits'] = test_new_df.apply(lambda row: all_digits(row['key']), axis = 1) test_new_df['num_digits'] = test_new_df.apply(lambda row: num_digits(row['key']), axis = 1) test_new_df['has_html_tags'] = test_new_df.apply(lambda row: has_html_tags(row['key']), axis = 1) test_new_df['len_str'] = test_new_df.apply(lambda row: len_str(row['key']), axis = 1) test_new_df # - id = test_new_df.id X = test_new_df.drop(['actual', 'id', 'key'], axis = 1) Y_predict = clf.predict(X) Y_predict = le_class.inverse_transform(Y_predict) Y_predict #dropped empty keys and dropped duplicates. print len(id), len(Y_predict), len(strings) ans_df = pd.DataFrame({'id': id, 'actual': Y_predict, 'key': strings}) ans_df.to_csv('test_tiny_1_out.csv', index=False, encoding='utf-8') ans_df
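The string-feature helpers defined in this notebook are easiest to sanity-check in isolation. A minimal standalone sketch (Python 3 syntax, unlike the Python 2 notebook; note that `has_html_tags` here uses `re.search`, so a tag anywhere in the string is detected, whereas `re.match` only detects a tag at the start):

```python
import re

def all_digits(key):
    # 1 if the whole string parses as an integer, else 0
    try:
        int(key)
    except (ValueError, TypeError):
        return 0
    return 1

def num_digits(key):
    # count of digit characters in the string
    return sum(c.isdigit() for c in key)

def has_html_tags(key):
    # re.search finds a tag anywhere in the string;
    # re.match would only detect one at the very start
    return 1 if re.search(r'<.*?>', key) else 0

print(all_digits("90210"))          # 1
print(all_digits("90210-ish"))      # 0
print(num_digits("salary 55k"))     # 2
print(has_html_tags("<b>Job</b>"))  # 1
print(has_html_tags("plain text"))  # 0
```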
jobs_structure.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + # Copyright UCL Business plc 2017. Patent Pending. All rights reserved. # # The MonoDepth Software is licensed under the terms of the UCLB ACP-A licence # which allows for non-commercial use only, the full terms of which are made # available in the LICENSE file. # # # Modified by <NAME>. # Copyright (c) by <NAME> # All Rights Reserved # Contact: <EMAIL> # from __future__ import absolute_import, division, print_function # only keep warnings and errors import os import sys import argparse import time import pandas as pd import cv2 import tensorflow as tf import tensorflow.contrib.slim as slim import scipy.misc import matplotlib.pyplot as plt from monodepth_model import * from monodepth_dataloader import * from average_gradients import * # - # ## Helper Methods # + def post_process_disparity(disp): _, h, w = disp.shape l_disp = disp[0,:,:] r_disp = np.fliplr(disp[1,:,:]) m_disp = 0.5 * (l_disp + r_disp) l, _ = np.meshgrid(np.linspace(0, 1, w), np.linspace(0, 1, h)) l_mask = 1.0 - np.clip(20 * (l - 0.05), 0, 1) r_mask = np.fliplr(l_mask) return r_mask * l_disp + l_mask * r_disp + (1.0 - l_mask - r_mask) * m_disp # github.com/aubricus def print_progress(iteration, total, prefix='', suffix='', decimals=1, bar_length=100): """ Call in a loop to create terminal progress bar @params: iteration - Required : current iteration (Int) total - Required : total iterations (Int) prefix - Optional : prefix string (Str) suffix - Optional : suffix string (Str) decimals - Optional : positive number of decimals in percent complete (Int) bar_length - Optional : character length of bar (Int) """ str_format = "{0:." 
+ str(decimals) + "f}" percents = str_format.format(100 * (iteration / float(total))) filled_length = int(round(bar_length * iteration / float(total))) bar = '█' * filled_length + '-' * (bar_length - filled_length) sys.stdout.write('\r%s |%s| %s%s %s' % (prefix, bar, percents, '%', suffix)), if iteration == total: sys.stdout.write('\n') sys.stdout.flush() # - # ## Define Parameters checkpoint_path = '/home/neil/Workspace/monodepth/models/cityscape/model_cityscapes.data-00000-of-00001' input_height = 256 input_width =512 # ## Setting Up TensorFlow # + params = monodepth_parameters( encoder='vgg', height=input_height, width=input_width, batch_size=2, num_threads=1, num_epochs=1, do_stereo=False, wrap_mode="border", use_deconv=False, alpha_image_loss=0, disp_gradient_loss_weight=0, lr_loss_weight=0, full_summary=False) left = tf.placeholder(tf.float32, [2, input_height, input_width, 3]) model = MonodepthModel(params, "test", left, None) # SESSION config = tf.ConfigProto(allow_soft_placement=True) sess = tf.Session(config=config) # SAVER train_saver = tf.train.Saver() # INIT sess.run(tf.global_variables_initializer()) sess.run(tf.local_variables_initializer()) # RESTORE restore_path = checkpoint_path.split(".")[0] train_saver.restore(sess, restore_path) # - # ## Create a new directory for depth images # + dirs = os.listdir('/hdd/ssd_2/dataset/segmentation/val') print("making new directories... 
\n" + str(dirs)) for d in dirs: if (os.path.isdir('/hdd/ssd_2/dataset/segmentation/val_depth/' + d)): print("directory: " + d + " already exists") else: os.mkdir('/hdd/ssd_2/dataset/segmentation/val_depth/' + d) # - # ## Loop through all images labels = pd.read_csv('/hdd/ssd_2/dataset/segmentation/val_labels.csv').values i = 0 j = 0 # + for row in labels: if i < 510: image_path = row[0] input_image = scipy.misc.imread(image_path, mode="RGB") original_height, original_width, num_channels = input_image.shape input_image = scipy.misc.imresize(input_image, [input_height, input_width], interp='lanczos') input_image = input_image.astype(np.float32) / 255 input_images = np.stack((input_image, np.fliplr(input_image)), 0) disp = sess.run(model.disp_left_est[0], feed_dict={left: input_images}) disp_pp = post_process_disparity(disp.squeeze()).astype(np.float32) new_image_path = image_path.replace("val", "val_depth") disp_to_img = scipy.misc.imresize(disp_pp.squeeze(), [original_height, original_width]) plt.imsave(new_image_path, disp_to_img, cmap='gray') j = j + 1 print_progress(j, 500, prefix='Progress:', suffix='Complete', bar_length=50) i = i + 1 print("data finished processing") # + # Create new labels # + new_labels = [] for row in labels: image_path = row[0] new_image_path = image_path.replace("val", "val_depth") new_row = np.append(row, new_image_path) if len(new_labels) == 0: new_labels = [new_row.tolist()] else: new_labels.append(new_row.tolist()) print(len(new_labels)) # + import csv csvfile = open('new_val_labels.csv', 'w') csvwriter = csv.writer(csvfile) for item in new_labels: csvwriter.writerow(item) csvfile.close() # -
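The blending in `post_process_disparity` is a convex combination at every pixel: the three masks sum to 1, so a constant disparity field must pass through unchanged. A self-contained numpy check of that property (the helper is re-implemented here so the snippet runs on its own):

```python
import numpy as np

def post_process_disparity(disp):
    # disp: (2, h, w) -- disparity for the image and its horizontal flip
    _, h, w = disp.shape
    l_disp = disp[0, :, :]
    r_disp = np.fliplr(disp[1, :, :])
    m_disp = 0.5 * (l_disp + r_disp)
    l, _ = np.meshgrid(np.linspace(0, 1, w), np.linspace(0, 1, h))
    # ramp mask: 1 at the left border, fading to 0 past 10% of the width
    l_mask = 1.0 - np.clip(20 * (l - 0.05), 0, 1)
    r_mask = np.fliplr(l_mask)
    # the three weights r_mask, l_mask, (1 - l_mask - r_mask) sum to 1
    return r_mask * l_disp + l_mask * r_disp + (1.0 - l_mask - r_mask) * m_disp

# a constant disparity field must come back unchanged
disp = np.full((2, 8, 16), 3.5, dtype=np.float32)
out = post_process_disparity(disp)
assert out.shape == (8, 16)
assert np.allclose(out, 3.5)
```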
monodepth/Monodepth Cityscape (Validation).ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Introduction to Deep Learning with PyTorch # # This notebook will walk through the procedure of developing a neural network to solve a supervised learning problem with PyTorch. Let's begin with an example of a regression problem: # ## Toy Example # The requirements are the following: # # 1. A dataset D = {$(x_1,y_1), (x_2,y_2), ..., (x_N, y_N)$} # 2. A parametric model f with a specified model architecture (i.e. the functional form of f). # 3. A loss function that evaluates the error in the outputs (predictions) of the model with respect to the ground truth labels y from the dataset D import torch import matplotlib.pyplot as plt # ### Dataset # Let's collect a dataset of x,y samples where: $$y = 3x^2 + x - 2$$ # # $$X = [x_1, x_2, x_3, ..., x_N]^T$$ # $$Y = [y_1, y_2, y_3, ..., y_N]^T$$ # # 1. $x_i$ is a real value, so X has dimensions Nx1 # 2. $y_i$ is a real value, so Y has dimensions Nx1 # + x = torch.linspace(start=-1, end=1, steps=10000).unsqueeze(1) # 10000 equally spaced values between [-1, 1] y = 3*x**2 + x - 2 print(x.shape, y.shape) # - # visualize the dataset plt.scatter(x, y) # ### Model # Let's define the model (i.e. the function f) that will map x to y. We assume that we do not know that y is a quadratic function of x. All we will assume is that x and y have a highly nonlinear relationship. Then, what should this function f be? A neural network, specifically a Multi-Layer Perceptron! # **Multi-Layer Perceptron (MLP)** # # MLP is a generic yet useful neural network model. Consider an MLP for a regression task, where we want to learn a mapping $x \in \mathbb{R}^k \mapsto y \in \mathbb{R}$. The input $x$ is $k$-dimensional and $y$ is scalar.
The simplest regression model is the linear model with parameters $w \in \mathbb{R}^k$ and $w_0 \in \mathbb{R}$, where we predict $y$ as $y_{pred} = w^T x + w_0$. # # An MLP is a generalization of linear regression. Consider a linear transformation by matrix $W_1\in\mathbb{R}^{k\times l}$ and bias vector $b_1\in \mathbb{R}^l$: # # $$h_1 = x * W_1 + b_1$$ # # In addition to the linear transformation, apply a non-linear function $\sigma(\cdot)$ elementwise to the above hidden vector $h_1$: # # $$z_1 = \sigma (h_1) $$ # # We have essentially generated a set of features $z_1$ from $x$. This set of features can be used as input to another linear model to predict $y$. Let us specify another set of parameters $W_2 \in \mathbb{R}^l, b_2 \in \mathbb{R}$. The prediction is: # # $$y_{pred} = z_1 * W_2 + b_2$$ # # An input-output mapping from $x$ to $y_{pred}$ can be compactly written as below: # # $$y_{pred} = \sigma(x * W_1 + b_1) * W_2 + b_2$$ # # We have just defined the simplest MLP. In simple words, an MLP is defined by alternately stacking linear mappings ($W_1,b_1$) and nonlinear functions ($\sigma(\cdot)$). Now, instead of having parameters $w,w_0$ as in the linear model, we have parameters $W_1,b_1,W_2,b_2$ in our simple MLP. Though this inevitably introduces more parameters, the model becomes more expressive and powerful as well. # # We can define our model as: $$ f(x) = y_{pred} = \sigma(x * W_1 + b_1) * W_2 + b_2$$ # # 1. $x$ has dimension Nx1 # 2. $W_1$ has dimension 1x100 # 3. $b_1$ has dimension 1x100 # 4. $W_2$ has dimension 100x1 # 5.
$b_2$ has dimension 1x1 # + model = torch.nn.Sequential( torch.nn.Linear(1, 100), torch.nn.ReLU(), torch.nn.Linear(100, 1) ) optimizer = torch.optim.Adam(model.parameters(), lr=0.001) y_hat = model(x) # - # visualize initial model predictions plt.scatter(x, y, label="ground-truth") plt.scatter(x, y_hat.detach(), label="prediction") plt.legend(loc="upper right") plt.show() # ### Loss # For training, we need a loss function that will evaluate the quality of model predictions against ground truth labels y. # # Let's use $L(\hat{y}, y) = (\hat{y}-y)^2$. L = torch.mean(((y_hat-y)**2)) print("The average loss or error in the model's predictions compared to the ground truth is %.4f" % (L)) # ### Optimization (training) # Now we will optimize our model f until the loss is below a threshold. We have already discussed the general recipe at the very beginning of this tutorial. alpha = 0.001 error_tolerance = 0.01 while L > error_tolerance: # forward pass y_hat = model(x) L = torch.mean((y_hat-y)**2) print(L.item()) # backward pass optimizer.zero_grad() L.backward() # update parameters optimizer.step() # visualize model predictions plt.scatter(x, y, label="ground-truth") plt.scatter(x, y_hat.detach(), label="prediction") plt.legend(loc="upper right") plt.show() # ## Harder Example # # ### Dataset # Let's collect a dataset of x,y samples where: $$y = sin(x) + \epsilon$$ # # $$X = [x_1, x_2, x_3, ..., x_N]^T$$ # $$Y = [y_1, y_2, y_3, ..., y_N]^T$$ # # 1. xi is a real value, so X has dimensions Nx1 # 2. yi is a real value, so Y has dimensions Nx1 # 3. $\epsilon \sim \mathcal{N}(0,1)$ # + x = torch.linspace(start=-10, end=10, steps=10000).unsqueeze(1) # 10000 equally spaced values between [-10, 10] y = torch.sin(x) + 0.2*torch.normal(mean=torch.zeros_like(x), std=torch.ones_like(x)) print(x.shape, y.shape) # - plt.scatter(x, y) plt.show() # Let's also collect a test dataset that we will use to evaluate the model. 
x_test = torch.linspace(start=-10, end=10, steps=5000).unsqueeze(1) y_test = torch.sin(x_test) # Now it's your turn to define and train the model for this new dataset. You can copy code from the previous example. # # ### Model # + # TODO: Define the model, the optimizer and the predictor y_hat model = torch.nn.Sequential( torch.nn.Linear(1, 512), torch.nn.ReLU(), torch.nn.Linear(512, 1) ) optimizer = torch.optim.Adam(model.parameters(), lr=0.0001) y_hat = model(x) # - # TODO: plot initial predictions plt.scatter(x, y, label="ground-truth") plt.scatter(x, y_hat.detach(), label="prediction") plt.legend(loc="upper right") plt.show() # ### Loss # TODO: Define loss and print initial loss criterion = torch.nn.MSELoss(reduction="mean") L = criterion(input=y_hat, target=y) print("The average loss or error in the model's predictions compared to the ground truth is %.4f" % (L)) # ### Optimization # Finally, train the model! # + # TODO: Train the model alpha = 0.001 error_tolerance = 0.01 while L > error_tolerance: # forward pass y_hat = model(x) L = criterion(input=y_hat, target=y) print(L.item()) # backward pass optimizer.zero_grad() L.backward() # update parameters optimizer.step() # - # TODO: Visualize model predictions plt.scatter(x, y, label="ground-truth") plt.scatter(x, y_hat.detach(), label="prediction") plt.legend(loc="upper right") plt.show() # + # DO NOT CHANGE # given predictions y_hat from the test input x_test print("test loss", ((y_test-y_hat)**2).mean().item()) # - # ## Conclusion # We have only scratched the surface of deep learning. Try changing the model architecture for better performance! # # Both PyTorch and Tensorflow help us compute derivatives, which is what we ultimately need to incrementally improve the model through several forward and backward passes. 
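The `nn.Sequential` models above hide the $W_1,b_1,W_2,b_2$ parametrization described earlier. As a minimal sketch, the same one-hidden-layer MLP can be written with raw tensors and autograd, with a manual gradient-descent update (the learning rate, step count, and initialization scales here are illustrative choices, not the notebook's):

```python
import torch

torch.manual_seed(0)
x = torch.linspace(-1, 1, 200).unsqueeze(1)   # (N, 1) inputs
y = 3 * x**2 + x - 2                          # same toy target as the first example

# f(x) = sigma(x @ W1 + b1) @ W2 + b2, with sigma = ReLU
W1 = (0.5 * torch.randn(1, 100)).requires_grad_()
b1 = torch.zeros(100, requires_grad=True)
W2 = (0.1 * torch.randn(100, 1)).requires_grad_()
b2 = torch.zeros(1, requires_grad=True)
params = [W1, b1, W2, b2]
lr = 0.01          # illustrative learning rate
first_loss = None

for step in range(1000):
    y_hat = torch.relu(x @ W1 + b1) @ W2 + b2   # forward pass
    L = torch.mean((y_hat - y) ** 2)            # MSE loss
    if first_loss is None:
        first_loss = L.item()
    L.backward()                                # backward pass (autograd)
    with torch.no_grad():                       # manual gradient-descent update
        for p in params:
            p -= lr * p.grad
            p.grad.zero_()

print(first_loss, L.item())  # loss should drop well below its initial value
```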
# # Please check out more information about the different APIs here: # # PyTorch: https://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html # # Tensorflow 2: https://www.tensorflow.org/tutorials/quickstart/advanced
labs/lab2_regression_torch.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np np.random.seed(77) pHat = np.random.binomial(62, 0.1, size=10000)/62 sum(pHat < 0.048)/10000 # %matplotlib inline import matplotlib.pyplot as plt plt.hist(pHat, bins=18)
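The simulated tail probability above can be checked against the exact binomial CDF. With n = 62 and threshold 0.048, `pHat < 0.048` corresponds to at most 2 successes (0.048 × 62 ≈ 2.98). A pure-Python check, written here as an assumed cross-check rather than part of the original notebook:

```python
from math import comb

n, p = 62, 0.1
k_max = 2  # pHat < 0.048  <=>  number of successes <= 2

# exact P(X <= 2) for X ~ Binomial(62, 0.1)
exact = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_max + 1))
print(exact)  # roughly 0.045, close to the Monte-Carlo estimate above
```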
Code/Statistical Test.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # %pip install futhark_ffi # !CFLAGS="-O3 -fomit-frame-pointer -mtune=cannonlake -march=cascadelake -mavx2 -mavx512f -ffast-math -ftree-vectorize" build_futhark_ffi bvep from futhark_ffi import Futhark import _bvep lib = Futhark(_bvep) import numpy as np globals().update(np.load('bvep.npz')) Obs_seeg.shape, Gr.shape data = lib.mk_bvep_data(Obs_seeg, SC, Gr, dt, tau0, I1) eta = np.zeros((87,))-2.5 x_init = np.zeros((87,)) - 2.5 z_init = np.zeros((87,)) + 4.0 amp = 1.0 off = np.zeros((91,)) + 1.0 K = 0.1 parm = lib.mk_bvep_parm(eta, 1.0, x_init, z_init, amp, off, K) # %timeit lib.bvep_loss(data, parm) # %timeit lib.bvep_grad_loss(data,parm) 0.02*10_000
FastBVEP.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Lambda School Data Science # # *Unit 2, Sprint 1, Module 4* # # --- # + # %%capture import sys # If you're on Colab: if 'google.colab' in sys.modules: DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Linear-Models/master/data/' # !pip install category_encoders==2.* # If you're working locally: else: DATA_PATH = '../data/' # - # # Module Project: Logistic Regression # # Do you like burritos? 🌯 You're in luck then, because in this project you'll create a model to predict whether a burrito is `'Great'`. # # The dataset for this assignment comes from [<NAME>](https://srcole.github.io/100burritos/), a San Diego-based data scientist and burrito enthusiast. # # ## Directions # # The tasks for this project are the following: # # - **Task 1:** Import the `csv` file using the `wrangle` function. # - **Task 2:** Conduct exploratory data analysis (EDA), and modify the `wrangle` function. # - **Task 3:** Split data into feature matrix `X` and target vector `y`. # - **Task 4:** Split feature matrix `X` and target vector `y` into training and test sets. # - **Task 5:** Establish the baseline accuracy score for your dataset. # - **Task 6:** Build `model_logr` using a pipeline that includes three transformers and a `LogisticRegression` predictor. Train the model on `X_train` and `y_train`. # - **Task 7:** Calculate the training and test accuracy score for your model. # - **Task 8:** Create a horizontal bar chart showing the 10 most influential features for your model. # - **Task 9:** Demonstrate and explain the differences between `model_logr.predict()` and `model_logr.predict_proba()`. # # **Note** # # You should limit yourself to the following libraries: # # - `category_encoders` # - `matplotlib` # - `pandas` # - `sklearn` # # I.
Wrangle Data # + def wrangle(filepath): # Import w/ DateTimeIndex df = pd.read_csv(filepath, parse_dates=['Date'], index_col='Date') # Drop unrated burritos df.dropna(subset=['overall'], inplace=True) # Derive binary classification target: # We define a 'Great' burrito as having an # overall rating of 4 or higher, on a 5 point scale df['Great'] = (df['overall'] >= 4).astype(int) # Drop high cardinality categoricals df = df.drop(columns=['Notes', 'Location', 'Address', 'URL', 'Neighborhood']) # Drop columns to prevent "leakage" df = df.drop(columns=['Rec', 'overall']) return df filepath = DATA_PATH + 'burritos/burritos.csv' # - # **Task 1:** Use the above `wrangle` function to import the `burritos.csv` file into a DataFrame named `df`. filepath = DATA_PATH + 'burritos/burritos.csv' df = wrangle(filepath) # During your exploratory data analysis, note that there are several columns whose data type is `object` but that seem to be a binary encoding. For example, `df['Beef'].head()` returns: # # ``` # 0 x # 1 x # 2 NaN # 3 x # 4 x # Name: Beef, dtype: object # ``` # # **Task 2:** Change the `wrangle` function so that these columns are properly encoded as `0` and `1`s. Be sure your code handles upper- and lowercase `X`s, and `NaN`s. # + # Conduct your exploratory data analysis here # And modify the `wrangle` function above. # - # If you explore the `'Burrito'` column of `df`, you'll notice that it's a high-cardinality categorical feature. You'll also notice that there's a lot of overlap between the categories. # # **Stretch Goal:** Change the `wrangle` function above so that it engineers four new features: `'california'`, `'asada'`, `'surf'`, and `'carnitas'`. Each row should have a `1` or `0` based on the text information in the `'Burrito'` column. For example, here's how the first 5 rows of the dataset would look. 
# # | **Burrito** | **california** | **asada** | **surf** | **carnitas** | # | :---------- | :------------: | :-------: | :------: | :----------: | # | California | 1 | 0 | 0 | 0 | # | California | 1 | 0 | 0 | 0 | # | Carnitas | 0 | 0 | 0 | 1 | # | <NAME> | 0 | 1 | 0 | 0 | # | California | 1 | 0 | 0 | 0 | # # **Note:** Be sure to also drop the `'Burrito'` column once you've engineered your new features. # + # Conduct your exploratory data analysis here # And modify the `wrangle` function above. # - # # II. Split Data # # **Task 3:** Split your dataset into the feature matrix `X` and the target vector `y`. You want to predict `'Great'`. X = ... y = ... # **Task 4:** Split `X` and `y` into a training set (`X_train`, `y_train`) and a test set (`X_test`, `y_test`). # # - Your training set should include data from 2016 through 2017. # - Your test set should include data from 2018 and later. X_train, y_train = ..., ... X_test, y_test = ..., ... # # III. Establish Baseline # # **Task 5:** Since this is a **classification** problem, you should establish a baseline accuracy score. Figure out what the majority class in `y_train` is and what percentage of your training observations it represents. baseline_acc = ... print('Baseline Accuracy Score:', baseline_acc) # # IV. Build Model # # **Task 6:** Build a `Pipeline` named `model_logr`, and fit it to your training data. Your pipeline should include: # # - a `OneHotEncoder` transformer for categorical features, # - a `SimpleImputer` transformer to deal with missing values, # - a [`StandardScaler`](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) transformer (which often improves performance in a logistic regression model), and # - a `LogisticRegression` predictor. model_logr = ... # # IV. Check Metrics # # **Task 7:** Calculate the training and test accuracy score for `model_logr`. # + training_acc = ... test_acc = ... print('Training Accuracy:', training_acc) print('Test Accuracy:', test_acc) # - # # V.
Communicate Results # # **Task 8:** Create a horizontal bar chart that plots the 10 most important coefficients for `model_logr`, sorted by absolute value. # # **Note:** Since you created your model using a `Pipeline`, you'll need to use the [`named_steps`](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html) attribute to access the coefficients in your `LogisticRegression` predictor. Be sure to look at the shape of the coefficients array before you combine it with the feature names. # + # Create your horizontal bar chart here. # - # There is more than one way to generate predictions with `model_logr`. For instance, you can use [`predict`](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html?highlight=logisticregression) or [`predict_proba`](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html?highlight=logisticregression#sklearn.linear_model.LogisticRegression.predict_proba). # # **Task 9:** Generate predictions for `X_test` using both `predict` and `predict_proba`. Then below, write a summary of the differences in the output for these two methods. You should answer the following questions: # # - What data type do `predict` and `predict_proba` output? # - What are the shapes of their different outputs? # - What numerical values are in the output? # - What do those numerical values represent? # + # Write code here to explore the differences between `predict` and `predict_proba`. # - # **Give your written answer here:** # # ``` # # # ```
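As a hedged illustration of the `predict` vs. `predict_proba` distinction on toy data (not the burrito dataset, and not a substitute for the written answer): `predict` returns one hard class label per row, while `predict_proba` returns one probability per class per row, with each row summing to 1.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# tiny two-class toy problem
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])

model = LogisticRegression().fit(X, y)

labels = model.predict(X)        # shape (4,)   -- hard class labels
probs = model.predict_proba(X)   # shape (4, 2) -- P(class 0), P(class 1) per row

print(labels.shape, probs.shape)
print(probs.sum(axis=1))  # each row of probabilities sums to 1
```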
module4-logistic-regression/LS_DS_214_assignment.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/CarolinaGarcia19/Clase1/blob/main/clase7.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="3R0SfgcUzEpr" # **Continuation of functions** # + id="uvkdsJyhzOYb" ## Let's recall the exercise from the last class def suma(*number): print(type(number)) total=0 for i in number: total+=i return total suma(2,3,5,4) # + colab={"base_uri": "https://localhost:8080/"} id="YwblRU_QyklL" outputId="993e9c76-646b-47c4-f3c3-31997c2dc929" suma(1,2,5,6,7,9) # After changing and/or editing the suma function we had been using, n elements can be passed for the function to iterate over # + [markdown] id="ouzdQA4X2pzW" # Using **: # The double * lets us accept as an input parameter a collection of elements in key-value form. We iterate over it using the items() method # + colab={"base_uri": "https://localhost:8080/"} id="xasYJEi03Hlb" outputId="18034e38-c769-416d-a487-1f3311e86f8b" def suma(**dado): suma=0; for i,j in dado.items(): suma +=j return suma suma(a=3, b=30, c=7) # + [markdown] id="U3FJSM8039Sx" # **Let's expand the definition of the return statement** # # --- # 1. Returning a value. # + colab={"base_uri": "https://localhost:8080/", "height": 35} id="oh6EfWkI4WG3" outputId="5f85ecbc-20d7-43c9-a44b-f26eff84bac8" def w(): return "Hi" w() # + [markdown] id="H1lqIECq4pLD" # 2.
Returning multiple variables # + colab={"base_uri": "https://localhost:8080/"} id="A-spw2aM4x9J" outputId="f080acbb-8206-4d03-c28c-46c2df335298" def sd(a,b,c): q=a+b+c # Declare a variable holding the sum of the arguments z=q/3 # Declare a variable holding the average return q,z q,z=sd(2,4,6) print(q) print(z) # + [markdown] id="OIF0nfze5mmq" # **Ways of writing documentation in a program, by code block or by line** # # --- # + colab={"base_uri": "https://localhost:8080/"} id="JF26S7HN6Adn" outputId="46a96b5c-0f7e-4c84-c51b-2d7eb3bf907a" def paula(a,b): """ Description of the function, the parameters it uses, an explanation of how to use it and what this function returns """ return a+b paula(2,3) # + colab={"base_uri": "https://localhost:8080/"} id="XVdSP1Yr7C32" outputId="8df8e046-d475-431c-f52a-8a5ca4abc652" help(paula) # + colab={"base_uri": "https://localhost:8080/"} id="Fm0h95BG7g98" outputId="73071fdb-d421-4ef2-8971-2b05d0f7345f" print(paula.__doc__) # Another way to access the docstring is to print the # function name followed by .__doc__ # + [markdown] id="Od0kkce-8gME" # **Differences between values and references** # # --- # There are two concepts, passing by *value* and passing by *reference*, that apply when handling the input parameters of a function: # # 1. If a parameter is passed by value, a local copy of the variable is created, which means that any modification to it has no effect on the original. # # 2. If the variable is passed by reference, # it acts directly on the original. # # + colab={"base_uri": "https://localhost:8080/"} id="xUFg6n0I9rm9" outputId="5eb9e9c9-c15a-478a-b99b-83cd9461729d" # Example of passing by **value** x=3 def funcion(w): w=0 funcion(x) print(x) # + [markdown] id="T6ClJ97d-b67" # 1. We start with x=3 # 2. We pass it to funcion() # 3. Inside this function we set the variable to zero # 4. Since it is passed by value, a local copy of x is created inside the function,
# so the original variable is not modified # + colab={"base_uri": "https://localhost:8080/"} id="I3Yqz1ek_G2c" outputId="18987d80-b7cd-4a59-8fea-466dc0292690" ## Case of passing by **reference** x=[3,6,9,12] def sw(tipo): tipo.append(50) sw(x) print(x) # + [markdown] id="jVzMxdpWBnzB" # **Types of Functions** # # --- # 1. Lambda functions # --- # These are functions normally used to define the code in a single line. # + colab={"base_uri": "https://localhost:8080/"} id="MwBGk7AbB31K" outputId="7c2d25f0-e621-4cbd-fddb-f12b6a0c982b" # What would normally be a function definition of this style: #def suma(a,b,c,d): # return a+b+c+d # can be written as follows: (lambda a,b,c,d: a+b+c+d)(1,2,3,4) # + colab={"base_uri": "https://localhost:8080/"} id="y6d7aEL4DQTc" outputId="68dd3828-db62-4d53-e10a-86f0c59c6539" (lambda *number:sum(number))(1,4,5,6,7,8,9) # + colab={"base_uri": "https://localhost:8080/"} id="xcXkz4yKEWDG" outputId="7b36ca4a-db19-4181-c0bf-62847e749626" (lambda **number:sum(number.values()))(a=1,b=4,c=10) # + colab={"base_uri": "https://localhost:8080/"} id="ckbSpxZUExQo" outputId="09329d11-0199-42ae-9b13-0de3baa7762d" luisa=lambda a,b,c,d:(a,b,c,d) print(luisa(2,5,7,9)) # + colab={"base_uri": "https://localhost:8080/"} id="VN6X-lGLFf8r" outputId="37e2e566-cbc0-451d-f382-fbe04c53790f" luisa=lambda a,b,c,d:a+b+c+d print(luisa(4,6,1,1)) # + [markdown] id="epVlWu2gFu_5" # **RECURSION** # # --- # The characteristics of a recursive function are the following: # # 1. A section in which it calls itself. # 2. A section of the code block containing the condition that triggers a return without calling itself again (IMPORTANT). # # The traditional examples of recursion are the **factorial** and the **Fibonacci** series. # # **1. Computing the factorial** # # In mathematics we define the factorial (n!) as the product of all preceding numbers down to 1.
# # Example: # 5! = 5*4*3*2*1 # 100! = 100*99*98*97*96*...*1 # # # + colab={"base_uri": "https://localhost:8080/"} id="qtU4t3egHgzn" outputId="d839249f-cd8f-4f10-d9a1-985e3a950ef3" # Let's apply a non-recursive approach to computing the factorial: def factorial(n): w=1 i=2 while i<=n: w *=i i +=1 return w factorial(5) # + colab={"base_uri": "https://localhost:8080/"} id="w87STTSvI7lF" outputId="df947b57-3cc1-4b97-9ea9-96bd6bf51184" # Let's define a recursive factorial: def recursivofac(n): if n==1: return 1 else: return n*recursivofac(n-1) recursivofac(10) # + [markdown] id="Ksy5OBldKdiQ" # Next we cover the two kinds of generators: # # 1. Return # 2. Yield # # + id="g8MIOhrTKvbk" # + colab={"base_uri": "https://localhost:8080/"} id="3QV7IKQLKviQ" outputId="2f1bdfb9-1fae-43f2-fe52-44860d2e8885" def s(): return 4 s() # + colab={"base_uri": "https://localhost:8080/"} id="q32aQfEDK5aO" outputId="acf42b35-8c12-4f3f-b793-ea7bdc69c52f" def s(): yield 4 s() # + [markdown] id="fm_NLzGtLHc8" # When we use return, the function gives us back the number 4, but when we use yield what it returns is a generator object
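The difference between the two versions of s() becomes visible once the generator is consumed: calling a function containing yield only builds a generator object, and next() actually runs the body. A quick sketch (function names here are renamed for clarity):

```python
def s_return():
    return 4

def s_yield():
    yield 4

print(s_return())        # 4 -- the value itself
g = s_yield()
print(type(g).__name__)  # generator -- the body has not run yet
print(next(g))           # 4 -- advancing the generator runs the body
```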
clase7.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from IPython.display import Markdown, display def printmd(string): display(Markdown(string)) import numpy as np df_2018=pd.read_csv('2018_Financial_Data.csv') df_2018 df_2018[df_2018['Unnamed: 0']=='SNP']['Profit Margin'] # + df_2018=pd.read_csv('2018_Financial_Data.csv') df_2017=pd.read_csv('2017_Financial_Data.csv') df_2016=pd.read_csv('2016_Financial_Data.csv') df_2015=pd.read_csv('2015_Financial_Data.csv') df_2014=pd.read_csv('2014_Financial_Data.csv') df_2018['Year']=2018 df_2017['Year']=2017 df_2016['Year']=2016 df_2015['Year']=2015 df_2014['Year']=2014 expand_labels={'2019 PRICE VAR [%]':'Next year PRICE VAR [%]'} df_2018.rename(columns=expand_labels,inplace=True) expand_labels={'2018 PRICE VAR [%]':'Next year PRICE VAR [%]'} df_2017.rename(columns=expand_labels,inplace=True) expand_labels={'2017 PRICE VAR [%]':'Next year PRICE VAR [%]'} df_2016.rename(columns=expand_labels,inplace=True) expand_labels={'2016 PRICE VAR [%]':'Next year PRICE VAR [%]'} df_2015.rename(columns=expand_labels,inplace=True) expand_labels={'2015 PRICE VAR [%]':'Next year PRICE VAR [%]'} df_2014.rename(columns=expand_labels,inplace=True) frames = [df_2018, df_2017, df_2016,df_2015,df_2014] result = pd.concat(frames,sort=False) # + df_2018=pd.read_csv('2018_Financial_Data.csv') df_2017=pd.read_csv('2017_Financial_Data.csv') df_2016=pd.read_csv('2016_Financial_Data.csv') df_2015=pd.read_csv('2015_Financial_Data.csv') df_2014=pd.read_csv('2014_Financial_Data.csv') df_2018['Year']=2018 df_2017['Year']=2017 df_2016['Year']=2016 df_2015['Year']=2015 df_2014['Year']=2014 expand_labels={'2019 PRICE VAR [%]':'Next year PRICE VAR [%]'} df_2018.rename(columns=expand_labels,inplace=True) 
expand_labels = {'2018 PRICE VAR [%]': 'Next year PRICE VAR [%]'}
df_2017.rename(columns=expand_labels, inplace=True)
expand_labels = {'2017 PRICE VAR [%]': 'Next year PRICE VAR [%]'}
df_2016.rename(columns=expand_labels, inplace=True)
expand_labels = {'2016 PRICE VAR [%]': 'Next year PRICE VAR [%]'}
df_2015.rename(columns=expand_labels, inplace=True)
expand_labels = {'2015 PRICE VAR [%]': 'Next year PRICE VAR [%]'}
df_2014.rename(columns=expand_labels, inplace=True)

# result1 is a second, untouched concatenation kept as a baseline for the
# null-count comparisons further below.
frames = [df_2018, df_2017, df_2016, df_2015, df_2014]
result1 = pd.concat(frames, sort=False)
# -

result.sort_values(by='Year', axis=0, ascending=True, inplace=True)

expand_labels1 = {'Unnamed: 0': 'Company Name'}
result.rename(columns=expand_labels1, inplace=True)

result.columns

alpha = pd.read_csv("null from alpha vantage/alpha_final.csv")

# Map Alpha Vantage field names onto this dataset's column names.
dict7 = {'company name': 'Company Name', 'totalRevenue': 'Revenue',
    'costOfRevenue': 'Cost of Revenue', 'grossProfit': 'Gross Profit',
    'totalOperatingExpense': 'Operating Expenses', 'researchAndDevelopment': 'R&D Expenses',
    'sellingGeneralAdministrative': 'SG&A Expense', 'operatingIncome': 'Operating Income',
    'depreciation': 'Depreciation & Amortization', 'ebit': 'EBIT',
    'interestExpense': 'Interest Expense', 'incomeBeforeTax': 'Earnings before Tax',
    'incomeTaxExpense': 'Income Tax Expense',
    'discontinuedOperations': 'Net Income - Discontinued ops',
    'netIncomeApplicableToCommonShares': 'Net Income Com', 'netIncome': 'Net Income',
    'commonStockSharesOutstanding': 'Weighted Average Shs Out',
    'preferredStockAndOtherAdjustments': 'Preferred Dividends',
    'cash': 'Cash and cash equivalents', 'shortTermInvestments': 'Short-term investments',
    'cashAndShortTermInvestments': 'Cash and short-term investments',
    'netReceivables': 'Average Receivables', 'inventory': 'Average Inventory',
    'totalCurrentAssets': 'Total current assets',
    'propertyPlantEquipment': 'Property, Plant & Equipment Net',
    'goodwill': 'Goodwill and Intangible Assets',
    'totalNonCurrentAssets': 'Total non-current assets', 'otherAssets':
'Other Assets', 'totalAssets': 'Total assets', 'accountsPayable': 'Average Payables', 'shortTermDebt': 'Short-term debt', 'totalCurrentLiabilities': 'Total current liabilities', 'longTermDebt': 'Long-term debt', 'totalNonCurrentLiabilities': 'Total non-current liabilities', 'otherCurrentLiabilities': 'Other Liabilities', 'totalLiabilities': 'Total liabilities', 'retainedEarnings': 'Retained earnings (deficit)', 'totalShareholderEquity': 'Total shareholders equity', 'longTermInvestments': 'Investments', 'longTermDebt shortTermDebt': 'Total debt', 'cashflowFromInvestment': 'Investing Cash flow', 'dividendPayout': 'Dividend payments', 'cashflowFromFinancing': 'Financing Cash Flow', 'changeInExchangeRate': 'Effect of forex changes on cash', 'changeInCashAndCashEquivalents': 'Net cash flow / Change in cash', 'operatingCashflow': 'Operating Cash Flow', 'capitalExpenditures': 'Capital Expenditure', 'Year_corrected': 'Year'} dict8={ 'totalRevenue': 'Revenue', 'costOfRevenue': 'Cost of Revenue', 'grossProfit': 'Gross Profit', 'totalOperatingExpense': 'Operating Expenses', 'researchAndDevelopment': 'R&D Expenses', 'sellingGeneralAdministrative': 'SG&A Expense', 'operatingIncome': 'Operating Income', 'depreciation': 'Depreciation & Amortization', 'ebit': 'EBIT', 'interestExpense': 'Interest Expense', 'incomeBeforeTax': 'Earnings before Tax', 'incomeTaxExpense': 'Income Tax Expense', 'discontinuedOperations': 'Net Income - Discontinued ops', 'netIncomeApplicableToCommonShares': 'Net Income Com', 'netIncome': 'Net Income', 'commonStockSharesOutstanding': 'Weighted Average Shs Out', 'preferredStockAndOtherAdjustments': 'Preferred Dividends', 'cash': 'Cash and cash equivalents', 'shortTermInvestments': 'Short-term investments', 'cashAndShortTermInvestments': 'Cash and short-term investments', 'netReceivables': 'Average Receivables', 'inventory': 'Average Inventory', 'totalCurrentAssets': 'Total current assets', 'propertyPlantEquipment': 'Property, Plant & Equipment Net', 
    'goodwill': 'Goodwill and Intangible Assets',
    'totalNonCurrentAssets': 'Total non-current assets', 'otherAssets': 'Other Assets',
    'totalAssets': 'Total assets', 'accountsPayable': 'Average Payables',
    'shortTermDebt': 'Short-term debt', 'totalCurrentLiabilities': 'Total current liabilities',
    'longTermDebt': 'Long-term debt', 'totalNonCurrentLiabilities': 'Total non-current liabilities',
    'otherCurrentLiabilities': 'Other Liabilities', 'totalLiabilities': 'Total liabilities',
    'retainedEarnings': 'Retained earnings (deficit)',
    'totalShareholderEquity': 'Total shareholders equity',
    'longTermInvestments': 'Investments', 'longTermDebt shortTermDebt': 'Total debt',
    'cashflowFromInvestment': 'Investing Cash flow', 'dividendPayout': 'Dividend payments',
    'cashflowFromFinancing': 'Financing Cash Flow',
    'changeInExchangeRate': 'Effect of forex changes on cash',
    'changeInCashAndCashEquivalents': 'Net cash flow / Change in cash',
    'operatingCashflow': 'Operating Cash Flow', 'capitalExpenditures': 'Capital Expenditure'}

col_filled = list(dict8.values())


# +
def extract(x, y, z, a):
    # If the value z (column a, year x, company y) is missing, pull the
    # matching value from the Alpha Vantage frame; otherwise keep it.
    if pd.isna(z):
        return alpha[(alpha['Company Name'] == y) & (alpha['Year'] == x)][a].max()
    else:
        return z
# -

# +
for cols in col_filled:
    result[cols] = result.apply(lambda x: extract(x['Year'], x['Company Name'], x[cols], cols), axis=1)
# -

len(col_filled)

np.array(alpha.columns)

result[result['Company Name'] == 'AAPL']

ev = pd.read_csv("null from alpha vantage/ev.csv")

# +
dict98 = {
    'numberOfShares': 'Weighted Average Shs Out',
    'marketCapitalization': 'Market Cap',
    'enterpriseValue': 'Enterprise Value'
}
# -

col_filled_ev = list(dict98.values())


# +
def extract1(x, y, z, a):
    # Same fill logic as extract(), but sourced from the enterprise-value frame.
    if pd.isna(z):
        return ev[(ev['Company Name'] == y) & (ev['Year'] == x)][a].max()
    else:
        return z
# -

# +
for cols in col_filled_ev:
    result[cols] = result.apply(lambda x: extract1(x['Year'], x['Company Name'], x[cols], cols), axis=1)
# -

# +

# -

# +
result['Revenue per Share']=np.where(result['Revenue per
Share'].isna(), result['Revenue']/result['Weighted Average Shs Out (Dil)'], result['Revenue per Share'])

# +
a2 = 'Price to Sales Ratio'
result[a2] = np.where(result[a2].isna(), result['Market Cap']/result['Revenue'], result[a2])


# +
# fun_rg fills a missing year-over-year growth value a from this year's value z
# and last year's value of the same column. Note that it is redefined below
# once per metric, with the column name hard-coded in the prev_year lookup.
def fun_rg(x, y, z, a):
    if (pd.isna(a)):
        prev_year = result[(result['Company Name'] == y) & (result['Year'] == x - 1)]['Revenue'].max()
        this_year = z
        if ((pd.isna(prev_year)) or (prev_year == 0)):
            return np.nan
        else:
            return (this_year - prev_year)/(prev_year)
    else:
        return a


# +
a2 = 'Revenue Growth'
result[a2] = result.apply(lambda x: fun_rg(x['Year'], x['Company Name'], x['Revenue'], x[a2]), axis=1)
# -

a2 = 'Gross Margin'
result[a2] = np.where(result[a2].isna(), result['Gross Profit']/result['Revenue'], result[a2])


# +
def fun_rg(x, y, z, a):
    if (pd.isna(a)):
        prev_year = result[(result['Company Name'] == y) & (result['Year'] == x - 1)]['Gross Profit'].max()
        this_year = z
        if ((pd.isna(prev_year)) or (prev_year == 0)):
            return np.nan
        else:
            return (this_year - prev_year)/(prev_year)
    else:
        return a


# +
a2 = 'Gross Profit Growth'
result[a2] = result.apply(lambda x: fun_rg(x['Year'], x['Company Name'], x['Gross Profit'], x[a2]), axis=1)
# -

a2 = 'SG&A to Revenue'
result[a2] = np.where(result[a2].isna(), result['SG&A Expense']/result['Revenue'], result[a2])

# +
a2 = 'R&D to Revenue'
result[a2] = np.where(result[a2].isna(), result['R&D Expenses']/result['Revenue'], result[a2])
# -


# +
def fun_rg(x, y, z, a):
    if (pd.isna(a)):
        prev_year = result[(result['Company Name'] == y) & (result['Year'] == x - 1)]['R&D Expenses'].max()
        this_year = z
        if ((pd.isna(prev_year)) or (prev_year == 0)):
            return np.nan
        else:
            return (this_year - prev_year)/(prev_year)
    else:
        return a


# +
a2 = 'R&D Expense Growth'
result[a2] = result.apply(lambda x: fun_rg(x['Year'], x['Company Name'], x['R&D Expenses'], x[a2]), axis=1)
# -


# +
def fun_rg(x, y, z, a):
    if (pd.isna(a)):
        prev_year = result[(result['Company Name'] == y) & (result['Year'] == x - 1)]['SG&A Expense'].max()
        this_year = z
        if ((pd.isna(prev_year)) or (prev_year == 0)):
            return np.nan
        else: return
(this_year-prev_year)/(prev_year) else: return a # + a2='SG&A Expenses Growth' result[a2]=result.apply(lambda x: fun_rg(x['Year'],x['Company Name'],x['SG&A Expense'],x[a2]),axis=1) # + a2='operatingProfitMargin' result[a2]=np.where(result[a2].isna(),result['Operating Income']/result['Revenue'],result[a2]) # - # + def fun_rg(x,y,z,a): if (pd.isna(a)): prev_year=result[(result['Company Name']==y) & (result['Year']==x-1)]['Operating Income'].max() this_year=z if ((pd.isna(prev_year)) or (prev_year==0)): return np.nan else: return (this_year-prev_year)/(prev_year) else: return a # + a2='Operating Income Growth' result[a2]=result.apply(lambda x: fun_rg(x['Year'],x['Company Name'],x['Operating Income'],x[a2]),axis=1) # - # + a2='EBITDA' result[a2]=np.where(result[a2].isna(),result['EBIT']+result['Depreciation & Amortization'],result[a2]) # + a2='EBITDA Margin' result[a2]=np.where(result[a2].isna(),result['EBITDA']/result['Revenue'],result[a2]) # + a2='EBIT Margin' result[a2]=np.where(result[a2].isna(),result['EBIT']/result['Revenue'],result[a2]) # - # + a2='ebtperEBIT' result[a2]=np.where(result[a2].isna(),result['Earnings before Tax']/result['EBIT'],result[a2]) # + a2='nIperEBT' result[a2]=np.where(result[a2].isna(),result['Net Income']/result['Earnings before Tax'],result[a2]) # - # + def fun_rg(x,y,z,a): if (pd.isna(a)): prev_year=result[(result['Company Name']==y) & (result['Year']==x-1)]['EBIT'].max() this_year=z if ((pd.isna(prev_year)) or (prev_year==0)): return np.nan else: return (this_year-prev_year)/(prev_year) else: return a # + a2='EBIT Growth' result[a2]=result.apply(lambda x: fun_rg(x['Year'],x['Company Name'],x['EBIT'],x[a2]),axis=1) # - a2='Earnings Before Tax Margin' result[a2]=np.where(result[a2].isna(),result['Earnings before Tax']/result['Revenue'],result[a2]) # + result['effectiveTaxRate'].fillna(result['effectiveTaxRate'].median(),inplace=True) # + a2='netProfitMargin' result[a2]=np.where(result[a2].isna(),result['Net 
Income']/result['Revenue'],result[a2]) # + a2='Net Income per Share' result[a2]=np.where(result[a2].isna(),result['Net Income']/result['Weighted Average Shs Out (Dil)'],result[a2]) # - # + def fun_rg(x,y,z,a): if (pd.isna(a)): prev_year=result[(result['Company Name']==y) & (result['Year']==x-1)]['Net Income'].max() this_year=z if ((pd.isna(prev_year)) or (prev_year==0)): return np.nan else: return (this_year-prev_year)/(prev_year) else: return a # + a2='Net Income Growth' result[a2]=result.apply(lambda x: fun_rg(x['Year'],x['Company Name'],x['Net Income'],x[a2]),axis=1) # + a2='Profit Margin' result[a2]=np.where(result[a2].isna(),(result['Net Income']-result['Preferred Dividends'])/result['Revenue'],result[a2]) # + a2='EPS' result[a2]=np.where(result[a2].isna(),(result['Net Income']-result['Preferred Dividends'])/result['Weighted Average Shs Out'],result[a2]) # - # + def fun_rg(x,y,z,a): if (pd.isna(a)): prev_year=result[(result['Company Name']==y) & (result['Year']==x-1)]['EPS'].max() this_year=z if ((pd.isna(prev_year)) or (prev_year==0)): return np.nan else: return (this_year-prev_year)/(prev_year) else: return a # + a2='EPS Growth' result[a2]=result.apply(lambda x: fun_rg(x['Year'],x['Company Name'],x['EPS'],x[a2]),axis=1) # - # + def fun_rg(x,y,z,a): if (pd.isna(a)): prev_year=result[(result['Company Name']==y) & (result['Year']==x-1)]['Weighted Average Shs Out'].max() this_year=z if ((pd.isna(prev_year)) or (prev_year==0)): return np.nan else: return (this_year-prev_year)/(prev_year) else: return a # + a2='Weighted Average Shares Growth' result[a2]=result.apply(lambda x: fun_rg(x['Year'],x['Company Name'],x['Weighted Average Shs Out'],x[a2]),axis=1) # + a2='EPS Diluted' result[a2]=np.where(result[a2].isna(),(result['Net Income']-result['Preferred Dividends'])/result['Weighted Average Shs Out (Dil)'],result[a2]) # - # + def fun_rg(x,y,z,a): if (pd.isna(a)): prev_year=result[(result['Company Name']==y) & (result['Year']==x-1)]['EPS Diluted'].max() this_year=z 
if ((pd.isna(prev_year)) or (prev_year==0)): return np.nan else: return (this_year-prev_year)/(prev_year) else: return a # + a2='EPS Diluted Growth' result[a2]=result.apply(lambda x: fun_rg(x['Year'],x['Company Name'],x['EPS Diluted'],x[a2]),axis=1) # + a2='Weighted Average Shs Out (Dil)' result[a2]=np.where(result[a2].isna(),result['Weighted Average Shs Out'],result[a2]) # - # + def fun_rg(x,y,z,a): if (pd.isna(a)): prev_year=result[(result['Company Name']==y) & (result['Year']==x-1)]['Weighted Average Shs Out (Dil)'].max() this_year=z if ((pd.isna(prev_year)) or (prev_year==0)): return np.nan else: return (this_year-prev_year)/(prev_year) else: return a # + a2='Weighted Average Shares Diluted Growth' result[a2]=result.apply(lambda x: fun_rg(x['Year'],x['Company Name'],x['Weighted Average Shs Out (Dil)'],x[a2]),axis=1) # + a2='Earnings Yield' result[a2]=np.where(result[a2].isna(), result['EPS']/(result['Market Cap']/result['Weighted Average Shs Out (Dil)']),result[a2]) # + a2='PE ratio' result[a2]=np.where(result[a2].isna(), (result['Market Cap']/result['Weighted Average Shs Out (Dil)'])/result['EPS'],result[a2]) # + a2='priceEarningsToGrowthRatio' result[a2]=np.where(result[a2].isna(), ((result['Market Cap']/result['Weighted Average Shs Out (Dil)'])/result['EPS'])/result['EPS Growth'],result[a2]) # + a2='ROIC' result[a2]=np.where(result[a2].isna(),(result['EBIT']* (1-result['effectiveTaxRate']))/result['Invested Capital'],result[a2]) # + a2='ROE' result[a2]=np.where(result[a2].isna(), result['Net Income']/result['Total shareholders equity'],result[a2]) # + a2='returnOnAssets' result[a2]=np.where(result[a2].isna(), result['Net Income']/result['Total assets'],result[a2]) # - a2='Return on Tangible Assets' result[a2]=np.where(result[a2].isna(), result['Net Income']/(result['Total assets']-result['Goodwill and Intangible Assets']),result[a2]) # + a2='returnOnCapitalEmployed' result[a2]=np.where(result[a2].isna(), result['EBIT']/(result['Total assets']-result['Total 
current liabilities']),result[a2]) # + a2='Dividend Yield' result[a2]=np.where(result[a2].isna(), result['Dividend payments']/result['Market Cap'],result[a2]) # - a2='Dividend per Share' result[a2]=np.where(result[a2].isna(), result['Dividend payments']/result['Weighted Average Shs Out (Dil)'],result[a2]) # + def fun_rg(x,y,z,a): if (pd.isna(a)): prev_year=result[(result['Company Name']==y) & (result['Year']==x-1)]['Dividend per Share'].max() this_year=z if ((pd.isna(prev_year)) or (prev_year==0)): return np.nan else: return (this_year-prev_year)/(prev_year) else: return a # + a2='Dividends per Share Growth' result[a2]=result.apply(lambda x: fun_rg(x['Year'],x['Company Name'],x['Dividend per Share'],x[a2]),axis=1) # - a2='Payout Ratio' result[a2]=np.where(result[a2].isna(), result['Dividend payments']/result['Net Income'],result[a2]) # + def fun_rg(x,y,z,a): if (pd.isna(a)): prev_year=result[(result['Company Name']==y) & (result['Year']==x-1)]['Receivables'].max() this_year=z if ((pd.isna(prev_year)) or (prev_year==0)): return np.nan else: return (this_year-prev_year)/(prev_year) else: return a # + a2='Receivables growth' result[a2]=result.apply(lambda x: fun_rg(x['Year'],x['Company Name'],x['Receivables'],x[a2]),axis=1) # - # + def fun_rg(x,y,z,a): if (pd.isna(a)): prev_year=result[(result['Company Name']==y) & (result['Year']==x-1)]['Inventories'].max() this_year=z if ((pd.isna(prev_year)) or (prev_year==0)): return np.nan else: return (this_year-prev_year)/(prev_year) else: return a # + a2='Inventory Growth' result[a2]=result.apply(lambda x: fun_rg(x['Year'],x['Company Name'],x['Inventories'],x[a2]),axis=1) # - a2='Net Current Asset Value' result[a2]=np.where(result[a2].isna(), result['Total current assets']-result['Total current liabilities'],result[a2]) a2='Intangibles to Total Assets' result[a2]=np.where(result[a2].isna(), result['Goodwill and Intangible Assets']/result['Total assets'],result[a2]) def fun_rg(x,y,z,a): if (pd.isna(a)): 
prev_year=result[(result['Company Name']==y) & (result['Year']==x-1)]['Total assets'].max() this_year=z if ((pd.isna(prev_year)) or (prev_year==0)): return np.nan else: return (this_year-prev_year)/(prev_year) else: return a # + a2='Asset Growth' result[a2]=result.apply(lambda x: fun_rg(x['Year'],x['Company Name'],x['Total assets'],x[a2]),axis=1) # - a2='Tangible Asset Value' result[a2]=np.where(result[a2].isna(), result['Total assets']-result['Goodwill and Intangible Assets'],result[a2]) a2='longtermDebtToCapitalization' result[a2]=np.where(result[a2].isna(), result['Long-term debt']/result['Market Cap'],result[a2]) a2='Shareholders Equity per Share' result[a2]=np.where(result[a2].isna(), result['Total shareholders equity']/result['Weighted Average Shs Out'],result[a2]) a2='totalDebtToCapitalization' result[a2]=np.where(result[a2].isna(), result['Total debt']/result['Market Cap'],result[a2]) a2='Net Debt' result[a2]=np.where(result[a2].isna(), result['Long-term debt']+result['Short-term debt']-result['Cash and cash equivalents'],result[a2]) a2='Net Debt to EBITDA' result[a2]=np.where(result[a2].isna(), result['Net Debt']/result['EBITDA'],result[a2]) a2='Debt to Assets' result[a2]=np.where(result[a2].isna(), result['Total debt']/result['Total assets'],result[a2]) a2='Debt to Equity' result[a2]=np.where(result[a2].isna(), result['Total debt']/result['Total shareholders equity'],result[a2]) # + def fun_rg(x,y,z,a): if (pd.isna(a)): prev_year=result[(result['Company Name']==y) & (result['Year']==x-1)]['Total debt'].max() this_year=z if ((pd.isna(prev_year)) or (prev_year==0)): return np.nan else: return (this_year-prev_year)/(prev_year) else: return a # + a2='Debt Growth' result[a2]=result.apply(lambda x: fun_rg(x['Year'],x['Company Name'],x['Total debt'],x[a2]),axis=1) # + a2='Book Value per Share' result[a2]=np.where(result[a2].isna(), (result['Total assets']-result['Total liabilities'])/result['Weighted Average Shs Out (Dil)'],result[a2]) # - a2='Tangible Book 
Value per Share' result[a2]=np.where(result[a2].isna(), (result['Tangible Asset Value']-result['Total liabilities'])/result['Weighted Average Shs Out (Dil)'],result[a2]) a2='PTB ratio' result[a2]=np.where(result[a2].isna(), (result['Market Cap']/result['Weighted Average Shs Out (Dil)'])/result['Book Value per Share'],result[a2]) # + def fun_rg(x,y,z,a): if (pd.isna(a)): prev_year=result[(result['Company Name']==y) & (result['Year']==x-1)]['Book Value per Share'].max() this_year=z if ((pd.isna(prev_year)) or (prev_year==0)): return np.nan else: return (this_year-prev_year)/(prev_year) else: return a # + a2='Book Value per Share Growth' result[a2]=result.apply(lambda x: fun_rg(x['Year'],x['Company Name'],x['Book Value per Share'],x[a2]),axis=1) # - a2='Working Capital' result[a2]=np.where(result[a2].isna(), result['Total current assets']-result['Total current liabilities'],result[a2]) a2='Cash per Share' result[a2]=np.where(result[a2].isna(), result['Cash and cash equivalents']/result['Weighted Average Shs Out (Dil)'],result[a2]) a2='operatingCashFlowSalesRatio' result[a2]=np.where(result[a2].isna(), result['Operating Cash Flow']/result['Revenue'],result[a2]) a2='Operating Cash Flow per Share' result[a2]=np.where(result[a2].isna(), result['Operating Cash Flow']/result['Weighted Average Shs Out (Dil)'],result[a2]) # + def fun_rg(x,y,z,a): if (pd.isna(a)): prev_year=result[(result['Company Name']==y) & (result['Year']==x-1)]['Operating Cash Flow'].max() this_year=z if ((pd.isna(prev_year)) or (prev_year==0)): return np.nan else: return (this_year-prev_year)/(prev_year) else: return a # + a2='Operating Cash Flow growth' result[a2]=result.apply(lambda x: fun_rg(x['Year'],x['Company Name'],x['Operating Cash Flow'],x[a2]),axis=1) # + a2='priceToOperatingCashFlowsRatio' result[a2]=np.where(result[a2].isna(), (result['Market Cap']/result['Weighted Average Shs Out (Dil)'])/result['Operating Cash Flow'],result[a2]) # + a2='Capex to Operating Cash Flow' 
result[a2]=np.where(result[a2].isna(), result['Capital Expenditure']/result['Operating Cash Flow'],result[a2]) # + a2='Capex to Revenue' result[a2]=np.where(result[a2].isna(), result['Capital Expenditure']/result['Revenue'],result[a2]) # + a2='Capex to Depreciation' result[a2]=np.where(result[a2].isna(), result['Capital Expenditure']/result['Depreciation & Amortization'],result[a2]) # + a2='Capex per Share' result[a2]=np.where(result[a2].isna(), result['Capital Expenditure']/result['Weighted Average Shs Out (Dil)'],result[a2]) # + a2='Free Cash Flow' result[a2]=np.where(result[a2].isna(), result['Operating Cash Flow']-result['Capital Expenditure'],result[a2]) # + a2='Free Cash Flow margin' result[a2]=np.where(result[a2].isna(), result['Free Cash Flow']/result['Revenue'],result[a2]) # - a2='Free Cash Flow Yield' result[a2]=np.where(result[a2].isna(), result['Free Cash Flow']/result['Market Cap'],result[a2]) a2='Free Cash Flow per Share' result[a2]=np.where(result[a2].isna(), result['Free Cash Flow']/result['Weighted Average Shs Out (Dil)'],result[a2]) a2='freeCashFlowOperatingCashFlowRatio' result[a2]=np.where(result[a2].isna(), result['Free Cash Flow']/result['Operating Cash Flow'],result[a2]) # + def fun_rg(x,y,z,a): if (pd.isna(a)): prev_year=result[(result['Company Name']==y) & (result['Year']==x-1)]['Free Cash Flow'].max() this_year=z if ((pd.isna(prev_year)) or (prev_year==0)): return np.nan else: return (this_year-prev_year)/(prev_year) else: return a # + a2='Free Cash Flow growth' result[a2]=result.apply(lambda x: fun_rg(x['Year'],x['Company Name'],x['Free Cash Flow'],x[a2]),axis=1) # + a2='priceToFreeCashFlowsRatio' result[a2]=np.where(result[a2].isna(), (result['Market Cap']/result['Weighted Average Shs Out (Dil)'])/result['Free Cash Flow'],result[a2]) # + a2='Days Sales Outstanding' result[a2]=np.where(result[a2].isna(), 365/result['Receivables Turnover'],result[a2]) # + a2='Inventory Turnover' result[a2]=np.where(result[a2].isna(), result['Cost of 
Revenue']/result['Average Inventory'],result[a2]) # + a2='Days of Inventory on Hand' result[a2]=np.where(result[a2].isna(), 365/result['Inventory Turnover'],result[a2]) # + a2='fixedAssetTurnover' result[a2]=np.where(result[a2].isna(), result['Revenue']/result['Total non-current assets'],result[a2]) # + a2='assetTurnover' result[a2]=np.where(result[a2].isna(), result['Revenue']/result['Total assets'],result[a2]) # + a2='Current ratio' result[a2]=np.where(result[a2].isna(), result['Total current assets']/result['Total current liabilities'],result[a2]) # + a2='quickRatio' result[a2]=np.where(result[a2].isna(), (result['Total current assets']-result['Inventories'])/result['Total current liabilities'],result[a2]) # + a2='cashRatio' result[a2]=np.where(result[a2].isna(), result['Cash and cash equivalents']/result['Total current liabilities'],result[a2]) # + a2='operatingCycle' result[a2]=np.where(result[a2].isna(), result['daysOfSalesOutstanding']+result['Days of Inventory on Hand'],result[a2]) # + a2='cashConversionCycle' result[a2]=np.where(result[a2].isna(), result['daysOfSalesOutstanding']+result['Days of Inventory on Hand']-result['Days Payables Outstanding'],result[a2]) # + # + a2='Interest Coverage' result[a2]=np.where(result[a2].isna(), result['EBIT']/result['Interest Expense'],result[a2]) # + a2='cashFlowCoverageRatios' result[a2]=np.where(result[a2].isna(), result['Operating Cash Flow']/result['Total debt'],result[a2]) # + a2='shortTermCoverageRatios' result[a2]=np.where(result[a2].isna(), result['Operating Cash Flow']/result['Total current liabilities'],result[a2]) # + a2='capitalExpenditureCoverageRatios' result[a2]=np.where(result[a2].isna(), result['Operating Cash Flow']/result['Capital Expenditure'],result[a2]) # + a2='dividendpaidAndCapexCoverageRatios' result[a2]=np.where(result[a2].isna(), result['Operating Cash Flow']/(result['Capital Expenditure']+result['Dividend payments']),result[a2]) # + a2='POCF ratio' result[a2]=np.where(result[a2].isna(), 
result['Market Cap']/result['Operating Cash Flow'],result[a2]) # - a2='Net Cash/Marketcap' result[a2]=np.where(result[a2].isna(), result['Net cash flow / Change in cash']/result['Market Cap'],result[a2]) a2='PFCF ratio' result[a2]=np.where(result[a2].isna(), result['Market Cap']/result['Free Cash Flow'],result[a2]) a2='enterpriseValueMultiple' result[a2]=np.where(result[a2].isna(), result['Enterprise Value']/result['EBITDA'],result[a2]) a2='EV to Operating cash flow' result[a2]=np.where(result[a2].isna(), result['Enterprise Value']/result['Operating Cash Flow'],result[a2]) a2='EV to Free cash flow' result[a2]=np.where(result[a2].isna(), result['Enterprise Value']/result['Free Cash Flow'],result[a2]) a2='EV to Sales' result[a2]=np.where(result[a2].isna(), result['Enterprise Value']/result['Revenue'],result[a2]) a2='Graham Number' result[a2]=np.where(result[a2].isna(), np.sqrt(result['Book Value per Share']*result['EPS']*22.5),result[a2]) a2='Graham Net-Net' result[a2]=np.where(result[a2].isna(), result['Total current assets']/result['Total liabilities'],result[a2]) # + a2='companyEquityMultiplier' result[a2]=np.where(result[a2].isna(), result['Total assets']/result['Total shareholders equity'],result[a2]) # - result.replace(to_replace=[np.inf], value=np.nan, inplace=True) result.replace(to_replace=[-np.inf], value=np.nan, inplace=True) max(list(result.isna().sum())) max(list(result1.isna().sum())) result df_null_report=pd.DataFrame() for cols in result.columns: df_null_report=df_null_report.append({'col name':cols,'sum of null':result[cols].isna().sum()},ignore_index=True) df_null_report['% of null']=(df_null_report['sum of null']/22077)*100 pd.set_option('display.max_rows', None) df_null_report[df_null_report['% of null']>10] # + df_null_report1=pd.DataFrame() for cols in result1.columns: df_null_report1=df_null_report1.append({'col name':cols,'sum of null':result1[cols].isna().sum()},ignore_index=True) df_null_report1['% of null']=(df_null_report1['sum of 
null']/22077)*100
pd.set_option('display.max_rows', None)
df_null_report1

# +
# Note: result_drop_comp is only defined in a later cell, so this cell has to
# be run after it (these notebook cells are out of execution order here).
df_null_report2 = pd.DataFrame()
for cols in result_drop_comp.columns:
    df_null_report2 = df_null_report2.append({'col name': cols, 'sum of null': result_drop_comp[cols].isna().sum()}, ignore_index=True)
df_null_report2['% of null'] = (df_null_report2['sum of null']/22077)*100
pd.set_option('display.max_rows', None)
df_null_report2
# -

result['Revenue Growth'].isna().sum()

result[result['Revenue Growth'].isna()]

# Drop redundant duplicate ratio columns that are covered by the filled versions above.
result.drop(['priceToSalesRatio', 'priceSalesRatio'], axis=1, inplace=True)

# +
result.drop(['grossProfitMargin'], axis=1, inplace=True)
# -

result.drop(['ebitperRevenue'], axis=1, inplace=True)
result.drop(['eBITperRevenue'], axis=1, inplace=True)
result.drop(['niperEBT'], axis=1, inplace=True)
result.drop(['Net Profit Margin'], axis=1, inplace=True)
result.drop(['priceEarningsRatio'], axis=1, inplace=True)
result.drop(['dividendYield'], axis=1, inplace=True)
result.drop(['payoutRatio', 'dividendPayoutRatio'], axis=1, inplace=True)
result.drop(['debtEquityRatio'], axis=1, inplace=True)
result.drop(['debtRatio'], axis=1, inplace=True)
result.drop(['priceBookValueRatio', 'priceToBookRatio', 'PB ratio'], axis=1, inplace=True)
result.drop(['operatingCashFlowPerShare'], axis=1, inplace=True)
result.drop(['freeCashFlowPerShare'], axis=1, inplace=True)
result.drop(['priceCashFlowRatio'], axis=1, inplace=True)
result.drop(['currentRatio'], axis=1, inplace=True)
result.drop(['interestCoverage'], axis=1, inplace=True)
result.drop(['cashFlowToDebtRatio'], axis=1, inplace=True)
result.drop(['eBTperEBIT'], axis=1, inplace=True)
result.drop(['returnOnEquity'], axis=1, inplace=True)
result.drop(['cashPerShare'], axis=1, inplace=True)
result.drop(['daysOfSalesOutstanding'], axis=1, inplace=True)
result.drop(['payablesTurnover'], axis=1, inplace=True)
result.drop(['daysOfPayablesOutstanding'], axis=1, inplace=True)
result.drop(['inventoryTurnover'], axis=1, inplace=True)
result.drop(['daysOfInventoryOutstanding'], axis=1, inplace=True)
result.drop(['Enterprise
Value over EBITDA'], axis=1, inplace=True)

more_col_to_remove = ['10Y Net Income Growth (per Share)', '5Y Net Income Growth (per Share)',
    '10Y Dividend per Share Growth (per Share)', '5Y Dividend per Share Growth (per Share)',
    '3Y Dividend per Share Growth (per Share)', '10Y Shareholders Equity Growth (per Share)',
    '5Y Shareholders Equity Growth (per Share)', '3Y Shareholders Equity Growth (per Share)',
    '10Y Operating CF Growth (per Share)', '5Y Operating CF Growth (per Share)',
    '3Y Operating CF Growth (per Share)', '3Y Net Income Growth (per Share)',
    '10Y Revenue Growth (per Share)', '5Y Revenue Growth (per Share)',
    '3Y Revenue Growth (per Share)']
result.drop(more_col_to_remove, axis=1, inplace=True)

# +

# -

# +
#result[['daysOfSalesOutstanding','Days Sales Outstanding']]
# -

result['Enterprise Value over EBITDA'].isna().sum()

result['enterpriseValueMultiple'].isna().sum()

result['PTB ratio'].isna().sum()

result['PB ratio'].isna().sum()

pd.set_option('display.max_rows', 20)
pd.set_option('display.max_columns', 20)
pd.set_option('display.width', 200)
result

# +
#result.to_csv("null from alpha vantage/final_result_before_kmeans.csv")
# -

result.isnull().sum(axis=1).max()

# Drop companies with more than 110 missing fields before imputation.
result_drop_comp = result[result.isnull().sum(axis=1) < 110]

result_drop_comp.to_csv('null from alpha vantage/final_result_before_kmeans.csv')

result1[result1['Unnamed: 0'] == '']['Revenue']

len(result.columns)

result_kmeans = result_drop_comp.copy()

from sklearn.impute import KNNImputer
imputer = KNNImputer(n_neighbors=200)
result_kmeans_filled = imputer.fit_transform(result_kmeans)

# !
pip install -U scikit-learn list_null_revenue=result[result['Revenue'].isnull()]['Company Name'] list_null_revenue_set=set(list_null_revenue) list_null_revenue_unique=list(list_null_revenue_set) len(list_null_revenue_unique) pd.DataFrame(list_null_revenue_unique).to_csv('list_revenue_null.csv') # + #df=pd.read_csv('2018_Financial_Data.csv') # - list_null_revenue_unique.index("FOX") list_null_asset=result[result['Total assets'].isnull()]['Company Name'] list_null_asset_set=set(list_null_asset) list_null_asset_unique=list(list_null_asset_set) len(list_null_asset_unique) # + #new_list_asset+list_null_revenue_unique #len(list_null_revenue_unique) # - new_list_asset=[] for i in list_null_asset_unique : if(i in list_null_revenue_unique): print('p') else: new_list_asset.append(i) # + #len(new_list_asset) 723 list_done_income=set(new_list_asset+list_null_revenue_unique) len(list_done_income) # + list_overview=[] for cols in result.columns: list_null_col=result[result[cols].isnull()]['Company Name'] list_null_col_set=set(list_null_col) list_null_col_unique=list(list_null_col_set) for i in list_null_col_unique: if ( (i not in list_overview) & (i in company_name) ): list_overview.append(i) # - len(list_overview) list_overview_ashish=list_overview[0:1000] list_overview_poonam=list_overview[1000:2000] list_overview_vijay=list_overview[2000:] pd.DataFrame(list_overview_ashish).to_csv("list_overview_ashish.csv") result_new=result.reset_index() result_new row_company=result_new.isnull().sum(axis=1) row_company_missing=row_company[row_company>18] row_company_missing # + company_name=[] for company in row_company_missing.index: if (result_new.iloc[company,1] not in company_name): company_name.append(result_new.iloc[company,1]) # - company_name list_cash_balance_left=[] for cols in result.columns: list_null_col=result[result[cols].isnull()]['Company Name'] list_null_col_set=set(list_null_col) list_null_col_unique=list(list_null_col_set) for i in list_null_col_unique: if ((i not in 
list_null_revenue_unique ) & (i not in list_cash_balance_left) & (i in company_name) ): list_cash_balance_left.append(i) len(list_cash_balance_left) pd.DataFrame(list_cash_balance_left).to_csv("list_cash_balance_left.csv") list_cash_flow_left=list(set(list_null_revenue_unique[541:]+list_cash_balance_left)) pd.DataFrame(list_cash_flow_left).to_csv("list_cash_flow_left.csv") list_null=result[result.isnull()]['Company Name'] list_null_set=set(list_null) list_null_unique=list(list_null_set) result[result.isnull()] len(list_null_unique) pd.DataFrame(new_list_asset).to_csv('asset_null_list.csv') list_already=new_list_asset+list_null_revenue_unique new_list_cash_flow=[] for i in list_null_cash_flow_unique : if(i in list_already): print('p') else: new_list_cash_flow.append(i) len(new_list_cash_flow) # + #list1=result.columns #np.array(list1) # + list2=['Company Name', 'Revenue', 'Revenue per Share', 'priceToSalesRatio', 'priceSalesRatio', 'Price to Sales Ratio', 'Revenue Growth', '10Y Revenue Growth (per Share)', '5Y Revenue Growth (per Share)', '3Y Revenue Growth (per Share)', 'Cost of Revenue', 'Gross Profit', 'Gross Margin', 'grossProfitMargin', 'Gross Profit Growth', 'Operating Expenses', 'R&D Expenses', 'SG&A Expense', 'SG&A to Revenue', 'R&D to Revenue', 'R&D Expense Growth', 'SG&A Expenses Growth', 'Operating Income', 'operatingProfitMargin', 'Operating Income Growth', 'Other comprehensive income', 'Income Quality', 'EBITDA', 'EBITDA Margin', 'Depreciation & Amortization', 'EBIT', 'EBIT Margin', 'ebitperRevenue', 'eBITperRevenue', 'ebtperEBIT', 'niperEBT', 'EBIT Growth', 'Interest Expense', 'Earnings before Tax', 'Earnings Before Tax Margin', 'pretaxProfitMargin', 'nIperEBT', 'eBTperEBIT', 'Income Tax Expense', 'effectiveTaxRate', 'Consolidated Income', 'Net Income - Non-Controlling int', 'Net Income - Discontinued ops', 'Net Income Com', 'Net Income', 'netProfitMargin', 'Net Income per Share', 'Net Income Growth', '10Y Net Income Growth (per Share)', '5Y Net Income 
Growth (per Share)', '3Y Net Income Growth (per Share)', 'Profit Margin', 'Net Profit Margin', 'EPS', 'EPS Growth', 'Weighted Average Shs Out', 'Weighted Average Shares Growth', 'EPS Diluted', 'EPS Diluted Growth', 'Weighted Average Shs Out (Dil)', 'Weighted Average Shares Diluted Growth', 'Earnings Yield', 'priceEarningsRatio', 'priceEarningsToGrowthRatio', 'PE ratio', 'ROIC', 'ROE', 'returnOnEquity', 'returnOnAssets', 'Return on Tangible Assets', 'returnOnCapitalEmployed', 'Preferred Dividends', 'dividendYield', 'Dividend Yield', 'Dividend per Share', 'Dividends per Share Growth', '10Y Dividend per Share Growth (per Share)', '5Y Dividend per Share Growth (per Share)', '3Y Dividend per Share Growth (per Share)', 'payoutRatio', 'Payout Ratio', 'dividendPayoutRatio', 'Cash and cash equivalents', 'Short-term investments', 'Cash and short-term investments', 'Receivables', 'Receivables growth', 'Inventories', 'Inventory Growth', 'Total current assets', 'Net Current Asset Value', 'Property, Plant & Equipment Net', 'Goodwill and Intangible Assets', 'Intangibles to Total Assets', 'Long-term investments', 'Tax assets', 'Total non-current assets', 'Other Assets', 'Total assets', 'Asset Growth', 'Tangible Asset Value', 'Payables', 'Short-term debt', 'Tax Liabilities', 'Deferred revenue', 'Total current liabilities', 'Long-term debt', 'longtermDebtToCapitalization', 'Total non-current liabilities', 'Deposit Liabilities', 'Other Liabilities', 'Total liabilities', 'Retained earnings (deficit)', 'Total shareholders equity', 'Shareholders Equity per Share', '10Y Shareholders Equity Growth (per Share)', '5Y Shareholders Equity Growth (per Share)', '3Y Shareholders Equity Growth (per Share)', 'Investments', 'Invested Capital', 'Total debt', 'totalDebtToCapitalization', 'Net Debt', 'Net Debt to EBITDA', 'debtRatio', 'debtEquityRatio', 'Debt to Equity', 'Debt to Assets', 'Debt Growth', 'Interest Debt per Share', 'Book Value per Share', 'Tangible Book Value per Share', 
'priceBookValueRatio', 'priceToBookRatio', 'PB ratio', 'PTB ratio', 'Book Value per Share Growth', 'Stock-based compensation', 'Stock-based compensation to Revenue', 'Working Capital', '10Y Operating CF Growth (per Share)', '5Y Operating CF Growth (per Share)', '3Y Operating CF Growth (per Share)', 'Acquisitions and disposals', 'Investment purchases and sales', 'Investing Cash flow', 'Issuance (repayment) of debt', 'Issuance (buybacks) of shares', 'Dividend payments', 'Financing Cash Flow', 'Effect of forex changes on cash', 'Net cash flow / Change in cash', 'cashPerShare', 'Cash per Share', 'Operating Cash Flow', 'operatingCashFlowSalesRatio', 'operatingCashFlowPerShare', 'Operating Cash Flow per Share', 'Operating Cash Flow growth', 'priceToOperatingCashFlowsRatio', 'Capital Expenditure', 'Capex to Operating Cash Flow', 'Capex to Revenue', 'Capex to Depreciation', 'Capex per Share', 'Free Cash Flow', 'Free Cash Flow margin', 'Free Cash Flow Yield', 'freeCashFlowPerShare', 'Free Cash Flow per Share', 'freeCashFlowOperatingCashFlowRatio', 'Free Cash Flow growth', 'priceToFreeCashFlowsRatio', 'priceCashFlowRatio', 'Average Receivables', 'Receivables Turnover', 'Days Sales Outstanding', 'daysOfSalesOutstanding', 'Average Payables', 'Payables Turnover', 'payablesTurnover', 'Days Payables Outstanding', 'daysOfPayablesOutstanding', 'Average Inventory', 'Inventory Turnover', 'inventoryTurnover', 'Days of Inventory on Hand', 'daysOfInventoryOutstanding', 'fixedAssetTurnover', 'assetTurnover', 'currentRatio', 'Current ratio', 'quickRatio', 'cashRatio', 'operatingCycle', 'cashConversionCycle', 'interestCoverage', 'cashFlowCoverageRatios', 'shortTermCoverageRatios', 'capitalExpenditureCoverageRatios', 'dividendpaidAndCapexCoverageRatios', 'Interest Coverage', 'cashFlowToDebtRatio', 'POCF ratio', 'PFCF ratio', 'Market Cap', 'Net Cash/Marketcap', 'Enterprise Value', 'enterpriseValueMultiple', 'Enterprise Value over EBITDA', 'EV to Operating cash flow', 'EV to Free cash flow', 
'EV to Sales', 'priceFairValue', 'Graham Number', 'Graham Net-Net', 'companyEquityMultiplier', 'Sector', 'Next year PRICE VAR [%]', 'Class', 'Year'] # - # + #result=result[list2] # + #result[result['Company Name']=='AAPL'].to_csv('apple_test.csv') # + #result.to_excel('Financial_Data_Merged_ordered.xlsx') # + #df1.to_excel("output.xlsx") # - pd.set_option('display.max_rows', None) result[(result['Company Name']=="GE") & (result['Year']==2018)]['Revenue'] result['Revenue']/result['Weighted Average Shs Out (Dil)'] result['Revenue per Share'] company_cap=list(set(result[result['Market Cap'].isnull()]['Company Name'])) result[result['Company Name']=='SCD']['Market Cap'] len(company_cap) company_share=list(set(result[result['Weighted Average Shs Out'].isnull()]['Company Name'])) len(company_share) pd.DataFrame(company_cap).to_csv('company_cap.csv') len(set(company_cap+company_share)) # + #df=result #import numpy as np # + cap_left=[] for comp in company_share: if (comp not in company_cap): cap_left.append(comp) # - len(cap_left) pd.DataFrame(cap_left).to_csv('cap_left_final.csv')
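The `cap_left` loop above is a membership filter; the same result (order aside) falls out of a set difference. A minimal sketch with toy ticker lists standing in for the real `company_share`/`company_cap` built from the null-value scans above:

```python
# Toy stand-ins for the company_share / company_cap lists built above
company_share = ['AAPL', 'GE', 'SCD', 'MSFT']
company_cap = ['GE', 'SCD']

# Loop form used in the notebook
cap_left = [comp for comp in company_share if comp not in company_cap]

# Equivalent set difference (order not preserved)
assert set(cap_left) == set(company_share) - set(company_cap)
print(sorted(cap_left))  # → ['AAPL', 'MSFT']
```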
data all years merge.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Dataproc - Submit Spark Job # # ## Intended Use # A Kubeflow Pipeline component to submit a Spark job on YARN to the Google Cloud Dataproc service. # # ## Run-Time Parameters: # Name | Description # :--- | :---------- # project_id | Required. The ID of the Google Cloud Platform project that the cluster belongs to. # region | Required. The Cloud Dataproc region in which to handle the request. # cluster_name | Required. The name of the cluster to run the job on. # main_jar_file_uri | The HCFS URI of the jar file that contains the main class. # main_class | The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jarFileUris. # args | Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. # spark_job | Optional. The full payload of a [SparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/SparkJob). # job | Optional. The full payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). # wait_interval | Optional. The number of seconds to wait between polls of the operation status. Defaults to 30. # # ## Output: # Name | Description # :--- | :---------- # job_id | The ID of the created job. # ## Sample # # Note: the sample code below works both in an IPython notebook and as plain Python code. # # ### Set up a Dataproc cluster # Follow the [guide](https://cloud.google.com/dataproc/docs/guides/create-cluster) to create a new Dataproc cluster or reuse an existing one. # # ### Prepare the Spark job # Upload your Spark jar file to a Google Cloud Storage (GCS) bucket.
In the sample, we will use a jar file that is pre-installed in the main cluster `file:///usr/lib/spark/examples/jars/spark-examples.jar`. # # Here is the [source code of example](https://github.com/apache/spark/blob/master/examples/src/main/java/org/apache/spark/examples/JavaSparkPi.java). # # To package a self-contained spark application, follow the [instructions](https://spark.apache.org/docs/latest/quick-start.html#self-contained-applications). # ### Set sample parameters # + tags=["parameters"] PROJECT_ID = '<Please put your project ID here>' CLUSTER_NAME = '<Please put your existing cluster name here>' REGION = 'us-central1' SPARK_FILE_URI = 'file:///usr/lib/spark/examples/jars/spark-examples.jar' MAIN_CLASS = 'org.apache.spark.examples.SparkPi' ARGS = ['1000'] EXPERIMENT_NAME = 'Dataproc - Submit Spark Job' COMPONENT_SPEC_URI = 'https://raw.githubusercontent.com/kubeflow/pipelines/d2f5cc92a46012b9927209e2aaccab70961582dc/components/gcp/dataproc/submit_spark_job/component.yaml' # - # ### Install KFP SDK # Install the SDK (Uncomment the code if the SDK is not installed before) # + # KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.12/kfp.tar.gz' # # !pip3 install $KFP_PACKAGE --upgrade # - # ### Load component definitions # + import kfp.components as comp dataproc_submit_spark_job_op = comp.load_component_from_url(COMPONENT_SPEC_URI) display(dataproc_submit_spark_job_op) # - # ### Here is an illustrative pipeline that uses the component import kfp.dsl as dsl import kfp.gcp as gcp import json @dsl.pipeline( name='Dataproc submit Spark job pipeline', description='Dataproc submit Spark job pipeline' ) def dataproc_submit_spark_job_pipeline( project_id = PROJECT_ID, region = REGION, cluster_name = CLUSTER_NAME, main_jar_file_uri = '', main_class = MAIN_CLASS, args = json.dumps(ARGS), spark_job=json.dumps({ 'jarFileUris': [ SPARK_FILE_URI ] }), job='{}', wait_interval='30' ): dataproc_submit_spark_job_op(project_id, region, cluster_name, 
main_jar_file_uri, main_class, args, spark_job, job, wait_interval).apply(gcp.use_gcp_secret('user-gcp-sa')) # ### Compile the pipeline pipeline_func = dataproc_submit_spark_job_pipeline pipeline_filename = pipeline_func.__name__ + '.pipeline.tar.gz' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) # ### Submit the pipeline for execution # + #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
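Note that `args` and `spark_job` reach the component as JSON strings. A minimal round-trip sketch of the payloads serialized above (values mirror the sample parameters):

```python
import json

# Same payloads the pipeline above serializes for the component
spark_job = json.dumps({'jarFileUris': ['file:///usr/lib/spark/examples/jars/spark-examples.jar']})
args = json.dumps(['1000'])

# The component parses them back into a SparkJob payload and an argument list
assert json.loads(spark_job)['jarFileUris'][0].endswith('spark-examples.jar')
assert json.loads(args) == ['1000']
```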
components/gcp/dataproc/submit_spark_job/sample.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Problem # Find the smallest difference between two arrays. # # The function should take in two arrays and find the pair of numbers, one from each array, whose absolute difference is closest to zero. # def smallest_difference(array_one, array_two): """ Complexity: Time: O(n log n + m log m) where n = length of first array, m = length of second array (the n log n comes from sorting using an optimal sorting algorithm) Space: O(1) """ # first, we sort the arrays array_one.sort() array_two.sort() # init pointers that we'll use for each array idx_one = 0 idx_two = 0 current_diff = float('inf') smallest_diff = float('inf') smallest_pair = [] # guard against empty input arrays while idx_one < len(array_one) and idx_two < len(array_two): first_num = array_one[idx_one] second_num = array_two[idx_two] # find absolute difference current_diff = abs(first_num - second_num) if first_num < second_num: # increment the index of first array idx_one += 1 elif second_num < first_num: # increment the index of second array idx_two += 1 else: return [first_num, second_num] if smallest_diff > current_diff: smallest_diff = current_diff smallest_pair = [first_num, second_num] return smallest_pair array1 = [2, 1, 3, 5, 4] array2 = [4, 5, 6, 3, 2] smallest_difference(array1, array2)
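As a sanity check, an O(n·m) brute force over every cross pair should agree with the two-pointer version on the sample input. The helper below is only for verification, not part of the solution:

```python
from itertools import product

def smallest_difference_brute(a, b):
    # O(n*m) reference implementation: examine every cross pair
    return min(([x, y] for x, y in product(a, b)),
               key=lambda p: abs(p[0] - p[1]))

pair = smallest_difference_brute([2, 1, 3, 5, 4], [4, 5, 6, 3, 2])
assert abs(pair[0] - pair[1]) == 0  # same zero difference the two-pointer walk finds
print(pair)  # → [2, 2]
```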
arrays/smallest_difference.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- my_string="helloworld" my_string # + ##Heading # - "helloworld" ##Heading # + ##indexing # - my_string my_string[0] # to access a position inside a string, use []; it behaves like an array my_string[-1] # -1 returns the last character my_string_2="12334567890" # my_string_2[0] my_string_2[2:] # it skipped '1' and '2'; [2:] starts at index 2 and shows the rest # ##slicing # #slicing my_string_2[4:] my_string_2[:3] # + active="" # prints the characters before index 3 # # - # #stopping index # my_string_2[1:5] # returns the elements between indices 1 and 5 # ###
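The slicing rules explored above can be summarized in one place, using the same `my_string_2` value:

```python
s = "12334567890"  # same value as my_string_2

assert s[0] == "1"            # indexing starts at 0
assert s[-1] == "0"           # -1 is the last character
assert s[2:] == "334567890"   # start at index 2, take the rest
assert s[:3] == "123"         # everything before index 3
assert s[1:5] == "2334"       # indices 1 through 4; the stop index 5 is excluded
```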
03-strings checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # <h1>Exploration of use of pytables and h5py libraries for creating forcing files for MOHID<h1> # <ol> # <li> # <a href="#winds">Winds Structure</a> # </li> # <li> # <a href="#currents">Currents Structure</a> # </li> # <li> # <a href="#createwind">Wind Input file pytables</a> # </li> # <li> # <a href="#createcurrents">Create Current Input file pytables</a> # </li> # <li> # <a href="#windsh5py">Create Wind Input file h5py</a> # </li> # <li> # <a href="#currentsh5py">Create Current Input file h5py</a> # </li> # <li> # <a href="#comparison">Looking at file size and time incentive for different compression levels</a> # </li> # </ol> import numpy as np from tables import * import xarray as xr import datetime as datetime from salishsea_tools import viz_tools, utilities import matplotlib.pyplot as plt import time from scipy.interpolate import griddata import datetime # <div id='winds'><a name='Winds'></a><h2>Winds Input File Structure</h2></div> h5wind = open_file('/ocean/abhudia/MEOPAR/analysis-ashutosh/temp/st_georgia_wind.hdf5') for group in h5wind.walk_groups(): print(group) # ### Read wind grid data metadata getattr(getattr(h5wind.root.Results, 'wind velocity X'), 'wind velocity X_00001').attrs # <div id='currents'><a name='Currents'></a><h2>Currents File Structure</h2></div> h5current = open_file('/ocean/abhudia/MEOPAR/analysis-ashutosh/temp/St_georgia.hdf5') for group in h5current.walk_groups(): print(group) # <div id='createwind'><a name='Currents'></a><h2>Create wind input file pytables</h2></div> # + # make hdf5 fle for currents h5file = open_file("temp/winds.hdf5", mode="w", title="Winds Test file") times = h5file.create_group("/", 'Time', 'Time') results = h5file.create_group("/", 'Results', 'Results') wind_u = h5file.create_group(results, 
'wind velocity X', 'wind velocity X') wind_v = h5file.create_group(results, 'wind velocity Y', 'wind velocity Y') # - GEM = xr.open_dataset('/results2/MIDOSS/forcing/SalishSeaCast/hrdps/07apr15-16apr15/GEM.nc').isel(time_counter = slice(23,240)) # + GEM = xr.open_dataset('/results2/MIDOSS/forcing/SalishSeaCast/hrdps/07apr15-16apr15/GEM.nc').isel(time_counter = slice(23,240)) # lat lon data GEM_grid = xr.open_dataset('https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSaAtmosphereGridV1') NEMO_grid = xr.open_dataset('https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSnBathymetryV17-02') # GEM data coordinates points = np.array([GEM_grid.latitude.values.ravel(), GEM_grid.longitude.values.ravel()-360]).T # NEMO lat lon grids tuple xi = (NEMO_grid.latitude.values, NEMO_grid.longitude.values) # GEM Data GEM_u = GEM.u_wind.values GEM_v = GEM.v_wind.values bar = utilities.statusbar('Loading ...') u_wind = np.expand_dims(griddata(points, GEM_u[0].ravel(), xi, method='cubic'),0) v_wind = np.expand_dims(griddata(points, GEM_v[0].ravel(), xi, method='cubic'),0) for grid in bar(range(1, GEM_u.shape[0])): interp_u = griddata(points, GEM_u[grid].ravel(), xi, method='cubic') u_wind = np.vstack((u_wind, np.expand_dims(interp_u,0))) interp_v = griddata(points, GEM_v[grid].ravel(), xi, method='cubic') v_wind = np.vstack((v_wind, np.expand_dims(interp_v,0))) u_wind1 = u_wind[...,:,1:897:,1:397].astype('float64') v_wind1 = v_wind[...,:,1:897:,1:397].astype('float64') u_wind2 = np.transpose(u_wind1, [0,2,1]) v_wind2 = np.transpose(v_wind1, [0,2,1]) # - u_wind1 = u_wind[...,:,1:897:,1:397].astype('float64') v_wind1 = v_wind[...,:,1:897:,1:397].astype('float64') u_wind2 = np.transpose(u_wind1, [0,2,1]) v_wind2 = np.transpose(v_wind1, [0,2,1]) datelist = GEM.time_counter.values.astype('datetime64[s]').astype(datetime.datetime) datearrays = [] filters = Filters(complevel=4) for date in datelist: datearrays.append(np.array([date.year, date.month, date.day, date.hour, date.minute, 
date.second]).astype('float64')) bar = utilities.statusbar('Writing Dates ...') for i in bar(range(len(datearrays))): time_attr = 'Time_' + ((5 - len(str(i + 1))) * '0') + str(i + 1) h5file.create_carray(times, f'{time_attr}',obj = datearrays[i],shape=(6,), chunkshape = (6,), filters = filters) bar = utilities.statusbar('Writing X winds ...') for i in bar(range(u_wind2.shape[0])): velocity_attr = 'wind velocity X_' + ((5 - len(str(i + 1))) * '0') + str(i + 1) h5file.create_carray(wind_u, f'{velocity_attr}',obj = u_wind2[i],shape=(396, 896), chunkshape = (396, 896), filters = filters) bar = utilities.statusbar('Writing Y winds ...') for i in bar(range(v_wind2.shape[0])): velocity_attr = 'wind velocity Y_' + ((5 - len(str(i + 1))) * '0') + str(i + 1) h5file.create_carray(wind_v, f'{velocity_attr}',obj = v_wind2[i],shape=(396, 896), chunkshape = (396, 896), filters = filters) h5file.close() # <div id='createcurrents'><a name='Currents'></a><h2>Create currents input file pytables</h2></div> # # Due to the use of dotted notation in pytables, it doesn't like names with spaces in them and spits out pages of warnings each time these are run. I cleared them.
strt = time.time() # + # make hdf5 fle for currents h5file = open_file("temp/currents.hdf5", mode="w", title="Currents Test file") # make tree structure times = h5file.create_group("/", 'Time', '') results = h5file.create_group("/", 'Results', '') velocity_u = h5file.create_group(results, 'velocity U', '') velocity_v = h5file.create_group(results, 'velocity V', '') water_level = h5file.create_group(results, 'water level', '') # - # write in chunks of size 12 sliceend = 35 slicebegin = 23 num_disp = 0 count = 0 while sliceend <= 191: U = xr.open_dataset('/results2/MIDOSS/forcing/SalishSeaCast/nowcast-green/07apr15-14apr15/U.nc').isel(time_counter = slice(slicebegin,sliceend)) V = xr.open_dataset('/results2/MIDOSS/forcing/SalishSeaCast/nowcast-green/07apr15-14apr15/V.nc').isel(time_counter = slice(slicebegin,sliceend)) T = xr.open_dataset('/results2/MIDOSS/forcing/SalishSeaCast/nowcast-green/07apr15-14apr15/T.nc').isel(time_counter = slice(slicebegin,sliceend)) datelist = U.time_counter.values.astype('datetime64[s]').astype(datetime.datetime) # unstagger unstaggered_U = viz_tools.unstagger_xarray(U.vozocrtx, 'x').values[...,:,1:897:,1:397] unstaggered_V = viz_tools.unstagger_xarray(V.vomecrty, 'y').values[...,:,1:897:,1:397] unstaggered_T = T.sossheig.values[...,:,1:897:,1:397] # rotate current_u, current_v = viz_tools.rotate_vel(unstaggered_U, unstaggered_V) # clear memory unstaggered_U, unstaggered_V = 0,0 # transpose grid current_u = np.transpose(current_u, [0,1,3,2]) current_v = np.transpose(current_v, [0,1,3,2]) unstaggered_T = np.transpose(unstaggered_T, [0,2,1]) # flip by depth dimension current_u = np.flip(np.nan_to_num(current_u).astype('float64'), axis = 1) current_v = np.flip(np.nan_to_num(current_v).astype('float64'), axis = 1) unstaggered_T = np.nan_to_num(unstaggered_T).astype('float64') # make list of time arrays datearrays = [] for date in datelist: datearrays.append(np.array([date.year, date.month, date.day, date.hour, date.minute, 
date.second]).astype('float64')) # write velocity U values to hdf5 for i in range(current_u.shape[0]): velocity_attr = 'velocity U_' + ((5 - len(str(i + 1 + num_disp))) * '0') + str(i + 1 + num_disp) h5file.create_carray(velocity_u, f'{velocity_attr}',obj = current_u[i],shape=(40, 396, 896), chunkshape = (40, 396, 896)) # write velocity V values to hdf5 for i in range(current_v.shape[0]): velocity_attr = 'velocity V_' + ((5 - len(str(i + 1 + num_disp))) * '0') + str(i + 1 + num_disp) h5file.create_carray(velocity_v, f'{velocity_attr}',obj = current_v[i],shape=(40, 396, 896), chunkshape = (40, 396, 896)) # write water level values to hdf5 for i in range(unstaggered_T.shape[0]): level_attr = 'water level_' + ((5 - len(str(i + 1 + num_disp))) * '0') + str(i + 1 + num_disp) h5file.create_carray(water_level, f'{level_attr}',obj = unstaggered_T[i],shape=(396, 896), chunkshape = (396, 896)) # write time values to hdf5 for i in range(len(datearrays)): time_attr = 'Time_' + ((5 - len(str(i + 1 + num_disp))) * '0') + str(i + 1 + num_disp) h5file.create_carray(times, f'{time_attr}',obj = datearrays[i],shape=(6,), chunkshape = (6,)) num_disp = num_disp + current_u.shape[0] slicebegin = sliceend sliceend = sliceend + 12 count = count + 1 print(count) h5file.close() # #### This is cool, but I was not able to figure out how to write the metadata for each grid in pytables. Also, the output files are humongous. Like 30 gigs for something that should be 3 gigs.
So I will now check out h5py # # Trying with h5py compression # <div id='windsh5py'><a name='Winds'></a><h2>Writing winds file h5py</h2></div> import h5py f = h5py.File('winds.hdf5', 'w') results = f.create_group('Results') times = f.create_group('Time') windu = f.create_group('/Results/wind velocity X') windx = f.create_group('/Results/wind velocity Y') # + GEM = xr.open_dataset('/results2/MIDOSS/forcing/SalishSeaCast/hrdps/07apr15-16apr15/GEM.nc').isel(time_counter = slice(23,240)) # lat lon data GEM_grid = xr.open_dataset('https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSaAtmosphereGridV1') NEMO_grid = xr.open_dataset('https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSnBathymetryV17-02') # GEM data coordinates points = np.array([GEM_grid.latitude.values.ravel(), GEM_grid.longitude.values.ravel()-360]).T # NEMO lat lon grids tuple xi = (NEMO_grid.latitude.values, NEMO_grid.longitude.values) # GEM Data GEM_u = GEM.u_wind.values GEM_v = GEM.v_wind.values bar = utilities.statusbar('Loading ...') u_wind = np.expand_dims(griddata(points, GEM_u[0].ravel(), xi, method='cubic'),0) v_wind = np.expand_dims(griddata(points, GEM_v[0].ravel(), xi, method='cubic'),0) for grid in bar(range(1, GEM_u.shape[0])): interp_u = griddata(points, GEM_u[grid].ravel(), xi, method='cubic') u_wind = np.vstack((u_wind, np.expand_dims(interp_u,0))) interp_v = griddata(points, GEM_v[grid].ravel(), xi, method='cubic') v_wind = np.vstack((v_wind, np.expand_dims(interp_v,0))) u_wind1 = u_wind[...,:,1:897:,1:397].astype('float64') v_wind1 = v_wind[...,:,1:897:,1:397].astype('float64') u_wind2 = np.transpose(u_wind1, [0,2,1]) v_wind2 = np.transpose(v_wind1, [0,2,1]) # - datelist = GEM.time_counter.values.astype('datetime64[s]').astype(datetime.datetime) datearrays = [] for date in datelist: datearrays.append(np.array([date.year, date.month, date.day, date.hour, date.minute, date.second]).astype('float64')) bar = utilities.statusbar('Writing Dates ...') for i in bar(range(len(datearrays))): time_attr =
'Time_' + ((5 - len(str(i + 1))) * '0') + str(i + 1) dset = times.create_dataset(time_attr, shape = (6,), data = datearrays[i],chunks=(6,), compression = 'gzip', compression_opts = 1) metadata = {'Maximum' : np.array([float(datearrays[i][0])]), 'Minimum' : np.array([-0.]), 'Units' : b'YYYY/MM/DD HH:MM:SS'} # !!! dset.attrs.update(metadata) #h5file.create_carray(times, f'{time_attr}',obj = datearrays[i],shape=(6,), chunkshape = (6,)) bar = utilities.statusbar('Writing X winds ...') for i in bar(range(u_wind2.shape[0])): velocity_attr = 'wind velocity X_' + ((5 - len(str(i + 1))) * '0') + str(i + 1) dset = windu.create_dataset(velocity_attr, shape = (396, 896), data = u_wind2[i],chunks=(396, 896), compression = 'gzip', compression_opts = 1) metadata = {'FillValue' : np.array([0.]), 'Maximum' : np.array([100.]), 'Minimum' : np.array([-100.]), 'Units' : b'm/s'} dset.attrs.update(metadata) #h5file.create_carray(wind_u, f'{velocity_attr}',obj = u_wind2[i],shape=(396, 896), chunkshape = (396, 896)) bar = utilities.statusbar('Writing Y winds ...') for i in bar(range(v_wind2.shape[0])): velocity_attr = 'wind velocity Y_' + ((5 - len(str(i + 1))) * '0') + str(i + 1) dset = windx.create_dataset(velocity_attr, shape = (396, 896), data = v_wind2[i],chunks=(396, 896), compression = 'gzip', compression_opts = 1) metadata = {'FillValue' : np.array([0.]), 'Maximum' : np.array([100.]), 'Minimum' : np.array([-100.]), 'Units' : b'm/s'} dset.attrs.update(metadata) #h5file.create_carray(wind_v, f'{velocity_attr}',obj = v_wind2[i],shape=(396, 896), chunkshape = (396, 896)) f.close() # <div id='currentsh5py'><a name='Winds'></a><h2>Writing currents file h5py</h2></div> # # This has added metadata like in Shihan's files. 
Done in smaller chunks as a remnant from h5py code boilerplate # + compression_level = 1 f = h5py.File('currents.hdf5', 'w') results = f.create_group('Results') times = f.create_group('Time') velocity_u = f.create_group('/Results/velocity U') velocity_v = f.create_group('/Results/velocity V') water_level = f.create_group('/Results/water level') # write in chunks of size 12 sliceend = 35 slicebegin = 23 num_disp = 0 count = 0 while sliceend <= 191: # 191 # execute one loop for comparison U = xr.open_dataset('/results2/MIDOSS/forcing/SalishSeaCast/nowcast-green/07apr15-14apr15/U.nc').isel(time_counter = slice(slicebegin,sliceend)) V = xr.open_dataset('/results2/MIDOSS/forcing/SalishSeaCast/nowcast-green/07apr15-14apr15/V.nc').isel(time_counter = slice(slicebegin,sliceend)) T = xr.open_dataset('/results2/MIDOSS/forcing/SalishSeaCast/nowcast-green/07apr15-14apr15/T.nc').isel(time_counter = slice(slicebegin,sliceend)) datelist = U.time_counter.values.astype('datetime64[s]').astype(datetime.datetime) # unstagger unstaggered_U = viz_tools.unstagger_xarray(U.vozocrtx, 'x').values[...,1:897:,1:397] unstaggered_V = viz_tools.unstagger_xarray(V.vomecrty, 'y').values[...,1:897:,1:397] unstaggered_T = T.sossheig.values[...,1:897:,1:397] # rotate #current_u, current_v = viz_tools.rotate_vel(unstaggered_U, unstaggered_V) current_u, current_v = unstaggered_U, unstaggered_V # clear memory # unstaggered_U, unstaggered_V = 0,0 # transpose grid current_u = np.transpose(current_u, [0,1,3,2]) current_v = np.transpose(current_v, [0,1,3,2]) unstaggered_T = np.transpose(unstaggered_T, [0,2,1]) # flip by depth dimension current_u = np.flip(np.nan_to_num(current_u).astype('float64'), axis = 1) current_v = np.flip(np.nan_to_num(current_v).astype('float64'), axis = 1) unstaggered_T = np.nan_to_num(unstaggered_T).astype('float64') # make list of time arrays datearrays = [] for date in datelist: datearrays.append(np.array([date.year, date.month, date.day, date.hour, date.minute, 
date.second]).astype('float64')) # write u wind values to hdf5 bar = utilities.statusbar('Writing U ...') for i in bar(range(current_u.shape[0])): velocity_attr = 'velocity U_' + ((5 - len(str(i + 1 + num_disp))) * '0') + str(i + 1 + num_disp) dset = velocity_u.create_dataset(velocity_attr, shape = (40, 396, 896), data = current_u[i],chunks=(40, 396, 896), compression = 'gzip', compression_opts = compression_level) metadata = {'FillValue' : np.array([0.]), 'Maximum' : np.array([5.]), 'Minimum' : np.array([-5.]), 'Units' : b'm/s'} dset.attrs.update(metadata) # write v wind values to hdf5 bar = utilities.statusbar('Writing V ...') for i in bar(range(current_v.shape[0])): velocity_attr = 'velocity V_' + ((5 - len(str(i + 1 + num_disp))) * '0') + str(i + 1 + num_disp) dset = velocity_v.create_dataset(velocity_attr, shape = (40, 396, 896), data = current_v[i],chunks=(40, 396, 896), compression = 'gzip', compression_opts = compression_level) metadata = {'FillValue' : np.array([0.]), 'Maximum' : np.array([5.]), 'Minimum' : np.array([-5.]), 'Units' : b'm/s'} dset.attrs.update(metadata) # write water level values to hdf5 bar = utilities.statusbar('Writing Water Levels ...') for i in bar(range(unstaggered_T.shape[0])): level_attr = 'water level_' + ((5 - len(str(i + 1 + num_disp))) * '0') + str(i + 1 + num_disp) dset = water_level.create_dataset(level_attr, shape = (396, 896), data = unstaggered_T[i],chunks=(396, 896), compression = 'gzip', compression_opts = compression_level) metadata = {'FillValue' : np.array([0.]), 'Maximum' : np.array([5.]), 'Minimum' : np.array([-5.]), 'Units' : b'm'} dset.attrs.update(metadata) # write time values to hdf5 bar = utilities.statusbar('Writing Dates ...') for i in bar(range(len(datearrays))): time_attr = 'Time_' + ((5 - len(str(i + 1 + num_disp))) * '0') + str(i + 1 + num_disp) dset = times.create_dataset(time_attr, shape = (6,), data = datearrays[i],chunks=(6,), compression = 'gzip', compression_opts = compression_level) metadata = 
{'Maximum' : np.array([float(datearrays[i][0])]), 'Minimum' : np.array([-0.]), 'Units' : b'YYYY/MM/DD HH:MM:SS'} # !!! dset.attrs.update(metadata) num_disp = num_disp + current_u.shape[0] slicebegin = sliceend sliceend = sliceend + 12 count = count + 1 print(count) f.close() # - # <div id='comparison'><a name='Comparison'></a><h2>Looking at file size and time incentive for different compression levels</h2></div> # ### Obtained after wrapping cell above in loop. 12 hour chunks were extrapolated to 7 days worth of data. size_mb= [] for i in size_compression: size_mb.append(14* float(i.replace(' MB', ''))) a = float(2016) a plt.plot(range(1,10), time_compression) plt.title('Time taken (mins) to make hdf one week vs compression factor') plt.plot(range(1,10), size_mb) plt.title('Size of output file (MBs) one week vs compression factor')
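One small cleanup worth noting: every writer above pads dataset names like `Time_00001` by hand with `(5 - len(str(i + 1))) * '0'`. Python's format mini-language produces the same zero-padded names directly:

```python
# Manual padding used throughout the notebook vs. an f-string format spec
for i in range(120):
    manual = 'Time_' + ((5 - len(str(i + 1))) * '0') + str(i + 1)
    assert manual == f'Time_{i + 1:05d}'
print(f'Time_{7:05d}')  # → Time_00007
```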
mohid_hdf5_creation/explore_pytables_h5py.ipynb
# --- # jupyter: # jupytext: # split_at_heading: true # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- #|hide #|skip ! [ -e /content ] && pip install -Uqq fastai # upgrade fastai on colab # + #|default_exp vision.models.xresnet # - #|export from __future__ import annotations from fastai.torch_basics import * try: from torchvision.models.utils import load_state_dict_from_url except ModuleNotFoundError: from torch.hub import load_state_dict_from_url #|hide from nbdev.showdoc import * # # XResnet # # > Resnet from bags of tricks paper #|export def init_cnn(m): if getattr(m, 'bias', None) is not None: nn.init.constant_(m.bias, 0) if isinstance(m, (nn.Conv1d,nn.Conv2d,nn.Conv3d,nn.Linear)): nn.init.kaiming_normal_(m.weight) for l in m.children(): init_cnn(l) #|export class XResNet(nn.Sequential): @delegates(ResBlock) def __init__(self, block, expansion, layers, p=0.0, c_in=3, n_out=1000, stem_szs=(32,32,64), widen=1.0, sa=False, act_cls=defaults.activation, ndim=2, ks=3, stride=2, **kwargs): store_attr('block,expansion,act_cls,ndim,ks') if ks % 2 == 0: raise Exception('kernel size has to be odd!') stem_szs = [c_in, *stem_szs] stem = [ConvLayer(stem_szs[i], stem_szs[i+1], ks=ks, stride=stride if i==0 else 1, act_cls=act_cls, ndim=ndim) for i in range(3)] block_szs = [int(o*widen) for o in [64,128,256,512] +[256]*(len(layers)-4)] block_szs = [64//expansion] + block_szs blocks = self._make_blocks(layers, block_szs, sa, stride, **kwargs) super().__init__( *stem, MaxPool(ks=ks, stride=stride, padding=ks//2, ndim=ndim), *blocks, AdaptiveAvgPool(sz=1, ndim=ndim), Flatten(), nn.Dropout(p), nn.Linear(block_szs[-1]*expansion, n_out), ) init_cnn(self) def _make_blocks(self, layers, block_szs, sa, stride, **kwargs): return [self._make_layer(ni=block_szs[i], nf=block_szs[i+1], blocks=l, stride=1 if i==0 else stride, sa=sa and 
i==len(layers)-4, **kwargs) for i,l in enumerate(layers)] def _make_layer(self, ni, nf, blocks, stride, sa, **kwargs): return nn.Sequential( *[self.block(self.expansion, ni if i==0 else nf, nf, stride=stride if i==0 else 1, sa=sa and i==(blocks-1), act_cls=self.act_cls, ndim=self.ndim, ks=self.ks, **kwargs) for i in range(blocks)]) # + #|export def _xresnet(pretrained, expansion, layers, **kwargs): # TODO pretrain all sizes. Currently will fail with non-xrn50 url = 'https://s3.amazonaws.com/fast-ai-modelzoo/xrn50_940.pth' res = XResNet(ResBlock, expansion, layers, **kwargs) if pretrained: res.load_state_dict(load_state_dict_from_url(url, map_location='cpu')['model'], strict=False) return res def xresnet18 (pretrained=False, **kwargs): return _xresnet(pretrained, 1, [2, 2, 2, 2], **kwargs) def xresnet34 (pretrained=False, **kwargs): return _xresnet(pretrained, 1, [3, 4, 6, 3], **kwargs) def xresnet50 (pretrained=False, **kwargs): return _xresnet(pretrained, 4, [3, 4, 6, 3], **kwargs) def xresnet101(pretrained=False, **kwargs): return _xresnet(pretrained, 4, [3, 4, 23, 3], **kwargs) def xresnet152(pretrained=False, **kwargs): return _xresnet(pretrained, 4, [3, 8, 36, 3], **kwargs) def xresnet18_deep (pretrained=False, **kwargs): return _xresnet(pretrained, 1, [2,2,2,2,1,1], **kwargs) def xresnet34_deep (pretrained=False, **kwargs): return _xresnet(pretrained, 1, [3,4,6,3,1,1], **kwargs) def xresnet50_deep (pretrained=False, **kwargs): return _xresnet(pretrained, 4, [3,4,6,3,1,1], **kwargs) def xresnet18_deeper(pretrained=False, **kwargs): return _xresnet(pretrained, 1, [2,2,1,1,1,1,1,1], **kwargs) def xresnet34_deeper(pretrained=False, **kwargs): return _xresnet(pretrained, 1, [3,4,6,3,1,1,1,1], **kwargs) def xresnet50_deeper(pretrained=False, **kwargs): return _xresnet(pretrained, 4, [3,4,6,3,1,1,1,1], **kwargs) # - #|export se_kwargs1 = dict(groups=1 , reduction=16) se_kwargs2 = dict(groups=32, reduction=16) se_kwargs3 = dict(groups=32, reduction=0) g0 = [2,2,2,2] 
g1 = [3,4,6,3]
g2 = [3,4,23,3]
g3 = [3,8,36,3]

#|export
def xse_resnet18(n_out=1000, pretrained=False, **kwargs): return XResNet(SEBlock, 1, g0, n_out=n_out, **se_kwargs1, **kwargs)
def xse_resnext18(n_out=1000, pretrained=False, **kwargs): return XResNet(SEResNeXtBlock, 1, g0, n_out=n_out, **se_kwargs2, **kwargs)
def xresnext18(n_out=1000, pretrained=False, **kwargs): return XResNet(SEResNeXtBlock, 1, g0, n_out=n_out, **se_kwargs3, **kwargs)
def xse_resnet34(n_out=1000, pretrained=False, **kwargs): return XResNet(SEBlock, 1, g1, n_out=n_out, **se_kwargs1, **kwargs)
def xse_resnext34(n_out=1000, pretrained=False, **kwargs): return XResNet(SEResNeXtBlock, 1, g1, n_out=n_out, **se_kwargs2, **kwargs)
def xresnext34(n_out=1000, pretrained=False, **kwargs): return XResNet(SEResNeXtBlock, 1, g1, n_out=n_out, **se_kwargs3, **kwargs)
def xse_resnet50(n_out=1000, pretrained=False, **kwargs): return XResNet(SEBlock, 4, g1, n_out=n_out, **se_kwargs1, **kwargs)
def xse_resnext50(n_out=1000, pretrained=False, **kwargs): return XResNet(SEResNeXtBlock, 4, g1, n_out=n_out, **se_kwargs2, **kwargs)
def xresnext50(n_out=1000, pretrained=False, **kwargs): return XResNet(SEResNeXtBlock, 4, g1, n_out=n_out, **se_kwargs3, **kwargs)
def xse_resnet101(n_out=1000, pretrained=False, **kwargs): return XResNet(SEBlock, 4, g2, n_out=n_out, **se_kwargs1, **kwargs)
def xse_resnext101(n_out=1000, pretrained=False, **kwargs): return XResNet(SEResNeXtBlock, 4, g2, n_out=n_out, **se_kwargs2, **kwargs)
def xresnext101(n_out=1000, pretrained=False, **kwargs): return XResNet(SEResNeXtBlock, 4, g2, n_out=n_out, **se_kwargs3, **kwargs)
def xse_resnet152(n_out=1000, pretrained=False, **kwargs): return XResNet(SEBlock, 4, g3, n_out=n_out, **se_kwargs1, **kwargs)
# the expansion argument (4) was previously missing here, which made this call fail
def xsenet154(n_out=1000, pretrained=False, **kwargs): return XResNet(SEBlock, 4, g3, groups=64, reduction=16, p=0.2, n_out=n_out)
def xse_resnext18_deep (n_out=1000, pretrained=False, **kwargs): return XResNet(SEResNeXtBlock, 1, g0+[1,1], n_out=n_out,
**se_kwargs2, **kwargs) def xse_resnext34_deep (n_out=1000, pretrained=False, **kwargs): return XResNet(SEResNeXtBlock, 1, g1+[1,1], n_out=n_out, **se_kwargs2, **kwargs) def xse_resnext50_deep (n_out=1000, pretrained=False, **kwargs): return XResNet(SEResNeXtBlock, 4, g1+[1,1], n_out=n_out, **se_kwargs2, **kwargs) def xse_resnext18_deeper(n_out=1000, pretrained=False, **kwargs): return XResNet(SEResNeXtBlock, 1, [2,2,1,1,1,1,1,1], n_out=n_out, **se_kwargs2, **kwargs) def xse_resnext34_deeper(n_out=1000, pretrained=False, **kwargs): return XResNet(SEResNeXtBlock, 1, [3,4,4,2,2,1,1,1], n_out=n_out, **se_kwargs2, **kwargs) def xse_resnext50_deeper(n_out=1000, pretrained=False, **kwargs): return XResNet(SEResNeXtBlock, 4, [3,4,4,2,2,1,1,1], n_out=n_out, **se_kwargs2, **kwargs) tst = xse_resnext18() x = torch.randn(64, 3, 128, 128) y = tst(x) tst = xresnext18() x = torch.randn(64, 3, 128, 128) y = tst(x) tst = xse_resnet50() x = torch.randn(8, 3, 64, 64) y = tst(x) tst = xresnet18(ndim=1, c_in=1, ks=15) x = torch.randn(64, 1, 128) y = tst(x) tst = xresnext50(ndim=1, c_in=2, ks=31, stride=4) x = torch.randn(8, 2, 128) y = tst(x) tst = xresnet18(ndim=3, c_in=3, ks=3) x = torch.randn(8, 3, 32, 32, 32) y = tst(x) # ## Export - #|hide from nbdev.export import * notebook2script()
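# A quick way to see what the layer lists above encode: with the classic ResNet counting (one stem conv, 2 convs per basic block or 3 per bottleneck block, one final linear layer) the totals recover the number in each model name. The xresnet stem actually uses 3 convs, so this checks the naming convention rather than the exact xresnet layer count; the helper below is illustrative only and not part of fastai.

```python
# (expansion, per-stage block counts) for the named variants above
variants = {
    18:  (1, [2, 2, 2, 2]),
    34:  (1, [3, 4, 6, 3]),
    50:  (4, [3, 4, 6, 3]),
    101: (4, [3, 4, 23, 3]),
    152: (4, [3, 8, 36, 3]),
}

def classic_depth(expansion, layers):
    # 1 stem conv + residual convs + 1 final linear layer; basic blocks
    # (expansion 1) hold 2 convs each, bottleneck blocks (expansion 4) hold 3
    convs_per_block = 2 if expansion == 1 else 3
    return 1 + sum(layers) * convs_per_block + 1

for name, (exp, layers) in variants.items():
    assert classic_depth(exp, layers) == name
```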
nbs/11_vision.models.xresnet.ipynb
# -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.5.3 # language: julia # name: julia-1.5 # --- # - By [lazarusA](https://lazarusa.github.io/Webpage/index.html) # + using CairoMakie, Random include("makieTheme1.jl") # don't forget to include the theme. function tα_qubit(β, ψ1, ψ2, fα, f) 2 + 2*β - cos(ψ1) - cos(ψ2) - 2*β*cos(π*fα)*cos(2*π*f + π*fα - ψ1- ψ2) end ψ1 = ψ2 = LinRange(0, 4*π, 100) z = [tα_qubit(.61, x, y, 0.2, .1) for x in ψ1, y in ψ2] fig = Figure(resolution = (470, 400)) ax = Axis(fig, aspect = 1, xlabel = "ψ1", ylabel = "ψ2") clines = contour!(ψ1, ψ2, z, colormap = :plasma, levels = 20, linewidth = 1.5) limits!(ax, 0, 4π, 0, 4π) cbar = Colorbar(fig, clines, label ="α-q", labelpadding = 0) fig[1, 1] = ax fig[1, 2] = cbar #save("./results/FigContour1.svg", scene, pt_per_unit = 0.7); save("./results/FigContour1.png", fig, px_per_unit = 2) fig
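# The rest of this collection is Python, so here is a direct Python translation of the Julia potential `tα_qubit` for spot-checking values (the zero-potential identity at ψ1 = ψ2 = 0 with fα = f = 0 is my own simplification, obtained by substituting into the formula: 2 + 2β − 1 − 1 − 2β = 0).

```python
import math

# Python translation of the Julia function above; parameter names mirror it.
def t_alpha_qubit(beta, psi1, psi2, f_alpha, f):
    return (2 + 2 * beta - math.cos(psi1) - math.cos(psi2)
            - 2 * beta * math.cos(math.pi * f_alpha)
              * math.cos(2 * math.pi * f + math.pi * f_alpha - psi1 - psi2))

# vanishes for any beta at the origin with zero frustration
print(t_alpha_qubit(0.61, 0.0, 0.0, 0.0, 0.0))
```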
FigContour1.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.9.7 ('django') # language: python # name: python3 # --- # # How to record polymer structure data # ## Run the following cell # - Followings are example mol texts #load module for jupyter import os import sys module_path = os.path.abspath(os.path.join('..')) if module_path not in sys.path: sys.path.append(module_path) from PolyMolParser.dict_parse import parse_mol_text # #%load_ext autoreload # #%autoreload 2 # + text=""" block_test.cdx ChemDraw02102210232D 15 15 0 0 0 0 0 0 0 0999 V2000 -0.3935 1.5678 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 -0.8060 0.8534 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 -0.0128 1.0801 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 0.7853 0.8534 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 1.1979 1.5678 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 0.4045 1.3412 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 -1.3422 1.4804 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 1.4804 2.3430 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 0.6391 0.0414 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 0.8867 -0.9489 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 1.5335 -1.4610 0.0000 S 0 0 0 0 0 0 0 0 0 0 0 0 1.2138 -2.2215 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 1.7638 -2.8365 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 2.1415 2.8365 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 -2.1415 1.6848 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 1 2 1 0 2 3 1 0 3 4 1 0 4 5 1 0 5 6 1 0 6 1 1 0 2 7 1 0 5 8 1 0 4 9 1 0 10 11 1 0 11 12 1 0 12 13 1 0 9 10 1 0 8 14 1 0 7 15 1 0 A 7 [r] A 8 [r] A 9 [r] A 10 [r] A 13 [r]n=10 A 14 {r} A 15 {r}n=4 M END """ text=text[1:-1] # - fragment_dict=parse_mol_text(text) fragment_dict # # Parsing notes # - Results are parsed as nested dict data # - The module may automatically calculate repeating units (n) from Mn when x is used. # - Structure of dict data # - polymer block 0 # - polymer unit 0 # - graph: networkX object of molecular structure # - unit_data # - unit_info_c: random, block, ... 
# - unit_info_n: repeating numbers # - block_info_n: repeating numbers in the block # - unit_MW: molecular weight of a unit # - SMILES: SMILES expression of the unit # - repeated_MW: molecular weight of repeated units # - end_groups # - dict data of end groups # - polymer unit 1 # - ... # - repeated_MW: molecular weight of the block # - n: repeating numbers for the block (sometimes becomes unexpectedly 1, due to the program: needs to be fixed) # - polymer block 1 # - ... # - ... # - General info # - Mw, Mn, pdi # - Average molecular weight per unit or block # - parsing result (success or error) # # Other MOL text examples # + text=""" block_test.cdx ChemDraw02092214052D 11 10 0 0 0 0 0 0 0 0999 V2000 -2.1730 -0.0759 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 -1.4140 0.2476 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 -0.7544 -0.2480 0.0000 O 0 0 0 0 0 0 0 0 0 0 0 0 0.0252 0.0755 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 0.6644 -0.4196 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 1.3781 -0.0057 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 2.1021 -0.4012 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 2.8451 -0.0425 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 3.6059 -0.3615 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 -2.8326 0.4196 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 -3.6059 0.1321 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 1 2 1 0 2 3 1 0 3 4 1 0 4 5 1 0 5 6 1 0 6 7 1 0 7 8 1 0 8 9 1 0 1 10 1 0 10 11 1 0 A 1 [r] n=4 A 4 [r]c=random A 5 [r] n=3 A 8 [r] M END """ text=text[1:-1] text # + text=""" block_test.cdx ChemDraw02092218042D 18 17 0 0 0 0 0 0 0 0999 V2000 -3.7975 -1.0278 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 -3.0386 -0.7044 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 -2.3790 -1.1999 0.0000 O 0 0 0 0 0 0 0 0 0 0 0 0 -1.5994 -0.8764 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 -0.9601 -1.3715 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 -0.2464 -0.9577 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 0.4776 -1.3532 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 1.2205 -0.9944 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 1.9607 -0.1581 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 -4.4778 0.6230 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 -5.2510 0.3356 0.0000 C 0 0 0 0 
0 0 0 0 0 0 0 0 2.5824 0.3842 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 3.4041 -1.0796 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 4.0641 -0.5845 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 4.8640 -0.7863 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 5.0081 0.5604 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 5.2510 1.3715 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 4.1921 0.2305 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 1 2 1 0 2 3 1 0 3 4 1 0 4 5 1 0 5 6 1 0 6 7 1 0 7 8 1 0 8 9 1 0 1 10 1 0 10 11 1 0 9 12 1 0 12 13 1 0 13 14 1 0 14 15 1 0 15 16 1 0 16 17 1 0 14 18 1 0 A 1 [r] n=4 A 4 [r]c=random A 5 [r] n=3 A 8 [r] A 9 {r}n=10 A 10 {r} A 12 {r}n=4 A 13 [r] A 15 [r]n=5 A 16 {r}Mw=3500 M END """ text=text[1:-1] text # + # x case text=""" block_test.cdx ChemDraw02092218372D 18 17 0 0 0 0 0 0 0 0999 V2000 -4.2056 -0.1424 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 -3.4466 0.1811 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 -2.2907 -1.2188 0.0000 O 0 0 0 0 0 0 0 0 0 0 0 0 -1.1972 -0.0222 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 0.1438 -0.0428 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 0.5152 -1.1982 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 1.2043 -1.2724 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 1.7118 -0.0222 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 1.7943 1.1744 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 -4.8033 1.1990 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 -5.5766 0.9115 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 3.1147 1.1538 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 3.7337 -0.0634 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 4.5796 -1.1157 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 5.1160 -0.1460 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 5.4460 1.1157 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 5.5766 1.9398 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 4.5429 -1.9398 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 1 2 1 0 2 3 1 0 3 4 1 0 4 5 1 0 5 6 1 0 6 7 1 0 7 8 1 0 8 9 1 0 1 10 1 0 10 11 1 0 9 12 1 0 12 13 1 0 13 14 1 0 14 15 1 0 15 16 1 0 16 17 1 0 14 18 1 0 A 1 [r] x=4 A 4 [r]c=random A 5 [r] x=3 A 8 [r] A 9 {r}x=10 A 10 {r} A 12 {r}x=4 A 13 [r] A 15 [r]x=5 A 16 {r}Mn=4500 M END """ text=text[1:-1]
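# The `A <atom> ...` lines at the end of each MOL block carry the polymer annotations: a bracketed connection marker (`[r]` or `{r}`) optionally followed by a key=value pair such as `n=10`, `x=4`, `c=random`, or `Mn=4500`. The sketch below is a hypothetical, simplified reading of that format for illustration — the real parsing is done by `parse_mol_text` in PolyMolParser.

```python
import re

# matches e.g. "A 13 [r]n=10", "A 16 {r}Mn=4500", "A 8 [r]"
ANNOT = re.compile(r"^A\s+(\d+)\s+([\[{]r[\]}])\s*(?:(\w+)=(\S+))?")

def parse_annotation(line):
    m = ANNOT.match(line.strip())
    if not m:
        return None
    atom, marker, key, val = m.groups()
    return {"atom": int(atom), "marker": marker, "key": key, "value": val}

print(parse_annotation("A 13 [r]n=10"))
```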
Tutorial.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Implementing the Logistic Regression Algorithm
#
# For the loss function
#
# $
# J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\left[y^{(i)}\log(\hat p^{(i)}) + (1 - y^{(i)})\log(1 - \hat p^{(i)})\right]
# $,
# where
# $
# \hat p^{(i)} = g(X_b^{(i)}\theta) = \frac{1}{1+e^{-X_b^{(i)} \theta}}
# $,
#
# the gradient is
#
# $
# \nabla J(\theta) = \frac{1}{m}\cdot X_b^T\cdot(\sigma(X_b\theta) - y)
# $

import numpy as np


class LogisticRegression:

    def __init__(self):
        """Initialize the Logistic Regression model"""
        self.coef_ = None
        self.intercept_ = None
        self._theta = None

    def _sigmoid(self, t):
        return 1. / (1. + np.exp(-t))

    def fit(self, X_train, y_train, eta=0.01, n_iters=1e4):
        """Train the Logistic Regression model on X_train, y_train using gradient descent"""
        assert X_train.shape[0] == y_train.shape[0], \
            "the size of X_train must be equal to the size of y_train"

        def J(theta, X_b, y):
            y_hat = self._sigmoid(X_b.dot(theta))
            try:
                return - np.sum(y*np.log(y_hat) + (1-y)*np.log(1-y_hat)) / len(y)
            except Exception:
                return float('inf')

        def dJ(theta, X_b, y):
            return X_b.T.dot(self._sigmoid(X_b.dot(theta)) - y) / len(y)

        def gradient_descent(X_b, y, initial_theta, eta, n_iters=1e4, epsilon=1e-8):
            theta = initial_theta
            cur_iter = 0
            while cur_iter < n_iters:
                gradient = dJ(theta, X_b, y)
                last_theta = theta
                theta = theta - eta * gradient
                if (abs(J(theta, X_b, y) - J(last_theta, X_b, y)) < epsilon):
                    break
                cur_iter += 1
            return theta

        X_b = np.hstack([np.ones((len(X_train), 1)), X_train])
        initial_theta = np.zeros(X_b.shape[1])
        self._theta = gradient_descent(X_b, y_train, initial_theta, eta, n_iters)

        self.intercept_ = self._theta[0]
        self.coef_ = self._theta[1:]

        return self

    def predict_proba(self, X_predict):
        """Given a data set X_predict, return the vector of predicted probabilities"""
        assert self.intercept_ is not None and self.coef_ is not None, \
            "must fit before predict!"
        assert X_predict.shape[1] == len(self.coef_), \
            "the feature number of X_predict must be equal to X_train"

        X_b = np.hstack([np.ones((len(X_predict), 1)), X_predict])
        return self._sigmoid(X_b.dot(self._theta))

    def predict(self, X_predict):
        """Given a data set X_predict, return the vector of predicted labels"""
        assert self.intercept_ is not None and self.coef_ is not None, \
            "must fit before predict!"
        assert X_predict.shape[1] == len(self.coef_), \
            "the feature number of X_predict must be equal to X_train"

        proba = self.predict_proba(X_predict)
        return np.array(proba >= 0.5, dtype='int')

    def score(self, X_test, y_test):
        """Compute the accuracy of the current model on the test set X_test, y_test"""
        y_predict = self.predict(X_test)
        return np.sum(y_test == y_predict) / len(y_test)

    def __repr__(self):
        return "LogisticRegression()"

# ## Using Logistic Regression

# +
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets

iris = datasets.load_iris()

# +
X = iris.data
y = iris.target

# keep only two classes (and the first two features)
X = X[y<2,:2]
y = y[y<2]

plt.scatter(X[y==0,0], X[y==0,1], color="red")
plt.scatter(X[y==1,0], X[y==1,1], color="blue")
plt.show()

# +
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=666)
# -

log_reg = LogisticRegression()
log_reg.fit(X_train, y_train)

log_reg.score(X_test, y_test)

log_reg.predict_proba(X_test)

log_reg.predict(X_test)

y_test
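# A standalone numerical check of the analytic gradient used above: on tiny random data, the vectorized expression for $\nabla J$ should agree with central finite differences of $J$ (names below are local to this snippet).

```python
import numpy as np

rng = np.random.default_rng(0)
X_b = np.hstack([np.ones((20, 1)), rng.normal(size=(20, 2))])
y = rng.integers(0, 2, size=20).astype(float)
theta = rng.normal(size=3)

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
J = lambda th: -np.mean(y * np.log(sigmoid(X_b @ th))
                        + (1 - y) * np.log(1 - sigmoid(X_b @ th)))
dJ = X_b.T @ (sigmoid(X_b @ theta) - y) / len(y)

# central finite differences, one coordinate at a time
eps = 1e-6
num = np.array([(J(theta + eps * e) - J(theta - eps * e)) / (2 * eps)
                for e in np.eye(3)])
print(np.max(np.abs(num - dJ)))  # close to zero
```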
logistic/logistic-imp.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # MNIST Digit Recognizer (with Keras) import numpy as np # %matplotlib inline import matplotlib.pyplot as plt import matplotlib.image as mpimg # ## Load dataset from keras.datasets import mnist (X_train, y_train), (X_test, y_test) = mnist.load_data() print("Total images in training set: %d"%len(X_train)) print("Total images in test set: %d"%len(X_test)) print("Image dimension: %d X %d"%(X_train[0].shape[0],X_train[0].shape[1])) print("Max pixel val: %d"%np.max(X_train)) print("Min pixel val: %d"%np.min(X_train)) # ## Plot first 10 images fig = plt.figure() for i in range(10): ax = fig.add_subplot(2,5,i+1) ax.axis('off') ax.imshow(X_train[i],cmap="gray") # ### Image labels print(y_train[:5]) print(y_train[5:10]) # ## Preprocess data # + # All pixel values normalized between 0 and 1 normalize = lambda x: x/255.0 v_normalize = np.vectorize(normalize) X_train_processed_norm = v_normalize(X_train) X_test_processed_norm = v_normalize(X_test) # - # ### One-hot encoding: Image labels # + from keras.utils import np_utils y_train_encoded = np_utils.to_categorical(y_train, 10) y_test_encoded = np_utils.to_categorical(y_test, 10) # - # ### First 10 encoded image labels print(y_train_encoded[:10]) # ## Neural Network Architecture from sklearn.model_selection import GridSearchCV from keras.wrappers.scikit_learn import KerasClassifier from keras.models import Sequential from keras.layers import Dense, Dropout, Flatten from keras.constraints import maxnorm from keras.callbacks import ModelCheckpoint # #### Use GridSearch to tune hyper-parameters def nn_model(dropout_prob=0.0, num_layers=1): model = Sequential() model.add(Flatten(input_shape=X_train.shape[1:])) model.add(Dense(512, activation='relu')) model.add(Dropout(dropout_prob)) for i in range(1,num_layers): 
model.add(Dense(512, activation='relu')) model.add(Dropout(dropout_prob)) model.add(Dense(10, activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy']) return model # + model = KerasClassifier(build_fn=nn_model, epochs=10, batch_size=100, verbose=0) dropout_prob = [0.2,0.3,0.4] num_layers = [2,3,4] param_grid = dict(dropout_prob=dropout_prob, num_layers=num_layers) grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=1) grid_result = grid.fit(X_train_processed_norm, y_train_encoded) print("Best validation accuracy: %f using %s" % (grid_result.best_score_, grid_result.best_params_)) # - # Tune batch-size using hyper-parameters obtained above # + model = KerasClassifier(build_fn=nn_model, epochs=10, verbose=0) dropout_prob = [0.2] num_layers = [2] batch_size = [50,100,150,10000,30000] param_grid = dict(dropout_prob=dropout_prob, num_layers=num_layers, batch_size=batch_size) grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=1) grid_result = grid.fit(X_train_processed_norm, y_train_encoded) print("Best validation accuracy: %f using %s" % (grid_result.best_score_, grid_result.best_params_)) # - # Tune number of epochs using the tuned value of other hyper-parameters obtained above # + model = Sequential() model.add(Flatten(input_shape=X_train.shape[1:])) model.add(Dense(512, activation='relu')) model.add(Dropout(0.2)) model.add(Dense(512, activation='relu')) model.add(Dropout(0.2)) model.add(Dense(10, activation='softmax')) model.summary() model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy']) checkpointer = ModelCheckpoint(filepath='mnist.model.best.hdf5',verbose=1, save_best_only=True) results = model.fit(X_train_processed_norm, y_train_encoded, batch_size=150, epochs=30, validation_split=0.33, callbacks=[checkpointer], verbose=0, shuffle=True) # - # Test accuracy model.load_weights('mnist.model.best.hdf5') score = 
model.evaluate(X_test_processed_norm, y_test_encoded, verbose=0) print('Test accuracy: %f' % score[1]) # ## Convolutional Neural Network from keras.layers import Conv2D, MaxPooling2D, GlobalAveragePooling2D X_train_processed_norm = X_train_processed_norm.reshape(X_train.shape[0], X_train.shape[1], X_train.shape[2], 1) X_test_processed_norm = X_test_processed_norm.reshape(X_test.shape[0], X_test.shape[1], X_test.shape[2], 1) input_shape = (X_train.shape[1], X_train.shape[2], 1) def cnn_model(kernel_size=2, num_filters=1, num_dense_layers=1, dropout_prob=0.0): model = Sequential() model.add(Conv2D(filters=num_filters, kernel_size=kernel_size, strides=1, activation='relu',input_shape=input_shape)) model.add(MaxPooling2D(pool_size=2)) model.add(Conv2D(filters=num_filters, kernel_size=kernel_size, strides=1, activation='relu')) model.add(MaxPooling2D(pool_size=2)) model.add(Flatten()) model.add(Dense(512, activation='relu')) model.add(Dropout(dropout_prob)) for i in range(1,num_dense_layers): model.add(Dense(512, activation='relu')) model.add(Dropout(dropout_prob)) model.add(Dense(10, activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy']) return model # #### Use GridSearch to tune hyper-parameters # + model = KerasClassifier(build_fn=cnn_model, epochs=10, batch_size=150, verbose=0) kernel_size = [2,3] num_filters = [8,16] num_dense_layers = [2,3] dropout_prob = [0.2,0.3,0.4] param_grid = dict(kernel_size=kernel_size, num_filters=num_filters, num_dense_layers=num_dense_layers, dropout_prob=dropout_prob) grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=1) grid_result = grid.fit(X_train_processed_norm, y_train_encoded) print("Best validation accuracy: %f using %s" % (grid_result.best_score_, grid_result.best_params_)) # - # Tune number of epochs using the tuned value of other hyper-parameters obtained above # + model = Sequential() model.add(Conv2D(filters=16, kernel_size=3, strides=1, 
activation='relu',input_shape=input_shape)) model.add(MaxPooling2D(pool_size=2)) model.add(Conv2D(filters=16, kernel_size=3, strides=1, activation='relu')) model.add(MaxPooling2D(pool_size=2)) model.add(Flatten()) model.add(Dense(512, activation='relu')) model.add(Dropout(0.2)) model.add(Dense(512, activation='relu')) model.add(Dropout(0.2)) model.add(Dense(10, activation='softmax')) model.summary() model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy']) checkpointer = ModelCheckpoint(filepath='mnist.cnn_model.best.hdf5',verbose=1, save_best_only=True) results = model.fit(X_train_processed_norm, y_train_encoded, batch_size=150, epochs=30, validation_split=0.33, callbacks=[checkpointer], verbose=0, shuffle=True) # - model.load_weights('mnist.cnn_model.best.hdf5') score = model.evaluate(X_test_processed_norm, y_test_encoded, verbose=0) print('Test accuracy: %f' % score[1])
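# The flatten size feeding the first dense layer of the CNN above can be checked by hand: a 'valid' convolution shrinks each side by (kernel − 1), and 2×2 max pooling floors the halved size, so 28 → 26 → 13 → 11 → 5 with 16 filters gives a flattened length of 5·5·16 = 400.

```python
def conv_out(side, kernel, stride=1):
    # 'valid' convolution output size along one dimension
    return (side - kernel) // stride + 1

def pool_out(side, pool=2):
    # non-overlapping max pooling floors the division
    return side // pool

side = 28                            # MNIST images are 28x28
side = pool_out(conv_out(side, 3))   # conv k=3 -> 26, pool -> 13
side = pool_out(conv_out(side, 3))   # conv k=3 -> 11, pool -> 5
print(side, side * side * 16)        # 5 400
```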
mnist.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- from pathlib import Path import rdflib from prov.serializers.provxml import ProvXMLSerializer # #!git clone https://github.com/common-workflow-language/cwlprov prov_dir = Path("cwlprov") / "examples" / "revsort-run-1" / "metadata" / "provenance" # The RDF serialization should be the easiest to query with SPARQL prov_path = prov_dir / "primary.cwlprov.nt" g = rdflib.Graph() g.bind("wfprov", "http://purl.org/wf4ever/wfprov#") g.parse(prov_path) query = """\ SELECT ?run WHERE { ?run a wfprov:WorkflowRun . } """ res = g.query(query) for row in res: print(row) # We'd like to extend the query to find out when the workflow run started and ended. This information is available through `was{Started,Ended}By` (with `WorkflowEngine` as the subject), but it's not included in the RDF serialization. It is included in the XML file, but querying that with the `prov` library is more contrived. For instance, to find the `WorkflowRun`: # + from prov.serializers.provxml import ProvXMLSerializer WFPROV = "http://purl.org/wf4ever/wfprov#" PROV = "http://www.w3.org/ns/prov#" prov_path = prov_dir / "primary.cwlprov.xml" with open(prov_path) as f: doc = ProvXMLSerializer().deserialize(f) for r in doc.get_records(): for k, v in r.attributes: if k.uri == f"{PROV}type" and v.uri == f"{WFPROV}WorkflowRun": print(r)
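# For reference, the triple that the SPARQL query matches looks like a single N-Triples line. The sketch below (stdlib only, made-up subject URI) shows the same "find subjects typed as `wfprov:WorkflowRun`" pattern applied directly to N-Triples text; the real query above should of course be preferred when rdflib is available.

```python
import re

# a made-up example of the typing triple from primary.cwlprov.nt
triple = ('<urn:example:run1> '
          '<http://www.w3.org/1999/02/22-rdf-syntax-ns#type> '
          '<http://purl.org/wf4ever/wfprov#WorkflowRun> .')

NT = re.compile(
    r'^<([^>]+)>\s+<http://www\.w3\.org/1999/02/22-rdf-syntax-ns#type>\s+'
    r'<http://purl\.org/wf4ever/wfprov#WorkflowRun>\s+\.$')

def workflow_runs(nt_text):
    # collect the subject URI of every wfprov:WorkflowRun typing triple
    return [m.group(1) for line in nt_text.splitlines()
            if (m := NT.match(line.strip()))]

print(workflow_runs(triple))  # ['urn:example:run1']
```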
tools/cwlprov_to_crate/read_cwlprov.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
class linear_model:
    def __init__(self, X, y):
        self.X = X
        # prepend a column of ones for the intercept term
        self.X = np.append(arr = np.ones((self.X.shape[0],1)).astype(int), values = self.X, axis = 1)
        self.theta = np.zeros((self.X.shape[1],1))
        self.y = np.atleast_2d(y)
        print(y.shape)

    def fit(self):
        print(self.theta)
        self.grad_desc()

    def grad_desc(self):
        # batch gradient descent; 0.1 is the learning rate and 40 the number
        # of training samples (hardcoded for this dataset)
        for i in range(7000):
            self.theta = self.theta - 0.1/40*(np.transpose(np.transpose((self.X @ self.theta) - self.y ) @ self.X))
        print(self.theta)

    def predict(self, X):
        pass


# Data Preprocessing Template

# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

# Importing the dataset
dataset = pd.read_csv('50_Startups.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1:].values

# Encoding the categorical "State" column.
# OneHotEncoder(categorical_features=...) was removed from scikit-learn;
# ColumnTransformer reproduces the old behaviour (encoded columns first).
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
from sklearn.compose import ColumnTransformer
labelencoder_X = LabelEncoder()
X[:, -1] = labelencoder_X.fit_transform(X[:, -1])
ct = ColumnTransformer([('state', OneHotEncoder(), [3])],
                       remainder='passthrough', sparse_threshold=0)
X = ct.fit_transform(X)

# Avoiding the dummy variable trap
X = X[:, 1:]

# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)

from sklearn.preprocessing import StandardScaler
sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train)
X_test = sc_X.transform(X_test)
sc_y = StandardScaler()
y_train = sc_y.fit_transform(y_train)

print(X_train)

lin_mod = linear_model(X_train, y_train)
lin_mod.fit()
# -
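# The hand-rolled gradient-descent update can be sanity-checked against the closed-form least-squares solution: on synthetic data (standalone snippet, same step size and iteration count as above), the iterated `theta` should agree with `np.linalg.lstsq`.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.hstack([np.ones((40, 1)), rng.normal(size=(40, 2))])
true_theta = np.array([[2.0], [-1.0], [0.5]])
y = X @ true_theta

# same update rule as grad_desc above: step 0.1 / 40, 7000 iterations
theta = np.zeros((3, 1))
for _ in range(7000):
    theta -= 0.1 / 40 * (X.T @ (X @ theta - y))

closed_form, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.max(np.abs(theta - closed_form)))  # close to zero
```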
Part 2 - Regression/Section 5 - Multiple Linear Regression/.ipynb_checkpoints/practice-checkpoint.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

import ctypes

import numpy as np

libc = ctypes.CDLL("./kepler.so")

n, t0, e, a = 10, 0, 0.1, 1
t = np.linspace(0, 10, 1000)

# wrap the scalar arguments in ctypes types matching the C signature
n = ctypes.c_double(n)
t0 = ctypes.c_double(t0)
e = ctypes.c_double(e)
a = ctypes.c_double(a)
m = ctypes.c_int(len(t))
r, f = np.zeros(len(t)), np.zeros(len(t))

# +
#t = np.linspace(0, 10, 60)
#arraytype = (ctypes.c_double * len(t)) * 2
#libc.solve_kepler_array.restype = ctypes.POINTER(arraytype)
#t = (ctypes.c_double * len(t))(*t)

# +
# #%time res = libc.solve_kepler_array(t, n, t0, e, a)
#res = np.array(res.contents)
# -

from numpy.ctypeslib import ndpointer
#libc.solve_kepler_array.restype = ndpointer(dtype=ctypes.c_double, ndim=2, shape=(2, len(t)))

# the C routine writes into r and f in place, so it has no useful return value
libc.solve_kepler_array.restype = None

t = (ctypes.c_double * len(t))(*t)
r = (ctypes.c_double * len(r))(*r)
f = (ctypes.c_double * len(f))(*f)

# %time libc.solve_kepler_array(r, f, t, n, t0, e, a, m)

import matplotlib.pyplot as pl

pl.plot(r)

i, w = 90 / 2 / np.pi, 10 / 2 / np.pi
r, f = np.array(r), np.array(f)
x = -r * np.cos(w + f)
y = -r * np.sin(w + f)*np.cos(i)
z = r * np.sin(w + f)*np.sin(i)

pl.plot(x, y)
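# As a pure-Python reference for what `solve_kepler_array` presumably computes (assumption: the C routine solves Kepler's equation M = E − e·sin(E) at each time sample), here is a Newton iteration for the eccentric anomaly; this is my own sketch, not the C implementation.

```python
import math

def solve_kepler(M, e, tol=1e-12):
    """Solve Kepler's equation M = E - e*sin(E) for E by Newton's method."""
    E = M if e < 0.8 else math.pi  # standard starting guess
    for _ in range(100):
        dE = (E - e * math.sin(E) - M) / (1 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

E = solve_kepler(1.0, 0.1)
print(E - 0.1 * math.sin(E))  # recovers the mean anomaly, 1.0
```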
dynamoon/c_src/Untitled.ipynb
# # GEE score tests
#
# This notebook uses simulation to demonstrate robust GEE score tests. These tests can be used in a GEE analysis to compare nested hypotheses about the mean structure. The tests are robust to misspecification of the working correlation model, and to certain forms of misspecification of the variance structure (e.g. as captured by the scale parameter in a quasi-Poisson analysis).
#
# The data are simulated as clusters, where there is dependence within but not between clusters. The cluster-wise dependence is induced using a copula approach. The data marginally follow a negative binomial (gamma/Poisson) mixture.
#
# The level and power of the tests are considered below to assess the performance of the tests.

import pandas as pd
import numpy as np
from scipy.stats.distributions import norm, poisson
import statsmodels.api as sm
import matplotlib.pyplot as plt

# The function defined in the following cell uses a copula approach to simulate correlated random values that marginally follow a negative binomial distribution. The input parameter `u` is an array of values in (0, 1). The elements of `u` must be marginally uniformly distributed on (0, 1). Correlation in `u` will induce correlations in the returned negative binomial values. The array parameter `mu` gives the marginal means, and the scalar parameter `scale` defines the mean/variance relationship (the variance is `scale` times the mean). The lengths of `u` and `mu` must be the same.

def negbinom(u, mu, scale):
    p = (scale - 1) / scale
    r = mu * (1 - p) / p
    x = np.random.gamma(r, p / (1 - p), len(u))
    return poisson.ppf(u, mu=x)

# Below are some parameters that govern the data used in the simulation.
# + # Sample size n = 1000 # Number of covariates (including intercept) in the alternative hypothesis model p = 5 # Cluster size m = 10 # Intraclass correlation (controls strength of clustering) r = 0.5 # Group indicators grp = np.kron(np.arange(n/m), np.ones(m)) # - # The simulation uses a fixed design matrix. # Build a design matrix for the alternative (more complex) model x = np.random.normal(size=(n, p)) x[:, 0] = 1 # The null design matrix is nested in the alternative design matrix. It has rank two less than the alternative design matrix. x0 = x[:, 0:3] # The GEE score test is robust to dependence and overdispersion. Here we set the overdispersion parameter. The variance of the negative binomial distribution for each observation is equal to `scale` times its mean value. # Scale parameter for negative binomial distribution scale = 10 # In the next cell, we set up the mean structures for the null and alternative models # + # The coefficients used to define the linear predictors coeff = [[4, 0.4, -0.2], [4, 0.4, -0.2, 0, -0.04]] # The linear predictors lp = [np.dot(x0, coeff[0]), np.dot(x, coeff[1])] # The mean values mu = [np.exp(lp[0]), np.exp(lp[1])] # - # Below is a function that carries out the simulation. # hyp = 0 is the null hypothesis, hyp = 1 is the alternative hypothesis. 
# cov_struct is a statsmodels covariance structure
def dosim(hyp, cov_struct=None, mcrep=500):

    # Storage for the simulation results
    scales = [[], []]

    # P-values from the score test
    pv = []

    # Monte Carlo loop
    for k in range(mcrep):

        # Generate random "probability points" u that are uniformly
        # distributed, and correlated within clusters
        z = np.random.normal(size=n)
        u = np.random.normal(size=n//m)
        u = np.kron(u, np.ones(m))
        z = r*z + np.sqrt(1-r**2)*u
        u = norm.cdf(z)

        # Generate the observed responses
        y = negbinom(u, mu=mu[hyp], scale=scale)

        # Fit the null model
        m0 = sm.GEE(y, x0, groups=grp, cov_struct=cov_struct, family=sm.families.Poisson())
        r0 = m0.fit(scale='X2')
        scales[0].append(r0.scale)

        # Fit the alternative model
        m1 = sm.GEE(y, x, groups=grp, cov_struct=cov_struct, family=sm.families.Poisson())
        r1 = m1.fit(scale='X2')
        scales[1].append(r1.scale)

        # Carry out the score test
        st = m1.compare_score_test(r0)
        pv.append(st["p-value"])

    pv = np.asarray(pv)
    rslt = [np.mean(pv), np.mean(pv < 0.1)]

    return rslt, scales

# Run the simulation using the independence working covariance structure. We expect the mean p-value to be around 0.5 under the null hypothesis (p-values are uniform under the null), and much lower under the alternative hypothesis. Similarly, we expect that under the null hypothesis, around 10% of the p-values are less than 0.1, and a much greater fraction of the p-values are less than 0.1 under the alternative hypothesis.

# +
rslt, scales = [], []

for hyp in 0, 1:
    s, t = dosim(hyp, sm.cov_struct.Independence())
    rslt.append(s)
    scales.append(t)

rslt = pd.DataFrame(rslt, index=["H0", "H1"], columns=["Mean", "Prop(p<0.1)"])

print(rslt)
# -

# Next we check to make sure that the scale parameter estimates are reasonable. We are assessing the robustness of the GEE score test to dependence and overdispersion, so here we are confirming that the overdispersion is present as expected.
_ = plt.boxplot([scales[0][0], scales[0][1], scales[1][0], scales[1][1]]) plt.ylabel("Estimated scale") # Next we conduct the same analysis using an exchangeable working correlation model. Note that this will be slower than the example above using independent working correlation, so we use fewer Monte Carlo repetitions. # + rslt, scales = [], [] for hyp in 0, 1: s, t = dosim(hyp, sm.cov_struct.Exchangeable(), mcrep=100) rslt.append(s) scales.append(t) rslt = pd.DataFrame(rslt, index=["H0", "H1"], columns=["Mean", "Prop(p<0.1)"]) print(rslt)
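# The copula trick inside `dosim` can be illustrated on its own: mixing a shared cluster-level normal into iid normals with weights r and sqrt(1 − r²) keeps the marginal distribution standard normal (the variance is r² + (1 − r²) = 1), so the normal CDF maps the values to marginally uniform, within-cluster-correlated "probability points". The standalone check below uses `math.erf` in place of scipy's `norm.cdf` to stay dependency-light.

```python
import math
import numpy as np

rng = np.random.default_rng(123)
n, m, r = 10000, 10, 0.5

z = rng.normal(size=n)                            # iid component
u = np.kron(rng.normal(size=n // m), np.ones(m))  # shared within each cluster
z = r * z + math.sqrt(1 - r**2) * u               # still marginally N(0, 1)

# normal CDF via the error function: Phi(x) = (1 + erf(x / sqrt(2))) / 2
p = np.array([0.5 * (1 + math.erf(v / math.sqrt(2))) for v in z])
print(p.mean())  # close to 0.5: marginally uniform on (0, 1)
```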
v0.12.2/examples/notebooks/generated/gee_score_test_simulation.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # Name: <NAME> # Student ID: 12632864 import pandas as pd import os import csv from data_processing import * from transformations import * from visualizations import * from bokeh.io import output_notebook, show output_notebook() # - # # Introduction # The topic of this project is the relationship between countries' performance in the Olympics and their economic affluency. There has been an increasing interest in the relationship between countries' performance in the Olympics and their economical situation [(Takongmo & Yao, 2017)](https://economie.esg.uqam.ca/wp-content/uploads/sites/54/2017/09/Mao-Takongmo-Charle-O-Mai2017-1.pdf), so I believe this project will contribute to the current body of knowledge on the topic. Moreover, this subject is of special interest to me, since I have always been interested in sports, practicing gymnastics, and coming from a country (Greece) that does not do well economically and is not very supportive of professional athletes (in terms of grants, infrastracture, facilites, compensation, etc.). # # ### Main research question: # * Is there a relationship between performance in the Olympics and economic affluency? # # ### How the variables are measured: # * **Performance in the Olympics** : Total number of medals obtained by country # * **Economic affluency** : Gross Domestic Product (GDP) per capita. # * The values are in US dollars and adjusted for inflation. This means that the values are comparable with each other, regardless of the year, or the currency of the country. # # ### Hypothesis: # * It is hypothesized that there will be a positive relationship between performance in the Olympics and GDP, meaning that richer countries will tend to do better in the Olympics. 
Note that this is a merely descriptive question, so no causal connection can be inferred when only investigating relationships. # ### -------------------------------------------------------------------------------------------------------------------------------------------------------------- # # Refining the research question # * To provide a well-rounded answer to the research question, two subquestions are formulated: # # ## Subquestion 1: # * **(a)** Is there a relationship between countries' medalists and their GDP per capita? # * **(b)** Is this relationship different if the number of medalists is adjusted for each country's population? # * *Justification*: It is conceivable that countries with a large number of citizens have more athletes that win medals. What would the number of medals be if all countries had the same number of citizens? Adjusting for population size allows us to compare each country's medalists without the influence of population. This is done by using the following formula: **Adjusted medalists = (Total medalists * 10^7) / population size**, which gives the number of medalists for every 10 million citizens for every country. # * **(c)** Can there be meaningful groups (clusters) of countries? # * (e.g. countries that are very rich but perform poorly, or countries that are poor but perform great, etc.) # # ## Subquestion 2: # * **(a)** What is the relationship between the number of medalists and GDP per capita for any two specific countries? # # * *Justification*: Visualizing a relationship might reveal a general trend in the data, but it does not allow us to compare countries and get more focused information. It is possible that not all countries follow the general trend, so by comparing specific countries with each other, one can get a more complete picture of the relationship. # # * **(b)** Is this relationship different if the number of medalists is adjusted for each country's population? 
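A minimal sketch of this population adjustment, using made-up numbers and hypothetical column names (not the notebook's actual schema):

```python
import pandas as pd

# Hypothetical per-country totals; the real notebook derives these
# from the Kaggle and Gapminder datasets.
df = pd.DataFrame({
    "country": ["USA", "Jamaica"],
    "total_medalists": [5000, 120],
    "population": [300_000_000, 2_800_000],
})

# Adjusted medalists = (Total medalists * 10^7) / population size,
# i.e. the number of medalists per 10 million citizens.
df["medalists_per_10m"] = df["total_medalists"] * 10**7 / df["population"]
print(df)
```

A small country with few medalists can end up above a large country with many once both are put on the per-10-million scale.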
# ### -------------------------------------------------------------------------------------------------------------------------------------------------------------- # # General description of datasets: # # ### For this project I used 4 different datasets from 2 different sources: # * From [kaggle](https://www.kaggle.com/) I downloaded the **Olympics** and the **noc_regions** datasets. # * The **Olympics** dataset contains hundreds of thousands of entries, representing each athlete that took part in the Olympics since 1896. The columns refer to several of the athletes' characteristics (for example, age, NOC, medal, year, sport, height, etc.). The column 'NOC' contains the abbreviation of the athlete's country, so I used the noc_regions dataset to match this abbreviation with the full name of the country. # * The **noc_regions** dataset contains the abbreviations and the full name of the countries, along with some (unimportant for this project) notes. # * From [gapminder](https://www.gapminder.org/data/) I downloaded the **population_total** and the **gdppercapita_us_inflation_adjusted** datasets. # * Each entry in the original **populations** dataset represents a year since 1800 and the columns refer to the countries. The values correspond to the population of each country for a given year. This dataset was combined with an edited version of the olympics dataset (grouped by country) to calculate the number of medalists adjusted for population size. # * The original **gdp** dataset has the same structure as the populations dataset, with the difference that the years start from 1960. The values represent the gdp per capita of each country for a given year. This dataset was also combined with the edited version of the olympics dataset (grouped by country). # * For the purposes of the project I needed **one value** for each country's GDP and population. 
As a result, I used the average for both measures, that is, the average GDP of each country over the years, and the average population over the years. This allowed me to perform all necessary computations and visualize the data properly, without distorting them much. # # ### All datasets were downloaded as .csv files and read into the program afterwards. # ### The pipeline of the project can be divided into 4 phases: # 1. Basic processing # 2. Transformations # 3. Saving the transformed datasets # 4. Visualizations # ### -------------------------------------------------------------------------------------------------------------------------------------------------------------- # # 1. Basic processing of the data # ###### In this phase I read the data # * Firstly, I read the Olympics and the noc_regions datasets into the program with pandas' built-in read_csv function. # * I also add the full name of the countries in the Olympics dataset based on the noc_regions dataset (*merge_dfs* function), after having made sure all the abbreviations in the two datasets match (*match_noc_regions* function). # * The population and gdp datasets are read with the function *read_extra_files*, so that they are reshaped appropriately for further transformations. # + # Read files olympics = pd.read_csv("olympics.csv") noc = pd.read_csv("noc_regions.csv") gdp = read_extra_files("gdppercapita_us_inflation_adjusted.csv") pop = read_extra_files("population_total.csv") # - # Create a new df 'merged' with the Olympics dataset that now contains the full name of the countries olympics, noc = match_noc_regions(olympics, noc) merged = merge_dfs(olympics, noc) # # 2. Transformations # ###### In this phase I perform all necessary steps so that the data are in their final version for the first visualization. # * Firstly, with the *clean_data* function I remove some unnecessary columns and I replace missing values in some (irrelevant for the project) columns, so that no important data are lost. 
# * With the *dummies_for_medals* function I add dummy variables for the medals, so that it is easier to compute the total number of medals. This means that I create separate columns for each category in the *medal* column (Gold, Silver, Bronze, none). When an entry belongs to a given category, the corresponding column gets the value 1, and the rest get 0. For more on dummy variables you can look [here](https://en.wikipedia.org/wiki/Dummy_variable_(statistics)). # * With the *match_countries* function I make sure that the countries are written in the same way across the datasets, and I include only the countries that are in all datasets (otherwise I could not plot them). # * With the *final_transformation* function I create a final df with all necessary information from all datasets, while excluding irrelevant information. This dataframe is used for the first visualization. # Clean the data data = clean_data(merged) # Add dummies for the medals column data = dummies_for_medals(data) # + # Match the countries in the gdp and pop dfs, so they match the Olympics df gdp, countries_in_gdp_and_ol = match_countries(gdp, data) pop, countries_in_pop_and_ol = match_countries(pop, data) # A list with the common countries across the 3 dataframes common_countries = sorted(list(countries_in_gdp_and_ol.intersection(countries_in_pop_and_ol))) # - # Create the final dataframe, on which the first visualization is based final = final_transformation(data, gdp, pop) # This is what the final dataframe looks like pd.options.display.float_format = '{:.2f}'.format final.head(2) # # 3. Saving the transformed datasets # ###### In this phase I create a copy of the transformed datasets (as is specified in the assignment instructions). This is also useful later, when I create the visualizations, because I can simply import the files without having to perform all previous steps. 
# * In addition to the datasets, I also export *common_countries*, which is a file with all the common countries. This is used in the second visualization, so the user can select which two countries to compare. # Export the data path = os.getcwd() # get the current location data.to_csv(os.path.join(path, 'data.csv'), index=False, header=True) # export the olympics df gdp.to_csv(os.path.join(path, 'gdp.csv'), index=True, header=True) # export the gdp df pop.to_csv(os.path.join(path, 'pop.csv'), index=True, header=True) # export the population df with open('countries.csv', "w") as output: # export the common countries writer = csv.writer(output, lineterminator='\n') for val in common_countries: writer.writerow([val]) final.to_csv(os.path.join(path, 'final.csv'), index=False, header=True) # export the final dataframe # # 4. Visualizations # ###### In this phase I create the visualizations with which I aim to answer the research questions. # ## Visualization to answer subquestion 1: # * Relationship between number of medalists and GDP # * The user can select which metric to visualize for the medalists: # 1. Raw total (N of medalists (raw)) # 2. Adjusted for population (N of medalists (per 10m)) # * The user can also hover over the data points to see the name of the country and the relevant values relationship = visualize_relationship() show(relationship) # ### Interpretation of the visualization: # * **(a)** It seems that there is **a negligible positive relationship** between countries' economic affluence and their performance in the Olympics: # * The correlation coefficient is only 0.22 when the metric is the raw total of medalists, and 0.36 when the number of medalists is adjusted for population size. In both cases, this implies a negligible to weak positive correlation. # * **(b)** Interestingly, the relationship seems to get slightly stronger when the number of medalists is adjusted for population size. 
Even more interesting is the fact that the performance of specific countries seems to change significantly when the metric for the medalists is changed (their position on the y axis changes as a result of adjusting for population size). # * For example, the USA is the country with the greatest number of medalists (raw total), so it is the highest data point on the y axis. However, when the metric is changed (adjusted for population size), the USA moves to slightly above the bottom of the y axis, which means that considering its population size, it does not have that many medalists. By contrast, the country that now has the most medalists is the Bahamas, a small country that seems to produce a lot of medalists, given its population. # * **(c)** Regarding the groups of countries, I implemented **k-means clustering** from the **sklearn library** to visualize the different groups. K-means clustering forms the clusters by placing several centers in the data space and assigning each data point to its nearest center; the centers are then moved to the mean of the points assigned to them, and the process repeats until the assignments stabilize. The function uses as many clusters as the user specifies. The final position of the centers depends on where they start (k-means is not deterministic), which is why, by default, sklearn runs the algorithm several times from different starting centers and keeps the best solution, so the same clusters are obtained across runs. I implemented k-means by following the instructions on the sklearn library. For more information you can look [here](https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html). # * The visualization shows that there are 3 different groups of countries. The first group contains the greatest number of countries, which are the ones with the smaller GDP. 
Most of these countries have no or very few medalists, but there are a few that stand out for their relatively large number of medalists, like *Hungary, Jamaica, Romania, Bulgaria, Cuba, Uruguay, and Serbia*. The second group contains fewer and richer countries, which generally have more medalists. Finally, the third group is made up of 2 countries that are very rich, but perform very poorly in the Olympics. Looking at their names (Monaco and Liechtenstein), this is not very surprising, since they are very small countries, so they probably have few athletes competing in the Olympics. # ### Evaluating the performance of k-means clustering: # * One issue when performing k-means clustering is how to decide on how many clusters to use. There are several methods to assess how representative the number of clusters is, of which I used the average silhouette and the elbow method. # * The **average silhouette score** takes values from -1 to 1 for a specific number of clusters. The closer to 1 the value is, the more representative the number of clusters. With the *kmeans_silhouette* function I visualize the performance for different numbers of clusters. The code is largely based on the documentation of the method from the sklearn library and can be viewed [here](https://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_silhouette_analysis.html). # * The **elbow method** shows the Within-Cluster Sum of Squares (WCSS), which represents the variance of the data points from their center. Naturally, the fewer the clusters the greater the variance will be, so WCSS gets lower as the number of clusters increases. This method suggests that the optimal number of clusters is the one after which the WCSS begins to level off. For this visualization I used the inertia_ attribute of the sklearn KMeans object to calculate the WCSS when doing k-means. 
More information can be found [here](https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html) and [here](https://en.wikipedia.org/wiki/K-means_clustering#Description). kmeans_silhouette('mean_gdp', 'N of medalists (raw)') kmeans_elbow_method('mean_gdp', 'N of medalists (raw)') # ### Interpretation of the evaluation: # * Even though the average silhouette score suggests that the best number of clusters is 4, **I opted for 3 clusters**. This is because the extra group that was added did not seem to lead to useful information about the groups. Moreover, its score was only 0.006 points better than that of 3 clusters. # * I believe that such decisions must always be informed by the relevant statistics, but they should always be evaluated for their meaningfulness and not be applied blindly. In this case, I believe it is justified to use 3 clusters instead of 4, especially since the elbow method also suggests that using 3 clusters is a good decision (the line levels off after 3 clusters). # ## Visualization to answer subquestion 2: # * Relationship between any two particular countries # * The user can select which two countries to compare # * The user can also hover over the plots and see the particular values for medalists, year, GDP, etc. p = visualize_performances() show(p) # ### Interpretation of the visualization: # * **(a)** The exact relationship differs depending on which countries are visualized. # * The left plot shows how the two countries compare on the number of medalists over the years: # * The continuous lines represent the raw total number of medalists # * The dashed lines represent the number of medalists adjusted for population size # * The right plot shows how the two countries compare on GDP per capita over the years # * **(b)** For many countries, it seems that the relationship differs depending on the metric for medalists. 
# * For example, the USA has a lot more medalists in total (raw) than Jamaica, but if we take into account these countries' population, Jamaica far exceeds the USA in their number of medalists, especially after 1980, even though it is a much poorer country, as is depicted on the right plot. # * Moreover, by comparing the two line charts, we can see the relationship between performance in the Olympics and GDP. For some countries, like the USA, as the line showing GDP rises, so does the line showing the total number of medalists. Other countries follow different patterns. The line showing GDP might rise, while the line showing the total number of medalists might remain stable (Norway), or even decline (Austria). # ## Conclusion: # ### This great variability in the dataset shows why the relationship between performance in the Olympics and economic affluence is so small and also highlights that even though there might be a (negligible) general trend in the data, a lot of individual data points (countries) do not follow that trend. # # ### --------------------------------------------------------------------------------------------------------------------------------------------------------------
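The clustering and its evaluation described above can be sketched roughly as follows. This is a self-contained illustration on synthetic two-dimensional blobs standing in for the (mean GDP, medalists) pairs; the notebook's own helper functions (kmeans_silhouette, kmeans_elbow_method) are not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Three well-separated synthetic blobs standing in for the country data.
X = np.vstack([
    rng.normal(loc=(1.0, 1.0), scale=0.2, size=(40, 2)),
    rng.normal(loc=(5.0, 4.0), scale=0.2, size=(40, 2)),
    rng.normal(loc=(9.0, 1.0), scale=0.2, size=(40, 2)),
])

# For each candidate number of clusters, report the average silhouette
# score and the within-cluster sum of squares (inertia_, used by the
# elbow method).
scores = {}
for k in range(2, 6):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    scores[k] = silhouette_score(X, km.labels_)
    print(k, round(scores[k], 3), round(km.inertia_, 1))
```

On data with three clear groups, the silhouette score peaks at k = 3 and the inertia curve shows its elbow there, mirroring the decision made above.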
.ipynb_checkpoints/Test-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # <span style="align: right; direction: rtl; float: right; clear: both;">Exercises</span> # ### <span style="text-align: right; direction: rtl; float: right; clear: both;">group_by</span> # <p style="text-align: right; direction: rtl; float: right; clear: both;"> # Write a function named <var>group_by</var> that takes a function as its first parameter, and an iterable as its second parameter.<br> # The function returns a dictionary, in which: # </p> # <ul style="text-align: right; direction: rtl; float: right; clear: both;"> # <li>The keys are the values returned by the function passed as the first parameter.</li> # <li>The value matching a given key is a list of all the items for which that key value was returned.</li> # </ul> # <p style="text-align: right; direction: rtl; float: right; clear: both;"> # For example, for the call <code dir="ltr" style="direction: ltr; text-align: left;">group_by(len, ["hi", "bye", "yo", "try"])</code> the returned value is: <samp dir="ltr" style="direction: ltr; text-align: left;">{2: ["hi", "yo"], 3: ["bye", "try"]}</samp>. # </p> # + def group_by(fun, lst): res = {} for item in lst: res.setdefault(fun(item), []).append(item) return res group_by(len, ["hi", "bye", "yo", "try"]) # - # ### <span style="text-align: right; direction: rtl; float: right; clear: both;">zipwith</span> # <p style="text-align: right; direction: rtl; float: right; clear: both;"> # Write a function named <var>zip_with</var> that takes a function as its first parameter, and two or more iterables in the parameters that follow.<br> # The function returns a list, in which the item in the N-th position is the value returned from passing all the values in the N-th position of each of the iterables to the function. 
# </p> # <p style="text-align: right; direction: rtl; float: right; clear: both;"> # For example: # </p> # <ul style="text-align: right; direction: rtl; float: right; clear: both;"> # <li>For the call <code dir="ltr" style="direction: ltr; text-align: left;">zipwith(sum, [1, 2, 3], [4, 5, 6])</code> the returned value is: <samp dir="ltr" style="direction: ltr; text-align: left;">[5, 7, 9]</samp>.</li> # <li>For the call <code dir="ltr" style="direction: ltr; text-align: left;">zipwith(max, (5, 4), (2, 5), (6, -6))</code> the returned value is: <samp dir="ltr" style="direction: ltr; text-align: left;">[6, 5]</samp>.</li> # </ul> # <p style="text-align: right; direction: rtl; float: right; clear: both;"> # You may assume that the iterables passed to the function are of equal length. # </p> # + def zipwith(fun, *itera): return [fun(t) for t in zip(*itera)] print(zipwith(max, (5, 4), (2, 5), (6, -6))) print(zipwith(sum, [1, 2, 3], [4, 5, 6])) # - # ### <span style="text-align: right; direction: rtl; float: right; clear: both;">Remember?</span> # # <p style="text-align: right; direction: rtl; float: right; clear: both;"> # Write a function that takes a message to encrypt, and creates an encrypted image from it.<br> # Use the encryption method presented in the previous notebook. 
# </p> # + from PIL import Image def encrypt(message, file): im = Image.new('L', (len(message), 255), color=255) for i in range(len(message)): im.putpixel((i, ord(message[i])), 1) im.save(file) def decrypt(file): im = Image.open(file) message = '' for x in range(im.size[0]): for y in range(im.size[1]): if im.getpixel((x, y)) == 1: message = message + chr(y) im.close() return message message = 'WXYZ !"§ $%& /() =?* ' file = "resources/my_code.png" encrypt(message, file) print('test: ', decrypt(file) == message) # - # ### <span style="text-align: right; direction: rtl; float: right; clear: both;">Set</span> # <p style="text-align: right; direction: rtl; float: right; clear: both;"> # Learn the rules of the game Set, <a href="https://he.wikipedia.org/wiki/%D7%A1%D7%98_(%D7%9E%D7%A9%D7%97%D7%A7_%D7%A7%D7%9C%D7%A4%D7%99%D7%9D)">from the Wikipedia article</a> or from <a href="https://www.youtube.com/watch?v=Kc4WrL7cxeg">YouTube</a>. # </p> # <ol style="text-align: right; direction: rtl; float: right; clear: both;"> # <li>Create a Set deck.</li> # <li>Shuffle it well, and deal 12 cards on the table. Print all the sets you found.</li> # <li>Check in what percentage of the times that 12 random cards are dealt from the deck there is no set on the board.</li> # </ol> # <p style="text-align: right; direction: rtl; float: right; clear: both;"> # To compute item 3, run the check on 10,000 cases in which you dealt 12 cards from the shuffled deck. 
# </p> # + from itertools import combinations from random import shuffle def set_gen(): return [(color, shape, fill, num) for color in ['red', 'green', 'purple'] for shape in ['M', 'T', 'A'] for fill in ['full', 'empty', 'stripes'] for num in [1, 2, 3]] def play(): set_play = set_gen() shuffle(set_play) return set_play[:12] def is_series(*items): for i in range(len(items[0])): values = {item[i] for item in items} if len(values) != 1 and len(values) != len(items): return False return True def find_set(cards): return any(is_series(i, j, k) for i, j, k in combinations(cards, 3)) def statistics(m): sample = [find_set(play()) for _ in range(m)] return 100 * (sample.count(False) / len(sample)) print(statistics(10000)) # - # ### <span style="text-align: right; direction: rtl; float: right; clear: both;">100 Degrees</span> # <p style="text-align: right; direction: rtl; float: right; clear: both;"> # Write code that finds the 100 most popular songs according to Billboard's <a href="https://www.billboard.com/charts/hot-100">Hot 100</a> chart.<br> # Get the lyrics of the songs you found, and plot a graph that shows how many times each of the 100 most common words appears across all the songs. # </p> # <p style="text-align: right; direction: rtl; float: right; clear: both;"> # Bonus: perform another interesting analysis, such as which artists use the most words in their songs! 
# </p> # + # #!/usr/local/opt/python@3.8/bin/python3.8 -m pip install billboard.py # #!/usr/local/opt/python@3.8/bin/python3.8 -m pip install py-charts # #!/usr/local/opt/python@3.8/bin/python3.8 -m pip install lyricsgenius # + import billboard import lyricsgenius import pycharts genius = lyricsgenius.Genius('<KEY>') def get_songs(): chart = billboard.ChartData('hot-100')[0:100] songs=[] for song in chart: result=genius.search_song(song.title,song.artist) if result: songs.append(result.lyrics) return songs songs = get_songs() # + def clear(s): return ''.join( c for c in s if c not in '?:!/;[](),') words = [] for song in songs: song=clear(song) words.extend(song.split()) statistics={} for word in words: statistics[word] = statistics.get(word,0) + 1 sorted_statistics = {} for w in sorted(statistics, key=statistics.get, reverse=True): sorted_statistics[w] = statistics[w] print(sorted_statistics) # - chart_data = list(map(lambda x:[x[0],x[1]],sorted_statistics.items())) for i in range(0,100,10): pycharts.bar(title=f"Bar Graph Title {i} - {i+10}", data=chart_data[i:i+10])
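The word-counting step above can be written more compactly with collections.Counter; a minimal sketch, assuming the lyrics have already been fetched into a list of strings (the tiny sample_songs list here is made up for illustration):

```python
from collections import Counter

# Made-up stand-in for the scraped lyrics.
sample_songs = ["la la love you", "love me do do do"]

PUNCTUATION = '?:!/;[](),'

def top_words(songs, n=100):
    # Count every word across all songs, stripping punctuation first.
    counts = Counter()
    for song in songs:
        cleaned = "".join(c for c in song.lower() if c not in PUNCTUATION)
        counts.update(cleaned.split())
    return counts.most_common(n)

print(top_words(sample_songs, n=3))  # → [('do', 3), ('la', 2), ('love', 2)]
```

Counter.most_common already returns the words sorted by frequency, which replaces the manual sorted-dictionary construction.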
week6/5_Summary.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/Arshad221b/Handwritten-Digit-Recognition-/blob/master/Digitrecogniser.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + id="Vx46D0KVg-x7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 94} outputId="3471c416-bd33-4135-b152-81481da184e9" from keras.datasets import mnist import matplotlib.pyplot as plt # + id="CwxuK-yNlhaG" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 234} outputId="959b9914-2ed9-46bc-8de8-92d0df8ee881" (X_train, y_train), (X_test, y_test) = mnist.load_data() # + id="A83G2pNhnFfg" colab_type="code" colab={} import numpy from keras.datasets import mnist from keras.models import Sequential from keras.layers import Dense from keras.layers import Dropout from keras.layers import Conv2D, MaxPooling2D, Flatten from keras.utils import np_utils # + id="SZwXI9MvncJz" colab_type="code" colab={} seed = 7 numpy.random.seed(seed) # + id="A2roj6VvnjOW" colab_type="code" colab={} (X_train, y_train), (X_test, y_test) = mnist.load_data() # + id="HNnTfkWgnnkt" colab_type="code" colab={} # reshape to [samples][rows][cols][channels] for the convolutional layers X_train = X_train.reshape(X_train.shape[0], 28, 28, 1).astype('float32') X_test = X_test.reshape(X_test.shape[0], 28, 28, 1).astype('float32') # + id="r3mdHho6oDI1" colab_type="code" colab={} X_train = X_train / 255 X_test = X_test / 255 # + id="lnpObgI4oIeo" colab_type="code" colab={} y_train = np_utils.to_categorical(y_train) y_test = np_utils.to_categorical(y_test) num_classes = y_test.shape[1] # + id="Pvivy2o7omWg" colab_type="code" colab={} # define the CNN model def larger_model(): # create model model = Sequential() model.add(Conv2D(30, (5, 5), input_shape=(28, 28, 1), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(15, (3, 3), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.2)) model.add(Flatten()) model.add(Dense(128, activation='relu')) model.add(Dense(50, activation='relu')) model.add(Dense(num_classes, activation='softmax')) # Compile model model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) return model # + id="aU5lL7Ayo5eb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 3375} outputId="6432b661-985b-4cbe-cd88-a75d63acced6" model = larger_model() # Fit the model model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=100, batch_size=100, verbose=2) # Final evaluation of the model scores = model.evaluate(X_test, y_test, verbose=0) print("CNN Error: %.2f%%" % (100-scores[1]*100)) # + id="S48JtbmCpeMo" colab_type="code" colab={}
Digitrecogniser.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="ZN7eE5PON9rC" outputId="2c42396a-00fd-45b8-fca0-071702b3cad9" # #!pip install -U pgmpy==0.1.11 # #!pip install -U bnlearn # #!pip install openml # + id="KF3wS2nVOHqb" import openml # + id="MuSGly_ORvqq" colab={"base_uri": "https://localhost:8080/"} outputId="c5dfee5f-7fc0-4ff1-f6a3-deef774fe1be" import bnlearn import pandas as pd import json # + colab={"base_uri": "https://localhost:8080/"} id="9-vWqJwVOcGl" outputId="3218b6ff-c005-4398-a000-bd1ba6b96d45" # Banknote dataset, id 1462 dataset = openml.datasets.get_dataset(1462) # Print a summary print( f"This is dataset '{dataset.name}', the target feature is " f"'{dataset.default_target_attribute}'" ) print(f"URL: {dataset.url}") print(dataset.description[:500]) # + id="QaqvUFMKQNOY" # Conversion of OpenML datasets to pandas dataframe X, y, categorical_indicator, attribute_names = dataset.get_data( target=dataset.default_target_attribute, dataset_format="dataframe" ) combined_data = pd.concat([X, y], axis=1) # Rounding off the data and converting it to integers combined_data_discrete = round(combined_data).astype(int) # + colab={"base_uri": "https://localhost:8080/", "height": 750} id="Wxd3Q45GPwXZ" outputId="238355c9-7edb-45e9-d9a7-b6fd5f2a093f" # Bayesian network # Loading the Banknote dataset containing mixed variables df_raw = combined_data_discrete # Structure learning DAG = bnlearn.structure_learning.fit(df_raw, methodtype='hc', scoretype='bic') # Plot G = bnlearn.plot(DAG) print(df_raw) # + colab={"base_uri": "https://localhost:8080/"} id="FYjG1ZAORynn" outputId="11cee6c6-1ccc-4c2c-8f20-f01737c7643d" # Parameter learning model = bnlearn.parameter_learning.fit(DAG, combined_data_discrete) 
# + colab={"base_uri": "https://localhost:8080/"} id="SqsYrd4ZR69k" outputId="8fbe5173-c3e8-4143-fb41-4f3598629b77" # Print CPDs bnlearn.print_CPD(model) # Make inference q = bnlearn.inference.fit(model, variables=['Class'], evidence={'V1': 10, 'V2': 2}) print(q.values) print(q.variables) print(q) # + id="iG3iffDmz-jr" result = {"result": str(q)} with open("result.json", "w") as outfile: json.dump(result, outfile)
Bank note classifier/Bank_note_classifier.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="NQzpfH4HfuJ2" colab_type="code" colab={} import pandas as pd # + id="iTeev3t-fqOG" colab_type="code" colab={} df89 = pd.read_excel('https://github.com/pierretd/data-dump/blob/master/finance/topology/typology89.xls?raw=true', 'Data') # + id="RGUH4RmQf3SA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 80} outputId="ccb5d65b-8341-4c15-88a6-c45ca641329e" df89.head(1) # + [markdown] id="uVzVWNAzgdJD" colab_type="text" # Let's see which categories they both have # # MI # # GV # # RT-RET # # PV # + id="T6BBa1-Af7hI" colab_type="code" colab={} df83 = pd.read_excel('https://github.com/pierretd/data-dump/blob/master/finance/topology/types83%20(1).xls?raw=true', 'Data') # + id="POodbPiagA_F" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 80} outputId="0bc04b2f-6cf2-48ea-f3a9-3d28732d93fa" df83.head(1) # + id="e5cqSHb6gEQ0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="8e80f00d-1665-4304-e2a5-f190cfdb85b2" df89 = df89.drop('FM', axis=1) df89.head() # + id="d5eT4bIrg2c8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="11b30e3a-9c8c-4c1c-c716-d28c9e12c92e" df89 = df89.drop('MF', axis=1) df89.head() # + id="7RWm8OjShof6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="ea75d48c-bdb8-473d-b9e3-324c669e1afd" df89 = df89.drop('NS', axis=1) df89.head() # + id="As51jW4ThsKX" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="b8fa6193-601a-463d-c358-181eabcddf68" df89 = df89.drop('TS', axis=1) df89.head() # + id="hOkbOSTfhwTt" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} 
outputId="b4430c29-e560-4b8c-b40c-1b4605e1ee71" df89 = df89.drop('FL', axis=1) df89.head() # + id="re0ipDWzhzJi" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="c746e1e1-6897-4c55-8c57-2c28797d9155" df89 = df89.drop('CM', axis=1) df89.head() # + id="5WKJB1e8h2SE" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="6720b6f0-4850-4827-ee15-2e6d66a2ac73" df89 = df89.drop('TP', axis=1) df89.head() # + id="wheIBpGOh6bt" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="176a5163-3ee6-4dbe-87cd-d01bba9d6766" df89 = df89.drop('GV', axis=1) df89.head() # + id="8yBrqmY1iEFb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="d09df9c8-104e-4f2c-d8a0-42022dc5a86f" df89.head() # + id="Zl71qyvaiAVj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="ee9cd329-67d5-4d9d-926c-19111b468468" df83.head() # + id="ZFi-x-b1iSzy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 323} outputId="711a725c-fdd7-44b4-d6aa-8a7e6723df06" cols = list(df83.columns.values) cols # + id="WdQ0xRDniQxt" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="8df5c3bd-9827-41d1-8a1e-7b96fcfa4c21" df83 = df83.drop('FIPS', axis=1) df83.head() # + [markdown] id="kd8hx4X1ifPY" colab_type="text" # # 'State', # 'County', # 'RURALURB83', # 'NMET', # 'AGTP79R', # 'MFGTP79R', # '**MINTP79R**', # '**GVTTP79R**', # 'FEDTP79', # '**RETTP79**', # '**POVTP79**', # 'UNCL79', # 'AGTP86', # 'MFGTP86', # 'MINTP86', # 'GVTTP86', # 'UNCL86' # + id="FB1PoNOcisV0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="a8b187bc-d112-4d68-c144-1319438f03cc" df83 = df83.drop('RURALURB83', axis=1) df83.head() # + id="Rq1RHNplisYf" colab_type="code" colab={"base_uri": 
"https://localhost:8080/", "height": 204} outputId="e7fd0e76-bb27-4b0c-86fd-886331b26447" df83 = df83.drop( 'NMET', 1) df83.head() # + id="2F9h4LqKisdY" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="e22d26e1-00e8-435e-cd11-ec956b15a7d9" df83 = df83.drop( 'AGTP79R', 1) df83.head() # + id="rURH3flpisgU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="34736f32-64c1-47f3-9190-e4f23730e0a6" df83 = df83.drop( 'MFGTP86', 1) df83.head() # + id="LqvtZsg4isig" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="a19d3535-2f25-4333-b379-e5c0f48de00a" df83 = df83.drop( 'FEDTP79', 1) df83.head() # + id="jY4id7APislQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="294cc135-d6f8-4c5d-b89a-4336817073e0" df83 = df83.drop( 'UNCL79', 1) df83.head() # + id="1nUzvKGtisnr" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="29d57800-818d-4d9a-b088-9bb0357a4644" df83 = df83.drop( 'MINTP86', 1) df83.head() # + id="ZjCZUkh0isqK" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="f24ede00-0e78-474b-85d0-7b3f921905b7" df83 = df83.drop( 'GVTTP86', 1) df83.head() # + id="7fIKE-4tiss0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="4b264f9f-41c6-4368-e8dd-48a59e4b533e" df83.head() # + id="FTU13lybkUjj" colab_type="code" colab={} df89['Year']= '1989' df83['Year']= '1983' # + id="t1z1A7kZkUw5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="552c876e-1376-461c-874b-94b697edd990" df89.head() # + id="7ahZuGdjja78" colab_type="code" colab={} group89 = df89.groupby(by='State').mean() # + id="jL1pvWPYkJnF" colab_type="code" colab={} group83 = df83.groupby(by='State').mean() # + id="HRMH_DxwkPHT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1661} 
outputId="4fec583d-6cca-4ac1-f26b-7a36c2541225" group89['Year']= '1989' group83['Year']= '1983' group89 # + id="AfzmjH9FkQjI" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1692} outputId="a57a0fd3-7ac8-4b6d-dc7d-d19a07ed1367" group83 # + id="YcqrW-T-lNSe" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 17034} outputId="2566ab57-33f9-4d0e-f5a6-76175b0df59f" cols = list(df89.State.values) cols # + id="2LwOYi4ElZ-n" colab_type="code" colab={} df = pd.read_csv('https://raw.githubusercontent.com/pierretd/data-dump/master/finance/topology/8389%20-%20Sheet1.csv') # + id="kMXfN8A1nGDm" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="bcae4f00-3dec-4b91-a046-2e018e40d384" df.head() # + id="Dh0DXRjVnG5p" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 367} outputId="90e63a14-74d1-4f70-89a0-6c9448eb105d" df.groupby(by='State')['RETTP79'].mean().plot(kind='bar'); # + id="RK2YrOBBnPCT" colab_type="code" colab={}
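The cells above carve `df89` and `df83` down to the fields the two years share by dropping columns; the same result can be reached with a set intersection over the column names. A sketch with hypothetical toy frames (the real frames come from the .xls files above):

```python
import pandas as pd

# Hypothetical stand-ins for df89 and df83; the real frames are loaded from GitHub above.
df89 = pd.DataFrame({'State': ['AL'], 'MI': [1], 'GV': [2], 'FM': [3], 'TS': [4]})
df83 = pd.DataFrame({'State': ['AL'], 'MI': [5], 'GV': [6], 'FIPS': [7]})

# Keep only the columns present in both frames, preserving df89's column order.
shared = [c for c in df89.columns if c in set(df83.columns)]
df89_shared = df89[shared]
df83_shared = df83[shared]
print(shared)  # ['State', 'MI', 'GV']
```

This avoids the need to enumerate every column to drop by hand.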
statesv1.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import gym from keras.models import Sequential from keras.layers import Dense, Activation, Flatten from keras.optimizers import Adam from rl.agents.dqn import DQNAgent from rl.policy import EpsGreedyQPolicy from rl.memory import SequentialMemory # - # Initialize CartPole environment # + ENV_NAME = 'CartPole-v0' # Get the environment and extract the number of actions available in the Cartpole problem env = gym.make(ENV_NAME) np.random.seed(123) env.seed(123) nb_actions = env.action_space.n # - print(env.action_space) # Build a single hidden layer neural network model. model = Sequential() model.add(Flatten(input_shape=(1,) + env.observation_space.shape)) model.add(Dense(16)) model.add(Activation('relu')) model.add(Dense(nb_actions)) model.add(Activation('linear')) print(model.summary()) # Configure and compile the agent. Set policy to Epsilon Greedy. Set memory as sequential memory to store results of previous actions. # + policy = EpsGreedyQPolicy() memory = SequentialMemory(limit=50000, window_length=1) dqn = DQNAgent(model=model, nb_actions=nb_actions, memory=memory, nb_steps_warmup=10, target_model_update=1e-2, policy=policy) dqn.compile(Adam(lr=1e-3), metrics=['mae']) # - dqn.fit(env, nb_steps=5000, visualize=False, verbose=2) dqn.test(env, nb_episodes=5, visualize=True)
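The `EpsGreedyQPolicy` configured above picks a random action with probability epsilon and the greedy (max-Q) action otherwise. A minimal NumPy sketch of that decision rule (not keras-rl's actual implementation):

```python
import numpy as np

def eps_greedy(q_values, eps, rng):
    """Return a random action index with probability eps, else the argmax action."""
    if rng.random() < eps:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

rng = np.random.default_rng(0)
q = np.array([0.1, 0.9, -0.3])
print(eps_greedy(q, 0.0, rng))  # eps=0 is purely greedy: always index 1
```

During training, keras-rl anneals or fixes epsilon to trade off exploration against exploitation.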
Examples/Keras-RL Cartpole.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .r # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # --- # + [markdown] colab_type="text" id="view-in-github" # <a href="https://colab.research.google.com/github/conqueror/agile-ml/blob/master/intro_to_neural_nets.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] colab_type="text" id="JndnmDMp66FL" # #### Copyright 2017 Google LLC. # + cellView="both" colab={} colab_type="code" id="hMqWDc_m6rUC" # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # + [markdown] colab_type="text" id="eV16J6oUY-HN" # # Intro to Neural Networks # + [markdown] colab_type="text" id="_wIcUFLSKNdx" # **Learning Objectives:** # * Define a neural network (NN) and its hidden layers using the TensorFlow `DNNRegressor` class # * Train a neural network to learn nonlinearities in a dataset and achieve better performance than a linear regression model # + [markdown] colab_type="text" id="_ZZ7f7prKNdy" # In the previous exercises, we used synthetic features to help our model incorporate nonlinearities. # # One important set of nonlinearities was around latitude and longitude, but there may be others. # # We'll also switch back, for now, to a standard regression task, rather than the logistic regression task from the previous exercise. That is, we'll be predicting `median_house_value` directly. 
# + [markdown] colab_type="text" id="J2kqX6VZTHUy" # ## Setup # # First, let's load and prepare the data. # + colab={} colab_type="code" id="AGOM1TUiKNdz" from __future__ import print_function import math from IPython import display from matplotlib import cm from matplotlib import gridspec from matplotlib import pyplot as plt import numpy as np import pandas as pd from sklearn import metrics import tensorflow as tf from tensorflow.python.data import Dataset tf.logging.set_verbosity(tf.logging.ERROR) pd.options.display.max_rows = 10 pd.options.display.float_format = '{:.1f}'.format california_housing_dataframe = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv", sep=",") california_housing_dataframe = california_housing_dataframe.reindex( np.random.permutation(california_housing_dataframe.index)) # + colab={} colab_type="code" id="2I8E2qhyKNd4" def preprocess_features(california_housing_dataframe): """Prepares input features from California housing data set. Args: california_housing_dataframe: A Pandas DataFrame expected to contain data from the California housing data set. Returns: A DataFrame that contains the features to be used for the model, including synthetic features. """ selected_features = california_housing_dataframe[ ["latitude", "longitude", "housing_median_age", "total_rooms", "total_bedrooms", "population", "households", "median_income"]] processed_features = selected_features.copy() # Create a synthetic feature. processed_features["rooms_per_person"] = ( california_housing_dataframe["total_rooms"] / california_housing_dataframe["population"]) return processed_features def preprocess_targets(california_housing_dataframe): """Prepares target features (i.e., labels) from California housing data set. Args: california_housing_dataframe: A Pandas DataFrame expected to contain data from the California housing data set. Returns: A DataFrame that contains the target feature. 
""" output_targets = pd.DataFrame() # Scale the target to be in units of thousands of dollars. output_targets["median_house_value"] = ( california_housing_dataframe["median_house_value"] / 1000.0) return output_targets # + colab={} colab_type="code" id="pQzcj2B1T5dA" # Choose the first 12000 (out of 17000) examples for training. training_examples = preprocess_features(california_housing_dataframe.head(12000)) training_targets = preprocess_targets(california_housing_dataframe.head(12000)) # Choose the last 5000 (out of 17000) examples for validation. validation_examples = preprocess_features(california_housing_dataframe.tail(5000)) validation_targets = preprocess_targets(california_housing_dataframe.tail(5000)) # Double-check that we've done the right thing. print("Training examples summary:") display.display(training_examples.describe()) print("Validation examples summary:") display.display(validation_examples.describe()) print("Training targets summary:") display.display(training_targets.describe()) print("Validation targets summary:") display.display(validation_targets.describe()) # + [markdown] colab_type="text" id="RWq0xecNKNeG" # ## Building a Neural Network # # The NN is defined by the [DNNRegressor](https://www.tensorflow.org/api_docs/python/tf/estimator/DNNRegressor) class. # # Use **`hidden_units`** to define the structure of the NN. The `hidden_units` argument provides a list of ints, where each int corresponds to a hidden layer and indicates the number of nodes in it. For example, consider the following assignment: # # `hidden_units=[3,10]` # # The preceding assignment specifies a neural net with two hidden layers: # # * The first hidden layer contains 3 nodes. # * The second hidden layer contains 10 nodes. # # If we wanted to add more layers, we'd add more ints to the list. For example, `hidden_units=[10,20,30,40]` would create four layers with ten, twenty, thirty, and forty units, respectively. 
# # By default, all hidden layers will use ReLu activation and will be fully connected. # + colab={} colab_type="code" id="ni0S6zHcTb04" def construct_feature_columns(input_features): """Construct the TensorFlow Feature Columns. Args: input_features: The names of the numerical input features to use. Returns: A set of feature columns """ return set([tf.feature_column.numeric_column(my_feature) for my_feature in input_features]) # + colab={} colab_type="code" id="zvCqgNdzpaFg" def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None): """Trains a neural net regression model. Args: features: pandas DataFrame of features targets: pandas DataFrame of targets batch_size: Size of batches to be passed to the model shuffle: True or False. Whether to shuffle the data. num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely Returns: Tuple of (features, labels) for next data batch """ # Convert pandas data into a dict of np arrays. features = {key:np.array(value) for key,value in dict(features).items()} # Construct a dataset, and configure batching/repeating. ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit ds = ds.batch(batch_size).repeat(num_epochs) # Shuffle the data, if specified. if shuffle: ds = ds.shuffle(10000) # Return the next batch of data. features, labels = ds.make_one_shot_iterator().get_next() return features, labels # + colab={} colab_type="code" id="U52Ychv9KNeH" def train_nn_regression_model( learning_rate, steps, batch_size, hidden_units, training_examples, training_targets, validation_examples, validation_targets): """Trains a neural network regression model. In addition to training, this function also prints training progress information, as well as a plot of the training and validation loss over time. Args: learning_rate: A `float`, the learning rate. steps: A non-zero `int`, the total number of training steps. 
A training step consists of a forward and backward pass using a single batch. batch_size: A non-zero `int`, the batch size. hidden_units: A `list` of int values, specifying the number of neurons in each layer. training_examples: A `DataFrame` containing one or more columns from `california_housing_dataframe` to use as input features for training. training_targets: A `DataFrame` containing exactly one column from `california_housing_dataframe` to use as target for training. validation_examples: A `DataFrame` containing one or more columns from `california_housing_dataframe` to use as input features for validation. validation_targets: A `DataFrame` containing exactly one column from `california_housing_dataframe` to use as target for validation. Returns: A `DNNRegressor` object trained on the training data. """ periods = 10 steps_per_period = steps / periods # Create a DNNRegressor object. my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate) my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0) dnn_regressor = tf.estimator.DNNRegressor( feature_columns=construct_feature_columns(training_examples), hidden_units=hidden_units, optimizer=my_optimizer, ) # Create input functions. training_input_fn = lambda: my_input_fn(training_examples, training_targets["median_house_value"], batch_size=batch_size) predict_training_input_fn = lambda: my_input_fn(training_examples, training_targets["median_house_value"], num_epochs=1, shuffle=False) predict_validation_input_fn = lambda: my_input_fn(validation_examples, validation_targets["median_house_value"], num_epochs=1, shuffle=False) # Train the model, but do so inside a loop so that we can periodically assess # loss metrics. print("Training model...") print("RMSE (on training data):") training_rmse = [] validation_rmse = [] for period in range (0, periods): # Train the model, starting from the prior state. 
dnn_regressor.train( input_fn=training_input_fn, steps=steps_per_period ) # Take a break and compute predictions. training_predictions = dnn_regressor.predict(input_fn=predict_training_input_fn) training_predictions = np.array([item['predictions'][0] for item in training_predictions]) validation_predictions = dnn_regressor.predict(input_fn=predict_validation_input_fn) validation_predictions = np.array([item['predictions'][0] for item in validation_predictions]) # Compute training and validation loss. training_root_mean_squared_error = math.sqrt( metrics.mean_squared_error(training_predictions, training_targets)) validation_root_mean_squared_error = math.sqrt( metrics.mean_squared_error(validation_predictions, validation_targets)) # Occasionally print the current loss. print(" period %02d : %0.2f" % (period, training_root_mean_squared_error)) # Add the loss metrics from this period to our list. training_rmse.append(training_root_mean_squared_error) validation_rmse.append(validation_root_mean_squared_error) print("Model training finished.") # Output a graph of loss metrics over periods. plt.ylabel("RMSE") plt.xlabel("Periods") plt.title("Root Mean Squared Error vs. Periods") plt.tight_layout() plt.plot(training_rmse, label="training") plt.plot(validation_rmse, label="validation") plt.legend() print("Final RMSE (on training data): %0.2f" % training_root_mean_squared_error) print("Final RMSE (on validation data): %0.2f" % validation_root_mean_squared_error) return dnn_regressor # + [markdown] colab_type="text" id="2QhdcCy-Y8QR" # ## Task 1: Train a NN Model # # **Adjust hyperparameters, aiming to drop RMSE below 110.** # # Run the following block to train a NN model. # # Recall that in the linear regression exercise with many features, an RMSE of 110 or so was pretty good. We'll aim to beat that. # # Your task here is to modify various learning settings to improve accuracy on validation data. # # Overfitting is a real potential hazard for NNs. 
You can look at the gap between loss on training data and loss on validation data to help judge if your model is starting to overfit. If the gap starts to grow, that is usually a sure sign of overfitting. # # Because of the number of different possible settings, it's strongly recommended that you take notes on each trial to help guide your development process. # # Also, when you get a good setting, try running it multiple times and see how repeatable your result is. NN weights are typically initialized to small random values, so you should see differences from run to run. # # + colab={} colab_type="code" id="rXmtSW1yKNeK" dnn_regressor = train_nn_regression_model( learning_rate=0.01, steps=500, batch_size=10, hidden_units=[10, 2], training_examples=training_examples, training_targets=training_targets, validation_examples=validation_examples, validation_targets=validation_targets) # + [markdown] colab_type="text" id="O2q5RRCKqYaU" # ### Solution # # Click below to see a possible solution # + [markdown] colab_type="text" id="j2Yd5VfrqcC3" # **NOTE:** This selection of parameters is somewhat arbitrary. Here we've tried combinations that are increasingly complex, combined with training for longer, until the error falls below our objective. This may not be the best combination; others may attain an even lower RMSE. If your aim is to find the model that can attain the best error, then you'll want to use a more rigorous process, like a parameter search. 
# + colab={} colab_type="code" id="IjkpSqmxqnSM" dnn_regressor = train_nn_regression_model( learning_rate=0.001, steps=2000, batch_size=100, hidden_units=[10, 10], training_examples=training_examples, training_targets=training_targets, validation_examples=validation_examples, validation_targets=validation_targets) # + [markdown] colab_type="text" id="c6diezCSeH4Y" # ## Task 2: Evaluate on Test Data # # **Confirm that your validation performance results hold up on test data.** # # Once you have a model you're happy with, evaluate it on test data to compare that to validation performance. # # Reminder, the test data set is located [here](https://download.mlcc.google.com/mledu-datasets/california_housing_test.csv). # + cellView="both" colab={} colab_type="code" id="icEJIl5Vp51r" california_housing_test_data = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_test.csv", sep=",") # YOUR CODE HERE # + [markdown] colab_type="text" id="vvT2jDWjrKew" # ### Solution # # Click below to see a possible solution. # + [markdown] colab_type="text" id="FyDh7Qy6rQb0" # Similar to what the code at the top does, we just need to load the appropriate data file, preprocess it and call predict and mean_squared_error. # # Note that we don't have to randomize the test data, since we will use all records. 
# + colab={} colab_type="code" id="vhb0CtdvrWZx" california_housing_test_data = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_test.csv", sep=",") test_examples = preprocess_features(california_housing_test_data) test_targets = preprocess_targets(california_housing_test_data) predict_testing_input_fn = lambda: my_input_fn(test_examples, test_targets["median_house_value"], num_epochs=1, shuffle=False) test_predictions = dnn_regressor.predict(input_fn=predict_testing_input_fn) test_predictions = np.array([item['predictions'][0] for item in test_predictions]) root_mean_squared_error = math.sqrt( metrics.mean_squared_error(test_predictions, test_targets)) print("Final RMSE (on test data): %0.2f" % root_mean_squared_error)
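As a sanity check on the `hidden_units` notation above, the number of trainable parameters in a fully connected network can be counted directly. A small sketch in plain Python (independent of TensorFlow), for a regressor with 9 input features (the 8 raw features plus `rooms_per_person`) and `hidden_units=[10, 10]`:

```python
def count_parameters(n_inputs, hidden_units, n_outputs=1):
    """Count weights and biases of a fully connected feed-forward network."""
    total = 0
    layer_sizes = [n_inputs] + list(hidden_units) + [n_outputs]
    for fan_in, fan_out in zip(layer_sizes, layer_sizes[1:]):
        total += fan_in * fan_out + fan_out  # weight matrix + bias vector
    return total

# 9*10+10 + 10*10+10 + 10*1+1 = 221 parameters
print(count_parameters(9, [10, 10]))
```

Counting parameters this way is a quick check that a `hidden_units` choice is not wildly over-parameterized relative to the 12000 training examples.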
intro_to_neural_nets.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import pandas as pd length = 1000 cols = ["Q", "X", "Y", "Z"] mu = 0 sigma = 5 # + import pingouin lst_dct = {col:[] for col in cols } for i in range(length): lst_dct["Q"].append(50 + np.random.normal(mu, sigma)) lst_dct["X"].append(5 * lst_dct["Q"][-1] + 10 + np.random.normal(mu, sigma )) lst_dct["Y"].append(lst_dct["Q"][-1] * -3 + 20 + np.random.normal(mu, sigma)) lst_dct["Z"].append(5 * lst_dct["X"][-1] + 10 * lst_dct["Y"][-1] + np.random.normal(mu, 3 * sigma)) df = pd.DataFrame(lst_dct) df.pcorr() # - df[["Q","X","Y"]].pcorr() df[["X","Y","Z"]].pcorr() df[["Q","X","Z"]].pcorr() df[["Q","Y","Z"]].pcorr() df[["X","Z"]].pcorr()
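`pcorr()` above reports pairwise partial correlations, controlling for the remaining columns. One way to see what that means is to compute it by hand: correlate the residuals of two variables after regressing each on the controls. A NumPy-only sketch (this is the residual definition, not how pingouin computes it internally):

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation of x and y after removing the linear effect of z (1-D arrays)."""
    Z = np.column_stack([np.ones_like(z), z])
    # Residuals of least-squares fits of x and y on z.
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
q = rng.normal(size=500)
x = 5 * q + rng.normal(size=500)
y = -3 * q + rng.normal(size=500)
# x and y are strongly correlated through q, but nearly independent given q:
print(np.corrcoef(x, y)[0, 1], partial_corr(x, y, q))
```

This mirrors the construction above, where Q drives both X and Y.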
Projects/Glyphs/Generating Data.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/targino12/OlaMundo-/blob/master/MD_2020_1_Prova2_DianaPereiraTargino.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# + [markdown] id="V_dQvGKDDaN1"
# # MD - Exam 2 - Part 2 - 19-10-2020
#
# Hello! The second assessment of the Data Mining course (remote term 2020.1) is split into 2 parts, 1 theoretical and 1 practical. Theory: 3 questions worth 2.5 each, 7.5 in total. Practice: 1 question, submitted as an .ipynb upload.
#
# * Part 1 (theory) is in the questionnaire on SIGAA.
# * Part 2 (practice) is in the assignment on SIGAA.
#
# In this second, practical part you must:
# 1. **Copy** this notebook (File -> Save a copy in Drive)
# 2. **Answer** the question.

# + [markdown] id="E-t2m8qpYsQN"
# ## Student identification
#
# **Full name:** <NAME>
#
# **Student ID:** 2016020921

# + [markdown] id="jlHrfUJiUkse"
# ## (2.5) Question
#
# Pick one Brazilian state and build 4 animations, rendered as HTML5 videos that can be saved as .mp4, showing the progression of COVID-19 cases in the EstadosCov19.csv dataset from the repository https://github.com/covid19br/covid19br.github.io. The animations should cover: new cases, cumulative cases, new deaths and cumulative deaths.

# + id="4pL3utamDaN3" colab={"base_uri": "https://localhost:8080/", "height": 419} outputId="343918f3-bbc3-4611-f55b-f273318e4e85"
import pandas as pd
import seaborn as sns
import statistics as sts
import matplotlib.pyplot as plt

dados = pd.read_csv("https://raw.githubusercontent.com/covid19br/covid19br.github.io/master/dados/EstadosCov19.csv", delimiter=";")
display(dados)
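The cell above stops after loading the data. A minimal sketch of how one of the requested animations could be built with `matplotlib.animation.FuncAnimation`, using synthetic numbers rather than a real state's column (the actual column names in EstadosCov19.csv should be checked against `dados.columns` before adapting this):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; in Colab you would display anim.to_html5_video()
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.animation import FuncAnimation

# Synthetic cumulative case counts standing in for one state's series from the CSV.
cases = np.cumsum(np.abs(np.random.default_rng(1).normal(100, 30, size=60)))

fig, ax = plt.subplots()
line, = ax.plot([], [])
ax.set_xlim(0, len(cases))
ax.set_ylim(0, cases.max() * 1.05)
ax.set_title("Cumulative cases (synthetic)")

def update(frame):
    # Reveal one more day of the series per frame.
    line.set_data(np.arange(frame + 1), cases[:frame + 1])
    return (line,)

anim = FuncAnimation(fig, update, frames=len(cases), blit=True)
# anim.to_html5_video() returns the embeddable HTML5 video;
# anim.save("cases.mp4") writes the .mp4 (requires ffmpeg).
```

Repeating this with the four different series (new/cumulative cases and deaths) yields the four animations the question asks for.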
MD_2020_1_Prova2_DianaPereiraTargino.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + colab={} colab_type="code" id="vga1znuak7QB" import pandas as pd import numpy as np import json import math import urllib.request import dateutil.parser import dateutil.rrule import datetime import pickle import gc # + colab={} colab_type="code" id="44C0MsGOKSKn" # Used across all of the plots dateToday = datetime.datetime.combine(datetime.date.today(), datetime.datetime.min.time()) # Levels would have been relatively normal up until Friday 13 March, so take a year before that date baselineEnd = datetime.datetime.strptime('2020-03-13T23:59:59Z', '%Y-%m-%dT%H:%M:%SZ') baselineStart = baselineEnd - pd.Timedelta(days=365) + pd.Timedelta(seconds=1) print('Baseline data from %s to %s' % (baselineStart, baselineEnd)) resampleFrequency = 900 # + colab={"base_uri": "https://localhost:8080/", "height": 946} colab_type="code" id="GNidLPlDTvHE" outputId="6e702db4-5a86-4d29-d996-7b4c5588197d" carParkRequestBase = 'https://api.newcastle.urbanobservatory.ac.uk/api/v2/sensors/entity' # Fetch a list of all the car parks... 
carParkTimeseriesIRIs = {} carParkMetadata = {} carParkRequestPage = 1 carParkResponse = None while carParkResponse is None or len(carParkResponse) > 1: carParkResponse = json.loads( urllib.request.urlopen( '%s?metric="Occupied%%20spaces"&page=%u' % (carParkRequestBase, carParkRequestPage) ).read().decode('utf-8') )['items'] carParkRequestPage = carParkRequestPage + 1 for carPark in carParkResponse: for feed in carPark['feed']: for timeseries in feed['timeseries']: for link in timeseries['links']: if (link['rel'] == 'archives.friendly' and \ 'latest' in timeseries): carParkTimeseriesIRIs[carPark['meta']['name']] = link['href'] carParkMetadata[carPark['meta']['name']] = { 'address': carPark['meta']['address'], 'postcode': carPark['meta']['address'].split(',')[-1].strip(), 'district': carPark['meta']['address'].split(',')[-1].strip().split(' ')[0], 'capacity': feed['meta']['totalSpaces'], 'latest': timeseries['latest']['value'] } print('Discovered %u car parks with occupancy data.' % len(carParkTimeseriesIRIs)) carParkMetadata = pd.DataFrame.from_records(carParkMetadata).transpose() carParkMetadata # + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="Thc15ixbUgH3" outputId="8473b2e5-50e3-45ff-8542-8f2f2c1093e7" daysPerRequest = 37 dfCarParks = None print('Requesting car park occupancy in %u day chunks...' 
% daysPerRequest) for carParkName in carParkTimeseriesIRIs.keys(): carParkRequestTimeseries = carParkTimeseriesIRIs[carParkName] carParkTimeseries = [] print(' %s' % carParkName) print(' [', end='') for windowStart in dateutil.rrule.rrule( dateutil.rrule.DAILY, interval=daysPerRequest, dtstart=baselineStart, until=baselineEnd ): windowEnd = windowStart + pd.Timedelta(days=daysPerRequest) - pd.Timedelta(seconds=1) if windowEnd > baselineEnd: windowEnd = baselineEnd windowResponse = json.loads( urllib.request.urlopen( '%s?startTime=%s&endTime=%s' % (carParkRequestTimeseries, windowStart.isoformat().replace('+00:00', 'Z'), windowEnd.isoformat().replace('+00:00', 'Z')) ).read().decode('utf-8') )['historic']['values'] carParkTimeseries.extend(windowResponse) print('.', end='') print(']') # Duration isn't relevant to the car park data dfCP = pd.DataFrame.from_records(carParkTimeseries, exclude=['duration']) # Times in this API are in ISO8601 dfCP['time'] = dfCP['time'].apply(lambda t: datetime.datetime.strptime(t, "%Y-%m-%dT%H:%M:%S.%fZ")) dfCP = dfCP.rename(columns={'value': carParkName}) dfCP.set_index('time', inplace=True, drop=True) dfCP = dfCP.resample('%us' % resampleFrequency).median().fillna(method='ffill', limit=12) if dfCarParks is None: dfCarParks = dfCP else: dfCarParks = dfCarParks.join(dfCP, how='outer') gc.collect() dfCarParks # - dfCarParks.to_pickle('../cache/baseline-car-park-occupancy-pd.pkl')
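The `resample('%us' % resampleFrequency).median().fillna(method='ffill', limit=12)` step in the loop above buckets each car park's irregular readings onto a fixed 900-second grid, takes the median per bucket, and carries the last value forward over short gaps (newer pandas spells `fillna(method='ffill')` as `.ffill()`). A small self-contained sketch of the same idea on made-up readings:

```python
import pandas as pd

# Irregular occupancy readings with a gap between 00:20 and 01:40.
times = pd.to_datetime(["2020-03-01 00:01", "2020-03-01 00:12",
                        "2020-03-01 00:20", "2020-03-01 01:40"])
readings = pd.Series([120.0, 130.0, 128.0, 90.0], index=times)

# 15-minute (900 s) buckets, median per bucket, then forward-fill at most 4 buckets.
regular = readings.resample("900s").median().ffill(limit=4)
print(regular)
```

The `limit` argument caps how far a stale reading is propagated, so long sensor outages still show up as missing data rather than a frozen value.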
notebooks/baseline-car-park-occupancy.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Rappels sur scikit-learn et le machine learning (correction) # # Quelques exercices simples sur *scikit-learn*. Le notebook est long pour ceux qui débutent en machine learning et sans doute sans suspens pour ceux qui en ont déjà fait. from jyquickhelper import add_notebook_menu add_notebook_menu() # %matplotlib inline # ## Des données synthétiques # # On simule un jeu de données aléatoires. from numpy import random n = 1000 X = random.rand(n, 2) X[:5] y = X[:, 0] * 3 - 2 * X[:, 1] ** 2 + random.rand(n) y[:5] # ## Exercice 1 : diviser en base d'apprentissage et de test # # Simple [train_test_split](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html). from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y) # ## Exercice 2 : caler une régression linéaire # # Et calculer le coefficient $R^2$. Pour ceux qui ne savent pas se servir d'un moteur de recherche : [LinearRegression](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html), [r2_score](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.r2_score.html). from sklearn.linear_model import LinearRegression reg = LinearRegression() reg.fit(X_train, y_train) from sklearn.metrics import r2_score score = r2_score(y_test, reg.predict(X_test)) score # ## Exercice 3 : améliorer le modèle en appliquant une transformation bien choisie # # Le modèle de départ est : $Y = 3 X_1 - 2 X_2^2 + \epsilon$. Il suffit de rajouter des featues polynômiales avec [PolynomialFeatures](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PolynomialFeatures.html). 
from sklearn.preprocessing import PolynomialFeatures poly = PolynomialFeatures() poly.fit(X_train) X_train2 = poly.transform(X_train) reg2 = LinearRegression() reg2.fit(X_train2, y_train) score2 = r2_score(y_test, reg2.predict(poly.transform(X_test))) score2 # Le coefficient $R^2$ est plus élevé car on utilise les mêmes variables que le modèle. Il n'est théoriquement pas possible d'aller au delà. # ## Exercice 4 : caler une forêt aléatoire from sklearn.ensemble import RandomForestRegressor rf = RandomForestRegressor() rf.fit(X_train, y_train) r2_score(y_test, rf.predict(X_test)) # Le modèle linéaire est le meilleur modèle dans notre cas puisque les données ont été construites de la sorte. Il est attendu que le $R^2$ ne soit pas plus élevé tout du moins pas significativement plus élevé. On regarde avec les features polynômiales... rf2 = RandomForestRegressor() rf2.fit(X_train2, y_train) r2_score(y_test, rf2.predict(poly.transform(X_test))) # Avant de tirer des conclusions hâtives, il faudrait recommencer plusieurs fois l'expérience avant de dire que la performance est plus ou moins élevée avec ces features ce que ce notebook ne fera pas puisque la réponse théorique est connue dans ce cas. # ## Exercice 5 : un peu de math # # Comparer les deux modèles sur les données suivantes ? Que remarquez-vous ? Expliquez pourquoi ? X_test2 = random.rand(n, 2) + 0.5 y_test2 = X_test2[:, 0] * 3 - 2 * X_test2[:, 1] ** 2 + random.rand(n) # + res = [] for model in [reg, reg2, rf, rf2]: name = model.__class__.__name__ try: pred = model.predict(X_test) pred2 = model.predict(X_test2) except Exception: pred = model.predict(poly.transform(X_test)) pred2 = model.predict(poly.transform(X_test2)) name += " + X^2" res.append(dict(name=name, r2=r2_score(y_test, pred), r2_jeu2=r2_score(y_test2, pred2))) import pandas df = pandas.DataFrame(res) df # - # Le seul modèle qui s'en tire vraiment est la régression linéaire avec les features polynômiales. 
Comme il équivaut au modèle théorique, il est normal qu'il ne se plante pas trop même si ses coefficients ne sont pas identique au modèle théorique (il faudrait plus de données pour que cela converge). reg2.coef_, reg2.intercept_ # Pour les autes modèles, voyons déjà visuellement ce qu'il se passe. # ## Exercice 6 : faire un graphe avec... # # Je laisse le code décrire l'approche choisie pour illustrer les carences des modèles précédents. Le commentaire suit le graphique pour les paresseux. # + import matplotlib.pyplot as plt fig, ax = plt.subplots(1, 2, figsize=(14, 4)) a, b = 0.9, 1.1 index1 = (X_test2[:, 0] >= a) & (X_test2[:, 0] <= b) index2 = (X_test2[:, 1] >= a) & (X_test2[:, 1] <= b) yth = X_test2[:, 0] * 3 - 2 * X_test2[:, 1] ax[0].set_xlabel("X1") ax[0].set_ylabel("Y") ax[0].plot(X_test2[index2, 0], yth[index2], '.', label='Y théorique') ax[1].set_xlabel("X2") ax[1].set_ylabel("Y") ax[1].plot(X_test2[index1, 1], yth[index1], '.', label='Y théorique') for model in [reg, reg2, rf, rf2]: name = model.__class__.__name__ try: pred2 = model.predict(X_test2) except Exception: pred2 = model.predict(poly.transform(X_test2)) name += " + X^2" ax[0].plot(X_test2[index2, 0], pred2[index2], '.', label=name) ax[1].plot(X_test2[index1, 1], pred2[index1], '.', label=name) ax[0].legend() ax[1].legend(); # - # Le graphe étudie les variables des modèles selon une coordonnées tout en restreignant l'autre dans un intervalle donné. On voit tout de suite que la forêt aléatoire devient constante au delà d'un certain seuil. C'est encore une fois tout à fait normal puisque la base d'apprentissage ne contient des $X_1, X_2$ que dans l'intervalle $[0, 1]$. En dehors, chaque arbre de décision produit une valeur constante tout simplement parce que ce sont des fonctions en escalier : une forêt aléatoire est une moyenne de fonctions en escalier, elle est bornée. 
# As for the first linear regression, it cannot capture the second-degree effects: it is linear with respect to the original variables. It strays less, but it still strays from the variable to predict.
#
# The goal of this exercise is to illustrate that a machine learning model is estimated on a dataset that follows a certain distribution. When the data on which the model is asked to predict no longer follow that distribution, models return answers that are very likely to be wrong, and wrong in different ways depending on the model.
#
# That is why it is said that machine learning models must be retrained regularly, especially when they are applied to data generated by human activity rather than data coming from physical problems.

# ## Exercise 7: illustrating overfitting with a decision tree
#
# On the first dataset.

# +
from sklearn.tree import DecisionTreeRegressor

res = []
for md in range(1, 20):
    tree = DecisionTreeRegressor(max_depth=md)
    tree.fit(X_train, y_train)
    r2_train = r2_score(y_train, tree.predict(X_train))
    r2_test = r2_score(y_test, tree.predict(X_test))
    res.append(dict(profondeur=md, r2_train=r2_train, r2_test=r2_test))

df = pandas.DataFrame(res)
df.head()
# -

ax = df.plot(x='profondeur', y=['r2_train', 'r2_test'])
ax.set_title("Evolution of $R^2$ with tree depth");

# ## Exercise 8: increasing the number of features and regularizing a linear regression
#
# The goal is to look at the impact of regularizing the coefficients of a linear regression when the number of features increases. We use polynomial features and a [Ridge](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html) or [Lasso](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html) regression.
# +
from sklearn.linear_model import Ridge, Lasso
import numpy.linalg as nplin
import numpy

def coef_non_nuls(coef):
    return sum(numpy.abs(coef) > 0.001)

res = []
for d in range(1, 21):
    poly = PolynomialFeatures(degree=d)
    poly.fit(X_train)
    X_test2 = poly.transform(X_test)
    reg = LinearRegression()
    reg.fit(poly.transform(X_train), y_train)
    r2_reg = r2_score(y_test, reg.predict(X_test2))
    rid = Ridge(alpha=10)
    rid.fit(poly.transform(X_train), y_train)
    r2_rid = r2_score(y_test, rid.predict(X_test2))
    las = Lasso(alpha=0.01)
    las.fit(poly.transform(X_train), y_train)
    r2_las = r2_score(y_test, las.predict(X_test2))
    res.append(dict(degre=d, nb_features=X_test2.shape[1],
                    r2_reg=r2_reg, r2_las=r2_las, r2_rid=r2_rid,
                    norm_reg=nplin.norm(reg.coef_),
                    norm_rid=nplin.norm(rid.coef_),
                    norm_las=nplin.norm(las.coef_),
                    nnul_reg=coef_non_nuls(reg.coef_),
                    nnul_rid=coef_non_nuls(rid.coef_),
                    nnul_las=coef_non_nuls(las.coef_),
                    ))

df = pandas.DataFrame(res)
df
# -

fig, ax = plt.subplots(1, 2, figsize=(12, 4))
df.plot(x="nb_features", y=["r2_reg", "r2_las", "r2_rid"], ax=ax[0])
ax[0].set_xlabel("Number of features")
ax[0].set_ylim([0, 1])
ax[0].set_title("r2")
df.plot(x="nb_features", y=["nnul_reg", "nnul_las", "nnul_rid"], ax=ax[1])
ax[1].set_xlabel("Number of features")
ax[1].set_title("Number of non-zero coefficients");

# Numerically, the linear regression becomes difficult to estimate as the number of features grows. Theoretically there should be no drop in performance, but the plot shows obvious errors. This shows up as a coefficient norm that explodes. Regularization manages to constrain the models: *Ridge* regression produces many small non-zero coefficients, whereas *Lasso* regression prefers to concentrate the norm on only a few of them. This observation only holds for a linear regression with a quadratic loss.
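The Ridge/Lasso contrast just described can also be seen on a tiny standalone example (synthetic data and hypothetical `alpha` values, independent of the notebook's variables):

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso

rng = np.random.RandomState(0)
# 100 samples, 20 features, but only 3 truly informative ones
X = rng.randn(100, 20)
y = X[:, 0] * 3 + X[:, 1] * 2 - X[:, 2] + rng.randn(100) * 0.1

ridge = Ridge(alpha=10).fit(X, y)
lasso = Lasso(alpha=0.1).fit(X, y)

# Ridge shrinks everything a little: many small, non-zero coefficients.
# Lasso zeroes out most of the uninformative features entirely.
n_ridge = int(np.sum(np.abs(ridge.coef_) > 1e-3))
n_lasso = int(np.sum(np.abs(lasso.coef_) > 1e-3))
print(n_ridge, n_lasso)
```

With these made-up values, `n_lasso` stays close to the 3 informative features while `n_ridge` keeps almost all 20 coefficients above the threshold.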
_doc/notebooks/td2a_ml/ml_scikit_learn_simple_correction.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Before you begin, execute this cell to import numpy and packages from the D-Wave Ocean suite, and all necessary functions for the gate-model framework you are going to use, whether that is the Forest SDK or Qiskit. In the case of Forest SDK, it also starts the qvm and quilc servers. # %run -i "assignment_helper.py" # %matplotlib inline # # Quantum Fourier Transform # # **Exercise 1** (3 points). The quantum Fourier transform acts on an amplitude-encoded variant of a normalized vector, the quantum state $|x\rangle=\sum_{i=0}^{N-1} x_i |i \rangle$. It maps this state to $|y\rangle=\sum_{k=0}^{N-1} y_k |k \rangle$, where $y_k = \frac{1}{\sqrt{N}}\sum_{j=0}^{N-1}x_je^{\boldsymbol{2\pi i} \frac{jk}{N}}$. If you have a two-dimensional vector to transform, you only need a single qubit, and the transformation reduces to $U |x \rangle = \frac{1}{\sqrt{2}} \left(|0\rangle + e^{\boldsymbol{2\pi i} 0.x_1} |1\rangle\right)$. Implement this circuit. Place your solution in an object called `qft`. ### BEGIN SOLUTION # Qiskit version q = QuantumRegister(1) c = ClassicalRegister(1) qft = QuantumCircuit(q, c) qft.h(q[0]) # Forest version # qft = Program() # qft += H(0) ## END SOLUTION ### BEGIN HIDDEN TESTS amplitudes = get_amplitudes(qft) assert np.allclose(amplitudes, np.array([1/np.sqrt(2), 1/np.sqrt(2)])) ### END HIDDEN TESTS # **Exercise 2** (1 point). Implement the inverse quantum Fourier transformation on a single qubit. Place your solution in an object called `iqft`. 
### BEGIN SOLUTION # Qiskit version q = QuantumRegister(1) c = ClassicalRegister(1) iqft = QuantumCircuit(q, c) iqft.h(q[0]) # Forest version # iqft = Program() # iqft += H(0) ## END SOLUTION ### BEGIN HIDDEN TESTS amplitudes = get_amplitudes(iqft) assert np.allclose(amplitudes, np.array([1/np.sqrt(2), 1/np.sqrt(2)])) ### END HIDDEN TESTS # # Quantum phase estimation # # **Exercise 3** (3 points). Phase estimation pivots on the controlled application of powers of a unitary matrix, followed by an inverse Fourier transformation. Take the $2\times 2$ unitary matrix $\begin{bmatrix}1 & 0 \\0 & i\end{bmatrix}$. This operation has two eigenvectors, $|0\rangle$ and $|1\rangle$. Implement the first part of the quantum phase estimation with two ancilla qubits: add the necessary Hadamard gates on the ancilla qubits and apply $C-U^{2^0}$ and $C-U^{2^1}$. Qubits 0 and 1 should be the ancilla, and qubit 2 contains the eigenvector -- you can assume that this is $|0\rangle$. Place your solution in an object called `circuit`. ### BEGIN SOLUTION q = QuantumRegister(3) c = ClassicalRegister(2) circuit = QuantumCircuit(q, c) circuit.h(q[0]) circuit.h(q[1]) circuit.cu1(np.pi/2, q[1], q[2]) circuit.cz(q[0], q[2]) ### END SOLUTION amplitudes = get_amplitudes(circuit) assert np.allclose(amplitudes, np.array([0.5+0.j, 0.5+0.j, 0.5+0.j, 0.5+0.j, 0. +0.j, 0. +0.j, 0. +0.j, 0. +0.j])) # **Exercise 4** (1 point). Apply the same circuit to the other eigenvector. Place your solution in an object called `circuit_2`. ### BEGIN SOLUTION q = QuantumRegister(3) c = ClassicalRegister(3) circuit_2 = QuantumCircuit(q, c) circuit_2.x(q[2]) circuit_2.h(q[0]) circuit_2.h(q[1]) circuit_2.cu1(np.pi/2, q[1], q[2]) circuit_2.cz(q[0], q[2]) ### END SOLUTION amplitudes = get_amplitudes(circuit_2) assert np.allclose(amplitudes, np.array([ 0. +0.j , 0. +0.j , 0. +0.j , 0. +0.j , 0.5+0.j , -0.5+0.j , 0. +0.5j, 0. -0.5j]))
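A plain-numpy sanity check, independent of any SDK, of the single-qubit case used in Exercises 1 and 2: the $2\times 2$ QFT matrix with the $1/\sqrt{N}$ normalization is exactly the Hadamard gate, and it is its own inverse, which is why the same one-gate circuit solves both exercises.

```python
import numpy as np

N = 2
# QFT matrix: F[k, j] = exp(2*pi*i*j*k/N) / sqrt(N)
F = np.array([[np.exp(2j * np.pi * j * k / N) for j in range(N)]
              for k in range(N)]) / np.sqrt(N)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

print(np.allclose(F, H))                        # True: the 2x2 QFT is Hadamard
print(np.allclose(F @ F.conj().T, np.eye(2)))   # True: unitary, H is its own inverse
```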
coding_assignments/solutions/13_Quantum_Phase_Estimation.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/hBar2013/DS-Unit-1-Sprint-4-Statistical-Tests-and-Experiments/blob/master/module4-real-world-experiment-design/Kim_Lowry_LS_DS_144_Real_world_Experiment_Design.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="meHdbCmyxDKn" colab_type="text" # # Lambda School Data Science Module 144 # ## Real-world Experiment Design # # ![Induction experiment](https://upload.wikimedia.org/wikipedia/commons/1/1c/Induction_experiment.png) # # [Induction experiment, Wikipedia](https://commons.wikimedia.org/wiki/File:Induction_experiment.png) # + [markdown] id="LayAf_2GxCKc" colab_type="text" # ## Prepare - Learn about JavaScript and Google Analytics # + [markdown] id="tn7eCITsx5HB" colab_type="text" # Python is great - but with web applications, it's impossible to avoid JavaScript. The lingua franca of the web, JavaScript runs in all browsers, and thus all front-end code must either be JS or transpiled to it. As a data scientist you don't have to learn JavaScript - but you do have to be aware of it, and being able to figure out snippets of it is an invaluable skill to connect your skills with real-world applications. # # So, we leave the warm comfort of Python, and venture to a bigger world - check out the [LambdaSchool/AB-Demo repo](https://github.com/LambdaSchool/AB-Demo) and [live experiment](https://lambdaschool.github.io/AB-Demo/) before class. # # Additionally, sign up for [Google Analytics](https://www.google.com/analytics) - if you're not sure on the steps or what "property" to give it, you can put a placeholder or wait until the live lecture. 
Google also has [Analytics documentation](https://support.google.com/analytics/) that is worth a look. # # Note - if you use any of the various tracker blocking techniques, it's quite likely you won't show up in Google Analytics. You'll have to disable them to be able to fully test your experiment. # + [markdown] id="RmQDF8OlxEey" colab_type="text" # ## Live Lecture - Using Google Analytics with a live A/B test # + [markdown] id="C-uO6iZQCT2l" colab_type="text" # Again we won't do much Python here, but we'll put a few notes and results in the notebook as we go. # + [markdown] id="n0btUZvwxE_T" colab_type="text" # ## Assignment - Set up your own A/B test! # + [markdown] id="U427Ao84Cfpx" colab_type="text" # For a baseline, a straight fork of the Lambda School repo is OK. Getting that working with your own Analytics profile is already a task. But if you get through that, stretch goals: # # 1. Explore Google Analytics - it's big and changes frequently, but powerful (can track conversions and events, flows, etc.) # 2. Customize the experiment to be more interesting/different (try colors!) # 3. Check out the various tools for setting up A/B experiments (e.g. [Optimizely](https://www.optimizely.com/) and [alternatives](https://alternativeto.net/software/optimizely/)) # 4. Try to get enough traffic to actually have more real data (don't spam people, but do share with friends) # 5. If you do get more traffic, don't just apply a t-test - dig into the results and use both math and writing to describe your findings # # Additionally, today it is a good idea to go back and review the frequentist hypothesis testing material from the first two modules. And if you feel on top of things - you can use your newfound GitHub Pages and Google Analytics skills to build/iterate a portfolio page, and maybe even instrument it with Analytics! 
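If your experiment does collect enough traffic, a two-proportion z-test is a reasonable first pass on conversion counts before any deeper analysis. A stdlib-only sketch with made-up numbers (the counts below are purely illustrative, not real Analytics data):

```python
import math

# Hypothetical counts: conversions / visitors per variant
conv_a, n_a = 45, 1000   # control
conv_b, n_b = 70, 1000   # treatment

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se
# Two-sided p-value under the standard normal, via the complementary
# error function (equivalent to 2 * (1 - Phi(|z|)))
p_value = math.erfc(abs(z) / math.sqrt(2))

print(round(z, 2), round(p_value, 4))
```

As the assignment notes, with real traffic you should also look at segments and effect size rather than stopping at a single p-value.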
# + [markdown] id="brQaffE0gg1V" colab_type="text" # https://hbar2013.github.io/AB-Demo/ # + [markdown] id="R2zgFT0cxBzy" colab_type="text" # ## Resources # + [markdown] id="QVnoP22VxVyC" colab_type="text" # - [Demo Google Analytics Data](https://support.google.com/analytics/answer/6367342?hl=en) - an Analytics profile you can add to your account with real data from the Google swag store # - [Design of Experiment](https://explorable.com/design-of-experiment) - an essay summarizing some of the things to be aware of when designing and running an experiment
module4-real-world-experiment-design/Kim_Lowry_LS_DS_144_Real_world_Experiment_Design.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/cxbxmxcx/Evolutionary-Deep-Learning/blob/main/EDL_4_Genetic_Programming.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="ZWk8QVItlplD" # Original Source: https://github.com/DEAP/deap/blob/master/examples/ga/onemax_numpy.py # # DEAP is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as # published by the Free Software Foundation, either version 3 of # the License, or (at your option) any later version. # # DEAP is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # You should have received a copy of the GNU Lesser General Public # License along with DEAP. If not, see <http://www.gnu.org/licenses/>. 
# + id="ct-pwA_aHMMa" colab={"base_uri": "https://localhost:8080/"} outputId="1bc4e1c7-1b84-4794-de92-cff039107a9d" #@title Install DEAP # !pip install deap --quiet # + id="epVL5qPDHCPW" #@title Imports import operator import math import random import numpy as np from deap import algorithms from deap import base from deap import creator from deap import tools from deap import gp # + id="O7RQPxvqf7Fw" #@title Protected Division to avoid errors random.seed(64) def protectedDiv(left, right): with np.errstate(divide='ignore',invalid='ignore'): x = np.divide(left, right) if isinstance(x, np.ndarray): x[np.isinf(x)] = 1 x[np.isnan(x)] = 1 elif np.isinf(x) or np.isnan(x): x = 1 return x # + id="N2H3aDt6gGMT" #@title Create Set of Operators pset = gp.PrimitiveSet("MAIN", 4) pset.addPrimitive(np.add, 2, name="vadd") pset.addPrimitive(np.subtract, 2, name="vsub") pset.addPrimitive(np.multiply, 2, name="vmul") pset.addPrimitive(protectedDiv, 2) pset.addPrimitive(np.negative, 1, name="vneg") pset.addPrimitive(np.cos, 1, name="vcos") pset.addPrimitive(np.sin, 1, name="vsin") pset.addEphemeralConstant("rand101", lambda: random.randint(-1,1)) pset.renameArguments(ARG0='x1') pset.renameArguments(ARG1='x2') pset.renameArguments(ARG2='x3') pset.renameArguments(ARG3='x4') # + id="s3r8TiOjHYyy" #@title Setup Fitness Criteria creator.create("FitnessMin", base.Fitness, weights=(-1.0,)) creator.create("Individual", gp.PrimitiveTree, fitness=creator.FitnessMin) # + id="WmdkSgaPHc8C" #@title Open the Toolbox toolbox = base.Toolbox() toolbox.register("expr", gp.genHalfAndHalf, pset=pset, min_=1, max_=2) toolbox.register("individual", tools.initIterate, creator.Individual, toolbox.expr) toolbox.register("population", tools.initRepeat, list, toolbox.individual) toolbox.register("compile", gp.compile, pset=pset) # + id="brtFWiHfhGHl" colab={"base_uri": "https://localhost:8080/"} outputId="fc311cf2-db87-42bb-e1ca-e162f2f30ede" x = np.random.rand(4, 10000) y = (x[3] + x[0]) / x[2] * x[1] # + 
id="84YphMPTHprp" #@title Define a Fitness/Evaluation Function def evalSymbReg(individual): #tree = gp.PrimitiveTree(individual) #print(str(tree)) # Transform the tree expression in a callable function func = toolbox.compile(expr=individual) # Evaluate the sum of squared difference between the expression # and the real function values : x**4 + x**3 + x**2 + x diff = np.sum((func(x[0],x[1],x[2],x[3]) - y)**2) return diff, # + id="DP0BRxxAH1uh" #@title Add Genetic Operators to Toolbox toolbox.register("evaluate", evalSymbReg) toolbox.register("select", tools.selTournament, tournsize=3) toolbox.register("mate", gp.cxOnePoint) toolbox.register("expr_mut", gp.genFull, min_=0, max_=2) toolbox.register('mutate', gp.mutUniform, expr=toolbox.expr_mut, pset=pset) # + id="UM87TusHv8ab" #@title Code to Plot the Expression Tree import matplotlib.pyplot as plt import networkx as nx def plot_expression(individual): options = {"node_size": 500, "alpha": 0.8} nodes, edges, labels = gp.graph(individual) g = nx.Graph() g.add_nodes_from(nodes) g.add_edges_from(edges) pos = nx.spring_layout(g) nx.draw_networkx_nodes(g, pos, **options) nx.draw_networkx_edges(g, pos, width=1.0, alpha=0.5) nx.draw_networkx_labels(g, pos, labels, font_size=9, font_color='k') plt.show() # + id="o50_7Bk6t9OC" #@title Defining the Evolution Function def eaSimple(population, toolbox, cxpb, mutpb, ngen, stats=None, halloffame=None): logbook = tools.Logbook() logbook.header = ['gen', 'nevals'] + (stats.fields if stats else []) # Evaluate the individuals with an invalid fitness invalid_ind = [ind for ind in population if not ind.fitness.valid] fitnesses = toolbox.map(toolbox.evaluate, invalid_ind) for ind, fit in zip(invalid_ind, fitnesses): ind.fitness.values = fit if halloffame is not None: halloffame.update(population) record = stats.compile(population) if stats else {} logbook.record(gen=0, nevals=len(invalid_ind), **record) print(logbook.stream) done = False # Begin the generational process for gen in 
range(1, ngen + 1): if done: return # Select the next generation individuals offspring = toolbox.select(population, len(population)) offspring = [toolbox.clone(ind) for ind in offspring] # Apply crossover and mutation on the offspring for i in range(1, len(offspring), 2): if random.random() < cxpb: offspring[i - 1], offspring[i] = toolbox.mate(offspring[i - 1], offspring[i]) del offspring[i - 1].fitness.values, offspring[i].fitness.values for i in range(len(offspring)): if random.random() < mutpb: offspring[i], = toolbox.mutate(offspring[i]) del offspring[i].fitness.values # Evaluate the individuals with an invalid fitness invalid_ind = [ind for ind in offspring if not ind.fitness.valid] fitnesses = toolbox.map(toolbox.evaluate, invalid_ind) for ind, fit in zip(invalid_ind, fitnesses): ind.fitness.values = fit if fit[0] <= 1: print("Solved") done = True # Update the hall of fame with the generated individuals if halloffame is not None: halloffame.update(offspring) plot_expression(halloffame[0]) # Replace the current population by the offspring population[:] = offspring # Append the current generation statistics to the logbook record = stats.compile(population) if stats else {} logbook.record(gen=gen, nevals=len(invalid_ind), **record) print(logbook.stream) # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="FC0-B2wAID9Z" outputId="2c3c5224-a0d8-4783-da2e-1ac8ef52e9c4" #@title Run the Evolution pop = toolbox.population(n=30000) hof = tools.HallOfFame(1) stats = tools.Statistics(lambda ind: ind.fitness.values) stats.register("avg", np.mean) stats.register("std", np.std) stats.register("min", np.min) stats.register("max", np.max) eaSimple(pop, toolbox, 0.5, 0.1, 40, stats, halloffame=hof)
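To make the selection step above concrete: `tools.selTournament(tournsize=3)` repeatedly draws three random individuals and keeps the fittest, which favors good individuals without ever sorting the whole population. A hand-rolled, simplified sketch (minimization, hypothetical fitness; DEAP's version differs in details such as drawing aspirants with replacement):

```python
import random

random.seed(0)

def tournament_select(population, fitness, k, tournsize=3):
    """Pick k individuals; each pick is the best of `tournsize` random draws."""
    chosen = []
    for _ in range(k):
        aspirants = random.sample(population, tournsize)
        chosen.append(min(aspirants, key=fitness))  # minimization
    return chosen

pop = list(range(100))        # individuals 0..99
fitness = lambda ind: ind     # smaller is fitter
selected = tournament_select(pop, fitness, k=1000)
# Selection pressure: the average selected fitness falls well below the
# population average of ~49.5 (the best-of-3 mean is near 25).
print(sum(selected) / len(selected))
```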
EDL_4_Genetic_Programming.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # A tutorial presentation of HUROT: an Homogeneous Unbalanced Regularized Optimal Transport model

# This tutorial will be completed over time. Any feedback or suggestion is welcome!

# ## Main ideas
#
# Non-regularized (un)balanced Optimal Transport (OT) is defined as the following optimization problem:
#
# $$ \mathrm{OT}_{\varphi}(\alpha,\beta) = \min_\pi \iint c(x,y) \mathrm{d}\pi + D_\varphi(\pi_1|\alpha) + D_\varphi(\pi_2|\beta), $$
#
# where $c$ is the cost function of our problem (in the current implementation $c(x,y) = \|x-y\|^2$; other costs will be added in the future), and $D_\varphi$ is a divergence to the marginal (e.g. the Kullback-Leibler divergence) which depends on the choice of an entropy function $\varphi$.
#
# This problem is **homogeneous** with respect to the couple of input measures $(\alpha,\beta)$, that is,
#
# $$ \mathrm{OT}_{\varphi}(\lambda \alpha, \lambda \beta) = \lambda \cdot \mathrm{OT}_{\varphi}(\alpha,\beta), $$
#
# and (probably more importantly) if $\pi$ is optimal for the couple $(\alpha,\beta)$, then $\lambda \pi$ is optimal for the couple $(\lambda \alpha,\lambda \beta)$.
# This is desirable behavior: rescaling the measures (e.g. changing the unit of measurement) should not change the structure of the solutions.
#
# However, adding the celebrated entropic regularization term $+\epsilon \mathrm{KL}(\pi | \alpha \otimes \beta)$ may break homogeneity in some situations.
# Interestingly, when considering either
# - Balanced optimal transport ($D_\varphi(\mu | \nu) = 0$ if $\mu=\nu$, $+\infty$ otherwise),
# - KL divergence to the marginal ($D_\varphi = \mathrm{KL}$),
#
# the objective value $\mathrm{OT}_{\epsilon,\varphi}$ is **not** homogeneous, but the optimal transport plans are homogeneous: the structure of the solutions is preserved.
# Furthermore, the so-called Sinkhorn divergence is also homogeneous.
#
# This does not hold for more general divergences though. For instance, using $D_\varphi = \mathrm{TV}$, the *Total Variation*, yields inhomogeneity.
# This is even worse when considering spatially varying divergences such as the one appearing in Optimal Transport with Boundary.
#
# ## Bringing back homogeneity
#
# In this repository, we propose a model of regularized (unbalanced) OT that is homogeneous.
# The key idea is to slightly modify the entropic regularization term.
# We propose two formulations, using:
#
# $$ +\frac{\epsilon}{2} \left( \mathrm{KL}\left(\pi | \frac{\alpha}{m(\alpha)} \otimes \beta \right) + \mathrm{KL}\left(\pi | \alpha \otimes \frac{\beta}{m(\beta)} \right) \right) $$
#
# or, somewhat more simply:
#
# $$ +\epsilon \mathrm{KL}\left(\pi | \frac{\alpha \otimes \beta}{\sqrt{m(\alpha)m(\beta)}} \right).$$
#
# These models are referred to as *harmonic* and *geometric* homogenizations, as they respectively involve the harmonic and the geometric means of the masses $m(\alpha), m(\beta)$ in the computations.
# They correspond to setting `mode_homogeneity="harmonic"` or `"geometric"` in the implementation.
# Setting `mode_homogeneity="std"` retrieves the standard (non-homogeneous) Unbalanced Regularized OT model.
#
# These models are homogeneous by construction; in particular, the corresponding Sinkhorn divergences are positive.
#
# **Note:** In the reference paper, only the harmonic regularization has been studied in depth.
# Actually, the two models are very similar; the harmonic one has the advantage of yielding a Sinkhorn divergence that does not need the introduction of a mass-bias term.
#
# ## Other options available
#
# - Our implementation was motivated by applications in Topological Data Analysis (TDA). The metrics used in TDA can be cast as problems of *Optimal Transport with Boundary*, a model proposed by Figalli and Gigli.
# This model corresponds to an Unbalanced OT problem involving a *spatially varying* marginal divergence.
# Adding the standard entropic regularization term (`mode_homogeneity="std"`) to this problem yields highly inhomogeneous behavior, suggesting that using a homogeneous entropic penalty (e.g. `mode_homogeneity="harmonic"`) is much more reliable.
# This mode is available through the option `mode_divergence="boundary"`.
# For now, it is restricted to the case where the space is the open half plane $\Omega = \{(t_1,t_2),\ t_1 < t_2\} \subset \mathbb{R}^2$ and its boundary is thus the diagonal $\partial \Omega = \{(t,t),\ t \in \mathbb{R}\}$ (the situation of interest in TDA).
# Other implementations will be provided in the future.
# - We propose an option `corrected_marginals`.
# Setting this parameter to `True` changes the marginal terms $D_\varphi(\pi_1 | \alpha)$ and $D_\varphi(\pi_2 | \beta)$ to $D_\varphi\left(\pi_1 | \alpha \sqrt{\frac{m(\beta)}{m(\alpha)}}\right)$ and $D_\varphi\left(\pi_2 | \beta \sqrt{\frac{m(\alpha)}{m(\beta)}}\right)$ respectively.
# The idea is that, in any case, if $\alpha$ and $\beta$ have different total masses, it is pointless to expect the marginals $\pi_1,\pi_2$ of $\pi$ to equal $\alpha$ and $\beta$ ($\pi_1$ and $\pi_2$ have the same total masses).
# Thus, the standard marginal constraints are somewhat "in contradiction", and it may be interesting to replace them by more feasible constraints: the measures proposed above have the same total mass, namely $\sqrt{m(\alpha)m(\beta)}$.
# # **Note:** The `corrected_marginals` approach has not been detailed in the reference paper. Actually, when `mode_homogeneity="geometric"`, it is very similar to compute # $\sqrt{m(\alpha) m(\beta)} \cdot \mathrm{OT}_{\epsilon,\varphi}\left(\frac{\alpha}{m(\alpha)}, \frac{\beta}{m(\beta)} \right)$. # ### Run the quickstart # + import numpy as np from utils import sk_div, hurot np.random.seed(1) # Define the measures as weights + locations. n, m = 5, 7 a = np.random.rand(n) b = np.random.rand(m) x = np.random.randn(n, 2) y = np.random.randn(m, 2) + np.array([.5,.5]) # Set the parameter for the OT cost and the Sinkhorn divergence: mode_divergence = "TV" # To use the total variation as the marginal divergence. mode_homogeneity = "harmonic" # To use the harmonic eps = 1 # the entropic regularization parameter value = sk_div(x, y, a, b, mode_divergence = mode_divergence, mode_homogeneity = mode_homogeneity, corrected_marginals = False, eps = eps, verbose=0, init="unif", nb_step=1000, crit=0., stab=True) P, f, g, ot_value = hurot(x, y, a, b, mode_divergence=mode_divergence, mode_homogeneity=mode_homogeneity, corrected_marginals=False, eps=eps, verbose=0, init="unif", nb_step=1000, crit=0., stab=True) # - print("The sinkhorn divergence is:\n", value) import matplotlib.pyplot as plt from utils import plot_transport_plan fig, ax = plt.subplots() plot_transport_plan(ax=ax, xs=x, xt=y, P=P) ax.scatter(x[:,0], x[:,1], marker='o', s=100*a, c='red') ax.scatter(y[:,0], y[:,1], marker='o', s=100*b, c='blue') # Now let see what happens if we rescale the weights by $\lambda = 1000$. 
P2 = hurot(x, y, 1000*a, 1000*b, mode_divergence=mode_divergence,
           mode_homogeneity=mode_homogeneity, corrected_marginals=False,
           eps=eps, verbose=0, init="unif", nb_step=1000, crit=0., stab=True)[0]

fig, ax = plt.subplots()
plot_transport_plan(ax=ax, xs=x, xt=y, P=P2)
ax.scatter(x[:,0], x[:,1], marker='o', s=100*a, c='red')
ax.scatter(y[:,0], y[:,1], marker='o', s=100*b, c='blue')

# As we used `mode_homogeneity="harmonic"`, the transport plan is visually unchanged (it has only been rescaled by $1000$).
#
# What happens if we use the standard model?

# +
mode_homogeneity_2 = "std"

P3 = hurot(x, y, a, b, mode_divergence=mode_divergence,
           mode_homogeneity=mode_homogeneity_2, corrected_marginals=False,
           eps=eps, verbose=0, init="unif", nb_step=1000, crit=0., stab=True)[0]

P4 = hurot(x, y, 1000*a, 1000*b, mode_divergence=mode_divergence,
           mode_homogeneity=mode_homogeneity_2, corrected_marginals=False,
           eps=eps, verbose=0, init="unif", nb_step=1000, crit=0., stab=True)[0]

# +
fig, axs = plt.subplots(1, 2)
ax = axs[0]
plot_transport_plan(ax=ax, xs=x, xt=y, P=P3)
ax.scatter(x[:,0], x[:,1], marker='o', s=100*a, c='red')
ax.scatter(y[:,0], y[:,1], marker='o', s=100*b, c='blue')
ax = axs[1]
plot_transport_plan(ax=ax, xs=x, xt=y, P=P4)
ax.scatter(x[:,0], x[:,1], marker='o', s=100*a, c='red')
ax.scatter(y[:,0], y[:,1], marker='o', s=100*b, c='blue')
# -

# As one can see, the transport plans are not proportional to each other (for instance, some mass on the bottom-right point is not transported when $\lambda = 1000$ but is transported for $\lambda = 1$).
#
# Inhomogeneity alters the _interpolation_ described by the optimal transport plans.
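The homogeneity of the *balanced* entropic plan mentioned in the introduction can also be probed without the `utils` module. A minimal balanced Sinkhorn in plain numpy (hypothetical sizes and a made-up `eps`) shows that scaling both marginals by $\lambda$ simply scales the optimal plan by $\lambda$:

```python
import numpy as np

rng = np.random.RandomState(0)
x = rng.randn(5, 2)
y = rng.randn(7, 2)
C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # squared-distance cost

def sinkhorn_plan(a, b, C, eps=1.0, n_iter=2000):
    """Balanced entropic OT plan via Sinkhorn iterations."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

a = np.full(5, 1 / 5)
b = np.full(7, 1 / 7)
P1 = sinkhorn_plan(a, b, C)
P2 = sinkhorn_plan(1000 * a, 1000 * b, C)

print(np.allclose(P2, 1000 * P1))  # the balanced plan is 1-homogeneous
```

It is precisely this property that breaks for more general marginal divergences such as TV, which is what the harmonic/geometric penalties above restore.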
tutorial_HUROT.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# + [markdown] id="VTHU63hYgwzT"
# summary: A tutorial to understand the process of building a Neural Matrix Factorization model from scratch in PyTorch on the MovieLens-1M dataset.
# id: neural-matrix-factorization-from-scratch-in-pytorch
# categories: Pytorch
# tags: scratch, movielens
# status: Published
# authors: <NAME>.
# Feedback Link: https://form.jotform.com/211377288388469

# + [markdown] id="uC6AkZdChapT"
# # Neural Matrix Factorization from scratch in PyTorch

# + [markdown] id="-67Oh2k3uCIW"
# <!-- ------------------------ -->
# ## What you'll learn
# Duration: 2
#
# - Create a MovieLens dataset class in PyTorch
# - Setting the evaluation criteria
# - Architecture of the neural matrix factorization model
# - Training and evaluating a neural matrix factorization model

# + [markdown] id="slD2OYQvIfhG"
# <!-- ------------------------ -->
# ## Dataset
# Duration: 5
#
# After downloading and expanding the MovieLens-1M dataset, we will create the dataset class as the first step.

# + id="kEYXkACmIe-B"
class Rating_Dataset(torch.utils.data.Dataset):
    def __init__(self, user_list, item_list, rating_list):
        super(Rating_Dataset, self).__init__()
        self.user_list = user_list
        self.item_list = item_list
        self.rating_list = rating_list

    def __len__(self):
        return len(self.user_list)

    def __getitem__(self, idx):
        user = self.user_list[idx]
        item = self.item_list[idx]
        rating = self.rating_list[idx]
        return (
            torch.tensor(user, dtype=torch.long),
            torch.tensor(item, dtype=torch.long),
            torch.tensor(rating, dtype=torch.float)
        )

# + [markdown] id="HOyBk8riI66H"
# The name of our class is *Rating_Dataset* and it inherits from the PyTorch *Dataset* base class.
# The *__getitem__* method helps us in two ways: it enforces the types `[long, long, float]`, and it returns the tensor version of the tuple for the given index.
#
# We also create a helper dataset class to put all the data-processing functions under a single umbrella. This class contains 5 methods:
#
# - *_reindex*: process the dataset to reindex userID and itemID, and set the rating as binary feedback
# - *_leave_one_out*: leave-one-out evaluation protocol from the paper [https://www.comp.nus.edu.sg/~xiangnan/papers/ncf.pdf](https://www.comp.nus.edu.sg/~xiangnan/papers/ncf.pdf)
# - *_negative_sampling*: randomly selects n negative examples for each positive one
# - *get_train_instance*: merge the examples of train data with negative samples and return the PyTorch dataloader object
# - *get_test_instance*: merge the examples of test data with negative samples and return the PyTorch dataloader object

# + [markdown] id="71Gd05RMI-5i"
# <!-- ------------------------ -->
# ## Evaluation criteria
# Duration: 5
#
# Next, we define the evaluation metrics. We use Hit Rate and NDCG.

# + id="VVd3dzFVI3Tw"
def hit(ng_item, pred_items):
    if ng_item in pred_items:
        return 1
    return 0


def ndcg(ng_item, pred_items):
    if ng_item in pred_items:
        index = pred_items.index(ng_item)
        return np.reciprocal(np.log2(index+2))
    return 0


def metrics(model, test_loader, top_k, device):
    HR, NDCG = [], []

    for user, item, label in test_loader:
        user = user.to(device)
        item = item.to(device)

        predictions = model(user, item)
        _, indices = torch.topk(predictions, top_k)
        recommends = torch.take(item, indices).cpu().numpy().tolist()

        ng_item = item[0].item()  # leave-one-out evaluation has only one item per user
        HR.append(hit(ng_item, recommends))
        NDCG.append(ndcg(ng_item, recommends))

    return np.mean(HR), np.mean(NDCG)

# + [markdown] id="ieH6c6OKJVrq"
# The metrics function is first loading the user and item variables to the right device (e.g.
# to the GPU if it is enabled), then getting predictions from the model, and finally calculating (and returning) the hit_rate_at_k and ndcg_at_k values.

# + [markdown] id="kzkunJLrJeae"
# <!-- ------------------------ -->
# ## Defining Model Architectures
# Duration: 10
#
# After defining the dataset class and the evaluation function, it is time to define the model architecture.
#
# We are going to use *Neural Collaborative Filtering for Personalized Ranking*. This model leverages the flexibility and non-linearity of neural networks to replace the dot products of matrix factorization, aiming to enhance the model's expressiveness. Specifically, the model is structured with two subnetworks, generalized matrix factorization (GMF) and an MLP, and models the interactions from two pathways instead of simple inner products. The outputs of these two networks are concatenated for the final prediction score calculation.
#
# ![nmf_architecture](img/nmf_architecture.png)
#
# In this architecture, we first create the user and item embedding layers for both the MLP and MF pathways and, with the help of PyTorch's ModuleList, build the MLP stack. Then, in the forward method, we pass the user and item index lists through the embedding layers, concatenate the MLP embeddings and multiply the MF embeddings element-wise. Finally, we concatenate the MLP and MF feature vectors and apply a logistic activation at the end.
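Before the full PyTorch module, the shape bookkeeping of the two pathways can be sketched in plain numpy (all sizes below are made up, and the real model also pushes the MLP vector through the fc layers before the final concatenation):

```python
import numpy as np

rng = np.random.RandomState(0)
batch, factor_mf, factor_mlp = 4, 8, 16

# Hypothetical embedding lookups for a batch of (user, item) pairs
user_mf, item_mf = rng.randn(batch, factor_mf), rng.randn(batch, factor_mf)
user_mlp, item_mlp = rng.randn(batch, factor_mlp), rng.randn(batch, factor_mlp)

mf_vector = user_mf * item_mf                               # GMF: element-wise product
mlp_vector = np.concatenate([user_mlp, item_mlp], axis=-1)  # MLP input: concatenation
# (in the real model, mlp_vector would go through the fc_layers here)
vector = np.concatenate([mlp_vector, mf_vector], axis=-1)

logits = vector @ rng.randn(vector.shape[1], 1)             # affine output
rating = 1 / (1 + np.exp(-logits))                          # logistic activation
print(vector.shape, rating.shape)
```

The final vector has `2 * factor_mlp + factor_mf` columns, which is why `affine_output` takes `args.layers[-1] + factor_num_mf` input features in the module below.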
# + id="2Ywo9NU-K7yy" class NeuMF(nn.Module): def __init__(self, args, num_users, num_items): super(NeuMF, self).__init__() self.num_users = num_users self.num_items = num_items self.factor_num_mf = args.factor_num self.factor_num_mlp = int(args.layers[0]/2) self.layers = args.layers self.dropout = args.dropout self.embedding_user_mlp = nn.Embedding(num_embeddings=self.num_users, embedding_dim=self.factor_num_mlp) self.embedding_item_mlp = nn.Embedding(num_embeddings=self.num_items, embedding_dim=self.factor_num_mlp) self.embedding_user_mf = nn.Embedding(num_embeddings=self.num_users, embedding_dim=self.factor_num_mf) self.embedding_item_mf = nn.Embedding(num_embeddings=self.num_items, embedding_dim=self.factor_num_mf) self.fc_layers = nn.ModuleList() for idx, (in_size, out_size) in enumerate(zip(args.layers[:-1], args.layers[1:])): self.fc_layers.append(torch.nn.Linear(in_size, out_size)) self.fc_layers.append(nn.ReLU()) self.affine_output = nn.Linear(in_features=args.layers[-1] + self.factor_num_mf, out_features=1) self.logistic = nn.Sigmoid() self.init_weight() def init_weight(self): nn.init.normal_(self.embedding_user_mlp.weight, std=0.01) nn.init.normal_(self.embedding_item_mlp.weight, std=0.01) nn.init.normal_(self.embedding_user_mf.weight, std=0.01) nn.init.normal_(self.embedding_item_mf.weight, std=0.01) for m in self.fc_layers: if isinstance(m, nn.Linear): nn.init.xavier_uniform_(m.weight) nn.init.xavier_uniform_(self.affine_output.weight) for m in self.modules(): if isinstance(m, nn.Linear) and m.bias is not None: m.bias.data.zero_() def forward(self, user_indices, item_indices): user_embedding_mlp = self.embedding_user_mlp(user_indices) item_embedding_mlp = self.embedding_item_mlp(item_indices) user_embedding_mf = self.embedding_user_mf(user_indices) item_embedding_mf = self.embedding_item_mf(item_indices) mlp_vector = torch.cat([user_embedding_mlp, item_embedding_mlp], dim=-1) mf_vector =torch.mul(user_embedding_mf, item_embedding_mf) for idx, _ in 
enumerate(range(len(self.fc_layers))): mlp_vector = self.fc_layers[idx](mlp_vector) vector = torch.cat([mlp_vector, mf_vector], dim=-1) logits = self.affine_output(vector) rating = self.logistic(logits) return rating.squeeze() # + [markdown] id="CcyotvWVLiBo" # <!-- ------------------------ --> # ## Training and evaluation # Duration: 10 # # We are using following hyperparameters to train the model: # - Learning rate is 0.001 # - Dropout rate is 0.2 # - Running for 10 epochs # - HitRate@10 and NDCG@10 # - 4 negative samples for each positive one # + id="lMKJ2FJrL27U" args = parser.parse_args("") device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") writer = SummaryWriter() # seed for Reproducibility seed_everything(args.seed) # load data ml_1m = pd.read_csv( DATA_PATH, sep="::", names = ['user_id', 'item_id', 'rating', 'timestamp'], engine='python') # set the num_users, items num_users = ml_1m['user_id'].nunique()+1 num_items = ml_1m['item_id'].nunique()+1 # construct the train and test datasets data = NCF_Data(args, ml_1m) train_loader = data.get_train_instance() test_loader = data.get_test_instance() # set model and loss, optimizer model = NeuMF(args, num_users, num_items) model = model.to(device) loss_function = nn.BCELoss() optimizer = optim.Adam(model.parameters(), lr=args.lr) # train, evaluation best_hr = 0 for epoch in range(1, args.epochs+1): model.train() # Enable dropout (if have). 
start_time = time.time() for user, item, label in train_loader: user = user.to(device) item = item.to(device) label = label.to(device) optimizer.zero_grad() prediction = model(user, item) loss = loss_function(prediction, label) loss.backward() optimizer.step() writer.add_scalar('loss/Train_loss', loss.item(), epoch) model.eval() HR, NDCG = metrics(model, test_loader, args.top_k, device) writer.add_scalar('Performance/HR@10', HR, epoch) writer.add_scalar('Performance/NDCG@10', NDCG, epoch) elapsed_time = time.time() - start_time print("Time elapsed for epoch {:03d}".format(epoch) + " is: " + time.strftime("%H:%M:%S", time.gmtime(elapsed_time))) print("HR: {:.3f}\tNDCG: {:.3f}".format(np.mean(HR), np.mean(NDCG))) if HR > best_hr: best_hr, best_ndcg, best_epoch = HR, NDCG, epoch if args.out: if not os.path.exists(MODEL_PATH): os.mkdir(MODEL_PATH) torch.save(model, '{}{}.pth'.format(MODEL_PATH, MODEL)) writer.close() # + [markdown] id="a09fgGhEMALC" # Average epoch time is about 90 seconds on an Nvidia T4 GPU. Both the hit rate and NDCG improve over the first 4 epochs and then plateau. # + [markdown] id="Hq-Tn1PsMXXq" # <!-- ------------------------ --> # ## Congratulations # Duration: 2 # # Congratulations! We covered a lot of content and hopefully you have a better understanding of how the neural matrix factorization model works by now.
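# As a quick sanity check, the Hit Rate and NDCG definitions used in the evaluation section can be verified on a toy ranked list in plain Python (the item ids below are made up for illustration):

```python
import math

def hit(ng_item, pred_items):
    # 1 if the held-out item appears anywhere in the top-k list
    return 1 if ng_item in pred_items else 0

def ndcg(ng_item, pred_items):
    # gain of a single relevant item at rank `index` is 1/log2(index + 2)
    if ng_item in pred_items:
        index = pred_items.index(ng_item)
        return 1.0 / math.log2(index + 2)
    return 0.0

top_k = [42, 7, 99]                  # hypothetical recommended item ids
print(hit(7, top_k))                 # 1: item 7 is in the list
print(round(ndcg(42, top_k), 4))     # 1.0: a hit at rank 0 gets full credit
print(round(ndcg(7, top_k), 4))      # 0.6309: reward decays with rank
print(ndcg(123, top_k))              # 0.0: a miss scores zero
```

# This matches the behaviour of the numpy versions above: a hit at the top of the list scores 1.0, and the score decays logarithmically with rank.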
# # ### What we've covered # - Creating a MovieLens dataset class in PyTorch # - Setting the evaluation criteria # - The architecture of the neural matrix factorization model # - Training and evaluating a neural matrix factorization model # # ### Resources # - [Colab notebook](https://sparsh-ai.github.io/rec-tutorials/matrixfactorization%20movielens%20pytorch%20scratch/2021/04/21/rec-algo-ncf-pytorch-pyy0715.html) # # ### Next Steps # - Notebook-based tutorials [here](https://sparsh-ai.github.io/rec-tutorials/) # - Read the NMF paper on [arXiv](https://arxiv.org/abs/1511.06443) # - Continue learning by following [this](https://medium.com/@lz2576/a-first-look-at-recommendation-system-with-matrix-factorization-and-neural-nets-7e21e54295c) Medium post # # #### Have a Question? # - https://form.jotform.com/211377288388469 # # #### GitHub Issues # - https://github.com/sparsh-ai/reco-tutorials/issues
_docs/nbs/reco-tutorials-prod-neural-matrix-factorization-from-scratch-in-pytorch.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from bizkit.anomaly import anomaly import pandas as pd import numpy as np df = pd.read_csv('exchange-2_cpc_results.csv') df.head() # <p style="font-size: 15px"><b>Step 1: Fill in missing data in the "value" and "date/timestamp" columns. Otherwise, a RuntimeError will be raised when the methods are called.</b></p> df['value'].fillna(0,inplace=True) # # <p style="font-size: 15px"><b>Step 2: Detect anomalies by running the detect_anomaly() method. Fill in the parameters for the Dataset, Date Column, and Value Column. The default value of the outliers_fraction parameter is 0.05; the user is free to set a different value.</b></p> model = anomaly() model.detect_anomaly(df, df['timestamp'], df['value'], outliers_fraction=0.03) # # <p style="font-size: 15px"><b>Step 3: Print the anomalies by running the print_anomaly() method. Fill in the parameters for the Dataset, Date Column, and Value Column. Make sure detect_anomaly() has been run before this method.</b></p> model.print_anomaly(df, df['timestamp'], df['value']) # # <p style="font-size: 15px"><b>Step 4: Plot the anomalies by running the plot_anomaly() method. Fill in the parameters for the Dataset, Date Column, and Value Column. The chart is presented via the Bokeh library. Red dots represent anomalies. The chart is interactive: hover over any point and its Date and Value will be displayed. Using the icons on the right-hand side, the chart can be zoomed in/out and reset. It can also be saved into a separate file. Make sure detect_anomaly() has been run before this method.</b></p> model.plot_anomaly(df, df['timestamp'], df['value']) # Please refer to the URL below for visualization: # https://nbviewer.jupyter.org/github/zzurich/bizkit/blob/master/Examples/anomaly_example.ipynb #
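# bizkit's internal algorithm is not shown here, but the role of the outliers_fraction parameter can be illustrated with a minimal stand-in detector that flags the given fraction of points farthest from the median. This is purely illustrative, not bizkit's actual implementation:

```python
def flag_outliers(values, outliers_fraction=0.05):
    """Toy stand-in: mark the `outliers_fraction` of points with the
    largest absolute deviation from the median as anomalies."""
    ordered = sorted(values)
    median = ordered[len(ordered) // 2]
    n_flag = max(1, int(len(values) * outliers_fraction))
    # indices of the n_flag most deviant points
    ranked = sorted(range(len(values)),
                    key=lambda i: abs(values[i] - median),
                    reverse=True)
    flagged = set(ranked[:n_flag])
    return [i in flagged for i in range(len(values))]

series = [1.0, 1.1, 0.9, 1.0, 9.5, 1.05, 1.0, 0.95, 1.1, 1.0]
flags = flag_outliers(series, outliers_fraction=0.1)
print(flags.index(True))  # 4 — the spike at 9.5 is the flagged point
```

# Raising outliers_fraction flags more points; lowering it flags fewer — which is why 0.03 in Step 2 above is stricter than the default 0.05.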
Examples/anomaly_example.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Finch usage # # Finch is a WPS server for climate indicators, but also has a few utilities to facilitate data handling. To get started, first instantiate the client. from birdy import WPSClient url = 'https://finch.crim.ca/wps' #url = 'https://pavics.ouranos.ca/twitcher/ows/proxy/finch/wps' #url = 'http://localhost:5000' wps = WPSClient(url) # The list of available processes can be displayed using the help function, as well as details about individual processes. # + # wps? # + # wps.frost_days? # - # To actually compute an indicator, we need to specify the path to the netCDF file used as input for the calculation of the indicator. To compute `frost_days`, we need a time series of daily minimum temperature. Here we'll use a small test file. Note that here we're using an OPeNDAP link, but it could also be an url to a netCDF file, or the path to a local file on disk. We then simply call the indicator. The response is an object that can poll the server to inquire about the status of the process. This object can use two modes: # - synchronous: it will wait for the server's response before returning; or # - asynchronous: it will return immediately, but without the actual output from the process. # # Here, since we're applying the process on a small test file, we're using the default synchronous mode. For long computations, use the asynchronous mode to avoid time-out errors. The asynchronous mode is activated by setting the `progress` attribute of the WPS client to True. 
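# Schematically, the two modes differ only in whether the client keeps polling the server before returning. A pure-Python illustration of that pattern (not birdy's actual internals — FakeResponse is a made-up stand-in):

```python
import time

class FakeResponse:
    """Toy stand-in for a WPS response that succeeds after a few polls."""
    def __init__(self, polls_needed=3):
        self._polls_left = polls_needed
    def poll(self):
        self._polls_left -= 1
        return "ProcessSucceeded" if self._polls_left <= 0 else "ProcessAccepted"

def run_sync(resp, delay=0.01):
    # synchronous mode: block, polling until the server reports completion
    status = resp.poll()
    while status != "ProcessSucceeded":
        time.sleep(delay)
        status = resp.poll()
    return status

def run_async(resp):
    # asynchronous mode: return immediately; the caller checks status later
    return resp.poll()

print(run_sync(FakeResponse()))   # ProcessSucceeded
print(run_async(FakeResponse()))  # ProcessAccepted — not done yet
```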
tasmin = "https://pavics.ouranos.ca/twitcher/ows/proxy/thredds/dodsC/birdhouse/testdata/flyingpigeon/cmip3/tasmin.sresa2.miub_echo_g.run1.atm.da.nc" resp = wps.frost_days(tasmin) print(resp.status) out = resp.get() print(out) # The `get` method returns a `NamedTuple` object with all the WPS outputs, either as references to files or actual content. To copy the file to the local disk, you can use the `getOutput` method. resp.getOutput('/tmp/out.nc') import xarray as xr xr.open_dataset('/tmp/out.nc') # The birdy client offers a quicker way to download and open the files automatically using `asobj=True`, as long as the file format is known to birdy. ds, log = resp.get(asobj=True) ds print(log)
docs/notebooks/xclim_training/finch.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # this script is meant to deal with the data of 8 recognition runs and generate models saved in corresponding folder ''' input: cfg.session=ses1 cfg.modelFolder=f"{cfg.subjects_dir}/{cfg.subjectName}/{cfg.session}_recognition/clf/" cfg.dataFolder=f"{cfg.subjects_dir}/{cfg.subjectName}/{cfg.session}_recognition/" output: models in cfg.modelFolder ''' import os import sys sys.path.append('/gpfs/milgram/project/turk-browne/projects/rtSynth_rt/') import argparse import numpy as np import nibabel as nib import scipy.io as sio from subprocess import call from nibabel.nicom import dicomreaders import pydicom as dicom # type: ignore import time from glob import glob import shutil from nilearn.image import new_img_like import joblib import rtCommon.utils as utils from rtCommon.utils import loadConfigFile from rtCommon.fileClient import FileInterface import rtCommon.projectUtils as projUtils from rtCommon.imageHandling import readRetryDicomFromFileInterface, getDicomFileName, convertDicomImgToNifti # argParser = argparse.ArgumentParser() # argParser.add_argument('--config', '-c', default='sub001.ses1.toml', type=str, help='experiment file (.json or .toml)') # args = argParser.parse_args() from rtCommon.cfg_loading import mkdir,cfg_loading cfg = cfg_loading("sub001.ses1.toml") sys.path.append('/gpfs/milgram/project/turk-browne/projects/rtSynth_rt/expScripts/recognition/') from recognition_dataAnalysisFunctions import recognition_preprocess,minimalClass,behaviorDataLoading # + ''' This script is adapted from classRegion.py Purpose: to train and save the classifiers for all ROIs ''' ''' from the recognition exp dir, run batchRegions.sh, it will run the script classRegion.sh, which is just a feeder for classRegion.py for all ROI/parcels across both wang and 
schaefer. classRegion.py simply runs a runwise cross-validated classifier across the runs of recognition data, then stores the average accuracy of the ROI it was assigned in an numpy array. This is stored within the subject specific folder (e.g. wang2014/0111171/output/roi25_rh.npy ) input: 1 subject: which subject 2 dataloc: neurosketch or realtime 3 roiloc: schaefer2018 or wang2014 4 roinum: number of rois you want 5 roihemi: which hemisphere ''' import nibabel as nib import numpy as np import os import sys import time import pandas as pd from sklearn.linear_model import LogisticRegression # What subject are you running subject = "sub001" #sys.argv[1] dataSource = "realtime" recognition_dir = '/gpfs/milgram/project/turk-browne/projects/rtSynth_rt/subjects/sub001/ses1/recognition/' #sys.argv[1] print("NO ROI LOCATION ENTERED: Using radius of wang2014") roiloc = "wang" print("NO DATASOURCE ENTERED: Using original neurosketch data") dataSource = 'neurosketch' print("NO ROI SPECIFIED: Using roi number 1") roinum="1" if roiloc == "wang2014": try: roihemi = "_{}".format("lh") print("Since this is wang2014, we need a hemisphere, in this case {}".format(roihemi)) except: print("this is wang 2014, so we need a hemisphere, but one was not specified") assert 1 == 2 else: roihemi="" print("Running subject {}, with {} as a data source, {} roi #{} {}".format(subject, dataSource, roiloc, roinum, roihemi)) # + # dataSource depending, there are a number of keywords to fill in: # ses: which day of data collection # run: which run number on that day (single digit) # phase: 12, 34, or 56 # sub: subject number if dataSource == "neurosketch": funcdata = "/gpfs/milgram/project/turk-browne/jukebox/ntb/projects/sketchloop02/subjects/{sub}_neurosketch/data/nifti/realtime_preprocessed/{sub}_neurosketch_recognition_run_{run}.nii.gz" metadata = "/gpfs/milgram/project/turk-browne/jukebox/ntb/projects/sketchloop02/data/features/recog/metadata_{sub}_V1_{phase}.csv" anat = 
"/gpfs/milgram/project/turk-browne/jukebox/ntb/projects/sketchloop02/subjects/{sub}_neurosketch/data/nifti/{sub}_neurosketch_anat_mprage_brain.nii.gz" elif dataSource == "realtime": funcdata = "{recognition_dir}run{run}.nii.gz" metadata = "{recognition_dir}{subject}_{run_i}.csv" anat = "$TO_BE_FILLED" else: funcdata = "/gpfs/milgram/project/turk-browne/projects/rtTest/searchout/feat/{sub}_pre.nii.gz" metadata = "/gpfs/milgram/project/turk-browne/jukebox/ntb/projects/sketchloop02/data/features/recog/metadata_{sub}_V1_{phase}.csv" anat = "$TO_BE_FILLED" outloc = "/gpfs/milgram/project/turk-browne/projects/rtTest/searchout" starttime = time.time() def Wait(waitfor, delay=1): while not os.path.exists(waitfor): time.sleep(delay) print('waiting for {}'.format(waitfor)) def normalize(X): X = X - X.mean(3) return X def Class(data, bcvar): metas = bcvar[0] data4d = data[0] print(data4d.shape) accs = [] for run in range(6): testX = data4d[run] testY = metas[run] trainX = data4d[np.arange(6) != run] trainX = trainX.reshape(trainX.shape[0]*trainX.shape[1], -1) trainY = [] for meta in range(6): if meta != run: trainY.extend(metas[meta]) # use the held-in runs' labels clf = LogisticRegression(penalty='l2',C=1, solver='lbfgs', max_iter=1000, multi_class='multinomial').fit(trainX, trainY) # Monitor progress by printing accuracy (only useful if you're running a test set) acc = clf.score(testX, testY) accs.append(acc) return np.mean(accs) # phasedict = dict(zip([1,2,3,4,5,6,7,8],["12", "12", "34", "34", "56", "56"])) phasedict = dict(zip([1,2,3,4,5,6,7,8],[cfg.actualRuns])) imcodeDict={"A": "bed", "B": "Chair", "C": "table", "D": "bench"} mask = nib.load(f"{cfg.mask_dir}{roiloc}_roi{roinum}{roihemi}.nii.gz").get_data() mask = mask.astype(int) # say some things about the mask. print('mask dimensions: {}'.
format(mask.shape)) print('number of voxels in mask: {}'.format(np.sum(mask))) # + run_i,run=0,cfg.actualRuns[0] print(run, end='--') # retrieve from the dictionary which phase it is, assign the session # Build the path for the preprocessed functional data this4d = f"{cfg.recognition_dir}run{run}.nii.gz" # run data # Read in the metadata, and reduce it to only the TR values from this run, add to a list thismeta = pd.read_csv(f"{cfg.recognition_dir}{cfg.subjectName}_{run_i+1}.csv") # thismeta = thismeta[thismeta['run_num'] == int(run)] TR_num = list(thismeta.TR.astype(int)) labels = list(thismeta.Item) labels = [None if type(label)==float else imcodeDict[label] for label in labels] # - print("LENGTH OF TR: {}".format(len(TR_num))) # Load the functional data runIm = nib.load(this4d) affine_mat = runIm.affine runImDat = runIm.get_data() # Use the TR numbers to select the correct features features = [runImDat[:,:,:,n+2] for n in TR_num] features = np.array(features) features = features[:, mask==1] print("shape of features", features.shape, "shape of mask", mask.shape) featmean = features.mean(1)[..., None] features = features - featmean features = np.expand_dims(features, 0) # Compile preprocessed data and corresponding indices metas = [] runs=[] for run_i,run in enumerate(cfg.actualRuns): print(run, end='--') # Build the path for the preprocessed functional data this4d = f"{cfg.recognition_dir}run{run}.nii.gz" # run data # Read in the metadata, and reduce it to only the TR values from this run, add to a list thismeta = pd.read_csv(f"{cfg.recognition_dir}{cfg.subjectName}_{run_i+1}.csv") TR_num = list(thismeta.TR.astype(int)) labels = list(thismeta.Item) labels = [None if type(label)==float else imcodeDict[label] for label in labels] print("LENGTH OF TR: {}".format(len(TR_num))) # Load the functional data runIm = nib.load(this4d) affine_mat = runIm.affine runImDat = runIm.get_data() # Use the TR numbers to select the correct features features = [runImDat[:,:,:,n+2] for 
n in TR_num] features = np.array(features) features = features[:, mask==1] print("shape of features", features.shape, "shape of mask", mask.shape) featmean = features.mean(1)[..., None] features = features - featmean # Append both so we can use it later metas.append(labels) runs.append(features) # if run_i == 0 else np.concatenate((runs, features)) # + def Class(data, bcvar): metas = bcvar data4d = data accs = [] for curr_run in range(8): testX = data4d[curr_run] testY = metas[curr_run] trainX=None for train_run in range(8): if train_run!=curr_run: trainX = data4d[train_run] if type(trainX)!=np.ndarray else np.concatenate((trainX, data4d[train_run]),axis=0) trainY = [] for train_run in range(8): if train_run!=curr_run: trainY.extend(metas[train_run]) # remove nan type id=[type(i)==str for i in trainY] trainY=[i for i in trainY if type(i)==str] trainX=trainX[id] clf = LogisticRegression(penalty='l2',C=1, solver='lbfgs', max_iter=1000, multi_class='multinomial').fit(trainX, trainY) # Monitor progress by printing accuracy (only useful if you're running a test set) id=[type(i)==str for i in testY] testY=[i for i in testY if type(i)==str] testX=testX[id] acc = clf.score(testX, testY) accs.append(acc) return np.mean(accs) accs = Class(runs, metas) # runs holds the per-run features, metas the per-run labels # - accs # + command=f"bash {cfg.recognition_expScripts_dir}batchRegions.sh sub001.ses1.toml" command # - cfg.recognition_expScripts_dir f"{cfg.mask_dir}{roiloc}_{roinum}{roihemi}.nii.gz" brain=np.load(f"{cfg.recognition_dir}brain_run10.npy") print(brain.shape) mask=nib.load(f"{cfg.recognition_dir}chosenMask.nii.gz").get_data() print(mask.shape)
archive/testClassRegion.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # <h1><center> Facial Emotion Recognition - XCeption Model</center></h1> # <center> A project for the French Employment Agency </center> # <center> Telecom ParisTech 2018-2019 </center> # # I. Context # The aim of this notebook is to explore facial emotion recognition techniques from a live webcam video stream. # # The data set used for training is the Kaggle FER2013 emotion recognition data set: https://www.kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge/data # # The models explored include: # - Manual filters # - Deep Learning Architectures # - DenseNet Inspired Architectures # # This model will be combined with voice emotion recognition as well as psychological traits extracted from text inputs, and should provide a benchmark and a deep analysis of both verbal and non-verbal insights for candidates seeking a job, covering their performance during an interview. # # II.
General imports # Versions used : # + active="" # Python : 3.6.5 # Tensorflow : 1.10.1 # Keras : 2.2.2 # Numpy : 1.15.4 # OpenCV : 4.0.0 # + from __future__ import division # must precede all other statements in the cell ### General imports ### import numpy as np import pandas as pd import matplotlib.pyplot as plt from time import time from time import sleep import re import os import argparse from collections import OrderedDict import matplotlib.animation as animation ### Image processing ### from scipy.ndimage import zoom from scipy.spatial import distance import imutils from scipy import ndimage import cv2 import dlib from imutils import face_utils ### CNN models ### import keras from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img from keras.callbacks import TensorBoard from keras.models import Sequential from keras.layers.core import Dense, Dropout, Activation, Flatten from keras.layers.convolutional import Conv2D, MaxPooling2D, SeparableConv2D from keras.utils import np_utils from keras.regularizers import l2#, activity_l2 from keras.optimizers import SGD, RMSprop from keras.utils import to_categorical from keras.layers.normalization import BatchNormalization from keras import models from keras.utils.vis_utils import plot_model from keras.layers import Input, GlobalAveragePooling2D from keras.models import Model from tensorflow.keras import layers ### Build SVM models ### from sklearn.preprocessing import OneHotEncoder from sklearn.model_selection import train_test_split from sklearn.metrics import accuracy_score from sklearn import svm ### Save trained models ### import h5py from keras.models import model_from_json import pickle # - # # III.
Import datas path = '/Users/maelfabien/filrouge_pole_emploi/Video/' local_path = '/Users/maelfabien/Desktop/LocalDB/Videos/' X_train = np.load(local_path + "X_train.npy") X_test = np.load(local_path + "X_test.npy") y_train = np.load(local_path + "y_train.npy") y_test = np.load(local_path + "y_test.npy") shape_x = 48 shape_y = 48 nRows,nCols,nDims = X_train.shape[1:] input_shape = (nRows, nCols, nDims) classes = np.unique(y_train) nClasses = len(classes) # # IV. XCeption Model def entry_flow(inputs) : x = Conv2D(32, 3, strides = 2, padding='same')(inputs) x = BatchNormalization()(x) x = Activation('relu')(x) x = Conv2D(64,3,padding='same')(x) x = BatchNormalization()(x) x = Activation('relu')(x) previous_block_activation = x for size in [128, 256, 728] : x = Activation('relu')(x) x = SeparableConv2D(size, 3, padding='same')(x) x = BatchNormalization()(x) x = Activation('relu')(x) x = SeparableConv2D(size, 3, padding='same')(x) x = BatchNormalization()(x) x = MaxPooling2D(3, strides=2, padding='same')(x) residual = Conv2D(size, 1, strides=2, padding='same')(previous_block_activation) x = keras.layers.Add()([x, residual]) previous_block_activation = x return x def middle_flow(x, num_blocks=8) : previous_block_activation = x for _ in range(num_blocks) : x = Activation('relu')(x) x = SeparableConv2D(728, 3, padding='same')(x) x = BatchNormalization()(x) x = Activation('relu')(x) x = SeparableConv2D(728, 3, padding='same')(x) x = BatchNormalization()(x) x = Activation('relu')(x) x = SeparableConv2D(728, 3, padding='same')(x) x = BatchNormalization()(x) x = keras.layers.Add()([x, previous_block_activation]) previous_block_activation = x return x def exit_flow(x, num_classes=7) : previous_block_activation = x x = Activation('relu')(x) x = SeparableConv2D(728, 3, padding='same')(x) x = BatchNormalization()(x) x = Activation('relu')(x) x = SeparableConv2D(1024, 3, padding='same')(x) x = BatchNormalization()(x) x = MaxPooling2D(3, strides=2, padding='same')(x) residual = 
Conv2D(1024, 1, strides=2, padding='same')(previous_block_activation) x = keras.layers.Add()([x, residual]) x = Activation('relu')(x) x = SeparableConv2D(728, 3, padding='same')(x) x = BatchNormalization()(x) x = Activation('relu')(x) x = SeparableConv2D(1024, 3, padding='same')(x) x = BatchNormalization()(x) x = GlobalAveragePooling2D()(x) x = Dense(num_classes, activation='softmax')(x) return x inputs = Input(shape=(shape_x, shape_y, 1)) outputs = exit_flow(middle_flow(entry_flow(inputs))) xception = Model(inputs, outputs) plot_model(xception, to_file='model_images/model_plot_4.png', show_shapes=True, show_layer_names=True) # <img src='model_images/model_plot_4.png'> xception.summary() # + xception.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) batch_size = 128 epochs = 150 # + datagen = ImageDataGenerator( zoom_range=0.2, # randomly zoom into images rotation_range=10, # randomly rotate images in the range (degrees, 0 to 180) width_shift_range=0.1, # randomly shift images horizontally (fraction of total width) height_shift_range=0.1, # randomly shift images vertically (fraction of total height) horizontal_flip=True, # randomly flip images vertical_flip=False) # randomly flip images history = xception.fit_generator( datagen.flow(X_train, y_train, batch_size=batch_size), steps_per_epoch=int(np.ceil(X_train.shape[0] / float(batch_size))), epochs = epochs, validation_data=(X_test, y_test) ) # - # # V. 
Evaluate the model # + # Plotting accuracy and loss curves for 2nd model # Loss Curves plt.figure(figsize=[8,6]) plt.plot(history.history['loss'],'r',linewidth=2.0) plt.plot(history.history['val_loss'],'b',linewidth=2.0) plt.legend(['Training loss', 'Validation Loss'],fontsize=18) plt.xlabel('Epochs ',fontsize=16) plt.ylabel('Loss',fontsize=16) plt.title('Loss Curves',fontsize=16) # Accuracy Curves plt.figure(figsize=[8,6]) plt.plot(history.history['acc'],'r',linewidth=2.0) plt.plot(history.history['val_acc'],'b',linewidth=2.0) plt.legend(['Training Accuracy', 'Validation Accuracy'],fontsize=18) plt.xlabel('Epochs ',fontsize=16) plt.ylabel('Accuracy',fontsize=16) plt.title('Accuracy Curves',fontsize=16) # - # # VI. Save the model # save the model architecture and weights json_string = xception.to_json() xception.save_weights(local_path + 'savedmodels/model_xception.h5') open(local_path + 'savedmodels/model_xception.json', 'w').write(json_string) #xception.save_weights(local_path + 'savedmodels/Emotion_Face_Detection_Model.h5') # # VII. Sources # - Visualization : https://github.com/JostineHo/mememoji/blob/master/data_visualization.ipynb # - State of the art Architecture : https://github.com/amineHorseman/facial-expression-recognition-using-cnn # - Eyes Tracking : https://www.pyimagesearch.com/2017/04/24/eye-blink-detection-opencv-python-dlib/ # - Face Alignment : https://www.pyimagesearch.com/2017/05/22/face-alignment-with-opencv-and-python/ # - C.Pramerdorfer, and M.Kampel. Facial Expression Recognition using Convolutional Neural Networks: State of the Art. Computer Vision Lab, TU Wien. https://arxiv.org/pdf/1612.02903.pdf # - A Brief Review of Facial Emotion Recognition Based # on Visual Information : https://www.mdpi.com/1424-8220/18/2/401/pdf # - Going deeper in facial expression recognition using deep neural networks : https://ieeexplore.ieee.org/document/7477450
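# A side note on the SeparableConv2D layers used throughout the architecture above: a depthwise separable convolution factorizes a standard convolution into a per-channel spatial filter plus a 1×1 pointwise mix, which sharply cuts the parameter count. A quick back-of-the-envelope comparison at the middle-flow width (bias terms ignored):

```python
def conv_params(k, c_in, c_out):
    # standard 2D convolution: one k×k×c_in filter per output channel
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    # depthwise (k×k filter per input channel) + pointwise (1×1, c_in -> c_out)
    return k * k * c_in + c_in * c_out

# 728 is the channel width used in the middle flow above
std = conv_params(3, 728, 728)
sep = separable_conv_params(3, 728, 728)
print(std, sep, round(std / sep, 1))  # 4769856 536536 8.9
```

# Roughly a 9× parameter saving per layer at this width, which is what makes a deep middle flow of 8 blocks affordable.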
03-Video/Notebooks/07-Xception.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # To fill in, follow DETR colab # # found here https://colab.research.google.com/github/facebookresearch/detr/blob/colab/notebooks/DETR_panoptic.ipynb#scrollTo=QD4mQxHIqGCr # + import torch from torchvision import transforms as T from PIL import Image path = 'sample_pics/idris.jpg' pic = Image.open(path) # - model = torch.hub.load('facebookresearch/detr', 'detr_resnet50', pretrained=True) model.eval() transform = T.Compose([ T.Resize(800), T.ToTensor(), T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]) # + tensored = transform(pic) # use the full pipeline (resize + normalize), not a bare ToTensor tensored = tensored.unsqueeze(0) a = model(tensored) # - a['pred_logits'].shape a['pred_boxes'].shape
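# The pretrained COCO model returns a fixed set of 100 query slots per image: `pred_logits` holds the class scores for each slot and `pred_boxes` holds each slot's box as normalized (center_x, center_y, width, height). Converting a box to corner coordinates, as the DETR colab does, is straightforward; a plain-Python sketch with a made-up box:

```python
def box_cxcywh_to_xyxy(box):
    # (center_x, center_y, w, h) -> (x_min, y_min, x_max, y_max), all normalized
    cx, cy, w, h = box
    return (cx - 0.5 * w, cy - 0.5 * h, cx + 0.5 * w, cy + 0.5 * h)

def rescale(box_xyxy, img_w, img_h):
    # scale normalized corners to pixel coordinates of the original image
    x0, y0, x1, y1 = box_xyxy
    return (x0 * img_w, y0 * img_h, x1 * img_w, y1 * img_h)

box = (0.5, 0.5, 0.2, 0.4)                  # hypothetical prediction
corners = box_cxcywh_to_xyxy(box)           # ≈ (0.4, 0.3, 0.6, 0.7)
pixels = rescale(corners, 640, 480)         # ≈ (256.0, 144.0, 384.0, 336.0)
print(corners, pixels)
```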
detr.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # <font color=blue> Problem_01 : # ### <font color=red>Experiment with LDA and QDA import numpy as np from scipy.optimize import minimize from scipy.io import loadmat from numpy.linalg import det, inv from math import sqrt, pi import scipy.io import matplotlib.pyplot as plt import pickle import sys # + X,y,Xtest,ytest = pickle.load(open('diabetes.pickle','rb'),encoding = 'latin1') # X --> Xtrain ( 242 by 64) --> Input data matrix # y --> Ytrain ( 242 by 1) --> Target vector # Xtest --> ( 200 by 64 ) --> Testing # ytest --> ( 200 by 1 ) --> Testing # So, X and y will be used for training # - print("Shape of Xtrain : ",X.shape) print("Shape of ytrain : ",y.shape) print("---------------------------") print("Shape of Xtest : ",Xtest.shape) print("Shape of ytest : ",ytest.shape) def ldaLearn(X,y): ''' Inputs X - a N x d matrix with each row corresponding to a training example y - a N x 1 column vector indicating the labels for each training example Outputs means - A d x k matrix containing learnt means for each of the k classes covmat - A single d x d learnt covariance matrix ''' labels = np.unique(y) total_label = labels.shape[0] total_feature = X.shape[1] means = np.zeros([total_label,total_feature]) r = 0 for i in labels: data = X[np.where(y == i)[0],] m = np.mean(data,axis=0) means[r,] = m r +=1 X_transpose = np.transpose(X) covmat = np.cov(X_transpose) return means,covmat def qdaLearn(X,y): ''' Inputs X - a N x d matrix with each row corresponding to a training example y - a N x 1 column vector indicating the labels for each training example Outputs means - A d x k matrix containing learnt means for each of the k classes covmats - A list of k d x d learnt covariance matrices for each of the k classes ''' # IMPLEMENT THIS METHOD covmats = []
labels = np.unique(y) total_label = labels.shape[0] total_feature = X.shape[1] means = np.zeros([total_label,total_feature]) r = 0 for i in labels: data = X[np.where(y == i)[0],] m = np.mean(data,axis=0) means[r,] = m r +=1 data_transpose = np.transpose(data) covmats.append(np.cov(data_transpose)) return means,covmats # + def ldaTest(means,covmat,Xtest,ytest): # Inputs # means, covmat - parameters of the LDA model # Xtest - a N x d matrix with each row corresponding to a test example # ytest - a N x 1 column vector indicating the labels for each test example # Outputs # acc - A scalar accuracy value # ypred - N x 1 column vector indicating the predicted labels # IMPLEMENT THIS METHOD r = Xtest.shape[0] c = means.shape[0] res = np.zeros((r,c)) f = 1/np.sqrt((2*pi)**means.shape[1]*det(covmat)) for j in range(means.shape[0]): res[:,j] = f * np.exp(-0.5*np.array([np.dot(np.dot((Xtest[i,:] - means[j,:]),inv(covmat)),np.transpose(Xtest[i,:] - means[j,:])) for i in range(Xtest.shape[0])])) ypred = np.argmax(res,axis=1) + 1 res = (ypred == ytest.ravel()) acc_data = np.where(res)[0] acc = len(acc_data) return float(acc)/len(ytest),ypred # + def qdaTest(means,covmats,Xtest,ytest): # Inputs # means, covmats - parameters of the QDA model # Xtest - a N x d matrix with each row corresponding to a test example # ytest - a N x 1 column vector indicating the labels for each test example # Outputs # acc - A scalar accuracy value # ypred - N x 1 column vector indicating the predicted labels # IMPLEMENT THIS METHOD r = Xtest.shape[0] c = means.shape[0] res = np.zeros((r,c)) for j in range(means.shape[0]): f = 1/np.sqrt((2*pi)**means.shape[1]*det(covmats[j])) res[:,j] = f * np.exp(-0.5*np.array([np.dot(np.dot((Xtest[i,:] - means[j,:]),inv(covmats[j])),np.transpose(Xtest[i,:] - means[j,:])) for i in range(Xtest.shape[0])])) ypred = np.argmax(res,axis=1) + 1 res = (ypred == ytest.ravel()) acc_data = np.where(res)[0] acc = len(acc_data) return float(acc)/len(ytest),ypred # + # Main script 
# Problem 1 # load the sample data X,y,Xtest,ytest = pickle.load(open('sample.pickle','rb'),encoding='latin1') # LDA means,covmat = ldaLearn(X,y) ldaacc,ldares = ldaTest(means,covmat,Xtest,ytest) print('LDA Accuracy = '+str(ldaacc)) # QDA means,covmats = qdaLearn(X,y) qdaacc,qdares = qdaTest(means,covmats,Xtest,ytest) print('QDA Accuracy = '+str(qdaacc)) # + # plotting boundaries x1 = np.linspace(-5,20,100) x2 = np.linspace(-5,20,100) xx1,xx2 = np.meshgrid(x1,x2) xx = np.zeros((x1.shape[0]*x2.shape[0],2)) xx[:,0] = xx1.ravel() xx[:,1] = xx2.ravel() fig = plt.figure(figsize=[18,9]) plt.subplot(1, 2, 1) zacc,zldares = ldaTest(means,covmat,xx,np.zeros((xx.shape[0],1))) plt.contourf(x1,x2,zldares.reshape((x1.shape[0],x2.shape[0])),alpha=0.3) plt.scatter(Xtest[:,0],Xtest[:,1],c=ytest[:,0]) plt.title('LDA') plt.subplot(1, 2, 2) zacc,zqdares = qdaTest(means,covmats,xx,np.zeros((xx.shape[0],1))) plt.contourf(x1,x2,zqdares.reshape((x1.shape[0],x2.shape[0])),alpha=0.3) plt.scatter(Xtest[:,0],Xtest[:,1],c=ytest[:,0]) plt.title('QDA')
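As a self-contained sanity check of the density-comparison rule used in `ldaTest`/`qdaTest` above, the same argmax-over-class-densities classification can be exercised on synthetic two-class Gaussian data (toy values invented here, not the `sample.pickle` data):

```python
import numpy as np
from numpy.linalg import det, inv

def gaussian_density(X, mean, cov):
    # multivariate normal density, the same quantity ldaTest/qdaTest compute per class
    d = mean.shape[0]
    norm = 1.0 / np.sqrt((2 * np.pi) ** d * det(cov))
    diff = X - mean
    exponent = -0.5 * np.sum((diff @ inv(cov)) * diff, axis=1)
    return norm * np.exp(exponent)

rng = np.random.default_rng(0)
# two well-separated synthetic classes (toy stand-in for the real data)
X0 = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
X1 = rng.normal(loc=[4.0, 4.0], scale=0.5, size=(100, 2))
X = np.vstack([X0, X1])
y = np.array([1] * 100 + [2] * 100)

# LDA assumption: a single covariance pooled over all centered data
means = np.array([X0.mean(axis=0), X1.mean(axis=0)])
covmat = np.cov(np.vstack([X0 - means[0], X1 - means[1]]).T)

# classify each point by the larger class-conditional density (argmax + 1, as above)
dens = np.column_stack([gaussian_density(X, means[k], covmat) for k in range(2)])
ypred = np.argmax(dens, axis=1) + 1
acc = float((ypred == y).mean())
```

On classes this well separated the accuracy should be essentially perfect; swapping `covmat` for a per-class covariance reproduces the QDA variant.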
Project_01/My_Notebooks/.ipynb_checkpoints/ML_01_Problem_01-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Cleaning PLE DataSet # # ## Introduction # In this section of the project we shall clean the data so that we can do analysis in other sections. # # Upon inspection of the Dataset, a few issues were encountered. # - The files are in excel format, yet we need to use csv # - There are multiple sheets, one for each district # - Some districts have different columns and column names # - Some records have Non numeric values # # A Utils file was created for different helper functions used. The implementation of these methods is provided for clarity. # A settings file is used too to handle configurations. """ Import different packages required """ import utils import settings import pandas as pd # Download The File incase it is not already downloaded. utils.download_ple() # *utils.download_ple* # ```python # def download_ple(): # if not os.path.exists(settings.MAIN_FILE): # download_file( # 'http://ugandajournalistsresourcecentre.com/wp-content/uploads/2015/05/PLE-Results-2014.ALL-CANDIDATES.xlsx' # ) # return # ``` # Next, we need to convert the Excel file to csv # We can convert to one file with all the districts or a csv for each district. # Let's go for different files this time. # # This takes a bit of a while. Some optimization *may be* added later. 
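A per-sheet conversion can be sketched with pandas alone. This is a simplified, hypothetical stand-in (the names `sheets_to_csvs` and `excel_to_csv_sketch` are made up here), not the project's actual `utils.ExcelConverter` implementation:

```python
import os
import pandas as pd

def sheets_to_csvs(sheets, target_folder):
    # write each {sheet name: DataFrame} entry to "<sheet name>.csv"
    os.makedirs(target_folder, exist_ok=True)
    for name, frame in sheets.items():
        frame.to_csv(os.path.join(target_folder, name + ".csv"), index=False)

def excel_to_csv_sketch(xls_file, target_folder):
    # sheet_name=None makes read_excel return every sheet as a dict of DataFrames
    sheets_to_csvs(pd.read_excel(xls_file, sheet_name=None), target_folder)
```

Reading the whole workbook at once is what makes the step slow for a file with one sheet per district.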
utils.ExcelConverter.excel_to_csv_multiple(xls_file=settings.MAIN_FILE, target_folder=settings.PROCESSED_FOLDER) # *utils.ExcelConverter.excel_to_csv_multiple()* # # This is a rather long function so here it will simply be linked # # [*utils.ExcelConverter.excel_to_csv_multiple()*](https://github.com/lubegamark/ple2014/blob/4fd757223d127df11eb9ba1a7ffd4ade068a1c17/utils.py#L99-L157) # Next, we look at the shapes of the different files utils.find_csv_shape(settings.PROCESSED_FOLDER) # *utils.find_csv_shape* # ```python # def find_csv_shape(folder): # d = {} # for path, folders, files in os.walk(folder): # for file in files: # f = os.path.join(path, file) # csv = pd.read_csv(f) # if len(csv.columns) in d: # d[len(csv.columns)] += 1 # else: # d[len(csv.columns)] = 1 # print("{0}\n{1} - {2}".format(len(csv.columns), file, csv.columns)) # total_districts = sum(d.values()) # print("Sheets with column length {}".format(d)) # for key, value in d.items(): # percentage = (value / total_districts) * 100 # print("{} - {:.2f}%".format(key, percentage)) # ``` # We can see that most of the sheets have 10 columns. The ones with more columns have a variable number of 'Unamed' Columns. So the next step is to remove the unnamed utils.remove_unnamed(settings.PROCESSED_FOLDER, right_size=10) # *utils.remove_unnamed* # # ```python # def remove_unnamed(folder, right_size): # for path, folders, files in os.walk(folder): # for file in files: # f = os.path.join(path, file) # old_csv = pd.read_csv(f) # if len(old_csv.columns) != right_size: # new_csv = old_csv[old_csv.columns[~old_csv.columns.str.contains('Unnamed:')]] # new_csv.to_csv(f, quoting=csv.QUOTE_ALL, index=False) # ``` # After Removing the unnamed columns, let's check the shape again utils.find_csv_shape(settings.PROCESSED_FOLDER) # All sheets have the same number of columns. Looks great. # # Let's now try to view some stats. 
We'll try finding the number of Division 1s in Kampala

kampala = pd.read_csv(settings.PROCESSED_FOLDER+'KAMPALA.csv')
english_ones = kampala[kampala['ENG'] == 1]
english_ones

# It's impossible for there to be no division ones in English in Kampala.
# Looking at the data, let's get the datatypes for each column

kampala.info()

# Looking at the info provided above, we see that only CNDIDATE NUMBER is numeric; we expect ENG, SCIE, SST, MATH, AGG, DIV to all be numeric so that we can make some calculations against them.
#
# Another thing to note is the column titles: CNDIDATE NUMBER wasn't seen in other datasets, it was CANDIDATE NUMBER.
# We'll deal with this first.

# Let's look at the columns in Kampala.
# > *we use df.head(0) rather than df.columns because it prints the data in a tabular format in jupyter*

kampala.head(0)

# Let's look at some other files.
# Butambala has similar names, but some columns are slightly different

butambala = pd.read_csv(settings.PROCESSED_FOLDER+'BUTAMBALA.csv')
butambala.head(0)

# Upon analysis of other randomly sampled datasets, we can see that the format in Kampala is probably wrong.

abim = pd.read_csv(settings.PROCESSED_FOLDER+'ABIM.csv')
abim.head(0)

""" This is probably supposed to be Kamwenge """
kamwenje = pd.read_csv(settings.PROCESSED_FOLDER+'KAMWENJE.csv')
kamwenje.head(0)

# A function that corrects the column names is in order.

utils.correct_headers(settings.PROCESSED_FOLDER)

# *utils.correct_headers*
# ```python
# def correct_headers(location):
#     """
#     Some files have inconsistent headings.
#     These are corrected here
#     """
#     if os.path.isfile(location):
#         df = pd.read_csv(location)
#         df.rename(columns={'F/M': 'M/F', 'SCIE': 'SCI', 'MATH': 'MAT', 'CNDIDATE NUMBER': 'CANDIDATE NUMBER'},
#                   inplace=True)
#         df.columns = df.columns.str.strip()
#         df.to_csv(location, quoting=csv.QUOTE_ALL, index=False)
#     elif os.path.isdir(location):
#         for path, folders, files in os.walk(location):
#             for f in files:
#                 file = os.path.join(location, f)
#                 df = pd.read_csv(file)
#                 df.rename(columns={'F/M': 'M/F', 'SCIE': 'SCI', 'MATH': 'MAT', 'CNDIDATE NUMBER': 'CANDIDATE NUMBER'},
#                           inplace=True)
#                 df.columns = df.columns.str.strip()
#                 df.to_csv(file, quoting=csv.QUOTE_ALL, index=False)
# ```

# Let's reload the kampala dataset and view its columns.

kampala = pd.read_csv(settings.PROCESSED_FOLDER+'KAMPALA.csv')
kampala.head(0)

# Now, let's get back to changing the datatypes of the columns.

utils.convert_numeric_values(settings.PROCESSED_FOLDER)

kampala = pd.read_csv(settings.PROCESSED_FOLDER+'KAMPALA.csv')
kampala.info()

# We see that ENG, SCI, SST, MAT, AGG, DIV are now float64, which are numeric types.
# ### The utils.numerize() function deserves a little explanation # # - Distinction 1(D1) # - Distinction 2(D2) # - Credit 3(C3) # - Credit 4(C4) # - Credit 5(C5) # - Credit 6(C6) # - Pass 7(P7) # - Pass 8(P8) # - Failure 9(F9) # # Some rules that apply to the data # #### ENG, MAT, SCI, SST # - X means a student registered for exam but did not sit for exam # # #### GRADE, DIV # - U means a student got a failure in any subject # - X means a student did not sit for one of the exam # # #### Our Conversion # - X - NaN(through coerce) # - U - 0(Our numeric Grade equivalent of failed) # # ```python # def numerize(location): # """ # Convert Numeric fields to numeric data types # """ # if os.path.isfile(location): # df = pd.read_csv(location) # df['DIV'].replace('U', '0', inplace=True) # df[['MAT', 'SCI', 'SST', 'ENG', 'AGG', 'DIV']] = df[ # ['MAT', 'SCI', 'SST', 'ENG', 'AGG', 'DIV']].apply(pd.to_numeric, errors='coerce') # df.to_csv(location, quoting=csv.QUOTE_ALL, index=False) # elif os.path.isdir(location): # for path, folders, files in os.walk(location): # for f in files: # file = os.path.join(location, f) # df = pd.read_csv(file) # df['DIV'].replace("U", "0", inplace=True) # df[['MAT', 'SCI', 'SST', 'ENG', 'AGG', 'DIV']] = df[ # ['MAT', 'SCI', 'SST', 'ENG', 'AGG', 'DIV']].apply(pd.to_numeric, errors='coerce') # df.to_csv(file, quoting=csv.QUOTE_ALL, index=False) # ``` # Let's look at some random files. abim = pd.read_csv(settings.PROCESSED_FOLDER+'ABIM.csv') abim ebb = pd.read_csv(settings.PROCESSED_FOLDER+'ENTEBBE MUNICIPALITY.csv') ebb # The end of the Entebbe dataframe has many empty values. Let's delete those with df.dropna() # But first we strip all the string records utils.strip_records(settings.PROCESSED_FOLDER) utils.remove_empty_records(settings.PROCESSED_FOLDER) # > *utils.remove_empty_records* and *utils.strip_records* were initially done in one function, but stripped empty strings would not be dropped if done immediately after. 
Probably something about how pandas handles its data ebb = pd.read_csv(settings.PROCESSED_FOLDER+'ENTEBBE MUNICIPALITY.csv') ebb # *utils.remove_empty_records* # ```python # def remove_empty_records(location): # """ # Remove all records empty records # """ # if os.path.isfile(location): # df = pd.read_csv(location) # # Drop all records without data # df.dropna(how='all', inplace=True) # df.to_csv(location, quoting=csv.QUOTE_ALL, index=False) # elif os.path.isdir(location): # for path, folders, files in os.walk(location): # for f in files: # file = os.path.join(location, f) # df = pd.read_csv(file) # # Drop all records without data # df.dropna(how='all', inplace=True) # df.to_csv(file, quoting=csv.QUOTE_ALL, index=False) # ``` # *utils.strip_records* # ```python # def strip_records(location): # """ # Strip all string columns # """ # if os.path.isfile(location): # df = pd.read_csv(location) # # Strip all String Columns # df_obj = df.select_dtypes(['object']) # df[df_obj.columns] = df_obj.apply(lambda x: x.str.strip()) # df.to_csv(location, quoting=csv.QUOTE_ALL, index=False) # elif os.path.isdir(location): # for path, folders, files in os.walk(location): # for f in files: # file = os.path.join(location, f) # df = pd.read_csv(file) # # Strip all String Columns # df_obj = df.select_dtypes(['object']) # df[df_obj.columns] = df_obj.apply(lambda x: x.str.strip()) # df.to_csv(file, quoting=csv.QUOTE_ALL, index=False) # ``` # Let's also join all the district data to make one csv file. 
utils.merge_districts(settings.PROCESSED_FOLDER) # *utils.merge_districts* # ```python # def merge_districts(location): # all_districts = [] # for path, folders, files in os.walk(location): # for f in files: # file = os.path.join(location, f) # df = pd.read_csv(file) # all_districts.append(df) # all_df = pd.concat(all_districts) # all_df_file = os.path.join(location, 'ALL.csv') # all_df.to_csv(all_df_file, quoting=csv.QUOTE_ALL, index=False) # ``` # ## Conclusion # The data cleaning phase of this data is complete. # A few lessons learned # - It's good to look at the data in a simple format through a spreadsheet to get a good feel for the data # - Some data is always not good so that has to be abandoned. # - The data cleaning process is not linear. Unlike what this edited document may show. # # We have finally cleaned the data. Next we shall use this information to answer some questions.
clean_data.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="DkJ5AI1zMf8I" # # PeakVI: Analyzing scATACseq data # PeakVI is used for analyzing scATACseq data. This tutorial walks through how to read, set-up and train the model, accessing and visualizing the latent space, and differential accessibility. We use the 5kPBMC sample dataset from 10X but these steps can be easily adjusted for other datasets. # # # + colab={"base_uri": "https://localhost:8080/"} id="tcbBdgYoMf8O" outputId="1a9e28f4-f354-4add-c005-4475acd1df58" import sys # If True, will install via pypi, else will install from source stable = False IN_COLAB = "google.colab" in sys.modules if IN_COLAB and stable: # !pip install --quiet scvi-tools[tutorials] elif IN_COLAB and not stable: # !pip install --quiet --upgrade jsonschema # !pip install --quiet git+https://github.com/yoseflab/scvi-tools@master#egg=scvi-tools[tutorials] # + [markdown] id="10dGI6abNGND" # First we need to download the sample data. This block will do this for a google colab session, but if you're running it in a different platform you might need to adjust it, or download and unpack the data manually. 
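Doing the download and unpack manually in plain Python could look like the following; `fetch_and_unpack` is a hypothetical helper sketch (standard library only), not part of the tutorial:

```python
import os
import tarfile
import urllib.request

def fetch_and_unpack(url, dest_dir="."):
    # download the archive unless it is already present, then extract it
    archive = os.path.join(dest_dir, os.path.basename(url))
    if not os.path.exists(archive):
        urllib.request.urlretrieve(url, archive)
    with tarfile.open(archive) as tar:
        tar.extractall(dest_dir)
```

Pointing it at the 10X URL used in the shell commands below gives the same `filtered_peak_bc_matrix` folder.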
# + colab={"base_uri": "https://localhost:8080/"} id="t9QU_OE6NV_P" outputId="3ba6280a-5999-460c-c8f1-494dc46e98fd"
# !wget https://cf.10xgenomics.com/samples/cell-atac/1.2.0/atac_pbmc_5k_nextgem/atac_pbmc_5k_nextgem_filtered_peak_bc_matrix.tar.gz
# !sudo tar -xvf atac_pbmc_5k_nextgem_filtered_peak_bc_matrix.tar.gz

# + colab={"base_uri": "https://localhost:8080/"} id="M49KGvsRMf8P" outputId="c5d7d8df-e113-4721-cdf9-72d76f04b256"
import scvi
import anndata
import scipy
import numpy as np
import pandas as pd
import scanpy as sc
import matplotlib.pyplot as plt

import random
random.seed(420)

# + [markdown] id="iBswUSRRMf8P"
# ## loading data
# PeakVI expects as input an AnnData object with a cell-by-region matrix. There are various pipelines that handle preprocessing of scATACseq to obtain this matrix from the sequencing data. If the data was generated by 10X genomics, this matrix is among the standard outputs of CellRanger. Other pipelines, like [SnapATAC](https://github.com/r3fang/SnapATAC/) and [ArchR](https://www.archrproject.com/bookdown/add-peak-matrix.html), also generate similar matrices.
#
# In the case of 10X data, PeakVI has a special reader function `scvi.data.read_10x_atac` that reads the files and creates an AnnData object, demonstrated below. For convenience, we also demonstrate how to initialize an AnnData object from scratch.
#
# Throughout this tutorial, we use [sample scATACseq data from 10X of 5K PBMCs](https://support.10xgenomics.com/single-cell-atac/datasets/1.2.0/atac_pbmc_5k_nextgem).
# + colab={"base_uri": "https://localhost:8080/"} id="R6OB27CwMf8P" outputId="aa2aa58a-074b-4b7e-ed02-2a1ac41c359f"
# read the count matrix into a sparse matrix, and the cell and region annotations as pandas DataFrames
counts = scipy.io.mmread("filtered_peak_bc_matrix/matrix.mtx").T
regions = pd.read_csv("filtered_peak_bc_matrix/peaks.bed", sep='\t', header=None, names=['chr','start','end'])
cells = pd.read_csv("filtered_peak_bc_matrix/barcodes.tsv", header=None, names=['barcodes'])

# then initialize a new AnnData object
adata = anndata.AnnData(X=counts, obs=cells, var=regions)

# or use this method to read 10x data directly
adata = scvi.data.read_10x_atac("filtered_peak_bc_matrix")

# + [markdown] id="nAh2wnlhMf8Q"
# We can use scanpy functions to handle, filter, and manipulate the data. In our case, we might want to filter out peaks that are rarely detected, to make the model train faster:

# + colab={"base_uri": "https://localhost:8080/"} id="4r0XVM1qMf8Q" outputId="41d8e81a-66f6-421b-92e5-7eefb6610664"
print(adata.shape)

# compute the threshold: 5% of the cells
min_cells = int(adata.shape[0] * 0.05)
# in-place filtering of regions
sc.pp.filter_genes(adata, min_cells=min_cells)

print(adata.shape)

# + [markdown] id="ZCKvcOuaMf8R"
# ## set up, training, saving, and loading
# We can now set up the AnnData object, which will ensure everything the model needs is in place for training.
#
# This is also the stage where we can condition the model on additional covariates, which encourages the model to remove the impact of those covariates from the learned latent space. Our sample data is a single batch, so we won't demonstrate this directly, but it can be done simply by setting the `batch_key` argument to the annotation to be used as a batch covariate (must be a valid key in `adata.obs`).
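For reference, the batch-conditioning option only needs the covariate to exist as a column of `adata.obs`. A minimal stand-in illustration (the `batch` column is made up; the actual scvi-tools call is left commented out since this tutorial's data is a single batch):

```python
import pandas as pd

# minimal stand-in for adata.obs with a made-up batch annotation
obs = pd.DataFrame({"barcodes": ["AAACGAA-1", "AAACGAC-1"], "batch": ["rep1", "rep2"]})

# batch_key must name an existing adata.obs column
batch_key = "batch"
assert batch_key in obs.columns

# with a real AnnData object the conditioning call would then be:
# scvi.data.setup_anndata(adata, batch_key="batch")
```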
# + colab={"base_uri": "https://localhost:8080/"} id="LsphppROMf8R" outputId="256bd7d5-7a2e-44e5-9f22-fd7a6ad75e97"
scvi.data.setup_anndata(adata)

# + [markdown] id="onvBxehyMf8R"
# We can now create a PeakVI model object and train it!
#
# **Importantly**: the default max epochs is set to 500, but in practice PeakVI stops early once the model converges, which rarely requires that many, especially for large datasets (which require fewer epochs to converge, since each epoch includes letting the model view more data).
# *So the estimated runtime is usually a substantial overestimate of the actual runtime.*
# In the case of the data we use for this tutorial, it used less than half of the max epochs!

# + colab={"base_uri": "https://localhost:8080/"} id="cclCyKFOMf8R" outputId="8e15d44d-58ca-4ad4-ae27-9af027fc5262"
pvi = scvi.model.PEAKVI(adata)
pvi.train()

# + [markdown] id="dab9yhHcMf8R"
# Since training a model can take a while, we recommend saving the trained model after training, just in case.

# + id="v5w3u7ZgMf8S"
pvi.save("trained_model", overwrite=True)

# + [markdown] id="mvEQcMynMf8S"
# We can then load the model later, which requires providing an AnnData object that is structured similarly to the one used for training (or, in most cases, the same one):

# + colab={"base_uri": "https://localhost:8080/"} id="_q34gv_hMf8S" outputId="686dbebe-afcd-4cf4-ec2c-ce99dcb5b8ae"
pvi = scvi.model.PEAKVI.load("trained_model", adata)

# + [markdown] id="3FRmN1bbMf8S"
# ## visualizing and analyzing the latent space
# We can now use the trained model to visualize, cluster, and analyze the data.
We first extract the latent representation from the model, and save it back into our AnnData object: # + colab={"base_uri": "https://localhost:8080/"} id="shzoLMjQMf8S" outputId="ceaff0c9-c6bb-418a-de8f-e2e4458fd75e" latent = pvi.get_latent_representation() adata.obsm["X_PeakVI"] = latent print(latent.shape) # + [markdown] id="-nIg0xy1Mf8T" # We can now use scanpy functions to cluster and visualize our latent space: # + id="CAzMk_4qMf8T" # compute the k-nearest-neighbor graph that is used in both clustering and umap algorithms sc.pp.neighbors(adata, use_rep="X_PeakVI") # compute the umap sc.tl.umap(adata, min_dist=0.2) # cluster the space (we use a lower resolution to get fewer clusters than the default) sc.tl.leiden(adata, key_added="cluster_pvi", resolution=0.2) # + colab={"base_uri": "https://localhost:8080/", "height": 295} id="gPm9j0aaMf8T" outputId="35af2d45-9bd2-4381-d4be-afbad3155c88" sc.pl.umap(adata, color='cluster_pvi') # + [markdown] id="wGioRKwFMf8T" # ## differential accessibility # Finally, we can use PeakVI to identify regions that are differentially accessible. There are many different ways to run this analysis, but the simplest is comparing one cluster against all others, or comparing two clusters to each other. In the first case we'll be looking for marker-regions, so we'll mostly want a one-sided test (the significant regions will only be the ones preferentially accessible in our target cluster). In the second case we'll use a two-sided test to find regions that are differentially accessible, regardless of direction. # # We demonstrate both of these next, and do this in two different ways: (1) more convenient but less flexible: using an existing factor to group the cells, and then comparing groups. (2) more flexible: using cell indices directly. # # **If the data includes multiple batches**, we encourage setting `batch_correction=True` so the model will sample from multiple batches when computing the differential signal. 
We do this below despite the data only having a single batch, as a demonstration.

# + colab={"base_uri": "https://localhost:8080/", "height": 382} id="YptioB5PMf8T" outputId="7081a937-91bc-40af-9b62-25526ff1e10a"
# (1.1) using a known factor to compare two clusters
## two-sided is True by default, but included here for emphasis
da_res11 = pvi.differential_accessibility(groupby='cluster_pvi', group1='3', group2='0', two_sided=True)

# (1.2) using a known factor to compare a cluster against all other clusters
## if we only provide group1, group2 is all other cells by default
da_res12 = pvi.differential_accessibility(groupby='cluster_pvi', group1='3', two_sided=False)

# (2.1) using indices to compare two clusters
## we can use boolean masks or integer indices for the `idx1` and `idx2` arguments
da_res21 = pvi.differential_accessibility(
    idx1 = adata.obs.cluster_pvi == '3',
    idx2 = adata.obs.cluster_pvi == '0',
    two_sided=True,
)
# (2.2) using indices to compare a cluster against all other clusters
## if we don't provide idx2, it uses all other cells as the contrast
da_res22 = pvi.differential_accessibility(
    idx1 = np.where(adata.obs.cluster_pvi == '3'),
    two_sided=False,
)

da_res22.head()

# + [markdown] id="zWseudKHMf8U"
# Note that `da_res11` and `da_res21` are equivalent, as are `da_res12` and `da_res22`.
# The return value is a pandas DataFrame with the differential results and basic properties of the comparison:
#
# `prob_da` in our case is the probability of cells from cluster 0 being more than 0.05 (the default minimal effect) more accessible than cells from the rest of the data.
#
# `is_da_fdr` is a conservative classification (True/False) of whether a region is differentially accessible. This is one way to threshold the results.
#
# `bayes_factor` is a statistical significance score. It doesn't have a commonly acceptable threshold (e.g. 0.05 for p-values), but we demonstrate below that it's well calibrated to the effect size.
# # `effect_size` is the effect size, calculated as `est_prob1 - est_prob2`. # # `emp_effect` is the empirical effect size, calculated as `emp_prob1 - emp_prob2`. # # `est_prob{1,2}` are the estimated probabilities of accessibility in group1 and group2. # # `emp_prob{1,2}` are the empirical probabilities of detection (how many cells in group X was the region detected in). # # We can make sure the probability of DA is well calibrated, and look at the regions that are identified as differentially accessible: # + colab={"base_uri": "https://localhost:8080/", "height": 818} id="uMenbDZ8Mf8U" outputId="7522c6f1-bc9d-4e17-977b-3990d443df62" plt.scatter(da_res22.effect_size, da_res22.prob_da, s=1) plt.xlabel("effect size") plt.ylabel("probability of DA") plt.show() da_res22.loc[da_res22.is_da_fdr].sort_values('prob_da', ascending=False).head(10) # + [markdown] id="a8_I_8PLMf8U" # We can now examine these regions to understand what is happening in the data, using various different annotation and enrichment methods. For instance, `chr11:60222766-60223569`, one of the regions preferentially accessible in cluster 0, is the promoter region of [`MS4A1`](https://www.genecards.org/cgi-bin/carddisp.pl?gene=MS4A1), also known as `CD20`, a known B-cell surface marker, indicating that cluster 0 are probably B-cells. # + id="rmrkcC2FQk1i"
PeakVI.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from sklearn.metrics import confusion_matrix # %matplotlib inline # **Hi Khawlah and Reem, you both did great projects today. I wanted you both to have the official name/concept for what technique you used to make the custom classifier / feature engineering. It was basically a flavor of cosine similarity measure with the centroid of the class distribution. I can explain it in person and write it out on the board with vectors on Sunday.** #import the mnist data df=pd.read_csv('mnist_train.csv',header=None) df.head() #get the centroid vectors num_means=df.groupby(0).mean().reset_index() # get the L2 of the centroid vectors lengths=((num_means**2).sum(axis=1))**0.5 lengths #L2 norm the centroid vectors num_means_L2=(num_means.values[:,1:].T/lengths.values).T num_means_L2.shape #look at the L2 normed centroid vectors pics=num_means_L2.reshape(10,28,28) for i in range(10): plt.imshow(pics[i,:,:]) plt.figure() #L2 norm the data labeled_data=df.values y_actual=labeled_data[:,0] data=labeled_data[:,1:] data_L2=data/(data**2).sum(axis=1,keepdims=True)**0.5 data_L2.shape num_means_L2.shape # $$ \Large \hat{d}_i\cdot\hat{c}_j=\cos{\theta_{ij}}$$ #dot each row of the L2 normed data with the L2 normed centroids #this is the cosine similarity data_dot=data_L2@num_means_L2.T data_dot.shape data_dot_df=pd.DataFrame(data_dot) data_dot_df.head() # $$ \Large d_{class} =argmax_j(\hat{d}\cdot\hat{c}_j)$$ #get max dot similarity max_feat=data_dot_df.max(axis=1) y_pred=((data_dot_df.apply(lambda row :row == max_feat, axis=0 ).values)@np.arange(10).reshape(-1,1)).reshape(-1) y_actual,y_pred (y_pred==y_actual).sum()/y_pred.shape[0] sns.heatmap(confusion_matrix(y_actual,y_pred), 
annot=True, fmt="d") plt.xlim(-0.7, 10) plt.ylim(10,-0.7)
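The pipeline above amounts to a nearest-centroid classifier under cosine similarity. A tiny self-contained numpy sketch of the same rule on made-up 2-D data (illustrative values, not the MNIST CSV):

```python
import numpy as np

def centroid_cosine_predict(X, centroids):
    # L2-normalize the rows of both matrices, then argmax over dot products,
    # i.e. pick the centroid with the largest cosine similarity
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    Cn = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    return np.argmax(Xn @ Cn.T, axis=1)

# toy 2-D data: two classes pointing in different directions
X = np.array([[1.0, 0.1], [0.9, 0.0], [0.1, 1.0], [0.0, 0.8]])
y = np.array([0, 0, 1, 1])
centroids = np.vstack([X[y == k].mean(axis=0) for k in (0, 1)])

pred = centroid_cosine_predict(X, centroids)
print(pred)  # → [0 0 1 1]
```

Because both sides are unit vectors, the dot product is exactly the cosine of the angle between a sample and a class centroid, which is the quantity maximized in the notebook.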
images/digits-recognition/officially_cosine_similarity.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: ds
#     language: python
#     name: ds
# ---

# +
import warnings
warnings.simplefilter(action='ignore')

import pandas as pd
# -

# ## SAT dataset

sat_data = pd.read_csv("data/raw/sat_problems.csv")

# "짝수형" (the even-numbered test form) is held out for validation
sat_train = sat_data[(sat_data["year"] != 2020) & (sat_data["test_type"] != "짝수형")]
sat_valid = sat_data[(sat_data["year"] != 2020) & (sat_data["test_type"] == "짝수형")]
sat_test = sat_data[sat_data["year"] == 2020]

print("# of train data:", sat_train.shape[0])
print("# of valid data:", sat_valid.shape[0])
print("# of test data:", sat_test.shape[0])

def clean_bracket(string):
    string = string.replace("[", "")
    string = string.replace("]", "")
    return string

sat_train["context"] = sat_train["context"].map(clean_bracket)
sat_valid["context"] = sat_valid["context"].map(clean_bracket)
sat_test["context"] = sat_test["context"].map(clean_bracket)

sat_train["label"] = sat_train["label"].map(int)
sat_valid["label"] = sat_valid["label"].map(int)
sat_test["label"] = sat_test["label"].map(int)

sat_train.to_csv("data/processed/sat_train.tsv", sep="\t", index=False)
sat_valid.to_csv("data/processed/sat_valid.tsv", sep="\t", index=False)
sat_test.to_csv("data/processed/sat_test.tsv", sep="\t", index=False)

# ## CoLA Dataset

columns = ["source", "label", "original_judgement", "context"]

in_domain_train = pd.read_csv("data/raw/cola/in_domain_train.tsv", sep="\t", header=None, names=columns)
in_domain_dev = pd.read_csv("data/raw/cola/in_domain_dev.tsv", sep="\t", header=None, names=columns)
out_of_domain_dev = pd.read_csv("data/raw/cola/out_of_domain_dev.tsv", sep="\t", header=None, names=columns)

print("# of cola train data:", in_domain_train.shape[0])
print("# of cola valid data:", in_domain_dev.shape[0])
print("# of cola test data:", out_of_domain_dev.shape[0])

in_domain_train[["context", "label"]].to_csv("data/processed/cola_train.tsv", sep="\t", index=False)
in_domain_dev[["context", "label"]].to_csv("data/processed/cola_valid.tsv", sep="\t", index=False)
out_of_domain_dev[["context", "label"]].to_csv("data/processed/cola_test.tsv", sep="\t", index=False)
data_preprocess.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import matplotlib as mpl import matplotlib.pyplot as plt from PIL import Image import pandas as pd import numpy as np pd.set_option('display.max_rows', 500) from bs4 import BeautifulSoup import requests import json import subprocess import os # %matplotlib inline import seaborn as sns from datetime import datetime import plotly.graph_objects as go import plotly # %matplotlib inline mpl.rcParams['figure.figsize'] = (16, 10) pd.set_option('display.max_rows', 500) im=Image.open('../reports/figures/CRISP_DM.png') im data_path='../data/raw/COVID-19/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv' DF_raw=pd.read_csv(data_path) DF_raw.head(10) #Displaying first 10 entries of data frame DF_raw.columns #Displaying column names of the data frame DF_raw.describe() EDA_Full_data=pd.DataFrame(np.array(DF_raw.columns[4:]), columns=['Date']) # converting the present dataframe into more readable and easily plotable dataframe EDA_Full_data.head(6) # + DF_raw['Country/Region'].unique() # - allcountries= list (DF_raw['Country/Region'].unique()) for each in allcountries: EDA_Full_data[each]= np.array(DF_raw[DF_raw['Country/Region']== each].iloc[:,4::].sum()) time_idx=[datetime.strptime( each,"%m/%d/%y") for each in EDA_Full_data.Date] # convert to datetime time_str=[each.strftime('%Y-%m-%d') for each in time_idx] # convert back to date ISO norm (str) EDA_Full_data['Date']= time_idx #converting entries in Date column to dtype 'datetime64[ns]' EDA_Full_data.to_csv('../data/processed/COVID_full_flat_table.csv',sep=';',index=False) data_path='../data/raw/COVID-19/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv' raw_data=pd.read_csv(data_path) # ## Relational data model - defining a primary key # # 
In the relational model of databases, a primary key is a specific choice of a minimal set of attributes (columns) that uniquely specify a tuple (row) in a relation (table) (source: wiki)
#
#
# A primary key’s main features are:
#
# * It must contain a unique value for each row of data.
# * It cannot contain null values.

raw_data.head()

raw_data.info()

raw_data = raw_data.drop(['Lat','Long'],axis=1)  # dropping latitude and longitude columns as they are of no significance for our project

base_df = raw_data.rename(columns={'Country/Region':'country','Province/State':'state'})

base_df['state'] = base_df['state'].fillna('no')  # ensure a string, important for indexing

# +
pd_relational_model = base_df.set_index(['state','country']).T.stack(level=[0,1])
pd_relational_model

# +
pd_relational_model = pd_relational_model.reset_index().rename(columns={'level_0':'date',0:'confirmed'},)
pd_relational_model
# -

pd_relational_model.confirmed = pd_relational_model.confirmed.astype(int)
pd_relational_model.head()

pd_relational_model['date'] = pd_relational_model['date'].astype('datetime64[ns]')
pd_relational_model.head()

pd_relational_model.info()

# +
pd_relational_model.confirmed = pd_relational_model.confirmed.astype(int)
pd_relational_model.to_csv('../data/processed/COVID_relational_confirmed.csv',sep=';',index=False)
# -

data_path = '../data/processed/COVID_relational_confirmed.csv'
df = pd.read_csv(data_path, sep=';', parse_dates=[0])
df = df.sort_values('date',ascending=True).reset_index(drop=True).copy()

# +
def quick_plot(x_in, df_input, y_scale, slider=False):
    """ Quick basic plot for quick static evaluation of a time series

        you can push selective columns of your data frame by .iloc[:,[0,6,7,8]]

        Parameters:
        ----------
        x_in : array
            array of date time object, or array of numbers
        df_input : pandas dataframe
            the plotting matrix where each column is plotted
            the name of the column will be used for the legend
        y_scale : str
            y-axis scale as 'log' or 'linear'
        slider : bool
            True or False for x-axis slider
Returns: ---------- """ fig = go.Figure() if df_input.columns[0]=='doubling every day': for each in df_input.columns[0:4]: fig.add_trace(go.Scatter( x=x_in, y=df_input[each], name=each, opacity=0.5, mode='lines', line = dict(color='black', width=3))) for each in df_input.columns[4:]: # column is date { drop(['Date']} fig.add_trace(go.Scatter( x=x_in, y=df_input[each], name=each, opacity=0.8)) if y_scale =='log': fig.update_yaxes(type=y_scale, range=[2,7]) else: fig.update_yaxes(type=y_scale, range=[100,max(list(df_input.iloc[:,4:].max()))]) else: for each in df_input.columns: fig.add_trace(go.Scatter( x=x_in, y=df_input[each], name=each, opacity=0.8)) if y_scale =='log': fig.update_yaxes(type=y_scale, range=[2,7]) else: fig.update_yaxes(type=y_scale, range=[0,max(list(df_input.max()))]) fig.update_layout(autosize=True, width=980, height=768, font=dict( family="PT Sans, monospace", size=18, color="#7f7f7f" ) ) fig.update_xaxes(tickangle=-45, nticks=20, tickfont=dict(size=14,color="#7f7f7f")) if slider==True: fig.update_layout(xaxis_rangeslider_visible=True) fig.show() # + import numpy as np from sklearn import linear_model reg = linear_model.LinearRegression(fit_intercept=True) def get_doubling_time_via_regression(in_array): ''' Use a linear regression to approximate the doubling rate''' y = np.array(in_array) X = np.arange(-1,2).reshape(-1, 1) assert len(in_array)==3 reg.fit(X,y) intercept=reg.intercept_ slope=reg.coef_ return intercept/slope # - df=df.reset_index() df.head() def rolling_reg (df_input,col='confirmed'): ''' input has to be a data frame''' ''' return is single series (mandatory for group by apply)''' days_back=3 result=df_input[col].rolling( window=days_back, min_periods=days_back).apply(get_doubling_time_via_regression,raw=False) return result pd_DR_result=df[['state','country','confirmed']].groupby(['state','country']).apply(rolling_reg,'confirmed').reset_index() pd_DR_result.tail() 
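The intercept/slope trick in `get_doubling_time_via_regression` can be sanity-checked on a synthetic window: for a locally linear series y = a + b·t centered at t = 0, intercept/slope = a/b is exactly the time needed for the current count a to double at the current rate b. A self-contained check (refitting the same three-point regression with `np.polyfit` rather than sklearn):

```python
import numpy as np

def doubling_time_3pt(window):
    # same 3-point linear fit as get_doubling_time_via_regression,
    # using np.polyfit (slope first, intercept second) instead of sklearn
    assert len(window) == 3
    slope, intercept = np.polyfit(np.arange(-1, 2), np.asarray(window, dtype=float), 1)
    return intercept / slope

# 110 confirmed cases growing by 10/day -> 11 days until the count doubles to 220
dt = doubling_time_3pt([100, 110, 120])
print(round(dt, 6))  # → 11.0
```

For exponentially growing windows the estimate is only approximate, which is why the rolling version is applied to short three-day windows.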
pd_DR_result=pd_DR_result.rename(columns={'confirmed':'confirmed_DR', 'level_2':'index'}) pd_DR_result.head() pd_result_larg=pd.merge(df, pd_DR_result[['index','confirmed_DR']], on=['index'],how='left') pd_result_larg.head() # + from scipy import signal def savgol_filter(df_input,column='confirmed',window=4): ''' Savgol Filter which can be used in groupby apply function it ensures that the data structure is kept''' window=4, degree=1 df_result=df_input filter_in=df_input[column].fillna(0) # attention with the neutral element here result=signal.savgol_filter(np.array(filter_in), 5, # window size used for filtering 1) df_result[column+'_filtered']=result return df_result # - pd_filtered_result=df[['state','country','confirmed']].groupby(['state','country']).apply(savgol_filter).reset_index() pd_result_larg=pd.merge(pd_result_larg,pd_filtered_result[['index','confirmed_filtered']],on=['index'],how='left') pd_result_larg.tail(150) pd_result_larg[['state','country','confirmed_filtered']].groupby(['state','country']) # + pd_filtered_doubling=pd_result_larg[['state','country','confirmed_filtered']].groupby(['state','country']).apply(rolling_reg,'confirmed_filtered').reset_index() pd_filtered_doubling=pd_filtered_doubling.rename(columns={'confirmed_filtered':'confirmed_filtered_DR','level_2':'index'}) pd_filtered_doubling.tail() # - pd_result_larg=pd.merge(pd_result_larg,pd_filtered_doubling[['index','confirmed_filtered_DR']],on=['index'],how='left') mask=pd_result_larg['confirmed']>100 pd_result_larg['confirmed_filtered_DR']=pd_result_larg['confirmed_filtered_DR'].where(mask, other=np.NaN) pd_result_larg.tail(200) pd_result_larg.to_csv('../data/processed/COVID_final_set.csv',sep=';',index=False) # + import dash dash.__version__ import dash_core_components as dcc import dash_html_components as html from dash.dependencies import Input, Output,State import plotly.graph_objects as go df_input_large=pd.read_csv('../data/processed/COVID_final_set.csv',sep=';') fig = go.Figure() 
app = dash.Dash() app.layout = html.Div([html.H1(children='Applied Data Science on COVID-19 data', style={'color':'blue'}), dcc.Markdown(''' Goal of the project is to teach data science by applying a cross industry standard process, it covers the full walkthrough of: automated data gathering, data transformations, filtering and machine learning to approximating the doubling time, and (static) deployment of responsive dashboard. '''), html.Div([dcc.Markdown(''' ## Select Multiple Country for visualization ''', style={'color':'green'}), dcc.Dropdown( id='country_drop_down', options=[ {'label': each,'value':each} for each in df_input_large['country'].unique()], value=['US', 'Germany','Italy'], # which are pre-selected multi=True )], style={'width': '30%', 'display': 'inline-block','border':'2px black solid', 'borderRadius':5}), html.Div([dcc.Markdown(''' ## Select Timeline of confirmed COVID-19 cases or the approximated doubling time ''', style={'color':'green'}), dcc.RadioItems( id='doubling_time', options=[ {'label': 'Timeline Confirmed ', 'value': 'confirmed'}, {'label': 'Timeline Confirmed Filtered', 'value': 'confirmed_filtered'}, {'label': 'Timeline Doubling Rate', 'value': 'confirmed_DR'}, {'label': 'Timeline Doubling Rate Filtered', 'value': 'confirmed_filtered_DR'}, ], value='confirmed', labelStyle={'display': 'inline-block'} )],style={'width': '68%', 'float': 'right', 'display': 'inline-block','border':'2px black solid', 'borderRadius':5}), dcc.Graph(figure=fig, id='main_window_slope') ], style={'padding':10}) @app.callback( Output('main_window_slope', 'figure'), [Input('country_drop_down', 'value'), Input('doubling_time', 'value')]) def update_figure(country_list,show_doubling): if 'doubling_rate' in show_doubling: my_yaxis={'type':"log", 'title':'Approximated doubling rate over 3 days (larger numbers are better #stayathome)' } else: my_yaxis={'type':"log", 'title':'Confirmed infected people (source johns hopkins csse, log-scale)' } traces = [] for each in 
country_list: df_plot=df_input_large[df_input_large['country']==each] if show_doubling=='doubling_rate_filtered': df_plot=df_plot[['state','country','confirmed','confirmed_filtered','confirmed_DR','confirmed_filtered_DR','date']].groupby(['country','date']).agg(np.mean).reset_index() else: df_plot=df_plot[['state','country','confirmed','confirmed_filtered','confirmed_DR','confirmed_filtered_DR','date']].groupby(['country','date']).agg(np.sum).reset_index() #print(show_doubling) traces.append(dict(x=df_plot.date, y=df_plot[show_doubling], mode='markers+lines', marker={'size': 3, 'opacity': 0.5}, line= {'width':1, 'opacity' :0.9,}, name=each ) ) return { 'data': traces, 'layout': dict ( width=1280, height=720, xaxis={'title':'Timeline', 'tickangle':-45, 'nticks':20, 'tickfont':dict(size=14,color="#7f7f7f"), }, hovermode='closest', yaxis=my_yaxis ) } if __name__ == '__main__': app.run_server(debug=True, use_reloader=False)
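# One detail worth flagging in `update_figure` above: the radio items emit the values `confirmed`, `confirmed_filtered`, `confirmed_DR` and `confirmed_filtered_DR`, while the callback compares against `'doubling_rate'` strings, so the doubling-rate y-axis label and the mean-aggregation branch appear never to be reached; checking for the `_DR` suffix instead would likely fix both. The underlying aggregation choice does matter: case counts are additive across states, while doubling rates are not and should be averaged. A minimal sketch with toy values:

```python
import pandas as pd

# Toy frame: two states of one country on the same date (made-up numbers)
df = pd.DataFrame({
    'country': ['US', 'US'],
    'date': ['2020-04-01', '2020-04-01'],
    'confirmed': [100, 300],      # counts add up across states
    'confirmed_DR': [4.0, 6.0],   # doubling rates do not: average them
})

g = df.groupby(['country', 'date'])
total_cases = g['confirmed'].sum().iloc[0]   # 400
mean_dr = g['confirmed_DR'].mean().iloc[0]   # 5.0
print(total_cases, mean_dr)
```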
notebooks/Step.3_Data_Preparation.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # System Curve Demo # # Demonstration of developing a system curve using a Python notebook. # # The example system consists of the following features and parameters: # # 1. DN200 SCH40 pipework with a duty flow of 70 L/s # 2. Three 90° long radius elbows, two gate valves and a non-return valve # 3. Total pipe length of 100 m # # Comments are included in this sheet to explain what is going on and there are a number of print statements to give visibility of what information we have found from the various functions. # # Reference for fluids library: # # https://fluids.readthedocs.io/tutorial.html#pressure-drop-through-piping # https://fluids.readthedocs.io/fluids.fittings.html # # Install fluids library with: # # ```conda install -c conda-forge fluids``` from fluids import * from math import pi import numpy as np import matplotlib.pyplot as plt # + # Define the flowrate and pipe length Q = 70 / 1000 # m³/s L = 100 # m # Find the material roughness (epsilon) values material_1 = nearest_material_roughness('steel', clean=True) material_2 = nearest_material_roughness('steel', clean=False) eps_1 = material_roughness(material_1) # Can also input manual values here (e.g. 
from AS 2200) eps_2 = material_roughness(material_2) print(f"Found epsilon values of {eps_1:.2e} m and {eps_2:.2e} m for new and old pipes respectively") # Find the internal diameter of our DN200 (8" NPS) pipe D1 = nearest_pipe(NPS=8, schedule='40')[1] # m print(f"Pipe ID is {D1:.3f} m") # - # Find Reynolds number and friction factor A1 = pi * D1**2 / 4 # m² V1 = Q / A1 # m/s Re = Reynolds(V=V1, D=D1, rho=1000, mu=1E-3) fd_1 = friction_factor(Re, eD=eps_1/D1) fd_2 = friction_factor(Re, eD=eps_2/D1) print(f"Calculated friction factors for new and old pipe of {fd_1:.4f} and {fd_2:.4f} respectively") # Example K value for long radius (r = 1.5D) bend using Rennels method: fittings.bend_rounded(D1, 90, fd_1, 1.5*D1) # Example K value for gate valve fittings.K_gate_valve_Crane(D1, D1, 0, fd_1) # For the non-return valve say we have a Valmatic Surgebuster which at NPS 8 has a Cv of 1900 Cv_to_K(1900, D1) # + # Sum up K values for new pipe: K1 = 3 * fittings.bend_rounded(D1, 90, fd_1, 1.5*D1) K1 += 2 * fittings.K_gate_valve_Crane(D1, D1, 0, fd_1) K1 += Cv_to_K(1900, D1) # non-return valve with Cv of 1900 # Sum up K values for old pipe: K2 = 3 * fittings.bend_rounded(D1, 90, fd_2, 1.5*D1) K2 += 2 * fittings.K_gate_valve_Crane(D1, D1, 0, fd_2) K2 += Cv_to_K(1900, D1) # non-return valve with Cv of 1900 print(f"Calculated total K values for new and old pipe of {K1:.2f} and {K2:.2f} respectively") # - # Add in the equivalent K value from friction K1 += K_from_f(fd_1, L, D1) K2 += K_from_f(fd_2, L, D1) # + # Calculate high and low headloss values at our duty point: H1 = head_from_K(K1, V1) H2 = head_from_K(K2, V1) print(f"Calculated headloss for new and old pipe of {H1:.2f} m and {H2:.2f} m respectively") # - # ## System Curve Plotting # # Now that we have defined the full system K value we can plot the system curves for both the new/clean state and aged state of the pipework. This creates a range of system curves which will be plotted here as an area.
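# The conversion helpers used above implement the standard Darcy-type relations: `K_from_f` gives K = f·L/D for a straight pipe run and `head_from_K` gives h = K·V²/(2g). A quick manual check of those relations (assuming g = 9.80665 m/s²; the K, f and V values here are made up for illustration):

```python
g = 9.80665  # m/s², standard gravity (assumed)

def k_from_f_manual(fd, L, D):
    # Equivalent loss coefficient of a straight pipe run: K = f * L / D
    return fd * L / D

def head_from_k_manual(K, V):
    # Head loss across a total loss coefficient K at velocity V
    return K * V**2 / (2 * g)

K = k_from_f_manual(0.02, 100.0, 0.2)  # 10.0
print(K, head_from_k_manual(K, 2.0))   # 10.0, ≈2.04 m
```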
# + # Create an array of flowrates for plotting Q_ = np.arange(0, Q, step= 5 / 1000) # m³/s V_ = Q_ / A1 # convert flowrate array to velocity array # Calculate headloss values for the full range H1_ = head_from_K(K1, V_) H2_ = head_from_K(K2, V_) # - # Create plot fig, ax1 = plt.subplots(figsize=(10,6)) ax1.fill_between(Q_*1000, H2_, H1_) ax1.set(xlabel='Flowrate (L/s)', ylabel='Head (m)') plt.legend(["Range of system curves"]) ax1.grid() plt.show() # ## Pump Curve Plotting # # With our range of system curves calculated we can add a pump curve. For simplicity we will assume a fixed speed pump in this example. If the pump is operating on a VSD then the affinity laws can be used to find the range of operating head & flowrates. # + # Hypothetical pump curve data at 50 Hz Hp1 = 3.0 - 500 * Q_**2 # Slow pump down to 35 Hz N1 = 50 # Hz N2 = 35 # Hz # Affinity laws for flow and head Q2 = Q_ / (N1/N2) Hp2 = Hp1 / (N1/N2)**2 # - # Re-create the system curve plot and add in the pump curve fig, ax1 = plt.subplots(figsize=(10,6)) ax1.fill_between(Q_*1000, H2_, H1_, label='System Curve Range') ax1.fill(np.concatenate((Q_*1000, Q2[::-1]*1000)), np.concatenate((Hp1, Hp2[::-1])), color='red', alpha=0.5, label='Pump Operating Range') # pump curve ax1.set(xlabel='Flowrate (L/s)', ylabel='Head (m)') plt.legend() ax1.grid() plt.show() # The overlap between the pump operating range and range of system curves represents the range in which the system can operate depending on VSD frequency and system condition.
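# The affinity-law scaling used for the 35 Hz curve above can be stated compactly: at fixed impeller diameter, flow scales with the speed ratio, head with its square (and power with its cube). A small sketch with illustrative duty numbers:

```python
def scale_duty(Q, H, N1, N2):
    # Pump affinity laws at fixed impeller diameter:
    #   Q2/Q1 = N2/N1,   H2/H1 = (N2/N1)**2
    r = N2 / N1
    return Q * r, H * r**2

# Slow a 70 L/s, 2.5 m duty point from 50 Hz down to 35 Hz
Q2, H2 = scale_duty(70.0, 2.5, 50.0, 35.0)
print(Q2, H2)  # 49.0 L/s, 1.225 m
```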
System Curve Demo.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # <h1> 1. Models </h1> import pandas as pd import dask.dataframe as dd import seaborn as sns import numpy as np import matplotlib.pyplot as plt from scipy import stats import phik from sklearn.model_selection import train_test_split from phik.report import plot_correlation_matrix import optuna import lightgbm as lgb from sklearn.metrics import roc_auc_score from sklearn.preprocessing import LabelEncoder from sklearn.model_selection import train_test_split import catboost as cb from tqdm.notebook import tqdm from sklearn.model_selection import StratifiedKFold import prettytable import pickle as pkl from sklearn.tree import DecisionTreeClassifier from sklearn.base import BaseEstimator,ClassifierMixin from sklearn.ensemble import RandomForestClassifier,ExtraTreesClassifier,AdaBoostClassifier import xgboost as xgb from sklearn.model_selection import GridSearchCV from sklearn.linear_model import LogisticRegression dtype = { 'MachineIdentifier': 'category', 'ProductName': 'category', 'EngineVersion': 'category', 'AppVersion': 'category', 'AvSigVersion': 'category', 'IsBeta': 'int8', 'RtpStateBitfield': 'float16', 'IsSxsPassiveMode': 'int8', 'DefaultBrowsersIdentifier': 'float16', 'AVProductStatesIdentifier': 'float32', 'AVProductsInstalled': 'float16', 'AVProductsEnabled': 'float16', 'HasTpm': 'int8', 'CountryIdentifier': 'int16', 'CityIdentifier': 'float32', 'OrganizationIdentifier': 'float16', 'GeoNameIdentifier': 'float16', 'LocaleEnglishNameIdentifier': 'int8', 'Platform': 'category', 'Processor': 'category', 'OsVer': 'category', 'OsBuild': 'int16', 'OsSuite': 'int16', 'OsPlatformSubRelease': 'category', 'OsBuildLab': 'category', 'SkuEdition': 'category', 'IsProtected': 'float16', 'AutoSampleOptIn': 'int8', 'PuaMode': 'category', 'SMode': 'float16', 
'IeVerIdentifier': 'float16', 'SmartScreen': 'category', 'Firewall': 'float16', 'UacLuaenable': 'float32', 'Census_MDC2FormFactor': 'category', 'Census_DeviceFamily': 'category', 'Census_OEMNameIdentifier': 'float16', 'Census_OEMModelIdentifier': 'float32', 'Census_ProcessorCoreCount': 'float16', 'Census_ProcessorManufacturerIdentifier': 'float16', 'Census_ProcessorModelIdentifier': 'float16', 'Census_ProcessorClass': 'category', 'Census_PrimaryDiskTotalCapacity': 'float32', 'Census_PrimaryDiskTypeName': 'category', 'Census_SystemVolumeTotalCapacity': 'float32', 'Census_HasOpticalDiskDrive': 'int8', 'Census_TotalPhysicalRAM': 'float32', 'Census_ChassisTypeName': 'category', 'Census_InternalPrimaryDiagonalDisplaySizeInInches': 'float16', 'Census_InternalPrimaryDisplayResolutionHorizontal': 'float16', 'Census_InternalPrimaryDisplayResolutionVertical': 'float16', 'Census_PowerPlatformRoleName': 'category', 'Census_InternalBatteryType': 'category', 'Census_InternalBatteryNumberOfCharges': 'float32', 'Census_OSVersion': 'category', 'Census_OSArchitecture': 'category', 'Census_OSBranch': 'category', 'Census_OSBuildNumber': 'int16', 'Census_OSBuildRevision': 'int32', 'Census_OSEdition': 'category', 'Census_OSSkuName': 'category', 'Census_OSInstallTypeName': 'category', 'Census_OSInstallLanguageIdentifier': 'float16', 'Census_OSUILocaleIdentifier': 'int16', 'Census_OSWUAutoUpdateOptionsName': 'category', 'Census_IsPortableOperatingSystem': 'int8', 'Census_GenuineStateName': 'category', 'Census_ActivationChannel': 'category', 'Census_IsFlightingInternal': 'float16', 'Census_IsFlightsDisabled': 'float16', 'Census_FlightRing': 'category', 'Census_ThresholdOptIn': 'float16', 'Census_FirmwareManufacturerIdentifier': 'float16', 'Census_FirmwareVersionIdentifier': 'float32', 'Census_IsSecureBootEnabled': 'int8', 'Census_IsWIMBootEnabled': 'float16', 'Census_IsVirtualDevice': 'float16', 'Census_IsTouchEnabled': 'int8', 'Census_IsPenCapable': 'int8', 
'Census_IsAlwaysOnAlwaysConnectedCapable': 'float16', 'Wdft_IsGamer': 'float16', 'Wdft_RegionIdentifier': 'float16', 'HasDetections': 'int8', 'Remaining_Disks_Capacity': 'float32', 'C_drive_capacity': 'float64', 'gamer_with_firewall': 'int8', 'gamer_protected': 'int8', 'ram_by_processor': 'float32', 'portable_os_protected': 'int8', 'dimensions': 'category', 'optical_drive_protected': 'int8', 'Connected_total_capacity': 'float32', 'optical_drive_protected' : 'int8', 'secure_boot_protected' : 'int8', 'transformed_build_revision': 'float32', 'vertical_display_by_inches': 'float64', } def reduce_memory(data): ''' Reducing memory of data by changing datatype of column based on their min and max values ''' for col in data: column_type = str(data[col].dtypes) types = ['int16','int32','int64','float16','float32','float64'] if column_type in types: min_value = data[col].min() max_value = data[col].max() if column_type[:3] == 'int': if min_value > np.iinfo(np.int8).min and max_value < np.iinfo(np.int8).max: data[col]=data[col].astype(np.int8) elif min_value > np.iinfo(np.int16).min and max_value < np.iinfo(np.int16).max: data[col]=data[col].astype(np.int16) elif min_value > np.iinfo(np.int32).min and max_value < np.iinfo(np.int32).max: data[col]=data[col].astype(np.int32) elif min_value > np.iinfo(np.int64).min and max_value < np.iinfo(np.int64).max: data[col]=data[col].astype(np.int64) else: if min_value > np.finfo(np.float16).min and max_value < np.finfo(np.float16).max: data[col]=data[col].astype(np.float16) elif min_value > np.finfo(np.float32).min and max_value < np.finfo(np.float32).max: data[col]=data[col].astype(np.float32) else: data[col]=data[col].astype(np.float64) return data categorical_features = list(np.load('categorical_features.npz')['arr_0']) binary_features = list(np.load('binary_features_after_analysis.npz')['arr_0']) numerical_features = list(np.load('numerical_features_after_analysis.npz')['arr_0']) frequency_encoded_features = 
list(np.load('frequency_encoded_features.npz')['arr_0']) final_data = dd.read_csv('Feature_Engineered_data.csv',dtype=dtype) train_data = final_data.compute() train_data = reduce_memory(train_data) train_data.head() final_categorical_features = list(set(categorical_features) - set(frequency_encoded_features)) final_categorical_features def frequency_encoding(feature,data): ''' function to encode features based on frequency encoding''' encoded_dict = data[feature].value_counts().to_dict() return encoded_dict def encode_categorical_features(final_data,frequency_encoded_features,final_categorical_features): ''' encoding categorical features based on frequency encoding and label encoding''' for i in tqdm(frequency_encoded_features): encoded_values = frequency_encoding(i,final_data) final_data[i] = final_data[i].map(lambda x : encoded_values.get(str(x),0)) final_data[i] = final_data[i].astype('int64') for i in tqdm(final_categorical_features): le = LabelEncoder() final_data[i] = le.fit_transform(final_data[i]) final_data[i] = final_data[i].astype('int64') return final_data train_data = encode_categorical_features(train_data,frequency_encoded_features,final_categorical_features) train_data.head() Y = train_data['HasDetections'] X = train_data.drop(columns = ['HasDetections','MachineIdentifier']) X.replace([np.inf, -np.inf], np.finfo(np.float32).min, inplace=True) # replacing infinity values if any in a dataset to minimum float32 value because applied log transformation on one feature # <h1>1.1 LightGBM Model </h1> # <h2> 1.1.1 Lightgbm Model with optuna hyperparameter tuning (First Cut Approach) </h2> # + # %%time def objective(trial): X_train,X_val,Y_train,Y_val = train_test_split(X,Y,test_size=0.2,random_state=45) train_data = lgb.Dataset(X_train,label=Y_train) val_data = lgb.Dataset(X_val,label=Y_val) param = {'objective': 'binary', "metric": "auc", 'reg_alpha': trial.suggest_uniform('reg_alpha', 0.6, 0.8), 'reg_lambda': trial.suggest_uniform('reg_lambda', 0.6, 0.8), 
'num_leaves': trial.suggest_int('num_leaves', 100, 110), 'subsample_for_bin': trial.suggest_int('subsample_for_bin', 10000, 15000), 'subsample': trial.suggest_uniform('subsample', 0.8, 1.0), 'min_child_samples': trial.suggest_int('min_child_samples', 100, 110), 'colsample_bytree': trial.suggest_uniform('colsample_bytree',0.4,0.6), 'learning_rate': trial.suggest_uniform('learning_rate',1e-3,1e-1), 'n_estimators' : trial.suggest_int('n_estimators',1000,2000) } gb = lgb.train(param,train_data,valid_sets=[val_data,train_data],verbose_eval=200,early_stopping_rounds = 50,callbacks=[optuna.integration.LightGBMPruningCallback(trial, "auc")]) prediction = gb.predict(X_val) #prediction = np.rint(prediction) val_score = roc_auc_score(Y_val,prediction) prediction = gb.predict(X_train) #prediction = np.rint(prediction) train_score = roc_auc_score(Y_train,prediction) print('Train AUC Score {}'.format(train_score)) print('Validation AUC Score {}'.format(val_score)) return val_score study = optuna.create_study(direction='maximize') study.optimize(objective,n_trials=3,timeout=600) best_params = open("lightgbm_best_param.pkl", "wb") pkl.dump(study.best_trial.params,best_params) print(study.best_trial.params) # - # best hyperparameter for lightgbm model best_params = {'reg_alpha': 0.6798729809719491, 'reg_lambda': 0.6195256018059018, 'num_leaves': 103, 'subsample_for_bin': 10104, 'subsample': 0.8985615944259155, 'min_child_samples': 104, 'colsample_bytree': 0.5346625170456308, 'learning_rate': 0.00693524874733183, 'n_estimators': 1444} # %%time cv = StratifiedKFold(n_splits=3,random_state=None, shuffle=False) clf = lgb.LGBMClassifier(n_jobs=-1,random_state=45,objective='binary',**best_params) train_auc = [] val_auc = [] for train_indices,cv_indices in cv.split(X,Y): X_train,Y_train = X.iloc[train_indices],Y.iloc[train_indices] X_cv,Y_cv = X.iloc[cv_indices],Y.iloc[cv_indices] clf.fit(X_train,Y_train,eval_set=[(X_train,Y_train),(X_cv, 
Y_cv)],verbose=200,eval_metric='auc',early_stopping_rounds= 100) train_pred = clf.predict_proba(X_train)[:,1] scores_train = roc_auc_score(Y_train,train_pred) val_pred = clf.predict_proba(X_cv)[:,1] scores_val = roc_auc_score(Y_cv,val_pred) train_auc.append(scores_train) val_auc.append(scores_val) print('Training AUC', np.mean(train_auc)) print('Validation AUC',np.mean(val_auc)) lightgbm_val_auc = np.mean(val_auc) lightgbm_train_auc = np.mean(train_auc) best_classifier = open("lightgbm_best_model.pkl", "wb") pkl.dump(clf,best_classifier) pkl_filename = "lightgbm_best_model.pkl" with open(pkl_filename, 'wb') as file: pkl.dump(clf, file) feature_imp = pd.Series(data = clf.feature_importances_,index=X.columns) feature_imp.nlargest(50).plot(kind='barh',figsize=(15,15)) # <h2> 1.1.2 Predicting probabilities on test data set using lightgbm model </h2> test_data = dd.read_csv('Feature_Engineered_test_data.csv',dtype=dtype) test_data = test_data.compute() test_data = reduce_memory(test_data) test_data.info() test_data.head() test_data = encode_categorical_features(test_data,frequency_encoded_features,final_categorical_features) test_data.head() # + # %%time identifier = test_data['MachineIdentifier'] test = test_data.drop(columns=['MachineIdentifier']) predictions = clf.predict_proba(test)[:,1] final_prediction = pd.DataFrame({'MachineIdentifier' : identifier,'Probabilities' : predictions}) # - final_predictions = final_prediction.sample(frac=0.5) final_predictions.head() # <h1> 1.2 Custom Model </h1> train_X,val_X,train_Y,val_Y = train_test_split(X,Y,test_size=0.2,random_state=45) def generate_samples(X,Y,k): '''generating random samples ''' selecting_rows = np.random.choice(X.shape[0],size=(X.shape[0]//k)) replicating_rows = np.random.choice(selecting_rows,size=X.shape[0] - (X.shape[0]//k)) selected_X = X.iloc[selecting_rows].to_numpy() selected_Y = Y.iloc[selecting_rows].to_numpy() replicated_X = X.iloc[replicating_rows].to_numpy() replicated_Y = 
Y.iloc[replicating_rows].to_numpy() replaced_X = np.vstack((selected_X,replicated_X)) replaced_Y = np.vstack((selected_Y.reshape(-1,1),replicated_Y.reshape(-1,1))) return replaced_X,replaced_Y.ravel() params = {'k' : [2,5,7]} # <h2> 1.2.1 Custom model implementation </h2> # + class CustomStackingClassifiers(BaseEstimator): '''Custom Stacking model ''' def __init__(self,k=1): self.k = k self.base_learners = ['RandomForestClassifier','LGBMClassifier','ExtraTreesClassifier','XGBClassifier','Adaboost','LogisticRegression'] self.meta_learner = xgb.XGBClassifier(learning_rate= 0.01,n_estimators=80,max_depth = 10,colsample_bytree = 0.6,alpha = 0.0185,n_jobs=-1) self.base_learners_model = [] self.predictions = [] self.meta = 0 self.final_probabilites = 0 def fit(self,X,Y): base_learners = [] D1_X,D2_X,D1_Y,D2_Y = train_test_split(X,Y,test_size=0.5,random_state=50) for i in tqdm(range(self.k)): X_new,Y_new = generate_samples(D1_X,D1_Y,self.k) if self.base_learners[i % len(self.base_learners)] == 'RandomForestClassifier': clf = RandomForestClassifier(n_estimators=40,max_depth=10,n_jobs=-1) clf.fit(X_new,Y_new) elif self.base_learners[i % len(self.base_learners)] == 'LGBMClassifier': clf = lgb.LGBMClassifier(n_estimators=120,num_leaves=10,max_depth = 20,n_jobs=-1) clf.fit(X_new,Y_new) elif self.base_learners[i % len(self.base_learners)] == 'ExtraTreesClassifier': clf = ExtraTreesClassifier(n_estimators=100,max_depth=15,n_jobs=-1) clf.fit(X_new,Y_new) elif self.base_learners[i % len(self.base_learners)] == 'XGBClassifier': clf = xgb.XGBClassifier(n_jobs=-1,n_estimators=80,learning_rate=0.01,max_depth=10) clf.fit(X_new,Y_new) elif self.base_learners[i % len(self.base_learners)] == 'Adaboost': clf = AdaBoostClassifier(n_estimators=40) clf.fit(X_new,Y_new) elif self.base_learners[i % len(self.base_learners)] == 'LogisticRegression': clf = LogisticRegression(n_jobs=-1) clf.fit(X_new,Y_new) preds = clf.predict_proba(D2_X)[:,1] self.predictions.append(preds) 
self.base_learners_model.append(clf) df = pd.DataFrame() for i in range(self.k): df['M'+str(i)] = self.predictions[i] self.meta_learner.fit(df,D2_Y) def predict_proba(self,X): prediction = [] for i in tqdm(range(len(self.base_learners_model))): # predicting probabilites from base learners prediction.append(self.base_learners_model[i].predict_proba(X)[:,1]) df = pd.DataFrame() for i in tqdm(range(len(self.base_learners_model))): # predicting final probabilites from meta classifiers df['M'+str(i)] = prediction[i] self.final_probabilites = self.meta_learner.predict_proba(df) return self.final_probabilites # - # %%time csv = CustomStackingClassifiers() cv = GridSearchCV(csv,params,cv=3,verbose=1000,scoring='roc_auc') search = cv.fit(train_X,train_Y) best_parms = open("stacking_params.pkl", "wb") pkl.dump(search,best_parms) # number of base models after hyperparameter tuning search.best_params_ # %%time csc = CustomStackingClassifiers(k=5) csc.fit(train_X,train_Y) preds = csc.predict_proba(val_X) train_preds = csc.predict_proba(train_X) custom_model_train_auc_score = roc_auc_score(train_Y,train_preds[:,1]) custom_model_validation_auc_score = roc_auc_score(val_Y,preds[:,1]) print('Custom ModelTrain AUC Score: ',custom_model_train_auc_score) print('Custom Model validation AUC Score: ',custom_model_validation_auc_score) best_stacking_model = open("best_stacking_model_final.pkl", "wb") pkl.dump(csc,best_stacking_model) # <h2>1.2.2 Predicting probabilities on Test Data </h2> test_data = dd.read_csv('Feature_Engineered_test_data.csv',dtype=dtype) test_data = test_data.compute() test_data = reduce_memory(test_data) test_data.info() test_data = encode_categorical_features(test_data,frequency_encoded_features,final_categorical_features) test_data.head() test_data.replace([np.inf, -np.inf], np.finfo(np.float32).min, inplace=True)# replacing infinity values if any in a dataset to minimum float32 value because applied log transformation on one feature # + # %%time identifier = 
test_data['MachineIdentifier'] test = test_data.drop(columns=['MachineIdentifier']) predictions = csc.predict_proba(test)[:,1] final_prediction = pd.DataFrame({'MachineIdentifier' : identifier,'Probabilities' : predictions}) # - final_predictions = final_prediction.sample(frac=0.5) final_predictions.head() # <h2>1.3 Model Performance comparison </h2> # + from prettytable import PrettyTable table = PrettyTable() table.field_names = ['lightgbm train AUC score','lightgbm validation AUC score','custom model train auc score','custom model validation AUC score'] table.add_row([lightgbm_train_auc,lightgbm_val_auc,custom_model_train_auc_score,custom_model_validation_auc_score]) print(table) # - # <h1> 2. Lightgbm model as final Model </h1> # <h2> 2.1 Feature Selection </h2> pkl_filename = "lightgbm_best_model.pkl" with open(pkl_filename, 'rb') as file: clf = pkl.load(file) feature_imp = pd.Series(data = clf.feature_importances_,index=X.columns) feature_selection = feature_imp.to_frame(name='times_features_used') # + feature_selection = feature_selection.sort_values(ascending=False,by='times_features_used') feature_selection[:65] # - # selecting top features which are used more than and equal to 100 times for training model because used splits as importance type parameter in lightgbm model selected_features = feature_selection[feature_selection['times_features_used'] >= 100] selected_X = X[selected_features.index] selected_X.head() # <h2> 2.2 Lightgbm model training and hyperparameter tuning after feature selection </h2> # + # %%time def objective(trial): X_train,X_val,Y_train,Y_val = train_test_split(selected_X,Y,test_size=0.2,random_state=45) train_data = lgb.Dataset(X_train,label=Y_train) val_data = lgb.Dataset(X_val,label=Y_val) param = {'objective': 'binary', "metric": "auc", 'reg_alpha': trial.suggest_uniform('reg_alpha', 0.6, 0.8), 'reg_lambda': trial.suggest_uniform('reg_lambda', 0.6, 0.8), 'num_leaves': trial.suggest_int('num_leaves', 100, 110), 
'subsample_for_bin': trial.suggest_int('subsample_for_bin', 10000, 15000), 'subsample': trial.suggest_uniform('subsample', 0.8, 1.0), 'min_child_samples': trial.suggest_int('min_child_samples', 100, 110), 'colsample_bytree': trial.suggest_uniform('colsample_bytree',0.4,0.6), 'learning_rate': trial.suggest_uniform('learning_rate',1e-3,1e-1), 'n_estimators' : trial.suggest_int('n_estimators',1000,2000) } gb = lgb.train(param,train_data,valid_sets=[val_data,train_data],verbose_eval=200,early_stopping_rounds = 50,callbacks=[optuna.integration.LightGBMPruningCallback(trial, "auc")]) prediction = gb.predict(X_val) #prediction = np.rint(prediction) val_score = roc_auc_score(Y_val,prediction) prediction = gb.predict(X_train) #prediction = np.rint(prediction) train_score = roc_auc_score(Y_train,prediction) print('Train AUC Score {}'.format(train_score)) print('Validation AUC Score {}'.format(val_score)) return val_score study = optuna.create_study(direction='maximize') study.optimize(objective,n_trials=3,timeout=600) best_params = open("lightgbm_best_param_after_feature_selection.pkl", "wb") pkl.dump(study.best_trial.params,best_params) print(study.best_trial.params) # - best_params ={'reg_alpha': 0.713432951008844, 'reg_lambda': 0.7983396526813339, 'num_leaves': 109, 'subsample_for_bin': 11584, 'subsample': 0.9445583769273611, 'min_child_samples': 104, 'colsample_bytree': 0.4577202290641039, 'learning_rate': 0.0833627162919548, 'n_estimators': 1581} # %%time cv = StratifiedKFold(n_splits=3,random_state=None, shuffle=False) clf = lgb.LGBMClassifier(n_jobs=-1,random_state=45,objective='binary',**best_params) train_auc = [] val_auc = [] for train_indices,cv_indices in cv.split(selected_X,Y): X_train,Y_train = X.iloc[train_indices],Y.iloc[train_indices] X_cv,Y_cv = X.iloc[cv_indices],Y.iloc[cv_indices] clf.fit(X_train,Y_train,eval_set=[(X_train,Y_train),(X_cv, Y_cv)],verbose=200,eval_metric='auc',early_stopping_rounds= 100) train_pred = clf.predict_proba(X_train)[:,1] 
scores_train = roc_auc_score(Y_train,train_pred) val_pred = clf.predict_proba(X_cv)[:,1] scores_val = roc_auc_score(Y_cv,val_pred) train_auc.append(scores_train) val_auc.append(scores_val) print('Training AUC after feature selection', np.mean(train_auc)) print('Validation AUC after feature selection',np.mean(val_auc)) lightgbm_after_feature_selection_val_auc = np.mean(val_auc) lightgbm_after_feature_selection_train_auc = np.mean(train_auc) pkl_filename = "lightgbm_best_model_after_feature_selection.pkl" with open(pkl_filename, 'wb') as file: pkl.dump(clf, file) # <h2> 2.3 Predicting probabilities on test data </h2> test_data = dd.read_csv('Feature_Engineered_test_data.csv',dtype=dtype) test_data = test_data.compute() test_data = reduce_memory(test_data) test_data.info() test_data = encode_categorical_features(test_data,frequency_encoded_features,final_categorical_features) test_data.head() test_data.replace([np.inf, -np.inf], np.finfo(np.float32).min, inplace=True) # replacing infinity values if any in a dataset to minimum float32 value because applied log transformation on one feature # + # %%time identifier = test_data['MachineIdentifier'] test = test_data.drop(columns=['MachineIdentifier']) predictions = clf.predict_proba(test)[:,1] final_prediction = pd.DataFrame({'MachineIdentifier' : identifier,'Probabilities' : predictions}) # - final_predictions = final_prediction.sample(frac=0.5) final_predictions.head()
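# A note on the frequency encoding used in this notebook: the mapping is rebuilt from whatever frame is passed in, so when `encode_categorical_features` is applied to the test set the counts come from the test data rather than from training. The more common practice is to fit the mapping once on the training data and reuse it, letting unseen categories fall back to 0. A minimal sketch of that pattern (toy series, made-up categories):

```python
import pandas as pd

def fit_frequency_encoding(series):
    # Learn category -> count on the training data only
    return series.value_counts().to_dict()

train = pd.Series(['a', 'b', 'a', 'c', 'a', 'b'])
test = pd.Series(['a', 'd'])  # 'd' was never seen in training

enc = fit_frequency_encoding(train)
print(list(train.map(lambda v: enc.get(v, 0))))  # [3, 2, 3, 1, 3, 2]
print(list(test.map(lambda v: enc.get(v, 0))))   # [3, 0]
```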
Models/Models.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Welcome to the Exercise on TensorFlow 2.x. # # First we make us familiar with linear algebra operations. Then we implement a linear regression model. After that we implement a neural network using the low level TensorFlow API. Finally, we conclude with a neural network implemented in Keras. # # Now let's make sure we are on the latest version of TensorFlow # !pip install --upgrade tensorflow import tensorflow as tf tf.__version__ # Let's play around with some linear algebra examples # + a = tf.constant([1., 2., 2., 3.]) print(a) b = tf.constant([4., 5., 5., 6.]) print(b) c = a + b print(c) # + a = tf.constant([1., 2., 2., 3.]) print(a) b = tf.constant([4., 5., 5., 6.]) print(b) c = tf.tensordot(a, b, axes=1) print(c) # - # ### Linear Regression # + import tensorflow as tf tf.__version__ import numpy as np data = np.array( [ [100,35,35,12,0.32], [101,46,35,21,0.34], [130,56,46,3412,12.42], [131,58,48,3542,13.43] ] ) x = data[:,1:-1] y_target = data[:,-1] b = tf.Variable(1,dtype=tf.float64) w = tf.Variable([1,1,1],dtype=tf.float64) def linear_model(x): return b + tf.tensordot(x,w,axes=1) optimizer = tf.keras.optimizers.Adam() loss_object = tf.keras.losses.MeanSquaredLogarithmicError() def train_step(x, y): with tf.GradientTape() as tape: predicted = linear_model(x) loss_value = loss_object(y, predicted) print ('Loss {} '.format(loss_value)) grads = tape.gradient(loss_value, [b,w]) optimizer.apply_gradients(zip(grads, [b,w])) def train(epochs): for epoch in range(epochs): train_step(x, y_target) print ('Epoch {} finished'.format(epoch)) train(epochs = 1000) # - b w linear_model(x) # ### Logistic Model # + import tensorflow as tf tf.__version__ import numpy as np data = np.array( [ [100,35,35,12,0.], [101,46,35,21,0.], [130,56,46,3412,1.], 
[131,58,48,3542,1.] ] ) x = data[:,1:-1] y_target = data[:,-1] x = x / np.linalg.norm(x) b = tf.Variable(1,dtype=tf.float64) w = tf.Variable([1,1,1],dtype=tf.float64) def logistic_model(x): return tf.sigmoid(b + tf.tensordot(x,w,axes=1)) optimizer = tf.keras.optimizers.SGD(learning_rate=10) loss_object = tf.keras.losses.BinaryCrossentropy() def train_step(x, y): with tf.GradientTape() as tape: predicted = logistic_model(x) loss_value = loss_object(y, predicted) print(loss_value) grads = tape.gradient(loss_value, [b,w]) optimizer.apply_gradients(zip(grads, [b,w])) def train(epochs): for epoch in range(epochs): train_step(x, y_target) train(epochs = 1000) # - logistic_model(x) # ### Neural Network # + import tensorflow as tf tf.__version__ import numpy as np data = np.array( [ [100,35,35,12,0.32], [101,46,35,21,0.34], [130,56,46,3412,12.42], [131,58,48,3542,13.43] ] ) x = data[:,1:-1] x = x / np.linalg.norm(x) y_target = data[:,-1] y_target = y_target / np.linalg.norm(y_target) w1 = tf.Variable([[1,1,1],[1,1,1],[1,1,1]],dtype=tf.float64) w2 = tf.Variable([1,1,1],dtype=tf.float64) def layer1(x): return tf.sigmoid(tf.tensordot(x,w1,axes=1)) print(layer1(x)) def layer2(x): return tf.sigmoid(tf.tensordot(layer1(x),w2,axes=1)) optimizer = tf.keras.optimizers.Adam(learning_rate=0.01) loss_object = tf.keras.losses.MeanSquaredLogarithmicError() def train_step(x, y): with tf.GradientTape() as tape: predicted = layer2(x) loss_value = loss_object(y, predicted) print(loss_value) grads = tape.gradient(loss_value, [w1,w2]) optimizer.apply_gradients(zip(grads, [w1,w2])) def train(epochs): for epoch in range(epochs): train_step(x, y_target) print ('Epoch {} finished'.format(epoch)) train(epochs = 1000) # - w1 layer1(x) x # ### Keras # + from tensorflow.keras import Sequential from tensorflow.keras.layers import Dense from tensorflow.keras.optimizers import SGD from tensorflow.keras.utils import to_categorical data = np.array( [ [100,35,35,12,0], [101,46,35,21,0], [130,56,46,3412,1], 
[131,58,48,3542,1] ] ) x = data[:,1:-1] y_target = data[:,-1] x = x / np.linalg.norm(x) model = Sequential() model.add(Dense(3, input_shape=(3,), activation='sigmoid')) model.add(Dense(1, activation='sigmoid')) model.compile(optimizer=SGD(learning_rate=0.1), loss='binary_crossentropy', metrics=['accuracy']) model.fit(x, y_target, epochs=1000, verbose=1) # - model.predict(x)
coursera_ai/week2/tensorflow/Exercise_TensorFlow_2.x_Solution.ipynb
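The `tf.GradientTape` training loops in the notebook above can be mirrored with hand-derived gradients in plain NumPy. This is a sketch, assuming ordinary mean squared error rather than the MSLE loss used in the notebook, on made-up data:

```python
import numpy as np

# Gradient-descent linear regression with manual gradients
# (assumption: MSE loss, not the notebook's MSLE; toy data y = 2x + 1).
x = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])

b, w = 0.0, np.zeros(1)
lr = 0.1
for _ in range(2000):
    pred = b + x @ w
    err = pred - y
    grad_b = 2 * err.mean()                       # d(MSE)/db
    grad_w = 2 * (x * err[:, None]).mean(axis=0)  # d(MSE)/dw
    b -= lr * grad_b
    w -= lr * grad_w

print(round(b, 2), round(w[0], 2))  # 1.0 2.0
```

The tape in the TensorFlow version computes exactly these derivatives automatically; `apply_gradients` then performs the same parameter update (with Adam's per-parameter scaling instead of a fixed learning rate).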
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # VacationPy # ---- # # #### Note # * Keep an eye on your API usage. Use https://developers.google.com/maps/reporting/gmp-reporting as reference for how to monitor your usage and billing. # # * Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps. # !jupyter nbextension enable --py gmaps # + # Dependencies and Setup import matplotlib.pyplot as plt import pandas as pd import numpy as np import requests import gmaps import gmaps.datasets import os # Import API key from api_keys import g_key # - # ### Store Part I results into DataFrame # * Load the csv exported in Part I to a DataFrame # + # identify file path weather_csv = 'weather_data.csv' # turn csv into dataframe weather_df = pd.read_csv(weather_csv) weather_df.head() # - # ### Humidity Heatmap # * Configure gmaps. # * Use the Lat and Lng as locations and Humidity as the weight. # * Add Heatmap layer to map. # + #change g_key to api_key gmaps.configure(api_key=g_key) # convert humidity to float, drop NaN values weather_df = weather_df.dropna() humidity = weather_df['Humidity'].astype(float) # store latitude and longitude together local = weather_df[['Latitude', 'Longitude']].astype(float) local = local.to_records(index=False) local = list(local) # create humidity heatmap layer m = gmaps.figure() m.add_layer(gmaps.heatmap_layer(local, weights = humidity)) m # m = gmaps.Map() # humidity_layer = m.WeightedHeatmap(data=locations) # m.add_layer(humidity_layer) # m # - # ### Create new DataFrame fitting weather criteria # * Narrow down the cities to fit weather conditions. # * Drop any rows with null values. 
# + # # copy weather_df best_weather_df = weather_df.copy() best_weather_df = best_weather_df.dropna() # filter through df to find cities with ideal conditions best_weather_df = best_weather_df.loc[(best_weather_df['Temperature(F)'] < 75) & (best_weather_df['Temperature(F)'] > 65)] best_weather_df = best_weather_df.loc[(best_weather_df['Wind Speed(mph)'] < 10)] best_weather_df = best_weather_df.loc[(best_weather_df['Cloud Coverage'] < 50) & (best_weather_df['Cloud Coverage'] > 10)] best_weather_df.head(10) # - # ### Hotel Map # * Store into variable named `hotel_df`. # * Add a "Hotel Name" column to the DataFrame. # * Set parameters to search for hotels with 5000 meters. # * Hit the Google Places API for each city's coordinates. # * Store the first Hotel result into the DataFrame. # * Plot markers on top of the heatmap. # + hotel_df = best_weather_df base_url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json" row_index = 0 # limit hotel search within 5000m of each location params = {'radius': 5000, 'types': 'lodging', 'key': g_key} # walk through hotel_df for i, row in hotel_df.iterrows(): # grab lat and lng data lat = row['Latitude'] lng = row['Longitude'] # add lat and lng data to params dict params['location'] = f'{lat}, {lng}' # API request print(f'Request number {row_index} for {row["Name"]}.') response = requests.get(base_url, params=params).json() # extract results results = response['results'] try: print(f"Closest hotel: {results[0]['name']}") hotel_df.loc[row_index, 'Hotel Name'] = results[0]['name'] except (KeyError, IndexError): print("Missing field/result... 
skipping.") print(' ') row_index = row_index + 1 # - # + # NOTE: Do not change any of the code in this cell # Using the template add the hotel marks to the heatmap info_box_template = """ <dl> <dt>Name</dt><dd>{Hotel Name}</dd> <dt>City</dt><dd>{Name}</dd> <dt>Country</dt><dd>{Country}</dd> </dl> """ # Store the DataFrame Row # NOTE: be sure to update with your DataFrame name hotel_info = [info_box_template.format(**row) for index, row in hotel_df.iterrows()] locations = hotel_df[["Latitude", "Longitude"]] locations = locations.head(28) locations # + # Add marker layer ontop of heat map symbol = gmaps.symbol_layer(locations, fill_color='red', stroke_color='red') m.add_layer(symbol) # Display figure m # -
VacationPy/VacationPy.ipynb
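The chained `.loc` filters used above to find cities with ideal conditions can be expressed as a single boolean mask. A sketch with made-up rows; the column names follow the notebook:

```python
import pandas as pd

# Toy weather data (invented values, notebook column names).
df = pd.DataFrame({
    "Temperature(F)":  [60, 70, 72, 80],
    "Wind Speed(mph)": [5, 8, 12, 4],
    "Cloud Coverage":  [20, 30, 40, 5],
})

# Combine all three conditions into one mask and filter once.
mask = (
    (df["Temperature(F)"] > 65) & (df["Temperature(F)"] < 75)
    & (df["Wind Speed(mph)"] < 10)
    & (df["Cloud Coverage"] > 10) & (df["Cloud Coverage"] < 50)
)
ideal = df[mask]
print(list(ideal.index))  # [1]
```

A single combined mask evaluates each condition over the full frame once, and keeps the original row indices intact, which matters later when the notebook mixes `iterrows` indices with a positional counter.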
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Vignettes # # Plot figures for selected vignettes. # + import tensorflow as tf import numpy as np import matplotlib.pyplot as plt import pandas as pd import pyfaidx from collections import Counter, defaultdict, OrderedDict from modisco.visualization import viz_sequence import logomaker from copy import deepcopy from tqdm import tqdm import sys sys.path.append("/users/surag/kundajelab/retina-models/src/") from utils.loss import multinomial_nll from utils import one_hot from utils.data_utils import load_test_data, get_seq from utils.shap_utils import shuffle_several_times, combine_mult_and_diffref import shap import glob tf.compat.v1.disable_eager_execution() from metrics import softmax import scipy.stats import matplotlib matplotlib.rcParams['pdf.fonttype'] = 42 matplotlib.rcParams['ps.fonttype'] = 42 # - import os os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID" # see issue #152 os.environ["CUDA_VISIBLE_DEVICES"]="1" def get_profile(output): prof, cts = output return softmax(prof)*(np.exp(cts)-1) # ## Load Models hg38 = pyfaidx.Fasta("/users/surag/genomes/hg38/GRCh38_no_alt_analysis_set_GCA_000001405.15.fasta") # + models = defaultdict(dict) ctypes = [] FOLDS = 5 with tf.keras.utils.CustomObjectScope({'multinomial_nll':multinomial_nll, 'tf':tf}): for x in glob.glob("/srv/scratch/surag/tmp/retina/models/20220202_bpnet/fold0/*.h5"): ctype = x.split("/")[-1].split(".")[0] ctypes.append(ctype) print(ctype) for i in range(FOLDS): models[ctype][i] = tf.keras.models.load_model(x.replace("fold0", "fold" + str(i))) # - len(ctypes) # + model_count_explainers = defaultdict(dict) # explainers for x in ctypes: for i in range(FOLDS): model_count_explainers[x][i] = shap.explainers.deep.TFDeepExplainer( (models[x][i].input, tf.reduce_sum(models[x][i].outputs[1], 
axis=-1)), shuffle_several_times, combine_mult_and_diffref=combine_mult_and_diffref) # + INP_LEN = models[ctypes[0]][0].input_shape[1] OUT_LEN = models[ctypes[0]][0].output_shape[0][1] INP_LEN, OUT_LEN # - # ## Load SNPs # + snp_lists = dict() for x in glob.glob("/srv/scratch/surag/tmp/retina/SNPs/SNPs_*"): set_name = x.split("_")[-1].split(".")[0] snp_lists[set_name] = pd.read_csv(x, sep='\t', names=['chr', 'start', 'rs', 'ref', 'alt']) # - snp_lists.keys() # + # allRetina is a subset of all those with Retina, so collapse them snp_lists['allRetina']['isTier2'] = snp_lists['allRetina']['rs'].isin(snp_lists['Tier2Retina']['rs']) snp_lists['allRetina']['isTier3'] = snp_lists['allRetina']['rs'].isin(snp_lists['Tier3Retina']['rs']) snp_lists['allRetina']['isInPeaks'] = snp_lists['allRetina']['rs'].isin(snp_lists['overlapPeaksRetina']['rs']) del snp_lists['Tier2Retina'] del snp_lists['Tier3Retina'] del snp_lists['overlapPeaksRetina'] # randomInPeaks is a subset of all those in random, so collapse them snp_lists['random']['isInPeaks'] = snp_lists['random']['rs'].isin(snp_lists['randomInPeaks']['rs']) del snp_lists['randomInPeaks'] # - snp_lists.keys() snp_lists['allRetina'].shape, snp_lists['random'].shape snp_lists['allRetina'].sample(10) snp_lists['random'].sum()['isInPeaks'] # + for x in snp_lists: # remove non standard snp_lists[x] = snp_lists[x][snp_lists[x]['chr'].transform(lambda x: x in hg38.keys())==True] print(snp_lists[x].shape) # moves from random, not allRetina # - snp_lists['random'].head() # + BUFFER = 500 # to adjust for indels ref_one_hots = defaultdict(list) alt_one_hots = defaultdict(list) for x in snp_lists: # print(x) for _, y in snp_lists[x].iterrows(): ref_one_hots[x].append(str(hg38[y['chr']][(y['start'] - INP_LEN//2):(y['start'] + INP_LEN//2 + BUFFER)])) # correct those that don't match ref as per dataframe if ref_one_hots[x][-1][INP_LEN//2 - 1:][:len(y['ref'])] != y['ref']: ref_one_hots[x][-1] = ref_one_hots[x][-1][:INP_LEN//2 - 1] + y['ref'] + 
ref_one_hots[x][-1][INP_LEN//2 - 1 + len(y['ref']):] assert(ref_one_hots[x][-1][INP_LEN//2 - 1:][:len(y['ref'])] == y['ref']) # print(ref_one_hots[x][-1][INP_LEN//2 - 1:][:len(y['ref'])], y['ref'], y['alt']) cur_alt = ref_one_hots[x][-1] cur_alt = cur_alt[:INP_LEN//2 - 1] + y['alt'] + cur_alt[INP_LEN//2 -1 + len(y['ref']):] alt_one_hots[x].append(cur_alt) assert(alt_one_hots[x][-1][INP_LEN//2 - 1:][:len(y['alt'])] == y['alt']) # trim to model input length ref_one_hots[x][-1] = ref_one_hots[x][-1][:INP_LEN] alt_one_hots[x][-1] = alt_one_hots[x][-1][:INP_LEN] ref_one_hots[x] = one_hot.dna_to_one_hot(ref_one_hots[x]) alt_one_hots[x] = one_hot.dna_to_one_hot(alt_one_hots[x]) # - ref_one_hots['random'].shape, alt_one_hots['random'].shape, ref_one_hots['allRetina'].shape, alt_one_hots['allRetina'].shape # ## Plotting Code models['Rod'].keys() np.array([get_profile(models["Rod"][i].predict(ref_one_hots['allRetina'][:2])) for i in range(FOLDS)]) def plot_rsid(rsid, celltype, shap_window=100): EXAMPLE_IDX = snp_lists['allRetina'][snp_lists['allRetina']['rs'] == rsid].index[0] cur_raw_seqs = np.vstack([ref_one_hots['allRetina'][EXAMPLE_IDX:EXAMPLE_IDX+1], alt_one_hots['allRetina'][EXAMPLE_IDX:EXAMPLE_IDX+1]]) # collapse predicted profile across folds preds = np.array([get_profile(models[celltype][i].predict(cur_raw_seqs)) for i in range(FOLDS)]).mean(0) counts_shap_scores = np.array([model_count_explainers[celltype][i].shap_values(cur_raw_seqs) for i in range(FOLDS)]).mean(0) projected_shap_scores = np.multiply(cur_raw_seqs, counts_shap_scores) ymin = np.min(projected_shap_scores) ymax = np.max(projected_shap_scores) + 0.02 fig, ax = plt.subplots(3, figsize=(20,6)) ref_avg_prof = preds[0] alt_avg_prof = preds[1] ax[0].plot(ref_avg_prof, label='ref ({})'.format(snp_lists['allRetina'].iloc[EXAMPLE_IDX]['ref'])) ax[0].plot(alt_avg_prof, label='alt ({})'.format(snp_lists['allRetina'].iloc[EXAMPLE_IDX]['alt'])) ax[0].legend() SLICE_START = INP_LEN//2 - shap_window//2 SLICE_END = 
INP_LEN//2 + shap_window//2 ax[1].vlines(INP_LEN//2-1 - SLICE_START + 0.5, ymin, ymax+1, 'k', linestyle='dashed') ax[2].vlines(INP_LEN//2-1 - SLICE_START + 0.5, ymin, ymax+1, 'k', linestyle='dashed') logomaker.Logo(pd.DataFrame(projected_shap_scores[0][SLICE_START:SLICE_END], columns=['A','C','G','T']), ax=ax[1]) logomaker.Logo(pd.DataFrame(projected_shap_scores[1][SLICE_START:SLICE_END], columns=['A','C','G','T']), ax=ax[2]) ax[1].set_ylim(ymin, ymax) ax[2].set_ylim(ymin, ymax) fig.suptitle("{} ({})".format(celltype, snp_lists['allRetina'].iloc[EXAMPLE_IDX]['rs'])) return fig # ## rs1532278 snp_lists['allRetina'].loc[snp_lists['allRetina']['rs']=="rs1532278"] fig = plot_rsid("rs1532278", "Mullerglia") fig.savefig("plots/rs1532278.pdf") # ## rs1874459 snp_lists['allRetina'].loc[snp_lists['allRetina']['rs']=="rs1874459"] fig = plot_rsid("rs1874459", "OFFconebipolar") fig.savefig("plots/rs1874459.pdf")
notebooks/Vignettes.ipynb
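The `one_hot.dna_to_one_hot` helper used throughout the notebook above comes from the project's own `utils` package. A minimal stand-in, assuming the conventional A/C/G/T → unit-vector mapping with all-zero rows for other characters, might look like:

```python
import numpy as np

# Hypothetical minimal equivalent of the project's one_hot.dna_to_one_hot
# (an assumption about its behaviour, not the project's actual code).
def dna_to_one_hot(seqs):
    lookup = {"A": 0, "C": 1, "G": 2, "T": 3}
    out = np.zeros((len(seqs), len(seqs[0]), 4), dtype=np.int8)
    for i, seq in enumerate(seqs):
        for j, base in enumerate(seq.upper()):
            if base in lookup:          # N and other ambiguity codes stay all-zero
                out[i, j, lookup[base]] = 1
    return out

enc = dna_to_one_hot(["ACGT"])
print(enc.shape)  # (1, 4, 4)
print(enc[0])     # identity matrix: one unit vector per base
```

Under this encoding, multiplying the one-hot array with per-base SHAP scores (as `np.multiply(cur_raw_seqs, counts_shap_scores)` does above) keeps only the score of the base actually present at each position.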
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:EMAT] # language: python # name: conda-env-EMAT-py # --- import emat emat.__version__ db = emat.SQLiteDB('gbnrtc_1.sqlitedb.gz') db.database_path from emat.interactive import Explorer explorer = Explorer(db) explorer from emat.interactive import prototype_logging prototype_logging.handler.out explorer.scope
docs/source/emat.examples/GBNRTC/gbnrtc_explorer.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # This file scrapes the data from bureau of labor statistics to find Median Pay for different occupations #Refer to the csv written as an output # - import requests import pandas as pd import bs4 as bs import matplotlib.pyplot as plt import urllib.request url = "https://www.bls.gov/ooh/" source = urllib.request.urlopen(url).read() soup = bs.BeautifulSoup(source, "lxml") lst = [] for uls in soup.find_all('ul', {'class': 'ooh-groups-col'}): for lis in uls.find_all('li'): lst.append(lis.text.lstrip(' ').rstrip(' ').replace(' ','-').replace(',','')) print(lst) base_url = "https://www.bls.gov/ooh/" tail = "/home.htm" df = pd.DataFrame() for i in lst: url = base_url+i+tail dfs = pd.read_html(url) df = df.append(dfs[0]) df = df.drop(['Unnamed: 0'], axis=1) df.to_csv('web-scraping-bls.csv')
notebooks/bhushan-web-scraping(bls).ipynb
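The `lstrip`/`rstrip`/`replace` chain used above to turn occupation-group names into URL path segments can be collected into a small helper (a sketch; the sample strings are made up, not scraped from the site):

```python
# Clean a scraped link label into a URL path segment:
# strip surrounding whitespace, drop commas, hyphenate the remaining spaces.
def to_slug(text):
    return text.strip().replace(",", "").replace(" ", "-")

print(to_slug("  Arts and Design "))                    # Arts-and-Design
print(to_slug("Installation, Maintenance and Repair"))  # Installation-Maintenance-and-Repair
```

Using `str.strip()` instead of `lstrip(' ').rstrip(' ')` also removes tabs and newlines, which scraped text often carries; for these inputs the comma/space replacement order gives the same result as the notebook's chain.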
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Abundance zeropoints # # ## Author(s): <NAME> (SB, WG4) # # ### History: # 180926 SB Created # 191126 SB Reformatted with final abundance measurements # + code_folding=[0] # Preamble for notebook # Compatibility with Python 3 from __future__ import (absolute_import, division, print_function) try: # %matplotlib inline # %config InlineBackend.figure_format='retina' except: pass # Basic packages import numpy as np np.seterr(divide='ignore', invalid='ignore') import os import sys import collections import glob import pickle import pandas # Packages to work with FITS and (IDL) SME.out files import astropy.io.fits as pyfits import astropy.table as table from astropy.table import Table, join from scipy.io.idl import readsav # Matplotlib and associated packages for plotting import matplotlib import matplotlib.pyplot as plt from matplotlib.colors import LogNorm from matplotlib.transforms import Bbox,TransformedBbox from matplotlib.image import BboxImage from matplotlib.legend_handler import HandlerBase from matplotlib._png import read_png from matplotlib.backends.backend_pdf import PdfPages from matplotlib.colors import ListedColormap import matplotlib.colors as colors params = { # 'font.family' : 'sans', # 'font.size' : 17, # 'axes.labelsize' : 20, # 'ytick.labelsize' : 16, # 'xtick.labelsize' : 16, # 'legend.fontsize' : 20, 'text.usetex' : True, 'text.latex.preamble': [r'\usepackage{upgreek}', r'\usepackage{amsmath}'], } plt.rcParams.update(params) _parula_data = [[0.2081, 0.1663, 0.5292], [0.2116238095, 0.1897809524, 0.5776761905], [0.212252381, 0.2137714286, 0.6269714286], [0.2081, 0.2386, 0.6770857143], [0.1959047619, 0.2644571429, 0.7279], [0.1707285714, 0.2919380952, 0.779247619], [0.1252714286, 0.3242428571, 0.8302714286], [0.0591333333, 
0.3598333333, 0.8683333333], [0.0116952381, 0.3875095238, 0.8819571429], [0.0059571429, 0.4086142857, 0.8828428571], [0.0165142857, 0.4266, 0.8786333333], [0.032852381, 0.4430428571, 0.8719571429], [0.0498142857, 0.4585714286, 0.8640571429], [0.0629333333, 0.4736904762, 0.8554380952], [0.0722666667, 0.4886666667, 0.8467], [0.0779428571, 0.5039857143, 0.8383714286], [0.079347619, 0.5200238095, 0.8311809524], [0.0749428571, 0.5375428571, 0.8262714286], [0.0640571429, 0.5569857143, 0.8239571429], [0.0487714286, 0.5772238095, 0.8228285714], [0.0343428571, 0.5965809524, 0.819852381], [0.0265, 0.6137, 0.8135], [0.0238904762, 0.6286619048, 0.8037619048], [0.0230904762, 0.6417857143, 0.7912666667], [0.0227714286, 0.6534857143, 0.7767571429], [0.0266619048, 0.6641952381, 0.7607190476], [0.0383714286, 0.6742714286, 0.743552381], [0.0589714286, 0.6837571429, 0.7253857143], [0.0843, 0.6928333333, 0.7061666667], [0.1132952381, 0.7015, 0.6858571429], [0.1452714286, 0.7097571429, 0.6646285714], [0.1801333333, 0.7176571429, 0.6424333333], [0.2178285714, 0.7250428571, 0.6192619048], [0.2586428571, 0.7317142857, 0.5954285714], [0.3021714286, 0.7376047619, 0.5711857143], [0.3481666667, 0.7424333333, 0.5472666667], [0.3952571429, 0.7459, 0.5244428571], [0.4420095238, 0.7480809524, 0.5033142857], [0.4871238095, 0.7490619048, 0.4839761905], [0.5300285714, 0.7491142857, 0.4661142857], [0.5708571429, 0.7485190476, 0.4493904762], [0.609852381, 0.7473142857, 0.4336857143], [0.6473, 0.7456, 0.4188], [0.6834190476, 0.7434761905, 0.4044333333], [0.7184095238, 0.7411333333, 0.3904761905], [0.7524857143, 0.7384, 0.3768142857], [0.7858428571, 0.7355666667, 0.3632714286], [0.8185047619, 0.7327333333, 0.3497904762], [0.8506571429, 0.7299, 0.3360285714], [0.8824333333, 0.7274333333, 0.3217], [0.9139333333, 0.7257857143, 0.3062761905], [0.9449571429, 0.7261142857, 0.2886428571], [0.9738952381, 0.7313952381, 0.266647619], [0.9937714286, 0.7454571429, 0.240347619], [0.9990428571, 0.7653142857, 
0.2164142857], [0.9955333333, 0.7860571429, 0.196652381], [0.988, 0.8066, 0.1793666667], [0.9788571429, 0.8271428571, 0.1633142857], [0.9697, 0.8481380952, 0.147452381], [0.9625857143, 0.8705142857, 0.1309], [0.9588714286, 0.8949, 0.1132428571], [0.9598238095, 0.9218333333, 0.0948380952], [0.9661, 0.9514428571, 0.0755333333], [0.9763, 0.9831, 0.0538]] parula = ListedColormap(_parula_data, name='parula') parula_zero = _parula_data[0] parula_0 = ListedColormap(_parula_data, name='parula_0') parula_0.set_bad((1,1,1)) parula_r = ListedColormap(_parula_data[::-1], name='parula_r') willi_blau = [0.0722666667, 0.4886666667, 0.8467] # - final_elements = [ 'teff','logg','fe_h', #'fe_h_atmo','vmic','vbroad', 'Li','C','O','Na','Mg5711','Al','Si','K7699','Ca', 'Sc'] # # Measurements from Skyflats and Arcturus # + code_folding=[] def get_galah_sun_10k_43(): gbs_lbol = pyfits.getdata('../../processing/sme_result_files/GALAH_10k_43_lbol_final.fits',1) gbs_sun = gbs_lbol[(gbs_lbol['sobject_id'] == 150405000901378)][0] galah_sun = collections.OrderedDict() galah_sun['teff'] = [float('%.0f' % gbs_sun['TEFF']), float('%.0f' % gbs_sun['E_TEFF'])] galah_sun['logg'] = [float('%.2f' % gbs_sun['LOGG']), float(0.0)] galah_sun['fe_h'] = [float('%.2f' % gbs_sun['FEH']), float(0.0)] galah_sun['fe_h_atmo'] = [float('%.2f' % gbs_sun['FEH']), float(0.0)] galah_sun['vmic'] = [float('%.2f' % gbs_sun['VMIC']), float(0.0)] galah_sun['vbroad'] = [float('%.2f' % gbs_sun['VSINI']), float(0.0)] galah_sun['mass'] = [float('%.2f' % gbs_sun['MASS']), float(0.0)] galah_sun['lbol'] = [float('%.2f' % gbs_sun['LBOL']), float(0.0)] galah_sun['age'] = [float('%.2f' % gbs_sun['AGE']), float(0.0)] galah_sun['chi2_sp'] = [float('%.2f' % gbs_sun['CHI'][0]), float(0.0)] galah_sun['alpha_fe'] = [float('%.2f' % gbs_sun['ALPHA_FE']), float(0.0)] for iterator, each_mode in enumerate(gbs_sun['MODE']): if (each_mode != 'Sp') & (len(each_mode) >=1): if (len(each_mode) <= 2): galah_sun["A_"+each_mode] = np.array([ 
float('%.2f' % gbs_sun['A_ABUND'][iterator]), float('%.2f' % gbs_sun['C_ABUND'][iterator]), ]) else: galah_sun["A_"+each_mode] = np.array([ float('%.2f' % gbs_sun['A_ABUND'][iterator]), float('%.2f' % gbs_sun['C_ABUND'][iterator]) ]) return(galah_sun) galah_sun = get_galah_sun_10k_43() # + code_folding=[] def get_galah_sun_gbs(): gbs_lbol = pyfits.getdata('../stellar_parameters/gaia_fgk_benchmark_stars/data/GALAH_gbs_lbol.fits',1) gbs_sun = gbs_lbol[(gbs_lbol['sobject_id'] == 150405000901378)][0] galah_sun = collections.OrderedDict() galah_sun['teff'] = [float('%.0f' % gbs_sun['TEFF']), float('%.0f' % gbs_sun['E_TEFF'])] galah_sun['logg'] = [float('%.2f' % gbs_sun['LOGG']), float(0.0)] galah_sun['fe_h'] = [float('%.2f' % gbs_sun['FEH']), float(0.0)] galah_sun['fe_h_atmo'] = [float('%.2f' % gbs_sun['FEH']), float(0.0)] galah_sun['vmic'] = [float('%.2f' % gbs_sun['VMIC']), float(0.0)] galah_sun['vbroad'] = [float('%.2f' % gbs_sun['VSINI']), float(0.0)] galah_sun['mass'] = [float('%.2f' % gbs_sun['MASS']), float(0.0)] galah_sun['lbol'] = [float('%.2f' % gbs_sun['LBOL']), float(0.0)] galah_sun['age'] = [float('%.2f' % gbs_sun['AGE']), float(0.0)] for iterator, each_mode in enumerate(gbs_sun['MODE']): if (each_mode != 'Sp') & (len(each_mode) >=1): if (len(each_mode) <= 2): galah_sun["A_"+each_mode] = np.array([ float('%.2f' % gbs_sun['A_ABUND'][iterator]), float('%.2f' % gbs_sun['C_ABUND'][iterator]), ]) else: galah_sun["A_"+each_mode] = np.array([ float('%.2f' % gbs_sun['A_ABUND'][iterator]), float('%.2f' % gbs_sun['C_ABUND'][iterator]) ]) return(galah_sun) galah_sun_gbs = get_galah_sun_gbs() # + code_folding=[] def get_galah_arcturus(): gbs_lbol = pyfits.getdata('../stellar_parameters/gaia_fgk_benchmark_stars/data/GALAH_gbs_lbol.fits',1) gbs_arcturus = gbs_lbol[(gbs_lbol['sobject_id'] == 150210005801171)][0] galah_arcturus = collections.OrderedDict() galah_arcturus['teff'] = ['%.0f' % gbs_arcturus['TEFF'], '%.0f' % gbs_arcturus['E_TEFF']] galah_arcturus['logg'] = 
['%.2f' % gbs_arcturus['LOGG'], np.nan] galah_arcturus['fe_h'] = ['%.2f' % gbs_arcturus['FEH'], np.nan] galah_arcturus['fe_h_atmo'] = ['%.2f' % gbs_arcturus['FEH'], np.nan] galah_arcturus['vmic'] = ['%.2f' % gbs_arcturus['VMIC'], np.nan] galah_arcturus['vbroad'] = ['%.2f' % gbs_arcturus['VSINI'], np.nan] galah_arcturus['mass'] = ['%.2f' % gbs_arcturus['MASS'], np.nan] galah_arcturus['lbol'] = ['%.2f' % gbs_arcturus['LBOL'], np.nan] galah_arcturus['age'] = ['%.2f' % gbs_arcturus['AGE'], np.nan] galah_arcturus['chi2_sp'] = ['%.2f' % gbs_arcturus['CHI'][0], np.nan] # Z, A_X, e_A_X for iterator, each_mode in enumerate(gbs_arcturus['MODE']): if (each_mode != 'Sp') & (len(each_mode) >=1) & (each_mode not in ['Si6742','Ca6472','Ti6689','Ti6743','Ti6746']): if (len(each_mode) <= 2): galah_arcturus["A_"+each_mode] = np.array([ float('%.2f' % gbs_arcturus['A_ABUND'][iterator]), float('%.2f' % gbs_arcturus['E_ABUND'][iterator]) ]) else: galah_arcturus["A_"+each_mode] = np.array([ float('%.2f' % gbs_arcturus['A_ABUND'][iterator]), float('%.2f' % gbs_arcturus['E_ABUND'][iterator]) ]) return(galah_arcturus) galah_arcturus = get_galah_arcturus() # + sun_skyflat = collections.OrderedDict() for each_key in galah_sun: try: sun_skyflat[each_key] = galah_sun[each_key] except: sun_skyflat[each_key] = ['n.a.','n.a.'] # + arcturus_galah = collections.OrderedDict() for each_key in galah_sun: try: print(each_key, galah_arcturus['A_'+each_key],'') arcturus_galah[each_key] = galah_arcturus['A_'+each_key] except: if each_key == 'Li': arcturus_galah['Li'] = galah_arcturus['A_Li'] arcturus_galah[each_key] = ['n.a.','n.a.'] # - # # Reference values # + code_folding=[] # sun_grevesse sun_grevesse = collections.OrderedDict() sun_grevesse['teff'] = [np.NaN,np.NaN] sun_grevesse['logg'] = [np.NaN,np.NaN] sun_grevesse['fe_h'] = [np.NaN,np.NaN] sun_grevesse['fe_h_atmo'] = [np.NaN,np.NaN] sun_grevesse['vmic'] = [np.NaN,np.NaN] sun_grevesse['vbroad'] = [np.NaN,np.NaN] sun_grevesse['mass'] = 
[np.NaN,np.NaN] sun_grevesse['lbol'] = [np.NaN,np.NaN] sun_grevesse['age'] = [np.NaN,np.NaN] sun_grevesse['chi2_sp'] = [np.NaN,np.NaN] sun_grevesse['alpha_fe'] = [np.NaN,np.NaN] sun_grevesse['A_Li'] = [1.05,0.10] sun_grevesse['A_C'] = [8.39,0.05] sun_grevesse['A_O'] = [8.66,0.05] sun_grevesse['A_Na'] = [6.17,0.04] sun_grevesse['A_Mg'] = [7.53,0.09] sun_grevesse['A_Al'] = [6.37,0.06] sun_grevesse['A_Si'] = [7.51,0.04] sun_grevesse['A_K'] = [5.08,0.07] sun_grevesse['A_Ca'] = [6.31,0.04] sun_grevesse['A_Sc'] = [3.17,0.10] sun_grevesse['A_Ti'] = [4.90,0.06] sun_grevesse['A_V'] = [4.00,0.02] sun_grevesse['A_Cr'] = [5.64,0.10] sun_grevesse['A_Mn'] = [5.39,0.03] sun_grevesse['A_Fe'] = [7.45,0.05] sun_grevesse['A_Co'] = [4.92,0.08] sun_grevesse['A_Ni'] = [6.23,0.04] sun_grevesse['A_Cu'] = [4.21,0.04] sun_grevesse['A_Zn'] = [4.60,0.03] sun_grevesse['A_Rb'] = [2.60,0.15] sun_grevesse['A_Sr'] = [2.92,0.05] sun_grevesse['A_Y'] = [2.21,0.02] sun_grevesse['A_Zr'] = [2.58,0.02] sun_grevesse['A_Mo'] = [1.92,0.05] sun_grevesse['A_Ru'] = [1.84,0.07] sun_grevesse['A_Ba'] = [2.17,0.07] sun_grevesse['A_La'] = [1.13,0.05] sun_grevesse['A_Ce'] = [1.70,0.10] sun_grevesse['A_Nd'] = [1.45,0.05] sun_grevesse['A_Sm'] = [1.00,0.03] sun_grevesse['A_Eu'] = [0.52,0.06] # + code_folding=[] # sun_asplund sun_asplund = collections.OrderedDict() sun_asplund['teff'] = [np.NaN,np.NaN] sun_asplund['logg'] = [np.NaN,np.NaN] sun_asplund['fe_h'] = [np.NaN,np.NaN] sun_asplund['fe_h_atmo'] = [np.NaN,np.NaN] sun_asplund['vmic'] = [np.NaN,np.NaN] sun_asplund['vbroad'] = [np.NaN,np.NaN] sun_asplund['mass'] = [np.NaN,np.NaN] sun_asplund['lbol'] = [np.NaN,np.NaN] sun_asplund['age'] = [np.NaN,np.NaN] sun_asplund['chi2_sp'] = [np.NaN,np.NaN] sun_asplund['alpha_fe'] = [np.NaN,np.NaN] sun_asplund['A_Li'] = [1.05, 0.10] sun_asplund['A_C'] = [8.43, 0.05] sun_asplund['A_O'] = [8.69, 0.05] sun_asplund['A_Na'] = [6.24, 0.04] sun_asplund['A_Mg'] = [7.60, 0.04] sun_asplund['A_Al'] = [6.45, 0.03] sun_asplund['A_Si'] = [7.51, 
0.03] sun_asplund['A_K'] = [5.03, 0.09] sun_asplund['A_Ca'] = [6.34, 0.04] sun_asplund['A_Sc'] = [3.15, 0.04] sun_asplund['A_Ti'] = [4.95, 0.05] sun_asplund['A_V'] = [3.93, 0.08] sun_asplund['A_Cr'] = [5.64, 0.04] sun_asplund['A_Mn'] = [5.43, 0.04] sun_asplund['A_Fe'] = [7.50, 0.04] sun_asplund['A_Co'] = [4.99, 0.07] sun_asplund['A_Ni'] = [6.22, 0.04] sun_asplund['A_Cu'] = [4.19, 0.04] sun_asplund['A_Zn'] = [4.56, 0.05] sun_asplund['A_Rb'] = [2.52, 0.10] sun_asplund['A_Sr'] = [2.87, 0.07] sun_asplund['A_Y'] = [2.21, 0.05] sun_asplund['A_Zr'] = [2.58, 0.04] sun_asplund['A_Mo'] = [1.88, 0.08] sun_asplund['A_Ru'] = [1.75, 0.08] sun_asplund['A_Ba'] = [2.18, 0.09] sun_asplund['A_La'] = [1.10, 0.04] sun_asplund['A_Ce'] = [1.58, 0.04] sun_asplund['A_Nd'] = [1.42, 0.04] sun_asplund['A_Sm'] = [0.96, 0.04] sun_asplund['A_Eu'] = [0.52, 0.04] # + code_folding=[0] # arcturus_ramirez arcturus_ramirez = collections.OrderedDict() arcturus_ramirez['teff'] = [4286,30] arcturus_ramirez['logg'] = [1.66,0.05] arcturus_ramirez['fe_h'] = [-0.52,0.04] arcturus_ramirez['Li'] = [np.NaN,np.NaN] arcturus_ramirez['C'] = [0.43,0.07] arcturus_ramirez['O'] = [0.50,0.03] arcturus_ramirez['Na'] = [0.11,0.03] arcturus_ramirez['Mg'] = [0.37,0.03] arcturus_ramirez['Al'] = [0.34,0.03] arcturus_ramirez['Si'] = [0.33,0.04] arcturus_ramirez['K'] = [0.20,0.07] arcturus_ramirez['Ca'] = [0.11,0.04] arcturus_ramirez['Sc'] = [0.15,0.08] #arcturus_ramirez['ScII'] = [0.23,0.04] arcturus_ramirez['Ti'] = [0.27,0.05] #arcturus_ramirez['TiII'] = [0.21,0.04] arcturus_ramirez['V'] = [0.20,0.05] arcturus_ramirez['Cr'] = [-0.05,0.04] arcturus_ramirez['Mn'] = [-0.21,0.04] arcturus_ramirez['Fe'] = [np.NaN,np.NaN] arcturus_ramirez['Co'] = [0.09,0.04] arcturus_ramirez['Ni'] = [0.06,0.03] arcturus_ramirez['Cu'] = [np.NaN,np.NaN] arcturus_ramirez['Zn'] = [0.22,0.06] arcturus_ramirez['Rb'] = [np.NaN,np.NaN] arcturus_ramirez['Sr'] = [np.NaN,np.NaN] arcturus_ramirez['Y'] = [np.NaN,np.NaN] arcturus_ramirez['Zr'] = [np.NaN,np.NaN] 
arcturus_ramirez['Mo'] = [np.NaN,np.NaN] arcturus_ramirez['Ru'] = [np.NaN,np.NaN] arcturus_ramirez['Ba'] = [np.NaN,np.NaN] arcturus_ramirez['La'] = [np.NaN,np.NaN] arcturus_ramirez['Ce'] = [np.NaN,np.NaN] arcturus_ramirez['Nd'] = [np.NaN,np.NaN] arcturus_ramirez['Sm'] = [np.NaN,np.NaN] arcturus_ramirez['Eu'] = [np.NaN,np.NaN] # + code_folding=[0] # arcturus_apogee arcturus_apogee = collections.OrderedDict() # Atmosphere see Holtzmann+19 # arcturus_apogee['teff'] = [4286,np.NaN] # arcturus_apogee['logg'] = [1.66,np.NaN] # arcturus_apogee['fe_h'] = [-0.52,np.NaN] # arcturus_apogee['Li'] = [np.NaN,np.NaN] # arcturus_apogee['C'] = [7.96,np.NaN] # arcturus_apogee['O'] = [8.62,np.NaN] # arcturus_apogee['Na'] = [5.86,np.NaN] # arcturus_apogee['Mg'] = [7.38,np.NaN] # arcturus_apogee['Al'] = [6.25,np.NaN] # arcturus_apogee['Si'] = [7.32,np.NaN] # arcturus_apogee['K'] = [4.76,np.NaN] # arcturus_apogee['Ca'] = [5.88,np.NaN] # arcturus_apogee['Sc'] = [2.72,np.NaN] # arcturus_apogee['Ti'] = [4.63,np.NaN] # arcturus_apogee['V'] = [3.64,np.NaN] # arcturus_apogee['Cr'] = [5.07,np.NaN] # arcturus_apogee['Mn'] = [4.75,np.NaN] # arcturus_apogee['Fe'] = [6.93,np.NaN] # arcturus_apogee['Co'] = [4.44,np.NaN] # arcturus_apogee['Ni'] = [5.74,np.NaN] # arcturus_apogee['Cu'] = [3.64,np.NaN] # Actual DR14 catalog entry arcturus_apogee['teff'] = [4290,56] arcturus_apogee['logg'] = [1.70,0.06] arcturus_apogee['vmic'] = [1.48,np.NaN] arcturus_apogee['vbroad'] = [10,np.NaN] arcturus_apogee['fe_h'] = [-0.47,0.01] arcturus_apogee['Li'] = [np.NaN,np.NaN] arcturus_apogee['C'] = [0.10,0.03] arcturus_apogee['O'] = [0.22,0.02] arcturus_apogee['Na'] = [-0.09,0.06] arcturus_apogee['Mg'] = [0.24,0.02] arcturus_apogee['Al'] = [0.16,0.04] arcturus_apogee['Si'] = [0.18,0.03] arcturus_apogee['K'] = [0.12,0.03] arcturus_apogee['Ca'] = [0.11,0.02] arcturus_apogee['Sc'] = [np.NaN,np.NaN] arcturus_apogee['Ti'] = [0.08,0.03] #arcturus_apogee['TiII'] = [0.31,0.08] arcturus_apogee['V'] = [-0.07,0.06] 
arcturus_apogee['Cr'] = [0.08,0.04] arcturus_apogee['Mn'] = [-0.19,0.02] arcturus_apogee['Fe'] = [np.NaN,np.NaN] arcturus_apogee['Co'] = [0.20,0.05] arcturus_apogee['Ni'] = [0.11,0.02] arcturus_apogee['Cu'] = [np.NaN,np.NaN] arcturus_apogee['Zn'] = [np.NaN,np.NaN] arcturus_apogee['Rb'] = [np.NaN,np.NaN] arcturus_apogee['Sr'] = [np.NaN,np.NaN] arcturus_apogee['Y'] = [np.NaN,np.NaN] arcturus_apogee['Zr'] = [np.NaN,np.NaN] arcturus_apogee['Mo'] = [np.NaN,np.NaN] arcturus_apogee['Ru'] = [np.NaN,np.NaN] arcturus_apogee['Ba'] = [np.NaN,np.NaN] arcturus_apogee['La'] = [np.NaN,np.NaN] arcturus_apogee['Ce'] = [np.NaN,np.NaN] arcturus_apogee['Nd'] = [np.NaN,np.NaN] arcturus_apogee['Sm'] = [np.NaN,np.NaN] arcturus_apogee['Eu'] = [np.NaN,np.NaN] # - # # Final values # + sun_adopted = collections.OrderedDict() for each_key in galah_sun.keys(): sun_adopted[each_key] = np.array(galah_sun[each_key]) sun_adopted['teff'] = [5772,np.NaN] sun_adopted['logg'] = [4.44,np.NaN] sun_adopted['fe_h'] = [0.00,np.NaN] sun_adopted['fe_h_atmo'] = [0.00, np.NaN] sun_adopted['vmic'] = [1.06 , 0.00] # GBS sun_adopted['vbroad'] = [np.sqrt(4.00**2. 
+ 1.6**2.),np.NaN] # GBS sun_adopted['mass'] = [1.00 , np.NaN] # by definition sun_adopted['lbol'] = [1.00 , np.NaN] # by definition sun_adopted['age'] = [4.57 , 0.11] # Bonanno et al., 2002, A&A, 390, 1115 sun_adopted['chi2_sp'] = [1.00, 0.00] sun_adopted['alpha_fe'] = [0.00, 0.00] # - sun_adopted['A_Li6708'] = [1.05, np.sqrt(galah_sun['A_Li6708'][1]**2 + 0.1**2)] sun_adopted['A_Li6708_NoRV'] = [1.05, np.sqrt(galah_sun['A_Li6708'][1]**2 + 0.1**2)] sun_adopted['A_C6588'] = [8.45, np.sqrt(galah_sun['A_C6588'][1]**2 + 0.06**2)] sun_adopted['A_O'] = [8.77, np.sqrt(galah_sun['A_O'][1]**2 + 0.00**2)] sun_adopted['A_Na'][0] = 6.06 # sun_adopted['A_Na'][1] = np.sqrt(galah_sun['A_Na'][1]**2 + 0.09**2) sun_adopted['A_Mg5711'][0] = 7.60 # sun_adopted['A_Mg5711'][1] = np.sqrt(galah_sun['A_Mg5711'][1]**2 + 0.07**2) sun_adopted['A_Al'][0] = 6.41 sun_adopted['A_Al'][1] = np.sqrt(galah_sun['A_Al'][1]**2 + 0.04**2) sun_adopted['A_Si'][0] = 7.47 sun_adopted['A_Si'][1] = np.sqrt(galah_sun['A_Si'][1]**2 + 0.06**2) sun_adopted['A_K7699'][0] = 5.07 sun_adopted['A_K7699'][1] = np.sqrt(galah_sun['A_K7699'][1]**2 + 0.10**2) sun_adopted['A_Ca'][0] = 6.18 sun_adopted['A_Ca'][1] = np.sqrt(galah_sun['A_Ca'][1]**2 + 0.12**2) sun_adopted['A_Sc'] = [3.16, np.sqrt(0.05**2 + 0.01**2)] sun_adopted['A_Ti4758'][0] = 4.70 # shifted by -0.05 wrt skyflat fit sun_adopted['A_Ti4758'][1] = np.sqrt(galah_sun['A_Ti4758'][1]**2 + 0.05**2) sun_adopted['A_Ti4759'][0] = 4.72 # shifted by 0.00 wrt skyflat fit sun_adopted['A_Ti4759'][1] = np.sqrt(galah_sun['A_Ti4759'][1]**2 + 0.00**2) sun_adopted['A_Ti4778'][0] = 4.90 # shifted by 0.00 wrt skyflat fit sun_adopted['A_Ti4778'][1] = np.sqrt(galah_sun['A_Ti4759'][1]**2 + 0.00**2) sun_adopted['A_Ti4782'][0] = 5.04 # shifted by 0.00 wrt skyflat fit sun_adopted['A_Ti4782'][1] = np.sqrt(galah_sun['A_Ti4782'][1]**2 + 0.00**2) sun_adopted['A_Ti4798'][0] = 4.70 # shifted by NaN wrt skyflat fit sun_adopted['A_Ti4798'][1] = 0.2 # wrt to Grevesse A(Ti) 
sun_adopted['A_Ti4802'][0] = 5.05 # shifted by NaN wrt skyflat fit sun_adopted['A_Ti4802'][1] = 0.15 # wrt to Grevesse A(Ti) sun_adopted['A_Ti4820'][0] = 4.80 # shifted by 0.00 wrt skyflat fit sun_adopted['A_Ti4820'][1] = np.sqrt(galah_sun['A_Ti4759'][1]**2 + 0.05**2) sun_adopted['A_Ti5689'][0] = 4.83 # shifted by 0.08 wrt skyflat fit sun_adopted['A_Ti5689'][1] = np.sqrt(galah_sun['A_Ti5689'][1]**2 + 0.08**2) sun_adopted['A_Ti5716'][0] = 4.90 # shifted by NaN wrt skyflat fit sun_adopted['A_Ti5716'][1] = 0.1 # is Grevesse, but no detection sun_adopted['A_Ti5720'][0] = 5.00 # shifted by 0.00 wrt skyflat fit sun_adopted['A_Ti5720'][1] = np.sqrt(galah_sun['A_Ti4759'][1]**2 + 0.18**2) sun_adopted['A_Ti5739'][0] = 4.82 # shifted by 0.09 wrt skyflat fit sun_adopted['A_Ti5739'][1] = np.sqrt(galah_sun['A_Ti4759'][1]**2 + 0.09**2) sun_adopted['A_Ti5866'][0] = 4.85 # shifted by 0.00 wrt skyflat fit sun_adopted['A_Ti5866'][1] = np.sqrt(galah_sun['A_Ti5866'][1]**2 + 0.00**2) sun_adopted['A_Ti6717'][0] = 5.2 # shifted by NaN wrt skyflat fit sun_adopted['A_Ti6717'][1] = 0.3 # wrt to Grevesse A(Ti) sun_adopted['A_Ti7853'][0] = 4.86 # shifted by NaN wrt skyflat fit sun_adopted['A_Ti7853'][1] = 0.04 # wrt to Grevesse A(Ti) sun_adopted['A_Ti4720'][0] = 5.12 # shifted by NaN wrt skyflat fit sun_adopted['A_Ti4720'][1] = 0.22 # wrt to Grevesse A(Ti) sun_adopted['A_Ti4765'][0] = 4.85 # shifted by NaN wrt skyflat fit sun_adopted['A_Ti4765'][1] = 0.05 # wrt to Grevesse A(Ti) sun_adopted['A_Ti4799'][0] = 4.85 # shifted by NaN wrt skyflat fit sun_adopted['A_Ti4799'][1] = 0.05 # wrt to Grevesse A(Ti) sun_adopted['A_Ti4849'][0] = 5.18 # shifted by 0.09 wrt skyflat fit sun_adopted['A_Ti4849'][1] = np.sqrt(galah_sun['A_Ti4849'][1]**2 + 0.00**2) sun_adopted['A_Ti4866'][0] = 5.12 # shifted by 0.00 wrt skyflat fit sun_adopted['A_Ti4866'][1] = np.sqrt(galah_sun['A_Ti4866'][1]**2 + 0.00**2) sun_adopted['A_Ti4874'][0] = 4.95 # shifted by 0.01 wrt skyflat fit sun_adopted['A_Ti4874'][1] = 
np.sqrt(galah_sun['A_Ti4874'][1]**2 + 0.06**2) sun_adopted['A_V4784'] = [3.99 , np.sqrt(0.05**2 + 0.01**2)] sun_adopted['A_V4797'] = [3.99 , np.sqrt(0.05**2 + 0.01**2)] sun_adopted['A_V4832'] = [3.99 , np.sqrt(0.05**2 + 0.01**2)] sun_adopted['A_Cr'] = [5.63, np.sqrt(0.05**2 + 0.01**2)] sun_adopted['A_Mn'][0] = 5.33 # sun_adopted['A_Mn'][1] = np.sqrt(galah_sun['A_Mn'][1]**2 + 0.05**2) sun_adopted['A_Co4781'] = [4.92+0.01,0.01] # sun_adopted['A_Co4900'] = [4.92+0.03,0.03] # sun_adopted['A_Co5647'] = [4.92+0.08,0.08] # sun_adopted['A_Co6490'] = [4.92-0.07,0.07] # sun_adopted['A_Co6551'] = [4.92-0.03,0.03] # sun_adopted['A_Co6632'] = [4.92+0.01,0.01] # sun_adopted['A_Co6679'] = [4.92+0.22,0.22] # sun_adopted['A_Co7713'] = [4.92+0.14,0.14] # sun_adopted['A_Co7838'] = [4.92,0] # sun_adopted['A_Ni5847'][0] = 6.23 # sun_adopted['A_Ni5847'][1] = np.sqrt(galah_sun['A_Ni5847'][1]**2 + 0.05**2) sun_adopted['A_Ni6586'][0] = 6.23 # sun_adopted['A_Ni6586'][1] = np.sqrt(galah_sun['A_Ni6586'][1]**2 + 0.05**2) sun_adopted['A_Cu5700'][0] = 3.74 # sun_adopted['A_Cu5700'][1] = np.sqrt(galah_sun['A_Cu5700'][1]**2 + 0.12**2) sun_adopted['A_Cu5782'][0] = 4.06 # sun_adopted['A_Cu5782'][1] = np.sqrt(galah_sun['A_Cu5782'][1]**2 + 0.36**2) sun_adopted['A_Zn4722'][0] = 4.49 # sun_adopted['A_Zn4722'][1] = np.sqrt(galah_sun['A_Zn4722'][1]**2 + 0.15**2) sun_adopted['A_Zn4811'][0] = 4.46 # sun_adopted['A_Zn4811'][1] = np.sqrt(galah_sun['A_Zn4811'][1]**2 + 0.12**2) sun_adopted['A_Rb7800'][0] = 2.60 # sun_adopted['A_Rb7800'][1] = np.sqrt(galah_sun['A_Rb7800'][1]**2 + 0.05**2) sun_adopted['A_Sr6550'][0] = 3.30 # sun_adopted['A_Sr6550'][1] = np.sqrt(galah_sun['A_Sr6550'][1]**2 + 0.38**2) sun_adopted['A_Y'][0] = 2.21-0.30+0.23 # sun_adopted['A_Y'][1] = np.sqrt(galah_sun['A_Y'][1]**2 + 0.00**2) sun_adopted['A_Y4820'][0] = 2.21-1.20+0.23 # sun_adopted['A_Y4820'][1] = np.sqrt(galah_sun['A_Y4820'][1]**2 + 0.00**2) sun_adopted['A_Y4855'][0] = 2.21-0.31+0.23 # sun_adopted['A_Y4855'][1] = 
np.sqrt(galah_sun['A_Y4855'][1]**2 + 0.31**2) sun_adopted['A_Y4884'][0] = 2.21-0.35+0.23 # sun_adopted['A_Y4884'][1] = np.sqrt(galah_sun['A_Y4884'][1]**2 + 0.35**2) sun_adopted['A_Y5663'][0] = 2.21-0.87+0.23 # sun_adopted['A_Y5663'][1] = np.sqrt(galah_sun['A_Y5663'][1]**2 + 0.30**2) sun_adopted['A_Y5729'][0] = 2.21+0.05+0.23 # sun_adopted['A_Y5729'][1] = np.sqrt(galah_sun['A_Y5729'][1]**2 + 0.05**2) sun_adopted['A_Zr4739'][0] = 2.50-0.19 # sun_adopted['A_Zr4739'][1] = np.sqrt(galah_sun['A_Zr4739'][1]**2 + 0.29**2) sun_adopted['A_Zr4772'][0] = 2.58-0.10 # sun_adopted['A_Zr4772'][1] = np.sqrt(galah_sun['A_Zr4772'][1]**2 + 0.20**2) sun_adopted['A_Zr4806'][0] = 2.58-0.15 # sun_adopted['A_Zr4806'][1] = np.sqrt(galah_sun['A_Zr4806'][1]**2 + 0.15**2) sun_adopted['A_Zr4828'][0] = 2.58+0.08 # sun_adopted['A_Zr4828'][1] = np.sqrt(galah_sun['A_Zr4828'][1]**2 + 0.41**2) sun_adopted['A_Zr5681'][0] = 2.58+0.47 # sun_adopted['A_Zr5681'][1] = np.sqrt(galah_sun['A_Zr5681'][1]**2 + 0.47**2) sun_adopted['A_Mo5689'][0] = 1.92 # sun_adopted['A_Mo5689'][1] = np.sqrt(galah_sun['A_Mo5689'][1]**2 + 0.10**2) sun_adopted['A_Mo5751'][0] = 1.92 # sun_adopted['A_Mo5751'][1] = np.sqrt(galah_sun['A_Mo5751'][1]**2 + 0.05**2) sun_adopted['A_Mo5858'][0] = 1.92+0.73 # sun_adopted['A_Mo5858'][1] = np.sqrt(galah_sun['A_Mo5858'][1]**2 + 0.73**2) sun_adopted['A_Mo6619'][0] = 1.92+0.00 # sun_adopted['A_Mo6619'][1] = np.sqrt(galah_sun['A_Mo6619'][1]**2 + 0.13**2) sun_adopted['A_Ru4758'][0] = 1.84 # sun_adopted['A_Ru4758'][1] = np.sqrt(galah_sun['A_Ru4758'][1]**2 + 0.05**2) sun_adopted['A_Ru4869'][0] = 1.92-0.02 # sun_adopted['A_Ru4869'][1] = np.sqrt(galah_sun['A_Ru4869'][1]**2 + 0.02**2) sun_adopted['A_Ru5699'][0] = 1.92+0.15 # sun_adopted['A_Ru5699'][1] = np.sqrt(galah_sun['A_Ru5699'][1]**2 + 0.15**2) sun_adopted['A_Ba'][0] = 2.17 # sun_adopted['A_Ba'][1] = np.sqrt(galah_sun['A_Ba'][1]**2 + 0.10**2) sun_adopted['A_La4716'][0] = 1.13 # sun_adopted['A_La4716'][1] = np.sqrt(galah_sun['A_La4716'][1]**2 + 
0.05**2) sun_adopted['A_La4749'][0] = 1.13+0.14 # sun_adopted['A_La4749'][1] = np.sqrt(galah_sun['A_La4749'][1]**2 + 0.30**2) sun_adopted['A_La4804'][0] = 1.13+0.10 # sun_adopted['A_La4804'][1] = np.sqrt(galah_sun['A_La4804'][1]**2 + 0.10**2) sun_adopted['A_La5806'][0] = 1.13 # sun_adopted['A_La5806'][1] = np.sqrt(galah_sun['A_La5806'][1]**2 + 0.05**2) sun_adopted['A_Ce4774'][0] = 2.14+0.00 # sun_adopted['A_Ce4774'][1] = np.sqrt(galah_sun['A_Ce4774'][1]**2 + 0.03**2) sun_adopted['A_Nd4811'][0] = 1.45+0.17 # sun_adopted['A_Nd4811'][1] = np.sqrt(galah_sun['A_Nd4811'][1]**2 + 0.17**2) sun_adopted['A_Nd5741'][0] = 1.45-0.13 # sun_adopted['A_Nd5741'][1] = np.sqrt(galah_sun['A_Nd5741'][1]**2 + 0.13**2) sun_adopted['A_Nd5770'][0] = 1.45+0.27 # sun_adopted['A_Nd5770'][1] = np.sqrt(galah_sun['A_Nd5770'][1]**2 + 0.27**2) sun_adopted['A_Nd5812'][0] = 1.45-0.05 # sun_adopted['A_Nd5812'][1] = np.sqrt(galah_sun['A_Nd5812'][1]**2 + 0.05**2) sun_adopted['A_Nd5842'][0] = 1.45+0.16 # sun_adopted['A_Nd5842'][1] = np.sqrt(galah_sun['A_Nd5842'][1]**2 + 0.16**2) sun_adopted['A_Sm4720'][0] = 1.36 # sun_adopted['A_Sm4720'][1] = np.sqrt(galah_sun['A_Sm4720'][1]**2 + 0.05**2) sun_adopted['A_Sm4792'][0] = 1.45 # sun_adopted['A_Sm4792'][1] = np.sqrt(galah_sun['A_Sm4792'][1]**2 + 0.30**2) sun_adopted['A_Sm4837'][0] = 2.31 # sun_adopted['A_Sm4837'][1] = np.sqrt(galah_sun['A_Sm4837'][1]**2 + 1.50**2) sun_adopted['A_Sm4848'][0] = 1.66 # sun_adopted['A_Sm4848'][1] = np.sqrt(galah_sun['A_Sm4848'][1]**2 + 0.05**2) sun_adopted['A_Sm4854'][0] = 1.61 # sun_adopted['A_Sm4854'][1] = np.sqrt(galah_sun['A_Sm4854'][1]**2 + 0.05**2) sun_adopted['A_Eu5819'][0] = 0.52+1.2 # sun_adopted['A_Eu5819'][1] = np.sqrt(galah_sun['A_Eu5819'][1]**2 + 1.2**2) sun_adopted['A_Eu6645'][0] = 0.57 # sun_adopted['A_Eu6645'][1] = np.sqrt(galah_sun['A_Eu6645'][1]**2 + 0.05**2) # + code_folding=[] sun_zeropoint_table = [] sun_zeropoint_table.append('Parameter & GALAH adopted & GALAH measured & Asplund+09 & Grevesse+07 \\\\') for 
each_key in list(sun_adopted.keys())[:112]: if each_key not in ['A_Li6708_NoRV']: shortkey = each_key if each_key == 'teff': print_key = r'$T_\text{eff}$ [K]' elif each_key == 'logg': print_key = r'$\log g$ [dex]' elif each_key == 'fe_h': print_key = r'$\mathrm{[Fe/H]}$' elif each_key == 'fe_h_atmo': print_key = r'$\mathrm{[Fe/H]}_\text{atmo}$' elif each_key == 'chi2_sp': print_key = r'$\chi^2$ SP' elif each_key[:2]=='A_': print_key=each_key[2:] if len(shortkey) > 4: shortkey = shortkey[:-4] else: print_key=each_key if sun_adopted[each_key][1] >= 0.: print_adopted = "{:.2f}".format(sun_adopted[each_key][0])+' \pm '+"{:.2f}".format(sun_adopted[each_key][1]) else: print_adopted = '-' if sun_skyflat[each_key][1] >= 0.: print_galah = "{:.2f}".format(sun_skyflat[each_key][0])+' \pm '+"{:.2f}".format(sun_skyflat[each_key][1]) else: print_galah = '-' print_asplund = "{:.2f}".format(sun_asplund[shortkey][0])+' \pm '+"{:.2f}".format(sun_asplund[shortkey][1]) print_grevesse = "{:.2f}".format(sun_grevesse[shortkey][0])+' \pm '+"{:.2f}".format(sun_grevesse[shortkey][1]) sun_zeropoint_table.append(print_key+' & $'+print_adopted+'$ & $'+print_galah+'$ & $'+print_asplund+'$ & $'+print_grevesse+'$ \\\\') # print(each_key) # print(sun_adopted[each_key]) # if sun_adopted[each_key][1] >= 0.0: # print(sun_adopted[each_key][1]) # first = r'$'+"{:.2f}".format(sun_adopted[each_key][0])+r' \pm '+"{:.2f}".format(sun_adopted[each_key][1])+'$ & ' # else: # first = r'$'+"{:.2f}".format(sun_adopted[each_key][0])+'$ & ' # print(first) # # make sure to print galah skyflat measurement only if measured # if sun_skyflat[each_key][0] != 'n.a.': # galah_sun_print = "{:.2f}".format(sun_skyflat[each_key][0])+r' \pm '+"{:.2f}".format(sun_skyflat[each_key][1]) # else: # galah_sun_print = 'n.a.' 
# try:
#     sun_zeropoint_table.append(each_key+' & $'+
#         "{:.2f}".format(sun_adopted[each_key][0])+r' \pm '+"{:.2f}".format(sun_adopted[each_key][1])+'$ & '+
#         galah_sun_print+'$ & '+
#         "{:.2f}".format(sun_asplund[each_key][0])+r' \pm '+"{:.2f}".format(sun_asplund[each_key][1])+'$ & '+
#         "{:.2f}".format(sun_grevesse[each_key][0])+r' \pm '+"{:.2f}".format(sun_grevesse[each_key][1])+'$ \\\\'
#     )
# except:
#     sun_zeropoint_table.append(each_key+' & $'+
#         "{:.2f}".format(sun_adopted[each_key][0])+r' \pm '+"{:.2f}".format(sun_adopted[each_key][1])+'$ & '+
#         galah_sun_print+'$ & '+
#         "{:.2f}".format(sun_asplund[each_key][0])+r' \pm '+"{:.2f}".format(sun_asplund[each_key][1])+'$ & '+
#         "{:.2f}".format(sun_grevesse[each_key][0])+r' \pm '+"{:.2f}".format(sun_grevesse[each_key][1])+'$ \\\\'
#     )
# -

print(np.array(sun_zeropoint_table))

# +
arcturus_adopted = collections.OrderedDict()

arcturus_adopted['teff'] = [4286, 35]        # Heiter et al., 2015, A&A, 582, A49
arcturus_adopted['logg'] = [1.60, 0.20]      # Heiter et al., 2015, A&A, 582, A49
arcturus_adopted['fe_h'] = [-0.52, 0.08]     # Heiter et al., 2015, A&A, 582, A49
arcturus_adopted['fe_h_atmo'] = [-0.52, 0.08] # Heiter et al., 2015, A&A, 582, A49
arcturus_adopted['vmic'] = [1.25, np.NaN]
arcturus_adopted['vbroad'] = [np.sqrt(5.07**2.
                                       + 3.8**2.), np.NaN]
arcturus_adopted['mass'] = [1.08, 0.06]  # Ramirez & Allende Prieto, 2011, ApJ, 743, 135
arcturus_adopted['lbol'] = [197.2, 2.2]  # Heiter et al., 2015, A&A, 582, A49
arcturus_adopted['age'] = [7.1, 1.5]     # Ramirez & Allende Prieto, 2011, ApJ, 743, 135
arcturus_adopted['Li'] = [0.00, np.NaN]
# -

arc_zeropoint_table = []
arc_zeropoint_table.append('Parameter & GALAH adopted & GALAH measured & Ramirez+11 & APOGEE DR14 \\\\')

galah_final = collections.OrderedDict()
for each_key in sun_adopted.keys():
    galah_final[each_key] = np.array(sun_adopted[each_key])
galah_final

sun_pandas = pandas.DataFrame(galah_final, columns=galah_final.keys())
sun_astropy = Table.from_pandas(sun_pandas)
sun_astropy.write('galahdr3_abundance_zeropoints.fits', overwrite=True)

sun_pandas

galah_final.keys()

galah_final
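# Many of the adopted uncertainties above are built by combining the skyflat measurement error with a systematic zeropoint shift in quadrature, i.e. `np.sqrt(err**2 + shift**2)`. A small helper makes that recurring pattern explicit. This is only a sketch: the function name `quad_sum` is ours and is not part of the notebook.

```python
import numpy as np

def quad_sum(*errors):
    """Combine independent 1-sigma uncertainties in quadrature."""
    return float(np.sqrt(np.sum(np.square(errors))))

# e.g. a 0.05 dex measurement error combined with a 0.12 dex zeropoint shift,
# equivalent to np.sqrt(0.05**2 + 0.12**2)
combined = quad_sum(0.05, 0.12)
```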
# (source notebook: validation/abundances/abundance_zeropoints.ipynb)
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# ## Torch Core

# This module contains all the basic functions we need in other modules of the fastai library (split with [`core`](/core.html#core) that contains the ones not requiring pytorch). Its documentation can easily be skipped at a first read, unless you want to know what a given function does.

# + hide_input=true
from fastai.imports import *
from fastai.gen_doc.nbdoc import *
from fastai.layers import *
from fastai.torch_core import *
# -

# ## Global constants

# `AdamW = partial(optim.Adam, betas=(0.9,0.99))` <div style="text-align: right"><a href="https://github.com/fastai/fastai/blob/master/fastai/torch_core.py#L43">[source]</a></div>

# `bn_types = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)` <div style="text-align: right"><a href="https://github.com/fastai/fastai/blob/master/fastai/torch_core.py#L41">[source]</a></div>

# `defaults.device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')` <div style="text-align: right"><a href="https://github.com/fastai/fastai/blob/master/fastai/torch_core.py#L62">[source]</a></div>

# If you are trying to make fastai run on the CPU, simply change the default device: `defaults.device = 'cpu'`.
#
# Alternatively, if not using wildcard imports: `fastai.torch_core.defaults.device = 'cpu'`.

# ## Functions that operate conversions

# + hide_input=true
show_doc(batch_to_half)

# + hide_input=true
show_doc(flatten_model, full_name='flatten_model')
# -

# Flattens all the layers of `m` into an array. This allows for easy access to the layers of the model and allows you to manipulate the model as if it were an array.
m = simple_cnn([3,6,12])
m

flatten_model(m)

# + hide_input=true
show_doc(model2half)
# -

# Converting model parameters to half precision allows us to leverage fast `FP16` arithmetic, which can speed up the computations by 2-8 times. It also reduces memory consumption, allowing us to train deeper models.
#
# **Note**: Batchnorm layers are not converted to half precision, as that may lead to instability in training.

# +
m = simple_cnn([3,6,12], bn=True)

def show_params_dtype(state_dict):
    """Simple function to pretty print the dtype of the model params"""
    for wt_name, param in state_dict.items():
        print("{:<30}: {}".format(wt_name, str(param.dtype)))
    print()

print("dtypes of model parameters before model2half: ")
show_params_dtype(m.state_dict())

# Converting model to half precision
m_half = model2half(m)

print("dtypes of model parameters after model2half: ")
show_params_dtype(m_half.state_dict())

# + hide_input=true
show_doc(np2model_tensor)
# -

# It is a wrapper on top of Pytorch's `torch.as_tensor`, which converts a numpy array to a torch tensor, and additionally attempts to map all floats to `torch.float32` and all integers to `torch.int64` for consistency in model data. Below is an example demonstrating its functionality for floating-point numbers; the same applies to integers as well.

# +
a1 = np.ones((2, 3)).astype(np.float16)
a2 = np.ones((2, 3)).astype(np.float32)
a3 = np.ones((2, 3)).astype(np.float64)

b1 = np2model_tensor(a1)  # Maps to torch.float32
b2 = np2model_tensor(a2)  # Maps to torch.float32
b3 = np2model_tensor(a3)  # Maps to torch.float32

print(f"Datatype of as': {a1.dtype}, {a2.dtype}, {a3.dtype}")
print(f"Datatype of bs': {b1.dtype}, {b2.dtype}, {b3.dtype}")

# + hide_input=true
show_doc(requires_grad)
# -

# Performs both getting and setting of the [`requires_grad`](/torch_core.html#requires_grad) parameter of the tensors, which decides whether to accumulate gradients or not.
#
# * If `b` is `None`: The function **gets** the [`requires_grad`](/torch_core.html#requires_grad) for the model parameters; to be more specific, it returns the [`requires_grad`](/torch_core.html#requires_grad) of the first element in the model.
#
# * Else, if `b` is passed (a boolean value), the [`requires_grad`](/torch_core.html#requires_grad) of all parameters of the model is **set** to `b`.

# +
# Any Pytorch model
m = simple_cnn([3, 6, 12], bn=True)

# Get the requires_grad of model
print("requires_grad of model: {}".format(requires_grad(m)))

# Set requires_grad of all params in model to false
requires_grad(m, False)

# Get the requires_grad of model
print("requires_grad of model: {}".format(requires_grad(m)))

# + hide_input=true
show_doc(tensor)
# -

# Handy function when you want to convert any list-type object to a tensor, initialize your weights manually, and other similar cases.
#
# **NB**: When passing multiple vectors, all vectors must have the same dimensions (obvious, but easily forgotten).

# +
# Conversion from any numpy array
b = tensor(np.array([1, 2, 3]))
print(b, type(b))

# Passing as multiple parameters
b = tensor(1, 2, 3)
print(b, type(b))

# Passing a single list
b = tensor([1, 2, 3])
print(b, type(b))

# Can work with multiple vectors / lists
b = tensor([1, 2], [3, 4])
print(b, type(b))

# + hide_input=true
show_doc(to_cpu)
# -

# A wrapper on top of Pytorch's `torch.Tensor.cpu()` function, which creates and returns a copy of a tensor or even a **list** of tensors on the CPU. As described in Pytorch's docs, if the tensor or list of tensors is already on the CPU, the exact same data is returned and no copy is made.
#
# Useful to convert the whole list of parameters of the model to CPU in a single call.
if torch.cuda.is_available():
    a = [torch.randn((1, 1)).cuda() for i in range(3)]
    print(a)
    print("Id of tensors in a: ")
    for i in a: print(id(i))

    # Getting a CPU version of the tensors in GPU
    b = to_cpu(a)
    print(b)
    print("Id of tensors in b:")
    for i in b: print(id(i))

    # Trying to perform to_cpu on a list of tensors already in CPU
    c = to_cpu(b)
    print(c)

    # The tensors in c have exactly the same ids as those in b: no copy was performed.
    print("Id of tensors in c:")
    for i in c: print(id(i))

# + hide_input=true
show_doc(to_data)
# -

# Returns the data attribute from the object, or collection of objects, that inherits from the [`ItemBase`](/core.html#ItemBase) class. Useful to examine the exact values of the data; it can also be used to work with the data outside of `fastai` classes.

# +
# Default example examined

from fastai import *
from fastai.vision import *

path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)

# Examine the labels
ys = list(data.y)
print("Category display names: ", [ys[0], ys[-1]])

print("Unique classes internally represented as: ", to_data([ys[0], ys[-1]]))

# + hide_input=true
show_doc(to_detach)

# + hide_input=true
show_doc(to_device)

# + hide_input=true
show_doc(to_half)
# -

# Converts the tensor or a list of tensors to `FP16`, resulting in less memory consumption and faster computations with the tensor. It does not convert `torch.int` types to half precision.
# + a1 = torch.tensor([1, 2], dtype=torch.int64) a2 = torch.tensor([1, 2], dtype=torch.int32) a3 = torch.tensor([1, 2], dtype=torch.int16) a4 = torch.tensor([1, 2], dtype=torch.float64) a5 = torch.tensor([1, 2], dtype=torch.float32) a6 = torch.tensor([1, 2], dtype=torch.float16) print("dtype of as: ", a1.dtype, a2.dtype, a3.dtype, a4.dtype, a5.dtype, a6.dtype, sep="\t") b1, b2, b3, b4, b5, b6 = to_half([a1, a2, a3, a4, a5, a6]) print("dtype of bs: ", b1.dtype, b2.dtype, b3.dtype, b4.dtype, b5.dtype, b6.dtype, sep="\t") # + hide_input=true show_doc(to_np) # - # Internally puts the data to CPU, and converts to `numpy.ndarray` equivalent of `torch.tensor` by calling `torch.Tensor.numpy()`. # + a = torch.tensor([1, 2], dtype=torch.float64) if torch.cuda.is_available(): a = a.cuda() print(a, type(a), a.device) b = to_np(a) print(b, type(b)) # + hide_input=true show_doc(try_int) # + # Converts floating point numbers to integer print(try_int(12.5), type(try_int(12.5))) # This is a Rank-1 ndarray, which ideally should not be converted to int print(try_int(np.array([1.5])), try_int(np.array([1.5])).dtype) # Numpy array with a single elements are converted to int print(try_int(np.array(1.5)), type(try_int(np.array(1.5)))) print(try_int(torch.tensor(2.5)), type(try_int(torch.tensor(2.5)))) # Strings are not converted to int (of course) print(try_int("12.5"), type(try_int("12.5"))) # - # ## Functions to deal with model initialization # + hide_input=true show_doc(apply_init) # + hide_input=true show_doc(apply_leaf) # + hide_input=true show_doc(cond_init) # + hide_input=true show_doc(in_channels) # + hide_input=true show_doc(init_default) # - # ## Functions to get information of a model # + hide_input=true show_doc(children) # + hide_input=true show_doc(children_and_parameters) # + hide_input=true show_doc(first_layer) # + hide_input=true show_doc(last_layer) # + hide_input=true show_doc(num_children) # + hide_input=true show_doc(one_param) # + hide_input=true 
show_doc(range_children)

# + hide_input=true
show_doc(trainable_params)
# -

# ## Functions to deal with BatchNorm layers

# + hide_input=true
show_doc(bn2float)

# + hide_input=true
show_doc(set_bn_eval)

# + hide_input=true
show_doc(split_no_wd_params)
# -

# This is used by the optimizer to determine which parameters weight decay should be applied to when the option `bn_wd=False` is used in a [`Learner`](/basic_train.html#Learner).

# ## Functions to get random tensors

# + hide_input=true
show_doc(log_uniform)
# -

log_uniform(0.5,2,(8,))

# + hide_input=true
show_doc(rand_bool)
# -

rand_bool(0.5, 8)

# + hide_input=true
show_doc(uniform)
# -

uniform(0,1,(8,))

# + hide_input=true
show_doc(uniform_int)
# -

uniform_int(0,2,(8,))

# ## Other functions

# + hide_input=true
show_doc(ModelOnCPU, title_level=3)

# + hide_input=true
show_doc(NoneReduceOnCPU, title_level=3)

# + hide_input=true
show_doc(ParameterModule, title_level=3)

# + hide_input=true
show_doc(data_collate)

# + hide_input=true
show_doc(get_model)

# + hide_input=true
show_doc(grab_idx)

# + hide_input=true
show_doc(logit)

# + hide_input=true
show_doc(logit_)

# + hide_input=true
show_doc(model_type)

# + hide_input=true
show_doc(np_address)

# + hide_input=true
show_doc(split_model)
# -

# If `splits` are layers, the model is split at those (not included) sequentially. If `want_idxs` is True, the corresponding indexes are returned. If `splits` are lists of layers, the model is split according to those.

# + hide_input=true
show_doc(split_model_idx)

# + hide_input=true
show_doc(trange_of)
# -

# ## Undocumented Methods - Methods moved below this line will intentionally be hidden

show_doc(tensor__array__)

show_doc(ParameterModule.forward)

# ## New Methods - Please document or move to the undocumented section

# + hide_input=true
show_doc(to_float)
# -

# + hide_input=true
show_doc(flatten_check)
# -
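# `logit` is listed above without a usage example; it is the inverse of the sigmoid function, `logit(p) = log(p / (1 - p))`. A minimal scalar sketch of that relationship follows — it uses the standard library rather than fastai's tensor implementation, and the helper names are ours:

```python
import math

def logit_scalar(p):
    """The logit, i.e. the inverse of the sigmoid: log(p / (1 - p))."""
    return math.log(p / (1 - p))

def sigmoid_scalar(x):
    return 1 / (1 + math.exp(-x))

# sigmoid(logit(p)) recovers p
p = 0.9
roundtrip = sigmoid_scalar(logit_scalar(p))
```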
# (source notebook: docs_src/torch_core.ipynb)
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # 1. A basic Pastas model

# In this notebook you will learn:
# - how to create a basic time series model with pastas.
# - how to solve a pastas model.
# - how to visualise the results of a time series model.
#
# #### Required knowledge
# Before you start the exercise you should know:
# - What a `pandas` `DataFrame` and `Series` are.
# - What a `datetime` object in Python is.
# - What a `dictionary` is.
# - How to obtain a subset of a `Series` using a certain condition.
# - How to make a basic plot with `matplotlib.pyplot`.
# - How to obtain values from a `dictionary`.
#
# ### Contents
# 1. [Import groundwater data](#groundwater)
# 2. [Import meteo data](#meteo)
# 3. [Create a pastas model](#tsa_model)
# 4. [Solve the pastas model](#solve)
# 5. [Plot the results](#visualise)

# import the required packages
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import pastas as ps
# %matplotlib inline

# ### 1. Import the groundwater data<a name="groundwater"></a>
# In this code block a time series of groundwater levels is imported using the `read_csv` function of `pandas`. We use the date column as the index and parse the dates as datetime objects. The `read_csv` function returns a `DataFrame` by default. We use `squeeze=True` because `pastas` requires a `pandas` `Series` object. To check that you have the correct data type (a `pandas` `Series` object), you can use `type(gwdata_raw)` as shown below.

# Import groundwater time series and squeeze to Series object
gwdata_raw = pd.read_csv('data/nb1/head_ex1.csv', parse_dates=True,
                         index_col='date', squeeze=True)
print(f'The data type of gwdata is: {type(gwdata_raw)}')

# We can visualize the time series using the `plot` method of a `Series` object.
# visualise the observed groundwater levels
fig, ax = plt.subplots(figsize=(10, 4))
gwdata_raw.plot(style='.', ax=ax)
ax.set_ylabel('Head [m]')
ax.set_xlabel('Time [years]');

# Before we use these time series to create a pastas model, it is important that we are able to visualise and manipulate the data to our needs. The exercises below will help you with this.

# #### Exercise 1 <a name="ex1"></a>
# The observed groundwater levels are obtained from the DinoLoket database. The code of the well is 'B58C0698'. Plot the observed groundwater levels over time and add a legend to the plot with the well code.

# <a href="#ans1">Answer Exercise 1</a>

# #### Exercise 2 <a name="ex2"></a>
# We will not use the observed groundwater levels before the 1st of January 2000. Plot the groundwater levels before 2000 as red dots and the data after 2000 as green dots connected by a line.

# <a href="#ans2">Answer Exercise 2</a>

# #### Exercise 3<a name="ex3"></a>
# The groundwater levels after 2014 are clearly lower than before, because after 2014 a wrong reference level has been used. A reference level is used to convert raw measurements; the wrong reference level differs 5 meters from the correct one. Correct `gwdata_raw` for this mistake by adding 5 meters to the data after 2014. Save the corrected series as `gwdata`. Write your code in the cell below.

# <a href="#ans3">Answer Exercise 3</a>

# ### 2. Import meteo data <a name="meteo"></a>
# For the time series model we will use the groundwater data along with two meteorological datasets: the precipitation and the potential evaporation. Just like the groundwater levels, we need to squeeze them into `pandas` `Series` objects.
# +
# Import observed precipitation series
precip_mm = pd.read_csv('data/nb1/rain_ex4.csv', parse_dates=['date'],
                        index_col='date', squeeze=True)
print('The data type of the precip series is: %s' % type(precip_mm))

# Import observed evaporation series
evap_mm = pd.read_csv('data/nb1/evap_ex4.csv', parse_dates=['date'],
                      index_col='date', squeeze=True)
print('The data type of the evap series is: %s' % type(evap_mm))
# -

# #### Exercise 4 <a name="ex4"></a>
# The units of the precipitation (`precip_mm`) and evaporation (`evap_mm`) series are mm/day. In a time series model we want all series to have the same units. Convert the units of the precipitation and evaporation series to m/day. Name your new variables `evap_m` and `precip_m`.

# <a href="#ans4">Answer Exercise 4</a>

# #### Exercise 5 <a name="ex5"></a>
# Visualize both series in two plots.

# <a href="#ans5">Answer Exercise 5</a>

# #### Exercise 6 <a name="ex6"></a>
# The groundwater recharge can be calculated as the precipitation minus the potential evaporation. We will use the recharge in this basic time series model to explain the variance in the groundwater levels. Calculate and visualise the recharge, using the `precip_m` and `evap_m` series from exercise 4.

# <a href="#ans6">Answer Exercise 6</a>

# ### 3. Create a pastas model<a name="tsa_model"></a>
# Now that we have the groundwater and recharge series, we can start to create a time series model. With this model we can determine to what extent the groundwater is influenced by the recharge.
#
# We start by creating an instance of the `Model` class (named `ml` here). We use the groundwater data `gwdata` as input. Pastas will probably inform you that it cannot determine the frequency of the head series; this is not a problem for our model.
# +
# correct the data (as done during exercise 3)
gwdata = gwdata_raw.copy()
gwdata.loc['2014-1-1':] = gwdata.loc['2014-1-1':] + 5

# Create a model object by passing it the observed series
ml = ps.Model(gwdata, name="GWL")
# -

# The next step is to add the recharge as an explanatory series. To do this we have to create a `StressModel` instance using the `recharge` time series and a response function. In this case we use the `Gamma` response function, which will be used to model the response of the groundwater to recharge. In [notebook 2](2_external_stresses.ipynb) and [notebook 3](3_stressmodels.ipynb) you will learn more about response functions and how they are used in `pastas`.

# +
# calculate the recharge (done during exercises 4 & 6)
precip_m = precip_mm/1000
evap_m = evap_mm/1000
recharge = precip_m - evap_m

# Add the recharge data as explanatory variable
ts1 = ps.StressModel(recharge, ps.Gamma, name='recharge', settings="evap")
ml.add_stressmodel(ts1)
# -

# In order to see all the data and settings in your model you can use the `ml.to_dict()` method. The groundwater level series is stored under the `oseries` key, the recharge under the keys `stressmodels` -> `recharge` -> `stress`. The other variables that you see are mostly default settings that can be specified.

ml.to_dict()

# #### Exercise 7 <a name="ex7"></a>
# Plot the groundwater levels (oseries) using the `to_dict()` method of the `ml` object. Check if this is the same figure as your answer to <a href="#ex2">exercise 2</a>.

# <a href="#ans7">Answer Exercise 7</a>

# ### 4. Solve a pastas model<a name="solve"></a>
# The next step is to solve the model. When you solve a model, you try to find the optimal parameters of the response function. You can simply find these parameters using the `ml.solve()` method. By default the results of the solve method are printed; this contains a lot of information about the time series model.
# The model results and parameters of the response function will be explained in [3_stressmodels](3_stressmodels.ipynb).

ml.solve()

# #### Exercise 8 <a name="ex8"></a>
# The model we used contains the complete groundwater level series. Let's see what happens if we do not use the groundwater levels before the 1st of January 2000. Create a new model using only the groundwater data after the 1st of January 2000 (you can use your answer from <a href="#ex3">exercise 3</a>). Call this model `ml2`.

# <a href="#ans8">Answer Exercise 8</a>

# ### 5. Plot the results <a name="visualise"></a>
# The simulated groundwater heads can be plotted together with the measurements.

ml.plot()

# There are many ways to further visualise the results of the time series model. The `plots` submodule contains all the options. One of these is the method `plots.results`, which provides a plot with more information. See the figure below.

ml.plots.results(figsize=(10, 6));

# #### Exercise 9 <a name="ex9"></a>
# Plot the results of the model with only observations after the 1st of January 2000. What differences do you notice in the results? Use your answer to [exercise 8](#ex8) for this question.

# <a href="#ans9">Answer Exercise 9</a>

# #### Exercise 10 <a name="ex10"></a>
# The directory `data/nb1` contains the files `head_ex10.csv`, `precip_ex10.csv` and `evap_ex10.csv`. Create a time series model using this data. Solve the model and plot the results.
# <a href="#ans10">Answer Exercise 10</a> # # ## Answers # #### <a href="#ex1">Answer exercise 1</a> <a name="ans1"></a> fig, ax = plt.subplots(figsize=(10, 4)) gwdata_raw.plot(style='.', ax=ax, label='B58C0698') ax.set_ylabel('Head [m]') ax.set_xlabel('Time [years]') ax.legend() # #### <a href="#ex2">Answer exercise 2</a> <a name="ans2"></a> # + fig, ax = plt.subplots(figsize=(10, 4)) # make subselections of the data before and after 2000 gwdata_before2000 = gwdata_raw.loc[:'2000-1-1'] gwdata_after2000 = gwdata_raw.loc['2000-1-1':] gwdata_before2000.plot(ax=ax, color='red', style='.') gwdata_after2000.plot(ax=ax, color='green', linestyle='-', linewidth=0.5, marker='.') ax.set_ylabel('Head [m]') ax.set_xlabel('Time [years]') ax.legend([ax.lines[1]], ['B58C0698']) # - # #### <a href="#ex3">Answer exercise 3</a> <a name="ans3"></a> gwdata = gwdata_raw.copy() gwdata.loc['2014-1-1':] = gwdata.loc['2014-1-1':] + 5 # #### <a href="#ex4">Answer exercise 4</a> <a name="ans4"></a> precip_m = precip_mm/1000 evap_m = evap_mm/1000 # #### <a href="#ex5">Answer exercise 5</a> <a name="ans5"></a> # + # visualise the precipitation and evaporation series fig, ax = plt.subplots(figsize=(10, 4)) precip_m.plot(style='-', ax=ax) ax.set_ylabel('precipitation [m]') ax.set_xlabel('Time [years]') fig, ax = plt.subplots(figsize=(10, 4)) evap_m.plot(style='-', ax=ax) ax.set_ylabel('evaporation [m]') ax.set_xlabel('Time [years]') ; # - # #### <a href="#ex6">Answer exercise 6</a> <a name="ans6"></a> # + # Calculate the recharge to the groundwater recharge = precip_m - evap_m print('The data type of the recharge series is: %s' % type(recharge)) # Plot the time series of the recharge fig, ax = plt.subplots(figsize=(10, 4)) recharge.plot(style='-', ax=ax) ax.set_ylabel('Recharge [m]') ax.set_xlabel('Time [years]') ; # - # #### <a href="#ex7">Answer exercise 7</a> <a name="ans7"></a> # + model_dic = ml.to_dict() oseries = model_dic['oseries']['series'] # Plot the time series of the groundwater fig,
ax = plt.subplots(figsize=(10, 4)) oseries.plot(marker='.', linewidth=0.5, ax=ax) ax.set_ylabel('Groundwater head [m]') ax.set_xlabel('Time [years]') ax.grid() # - # #### <a href="#ex8">Answer exercise 8</a> <a name="ans8"></a> # + # create a new model with a different observation series gwdata_after2000 = gwdata.loc['2000-1-1':] ml2 = ps.Model(gwdata_after2000, name="GWL") ts2 = ps.StressModel(recharge, ps.Gamma, name='recharge', settings="evap") ml2.add_stressmodel(ts2) ml2.solve() # alternatively, only specify tmin when solving the model ml.solve(tmin='2000-1-1') # - # #### <a href="#ex9">Answer exercise 9</a> <a name="ans9"></a> # The most important differences between the models are the parameter values and standard errors. They are shown in the table on the right. Of course the period of the plot is also different. ml2.plots.results() ; # #### <a href="#ex10">Answer exercise 10</a> <a name="ans10"></a> # # See the code below to create a pastas model. You can see that the model fit is poor. In the next [notebook](2_external_stresses.ipynb) you will learn how to improve this model fit. # + gwdata2 = pd.read_csv('data/nb1/head_ex10.csv', parse_dates=['Date'], index_col='Date', squeeze=True) precip2 = pd.read_csv('data/nb1/precip_ex10.csv', parse_dates=['Date'], index_col='Date', squeeze=True) evap2 = pd.read_csv('data/nb1/evap_ex10.csv', parse_dates=['Date'], index_col='Date', squeeze=True) recharge2 = precip2 - evap2 print('The data type of the recharge series is: %s' % type(recharge2)) ml3 = ps.Model(gwdata2, name="GWL") ts3 = ps.StressModel(recharge2, ps.Gamma, name='recharge', settings="evap") ml3.add_stressmodel(ts3) ml3.solve() ml3.plots.results(figsize=(10, 6)) ; # - # ### Origin of the series # * The rainfall data is taken from rainfall station Heibloem in the Netherlands. # * The evaporation data is taken from weather station Maastricht in the Netherlands. # * The head data is well B58C0698, which was obtained from DINOloket
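The recharge computation from exercises 4 and 6 (convert precipitation and evaporation from mm to m, then subtract) can also be sketched without pastas or pandas. `recharge_m` below is a hypothetical helper name, not part of the notebook:

```python
def recharge_m(precip_mm, evap_mm):
    """Convert daily precipitation and evaporation from mm to m and return recharge in m."""
    return [(p - e) / 1000 for p, e in zip(precip_mm, evap_mm)]

# Two example days: 2 mm rain / 1 mm evaporation, and 0 mm rain / 1.5 mm evaporation.
# Negative recharge means evaporation exceeded precipitation that day.
print(recharge_m([2.0, 0.0], [1.0, 1.5]))  # [0.001, -0.0015]
```

With pandas Series the same subtraction is vectorised, which is exactly what `recharge = precip_m - evap_m` does above.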
Exercise_notebooks/On_topic/19_pastas/1_basic_pastas_model.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.8.2 64-bit # metadata: # interpreter: # hash: 11ddca5089527d17826da45cd024a0d3d3b64d2c1b5dbf54dd1c238d51a17f38 # name: python3 # --- # # Analyzing the UncertaintyForest Class by Reproducing Mutual Information Estimates # # This set of four tutorials (`uncertaintyforest_running_example.ipynb`, `uncertaintyforest_posteriorestimates.ipynb`, `uncertaintyforest_conditionalentropyestimates.ipynb`, and `uncertaintyforest_mutualinformationestimates.ipynb`) will explain the UncertaintyForest class. After following these tutorials, you should have the ability to run UncertaintyForest on your own machine and generate Figures 1, 2, and 3 from [this paper](https://arxiv.org/pdf/1907.00325.pdf), which help you to visualize a comparison of the estimated posteriors and conditional entropy values for several different algorithms. # # If you haven't seen it already, take a look at other tutorials to set up and install the ProgLearn package: `installation_guide.ipynb`. # # *Goal: Run the UncertaintyForest class to produce a figure that compares estimated normalized mutual information values for the UncertaintyForest, KSG, Mixed KSG, and IRF algorithms, as in Figure 3 from [this paper](https://arxiv.org/pdf/1907.00325.pdf)* # ## Import Required Packages # + import numpy as np import matplotlib.pyplot as plt from sklearn.ensemble import RandomForestClassifier from sklearn.calibration import CalibratedClassifierCV from proglearn.forest import UncertaintyForest from functions.unc_forest_tutorials_functions import generate_data_fig3, _make_params, _make_three_class_params, plot_setting, compute_mutual_info, estimate_mi, get_plot_mutual_info_by_pi, get_plot_mutual_info_by_d, plot_fig3, ksg, mixed_ksg # - # Settings for the figures.
settings = [ { 'name' : 'Spherical Gaussians', 'kwargs': {}, }, { 'name' : 'Elliptical Gaussians', 'kwargs': {'var1' : 3}, }, { 'name' : 'Three Class Gaussians', 'kwargs': {'three_class' : True}, }, ] # + # Plot data. fig, axes = plt.subplots(1, len(settings), figsize = (18,4)) for i, setting in enumerate(settings): plot_setting(2000, setting, axes[i]) plt.show() plt.clf() # - # ## Specify Parameters # + # The following are two sets of parameters. # The first are those that were actually used to produce Figure 3. # Below those, you'll find some scaled-down parameters so that you can see the results more quickly. # Here are the paper reproduction parameters # n = 6000 # mus = range(5) # ds = range(1, 16) # mu = 1 # num_trials = 20 # d = 2 # pis = [0.05 * i for i in range(1, 20)] # Here are the scaled-down tutorial parameters n = 400 # number of samples mus = range(3) # range of means ds = range(2, 5) # range of dimensions mu = 1 # mean num_trials = 3 # number of trials to run d = 1 # dimension pis = [0.05 * i for i in range(3, 6)] # prior distribution # - # ## Specify Learners # Now, we'll specify which learners we'll compare (by label). Figure 3 uses four different learners, which are further specified in the function `estimate_mi`, which returns estimates of mutual information for a given dataset (X, y) and type of learner. # Algorithms used to produce Figure 3 algos = [ { 'label': 'IRF', 'title': 'Isotonic Reg. Forest', 'color': "#fdae61", }, { 'label': 'KSG', 'title': 'KSG', 'color': "#1b9e77", }, { 'label': 'Mixed KSG', 'title': 'Mixed KSG', 'color': "purple", }, { 'label': 'UF', 'title': 'Uncertainty Forest', 'color': "#F41711", }, ] # ## Plot Figure 3 # # Finally, we'll run the code to obtain and plot the spherical, elliptical, and three class Gaussians, as well as estimated mutual information vs. class priors and dimensionality (9 subplots). plot_fig3(algos, n, d, mu, settings, pis, ds, num_trials)
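The notebook's `estimate_mi` wraps forest- and KSG-based estimators; the quantity they all target is the mutual information I(X;Y) = Σ p(x,y) log[p(x,y) / (p(x)p(y))]. For purely discrete data this can be computed directly from counts. The plug-in sketch below (a hypothetical helper, not part of the tutorial code) illustrates the definition, not the estimators compared in Figure 3:

```python
from collections import Counter
from math import log

def discrete_mutual_info(xs, ys):
    """Plug-in estimate of I(X;Y) in nats from paired discrete samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))  # joint counts
    px = Counter(xs)            # marginal counts of X
    py = Counter(ys)            # marginal counts of Y
    # (c/n) * log( (c/n) / ((px/n)*(py/n)) ) simplifies to (c/n) * log(c*n / (px*py))
    return sum((c / n) * log(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

# Perfectly dependent binary variables carry log(2), about 0.693 nats;
# independent variables carry 0 nats.
print(discrete_mutual_info([0, 0, 1, 1], [0, 0, 1, 1]))
```

This naive estimator is biased for small samples and does not extend to continuous features, which is why the tutorial relies on KSG-style and forest-based estimators instead.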
docs/tutorials/uncertaintyforest_mutualinformationestimates.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/He11oWorldss/OOP-58002/blob/main/OOP_2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="f9lEVOvjU6V1" # Classes with multiple Objects # # + id="mIPhyiCjU3T1" colab={"base_uri": "https://localhost:8080/"} outputId="7790895f-584f-4fba-f3ea-50e8a0fb08f1" class Birds: def __init__(self, bird_name): self.bird_name = bird_name def flying_birds(self): print(f"{self.bird_name} flies above clouds") def non_flying_birds(self): print(f"{self.bird_name} is the national bird of the Philippines") vulture = Birds ("Griffon Vulture") crane = Birds ("Common Crane") emu = Birds ("Emu") vulture.flying_birds() crane.flying_birds() emu.non_flying_birds() # + [markdown] id="IzQ7CgpaXNE8" # Encapsulation # + colab={"base_uri": "https://localhost:8080/"} id="mQEky15wXPDp" outputId="ae34e0c4-0eb1-453e-a146-6db89d53fc54" class foo: def __init__(self,a,b): self.__a = a self.__b = b def add(self): return self.__a + self.__b def sub(self): return self.__a - self.__b foo_object = foo(3,4) print(foo_object.add()) print(foo_object.sub()) foo_object.__b = 5 foo_object.__a = 7 print(foo_object.add()) print(foo_object.sub()) # + [markdown] id="DaxQ7d7oF1pZ" # Inheritance # # + colab={"base_uri": "https://localhost:8080/"} id="ST76jQAAF4jy" outputId="8627108b-3e0f-406c-8d14-6d6492ae0171" class Person: def __init__(self,firstname,surname): self.firstname = firstname self.surname = surname def printname(self): print(self.firstname,self.surname) person = Person("David","Dwight") person.printname() class Student(Person): pass person = Student("Maam", "Sayo") person.printname() # + [markdown] id="-ZzftkHlIlno" # 
Polymorphism # + colab={"base_uri": "https://localhost:8080/"} id="tNKP3y63IiMB" outputId="669a09fc-03dc-4639-891e-79c3b151523f" class RegularPolygon: def __init__(self, side): self.side = side class Square(RegularPolygon): def area(self): return self.side*self.side class EquilateralTriangle(RegularPolygon): def area(self): return self.side*self.side*0.43 x = Square(4) y = EquilateralTriangle(3) print(x.area()) print(y.area()) # + [markdown] id="20e6aQJyRf5h" # Application 1 # # + id="xxldtM9lRiu1" colab={"base_uri": "https://localhost:8080/"} outputId="b95dced2-1755-47b2-e3bc-ce143a032444" import random class Person: def __init__(self, student, pre, mid, fin): self.student = student self.pre = pre * 0.50 self.mid = mid * 0.40 self.fin = fin * 0.30 def Grade(self): print(self.student, "has an average grade of", self.pre, "in prelims") print(self.student, "has an average grade of", self.mid, "in midterms") print(self.student, "has an average grade of", self.fin, "in finals") std1 = Person("David", random.randint(85,89), random.randint(85,89), random.randint(85,89)) std2 = Person("Dwight", random.randint(92,95), random.randint(92,95), random.randint(92,95)) std3 = Person("Matthew", random.randint(85,90), random.randint(85,90), random.randint(85,90)) x = input("Please input your name: (PLEASE TYPE IN ALL CAPS)") if x == "DAVID": std1.Grade() elif x == "DWIGHT": std2.Grade() elif x == "MATTHEW": std3.Grade() else: print("Cannot be found") # + id="7WSsNdjPe_wP"
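One point worth flagging in the encapsulation cell above: assigning to `foo_object.__b` from outside the class does not change the private attribute, because inside a class body Python name-mangles `__b` to `_foo__b`. That is why `add()` and `sub()` print the same values before and after the assignments. A minimal sketch of this behaviour:

```python
class foo:
    def __init__(self, a, b):
        self.__a = a  # stored as _foo__a via name mangling
        self.__b = b  # stored as _foo__b via name mangling

    def add(self):
        return self.__a + self.__b

foo_object = foo(3, 4)
foo_object.__b = 5           # outside the class: creates a brand-new attribute literally named '__b'
print(foo_object.add())      # still 7: add() reads the mangled _foo__b, which is unchanged
print(foo_object._foo__b)    # 4: the real "private" attribute
```

So the second pair of `print` calls in the cell above repeats the first pair's output; to actually mutate the private state you would need a setter method (or assign to the mangled names directly).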
OOP_2.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + colab={} colab_type="code" id="fzTfZ7gM9tNb" import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns # %matplotlib inline # + [markdown] colab_type="text" id="227HyqRr9tNf" # ### Import libraries # + colab={} colab_type="code" id="qFOyxmLQ9tNg" from urllib.request import urlretrieve # + [markdown] colab_type="text" id="e56b5ZPv9tNl" # ### Read Files # + colab={} colab_type="code" id="7ZpoPTBN9tNm" def get_file(url,file_name): return urlretrieve(url,file_name) # + colab={} colab_type="code" id="_bRJ5i5g9tNo" subway_urls = ['https://ckan0.cf.opendata.inter.prod-toronto.ca/download_resource/6664420f-316f-4f94-9ba4-d4b4677aeea9', 'https://ckan0.cf.opendata.inter.prod-toronto.ca/download_resource/d1159888-0035-45a0-b238-86b546555ac0', 'https://ckan0.cf.opendata.inter.prod-toronto.ca/download_resource/240d8e8c-d300-4f91-b94f-cbb3d136d25e', 'https://ckan0.cf.opendata.inter.prod-toronto.ca/download_resource/98d4ac77-aa9f-40a3-97ee-6fc9070da252', 'https://ckan0.cf.opendata.inter.prod-toronto.ca/download_resource/92e7649a-cf2f-4ac7-9802-b7f92f00b384', 'https://ckan0.cf.opendata.inter.prod-toronto.ca/download_resource/61412f10-656b-4992-9a1a-a156dcf2f6c7', 'https://ckan0.cf.opendata.inter.prod-toronto.ca/download_resource/69a6db37-7982-49c7-8dbc-56919a92afca', 'https://ckan0.cf.opendata.inter.prod-toronto.ca/download_resource/10080217-8022-41c0-a8ba-2a455b8d9d6e', 'https://ckan0.cf.opendata.inter.prod-toronto.ca/download_resource/a731c4bb-630a-4530-b590-b325f9b4ef9b', 'https://ckan0.cf.opendata.inter.prod-toronto.ca/download_resource/5dd04139-6dd0-46c4-99a8-45e7b7607e5c', 'https://ckan0.cf.opendata.inter.prod-toronto.ca/download_resource/84d2cc1f-296d-4593-a6eb-504a7a5c769a', 
'https://ckan0.cf.opendata.inter.prod-toronto.ca/download_resource/83e7a4e7-fa62-4932-afc2-8c8ca2235835', 'https://ckan0.cf.opendata.inter.prod-toronto.ca/download_resource/d9bf907b-413c-4028-b9bc-0b72a66bf11e', 'https://ckan0.cf.opendata.inter.prod-toronto.ca/download_resource/6767e18c-cbb3-461a-b4ae-ba202c9b73e4', 'https://ckan0.cf.opendata.inter.prod-toronto.ca/download_resource/e91ccb8b-9b0a-4479-adcd-447bfb298ab5', 'https://ckan0.cf.opendata.inter.prod-toronto.ca/download_resource/718061ee-8351-4cbb-818a-691f03e92041', 'https://ckan0.cf.opendata.inter.prod-toronto.ca/download_resource/b96a8119-9241-4392-92c7-f2fb6f4eff5f', 'https://ckan0.cf.opendata.inter.prod-toronto.ca/download_resource/5e22688c-e841-4249-8a18-243dc70307c1', 'https://ckan0.cf.opendata.inter.prod-toronto.ca/download_resource/9d4cae40-1a5e-4dea-b3c2-95818ae0b521', 'https://ckan0.cf.opendata.inter.prod-toronto.ca/download_resource/5e3067a3-ff0a-4ecf-a25b-5cf59e7adc25', 'https://ckan0.cf.opendata.inter.prod-toronto.ca/download_resource/b4e5c8d0-36b3-4e24-8f24-30bf59fded2d', 'https://ckan0.cf.opendata.inter.prod-toronto.ca/download_resource/9a824dba-20cc-40b1-8f26-778a34a0f3a8', 'https://ckan0.cf.opendata.inter.prod-toronto.ca/download_resource/e6bac74e-2da2-4429-a76f-202eba3d9193', 'https://ckan0.cf.opendata.inter.prod-toronto.ca/download_resource/0511879f-3233-4a42-8c28-93b432132c8b', 'https://ckan0.cf.opendata.inter.prod-toronto.ca/download_resource/447b4a5a-f696-4f05-86c0-9602f56922e5', 'https://ckan0.cf.opendata.inter.prod-toronto.ca/download_resource/a302fcab-81a1-4142-b0ec-031b0666c1df', 'https://ckan0.cf.opendata.inter.prod-toronto.ca/download_resource/655a138c-d381-4fe7-b3b3-a6620825161f', 'https://ckan0.cf.opendata.inter.prod-toronto.ca/download_resource/34d9619f-0239-4dad-a598-b6bc71ce1071', 'https://ckan0.cf.opendata.inter.prod-toronto.ca/download_resource/b6557580-a0f4-4c96-9ce2-82657b62e88a', 
'https://ckan0.cf.opendata.inter.prod-toronto.ca/download_resource/e2a5e386-ddf7-4416-8e84-c3508c4f9a4f', 'https://ckan0.cf.opendata.inter.prod-toronto.ca/download_resource/fd837bd2-85ed-485e-ba02-46e29af52024'] # + colab={} colab_type="code" id="ARF8zL3R9tNq" file_names=['sub_01_18','sub_02_18','sub_03_18','sub_04_18','sub_05_18','sub_06_18','sub_07_18', 'sub_08_18','sub_09_18','sub_10_18','sub_11_18','sub_12_18','sub_01_19','sub_02_19', 'sub_03_19','sub_04_19','sub_05_19'] # + colab={} colab_type="code" id="BiQ2-Dbo9tNt" all_sub_df = [] for url, file in zip(subway_urls, file_names): data = pd.read_excel(get_file(url, file)[0]) # store each monthly DataFrame in a list all_sub_df.append(data) # - all_sub_df = pd.concat(all_sub_df, ignore_index=True) # + colab={} colab_type="code" id="KOee75Nr9tNv" all_sub_df.head() # + colab={} colab_type="code" id="AkfeLWCk9tNx" # + [markdown] colab_type="text" id="0zFvco829tN3" # ### Data Cleansing # + colab={} colab_type="code" id="yfzm_LRw9tOl" sub_delay_code = 'https://ckan0.cf.opendata.inter.prod-toronto.ca/download_resource/fece136b-224a-412a-b191-8d31eb00491e' delay_code_df1 = pd.read_excel(get_file(sub_delay_code,'delay_code')[0], skiprows=1) delay_code_df1 = delay_code_df1[['SUB RMENU CODE','CODE DESCRIPTION']] # + colab={} colab_type="code" id="1fW39B7V9tOn" outputId="a3fd2f4f-b90d-4cc1-e6c4-b6ef93b54e7c" delay_code_df1.head() # + colab={} colab_type="code" id="jicPx8LR9tOp" # Put codes and descriptions in a dictionary # and replace the codes in the incident column with their corresponding descriptions. d = dict(zip(delay_code_df1['SUB RMENU CODE'], delay_code_df1['CODE DESCRIPTION'])) all_sub_df['Incident'] = [d.get(x, 'Unknown') for x in all_sub_df['Code']] # - all_sub_df.Line.unique() all_sub_df.isnull().sum() all_sub_df.info() all_sub_df.dropna(inplace=True) # + # list of stations in each line.
# - line_1 = ['Vaughan Metropolitan','Highway 407','Pioneer Village','York University','Finch West', 'Downsview Park','Sheppard West','Wilson','Yorkdale','Lawrence West','Glencairn','Eglinton West', 'St Clair West','Dupont','Spadina','St George','Museum','Queen\'s Park','St Patrick','Osgoode', 'St Andrew','Union','King','Queen','Dundas','College','Wellesley','Bloor-Yonge','Rosedale', 'Summerhill','St Clair','Davisville','Eglinton','Lawrence','York Mills', 'Sheppard-Yonge','North York Centre','Finch'] line_2 = ['Kipling', 'Islington','Royal York','Old Mill','Jane','Runnymede','High Park', 'Keele','Dundas West','Lansdowne','Dufferin','Ossington','Christie', 'Bathurst','Spadina','St George','Bay','Bloor-Yonge','Sherbourne','Castle Frank', 'Broadview','Chester','Pape','Donlands','Greenwood','Coxwell','Woodbine','Main Street', 'Victoria Park','Warden','Kennedy'] line_3 = ['Kennedy','Lawrence East','Ellesmere','Midland','Scarborough Centre','McCowan'] line_4 = ['Sheppard-Yonge','Bayview','Bessarion','Leslie','<NAME>'] # The four subway lines are 1, 2, 3 and 4. # There are entries with other names as well, which need to be categorized into these four lines. # Lists of entries belonging to the same line are created. # The stops near a particular station are included in the line going through that station.
# + BD_line = ['BD','BD LINE','BLOOR DANFORTH','45 KIPLING','31 GREENWOOD','126 CHRISTIE','B/D', '37 ISLINGTON','29 DUFFERIN','73 ROYAL YORK','BLOOR DANFORTH LINE','341 KEELE', '11 BAYVIEW', '63 OSSINGTON','BLOOR DANFORTH LINES'] YU_line = ['YU','95 YORK MILLS','104 FAYWOOD','60 STEELES WEST','36 FINCH WEST','504 KING','116 MORNINGSIDE', '32 EGLINTON WEST','YU LINE','YUS','YONGE UNIVERSITY SERVI','504',] SRT_line = ['SRT','9 BELLAMY','129 MCCOWAN NORTH','16 MCCOWAN'] SHP_line = ['SHP','SHEPPARD','25 DON MILLS','YU/SHEP','85 SHEPPARD EAST'] YU_BD = ['YU/BD','YU / BD','YU/ BD','BD/YU','BD/YUS','YU - BD','YU-BD','510 SPADINA','YU - BD LINE','YU BD'] # + colab={} colab_type="code" id="wA1f0Ei-9tOs" outputId="7b6862e4-4c42-46f9-9c7b-c9af30cd6980" all_sub_df.Line=all_sub_df['Line'].apply(lambda x: 'BD' if x in BD_line else ('YU' if(x in YU_line) else ('SRT' if(x in SRT_line) else ('SHP' if (x in SHP_line) else ('YU_BD' if (x in YU_BD) else('Unknown' if (x.isdigit()) else x )))))) # + [markdown] colab_type="text" id="VHejCNY59tOy" # Check the unique values in the Line column after the mapping. # + colab={} colab_type="code" id="yhqjWhT29tO0" outputId="5eb52eb3-e77c-4ba6-9136-12bdaa56f9e7" all_sub_df.Line.unique() # + [markdown] colab_type="text" id="nfD4sHsa9tO2" # The missing values in the column "Line" are relatively easy to fill. The station where the delay-causing incident occurred can reveal the line. # - # to do # avg delay on days of the week # month, season # time of the day - time slots # reasons top 5 # top 5 stations # train number with the highest number of delays # cause categorised into passenger caused, mechanical, other # + colab={} colab_type="code" id="UvspO7Pq9tPF" all_sub_df.Bound.unique() # + colab={} colab_type="code" id="342rf1g89tPH" # The unfamiliar entries in the Bound column are negligible in number # and can be either removed or put in a new category called "unknown".
# + # on close inspection it was found that the strange entries in the Bound column have no correlation with any of the other columns. # - all_sub_df[all_sub_df['Vehicle']==0].sum() all_sub_df.groupby('Vehicle')['Min Delay'].mean() all_sub_df[all_sub_df['Min Delay']==0] # + [markdown] colab_type="text" id="NQPQ56UG9tPJ" # #### Clean Direction data. # + [markdown] colab_type="text" id="6NJPnhwg9tPK" # Cleaning the values of the column Direction # + [markdown] colab_type="text" id="DMPHS3MD9tPr" # ### Visualization # + [markdown] colab_type="text" id="uiInUTCV9tPs" # * more than 5 minute delay distribution. # * comparison of subway bus and streetcar. # * location with highest number of delays. # * biggest reason for delays. # * time of the day with the most delays. # * day of the week with the most delays. # # - all_sub_df.head()
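The nested-ternary `apply` used above for the Line column works but is hard to read and to extend. A sketch of the same normalization with an ordered lookup table — `normalize_line` is a hypothetical helper, and the alias lists here are abbreviated for illustration (the real ones are defined above):

```python
# Abbreviated alias lists; the full lists are BD_line, YU_line, SRT_line, SHP_line, YU_BD.
LINE_GROUPS = {
    'BD':    ['BD', 'BD LINE', 'BLOOR DANFORTH', 'B/D'],
    'YU':    ['YU', 'YUS', 'YONGE UNIVERSITY SERVI'],
    'SRT':   ['SRT', '9 BELLAMY'],
    'SHP':   ['SHP', 'SHEPPARD'],
    'YU_BD': ['YU/BD', 'YU - BD', 'YU BD'],
}

def normalize_line(value):
    """Map a raw Line entry to one of the known lines; bare route numbers become 'Unknown'."""
    for line, aliases in LINE_GROUPS.items():
        if value in aliases:          # same precedence as the lambda: named groups first
            return line
    return 'Unknown' if value.isdigit() else value

# With a DataFrame this becomes:
# all_sub_df['Line'] = all_sub_df['Line'].apply(normalize_line)
print(normalize_line('BLOOR DANFORTH'), normalize_line('123'))
```

Adding a new alias then means editing one list entry rather than re-nesting the conditional expression.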
TTC_Subway_delay.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + ## Preprocessing from sklearn.datasets import load_boston from sklearn.model_selection import train_test_split boston = load_boston() X = boston["data"] y = boston["target"] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=10) # + ## Models, Predictions, and Metrics from sklearn import linear_model from sklearn.ensemble import RandomForestRegressor def fit_and_score_model(model_type, hyperparameter, X_train, y_train, X_test, y_test): if model_type == "Ridge": ridge = linear_model.Ridge(alpha=hyperparameter) ridge.fit(X_train, y_train) return ridge.score(X_test, y_test) if model_type == "RandomForestRegressor": rf = RandomForestRegressor(n_estimators=hyperparameter) rf.fit(X_train, y_train) return rf.score(X_test, y_test) for alpha in range(4): ridge_score = fit_and_score_model("Ridge", alpha, X_train, y_train, X_test, y_test) print("Ridge (alpha=" + str(alpha) + ") score: " + str(ridge_score)) for n_estimator in [1,5,7,10]: rf_score = fit_and_score_model("RandomForestRegressor", n_estimator, X_train, y_train, X_test, y_test) print("Random Forest (n_estimator=" + str(n_estimator) + ") score: " + str(rf_score)) # -
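The if-chain in `fit_and_score_model` grows with every model type. One common alternative is a registry dictionary mapping each model name to a constructor that takes the hyperparameter. This is a self-contained sketch of the pattern with stand-in classes (real code would register `linear_model.Ridge` and `RandomForestRegressor` instead):

```python
# Stand-ins for the sklearn estimators, so the dispatch pattern runs on its own.
class Ridge:
    def __init__(self, alpha):
        self.alpha = alpha

class RandomForestRegressor:
    def __init__(self, n_estimators):
        self.n_estimators = n_estimators

# One entry per model type: name -> constructor taking the single hyperparameter.
MODEL_REGISTRY = {
    "Ridge": lambda h: Ridge(alpha=h),
    "RandomForestRegressor": lambda h: RandomForestRegressor(n_estimators=h),
}

def make_model(model_type, hyperparameter):
    """Look up the constructor instead of branching on the model name."""
    return MODEL_REGISTRY[model_type](hyperparameter)

model = make_model("Ridge", 0.5)
print(type(model).__name__, model.alpha)  # Ridge 0.5
```

`fit_and_score_model` would then reduce to `make_model(...)`, `fit`, and `score`, and supporting a new estimator means adding one registry entry.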
curriculum/0_before_you_start/pipelines-and-project-workflow/boston-pipeline.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/He11oWorldss/OOP-58002/blob/main/OOP_2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="CjWvnaQUrZmD" colab_type="text" # # Emotion classification from a multimodal Speech/Body-Joint-Movements dataset # + [markdown] id="ldtHMhuLrewK" colab_type="text" # ***Data*** # # We created an audio/body-joint-movement signals dataset, labelled according to the emotional status of the observation subjects. This data was fed into an experiment to demonstrate the process of data processing and machine-learning model building. The dataset contains about 1300 files, and each file was rated for emotional validity, intensity, and genuineness by the domain experts. # # Each file was labelled with 14 digits, which help us identify and trace back the characteristics: modality (01 = body-movement, 02 = audio), emotion (01 = neutral, 02 = calm, 03 = happy, 04 = sad, 05 = angry, 06 = fearful, 07 = disgust, 08 = surprised), emotional intensity (01 = normal, 02 = strong), statement (01 = "Scenario-1", 02 = "Scenario-2"), repetition (01 = 1st repetition, 02 = 2nd repetition), data-collection-type (01=male, 02=female, 03=English-as 1st language, 04=other-language, 05=collecting mode). # # ***Experiment*** # # For this experiment, we have extracted 28 features from the audio of each file and 9 features from the body-joint-movement signals, which I shall explain further down. # # ***Theory and Rationale*** # # We offer a multimodal approach: audio-visual feature analysis for mental-status classification. This approach is accepted by the research literature and by the panel of experts of our consortium. Combinations of multimodal observations have complementary effects, and from a data science viewpoint this approach will increase machine learning model accuracy [A]. The challenge in multimodal feature learning is how and at what stage to fuse data from multiple modalities.
This challenge is complicated by the high dimensionality of raw data, differing temporal resolutions, and differing temporal dynamics across modalities. Surveys on the general problem of sensor fusion [B] and specifically on fusion for affect recognition [C], [D] are available. Fusion can be achieved at early model stages close to the raw sensor data, or at a later stage by combining independent models. In early or feature-level fusion, features are extracted independently and then concatenated for further learning of a joint feature representation; this allows the model to capture correlations between the modalities. Late or decision-level fusion aggregates the results of independent recognition models. The literature generally reports that decision-level fusion works better for affect recognition given the datasets and models currently used [E]. While decision-level fusion typically only involves simple score weighting, feature-level fusion is a representation learning task that may benefit from deep learning. # # For combining audio-visual observations there are two common approaches: in approach 1, researchers combine facial expressions with speech, while in approach 2 they combine body-part-movement expressions with speech to study mental status. In the feature-level fusion with neural networks approach, joint feature representations are learned without considering the temporal context for fusion. For both modalities, body-part-movement features are extracted using FER and SER methods that may involve both handcrafted and deep features (see Sections 3.1 and 3.2). A fully connected DNN, typically initialized via unsupervised pretraining, then learns a high-level joint feature representation of both modalities as an improvement over "shallow" feature fusion. Kim et al. [F] demonstrated how this can be achieved with DBNs. This approach is especially feasible in cases where the goal is to label each body-movement with one affective state.
Alternatively, joint feature representations can be learned at the frame level and then aggregated together. [G] used a DBN to fuse frame-level audio-visual features learned independently via CNNs; the learned features are average pooled for classification at the body-movement level and lead to an improvement over state-of-the-art methods. Feature-level fusion with RNNs: especially when predictions are required at the frame level for dimensional affective states, feature-level fusion could benefit by taking into account the temporal context. Modeling via RNNs makes this possible, potentially improving model robustness and helping to deal with temporal lags between modalities [H]. # # This preliminary experiment reported that dynamic feature fusion can lead to performance improvements compared to simpler fusion strategies. However, several other studies based on handcrafted features found that decision-level fusion on top of individual LSTM models leads to better performance. # # Learning from raw audio-visual data with two CNNs, Tzirakis et al. [I] used a two-layer LSTM network for feature fusion, which was found to outperform the state of the art. # # ##### [A] ***<NAME>, <NAME>, <NAME>, and <NAME>, "A survey of affect recognition methods: Audio, visual, and spontaneous expressions," IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 1, pp. 39–58, 2009.*** # ##### [B] ***F. Lingenfelser, J. Wagner, and E. André, "A systematic discussion of fusion techniques for multi-modal affect recognition tasks," in Proc. Int. Conf. Multimodal Interact., 2011, pp. 19–26.*** # ##### [C] ***<NAME> and <NAME>, "A review and meta-analysis of multimodal affect detection systems," ACM Comput. Surveys, vol. 47, no. 3, p. 43, 2015.*** # ##### [D] ***<NAME>, <NAME>, <NAME>, and <NAME>, "A review of affective computing: From unimodal analysis to multimodal fusion," Inf. Fusion, vol. 37, pp.
98–125, 2017.*** # ##### [E] ***F. Ringeval, F. Eyben, E. Kroupi, A. Yuce, J.-P. Thiran, T. Ebrahimi, <NAME>, and <NAME>, "Prediction of asynchronous dimensional emotion ratings from audio-visual and physiological data," Pattern Recognit. Lett., vol. 66, pp. 22–30, 2015.*** # ##### [F] ***<NAME>, <NAME>, and <NAME>, "Multimodal emotion recognition using deep learning architectures," in Proc. Winter Conf. Appl. Comput. Vision, 2016, pp. 1–9.*** # ##### [G] ***<NAME>, <NAME>, <NAME>, <NAME>, and <NAME>, "Learning affective features with a hybrid deep model for audio-visual emotion recognition," IEEE Trans. Circuits Syst. Video Technol., 2017.*** # ##### [H] ***<NAME>, <NAME>, <NAME>, <NAME>, <NAME>, and <NAME>, "Asynchronous and event-based fusion systems for affect recognition on naturalistic data in comparison to conventional approaches," IEEE Trans. Affect. Comput., 2016.*** # ##### [I] ***<NAME>, <NAME>, <NAME>, <NAME>, and <NAME>, "End-to-end multimodal emotion recognition using deep neural networks," IEEE J. Sel. Top. Signal Process., vol. 11, no. 8, pp. 1301–1309, 2017.*** # # ![alt text](https://drive.google.com/uc?id=1q0hGl0lcM5JuUskS1VaXvSAk3vOn4giV) # # # # # + [markdown] id="JDNbxj45rkvB" colab_type="text" # # Analysis # # We are using Colab, a Google cloud environment for Jupyter, so we need to import our files from Google Drive and then install LibROSA, a Python package for music and audio analysis. # # After the import, we will plot the signal of the first file.
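Before turning to the data, the feature-level vs decision-level fusion strategies discussed above can be sketched in a few lines. This is a pure-Python toy with made-up feature vectors and class scores (real systems would fuse learned deep features and trained classifiers):

```python
# Feature-level (early) fusion: concatenate per-modality feature vectors,
# then train ONE joint model on the combined representation.
def feature_level_fusion(audio_features, movement_features):
    return audio_features + movement_features

# Decision-level (late) fusion: each modality gets its OWN classifier;
# their per-class scores are combined afterwards, here by simple averaging.
def decision_level_fusion(audio_scores, movement_scores):
    return [(a + m) / 2 for a, m in zip(audio_scores, movement_scores)]

audio = [0.1, 0.7]   # toy 2-dim audio feature vector
movement = [0.4]     # toy 1-dim body-movement feature vector
print(feature_level_fusion(audio, movement))            # [0.1, 0.7, 0.4]

# Toy per-class scores for two emotions from each modality's classifier
print(decision_level_fusion([1.0, 0.0], [0.5, 0.5]))    # [0.75, 0.25]
```

The survey literature cited above also covers weighted and learned combinations in place of the plain average used here.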
# + id="N-o2JI49WBAe" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 122} outputId="b03278af-8d72-435e-bf24-dce838e6b61a" from google.colab import drive drive.mount('/content/drive') # + id="EgFwaDhMbJVm" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 221} outputId="ce77b29f-0f0c-447e-ce10-bbcce086c555" # !pip install librosa # + id="tp6tLgwNMqXf" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="181c2e8b-5f87-4c8a-90e5-243ece631810" # %cd /content/drive/My Drive/Voice Analytics/Emotion-Classification-Ravdess-master # + id="rxI4xzngdS-e" colab_type="code" colab={} import librosa from librosa import display data, sampling_rate = librosa.load('01-01-01-01-01-01-01.wav') # + id="WgaSHtCIdtX2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 364} outputId="9f8245f8-a4fb-406a-dd59-ba7956acdafc" # %pylab inline import os import pandas as pd import glob plt.figure(figsize=(12, 4)) librosa.display.waveplot(data, sr=sampling_rate) # + id="bmiWmwaFyFL1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 296} outputId="81132f39-9a5c-4c29-e036-ccbd1876da3e" Xtra = librosa.stft(data) Xdb = librosa.amplitude_to_db(abs(Xtra)) plt.figure(figsize=(12, 4)) librosa.display.specshow(Xdb, sr=sampling_rate, x_axis='time', y_axis='log') # + id="h7pw9TtD0oz1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 296} outputId="e931ae45-cd8d-499f-9f9f-efa7160aa815" import sklearn spectral_centroids = librosa.feature.spectral_centroid(data, sr=sampling_rate)[0] print(spectral_centroids.shape) # (775,) # Computing the time variable for visualization plt.figure(figsize=(12, 4)) frames = range(len(spectral_centroids)) t = librosa.frames_to_time(frames) # Normalising the spectral centroid for visualisation def normalize(x, axis=0): return sklearn.preprocessing.minmax_scale(x, axis=axis) # Plotting the Spectral Centroid along the waveform
librosa.display.waveplot(data, sr=sampling_rate, alpha=0.4)
plt.plot(t, normalize(spectral_centroids), color='b')

# + id="eO78isBk1UVs" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 296} outputId="005a0390-662a-4a23-9d28-bc5b9981746d"
spectral_rolloff = librosa.feature.spectral_rolloff(data+0.01, sr=sampling_rate)[0]
plt.figure(figsize=(12, 4))
librosa.display.waveplot(data, sr=sampling_rate, alpha=0.4)
plt.plot(t, normalize(spectral_rolloff), color='r')

# + id="ms1tSrLh1qyF" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 568} outputId="63f7ebda-9ace-4c0c-d69f-57c99347c292"
spectral_bandwidth_2 = librosa.feature.spectral_bandwidth(data+0.01, sr=sampling_rate)[0]
spectral_bandwidth_3 = librosa.feature.spectral_bandwidth(data+0.01, sr=sampling_rate, p=3)[0]
spectral_bandwidth_4 = librosa.feature.spectral_bandwidth(data+0.01, sr=sampling_rate, p=4)[0]
plt.figure(figsize=(15, 9))
librosa.display.waveplot(data, sr=sampling_rate, alpha=0.4)
plt.plot(t, normalize(spectral_bandwidth_2), color='r')
plt.plot(t, normalize(spectral_bandwidth_3), color='g')
plt.plot(t, normalize(spectral_bandwidth_4), color='y')
plt.legend(('p = 2', 'p = 3', 'p = 4'))

# + id="Uv6USeNC2L4J" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 637} outputId="fa818acb-4442-4a26-8379-b6f0e484fef9"
# Plot the signal:
plt.figure(figsize=(14, 5))
librosa.display.waveplot(data, sr=sampling_rate)

# Zooming in
n0 = 9000
n1 = 9100
plt.figure(figsize=(14, 5))
plt.plot(data[n0:n1])
plt.grid()

# + id="S9FsZgkZ3BnX" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 476} outputId="300b7055-0bb5-4c7d-809a-2d44aa035ed9"
mfccs = librosa.feature.mfcc(data, sr=sampling_rate)
print(mfccs.shape)  # (20, 97)

# Displaying the MFCCs:
plt.figure(figsize=(15, 7))
librosa.display.specshow(mfccs, sr=sampling_rate, x_axis='time')

# + id="0PHrZxjL3Vas" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 351}
outputId="a6d80796-5c6c-4558-cbcd-b5add68d97b6" chromagram = librosa.feature.chroma_stft(data, sr=sampling_rate, hop_length=3) plt.figure(figsize=(15, 5)) librosa.display.specshow(chromagram, x_axis='time', y_axis='chroma', hop_length=3, cmap='coolwarm') # + [markdown] id="vCtNuVWlr5jL" colab_type="text" # # Load all files # # We will create our numpy array extracting Mel-frequency cepstral coefficients (MFCCs), while the classes to predict will be extracted from the name of the file (see the introductory section of this notebook to see the naming convention of the files of this dataset). # + id="AKvuF--gd6F-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="476b623f-cfef-46d8-a8a5-afa51d70bd05" import time path = '/content/drive/My Drive/Voice Analytics/Emotion-Classification-Ravdess-master/audio1/' lst = [] start_time = time.time() for subdir, dirs, files in os.walk(path): for file in files: try: #Load librosa array, obtain mfcss, store the file and the mcss information in a new array X, sample_rate = librosa.load(os.path.join(subdir,file), res_type='kaiser_fast') mfccs = np.mean(librosa.feature.mfcc(y=X, sr=sample_rate, n_mfcc=40).T,axis=0) # The instruction below converts the labels (from 1 to 8) to a series from 0 to 7 # This is because our predictor needs to start from 0 otherwise it will try to predict also 0. file = int(file[7:8]) - 1 arr = mfccs, file lst.append(arr) # If the file is not valid, skip it except ValueError: continue print("--- Data loaded. Loading time: %s seconds ---" % (time.time() - start_time)) # + id="kLSggnF7kKY1" colab_type="code" colab={} # Creating X and y: zip makes a list of all the first elements, and a list of all the second elements. 
X, y = zip(*lst) # + id="VzvBRTJIlIE9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="5ceee86d-412f-4ad1-e8d0-1977fce40b99" import numpy as np X = np.asarray(X) y = np.asarray(y) X.shape, y.shape # + id="xOutQiAlCjOY" colab_type="code" colab={} # Saving joblib files to not load them again with the loop above import joblib X_name = 'X.joblib' y_name = 'y.joblib' save_dir = '/content/drive/My Drive/Voice Analytics/Emotion-Classification-Ravdess-master' savedX = joblib.dump(X, os.path.join(save_dir, X_name)) savedy = joblib.dump(y, os.path.join(save_dir, y_name)) # + id="nIoFdycUXMxA" colab_type="code" colab={} # Loading saved models X = joblib.load('/content/drive/My Drive/Voice Analytics/Emotion-Classification-Ravdess-master/X.joblib') y = joblib.load('/content/drive/My Drive/Voice Analytics/Emotion-Classification-Ravdess-master/y.joblib') # + [markdown] id="Agw-3KN1sDhh" colab_type="text" # # Decision Tree Classifier # # To make a first attempt in accomplishing this classification task I chose a decision tree: # + id="Q-Xgb5NslTBO" colab_type="code" colab={} from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42) # + id="UshLOC1ClWL3" colab_type="code" colab={} from sklearn.tree import DecisionTreeClassifier # + id="_BnCR52nlXw0" colab_type="code" colab={} dtree = DecisionTreeClassifier() # + id="qWyTownblZM0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 119} outputId="03a0b2ee-344e-44fb-9206-0453673deb93" dtree.fit(X_train, y_train) # + id="HEuw6TUQlr7C" colab_type="code" colab={} predictions = dtree.predict(X_test) # + [markdown] id="_1v0i0V7sMw7" colab_type="text" # Let's go with our classification report. 
# # Before we start, a quick reminder of the classes we are trying to predict:
#
#     emotions = {
#         "neutral": "0",
#         "calm": "1",
#         "happy": "2",
#         "sad": "3",
#         "angry": "4",
#         "fearful": "5",
#         "disgust": "6",
#         "surprised": "7"
#     }

# + id="c4kNSYkAleIv" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 272} outputId="ddc3b92f-47d2-4e2b-8330-dde2876434b8"
from sklearn.metrics import classification_report, confusion_matrix

print(classification_report(y_test, predictions))

# + [markdown] id="lCVgjLj-gwE2" colab_type="text"
# # Random Forest

# + [markdown] id="jfaTxzZ1w__y" colab_type="text"
# In this second approach, I switched to a random forest classifier and ran a grid search for hyperparameter tuning.
#
# The grid search is not shown in the code below; otherwise the notebook would take too long to run.

# + id="wcov_DCXgs7v" colab_type="code" colab={}
from sklearn.ensemble import RandomForestClassifier

# + id="3eo0ljqzg-KM" colab_type="code" colab={}
rforest = RandomForestClassifier(criterion="gini", max_depth=10, max_features="log2",
                                 max_leaf_nodes=100, min_samples_leaf=3, min_samples_split=20,
                                 n_estimators=22000, random_state=5)

# + id="Tg45qSOfg-26" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 153} outputId="fa20b736-42d0-416f-90eb-b67cf53c9e5e"
rforest.fit(X_train, y_train)

# + id="aM8KU3qxhGBM" colab_type="code" colab={}
predictions = rforest.predict(X_test)

# + id="296FW5sBdanI" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 272} outputId="75b66e99-bece-4967-8aef-ea872c8a571c"
print(classification_report(y_test, predictions))

# + [markdown] colab_type="text" id="hE-srZoj0ZTI"
# # K-Neighbors Classifier

# + id="338VuxyW0iZG" colab_type="code" colab={}
from sklearn.neighbors import KNeighborsClassifier

# + id="hshm-IHb0uEl" colab_type="code" colab={}
kneigh = KNeighborsClassifier(n_neighbors=2, weights='uniform', algorithm='auto', leaf_size=300, p=2,
metric='minkowski', metric_params=None, n_jobs=None) # + id="9jrt3gsx2m-i" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="c6fe1f4f-c746-43e7-a61e-79c6884e0bea" kneigh.fit(X_train, y_train) # + id="t_q5arXC22zx" colab_type="code" colab={} predictions = kneigh.predict(X_test) # + id="cjbBXPde29bu" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 272} outputId="0f20899e-21f4-493e-a132-3e0d3f924b53" print(classification_report(y_test,predictions)) # + [markdown] id="t9eqMHV3S8i6" colab_type="text" # # Neural network # + [markdown] id="G-QscoyMxQtn" colab_type="text" # Let's build our neural network! # # To do so, we need to expand the dimensions of our array, adding a third one using the numpy "expand_dims" feature. # + id="W4i187-Pe-w5" colab_type="code" colab={} x_traincnn = np.expand_dims(X_train, axis=2) x_testcnn = np.expand_dims(X_test, axis=2) # + id="vnvoCRX1gQCh" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="6625b4bd-9760-4138-e312-924e98d7160e" x_traincnn.shape, x_testcnn.shape # + id="9H8gNhudidkT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 102} outputId="7288c002-b82e-41c8-cc8d-6f36d5dfca5d" # %tensorflow_version 2 # + id="kkCnpKWGpAma" colab_type="code" colab={} import os os.kill(os.getpid(), 9) # + id="HZOGIpuefCd3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="84a1f2d6-af35-4f94-d2fa-d6497ccf831d" import keras import numpy as np import matplotlib.pyplot as plt import tensorflow as tf from tensorflow.keras.preprocessing import sequence from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Embedding from tensorflow.keras.utils import to_categorical from tensorflow.keras.layers import Input, Flatten, Dropout, Activation from tensorflow.keras.layers import Conv1D, MaxPooling1D from tensorflow.keras.models import Model from 
tensorflow.keras.callbacks import ModelCheckpoint model = Sequential() model.add(Conv1D(188, 5,padding='same', input_shape=(40,1))) model.add(Activation('relu')) model.add(Dropout(0.1)) model.add(MaxPooling1D(pool_size=(8))) model.add(Conv1D(128, 5,padding='same',)) model.add(Activation('relu')) model.add(Dropout(0.1)) model.add(Flatten()) model.add(Dense(8)) model.add(Activation('softmax')) opt = keras.optimizers.rmsprop(lr=0.00005, rho=0.9, epsilon=None, decay=0.0) # + [markdown] id="LphftMIZzUvz" colab_type="text" # With *model.summary* we can see a recap of what we have build: # + id="pIWPB4Zgfic7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 493} outputId="17e759c8-2f7a-47b3-8ee8-030dc1ade6bb" model.summary() # + [markdown] id="5qQSBeBhzcLu" colab_type="text" # Now we can compile and fit our model: # + id="iNI1znbsfpTx" colab_type="code" colab={} model.compile(loss='sparse_categorical_crossentropy', optimizer='RMSprop', metrics=['accuracy']) # + id="ktdF-nJKfq6F" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="022db2fb-cc6a-4423-a0bc-f2534e83a8ed" cnnhistory=model.fit(x_traincnn, y_train, batch_size=16, epochs=1000, validation_data=(x_testcnn, y_test)) # + [markdown] id="mFytY6LDzgJ0" colab_type="text" # Let's plot the loss: # + id="TFz4ClZov9gZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 295} outputId="2ab9e742-55f0-42e4-d94c-7c644812079d" plt.plot(cnnhistory.history['loss']) plt.plot(cnnhistory.history['val_loss']) plt.title('model loss') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() # + [markdown] id="Vf1W7LgP2DA5" colab_type="text" # # # And now let's plot the accuracy: # + id="8yyFBt7ASPUe" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 295} outputId="d096848b-55a8-4d58-849d-ac26dbcb893f" plt.plot(cnnhistory.history['accuracy']) plt.plot(cnnhistory.history['val_accuracy']) 
plt.title('model accuracy') plt.ylabel('accuracy') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() # + [markdown] id="gaZONl1mD8XD" colab_type="text" # Let's now create a classification report to review the f1-score of the model per class. # To do so, we have to: # - Create a variable predictions that will contain the model.predict_classes outcome # - Convert our y_test (array of strings with our classes) to an array of int called new_Ytest, otherwise it will not be comparable to the predictions by the classification report. # + id="EO25uIL-9vqx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 88} outputId="3870f2e3-84b6-43a3-bcf4-7445939e9f0c" predictions = model.predict_classes(x_testcnn) # + id="1i06grlBBSrn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 391} outputId="5d0fc8a3-1193-4004-9f32-a43dbd79447d" predictions # + id="HUHshx93CM_6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 391} outputId="d5f2b3ff-3419-4456-9aa2-58473e2ba182" y_test # + id="tMxojpvWCxOs" colab_type="code" colab={} new_Ytest = y_test.astype(int) # + id="W07EQaC8DE6i" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 391} outputId="7205fc78-9440-4361-bc2e-9461cbb55a6f" new_Ytest # + [markdown] id="FW2XHdTtEedk" colab_type="text" # Okay, now we can display the classification report: # + id="IfVSRmMu96rC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 272} outputId="0f958170-2801-46ca-dcec-983854106b07" from sklearn.metrics import classification_report report = classification_report(new_Ytest, predictions) print(report) # + [markdown] id="hu1S5IowfSDG" colab_type="text" # And now, the confusion matrix: it will show us the misclassified samples # + id="fdy09SCEd7Cl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 153} outputId="9b1fe991-b69c-4ca0-cc4e-e362529ce13d" from sklearn.metrics import 
confusion_matrix matrix = confusion_matrix(new_Ytest, predictions) print (matrix) # 0 = neutral, 1 = calm, 2 = happy, 3 = sad, 4 = angry, 5 = fearful, 6 = disgust, 7 = surprised # + [markdown] id="x_ySPOyHxkZ3" colab_type="text" # # Save the model # + id="f5kRmoD-sdHj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="78eb631a-5943-4cb6-fd49-876d35df6e99" model_name = 'Emotion_Voice_Detection_Model.h5' save_dir = '/content/drive/My Drive/Voice Analytics/Emotion-Classification-Ravdess-master/' # Save model and weights if not os.path.isdir(save_dir): os.makedirs(save_dir) model_path = os.path.join(save_dir, model_name) model.save(model_path) print('Saved trained model at %s ' % model_path) # + [markdown] id="MNUiznKNwUtJ" colab_type="text" # # Reloading the model to test it # + id="T4oAv6Kx8RBE" colab_type="code" colab={} import keras from keras.initializers import glorot_uniform loaded_model = keras.models.load_model('/content/drive/My Drive/Voice Analytics/Emotion-Classification-Ravdess-master/Emotion_Voice_Detection_Model.h5') loaded_model.summary() # + [markdown] id="FHtPzc0Y8hfZ" colab_type="text" # # Checking the accuracy of the loaded model # + id="qUi-Zjuf8hDB" colab_type="code" colab={} loss, acc = loaded_model.evaluate(x_testcnn, y_test) print("Restored model, accuracy: {:5.2f}%".format(100*acc)) # + [markdown] id="8pXH3y7S9A1N" colab_type="text" # # CNN2 based on image of MFCCs # + colab_type="code" id="VgCJ91oCBrXK" colab={"base_uri": "https://localhost:8080/", "height": 697} outputId="496ca174-9e5c-4d4d-fd4b-fc02ef8a74b8" import keras import numpy as np import matplotlib.pyplot as plt import tensorflow as tf from tensorflow.keras.preprocessing import sequence from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Embedding from tensorflow.keras.utils import to_categorical from tensorflow.keras.layers import Input, Flatten, Dropout, Activation from tensorflow.keras.layers import 
Conv1D, MaxPooling1D, Conv2D
from tensorflow.keras.layers import MaxPooling2D, GlobalMaxPooling2D, GlobalAveragePooling1D, AveragePooling2D, Add
from tensorflow.keras.models import Model
from tensorflow.keras.callbacks import ModelCheckpoint

model = Sequential()
input_shape = (64, 64, 3)

# 1st hidden layer
model.add(Conv2D(32, (3, 3), strides=(2, 2), input_shape=input_shape))
model.add(AveragePooling2D((2, 2), strides=(2, 2)))
model.add(Activation('relu'))

# 2nd hidden layer
model.add(Conv2D(64, (3, 3), padding="same"))
model.add(AveragePooling2D((2, 2), strides=(2, 2)))
model.add(Activation('relu'))

# 3rd hidden layer
model.add(Conv2D(64, (3, 3), padding="same"))
model.add(AveragePooling2D((2, 2), strides=(2, 2)))
model.add(Activation('relu'))

# Flatten
model.add(Flatten())
model.add(Dropout(rate=0.5))

# Add fully connected layer.
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(rate=0.5))

# Output layer: one unit per emotion class
model.add(Dense(8))
model.add(Activation('softmax'))

model.summary()

# + id="I2ZUnsDtXYkP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="c9c1db10-b128-4d05-fff8-5d6cc009415f"
# !pip install split-folders

# + id="ex8GChVUVr3M" colab_type="code" colab={}
import split_folders
from pathlib import Path

genres = 'A B C D E F G H'.split()
for g in genres:
    Path(f'img_data/{g}').mkdir(parents=True, exist_ok=True)
    for filename in os.listdir(f'/content/drive/My Drive/Voice Analytics/Emotion-Classification-Ravdess-master/genres/{g}'):
        songname = f'/content/drive/My Drive/Voice Analytics/Emotion-Classification-Ravdess-master/genres/{g}/{filename}'
        y, sr = librosa.load(songname, mono=True, duration=5)
        print(y.shape)
        # cmap was left undefined here; 'inferno' is an assumed choice
        plt.specgram(y, NFFT=2048, Fs=2, Fc=0, noverlap=128, cmap='inferno', sides='default',
                     mode='default', scale='dB')
        plt.axis('off')
        plt.savefig(f'img_data/{g}/{filename[:-3].replace(".", "")}.png')
        plt.clf()

# To only split into training and validation set, set a tuple to `ratio`, i.e., `(.8, .2)`.
split_folders.ratio('./img_data/', output="./data", seed=1337, ratio=(.8, .2))  # default values

from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1./255,  # rescale all pixel values from 0-255, so after this step all our pixel values are in the range (0, 1)
    shear_range=0.2,  # apply some random transformations
    zoom_range=0.2,  # apply zoom
    horizontal_flip=True)  # images will be flipped horizontally

test_datagen = ImageDataGenerator(rescale=1./255)

training_set = train_datagen.flow_from_directory(
    '/content/drive/My Drive/Voice Analytics/Emotion-Classification-Ravdess-master/audio/',
    target_size=(64, 64),
    batch_size=32,
    class_mode='categorical',
    shuffle=False)

test_set = test_datagen.flow_from_directory(
    '/content/drive/My Drive/Voice Analytics/Emotion-Classification-Ravdess-master/audio1/',
    target_size=(64, 64),
    batch_size=32,
    class_mode='categorical',
    shuffle=False)
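For intuition, here is a minimal, framework-free sketch of the 80/20 shuffle-and-split that `split_folders.ratio` performs on the image folders — illustrative only, not the library's actual implementation, and `ratio_split` is a hypothetical helper name.

```python
import random

def ratio_split(items, ratio=(0.8, 0.2), seed=1337):
    """Shuffle deterministically, then cut the list into train/val parts by ratio."""
    items = list(items)
    random.Random(seed).shuffle(items)
    cut = int(len(items) * ratio[0])
    return items[:cut], items[cut:]

files = [f"spectrogram_{i}.png" for i in range(10)]
train_files, val_files = ratio_split(files)
print(len(train_files), len(val_files))  # 8 2
```

Fixing the seed (1337 above, matching the call to `split_folders.ratio`) makes the partition reproducible across runs.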
REPORT_of_H2020_Emotions_Classification.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # TensorFlow 实战 Google 深度学习框架 (TensorFlow: Google's Deep Learning Framework in Practice) | Study Notes (II)
#
# >Shoot on the moon and if you miss you will still be among the stars.
#
# Caicloud GitHub: [tensorflow-tutorial](https://github.com/caicloud/tensorflow-tutorial)
#
# The original tutorial uses TensorFlow 1.4.0 at the latest, while I am using 1.5.0, so some of the code differs slightly. These notes are for my own study and understanding only; if there are mistakes, corrections are welcome. ----ZJ
#
# ## 4 Deep Neural Networks
#
# ### 4.1 Deep Learning and Deep Neural Networks
#
# Wiki definition of deep learning: a class of algorithms that model highly complex data through multiple layers of non-linear transformations.
#
# Deep learning has two essential properties: **multiple layers and non-linearity.**
#
# - Limitations of linear models: the problems a linear model can solve are limited (only linearly separable ones).
#   - In Section 3.4.2 the output was a weighted sum of all inputs, which makes the whole network a linear model.
#   - With linear activations, a fully connected (FC) network of any depth has exactly the same expressive power as a single-layer model.
#   - Linear models can solve linearly separable problems, i.e. those divided by a straight line (or a hyperplane in higher dimensions).
#   - The vast majority of real-world problems are not linearly separable, which is why non-linear activation functions such as ReLU exist to handle highly complex problems (solving such problems is the point of deep learning).
#
# - De-linearization through activation functions
#   - If every neuron's output passes through a non-linear function, the whole network is no longer linear.
#   - Adding a bias term: the bias is a very useful structure in neural networks.
#   - Each node (neuron) applies a non-linear transform on top of its weighted sum.
#   - Common non-linear activation functions: tf.nn.relu, tf.sigmoid, tf.tanh
#
# - Multi-layer networks solve the XOR problem
#   - The **XOR problem** played an important role in the history of neural networks.
#   - The 1958 perceptron model takes a weighted sum of the inputs with no hidden layer and applies an activation to produce the output. It cannot solve XOR.
#   - Intuitively, XOR outputs 0 when the two inputs have the same sign (both positive or both negative) and 1 otherwise (one positive, one negative).
#   - Once a hidden layer is added, the XOR problem is solved nicely.
#   - Deep networks can extract combinations of features, which helps greatly with problems whose feature vectors are hard to hand-craft (image recognition, speech recognition, etc.).
#
# ### 4.2 Defining the Loss Function
#
# **The quality of a neural network model and its optimization target are defined through the loss function.**
#
# Two major kinds of supervised learning: classification problems and regression problems.
#
# - Classic loss functions
#
# - Custom loss functions
#
# How do we judge how close an output vector is to the expected vector? Cross entropy is one of the most common criteria.
#
# Cross entropy: given two probability distributions p and q,
#
# $$H(p,q)= - \sum_{x}p(x)\log q(x)$$
#
# Note: cross entropy measures the distance between two probability distributions, but the output of a neural network is not necessarily a probability distribution.
#
# - A probability distribution describes the probabilities of the different events; when the number of events is finite, the distribution function $p(X=x)$ satisfies:
#
# $$\forall x \space\space p(X=x)\in[0,1] \text{ and } \sum_{x}p(X=x)=1$$
#
# - every event's probability lies between 0 and 1
# - and some event always occurs (the probabilities sum to 1)
#
# How do we turn the forward-pass result of a neural network into a probability distribution?
#
# - softmax regression is a common approach
#
# [Figure 4_10]
#
# Suppose the raw network outputs are $y_1,y_2,\dots,y_n$; after softmax the outputs become:
#
# $$ softmax(y)_i=y_i^{'}=\frac{e^{y_i}}{\sum_{j=1}^{n}e^{y_j}}$$
#
# - the raw network outputs are used as confidences to generate new outputs, and the new outputs satisfy all the requirements of a probability distribution
# - the new outputs can be read as the probabilities, inferred by the network, that a sample belongs to each class
# - the network output thus becomes a probability distribution, so cross entropy can measure the distance between the predicted distribution and the true one
# - the cross-entropy function is not symmetric ($H(p,q)\neq H(q,p)$); it measures how hard it is to express distribution p through distribution q
# - in a neural network, p stands for the correct labels and q for the predictions

# +
import tensorflow as tf

tf.__version__

# +
'''
Implementing cross entropy in TensorFlow
'''

cross_entropy = -tf.reduce_mean(y_ * tf.log(tf.clip_by_value(y, 1e-10, 1.0)))
# -

# reduce_mean can be read as $ \frac{1}{n} \sum$, where n is the number of examples in a batch
#
# here `y_` is the correct answer and `y` the predicted value
#
# The single line of code above combines four different TensorFlow operations.
#
# - `tf.clip_by_value` clamps the values of a tensor into a given range, which avoids numerical errors (e.g. log 0 is invalid)
#
#   - values below 2.5 are replaced by 2.5, values above 4.5 by 4.5
#   - `tf.clip_by_value` thus guarantees that the log operation never sees 0

# +
import tensorflow as tf

v = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])

with tf.Session() as sess:
    print(tf.clip_by_value(v, 2.5, 4.5).eval())
# -

# - the second operation is `tf.log`, which takes the logarithm of every element of a tensor

# +
v = tf.constant([1.0, 2.0, 3.0])

with tf.Session() as sess:
    print(tf.log(v).eval())
# -

# - the third is multiplication; in the cross-entropy code the two matrices are multiplied with `*`, which is not matrix multiplication but element-wise multiplication
# - matrix multiplication requires the `tf.matmul` function; the example below shows the difference:

# +
v1 = tf.constant([[1.0, 2.0], [3.0, 4.0]])
v2 = tf.constant([[5.0, 6.0], [7.0, 8.0]])

with tf.Session() as sess:
    print('element-wise product:\n', (v1*v2).eval())
    print('matrix product:\n', tf.matmul(v1, v2).eval())
# -

# - `v1 * v2` multiplies corresponding elements, e.g. entry (1,1) of the result is 5 because `1×5 = 5`
# - `tf.matmul` performs matrix multiplication, e.g. entry (1,1) is 19 because `1×5 + 2×7 = 19`
#
# - the three operations above compute the per-class cross entropy `p(x)log q(x)` for every example
# - these steps yield an `n × m` matrix, where n is the number of examples in the batch and m the number of classes
# - according to the cross-entropy formula, the m entries in each row should be summed to get each example's cross entropy
# - and then the n rows should be averaged to get the mean cross entropy over the batch
# - since the number of classes is fixed, averaging over the whole matrix at once does not change the meaning of the result
# - a simple demonstration of `tf.reduce_mean`:

# +
v = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])

with tf.Session() as sess:
    print(tf.reduce_mean(v).eval())
# -

# - Because cross entropy is generally used together with softmax regression,
# TensorFlow wraps the two operations together and provides the function `tf.nn.softmax_cross_entropy_with_logits`
# - the code below implements the cross-entropy loss used after softmax regression

# y is the predicted output, y_ is the ground-truth answer
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y)

# - Regression problems predict concrete numeric values, e.g. house prices or sales volumes.
# - A network solving a regression problem generally has a single output node whose value is the prediction.
# - The most common loss function for regression is the mean squared error (MSE):
#
# $$ MSE(y,y^{'}) = \frac{\sum_{i=1}^{n}(y_i-y_i^{'})^2}{n}$$
#
# - $y_i$ is the correct answer for the i-th example in a batch, and $y_i^{'}$ is the network's prediction
# - Implementing the mean squared error loss in TensorFlow:
#
# ```
# mse = tf.reduce_mean(tf.square(y_ - y))
# ```
#
# here `y_` is the ground truth and `y` the network's output
#
# ### 4.2.2 Custom Loss Functions
#
# - TensorFlow supports optimizing arbitrary custom loss functions
# - The formula below gives a loss with different coefficients for over-prediction and under-prediction:
#
# $$ Loss(y, y^{'}) = \sum_{i=1}^{n}f(y_i,y_i^{'}), \space\space f(x,y)=\left\{
# \begin{aligned}
# a(x-y) & & x> y \\
# b(y-x) & & x≤y \\
# \end{aligned}\right. $$
#
# - $y_i$ is the correct answer for the i-th example in a batch and $y_i^{'}$ is the network's prediction; a and b are constants, e.g. a = 10, b = 1. In TensorFlow:

loss = tf.reduce_sum(tf.where(tf.greater(v1, v2), (v1 - v2) * a, (v2 - v1) * b))

# - this uses `tf.greater` and `tf.where` to implement the selection
# - `tf.greater` takes two tensors, compares their elements one by one, and returns the comparison result
# - when the inputs of `tf.greater` have different shapes, TF applies NumPy-style broadcasting
# - `tf.where` takes three arguments: the first is the selection condition; where the condition is True, `tf.where` picks the value from the second argument, otherwise from the third
# - note: `tf.where` both tests and selects at the element level; usage is shown below:

# +
v1 = tf.constant([1.0, 2.0, 3.0, 4.0])
v2 = tf.constant([4.0, 3.0, 2.0, 1.0])

sess = tf.InteractiveSession()
print(tf.greater(v1, v2).eval())
print(tf.where(tf.greater(v1, v2), v1, v2).eval())
sess.close()
# -

# The simple neural network program below illustrates how the choice of loss function affects the training result.
#
# - two input nodes, one output node, no hidden layer

# +
import tensorflow as tf
from numpy.random import RandomState

'''
1. Define the network's parameters and variables.
'''

batch_size = 8

# two input nodes
x = tf.placeholder(tf.float32, shape=(None, 2), name='x-input')
# regression problems generally have a single output node
y_ = tf.placeholder(tf.float32, shape=(None, 1), name='y-input')

# forward pass of a single-layer network: a simple weighted sum; stddev=1 is the standard deviation
w1 = tf.Variable(tf.random_normal([2, 1], stddev=1, seed=1))
y = tf.matmul(x, w1)

'''
2.
设置自定义的损失函数。 ''' # 定义预测多了和预测少了的 成本 loss_less = 10 # 预测少了 会损失 10块 loss_more = 1 # 预测多了 只损失 1 块 loss = tf.reduce_sum(tf.where(tf.greater(y,y_), (y - y_) * loss_more, (y_ - y) * loss_less)) train_step = tf.train.AdamOptimizer(0.001).minimize(loss) ''' 3. 生成模拟数据集。 ''' # 通过随机数生成一个数据模拟集 128 个样本,每个样本两个特征 rdm = RandomState(1) X = rdm.rand(128, 2) print(X.shape) # print(X) ''' 设置回归的正确值为两个输入的和加上一个随机量,之所以要加上一个随机量是为了 加入不可预测的噪音,否则不同损失函数的意义就不大了,因为不同损失函数都会在能 完全预测正确的时候最低,一般来说噪音是一个均值为 0 的小量,所以这里的 噪音设置为 -0.05 ~ 0.05 的随机数 ''' # Y 正确值 是一个 list (x1, x2) in X 一个样本有两个特征,128 行 2 列 Y = [[x1 + x2 + (rdm.rand()/10.0-0.05)] for (x1, x2) in X] ''' 4.训练模型。 ''' with tf.Session() as sess: init_op = tf.global_variables_initializer() sess.run(init_op) STEPS = 5000 for i in range(STEPS): start = (i*batch_size) % 128 end = (i*batch_size) % 128 + batch_size sess.run(train_step, feed_dict={x: X[start:end], y_: Y[start:end]}) if i % 1000 == 0: print("After %d training step(s), w1 is: " % (i)) print(sess.run(w1), "\n") print("Final w1 is: \n", sess.run(w1)) # + ''' 5. 重新定义损失函数,使得预测多了的损失大,于是模型应该偏向少的方向预测。 ''' loss_less = 1 loss_more = 10 # 预测多了的损失大 loss = tf.reduce_sum(tf.where(tf.greater(y, y_),(y - y_)* loss_more, (y_ - y) * loss_less)) train_step = tf.train.AdamOptimizer(0.001).minimize(loss) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) STEPS = 5000 for i in range(STEPS): start = (i*batch_size) % 128 end = (i*batch_size) % 128 + batch_size sess.run(train_step, feed_dict={x:X[start:end], y_:Y[start:end]}) if i % 1000 == 0: print("After %d training step(s), w1 is: " % (i)) print(sess.run(w1), "\n") print("Final w1 is: \n", sess.run(w1)) # + ''' 6. 
Define the loss function as MSE.
'''
loss = tf.losses.mean_squared_error(y, y_)
train_step = tf.train.AdamOptimizer(0.001).minimize(loss)

with tf.Session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    STEPS = 5000
    for i in range(STEPS):
        start = (i*batch_size) % 128
        end = (i*batch_size) % 128 + batch_size
        sess.run(train_step, feed_dict={x: X[start:end], y_: Y[start:end]})
        if i % 1000 == 0:
            print("After %d training step(s), w1 is: " % (i))
            print(sess.run(w1), "\n")
    print("Final w1 is: \n", sess.run(w1))
# -

# - The first run gives w1 = [[1.019347], [1.0428089]], i.e. the learned function $y = 1.02x_1 + 1.04x_2$ is larger than $x_1 + x_2$: the loss makes under-prediction more expensive, so the model predicts higher to reduce the loss
# - In the second run, with loss_more and loss_less swapped, w1 = [[0.9552581], [0.9813394]]: over-prediction is more expensive, so predicting a bit less lowers the loss
# - In the third run, using MSE as the loss, w1 = [[0.9743756], [1.0243336]], which is closest to the true answer
#
# ### 4.3 Neural Network Optimization Algorithms
#
# This section describes more concretely how **backpropagation (BP)** and **gradient descent** adjust the values of the parameters in a neural network.
#
# - gradient descent is mainly used to optimize the value of a single parameter
# - backpropagation provides an efficient way to apply gradient descent over all parameters, making the loss of the network on the training data as small as possible
# - (much omitted; see the original book)
# - gradient descent is only guaranteed to reach the global optimum when the loss function is convex
# - GD: compute time is too long, too slow
# - SGD: to speed up training, stochastic gradient descent optimizes, at each iteration, the loss on one randomly chosen training example rather than on the full training data. Drawback: a smaller loss on one example does not imply a smaller loss on the whole data set
# - mini-batch GD: a compromise — a small portion of the data forms a batch, with batch-size as its size; this greatly reduces the number of iterations needed to converge
#
# ```
# batch_size = n
#
# # read a small portion of the data at a time as the current data to run backpropagation
# x = tf.placeholder(tf.float32, shape=(batch_size, 2), name='x-input')
# y_ = tf.placeholder(tf.float32, shape=(batch_size, 2), name='y-input')
#
# # define the network structure and the optimization algorithm
# loss = None
# train_step = tf.train.AdamOptimizer(0.001).minimize(loss)
#
# # train the network
# with tf.Session() as sess:
#     # initialize the parameters
#     None
#     # iteratively update the parameters
#     for i in range(STEPS):
#         # prepare batch_size training examples.
#         # shuffling all the training data before sampling generally gives better optimization results
#         current_X, current_Y = ...
#         sess.run(train_step, feed_dict={x: current_X, y_: current_Y})
# ```
#
# ### 4.4 Further Optimization of Neural Networks
#
# This section continues with some problems that may come up while optimizing a neural network, and common ways to address them.
#
# #### 4.4.1 Setting the Learning Rate
#
# How should the learning rate be set?
#
# Setting the gradient descent learning rate with exponential decay lets the model approach a fairly good solution quickly early in training while avoiding large fluctuations late in training.
#
# - the learning rate should be neither too large nor too small; TensorFlow offers a more flexible scheme — exponential decay
# - tf.train.exponential_decay implements an exponentially decaying learning rate
#
# - the exponential_decay function shrinks the learning rate exponentially; it implements the functionality of the following code:
#
# ```
# decayed_learning_rate = learning_rate * decay_rate ^ (global_step / decay_steps)
# ```
#
# - decayed_learning_rate: the learning rate used in each round of optimization
# - learning_rate: the initial learning rate
# - decay_rate: the decay rate
#
# - the staircase parameter of tf.train.exponential_decay selects the decay mode: staircase=False gives smooth continuous decay, staircase=True gives step-wise decay
#
# ![](./images/decay.png)
#
# Code example:
#
# #### Suppose we want to minimize the function $y=x^2$, starting from the point $x_0=5$
#
# #### 1. With a learning rate of 1, x oscillates between 5 and -5.

# +
import tensorflow as tf

# generate the learning rate with the exponential_decay function
# using an exponentially decaying learning rate: passing global_step into the minimize function
# updates the global_step parameter automatically, so the learning rate is updated accordingly
# -

# #### 4.4.2 Overfitting

# #### 4.4.3 Moving Average Models
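The decay schedule described in this section is easy to check in plain Python: the learning rate is `learning_rate * decay_rate ** (global_step / decay_steps)`, and `staircase=True` simply floors the exponent so the rate drops in discrete steps. Below is a framework-free sketch for intuition, not TensorFlow's actual implementation (`exponential_decay` here is a stand-alone re-derivation of the formula).

```python
def exponential_decay(learning_rate, global_step, decay_steps, decay_rate, staircase=False):
    # staircase=True floors global_step / decay_steps, producing a step function
    exponent = global_step // decay_steps if staircase else global_step / decay_steps
    return learning_rate * decay_rate ** exponent

# continuous decay falls smoothly between periods;
# staircase holds the rate constant within each period
smooth = exponential_decay(0.1, 150, 100, 0.96)         # 0.1 * 0.96 ** 1.5
stepped = exponential_decay(0.1, 150, 100, 0.96, True)  # 0.1 * 0.96 ** 1
print(smooth < stepped)  # True
```

Staircase decay is common when `decay_steps` is set to the number of batches per epoch, so every example sees the same learning rate within one pass over the data.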
tf_caicloud/Chapter03/chapter_4_Deep NN.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Opacus - Syft Duet - Data Scientist 🥁 # ## PART 1: Connect to a Remote Duet Server import syft as sy sy.load_lib("opacus") duet = sy.join_duet(loopback=True) # ### <img src="https://github.com/OpenMined/design-assets/raw/master/logos/OM/mark-primary-light.png" alt="he-black-box" width="100"/> Checkpoint 0 : Now STOP and run the Data Owner notebook until Checkpoint 1. class SyNet(sy.Module): def __init__(self, torch_ref): super(SyNet, self).__init__(torch_ref=torch_ref) self.conv1 = self.torch_ref.nn.Conv2d(1, 16, 8, 2, padding=3) self.conv2 = self.torch_ref.nn.Conv2d(16, 32, 4, 2) self.fc1 = self.torch_ref.nn.Linear(32 * 4 * 4, 32) self.fc2 = self.torch_ref.nn.Linear(32, 10) def forward(self, x): F = self.torch_ref.nn.functional # x of shape [B, 1, 28, 28] x = F.relu(self.conv1(x)) # -> [B, 16, 14, 14] x = F.max_pool2d(x, 2, 1) # -> [B, 16, 13, 13] x = F.relu(self.conv2(x)) # -> [B, 32, 5, 5] x = F.max_pool2d(x, 2, 1) # -> [B, 32, 4, 4] x = x.view(-1, 32 * 4 * 4) # -> [B, 512] x = F.relu(self.fc1(x)) # -> [B, 32] x = self.fc2(x) # -> [B, 10] return x # lets import torch and torchvision just as we normally would import torch import torchvision # now we can create the model and pass in our local copy of torch local_model = SyNet(torch) # Next we can get our MNIST Test Set ready using our local copy of torch. 
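As a quick sanity check of the shape comments in `SyNet.forward`, the spatial sizes can be traced with the usual `floor((n + 2*p - k) / s) + 1` convolution/pooling arithmetic. This is a framework-free sketch for intuition; `out_size` is a hypothetical helper, not part of syft or torch.

```python
# Trace the spatial size of a 28x28 MNIST image through SyNet's layers
# using floor((n + 2*padding - kernel) / stride) + 1.
def out_size(n, kernel, stride=1, padding=0):
    return (n + 2 * padding - kernel) // stride + 1

n = 28                    # MNIST height/width
n = out_size(n, 8, 2, 3)  # conv1      -> 14
n = out_size(n, 2, 1)     # max_pool2d -> 13
n = out_size(n, 4, 2)     # conv2      -> 5
n = out_size(n, 2, 1)     # max_pool2d -> 4
print(32 * n * n)         # 512, the input size expected by fc1
```

This matches the `[B, 32, 4, 4]` comment and explains the `32 * 4 * 4` in both the `Linear` layer and the `view` call.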
local_model.modules # + # we need some transforms for the MNIST data set local_transform_1 = torchvision.transforms.ToTensor() # this converts PIL images to Tensors local_transform_2 = torchvision.transforms.Normalize(0.1307, 0.3081) # this normalizes the dataset # compose our transforms local_transforms = torchvision.transforms.Compose([local_transform_1, local_transform_2]) # - # Lets define a few settings which are from the original Opacus MNIST example args # TODO: support secure_rng args = { "batch_size": 64, "test_batch_size": 1024, "epochs": 10, "n_runs": 1, "lr": 0.1, "sigma": 1.0, "max_per_sample_grad_norm": 1.0, "delta": 1e-5, "no_cuda": False, "dry_run": True, "seed": 42, # the meaning of life "log_interval": 10, "save_model": True, "disable_dp": False, } # + from syft.util import get_root_data_path # we will configure the test set here locally since we want to know if our Data Owner's # private training dataset will help us reach new SOTA results for our benchmark test set test_kwargs = { "batch_size": args["test_batch_size"], } test_data = torchvision.datasets.MNIST(str(get_root_data_path()), train=False, download=True, transform=local_transforms) test_loader = torch.utils.data.DataLoader(test_data,**test_kwargs) test_data_length = len(test_loader.dataset) print(test_data_length) # - # Now its time to send the model to our partner’s Duet Server. # Note: You can load normal torch model weights before sending your model. # Try training the model and saving it at the end of the notebook and then coming back and # reloading the weights here, or you can train the same model one using the original script # in `original` dir and load it here as well. # service_auth issues? model = local_model.send(duet) # Lets create an alias for our partner’s torch called `remote_torch` so we can refer to the local torch as `torch` and any operation we want to do remotely as `remote_torch`. Remember, the return values from `remote_torch` are `Pointers`, not the real objects. 
# They mostly act the same when using them with other `Pointers`, but you can't mix them with local torch objects.

remote_torch = duet.torch

# let's ask to see if our Data Owner has CUDA
has_cuda = False
has_cuda_ptr = remote_torch.cuda.is_available()
has_cuda = bool(has_cuda_ptr.get(
    request_block=True,
    reason="To run test and inference locally",
    timeout_secs=5,  # change to something slower
))
print(has_cuda)

# +
use_cuda = not args["no_cuda"] and has_cuda

# now we can set the seed
remote_torch.manual_seed(args["seed"])

device = remote_torch.device("cuda" if use_cuda else "cpu")
print(f"Data Owner device is {device.type.get()}")
# -

# if we have CUDA let's send our model to the GPU
if has_cuda:
    model.cuda(device)
else:
    model.cpu()

# Let's get our params and set up an optimizer, just the same as the PyTorch MNIST example

params = model.parameters()
optimizer = remote_torch.optim.SGD(params, lr=args["lr"], momentum=0)

remote_opacus = duet.opacus

# this is the model we give to Opacus because it's a real nn.ModulePointer on the other side
# and has all the real methods for searching layers and attaching to them
model.real_module

sample_size = 60000
noise_multiplier = 1.0
max_grad_norm = 1.0

privacy_engine_ptr = remote_opacus.privacy_engine.PrivacyEngine(
    model.real_module,
    batch_size=args["batch_size"],
    sample_size=sample_size,
    noise_multiplier=noise_multiplier,
    max_grad_norm=max_grad_norm
)
privacy_engine_ptr.attach(optimizer)

# Next we need a training loop so we can improve our remote model. Since we want to train on remote data, we should first check that the model is remote, since we will be using `remote_torch` in this function. To check whether a model is local or remote, simply use the `.is_local` attribute.
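# Before writing that loop, it may help to see what the privacy engine we just attached does to each optimizer step, conceptually: DP-SGD clips every per-sample gradient to `max_grad_norm` and adds Gaussian noise scaled by `noise_multiplier`. A minimal NumPy sketch of the clip-and-noise step — illustrative only, not Opacus's actual implementation:

```python
import numpy as np

def dp_sgd_step(per_sample_grads, max_grad_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip each per-sample gradient to max_grad_norm, average, then add Gaussian noise."""
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # scale down any gradient whose norm exceeds the clipping bound
        clipped.append(g * min(1.0, max_grad_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # noise standard deviation is proportional to the clipping bound
    sigma = noise_multiplier * max_grad_norm / len(per_sample_grads)
    return mean_grad + rng.normal(0.0, sigma, size=mean_grad.shape)

# two per-sample gradients: the first (norm 5.0) gets clipped, the second (norm 0.5) does not
grads = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]
noisy_grad = dp_sgd_step(grads)
```

# Because the noise is calibrated to the clipping bound, no single sample can move the averaged gradient by more than a bounded amount, which is what lets Opacus account for the (ε, δ) privacy spent.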
def train(model, torch_ref, train_loader, optimizer, epoch, args, train_data_length, privacy_engine_ptr):
    # + 0.5 lets us math.ceil without the import
    train_batches = round((train_data_length / args["batch_size"]) + 0.5)
    print(f"> Running train in {train_batches} batches")
    model.train()
    criterion = torch_ref.nn.CrossEntropyLoss()
    losses = []
    for _batch_idx, data_tuple in enumerate(train_loader):
        data, target = data_tuple[0], data_tuple[1]
        optimizer.zero_grad()
        output = model(data)
        loss = criterion(output, target)
        loss.backward()
        optimizer.step()
        loss_item_ptr = loss.item()
        local_loss = loss_item_ptr.get(
            reason="To evaluate training progress",
            request_block=True,
            timeout_secs=5
        )
        if local_loss is not None:
            losses.append(local_loss)
        # compute the running mean loss before branching so both branches can print it
        loss_mean = float("-inf")
        if len(losses) > 0:
            loss_mean = sum(losses) / len(losses)
        if not args["disable_dp"]:
            epsilon_tuple = privacy_engine_ptr.get_privacy_spent(args["delta"])
            epsilon_ptr = duet.python.Float(epsilon_tuple[0])
            best_alpha_ptr = duet.python.Float(epsilon_tuple[1])
            epsilon = epsilon_ptr.get(
                reason="So we don't go over it",
                request_block=True,
                timeout_secs=5
            )
            best_alpha = best_alpha_ptr.get(
                reason="So we don't go over it",
                request_block=True,
                timeout_secs=5
            )
            if epsilon is None:
                epsilon = float("-inf")
            if best_alpha is None:
                best_alpha = float("-inf")
            print(
                f"Train Epoch: {epoch} \t"
                f"Loss: {loss_mean:.6f} "
                f"(ε = {epsilon:.2f}, δ = {args['delta']}) for α = {best_alpha}"
            )
        else:
            print(f"Train Epoch: {epoch} \t Loss: {loss_mean:.6f}")
        if args["dry_run"]:
            break
        if _batch_idx >= train_batches - 1:
            print("batch_idx >= train_batches, breaking")
            break

# Now we can define a simple test loop, very similar to the original PyTorch MNIST example.

# This function should expect a remote model from our outer epoch loop, so internally we can call `get` to download the weights to do an evaluation on our machine with our local test set.
# Remember, if we have trained on private data, our model will require permission to download, so we should use `request_block=True` and make sure the Data Owner approves our requests. For the rest of this function, we will use the local `torch` as we normally would.

def test_local(model, torch_ref, test_loader, test_data_length):
    # + 0.5 lets us math.ceil without the import
    test_batches = round((test_data_length / args["test_batch_size"]) + 0.5)
    print(f"> Running test_local in {test_batches} batches")

    # download the remote model
    if not model.is_local:
        local_model = model.get(
            request_block=True,
            reason="test evaluation",
            timeout_secs=5
        )
    else:
        local_model = model

    local_model.eval()
    criterion = torch_ref.nn.CrossEntropyLoss()
    test_loss = 0
    correct = 0

    with torch.no_grad():
        for batch_idx, (data, target) in enumerate(test_loader):
            output = local_model(data)
            test_loss += criterion(output, target).item()  # sum up batch loss
            pred = output.argmax(
                dim=1, keepdim=True
            )  # get the index of the max log-probability
            correct += pred.eq(target.view_as(pred)).sum().item()
            if args["dry_run"]:
                break
            if batch_idx >= test_batches - 1:
                print("batch_idx >= test_batches, breaking")
                break

    test_loss /= len(test_loader.dataset)

    print(
        "\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n".format(
            test_loss,
            correct,
            len(test_loader.dataset),
            100.0 * correct / len(test_loader.dataset),
        )
    )
    return correct / len(test_loader.dataset)

# Finally, just for demonstration purposes, we will get the built-in MNIST dataset, but on the Data Owner's side, from `remote_torchvision`.
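# An aside on the `round(x + 0.5)` batching trick used in `train` and `test_local` above: Python 3 rounds exact halves to the nearest even number (banker's rounding), so `round((n / b) + 0.5)` matches `math.ceil(n / b)` whenever the division is inexact, but overshoots by one batch when `n` is an exact odd multiple of `b`. That is harmless here — the loop simply breaks one (empty) iteration later — but it is worth knowing the trick is not a perfect ceiling:

```python
import math

def batches_via_round(n, batch_size):
    # the "+ 0.5" trick used above
    return round((n / batch_size) + 0.5)

def batches_via_ceil(n, batch_size):
    return math.ceil(n / batch_size)

print(batches_via_round(60000, 64), batches_via_ceil(60000, 64))    # 938 938 -- they agree
print(batches_via_round(7000, 1000), batches_via_ceil(7000, 1000))  # 8 7 -- the trick overcounts
```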
# +
# we need some transforms for the MNIST data set
remote_torchvision = duet.torchvision

transform_1 = remote_torchvision.transforms.ToTensor()  # this converts PIL images to Tensors
transform_2 = remote_torchvision.transforms.Normalize(0.1307, 0.3081)  # this normalizes the dataset

remote_list = duet.python.List()  # create a remote list to add the transforms to
remote_list.append(transform_1)
remote_list.append(transform_2)

# compose our transforms
transforms = remote_torchvision.transforms.Compose(remote_list)

# The DO has kindly let us initialise a DataLoader for their training set
train_kwargs = {
    "batch_size": args["batch_size"],
}

train_data_ptr = remote_torchvision.datasets.MNIST(str(get_root_data_path()), train=True, download=True, transform=transforms)
train_loader_ptr = remote_torch.utils.data.DataLoader(train_data_ptr, **train_kwargs)
# -

train_data_length = 60000

# ## PART 3: Training

# +
import time

# args["dry_run"] = False  # uncomment to do a full train

print("Starting Training")
for epoch in range(1, args["epochs"] + 1):
    epoch_start = time.time()
    print(f"Epoch: {epoch}")
    # remote training on model with remote_torch
    train(model, remote_torch, train_loader_ptr, optimizer, epoch, args, train_data_length, privacy_engine_ptr)
    # local testing on model with local torch
    test_local(model, torch, test_loader, test_data_length)
    epoch_end = time.time()
    print(f"Epoch time: {int(epoch_end - epoch_start)} seconds")
    if args["dry_run"]:
        break
print("Finished Training")
# -

if args["save_model"]:
    model.get(
        request_block=True,
        reason="test evaluation",
        timeout_secs=5
    ).save("./duet_mnist.pt")

# ## PART 4: Inference

# A model would be no fun without the ability to do inference. The following code shows some examples on how we can do this either remotely or locally.
# +
import matplotlib.pyplot as plt

def draw_image_and_label(image, label):
    fig = plt.figure()
    plt.tight_layout()
    plt.imshow(image, cmap="gray", interpolation="none")
    plt.title("Ground Truth: {}".format(label))

def prep_for_inference(image):
    image_batch = image.unsqueeze(0).unsqueeze(0)
    image_batch = image_batch * 1.0
    return image_batch
# -

def classify_local(image, model):
    if not model.is_local:
        print("model is remote try .get()")
        return -1, torch.Tensor([-1])
    image_tensor = torch.Tensor(prep_for_inference(image))
    output = model(image_tensor)
    preds = torch.exp(output)
    local_y = preds.squeeze()
    pos = local_y == max(local_y)
    index = torch.nonzero(pos, as_tuple=False)
    class_num = index.squeeze()
    return class_num, local_y

def classify_remote(image, model):
    if model.is_local:
        print("model is local try .send()")
        return -1, remote_torch.Tensor([-1])
    image_tensor_ptr = remote_torch.Tensor(prep_for_inference(image))
    output = model(image_tensor_ptr)
    preds = remote_torch.exp(output)
    preds_result = preds.get(
        request_block=True,
        reason="To see a real world example of inference",
        timeout_secs=10
    )
    if preds_result is None:
        print("No permission to do inference, request again")
        return -1, torch.Tensor([-1])
    else:
        # now we have the local tensor we can use local torch
        local_y = torch.Tensor(preds_result).squeeze()
        pos = local_y == max(local_y)
        index = torch.nonzero(pos, as_tuple=False)
        class_num = index.squeeze()
        return class_num, local_y

# +
# let's grab something from the test set
import random

total_images = test_data_length  # 10000
index = random.randint(0, total_images - 1)  # randint is inclusive at both ends
print("Random Test Image:", index)
count = 0
batch = index // test_kwargs["batch_size"]
batch_index = index % test_kwargs["batch_size"]  # position of the image within its batch
for tensor_ptr in test_loader:
    data, target = tensor_ptr[0], tensor_ptr[1]
    if batch == count:
        break
    count += 1

print(f"Displaying {index} == {batch_index} in Batch: {batch}/{len(test_loader)}")
if batch_index >= len(data):  # guard against a short final batch
    batch_index = 0
image_1 = data[batch_index].reshape((28, 28))
label_1 = target[batch_index]
draw_image_and_label(image_1, label_1)
# -

# classify remote
class_num, preds = classify_remote(image_1, model)
print(f"Prediction: {class_num} Ground Truth: {label_1}")
print(preds)

local_model = model.get(
    request_block=True,
    reason="To run test and inference locally",
    timeout_secs=5,
)

# classify local
class_num, preds = classify_local(image_1, local_model)
print(f"Prediction: {class_num} Ground Truth: {label_1}")
print(preds)

# +
# We can also download an image from the web and run inference on that
from PIL import Image, ImageEnhance
import PIL.ImageOps
import numpy as np
import os

def classify_url_image(image_url):
    filename = os.path.basename(image_url)
    os.system(f'curl -O {image_url}')
    im = Image.open(filename)
    im = PIL.ImageOps.invert(im)
    im = im.resize((28, 28), Image.ANTIALIAS)  # the model expects 28x28 inputs
    im = im.convert('LA')
    enhancer = ImageEnhance.Brightness(im)
    im = enhancer.enhance(3)
    print(im.size)
    fig = plt.figure()
    plt.tight_layout()
    plt.imshow(im, cmap="gray", interpolation="none")

    # classify the downloaded image locally (not the test-set image_1)
    image_tensor = torch.Tensor(np.array(im.convert("L"), dtype="float32") / 255.0)
    class_num, preds = classify_local(image_tensor, local_model)
    print(f"Prediction: {class_num}")
    print(preds)

# +
# image_url = "https://raw.githubusercontent.com/kensanata/numbers/master/0018_CHXX/0/number-100.png"
# classify_url_image(image_url)
# -

# ### <img src="https://github.com/OpenMined/design-assets/raw/master/logos/OM/mark-primary-light.png" alt="he-black-box" width="100"/> Checkpoint 1 : Now STOP and run the Data Owner notebook until the next checkpoint.
examples/differential-privacy/opacus/Opacus_Syft_Data_Scientist.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 2
#     language: python
#     name: python2
# ---

# + cellView="form" colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="6uNrFWq5BRba"
#@title
# Copyright 2018 Google LLC.

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at

#     https://www.apache.org/licenses/LICENSE-2.0

# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# -

# # Create Your Own Visualizations!

# Instructions:
# 1. Install tensor2tensor and train up a Transformer model following the instructions in the repository https://github.com/tensorflow/tensor2tensor.
# 2. Update cell 3 to point to your checkpoint; it is currently set up to read from the default checkpoint location that would be created by following the instructions above.
# 3. If you used custom hyperparameters then update cell 4.
# 4. Run the notebook!

# +
import os

import tensorflow as tf

from tensor2tensor import problems
from tensor2tensor.bin import t2t_decoder  # To register the hparams set
from tensor2tensor.utils import registry
from tensor2tensor.utils import trainer_lib
from tensor2tensor.visualization import attention
from tensor2tensor.visualization import visualization
# -

# + language="javascript"
# require.config({
#   paths: {
#     d3: '//cdnjs.cloudflare.com/ajax/libs/d3/3.4.8/d3.min'
#   }
# });
# -

# ## HParams

# PUT THE MODEL YOU WANT TO LOAD HERE!
CHECKPOINT = os.path.expanduser('~/t2t_train/translate_ende_wmt32k/transformer-transformer_base_single_gpu')

# HParams
problem_name = 'translate_ende_wmt32k'
data_dir = os.path.expanduser('~/t2t_data/')
model_name = "transformer"
hparams_set = "transformer_base_single_gpu"

# ## Visualization

visualizer = visualization.AttentionVisualizer(hparams_set, model_name, data_dir, problem_name, beam_size=1)

# +
tf.Variable(0, dtype=tf.int64, trainable=False, name='global_step')

sess = tf.train.MonitoredTrainingSession(
    checkpoint_dir=CHECKPOINT,
    save_summaries_secs=0,
)
# -

input_sentence = "I have two dogs."

output_string, inp_text, out_text, att_mats = visualizer.get_vis_data_from_string(sess, input_sentence)
print(output_string)

# ## Interpreting the Visualizations

# - The layers drop-down allows you to view the different Transformer layers, 0-indexed of course.
#     - Tip: The first layer, the last layer, and the second-to-last layer are usually the most interpretable.
# - The attention drop-down allows you to select different pairs of encoder-decoder attentions:
#     - All: Shows all types of attention together. NOTE: heads of the same color in the decoder self-attention and the decoder-encoder attention are unrelated, since they do not share parameters.
#     - Input - Input: Shows only the encoder self-attention.
#     - Input - Output: Shows the decoder's attention on the encoder. NOTE: Every decoder layer attends to the final layer of the encoder, so the visualization shows the attention on the final encoder layer regardless of which layer is selected in the drop-down.
#     - Output - Output: Shows only the decoder self-attention. NOTE: The visualization can be slightly misleading at the first layer, since the text shown is the target of the decoder; the input to the decoder at layer 0 is this text with a GO symbol prepended.
# - The colored squares represent the different attention heads.
#     - You can hide or show a given head by clicking on its color.
#     - Double-clicking a color hides all the other heads; double-clicking again when it is the only head showing brings them all back.
# - You can hover over a word to see the individual attention weights for just that position.
#     - Hovering over the words on the left shows what that position attended to.
#     - Hovering over the words on the right shows which positions attended to it.

attention.show(inp_text, out_text, *att_mats)
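# For reference, each head's attention weights being plotted above are scaled dot products passed through a softmax, `softmax(Q Kᵀ / sqrt(d_k))`, giving one probability row per query position. A minimal NumPy sketch of that computation (illustrative only — not the tensor2tensor implementation):

```python
import numpy as np

def attention_weights(Q, K):
    """Scaled dot-product attention weights: softmax(Q K^T / sqrt(d_k)), row per query."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)  # subtract row max for numerical stability
    w = np.exp(scores)
    return w / w.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))  # 4 query positions, dimension 8
K = rng.normal(size=(6, 8))  # 6 key positions
W = attention_weights(Q, K)  # shape (4, 6); each row sums to 1
```

# Each row of `W` is the distribution over key positions that the visualization renders as colored line weights for one head.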
tensor2tensor/visualization/TransformerVisualization.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Kaggle San Francisco Crime Classification # ## Berkeley MIDS W207 Final Project: <NAME>, <NAME>, <NAME>, <NAME> # # # ### Environment and Data # + # Import relevant libraries: import time import numpy as np import pandas as pd from sklearn.neighbors import KNeighborsClassifier from sklearn import preprocessing from sklearn.preprocessing import MinMaxScaler from sklearn.preprocessing import StandardScaler from sklearn.naive_bayes import BernoulliNB from sklearn.naive_bayes import MultinomialNB from sklearn.naive_bayes import GaussianNB from sklearn.grid_search import GridSearchCV from sklearn.metrics import classification_report from sklearn.metrics import log_loss from sklearn.linear_model import LogisticRegression from sklearn import svm from sklearn.neural_network import MLPClassifier from sklearn.ensemble import RandomForestClassifier from sklearn.tree import DecisionTreeClassifier # Import Meta-estimators from sklearn.ensemble import AdaBoostClassifier from sklearn.ensemble import BaggingClassifier from sklearn.ensemble import GradientBoostingClassifier # Import Calibration tools from sklearn.calibration import CalibratedClassifierCV # Set random seed and format print output: np.random.seed(0) np.set_printoptions(precision=3) # - # #### DDL to construct table for SQL transformations: # # ```sql # CREATE TABLE kaggle_sf_crime ( # dates TIMESTAMP, # category VARCHAR, # descript VARCHAR, # dayofweek VARCHAR, # pd_district VARCHAR, # resolution VARCHAR, # addr VARCHAR, # X FLOAT, # Y FLOAT); # ``` # #### Getting training data into a locally hosted PostgreSQL database: # ```sql # \copy kaggle_sf_crime FROM '/Users/Goodgame/Desktop/MIDS/207/final/sf_crime_train.csv' DELIMITER ',' CSV HEADER; # ``` # # #### SQL Query used for transformations: # # 
```sql # SELECT # category, # date_part('hour', dates) AS hour_of_day, # CASE # WHEN dayofweek = 'Monday' then 1 # WHEN dayofweek = 'Tuesday' THEN 2 # WHEN dayofweek = 'Wednesday' THEN 3 # WHEN dayofweek = 'Thursday' THEN 4 # WHEN dayofweek = 'Friday' THEN 5 # WHEN dayofweek = 'Saturday' THEN 6 # WHEN dayofweek = 'Sunday' THEN 7 # END AS dayofweek_numeric, # X, # Y, # CASE # WHEN pd_district = 'BAYVIEW' THEN 1 # ELSE 0 # END AS bayview_binary, # CASE # WHEN pd_district = 'INGLESIDE' THEN 1 # ELSE 0 # END AS ingleside_binary, # CASE # WHEN pd_district = 'NORTHERN' THEN 1 # ELSE 0 # END AS northern_binary, # CASE # WHEN pd_district = 'CENTRAL' THEN 1 # ELSE 0 # END AS central_binary, # CASE # WHEN pd_district = 'BAYVIEW' THEN 1 # ELSE 0 # END AS pd_bayview_binary, # CASE # WHEN pd_district = 'MISSION' THEN 1 # ELSE 0 # END AS mission_binary, # CASE # WHEN pd_district = 'SOUTHERN' THEN 1 # ELSE 0 # END AS southern_binary, # CASE # WHEN pd_district = 'TENDERLOIN' THEN 1 # ELSE 0 # END AS tenderloin_binary, # CASE # WHEN pd_district = 'PARK' THEN 1 # ELSE 0 # END AS park_binary, # CASE # WHEN pd_district = 'RICHMOND' THEN 1 # ELSE 0 # END AS richmond_binary, # CASE # WHEN pd_district = 'TARAVAL' THEN 1 # ELSE 0 # END AS taraval_binary # FROM kaggle_sf_crime; # ``` # #### Loading the data, version 2, with weather features to improve performance: (Negated with hashtags for now, as will cause file dependency issues if run locally for everyone. Will be run by Isabell in final notebook with correct files she needs) # # We seek to add features to our models that will improve performance with respect to out desired performance metric. There is evidence that there is a correlation between weather patterns and crime, with some experts even arguing for a causal relationship between weather and crime [1]. More specifically, a 2013 paper published in Science showed that higher temperatures and extreme rainfall led to large increases in conflict. 
In the setting of strong evidence that weather influences crime, we see it as a candidate for additional features to improve the performance of our classifiers. Weather data was gathered from (insert source). Certain features from this data set were incorporated into the original crime data set in order to add features that were hypothesizzed to improve performance. These features included (insert what we eventually include). # + #data_path = "./data/train_transformed.csv" #df = pd.read_csv(data_path, header=0) #x_data = df.drop('category', 1) #y = df.category.as_matrix() ########## Adding the date back into the data #import csv #import time #import calendar #data_path = "./data/train.csv" #dataCSV = open(data_path, 'rt') #csvData = list(csv.reader(dataCSV)) #csvFields = csvData[0] #['Dates', 'Category', 'Descript', 'DayOfWeek', 'PdDistrict', 'Resolution', 'Address', 'X', 'Y'] #allData = csvData[1:] #dataCSV.close() #df2 = pd.DataFrame(allData) #df2.columns = csvFields #dates = df2['Dates'] #dates = dates.apply(time.strptime, args=("%Y-%m-%d %H:%M:%S",)) #dates = dates.apply(calendar.timegm) #print(dates.head()) #x_data['secondsFromEpoch'] = dates #colnames = x_data.columns.tolist() #colnames = colnames[-1:] + colnames[:-1] #x_data = x_data[colnames] ########## ########## Adding the weather data into the original crime data #weatherData1 = "./data/1027175.csv" #weatherData2 = "./data/1027176.csv" #dataCSV = open(weatherData1, 'rt') #csvData = list(csv.reader(dataCSV)) #csvFields = csvData[0] #['Dates', 'Category', 'Descript', 'DayOfWeek', 'PdDistrict', 'Resolution', 'Address', 'X', 'Y'] #allWeatherData1 = csvData[1:] #dataCSV.close() #dataCSV = open(weatherData2, 'rt') #csvData = list(csv.reader(dataCSV)) #csvFields = csvData[0] #['Dates', 'Category', 'Descript', 'DayOfWeek', 'PdDistrict', 'Resolution', 'Address', 'X', 'Y'] #allWeatherData2 = csvData[1:] #dataCSV.close() #weatherDF1 = pd.DataFrame(allWeatherData1) #weatherDF1.columns = csvFields #dates1 = 
weatherDF1['DATE'] #sunrise1 = weatherDF1['DAILYSunrise'] #sunset1 = weatherDF1['DAILYSunset'] #weatherDF2 = pd.DataFrame(allWeatherData2) #weatherDF2.columns = csvFields #dates2 = weatherDF2['DATE'] #sunrise2 = weatherDF2['DAILYSunrise'] #sunset2 = weatherDF2['DAILYSunset'] #functions for processing the sunrise and sunset times of each day #def get_hour_and_minute(milTime): # hour = int(milTime[:-2]) # minute = int(milTime[-2:]) # return [hour, minute] #def get_date_only(date): # return time.struct_time(tuple([date[0], date[1], date[2], 0, 0, 0, date[6], date[7], date[8]])) #def structure_sun_time(timeSeries, dateSeries): # sunTimes = timeSeries.copy() # for index in range(len(dateSeries)): # sunTimes[index] = time.struct_time(tuple([dateSeries[index][0], dateSeries[index][1], dateSeries[index][2], timeSeries[index][0], timeSeries[index][1], dateSeries[index][5], dateSeries[index][6], dateSeries[index][7], dateSeries[index][8]])) # return sunTimes #dates1 = dates1.apply(time.strptime, args=("%Y-%m-%d %H:%M",)) #sunrise1 = sunrise1.apply(get_hour_and_minute) #sunrise1 = structure_sun_time(sunrise1, dates1) #sunrise1 = sunrise1.apply(calendar.timegm) #sunset1 = sunset1.apply(get_hour_and_minute) #sunset1 = structure_sun_time(sunset1, dates1) #sunset1 = sunset1.apply(calendar.timegm) #dates1 = dates1.apply(calendar.timegm) #dates2 = dates2.apply(time.strptime, args=("%Y-%m-%d %H:%M",)) #sunrise2 = sunrise2.apply(get_hour_and_minute) #sunrise2 = structure_sun_time(sunrise2, dates2) #sunrise2 = sunrise2.apply(calendar.timegm) #sunset2 = sunset2.apply(get_hour_and_minute) #sunset2 = structure_sun_time(sunset2, dates2) #sunset2 = sunset2.apply(calendar.timegm) #dates2 = dates2.apply(calendar.timegm) #weatherDF1['DATE'] = dates1 #weatherDF1['DAILYSunrise'] = sunrise1 #weatherDF1['DAILYSunset'] = sunset1 #weatherDF2['DATE'] = dates2 #weatherDF2['DAILYSunrise'] = sunrise2 #weatherDF2['DAILYSunset'] = sunset2 #weatherDF = 
pd.concat([weatherDF1,weatherDF2[32:]],ignore_index=True) # Starting off with some of the easier features to work with-- more to come here . . . still in beta #weatherMetrics = weatherDF[['DATE','HOURLYDRYBULBTEMPF','HOURLYRelativeHumidity', 'HOURLYWindSpeed', \ # 'HOURLYSeaLevelPressure', 'HOURLYVISIBILITY', 'DAILYSunrise', 'DAILYSunset']] #weatherMetrics = weatherMetrics.convert_objects(convert_numeric=True) #weatherDates = weatherMetrics['DATE'] #'DATE','HOURLYDRYBULBTEMPF','HOURLYRelativeHumidity', 'HOURLYWindSpeed', #'HOURLYSeaLevelPressure', 'HOURLYVISIBILITY' #timeWindow = 10800 #3 hours #hourlyDryBulbTemp = [] #hourlyRelativeHumidity = [] #hourlyWindSpeed = [] #hourlySeaLevelPressure = [] #hourlyVisibility = [] #dailySunrise = [] #dailySunset = [] #daylight = [] #test = 0 #for timePoint in dates:#dates is the epoch time from the kaggle data # relevantWeather = weatherMetrics[(weatherDates <= timePoint) & (weatherDates > timePoint - timeWindow)] # hourlyDryBulbTemp.append(relevantWeather['HOURLYDRYBULBTEMPF'].mean()) # hourlyRelativeHumidity.append(relevantWeather['HOURLYRelativeHumidity'].mean()) # hourlyWindSpeed.append(relevantWeather['HOURLYWindSpeed'].mean()) # hourlySeaLevelPressure.append(relevantWeather['HOURLYSeaLevelPressure'].mean()) # hourlyVisibility.append(relevantWeather['HOURLYVISIBILITY'].mean()) # dailySunrise.append(relevantWeather['DAILYSunrise'].iloc[-1]) # dailySunset.append(relevantWeather['DAILYSunset'].iloc[-1]) # daylight.append(1.0*((timePoint >= relevantWeather['DAILYSunrise'].iloc[-1]) and (timePoint < relevantWeather['DAILYSunset'].iloc[-1]))) #if timePoint < relevantWeather['DAILYSunset'][-1]: #daylight.append(1) #else: #daylight.append(0) # if test%100000 == 0: # print(relevantWeather) # test += 1 #hourlyDryBulbTemp = pd.Series.from_array(np.array(hourlyDryBulbTemp)) #hourlyRelativeHumidity = pd.Series.from_array(np.array(hourlyRelativeHumidity)) #hourlyWindSpeed = pd.Series.from_array(np.array(hourlyWindSpeed)) 
#hourlySeaLevelPressure = pd.Series.from_array(np.array(hourlySeaLevelPressure)) #hourlyVisibility = pd.Series.from_array(np.array(hourlyVisibility)) #dailySunrise = pd.Series.from_array(np.array(dailySunrise)) #dailySunset = pd.Series.from_array(np.array(dailySunset)) #daylight = pd.Series.from_array(np.array(daylight)) #x_data['HOURLYDRYBULBTEMPF'] = hourlyDryBulbTemp #x_data['HOURLYRelativeHumidity'] = hourlyRelativeHumidity #x_data['HOURLYWindSpeed'] = hourlyWindSpeed #x_data['HOURLYSeaLevelPressure'] = hourlySeaLevelPressure #x_data['HOURLYVISIBILITY'] = hourlyVisibility #x_data['DAILYSunrise'] = dailySunrise #x_data['DAILYSunset'] = dailySunset #x_data['Daylight'] = daylight #x_data.to_csv(path_or_buf="C:/MIDS/W207 final project/x_data.csv") ########## # Impute missing values with mean values: #x_complete = x_data.fillna(x_data.mean()) #X_raw = x_complete.as_matrix() # Scale the data between 0 and 1: #X = MinMaxScaler().fit_transform(X_raw) # Shuffle data to remove any underlying pattern that may exist: #shuffle = np.random.permutation(np.arange(X.shape[0])) #X, y = X[shuffle], y[shuffle] # Separate training, dev, and test data: #test_data, test_labels = X[800000:], y[800000:] #dev_data, dev_labels = X[700000:800000], y[700000:800000] #train_data, train_labels = X[:700000], y[:700000] #mini_train_data, mini_train_labels = X[:75000], y[:75000] #mini_dev_data, mini_dev_labels = X[75000:100000], y[75000:100000] #labels_set = set(mini_dev_labels) #print(labels_set) #print(len(labels_set)) #print(train_data[:10]) # - # ### Local, individual load of updated data set (with weather data integrated) into training, development, and test subsets. # # + # Data path to your local copy of Sam's "train_transformed.csv", which was produced by ?separate Python script? 
data_path_for_labels_only = "/Users/Bryan/Desktop/UC_Berkeley_MIDS_files/Courses/W207_Intro_To_Machine_Learning/Final_Project/sf_crime-master/data/train_transformed.csv" df = pd.read_csv(data_path_for_labels_only, header=0) y = df.category.as_matrix() # Data path to your local copy of Kalvin's "x_data.csv", which was produced by the negated cell above data_path = "/Users/Bryan/Desktop/UC_Berkeley_MIDS_files/Courses/W207_Intro_To_Machine_Learning/Final_Project/x_data_08_15.csv" df = pd.read_csv(data_path, header=0) # Impute missing values with mean values: x_complete = df.fillna(df.mean()) X_raw = x_complete.as_matrix() # Scale the data between 0 and 1: X = MinMaxScaler().fit_transform(X_raw) # Shuffle data to remove any underlying pattern that may exist. Must re-run random seed step each time: np.random.seed(0) shuffle = np.random.permutation(np.arange(X.shape[0])) X, y = X[shuffle], y[shuffle] # Due to difficulties with log loss and set(y_pred) needing to match set(labels), we will remove the extremely rare # crimes from the data for quality issues. X_minus_trea = X[np.where(y != 'TREA')] y_minus_trea = y[np.where(y != 'TREA')] X_final = X_minus_trea[np.where(y_minus_trea != 'PORNOGRAPHY/OBSCENE MAT')] y_final = y_minus_trea[np.where(y_minus_trea != 'PORNOGRAPHY/OBSCENE MAT')] # Separate training, dev, and test data: test_data, test_labels = X_final[800000:], y_final[800000:] dev_data, dev_labels = X_final[700000:800000], y_final[700000:800000] train_data, train_labels = X_final[100000:700000], y_final[100000:700000] calibrate_data, calibrate_labels = X_final[:100000], y_final[:100000] # Create mini versions of the above sets mini_train_data, mini_train_labels = X_final[:20000], y_final[:20000] mini_calibrate_data, mini_calibrate_labels = X_final[19000:28000], y_final[19000:28000] mini_dev_data, mini_dev_labels = X_final[49000:60000], y_final[49000:60000] # Create list of the crime type labels. 
This will act as the "labels" parameter for the log loss functions that follow crime_labels = list(set(y_final)) crime_labels_mini_train = list(set(mini_train_labels)) crime_labels_mini_dev = list(set(mini_dev_labels)) crime_labels_mini_calibrate = list(set(mini_calibrate_labels)) print(len(crime_labels), len(crime_labels_mini_train), len(crime_labels_mini_dev),len(crime_labels_mini_calibrate)) #print(len(train_data),len(train_labels)) #print(len(dev_data),len(dev_labels)) #print(len(mini_train_data),len(mini_train_labels)) #print(len(mini_dev_data),len(mini_dev_labels)) #print(len(test_data),len(test_labels)) #print(len(mini_calibrate_data),len(mini_calibrate_labels)) #print(len(calibrate_data),len(calibrate_labels)) # - # ### Sarah's School data that we may still get to work as features: (Negated with hashtags for now, as will cause file dependency issues if run locally for everyone. Will be run by Isabell in final notebook with correct files she needs) # + ### Read in zip code data #data_path_zip = "./data/2016_zips.csv" #zips = pd.read_csv(data_path_zip, header=0, sep ='\t', usecols = [0,5,6], names = ["GEOID", "INTPTLAT", "INTPTLONG"], dtype ={'GEOID': int, 'INTPTLAT': float, 'INTPTLONG': float}) #sf_zips = zips[(zips['GEOID'] > 94000) & (zips['GEOID'] < 94189)] ### Mapping longitude/latitude to zipcodes #def dist(lat1, long1, lat2, long2): # return np.sqrt((lat1-lat2)**2+(long1-long2)**2) # return abs(lat1-lat2)+abs(long1-long2) #def find_zipcode(lat, long): # distances = sf_zips.apply(lambda row: dist(lat, long, row["INTPTLAT"], row["INTPTLONG"]), axis=1) # return sf_zips.loc[distances.idxmin(), "GEOID"] #x_data['zipcode'] = 0 #for i in range(0, 1): # x_data['zipcode'][i] = x_data.apply(lambda row: find_zipcode(row['x'], row['y']), axis=1) #x_data['zipcode']= x_data.apply(lambda row: find_zipcode(row['x'], row['y']), axis=1) ### Read in school data #data_path_schools = "./data/pubschls.csv" #schools = pd.read_csv(data_path_schools,header=0, sep ='\t', 
usecols = ["CDSCode","StatusType", "School", "EILCode", "EILName", "Zip", "Latitude", "Longitude"], dtype ={'CDSCode': str, 'StatusType': str, 'School': str, 'EILCode': str,'EILName': str,'Zip': str, 'Latitude': float, 'Longitude': float}) #schools = schools[(schools["StatusType"] == 'Active')] ### Find the closest school #def dist(lat1, long1, lat2, long2): # return np.sqrt((lat1-lat2)**2+(long1-long2)**2) #def find_closest_school(lat, long): # distances = schools.apply(lambda row: dist(lat, long, row["Latitude"], row["Longitude"]), axis=1) # return min(distances) #x_data['closest_school'] = x_data_sub.apply(lambda row: find_closest_school(row['y'], row['x']), axis=1) # - # ### Formatting to meet Kaggle submission standards: (Negated with hashtags for now, as will cause file dependency issues if run locally for everyone. Will be run by Isabell in final notebook with correct files she needs) # + # The Kaggle submission format requires listing the ID of each example. # This is to remember the order of the IDs after shuffling #allIDs = np.array(list(df.axes[0])) #allIDs = allIDs[shuffle] #testIDs = allIDs[800000:] #devIDs = allIDs[700000:800000] #trainIDs = allIDs[:700000] # Extract the column names for the required submission format #sampleSubmission_path = "./data/sampleSubmission.csv" #sampleDF = pd.read_csv(sampleSubmission_path) #allColumns = list(sampleDF.columns) #featureColumns = allColumns[1:] # Extracting the test data for a baseline submission #real_test_path = "./data/test_transformed.csv" #testDF = pd.read_csv(real_test_path, header=0) #real_test_data = testDF #test_complete = real_test_data.fillna(real_test_data.mean()) #Test_raw = test_complete.as_matrix() #TestData = MinMaxScaler().fit_transform(Test_raw) # Here we remember the ID of each test data point, in case we ever decide to shuffle the test data for some reason #testIDs = list(testDF.axes[0]) # - # #### Generate baseline prediction probabilities from MNB classifier and store in a .csv file 
(Negated with hashtags for now, as will cause file dependency issues if run locally for everyone. Will be run by Isabell in final notebook with correct files she needs) # + # Generate a baseline MNB classifier and make it return prediction probabilities for the actual test data #def MNB(): # mnb = MultinomialNB(alpha = 0.0000001) # mnb.fit(train_data, train_labels) # print("\n\nMultinomialNB accuracy on dev data:", mnb.score(dev_data, dev_labels)) # return mnb.predict_proba(dev_data) #MNB() #baselinePredictionProbabilities = MNB() # Place the resulting prediction probabilities in a .csv file in the required format # First, turn the prediction probabilties into a data frame #resultDF = pd.DataFrame(baselinePredictionProbabilities,columns=featureColumns) # Add the IDs as a final column #resultDF.loc[:,'Id'] = pd.Series(testIDs,index=resultDF.index) # Make the 'Id' column the first column #colnames = resultDF.columns.tolist() #colnames = colnames[-1:] + colnames[:-1] #resultDF = resultDF[colnames] # Output to a .csv file # resultDF.to_csv('result.csv',index=False) # - # *Note: the code above will shuffle data differently every time it's run, so model accuracies will vary accordingly.* ## Data sub-setting quality check-point print(train_data[:1]) print(train_labels[:1]) # + # Modeling quality check-point with MNB--fast model def MNB(): mnb = MultinomialNB(alpha = 0.0000001) mnb.fit(train_data, train_labels) print("\n\nMultinomialNB accuracy on dev data:", mnb.score(dev_data, dev_labels)) MNB() # - # ### Defining Performance Criteria # # As determined by the Kaggle submission guidelines, the performance criteria metric for the San Francisco Crime Classification competition is Multi-class Logarithmic Loss (also known as cross-entropy). There are various other performance metrics that are appropriate for different domains: accuracy, F-score, Lift, ROC Area, average precision, precision/recall break-even point, and squared error. 
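# Since multi-class log loss is the headline metric here, a tiny hand-rolled version helps make it concrete. This is a sketch of the same quantity sklearn's `log_loss` computes (up to its exact clipping/normalization details); the toy labels and probabilities below are made up for illustration.

```python
import math

def multiclass_log_loss(y_true, probs, labels):
    """Mean negative log of the probability assigned to the true class."""
    eps = 1e-15
    total = 0.0
    for y, row in zip(y_true, probs):
        p = row[labels.index(y)] / sum(row)   # normalize the row to sum to 1
        p = min(max(p, eps), 1 - eps)         # clip away exact 0/1 so log is finite
        total -= math.log(p)
    return total / len(y_true)

toy_labels = ["ASSAULT", "LARCENY/THEFT", "VANDALISM"]   # toy stand-ins, not all 39 classes
toy_probs = [[0.7, 0.2, 0.1],
             [0.1, 0.8, 0.1]]
print(multiclass_log_loss(["ASSAULT", "LARCENY/THEFT"], toy_probs, toy_labels))
```

# A confident correct prediction contributes almost nothing, while a confident wrong one is punished hard; this is why pushing probabilities toward 0/1 (e.g. with extreme alpha values) can hurt this metric even when accuracy improves.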
#
# (Describe each performance metric and a domain in which it is preferred. Give Pros/Cons if able)
#
# - Multi-class Log Loss:
#
# - Accuracy:
#
# - F-score:
#
# - Lift:
#
# - ROC Area:
#
# - Average precision:
#
# - Precision/Recall break-even point:
#
# - Squared error:
#
#
# ### Model Prototyping
# We will start our classifier and feature engineering process by looking at the performance of various classifiers with default parameter settings in predicting labels on the mini_dev_data:

# +
def model_prototype(train_data, train_labels, eval_data, eval_labels):
    knn = KNeighborsClassifier(n_neighbors=5).fit(train_data, train_labels)
    bnb = BernoulliNB(alpha=1, binarize=0.5).fit(train_data, train_labels)
    mnb = MultinomialNB().fit(train_data, train_labels)
    log_reg = LogisticRegression().fit(train_data, train_labels)
    neural_net = MLPClassifier().fit(train_data, train_labels)
    random_forest = RandomForestClassifier().fit(train_data, train_labels)
    decision_tree = DecisionTreeClassifier().fit(train_data, train_labels)
    support_vm = svm.SVC(probability=True).fit(train_data, train_labels)

    models = [knn, bnb, mnb, log_reg, neural_net, random_forest, decision_tree, support_vm]
    for model in models:
        eval_prediction_probabilities = model.predict_proba(eval_data)
        print(model, "Multi-class Log Loss:", log_loss(y_true=eval_labels, y_pred=eval_prediction_probabilities, labels=crime_labels_mini_dev), "\n\n")

model_prototype(mini_train_data, mini_train_labels, mini_dev_data, mini_dev_labels)
# -

# ### Adding Features, Hyperparameter Tuning, and Model Calibration To Improve Prediction For Each Classifier
#
# Here we seek to optimize the performance of our classifiers in a three-step, dynamic engineering process.
#
# ##### 1) Feature addition
#
# We previously added components from the weather data into the original SF crime data as new features.
We will not repeat the work done in our initial submission, where our training dataset did not include these features. For a comparison of how the added features improved our log-loss performance, please refer back to our initial submission.
#
# We can have Kalvin expand on exactly what he did here.
#
# ##### 2) Hyperparameter tuning
#
# Each classifier has parameters that we can engineer to further optimize performance, as opposed to using the default parameter values as we did above in the model prototyping cell. This will be specific to each classifier, as detailed below.
#
# ##### 3) Model calibration
#
# We can calibrate the models via Platt Scaling or Isotonic Regression to attempt to improve their performance.
#
# - Platt Scaling: ((brief explanation of how it works))
#
# - Isotonic Regression: ((brief explanation of how it works))
#
# For each classifier, we can use CalibratedClassifierCV to perform probability calibration with isotonic regression or sigmoid (Platt Scaling). The parameters within CalibratedClassifierCV that we can adjust are the method ('sigmoid' or 'isotonic') and cv (cross-validation generator). As we will already be training our models before calibration, we will only use cv = 'prefit'. Thus, in practice, the cross-validation generator will not be a modifiable parameter for us.
#
# ### K-Nearest Neighbors
# ###### Hyperparameter tuning:
#
# For the KNN classifier, we can seek to optimize the following classifier parameters: n_neighbors, weights, and the power parameter ('p').
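# The hyperparameter sweeps in this section iterate over a Cartesian parameter grid with nested for loops. The same bookkeeping can be written once with itertools.product; the score function here is a hypothetical stand-in for "fit a KNN model and compute its dev log loss" (lower is better).

```python
import itertools

def grid_sweep(score_fn, grid):
    """Evaluate score_fn over the Cartesian product of a parameter grid
    and return the best (params, score) pair, where lower is better."""
    names = sorted(grid)
    results = []
    for values in itertools.product(*(grid[n] for n in names)):
        params = dict(zip(names, values))
        results.append((params, score_fn(**params)))
    return min(results, key=lambda r: r[1])

# hypothetical stand-in for "fit KNN with (k, w, p) and return the dev log loss"
def toy_score(k, p, w):
    return abs(k - 501) + p + (0 if w == "distance" else 1)

best = grid_sweep(toy_score, {"k": [1, 501, 1001], "w": ["uniform", "distance"], "p": [1, 2]})
print(best)
```

# sklearn's GridSearchCV wraps the same idea with cross-validation and a scoring argument (e.g. scoring='neg_log_loss'), which optimizes the competition metric directly.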
# +
list_for_ks = []
list_for_ws = []
list_for_ps = []
list_for_log_loss = []

def k_neighbors_tuned(k, w, p):
    tuned_KNN = KNeighborsClassifier(n_neighbors=k, weights=w, p=p).fit(mini_train_data, mini_train_labels)
    dev_prediction_probabilities = tuned_KNN.predict_proba(mini_dev_data)
    list_for_ks.append(k)
    list_for_ws.append(w)
    list_for_ps.append(p)
    working_log_loss = log_loss(y_true=mini_dev_labels, y_pred=dev_prediction_probabilities, labels=crime_labels_mini_dev)
    list_for_log_loss.append(working_log_loss)
    #print("Multi-class Log Loss with KNN and k,w,p =", k, ",", w, ",", p, "is:", working_log_loss)

k_value_tuning = [i for i in range(1, 2002, 500)]
weight_tuning = ['uniform', 'distance']
power_parameter_tuning = [1, 2]

start = time.perf_counter()  # time.clock() was removed in Python 3.8

for this_k in k_value_tuning:
    for this_w in weight_tuning:
        for this_p in power_parameter_tuning:
            k_neighbors_tuned(this_k, this_w, this_p)

index_best_logloss = np.argmin(list_for_log_loss)

print('For KNN the best log loss with hyperparameter tuning is', list_for_log_loss[index_best_logloss], 'with k =', list_for_ks[index_best_logloss], 'w =', list_for_ws[index_best_logloss], 'p =', list_for_ps[index_best_logloss])

end = time.perf_counter()
print("Computation time for this step is %.2f" % (end-start), 'seconds')
# -

# ###### Model calibration:
#
# We will consider embedding this step within the for loop for the hyperparameter tuning. More likely we will pipeline it along with the hyperparameter tuning steps. We will then use GridSearchCV to find the optimized parameters based on our performance metric of Multi-class Log Loss.

# +
# Here we will calibrate the KNN classifier with both Platt Scaling and with Isotonic Regression using CalibratedClassifierCV
# with various parameter settings. The "method" parameter can be set to "sigmoid" or to "isotonic",
# corresponding to Platt Scaling and to Isotonic Regression respectively.
list_for_ks = []
list_for_ws = []
list_for_ps = []
list_for_ms = []
list_for_log_loss = []

def knn_calibrated(k, w, p, m):
    tuned_KNN = KNeighborsClassifier(n_neighbors=k, weights=w, p=p).fit(mini_train_data, mini_train_labels)
    ccv = CalibratedClassifierCV(tuned_KNN, method=m, cv='prefit')
    ccv.fit(mini_calibrate_data, mini_calibrate_labels)
    ccv_prediction_probabilities = ccv.predict_proba(mini_dev_data)
    list_for_ks.append(k)
    list_for_ws.append(w)
    list_for_ps.append(p)
    list_for_ms.append(m)
    working_log_loss = log_loss(y_true=mini_dev_labels, y_pred=ccv_prediction_probabilities, labels=crime_labels_mini_dev)
    list_for_log_loss.append(working_log_loss)
    print("Multi-class Log Loss with KNN and k,w,p,m =", k, ",", w, ",", p, ",", m, "is:", working_log_loss)

k_value_tuning = [i for i in range(1, 2002, 500)]
weight_tuning = ['uniform', 'distance']
power_parameter_tuning = [1, 2]
methods = ['sigmoid', 'isotonic']

start = time.perf_counter()  # time.clock() was removed in Python 3.8

for this_k in k_value_tuning:
    for this_w in weight_tuning:
        for this_p in power_parameter_tuning:
            for this_m in methods:
                knn_calibrated(this_k, this_w, this_p, this_m)

index_best_logloss = np.argmin(list_for_log_loss)

print('For KNN the best log loss with hyperparameter tuning and calibration is', list_for_log_loss[index_best_logloss], 'with k =', list_for_ks[index_best_logloss], 'w =', list_for_ws[index_best_logloss], 'p =', list_for_ps[index_best_logloss], 'm =', list_for_ms[index_best_logloss])

end = time.perf_counter()
print("Computation time for this step is %.2f" % (end-start), 'seconds')
# -

# ### Multinomial, Bernoulli, and Gaussian Naive Bayes

# +
def GNB():
    gnb = GaussianNB()
    gnb.fit(train_data, train_labels)
    print("GaussianNB accuracy on dev data:", gnb.score(dev_data, dev_labels))

    # Gaussian Naive Bayes requires the data to have a relatively normal distribution.
    # Sometimes adding noise can improve performance by making the data more normal:
    train_data_noise = np.random.rand(train_data.shape[0], train_data.shape[1])
    modified_train_data = np.multiply(train_data, train_data_noise)
    gnb_noise = GaussianNB()
    gnb_noise.fit(modified_train_data, train_labels)
    print("GaussianNB accuracy with added noise:", gnb_noise.score(dev_data, dev_labels))

# Going slightly deeper with hyperparameter tuning and model calibration:
def BNB(alphas):
    bnb_one = BernoulliNB(binarize=0.5)
    bnb_one.fit(train_data, train_labels)
    print("\n\nBernoulli Naive Bayes accuracy when alpha = 1 (the default value):", bnb_one.score(dev_data, dev_labels))

    bnb_zero = BernoulliNB(binarize=0.5, alpha=0)
    bnb_zero.fit(train_data, train_labels)
    print("BNB accuracy when alpha = 0:", bnb_zero.score(dev_data, dev_labels))

    bnb = BernoulliNB(binarize=0.5)
    clf = GridSearchCV(bnb, param_grid=alphas)
    clf.fit(train_data, train_labels)
    print("Best parameter for BNB on the dev data:", clf.best_params_)

    clf_tuned = BernoulliNB(binarize=0.5, alpha=0.00000000000000000000001)
    clf_tuned.fit(train_data, train_labels)
    print("Accuracy using the tuned Laplace smoothing parameter:", clf_tuned.score(dev_data, dev_labels), "\n\n")

def investigate_model_calibration(buckets, correct, total):
    clf_tuned = BernoulliNB(binarize=0.5, alpha=0.00000000000000000000001)
    clf_tuned.fit(train_data, train_labels)

    # Establish data sets
    pred_probs = clf_tuned.predict_proba(dev_data)
    max_pred_probs = np.array(pred_probs.max(axis=1))
    preds = clf_tuned.predict(dev_data)

    # For each bucket, look at the predictions that the model yields.
    # Keep track of total & correct predictions within each bucket.
    bucket_bottom = 0
    bucket_top = 0
    for bucket_index, bucket in enumerate(buckets):
        bucket_top = bucket
        for pred_index, pred in enumerate(preds):
            if (max_pred_probs[pred_index] <= bucket_top) and (max_pred_probs[pred_index] > bucket_bottom):
                total[bucket_index] += 1
                if preds[pred_index] == dev_labels[pred_index]:
                    correct[bucket_index] += 1
        bucket_bottom = bucket_top

def MNB():
    mnb = MultinomialNB(alpha = 0.0000001)
    mnb.fit(train_data, train_labels)
    print("\n\nMultinomialNB accuracy on dev data:", mnb.score(dev_data, dev_labels))

alphas = {'alpha': [0.00000000000000000000001, 0.0000001, 0.0001, 0.001, 0.01, 0.1, 0.0, 0.5, 1.0, 2.0, 10.0]}
buckets = [0.5, 0.9, 0.99, 0.999, .9999, 0.99999, 1.0]
correct = [0 for i in buckets]
total = [0 for i in buckets]

MNB()
GNB()
BNB(alphas)
investigate_model_calibration(buckets, correct, total)

for i in range(len(buckets)):
    accuracy = 0.0
    if (total[i] > 0):
        accuracy = correct[i] / total[i]
    print('p(pred) <= %.13f total = %3d accuracy = %.3f' % (buckets[i], total[i], accuracy))
# -

# The Bernoulli Naive Bayes and Multinomial Naive Bayes models can predict the crime category with XXX% accuracy.
#
# ###### Hyperparameter tuning:
#
# We will prune the work above. We will seek to optimize the alpha parameter (the Laplace smoothing parameter) for the MNB and BNB classifiers.
#
# ###### Model calibration:
#
# Here we will calibrate the MNB, BNB and GNB classifiers with both Platt Scaling and with Isotonic Regression using CalibratedClassifierCV with various parameter settings. The "method" parameter can be set to "sigmoid" or to "isotonic", corresponding to Platt Scaling and to Isotonic Regression respectively. We will likely embed this step within the for loop for the hyperparameter tuning, or pipeline it along with the hyperparameter tuning steps. We will then use GridSearchCV to find the optimized parameters based on our performance metric of Multi-class Log Loss.
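# The bucket bookkeeping in investigate_model_calibration can be isolated and checked on toy predictions. This sketch mirrors the loop above; the probabilities and labels are made up for illustration.

```python
def calibration_buckets(max_probs, preds, truth, buckets):
    """Count total and correct predictions whose top predicted probability
    falls in each half-open bucket (bottom, top]."""
    total = [0] * len(buckets)
    correct = [0] * len(buckets)
    bottom = 0.0
    for i, top in enumerate(buckets):
        for p, pred, t in zip(max_probs, preds, truth):
            if bottom < p <= top:
                total[i] += 1
                correct[i] += int(pred == t)
        bottom = top
    return total, correct

max_probs = [0.55, 0.95, 0.97, 0.40]   # toy top-class probabilities
preds     = ["A",  "B",  "B",  "C"]    # toy predicted labels
truth     = ["A",  "B",  "C",  "C"]    # toy true labels
total, correct = calibration_buckets(max_probs, preds, truth, [0.5, 0.9, 1.0])
print(total, correct)  # [1, 1, 2] [1, 1, 1]
```

# For a well-calibrated model, per-bucket accuracy should track the bucket's probability range (e.g. predictions made with ~0.9 confidence should be right about 90% of the time).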
#
# THE REST OF THE MODEL CALIBRATION SECTIONS ARE SIMILAR AND THE OUTLINE WILL NOT BE REPEATED, AS IT WOULD BE REDUNDANT.
#
# ### Logistic Regression
#
# ###### Hyperparameter tuning:
#
# For the Logistic Regression classifier, we can seek to optimize the following classifier parameters: penalty (l1 or l2), C (inverse of regularization strength), solver ('newton-cg', 'lbfgs', 'liblinear', or 'sag')
#
# ###### Model calibration:
#
# See above

# ### Decision Tree
#
# ###### Hyperparameter tuning:
#
# For the Decision Tree classifier, we can seek to optimize the following classifier parameters: min_samples_leaf (the minimum number of samples required to be at a leaf node), max_depth
#
# From readings, setting min_samples_leaf to approximately 1% of the data points can stop the tree from inappropriately classifying outliers, which can help to improve accuracy (unsure if it significantly improves MCLL).
#
# ###### Model calibration:
#
# See above

# ### Support Vector Machines
#
# ###### Hyperparameter tuning:
#
# For the SVM classifier, we can seek to optimize the following classifier parameters: C (penalty parameter C of the error term), kernel ('linear', 'poly', 'rbf', 'sigmoid', or 'precomputed')
#
# See source [2] for parameter optimization in SVM
#
# ###### Model calibration:
#
# See above

# ### Neural Nets
#
# ###### Hyperparameter tuning:
#
# For the Neural Networks MLP classifier, we can seek to optimize the following classifier parameters: hidden_layer_sizes, activation ('identity', 'logistic', 'tanh', 'relu'), solver ('lbfgs', 'sgd', 'adam'), alpha, learning_rate ('constant', 'invscaling', 'adaptive')

# +
### All the work from Sarah's notebook:

import theano
from theano import tensor as T
from theano.sandbox.rng_mrg import MRG_RandomStreams as RandomStreams
print(theano.config.device)  # We're using CPUs (for now)
print(theano.config.floatX)  # Should be 64 bit for CPUs

np.random.seed(0)

from IPython.display import display, clear_output

# +
numFeatures = train_data[1].size
numTrainExamples = train_data.shape[0]
numTestExamples = test_data.shape[0]
print('Features = %d' % (numFeatures))
print('Train set = %d' % (numTrainExamples))
print('Test set = %d' % (numTestExamples))

class_labels = list(set(train_labels))
print(class_labels)
numClasses = len(class_labels)

# +
### Binarize the class labels

def binarizeY(data):
    binarized_data = np.zeros((data.size, 39))
    for j in range(0, data.size):
        feature = data[j]
        i = class_labels.index(feature)
        binarized_data[j, i] = 1
    return binarized_data

train_labels_b = binarizeY(train_labels)
test_labels_b = binarizeY(test_labels)
numClasses = train_labels_b[1].size
print('Classes = %d' % (numClasses))
print('\n', train_labels_b[:5, :], '\n')
print(train_labels[:10], '\n')

# +
### 1) Parameters
numFeatures = train_data.shape[1]
numHiddenNodeslayer1 = 50
numHiddenNodeslayer2 = 30

w_1 = theano.shared(np.asarray((np.random.randn(*(numFeatures, numHiddenNodeslayer1))*0.01)))
w_2 = theano.shared(np.asarray((np.random.randn(*(numHiddenNodeslayer1, numHiddenNodeslayer2))*0.01)))
w_3 = theano.shared(np.asarray((np.random.randn(*(numHiddenNodeslayer2, numClasses))*0.01)))
params = [w_1, w_2, w_3]

### 2) Model
X = T.matrix()
Y = T.matrix()

srng = RandomStreams()
def dropout(X, p=0.):
    if p > 0:
        X *= srng.binomial(X.shape, p=1 - p)
        X /= 1 - p
    return X

def model(X, w_1, w_2, w_3, p_1, p_2, p_3):
    return T.nnet.softmax(T.dot(dropout(T.nnet.sigmoid(T.dot(dropout(T.nnet.sigmoid(T.dot(dropout(X, p_1), w_1)), p_2), w_2)), p_3), w_3))

y_hat_train = model(X, w_1, w_2, w_3, 0.2, 0.5, 0.5)
y_hat_predict = model(X, w_1, w_2, w_3, 0., 0., 0.)
### (3) Cost function
# An alternative squared-error cost would be T.mean(T.sqr(y_hat_train - Y)); we use cross-entropy:
cost = T.mean(T.nnet.categorical_crossentropy(y_hat_train, Y))

# +
### (4) Objective (and solver)

alpha = 0.01
def backprop(cost, w):
    grads = T.grad(cost=cost, wrt=w)
    updates = []
    for wi, grad in zip(w, grads):
        updates.append([wi, wi - grad * alpha])
    return updates

update = backprop(cost, params)
train = theano.function(inputs=[X, Y], outputs=cost, updates=update, allow_input_downcast=True)
y_pred = T.argmax(y_hat_predict, axis=1)
predict = theano.function(inputs=[X], outputs=y_pred, allow_input_downcast=True)

miniBatchSize = 10
def gradientDescent(epochs):
    for i in range(epochs):
        for start, end in zip(range(0, len(train_data), miniBatchSize), range(miniBatchSize, len(train_data), miniBatchSize)):
            cc = train(train_data[start:end], train_labels_b[start:end])
        clear_output(wait=True)
        print('%d) accuracy = %.4f' % (i+1, np.mean(np.argmax(test_labels_b, axis=1) == predict(test_data))))

gradientDescent(50)

### How do we decide what number to use for epochs? Epochs in this case are how many training rounds?
### Plot the cost for each of the 50 iterations: if it is still clearly decreasing, you should
### do more iterations; otherwise, if it looks like it is flattening, you can stop.
# -

# ###### Model calibration:
#
# See above

# ### Random Forest
#
# ###### Hyperparameter tuning:
#
# For the Random Forest classifier, we can seek to optimize the following classifier parameters: n_estimators (the number of trees in the forest), max_features, max_depth, min_samples_leaf, bootstrap (whether or not bootstrap samples are used when building trees), oob_score (whether or not out-of-bag samples are used to estimate the generalization accuracy)
#
# ###### Model calibration:
#
# See above

# ### Meta-estimators

# ### AdaBoost Classifier
#
# ###### Hyperparameter tuning:
#
# There are no major changes that we seek to make in the AdaBoostClassifier with respect to default parameter values.
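# A side note on the minibatch loop in the gradient-descent cell above: the zip(range(...), range(...)) idiom produces (start, end) index pairs, but because the second range stops before len(train_data), the final examples are silently never trained on. A minimal stdlib illustration:

```python
def minibatch_bounds(n, batch_size):
    """Reproduce zip(range(0, n, batch_size), range(batch_size, n, batch_size)).

    Because the second range stops before n, the pairs never cover the tail
    of the data: the final examples are silently skipped every epoch."""
    return list(zip(range(0, n, batch_size), range(batch_size, n, batch_size)))

print(minibatch_bounds(25, 10))  # [(0, 10), (10, 20)] -- examples 20..24 are skipped
print(minibatch_bounds(20, 10))  # [(0, 10)] -- even an exact multiple loses its last batch
```

# A simple fix is to iterate over the starts only and slice train_data[start:start + miniBatchSize]; Python slicing clips at the end of the array, so the tail batch is included.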
#
# ###### Adaboosting each classifier:
#
# We will run the AdaBoostClassifier on each different classifier from above, using the classifier settings with optimized Multi-class Log Loss after hyperparameter tuning and calibration.
#
# ### Bagging Classifier
#
# ###### Hyperparameter tuning:
#
# For the Bagging meta classifier, we can seek to optimize the following classifier parameters: n_estimators (the number of base estimators in the ensemble), max_samples, max_features, bootstrap (whether or not bootstrap samples are used when building estimators), bootstrap_features (whether features are drawn with replacement), and oob_score (whether or not out-of-bag samples are used to estimate the generalization accuracy)
#
# ###### Bagging each classifier:
#
# We will run the BaggingClassifier on each different classifier from above, using the classifier settings with optimized Multi-class Log Loss after hyperparameter tuning and calibration.
#
# ### Gradient Boosting Classifier
#
# ###### Hyperparameter tuning:
#
# For the Gradient Boosting meta classifier, we can seek to optimize the following classifier parameters: n_estimators (the number of boosting stages), max_depth, min_samples_leaf, and max_features
#
# ###### Gradient Boosting each classifier:
#
# We will run the GradientBoostingClassifier with loss = 'deviance' (as loss = 'exponential' uses the AdaBoost algorithm) on each different classifier from above, using the classifier settings with optimized Multi-class Log Loss after hyperparameter tuning and calibration.

# ### Final evaluation on test data

# Here we will likely use Pipeline and GridSearchCV in order to find the overall classifier with optimized Multi-class Log Loss.
# This will be the last step after all attempts at feature addition, hyperparameter tuning, and calibration are completed
# and the corresponding performance metrics are gathered.

# ### References
#
# 1) Hsiang, <NAME>, and <NAME> <NAME>. "Quantifying the Influence of Climate on Human Conflict". Science, Vol. 341, Issue 6151, 2013.
#
# 2) Huang, Cheng-Lung, and Wang, Chieh-Jen. "A GA-based feature selection and parameters optimization for support vector machines". Expert Systems with Applications, Vol. 31, 2006, pp. 231-240.
#
# 3) More to come
#
# -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.6.0 # language: julia # name: julia-1.6 # --- # # Preparations # First, we are going to load a few packages to perform calculations and visualize the results: using Plots, LaTeXStrings, Interact, LinearAlgebra # For interactive plots (uncomment if interactive plots don't work, should only be needed once): # + #using WebIO #WebIO.install_jupyter_nbextension() # uncomment on first run # - # Next we define the $2\times 2$ unit matrix $\sigma_0$ and the well known Pauli matrices $\sigma_{x,y,z}$, since we are going to use them a few times. sigma0=Matrix{ComplexF64}(I, 2, 2); sigmax=Matrix{ComplexF64}([0.0 1.0; 1.0 0.0]); sigmay=Matrix{ComplexF64}([0.0 -im; im 0.0]); sigmaz=Matrix{ComplexF64}([1.0 0.0; 0.0 -1.0]); # # Introduction: Physics and aim # In my research project we investigate so called two-dimensional topological insulators (2DTIs) and the helical edge states in these systems. # These are effective 1D states at the edge of a 2D system with certain properties. # For more information see [Qi/Zhang: Topological insulators and superconductors](https://arxiv.org/abs/1008.2026v1), Chapter II. # More specifically we study the density of states and transmission if impurities with finite electric potential and magnetic moment are placed on the edge. # The Hamiltonian for these systems reads # $$ # H(t) = \hbar v_F k \sigma_z + \sum_j (V\sigma_0+\mathbf{M}(t) \cdot \mathbf{\sigma})\delta(x-x_j) # $$ # Note, that we have included a time dependence in the magnetic moment. # We have previously investigated the [static case](https://doi.org/10.1103/PhysRevB.98.165423) and a natural extension of the model use there are harmonically rotating magnetic impurities. 
# We have built on the Green's function (GF) formalism for the static case by performing a Fourier decomposition, resulting in a matrix equation that can be solved numerically.
# Here we want to focus on one of the parameters of these numerical calculations, namely the number of Fourier components needed for reasonable results.

# # Floquet Hamiltonian and homogeneous fields

# It turns out that the GF approach we are using is closely related to the Floquet formalism (see e.g. [Griffoni/Hänggi: Driven quantum tunneling](https://www.physik.uni-augsburg.de/theo1/hanggi/Papers/213.pdf), Chapter II) frequently used to perform calculations on periodically driven systems.
# The case of homogeneous fields can be viewed as a continuous limit of the impurity case, and we can explicitly calculate a matrix representation of the Floquet Hamiltonian for the homogeneous case.
# Formally, it can be written as
# $$
# \mathcal{H}_{m,n;j,k} = \langle j|H^{m-n}|k\rangle + n \hbar \Omega \delta_{n,m} \delta_{j,k},
# $$
# where
# $$
# H^{m-n} = \frac{1}{T} \int_0^T dt \, H(t) \exp{(-i(m-n)\Omega t)}.
# $$
# For simplicity we use the Hamiltonian
# $$
# H(t) = \hbar\nu_F k \sigma_z + M (\sin{(\Omega t)}\sigma_z + \cos{(\Omega t)}\sigma_x).
# $$
# Neglecting the potential $V$ is motivated by the fact that a constant electric field will only shift the band structure and not have any other impact.
# We simply assume we rotate the magnetic field around the edge of the material (that is, the y-axis), staying perpendicular to it at all times.
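# Plugging the exponential forms of sine and cosine into the integral for $H^{m-n}$ above, the only non-vanishing off-diagonal Fourier components work out to (a sketch; the overall phase of these blocks is a matter of convention, corresponding to a shift of the time origin, and does not affect the spectrum):

```latex
H^{\pm 1} = \frac{1}{T}\int_0^T \! dt \, M\left(\sin(\Omega t)\,\sigma_z + \cos(\Omega t)\,\sigma_x\right) e^{\mp i\Omega t}
          = M\left(\pm\frac{\sigma_z}{2i} + \frac{\sigma_x}{2}\right)
          = \frac{M}{2}\left(\sigma_x \mp i\,\sigma_z\right)
```

# All higher components vanish, which is why the Floquet matrix is block-tri-diagonal; up to such a phase, these are also the $V_\pm$ appearing in the Green's function comparison later on.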
# The resulting matrix reads
# $$
# \left(\begin{array}{c|c|c|c|c}
# \ddots &\ddots & & & \\ \hline
# \ddots & \hbar \nu_F k \sigma_z -\hbar\Omega\sigma_0 & -\frac{i}{2}M (\sigma_x + i \sigma_z) & & \\ \hline
# & \frac{i}{2}M (\sigma_x - i \sigma_z) & \hbar \nu_F k \sigma_z & -\frac{i}{2}M (\sigma_x + i \sigma_z) & \\ \hline
# & & \frac{i}{2}M (\sigma_x - i \sigma_z) & \hbar \nu_F k \sigma_z +\hbar\Omega\sigma_0 &\ddots \\ \hline
# & & &\ddots &\ddots
# \end{array}\right).
# $$
# This matrix has a block-tri-diagonal structure with $2\times 2$ blocks, and we can make use of the Pauli matrices we defined earlier to write a function that returns this matrix truncated after a certain number of blocks $n_{max}$:

"""
    H(k,omega,nmax,M)

Compute the matrix representation of the Floquet Hamiltonian at momentum `k` with driving
frequency `omega`, cutting off after `nmax` Floquet components, using magnetic field strength `M`
"""
function H(k,omega,nmax,M)
    H=Matrix{ComplexF64}(I, 2*(2*nmax+1), 2*(2*nmax+1)) # initialize a matrix of the correct size
    n=-nmax:nmax # prefactors for the diagonal blocks
    for i=1:2:2*(2*nmax+1)-2
        # fill diagonal
        H[i:i+1,i:i+1]=n[Int((i+1)/2)]*omega*sigma0-k*sigmaz
        # fill off diagonals
        H[i:i+1,i+2:i+3]=-im/2*M*(sigmax+im*sigmaz) # upper
        H[i+2:i+3,i:i+1]=im/2*M*(sigmax-im*sigmaz) # lower
    end
    H[end-1:end,end-1:end]=n[end]*omega*sigma0-k*sigmaz # last block
    return H
end

# The number of blocks $n_{max}$ corresponds to the number of Fourier components of the GF in the approach we are using for the impurity system.
# The energy eigenvalues $E$ of this matrix as a function of the momentum $k$ give the Floquet band structure of the system.
# Floquet theory also gives a criterion for when this Hamiltonian can be used, namely a periodicity condition for the eigenvalues.
# Since we have $2 \times 2$ blocks in this case, we will find two sets of eigenvalues, where the sets each follow the periodicity condition $\varepsilon = \varepsilon_0 + n\hbar\Omega$.
# This also makes clear why it is important to choose a suitable number of Fourier/Floquet components: too many will slow down the calculations, while not enough will give wrong results.
#
# The block-tri-diagonal structure gives an intuitive picture of what we expect to see.
# If the magnetic field is turned off, we simply get copies of the linear dispersion known for the helical edge states shifted by $n\hbar\Omega$, so-called Floquet sub-bands.
# If we turn on the magnetic field, the off-diagonal blocks are non-zero and couple the Floquet sub-bands, such that the crossings turn into avoided crossings.
# This can be seen in the interactive graphic below.
# We can also clearly see that the periodicity is fulfilled in the fast driving case already for very few Floquet bands, while we need significantly more for small driving frequencies.

@manipulate throttle=0.1 for Ω=0.2:0.2:25, n=1:25, M=0:0.1:1 # throttle for better reactivity, sliders for Ω,n_max,M
    kmax = maximum([Ω,3]) # set maximum momentum k, at least 3 for good visibility for small Ω
    k=range(-kmax,kmax,length=300) # set range for momentum k
    E=eigvals(H(k[1],Ω,n,M)) # calculate Energy eigenvalues for first k
    for i in k[2:end] # loop over remaining k
        E=hcat(E,eigvals(H(i,Ω,n,M))) # append to already calculated eigenvalues
    end
    # and plot
    plot(k,E',legend=false,lc=:red,xlabel=L"\hbar\nu_F k",ylabel=L"E",title="Floquet band structure for homogeneous fields")
end
# In any case this does not influence the further results of this notebook, and we can plot the same graph without interactive sliders/widgets by choosing parameters in this cell

Ω=2
n=6
M=1

# and running the command in the manipulate macro. Feel free to try out different values here if the interactive plot does not work.

kmax = maximum([Ω,3]) # set maximum momentum k, at least 3 for good visibility for small Ω
k=range(-kmax,kmax,length=300) # set range for momentum k
E=eigvals(H(k[1],Ω,n,M)) # calculate Energy eigenvalues for first k
for i in k[2:end] # loop over remaining k
    E=hcat(E,eigvals(H(i,Ω,n,M))) # append to already calculated eigenvalues
end
# and plot
plot(k,E',legend=false,lc=:red,xlabel=L"\hbar\nu_F k",ylabel=L"E",title="Floquet band structure for homogeneous fields")

# # Comparison to Green's function results

# We can also explicitly calculate the zeroth Fourier component of the GF for small numbers of Fourier components and use the spectral function to visualize the band structure.
# For two Fourier components it can be written as
# $$
# G_0(E,k)=\left( 1-\tilde{g}(E,k)\left( V_+ \frac{\tilde{g}(E+\hbar \Omega,k)}{1-\tilde{g}(E+\hbar \Omega,k)V_+\tilde{g}(E+2\hbar \Omega,k)V_+^\dagger} V_+^\dagger + V_+^\dagger \frac{\tilde{g}(E-\hbar \Omega,k)}{1-\tilde{g}(E-\hbar \Omega,k)V_+^\dagger\tilde{g}(E-2\hbar \Omega,k)V_+} V_+\right)\right)^{-1}\tilde{g}(E,k),
# $$
# with the Fourier components of the driving
# $$
# V_\pm = \frac{M}{2}(\sigma_x \mp i \sigma_z)
# $$
# and the free GF
# $$
# \tilde{g}(E,k) = \frac{1}{(E+i0^+)-k\sigma_z}.
# $$
# Note that we have added a small imaginary part for convergence (needed for the integration over $k$ via the residue theorem).
# The spectral function is then defined as
# $$
# A(E,k) = -\mathrm{Im} \, \mathrm{Tr} \, G_0(E,k)
# $$
# and we can write a function for it.
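# As a quick sanity check (a sketch using the standard identity $\mathrm{Im}\,\frac{1}{x+i0^+} = -\pi\delta(x)$), setting $M=0$ reduces $G_0$ to $\tilde{g}$ and the spectral function to the free case:

```latex
A_0(E,k) = -\,\mathrm{Im}\,\mathrm{Tr}\,\tilde{g}(E,k)
         = -\,\mathrm{Im}\left[\frac{1}{E+i0^+ - k} + \frac{1}{E+i0^+ + k}\right]
         = \pi\left[\delta(E-k) + \delta(E+k)\right]
```

# so without driving the heatmap simply traces the two linear branches $E=\pm k$; the small finite imaginary part used in the code broadens the $\delta$-peaks into narrow Lorentzians, and a finite $M$ redistributes this weight over the Floquet sub-bands.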
#
# First define a function for the free GF and the Fourier components of the driving (with $M=1$ for simplicity):

g0(E,k)=inv(sigma0*(E+im*10e-3)-k*sigmaz)
Vp=1/2*(sigmax-im*sigmaz)
Vm=1/2*(sigmax+im*sigmaz);

# Using these we can define the spectral function:

A(E,k,omega)=-imag(tr(inv(sigma0-g0(E,k)*(Vp*inv(sigma0-g0(E+omega,k)*Vp*g0(E+2*omega,k)*Vm)*g0(E+omega,k)*Vm+Vm*inv(sigma0-g0(E-omega,k)*Vm*g0(E-2*omega,k)*Vp)*g0(E-omega,k)*Vp))*g0(E,k)))

# We can calculate this on a suitable range of energies and momenta using the map function (for now $\Omega = 25$; we can use the interactive plot above to find a good range for $E,k$)

Emax = 75
kmax = 25
Erange = range(-Emax,Emax,length=300)
krange = range(-kmax,kmax,length=300)
Ekrange = [(E,k) for E in Erange, k in krange]
spectral = map(x->A(x...,25),Ekrange);

# and plot the results as a heatmap:

plot(krange,Erange,spectral,linetype=:heatmap,clims=(0,10e-4*maximum(spectral)),colorbar_title="Spectral function A",xlabel=L"\hbar\nu_F k",ylabel=L"E",size=(900,400),margins = 5mm)

# Here we can see that the sub-bands carry different weight, and that in this particular fast-driving case already the second set of sub-bands is almost invisible.
# This hints that the cut-off after two Fourier coefficients is good, as expected from the Floquet band structure above.
# We can also put the Floquet band structure on top to compare and take a closer look at the avoided crossings.
# For this first calculate the Floquet band structure as before and the spectral function with higher resolution around one of the crossings:

# Floquet band structure
E=eigvals(H(krange[1],25,2,1)) # calculate Energy eigenvalues for first k
for i in krange[2:end] # loop over remaining k
    E=hcat(E,eigvals(H(i,25,2,1))) # append to already calculated eigenvalues
end

# higher resolution for inset
Erange_ = range(10,15,length=300)
krange_ = range(10,15,length=300)
Ekrange_ = [(E,k) for E in Erange_, k in krange_]
spectral_ = map(x->A(x...,25),Ekrange_);

# Now we can plot the heatmap, overlay it with the Floquet band structure, and do the same for the inset:

hm=plot(krange,Erange,spectral,linetype=:heatmap,clims=(0,10e-4*maximum(spectral)),colorbar_title="Spectral function A",xlabel=L"\hbar\nu_F k",ylabel=L"E",size=(900,400), margins = 5mm)
plot!(hm,krange,E',lc=:red,legend=false,widen=false)
plot!(krange_,Erange_,spectral_,linetype=:heatmap,xlims=(10,15),ylims=(10,15),clims=(0,10e-4*maximum(spectral)),cbar=false,inset_subplots = [(1, bbox(0.0,0.02,0.5,0.3,:center,:top))], subplot=2,fg_text=:white)
plot!(hm[2],krange,E',lc=:red,legend=false,widen=false,xlims=(10,15),ylims=(10,15))

# save to file
# savefig("hf_inset.pdf") # pdf, gets rather big with heatmap
savefig("hf_inset.png")

# So far we looked at the fast driving case, where we only need a few Fourier/Floquet components.
# Now let's look at a slower driving case and create similar figures.
# For now we choose $\Omega = 2$ and overlay the spectral function from the GF as before with the Floquet band structure for $n_{max} = 2$ and $n_{max} = 10$.
# spectral function Emax = 5 kmax = 9 Erange = range(-Emax,Emax,length=300) krange = range(-kmax,kmax,length=300) Ekrange = [(E,k) for E in Erange, k in krange] spectral2 = map(x->A(x...,2),Ekrange); # Floquet band structure E2=eigvals(H(krange[1],2,2,1)) # calculate Energy eigenvalues for first k for i in krange[2:end] # loop over remaining k E2=hcat(E2,eigvals(H(i,2,2,1))) # append to already calculated eigenvalues end E10=eigvals(H(krange[1],2,10,1)) # calculate Energy eigenvalues for first k for i in krange[2:end] # loop over remaining k E10=hcat(E10,eigvals(H(i,2,10,1))) # append to already calculated eigenvalues end hm=plot(krange,Erange,spectral2,linetype=:heatmap,clims=(0,10e-3*maximum(spectral2)),colorbar_title="Spectral function A",xlabel=L"\hbar\nu_F k",ylabel=L"E",ylims=(-Emax,Emax),size=(900,400),margins=5mm) plot!(hm,krange,E2',lc=:red,legend=false,widen=false) plot!(hm,krange,E10',lc=:grey,ls=:dash,legend=false,widen=false) #plot!(krange_,Erange_,spectral_,linetype=:heatmap,xlims=(10,15),ylims=(10,15),clims=(0,10e-4*maximum(spectral)),cbar=false,inset_subplots = [(1, bbox(0.0,0.02,0.5,0.3,:center,:top))], subplot=2,fg_text=:white) #plot!(hm[2],krange,E',lc=:red,legend=false,widen=false,xlims=(10,15),ylims=(10,15)) #save to file #savefig("med_noinset.pdf") # pdf, gets rather big with heatmap savefig("med_noinset.png") # Here we see that the periodicity is not fulfilled for the red case, but the results in the end are not too different because the sub-bands carrying significant weight still match rather well. # If we calculate the density of states from the spectral function here (integrate over $k$) we will only see a slight shift in the position of the outermost peaks. # # Now finally we want to look at a case where we certainly need more Fourier/Floquet components. # For this we will disregard the rather slow spectral function calculations and only compare the Floquet band structures. # Choosing $\Omega = 0.2$ and $n_{max}=10,25$.
# set limits and calculate Floquet band structure Emax = 2 kmax = 3 krange = range(-kmax,kmax,length=300) E1=eigvals(H(krange[1],0.2,10,1)) # calculate Energy eigenvalues for first k for i in krange[2:end] # loop over remaining k E1=hcat(E1,eigvals(H(i,0.2,10,1))) # append to already calculated eigenvalues end E25=eigvals(H(krange[1],0.2,25,1)) # calculate Energy eigenvalues for first k for i in krange[2:end] # loop over remaining k E25=hcat(E25,eigvals(H(i,0.2,25,1))) # append to already calculated eigenvalues end # plot pltleft=plot(krange,E1',lc=:red,label=["\$ n_{max}=$(10) \$" reshape(["" for i in 1:size(E1)[1]-1],1,size(E1)[1]-1)],xlims=(-kmax,kmax),ylims=(-Emax,Emax),borderstyle=:box,ylabel=L"E") pltright=plot(krange,E25',lc=:blue,label=["\$ n_{max}=$(25) \$" reshape(["" for i in 1:size(E25)[1]-1],1,size(E25)[1]-1)],xlims=(-kmax,kmax),ylims=(-Emax,Emax),borderstyle=:box) pltboth=plot(pltleft,pltright,layout=(1,2),size=(1000,400),xlabel=L"\hbar\nu_F k",guidefontsize=22,legend=:top,legendfontsize=14,xticks=-kmax:kmax,tickfontsize=14, margins = [5mm 10mm]) #save to file savefig("lf_noinset.pdf") savefig("lf_noinset.png") # Here we can clearly see that on the left the periodicity is not respected, while it is fulfilled on the right. # To guarantee proper final results we can now use the same procedure for the desired driving frequency and check where to cut off the Fourier series. # # Using the type of plots above it is possible to easily estimate whether the number of Fourier coefficients chosen for the driving frequency is sufficient.
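# The convergence check above can also be sketched independently of the graphene model. Below is a minimal Python/NumPy version (Python rather than the Julia used above, so it runs stand-alone): a generic two-level system $H_0 = k\sigma_z$ with a drive $V = (A/2)\sigma_x$ coupling adjacent photon sectors of the truncated Floquet Hamiltonian. The model and all parameter values are illustrative assumptions, not the system treated in this notebook.

```python
import numpy as np

def quasienergies(k, omega, n_max, A=1.0):
    """Two quasi-energies of a driven two-level system from the
    Floquet Hamiltonian truncated to photon sectors -n_max..n_max."""
    sz = np.diag([1.0, -1.0])
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    dim = 2 * (2 * n_max + 1)
    HF = np.zeros((dim, dim))
    for n in range(-n_max, n_max + 1):
        i = 2 * (n + n_max)
        HF[i:i+2, i:i+2] = k * sz + n * omega * np.eye(2)  # diagonal block H0 + n*Omega
        if n < n_max:                                      # drive couples sector n to n+1
            HF[i:i+2, i+2:i+4] = 0.5 * A * sx
            HF[i+2:i+4, i:i+2] = 0.5 * A * sx
    e = np.linalg.eigvalsh(HF)
    return np.sort(e[np.argsort(np.abs(e))[:2]])           # the pair closest to zero

# increase n_max until the quasi-energies stop changing
q_lo = quasienergies(0.7, 5.0, 8)
q_hi = quasienergies(0.7, 5.0, 12)
print(q_lo, np.abs(q_lo - q_hi).max())  # difference is tiny for fast driving
```

If the printed difference is not negligible, $n_{max}$ is too small for the chosen $\Omega$ — the numerical analogue of the visual check with the overlaid band structures.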
fbs.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernel_info: # name: python3 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline from matplotlib import style style.use('fivethirtyeight') import matplotlib.pyplot as plt import numpy as np import pandas as pd import datetime as dt # # Reflect Tables into SQLAlchemy ORM # Python SQL toolkit and Object Relational Mapper import sqlalchemy from sqlalchemy.ext.automap import automap_base from sqlalchemy.orm import Session from sqlalchemy import create_engine, func # create engine to hawaii.sqlite engine = create_engine("sqlite:////Users/Vaidehee/Desktop/10-Advanced-Data-Storage-and-Retrieval/Instructions/Resources/hawaii.sqlite") # + # reflect an existing database into a new model Base = automap_base() # reflect the tables Base.prepare(engine, reflect=True) # - # View all of the classes that automap found Base.classes.keys() # Save references to each table Measurement = Base.classes.measurement Station = Base.classes.station # Create our session (link) from Python to the DB session = Session(engine) # # Exploratory Precipitation Analysis # + # Find the most recent date in the data set. most_rec_date = session.query(Measurement.date).order_by(Measurement.date.desc()).first() most_rec_date # + # Design a query to retrieve the last 12 months of precipitation data and plot the results. # Starting from the most recent data point in the database. # Calculate the date one year from the last date in data set. 
diff = dt.date(2017, 8, 23) - dt.timedelta(days=365) diff # + # Perform a query to retrieve the data and precipitation scores query = session.query(Measurement.date,Measurement.prcp).\ filter(Measurement.date > '2016-08-22').\ order_by(Measurement.date.desc()).all() query # Save the query results as a Pandas DataFrame and set the index to the date column df = pd.DataFrame(query, columns=['date','precipitation']) df.set_index('date', inplace=True) # + # Sort the dataframe by date df.index = pd.to_datetime(df.index) df.sort_index(ascending=True,inplace=True) df # + # Use Pandas Plotting with Matplotlib to plot the data import matplotlib.dates as mdates from matplotlib.dates import DayLocator, HourLocator, DateFormatter, drange ax = df.plot(figsize=(10,8)) ax.xaxis.set_major_locator(mdates.MonthLocator(interval=1)) ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m-%d')) ax.set_xlabel('Date') ax.set_ylabel('Inches') plt.tight_layout() # save before show(), since show() clears the figure plt.savefig("/Users/Vaidehee/Desktop/10-Advanced-Data-Storage-and-Retrieval/Instructions/Images/precipitation_graph.png") plt.show() # - # Use Pandas to calculate the summary statistics for the precipitation data df.describe() # # Exploratory Station Analysis # Design a query to calculate the total number of stations in the dataset total_station_count = session.query(Station).count() total_station_count # Design a query to find the most active stations (i.e. what stations have the most rows?) # List the stations and the counts in descending order. most_active_stations = session.query(Measurement.station, func.count(Measurement.tobs)).group_by(Measurement.station).\ order_by(func.count(Measurement.tobs).desc()).all() most_active_stations # + # Using the most active station id from the previous query, calculate the lowest, highest, and average temperature.
active_station_temps = session.query(func.min(Measurement.tobs), func.max(Measurement.tobs), func.avg(Measurement.tobs)).\ filter(Measurement.station == 'USC00519281').all() active_station_temps # + # Using the most active station id # Query the last 12 months of temperature observation data for this station and plot the results as a histogram last_year = dt.date(2017, 8, 23) - dt.timedelta(days=365) last_year active_station = session.query(Measurement.tobs).\ filter(Measurement.date > '2016-08-23', Measurement.station == 'USC00519281').all() df = pd.DataFrame(active_station, columns=['tobs']) df.plot.hist(bins=12) plt.xlabel('Temperature') plt.ylabel('Frequency') plt.tight_layout() plt.savefig("/Users/Vaidehee/Desktop/10-Advanced-Data-Storage-and-Retrieval/Instructions/Images/precipitation_graph.png") # - # # Close session # Close Session session.close()
climate_starter.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import torch from flytracker import run from flytracker.utils.param_helpers import load_frame, test_mask, test_threshold from skimage.filters import threshold_local import numpy as np import matplotlib.pyplot as plt from natsort import natsorted import os # %load_ext autoreload # %autoreload 2 # - # To split video: # ffmpeg -i seq_1.mp4 -c copy -segment_time 00:10:00 -f segment -reset_timestamps 1 split/seq_%02d.mp4 movie_path = "../../data/experiments/bruno/videos/split/seq_00.mp4" all_movies = "../../data/experiments/bruno/videos/split/mini_split" image = load_frame(movie_path, frame=100, color=False) # + mask = torch.ones((1080, 1280), dtype=bool) mask[:130, :] = 0 mask[-160:, :] = 0 mask[:, :270] = 0 mask[:, -205:] = 0 mask[:190, :350] = 0 mask[:195, -270:] = 0 mask[-220:, :340] = 0 mask[870:, 1010:] = 0 # - def local_threshold(image, mask, threshold=50, block_size=31): background = threshold_local(image, block_size=block_size, method='median', ) # we apply the mask after the background to avoid annoying edge effects background = test_mask(torch.tensor(background, dtype=torch.uint8), mask) return background - threshold local_threshold = local_threshold(image, mask, 40, 31) # feed in unmasked image! 
# # Baseline - completely # %%time df = run( movie_path, mask, n_arenas=4, gpu=True, n_ini=100, threshold=local_threshold, ) len(df) df_baseline = df[0] df_baseline.frame.unique().size min_frame = 0 for ID in df_baseline.ID.unique(): plt.plot(df_baseline.query(f"ID=={ID} and frame > {min_frame}")["x"], df_baseline.query(f"ID=={ID} and frame > {min_frame}")["y"], 'o') ID = 5 plt.plot(df_baseline.query(f"ID=={ID}")["x"], df_baseline.query(f"ID=={ID}")["y"]) df_baseline.frame.unique().size # # Multiple movie_path = "../../data/experiments/bruno/videos/split/seq_00.mp4" all_movies = "../../data/experiments/bruno/videos/split/" # %%time df = run( all_movies, mask, n_arenas=4, gpu=True, n_ini=100, threshold=local_threshold) df.shape ID = 30 plt.plot(df.query(f"ID=={ID}")["x"], df.query(f"ID=={ID}")["y"], '-') min_frame = 0 for ID in df.ID.unique(): plt.plot(df.query(f"ID=={ID} and frame > {min_frame}")["x"], df.query(f"ID=={ID} and frame > {min_frame}")["y"], 'o') df.frame.unique().size
dev/multivideo/local_threshold.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # <table width = "100%"> # <tr style="background-color:white;"> # <!-- QWorld Logo --> # <td style="text-align:left;width:200px;"> # <a href="https://qworld.net/" target="_blank"><img src="../images/QWorld.png"> </a></td> # <td style="text-align:right;vertical-align:bottom;font-size:16px;"> # Prepared by <a href="https://gitlab.com/sabahuddin.ahmad" target="_blank"> <NAME> </a></td> # </tr> # </table> # # <hr> # ## QUBO Formulation with Penalty Method # In the previous section, we learnt about the maximum cut problem, where the problem characteristics led directly to an optimization problem in the desired QUBO form. The natural QUBO form contains no constraints. # # However, there are many problems that include constraints which must be satisfied to reach an optimum solution. Many of these constrained problems can be converted to the desired QUBO form using a method known as the **Penalty Method**. # ### Penalty Method # The penalty method solves a constrained optimization problem by replacing it with a series of unconstrained problems whose solutions approximate the solution of the original constrained problem. # # To create the related unconstrained problems, a **penalty function** is added to the original objective function. The penalties are chosen such that the effects of the constraints are reflected in the final optimal solution. The penalty function consists of a penalty parameter multiplied by a measure of violation of the constraints. The measure of violation is zero in the region where the constraints are not violated and non-zero otherwise. Thus, the penalties are formulated so that they equal zero for feasible solutions and are non-zero for infeasible solutions.
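# The iteration just described can be illustrated on a toy continuous problem (hypothetical, chosen only for clarity): minimize $f(x)=x^2$ subject to $x \geq 2$, i.e. $c(x)=2-x\leq 0$. As the penalty parameter grows, the unconstrained minimizers $2P/(1+P)$ approach the constrained optimum $x^*=2$:

```python
def phi(x, P):
    """Penalized objective: f(x) = x^2 plus P * max(0, c(x))^2 with c(x) = 2 - x."""
    return x**2 + P * max(0.0, 2.0 - x)**2

def minimize_1d(f, lo=-10.0, hi=10.0, iters=200):
    """Ternary search for the minimum of a unimodal 1-D function."""
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

# each iteration increases P and re-solves the unconstrained problem
x = 0.0
for P in [1, 10, 100, 1000, 10000]:
    x = minimize_1d(lambda t: phi(t, P))
    print(P, round(x, 4))  # minimizers 2P/(1+P) converge to 2 as P grows
```

Note that for the QUBO case discussed next, the binary structure lets us pick a single sufficiently large $P$ up front instead of iterating.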
# # Let's assume we have a constrained problem where we are minimizing a function, # $$\min f(x)$$ # where the constraint is, # $$c_i(x)\leq 0, \forall i \in I$$ # So, the related series of unconstrained minimization problems is: # $$\min \Phi_k(x) = f(x) + P_k \sum_{i \in I} g(c_i(x))$$ # # where $g(c_i(x))= \max (0, c_i(x))^2$ is the penalty function and $P_k$ are the penalty parameters. # # In each iteration $k$ of the penalty method, we increase the penalty parameter, solve the unconstrained problem and use its solution for the next iteration. Eventually, the solutions converge to the solution of the original constrained problem. # # Now, let's look at it in another way. # # #### Transforming Linear Equality Constraints To Equivalent Penalties # Suppose we have integer variables $y_1,....,y_k$ and suppose that $f(y_1,....,y_k)$ is our objective function to be minimized. # # Given a linear equality of the form, # $$\sum_{i=1}^{k} a_iy_i=b$$ # # where $a_i, b \in \mathbb{R}$. # The transformation results in the following objective function, # # $$f(y_1,....,y_k)+P\left( \sum_{i=1}^{k}a_iy_i-b\right)^2$$ # # If the variables $y_i$ satisfy the equality, this reduces to the original $f(y_1,....,y_k)$. Note, $P \in \mathbb{N}$ (natural numbers). # # #### Transforming Linear Inequality Constraints To Equivalent Penalties # Given a linear inequality of the form, # $$\sum_{i=1}^{k} a_iy_i \leq b$$ # # Linear inequality constraints have to be transformed to equality constraints by introducing slack variables, i.e. $\eta$. # # $$\sum_{i=1}^{k} a_iy_i+\eta=b$$ # # The transformation results in the following objective function, # # $$f(y_1,....,y_k)+P\left( \sum_{i=1}^{k}a_iy_i+\eta-b\right)^2$$ # # <div class="alert alert-block alert-info">We always assume $\eta \geq 0$. # # Moreover, $\eta$ is an integer variable and has to be converted to binary.
It can be bounded as follows, # # $$0\leq \eta \leq - \left(\sum_{i=1}^{k} \min \{a_i\underline{\rm y_i}, a_i\overline{\rm y_i}\}-b\right)$$ # # where $\underline{\rm y_i}$ is the lower-bound and $\overline{\rm y_i}$ is the upper-bound of $y_i$.</div> # # #### Transforming Integer Variables to Binary Variables # # Since we considered integer variables $y_1,....,y_k$ and QUBO uses binary decision variables, we should convert our integer variables to binary variables. # # For integer variables $y_1,....,y_k$, we have $\underline{\rm y_i} \leq y_i \leq \overline{\rm y_i}$ which can be expressed as $y \in \{\underline{\rm y_i},....,\overline{\rm y_i}\}$. # # We can represent each integer variable as a linear combination of binary variables, # # $$y_i = \sum_{j=0}^{N-1} c_j x_j $$ # # where $c_j \in \mathbb{Z}_+$ (set of non-negative integers),and $x_j \in \{0,1\} \forall j \in \{0,...,N-1\}$. # # This representation is known as **integer encoding**. The width of the integer encoding, denoted by $N$ is the number of binary variables required for encoding the integer variable $y_i$ or the number of bits required to store that integer variable. # # We will consider binary encoding only where $c_j =2^j$,and $N = \lfloor \log_2 (\overline{\rm y_i}) \rfloor +1$. # # $$y_i = \sum_{j=0}^{\lfloor \log_2 (\overline{\rm y_i}) \rfloor} 2^j x_j $$ # # Note the use of the floor function in $N$. # <a id="cases"></a> # #### Specific Cases # For specific constraints, there are some useful known quadratic penalties. # # |Classical Constraint| Equivalent Quadratic Penalty | # |:-----|:-----------| # |$x+y \leq 1$ |$P(xy)$ | # |$x+y \geq 1$ |$P(1-x-y+xy)$| # |$x+y=1$ |$P(1-x-y+2xy)$ | # |$x \leq y$ |$P(x-xy)$ | # |$x_1+ x_2+x_3 \leq 1$ |$P(x_1x_2+x_1x_3+x_2x_3)$ | # |$x=y$ |$P(x+y-2xy)$ | # # <div class="alert alert-block alert-info">All variables here are binary. 
P is a positive, scalar penalty parameter and it must be chosen to be sufficiently large.</div> # ### Task 1 # # Consider a constrained problem of the form: # $$\min y = f(x)$$ # # which is subjected to the constraint $x_1+ x_2 \leq 1$ ($x_1$ & $x_2$ are binary variables). What will be the equivalent penalty for this problem? What will be the form of the related unconstrained problem? # [Click here for solution](QUBO_PenaltyMethod_Solutions.ipynb#task1) # # *** # ### Task 2 # # Consider a constrained problem of the form: # $$\min y = f(x)$$ # # which is subjected to the constraint $x_1+ x_2 = 1$ ($x_1$ & $x_2$ are binary variables). What will be the equivalent penalty for this problem? What will be the form of the related unconstrained problem? # [Click here for solution](QUBO_PenaltyMethod_Solutions.ipynb#task2) # # *** # ### Task 3 # # Consider the following constraint $y_1+ y_2 \geq 10$, where $y_1$ & $y_2$ are integer variables and $0 \leq y_1 \leq 8$ and $0 \leq y_2 \leq 5$. What will be the equivalent penalty? # # Suppose that $f(y_1, y_2)$ is the objective function to be minimized, what will be the new objective function including the penalty? # [Click here for solution](QUBO_PenaltyMethod_Solutions.ipynb#task3) # # *** # ### References # *** # 1. Wikipedia contributors. (2021). *Penalty method.* Wikipedia, The Free Encyclopedia. [Link](https://en.wikipedia.org/w/index.php?title=Penalty_method&oldid=1020411673) # 2. <NAME>, <NAME>, <NAME>. (2019). *Quantum Bridge Analytics I: A Tutorial on Formulating and Using QUBO Models.* [[arXiv Preprint]](https://arxiv.org/abs/1811.11538) # 3. <NAME>, <NAME>, <NAME>. (2021). *Unconstrained Binary Models of the Travelling Salesman Problem Variants for Quantum Optimization.* [[arXiv Preprint]](https://arxiv.org/abs/2106.09056) # 4. <NAME>, <NAME>. (2019). *Practical Integer-to-Binary Mapping for Quantum Annealers*. Quantum Information Processing, Vol. 18, No. 4, 94. [[arXiv Preprint]](https://arxiv.org/abs/1706.01945)
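# As a small code footnote to the encoding discussion above, the binary encoding $y_i=\sum_j 2^j x_j$ with width $N = \lfloor\log_2 \overline{\rm y_i}\rfloor + 1$ can be written out in a few lines of Python (the helper names are illustrative, not from any library):

```python
from math import floor, log2

def encoding_width(ub):
    """N = floor(log2(ub)) + 1 bits for an integer variable with upper bound ub >= 1."""
    return floor(log2(ub)) + 1

def decode(bits):
    """y = sum_j 2^j x_j for binary variables x_j (least significant bit first)."""
    return sum(2**j * x for j, x in enumerate(bits))

# an integer variable with 0 <= y <= 10 needs N = 4 binary variables;
# note the encoding can represent values up to 2^N - 1 = 15 > 10, so the
# upper bound may still need to be enforced separately (e.g. via a penalty).
N = encoding_width(10)
print(N, decode([1, 0, 1, 1]))  # 4 bits; 1 + 4 + 8 = 13
```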
notebooks/QUBO_PenaltyMethod.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python (py36) # language: python # name: py36 # --- # # COOT on MNIST/USPS example # This notebook gives a simple example of the COOT problem between two well-known datasets: MNIST and USPS. # # We recall that for two datasets $\mathbf{X} \in \mathbb{R}^{n \times d}, \mathbf{X'} \in \mathbb{R}^{n' \times d'}$ the COOT problem aims at finding two optimal transport maps $\boldsymbol{\pi^{s}}$ and $\boldsymbol{\pi^{v}}$ between the samples and the features that solve: # # \begin{equation} # \underset{\begin{smallmatrix}\boldsymbol{\pi^{s}} \in \Pi(\mathbf{w},\mathbf{w'}) \\ \boldsymbol{\pi^{v}} \in \Pi(\mathbf{v},\mathbf{v'}) \end{smallmatrix}} {\min} \sum_{ijkl} \left|X_{ik}-X_{jl}\right|^{p} \pi^{s}_{ij} \pi^{v}_{kl} # \end{equation} # # where $\mathbf{w},\mathbf{w'}$ are histograms on the samples and $\mathbf{v},\mathbf{v'}$ are histogram on the features. # # In our example the samples are images and the features are the pixels of the images. 
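# Before loading the real data, the objective above can be evaluated directly for toy sizes. A minimal NumPy sketch with $p=2$ and uniform (product) couplings — all dimensions hypothetical, just to show the four-index contraction:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, n2, d2 = 4, 3, 5, 2              # toy sample and feature counts
X = rng.random((n, d))
X2 = rng.random((n2, d2))

pi_s = np.full((n, n2), 1.0 / (n * n2))  # product coupling w x w' on samples
pi_v = np.full((d, d2), 1.0 / (d * d2))  # product coupling v x v' on features

# cost tensor |X_ik - X'_jl|^p with p = 2, contracted against both couplings
M = (X[:, None, :, None] - X2[None, :, None, :]) ** 2
cost = np.einsum('ijkl,ij,kl->', M, pi_s, pi_v)
print(cost)  # COOT objective for these (non-optimized) couplings
```

The COOT solver minimizes this quantity by alternating between the two couplings; the sketch only evaluates the objective for fixed $\boldsymbol{\pi^{s}}, \boldsymbol{\pi^{v}}$.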
We will have $n,n'=300$ and $d=784, d'=256$ import numpy as np from scipy import ndimage import scipy as sp import matplotlib.pylab as pl import ot import scipy.io import sys sys.path.append('../code') import cot # ### Load the MNIST/USPS dataset # + data=sp.io.loadmat('../data/mnist.mat') Xtot1=data['xapp'].astype(np.float32) Ytot1=data['yapp'] d1=Xtot1.shape[1] Ytot1[Ytot1==10]=0 data=sp.io.loadmat('../data/usps.mat') Xtot2=(data['xapp'].astype(np.float32)+1)/2 Ytot2=data['yapp'] Ytot2-=1 d2=Xtot2.shape[1] np.random.seed(1976) # + def get_data(x,y,nbperclass): xr=np.zeros((0,x.shape[1])) yr=np.zeros((0)) for i in range(np.max(y).astype(int)+1): xi=x[y.ravel()==i,:] idx=np.random.permutation(xi.shape[0]) xr=np.concatenate((xr,xi[idx[:nbperclass],:]),0) yr=np.concatenate((yr,i*np.ones(nbperclass))) return xr,yr #%% We take 300 samples per class nbperclass=300 xs,ys=get_data(Xtot1,Ytot1,nbperclass) xs=xs/255 selmnist=xs.sum(0)>0 ntot=nbperclass*10 xs2=np.zeros((xs.shape[0],d1)) #xs2[:,sel1]=xs xt,ys=get_data(Xtot2,Ytot2,nbperclass) vs=xs.sum(axis=0) # set the weights on the features vs/=vs.sum() vt=xt.sum(axis=0) vt/=vt.sum() # + ot.tic() Ts,Tv,_,log=cot.cot_numpy(xs,xt,v1=vs,v2=vt,niter=100,log=True) # solve COOT ot.toc() pl.figure(1,figsize=(6,4)) pl.plot(log['cost']) pl.title('evolution of cost (no Mass correction)') pl.show() ot.tic() Tsr,Tvr,_,logr=cot.cot_numpy(xs,xt,v1=vs,v2=vt,niter=100,log=True,algo2='sinkhorn',reg2=.5e-2) # solve COOT with sinkhorn ot.toc() # - pl.figure(2,figsize=(6,6)) pl.imshow(Ts) pl.colorbar() pl.show() Tv.shape,Ts.shape # ### Confusion matrix on the samples # We evaluate the COOT ability to find the good assignments of the images (samples), i.e. 
if it aligns well the samples classes between the two datasets based on the knowledge of $\boldsymbol{\pi^{s}}$ # + #%% confusion matrix nbc=10 Cmat=np.zeros((nbc,nbc)) for i in range(ntot): #print(i) for j in range(ntot): if Ts[i,j]: Cmat[int(ys[i]),int(ys[j])]+=Ts[i,j] print('Find the good class in {:.2f}% '.format(100*np.sum(np.diag(Cmat)))) #%% pl.imshow(Cmat*10), pl.colorbar() pl.title('Confusion matrix for COOT between samples') pl.ylabel('Labels MNIST') pl.xlabel('Labels USPS') # - # ### Vizualize the transport on the features # We propose to vizualize the optimal oupling on the features $\boldsymbol{\pi^{v}}$. In order to do that we color code each pixel of an image of size USPS and we transfer to an image of size MNIST. # + # #%%pix dim_source=16 dim_target=28 image = np.zeros((dim_source,dim_source,3)) for i in range(dim_source): for j in range(dim_source): image[i,j,0]=i image[i,j,1]=j image[i,j,2]=dim_source/2 image=image.astype(np.float32)/dim_source diag=1./Tv.sum(axis=1) diag[diag==np.inf]=0 image_target = np.dot(np.diag(diag),np.dot(image.reshape((dim_source*dim_source,3)).T,Tv.T).T) image_target[~selmnist,:]=np.nan #we remove non informative features image_target=image_target.reshape((dim_target,dim_target,3)) diagr=1./Tvr.sum(axis=1) diagr[diagr==np.inf]=0 image_targetr = np.dot(np.diag(diagr),np.dot(image.reshape((dim_source*dim_source,3)).T,Tvr.T).T) image_targetr[~selmnist,:]=np.nan image_targetr=image_targetr.reshape((dim_target,dim_target,3)) pl.figure(3,figsize=(16,32)) pl.subplot(1,2,1) pl.imshow(image) pl.title('source image') pl.axis('off') pl.subplot(1,2,2) pl.imshow(image_target) pl.title('Transfered image') pl.axis('off') pl.show() #%% import scipy.sparse sTs= scipy.sparse.coo_matrix(Ts) row=sTs.row col=sTs.col pl.figure(10,figsize=(14,3.5)) pl.clf() pl.subplot(1,4,1) pl.plot(col,row,'.',markersize=3,alpha=0.5) #pl.spy(Tv,markersize=3,marker='.',alpha=0.5) pl.title('$\pi^s$ matrix between samples') pl.xlabel('USPS samples') 
pl.ylabel('MNIST samples') pl.xticks([300*i for i in range(11)],[' ']*11) pl.yticks([300*i for i in range(11)],[]*11) pl.axis('scaled') pl.xlim((0,ntot)) pl.ylim((ntot,0)) pl.grid() pl.subplot(1,4,2) pl.imshow(Cmat*10,cmap='Blues'), #pl.colorbar() pl.title('Confusion matrix') pl.ylabel('Labels MNIST') pl.xlabel('Labels USPS') #pl.xticks(*pl.yticks()) pl.yticks([i for i in range(10)],[i for i in range(10)]) pl.ylim((nbc-.5,-.5)) pl.xticks([i for i in range(10)],[i for i in range(10)]) pl.xlim((-.5,nbc-.5,)) pl.subplot(1,4,3) pl.imshow(image) pl.title('USPS colored pixels') pl.axis('off') pl.xlim([-6,22]) pl.ylim([-6,22]) pl.subplot(1,4,4) pl.imshow(image_target) pl.title("MNIST pixels through $\pi^v$") pl.axis('off') pl.show() pl.savefig('./mnist_usps.png') pl.savefig('./mnist_usps.pdf',bbox_inches='tight') # - # We observe that the spatial structured is preserved (without supervision): the pixel are transported coherently on the center of the image # ### Vizualize the images after transformation through the optimal couplings # We can also vizualize the images after transformation via the optimal couplings # + #%% nbl,nbc=5,2 #idx_sel=np.random.randint(0,ntot,n_fig) idx_sel=np.arange(0,ntot,nbperclass)+3 xts=xt[idx_sel,:] xss=xs[idx_sel,:] I=np.zeros((28*nbl,28*(nbc*2+1)))+1 for i in range(nbl): for j in range(nbc): I[i*28:(i+1)*28,j*28:(j+1)*28]=xss[i+j*nbl].reshape((28,28)) I[i*28+6:(i)*28+22,j*28+28*(nbc+1)+6:j*28+28*(nbc+1)+22]=xts[i+j*nbl].reshape((16,16)) pl.figure(15) pl.clf() pl.imshow(I,cmap='Blues') pl.axis('off') pl.ylim([ I.shape[0],-10]) pl.text(20,-7,'MNIST',fontsize=15) pl.text(20+28*(nbc+1),-7,'USPS',fontsize=15) # + #%% import scipy.sparse sTv= scipy.sparse.coo_matrix(Ts) row=sTs.row col=sTs.col pl.figure(11,figsize=(16,3.5)) pl.clf() pl.subplot(1,5,1) pl.imshow(I,cmap='gray') pl.axis('off') pl.ylim([ I.shape[0],-10]) pl.text(15,-9,'MNIST',fontsize=12) pl.text(15+28*(nbc+1),-9,'USPS',fontsize=12) pl.subplot(1,5,2) 
pl.plot(col,row,'.',markersize=3,alpha=0.5) #pl.spy(Tv,markersize=3,marker='.',alpha=0.5) pl.title('$\pi^s$ matrix between samples') pl.xlabel('USPS samples') pl.ylabel('MNIST samples') pl.xticks([300*i for i in range(11)],[' ']*11) pl.yticks([300*i for i in range(11)],[]*11) pl.axis('scaled') pl.xlim((0,ntot)) pl.ylim((ntot,0)) pl.grid() pl.subplot(1,5,3) pl.imshow(image) pl.title('USPS colored coded pixels') pl.axis('off') pl.xlim([-6,22]) pl.ylim([22,-6]) pl.subplot(1,5,4) pl.imshow(image_target) pl.title("MNIST pixels through $\pi^v$") pl.axis('off') #pl.show() pl.subplot(1,5,5) pl.imshow(image_targetr) pl.title("MNIST pixels through entropic $\pi^v$") pl.axis('off') #pl.show() pl.savefig('./mnist_usps.png') pl.savefig('./mnist_usps.pdf',bbox_inches='tight') #%% import random import PIL as pil # build a rectangle in axes coords left, width = .25, .5 bottom, height = .25, .5 right = left + width top = bottom + height def predict_barycenter(data,T): diag=1./T.sum(axis=1) diag[diag==np.inf]=0 return np.dot(T,data.T).T.dot(np.diag(diag)) def predict_barycenter_reverse(data,T): diag=1./T.sum(axis=1) diag[diag==np.inf]=0 return np.dot(T,data).T.dot(np.diag(diag)).T random.seed(1985) np.random.seed(1976) n_fig=16 idx_sel=np.random.randint(0,ntot,n_fig) xsel=xs[idx_sel,:] xpred=np.zeros((n_fig,d2)) xpredr=np.zeros((n_fig,d2)) for i in range(n_fig): xpred[i,:]=predict_barycenter(xsel[i,:],Tv.T) xpredr[i,:]=predict_barycenter(xsel[i,:],Tvr.T) cmap_g='gray' pl.figure(figsize=(n_fig,4)) for i in range(n_fig): ax= pl.subplot(4,n_fig,i+1) pl.imshow(xsel[i,:].reshape((28,28)),cmap=cmap_g) pl.axis('off') pl.xticks(()) pl.yticks(()) #pl.xlim([-6,22]) #pl.ylim([22,-6]) if i==0: ax.text(left-.3, 0.5*(bottom+top), 'MNIST', horizontalalignment='right', verticalalignment='center', rotation='vertical', transform=ax.transAxes) #pl.ylabel('MNIST') pl.subplot(4,n_fig,i+1+n_fig) img = pil.Image.fromarray((xsel[i,:]* 255 / np.max(xsel[i,:])).reshape((28,28)) .astype('float32')) img = 
img.resize((16,16)) pl.imshow(img,cmap=cmap_g) pl.axis('off') pl.xticks(()) pl.yticks(()) pl.xlim([-6,22]) pl.ylim([22,-6]) if i==0: ax.text(left-.3, 0.5*(bottom+top) - 1.1, 'Resize', horizontalalignment='right', verticalalignment='center', rotation='vertical', transform=ax.transAxes) ax = pl.subplot(4,n_fig,i+1+2*n_fig) ax.imshow(xpred[i,:].reshape((16,16)),cmap=cmap_g) pl.axis('off') pl.xticks(()) pl.yticks(()) pl.xlim([-6,22]) pl.ylim([22,-6]) if i==0: ax.text(left-.3, 0.5*(bottom+top), 'Map $\pi^v$', horizontalalignment='right', verticalalignment='center', rotation='vertical', transform=ax.transAxes) ax= pl.subplot(4,n_fig,i+1+3*n_fig) pl.imshow(xpredr[i,:].reshape((16,16)),cmap=cmap_g) pl.axis('off') pl.xticks(()) pl.yticks(()) pl.xlim([-6,22]) pl.ylim([22,-6]) if i==0: ax.text(left-.3, 0.5*(bottom+top), 'Map reg $\pi^v$', horizontalalignment='right', verticalalignment='center', rotation='vertical', transform=ax.transAxes) #pl.ylabel('Map reg $\pi^v$') pl.savefig('./transp_piv_mnist.png') pl.savefig('./transp_piv_mnist.pdf',bbox_inches='tight') #%% import random import PIL as pil def predict_barycenter(data,T): diag=1./T.sum(axis=1) diag[diag==np.inf]=0 return np.dot(T,data.T).T.dot(np.diag(diag)) def predict_barycenter_reverse(data,T): diag=1./T.sum(axis=1) diag[diag==np.inf]=0 return np.dot(T,data).T.dot(np.diag(diag)).T random.seed(1985) np.random.seed(1986) n_fig=15 idx_sel=np.random.randint(0,ntot,n_fig) xsel=xt[idx_sel,:] xpred=np.zeros((n_fig,d1)) xpredr=np.zeros((n_fig,d1)) for i in range(n_fig): xpred[i,:]=predict_barycenter(xsel[i,:],Tv) xpredr[i,:]=predict_barycenter(xsel[i,:],Tvr) pl.figure(figsize=(n_fig,4)) cmap_g='gray' for i in range(n_fig): ax=pl.subplot(4,n_fig,i+1) pl.imshow(xsel[i,:].reshape((16,16)),cmap=cmap_g) pl.axis('off') pl.xticks(()) pl.yticks(()) pl.xlim([-6,22]) pl.ylim([22,-6]) if i==0: ax.text(left-.3, 0.5*(bottom+top), 'USPS', horizontalalignment='right', verticalalignment='center', rotation='vertical', transform=ax.transAxes) 
ax=pl.subplot(4,n_fig,i+1+n_fig) img = pil.Image.fromarray((xsel[i,:]* 255 / np.max(xsel[i,:])).reshape((16,16)).astype('float32')) img = img.resize((28,28)) pl.imshow(np.array(img),cmap=cmap_g) pl.axis('off') pl.xticks(()) pl.yticks(()) if i==0: ax.text(left-.3, 0.5*(bottom+top) , 'Resize', horizontalalignment='right', verticalalignment='center',rotation='vertical', transform=ax.transAxes) ax=pl.subplot(4,n_fig,i+1+2*n_fig) pl.imshow(xpred[i,:].reshape((28,28)),cmap=cmap_g) pl.axis('off') pl.xticks(()) pl.yticks(()) if i==0: ax.text(left-.2, 0.5*(bottom+top) , 'Map $\pi^v$', horizontalalignment='right', verticalalignment='center', rotation='vertical', transform=ax.transAxes) ax=pl.subplot(4,n_fig,i+1+3*n_fig) pl.imshow(xpredr[i,:].reshape((28,28)),cmap=cmap_g) pl.axis('off') pl.xticks(()) pl.yticks(()) if i==0: ax.text(left-.2, 0.5*(bottom+top) , 'Map reg $\pi^v$', horizontalalignment='right', verticalalignment='center', rotation='vertical', transform=ax.transAxes) pl.ylabel('Map reg $\pi^v$') pl.savefig('./transp_piv_usps.png') pl.savefig('./transp_piv_usps.pdf',bbox_inches='tight')
example/coot_mnist.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + #### #scaling data #### # #!/usr/bin/env python # encoding: utf-8 r""" Low-level routines for finite-size scaling analysis See Also -------- fssa : The high-level module Notes ----- The **fssa** package provides routines to perform finite-size scaling analyses on experimental data [10]_ [11]_. It has been inspired by Oliver Melchert and his superb **autoScale** package [3]_. References ---------- .. [10] <NAME> and <NAME>, Monte Carlo Methods in Statistical Physics (Oxford University Press, 1999) .. [11] <NAME> and <NAME>, `Monte Carlo Simulation in Statistical Physics <http://dx.doi.org/10.1007/978-3-642-03163-2>`_ (Springer, Berlin, Heidelberg, 2010) .. [3] <NAME>, `arXiv:0910.5403 <http://arxiv.org/abs/0910.5403>`_ (2009) """ # Python 2/3 compatibility from __future__ import (absolute_import, division, print_function, unicode_literals) import warnings from builtins import * from collections import namedtuple import numpy as np import numpy.ma as ma import scipy.optimize #from .optimize import _minimize_neldermead class ScaledData(namedtuple('ScaledData', ['x', 'y', 'dy'])): """ A :py:func:`namedtuple <collections.namedtuple>` for :py:func:`scaledata` output """ # set this to keep memory requirements low, according to # http://docs.python.org/3/library/collections.html#namedtuple-factory-function-for-tuples-with-named-fields __slots__ = () def scaledata3(l, rho, a, da, rho_c, nu, zeta, omega, acst, omega2, acst2): r''' Scale experimental data according to critical exponents Parameters ---------- l, rho : 1-D array_like finite system sizes `l` and parameter values `rho` a, da : 2-D array_like of shape (`l`.size, `rho`.size) experimental data `a` with standard errors `da` obtained at finite system sizes `l` and parameter values `rho`, with 
``a.shape == da.shape == (l.size, rho.size)`` rho_c : float in range [rho.min(), rho.max()] (assumed) critical parameter value with ``rho_c >= rho.min() and rho_c <= rho.max()`` nu, zeta : float (assumed) critical exponents Returns ------- :py:class:`ScaledData` scaled data `x`, `y` with standard errors `dy` x, y, dy : ndarray two-dimensional arrays of shape ``(l.size, rho.size)`` Notes ----- Scale data points :math:`(\varrho_j, a_{ij}, da_{ij})` observed at finite system sizes :math:`L_i` and parameter values :math:`\varrho_i` according to the finite-size scaling ansatz .. math:: L^{-\zeta/\nu} a_{ij} = \tilde{f}\left( L^{1/\nu} (\varrho_j - \varrho_c) \right). The output is the scaled data points :math:`(x_{ij}, y_{ij}, dy_{ij})` with .. math:: x_{ij} & = L_i^{1/\nu} (\varrho_j - \varrho_c) \\ y_{ij} & = L_i^{-\zeta/\nu} a_{ij} \\ dy_{ij} & = L_i^{-\zeta/\nu} da_{ij} such that all data points :ref:`collapse <data-collapse-method>` onto the single curve :math:`\tilde{f}(x)` with the right choice of :math:`\varrho_c, \nu, \zeta` [4]_ [5]_. Raises ------ ValueError If `l` or `rho` is not 1-D array_like, if `a` or `da` is not 2-D array_like, if the shape of `a` or `da` differs from ``(l.size, rho.size)`` References ---------- .. [4] <NAME> and <NAME>, Monte Carlo Methods in Statistical Physics (Oxford University Press, 1999) .. 
[5] <NAME> and <NAME>, `Monte Carlo Simulation in Statistical Physics <http://dx.doi.org/10.1007/978-3-642-03163-2>`_ (Springer, Berlin, Heidelberg, 2010) ''' # l should be 1-D array_like l = np.asanyarray(l) if l.ndim != 1: raise ValueError("l should be 1-D array_like") # rho should be 1-D array_like rho = np.asanyarray(rho) if rho.ndim != 1: raise ValueError("rho should be 1-D array_like") # a should be 2-D array_like a = np.asanyarray(a) if a.ndim != 2: raise ValueError("a should be 2-D array_like") # a should have shape (l.size, rho.size) if a.shape != (l.size, rho.size): raise ValueError("a should have shape (l.size, rho.size)") # da should be 2-D array_like da = np.asanyarray(da) if da.ndim != 2: raise ValueError("da should be 2-D array_like") # da should have shape (l.size, rho.size) if da.shape != (l.size, rho.size): raise ValueError("da should have shape (l.size, rho.size)") # rho_c should be float rho_c = float(rho_c) # rho_c should be in range if rho_c > rho.max() or rho_c < rho.min(): warnings.warn("rho_c is out of range", RuntimeWarning) # nu should be float nu = float(nu) # zeta should be float zeta = float(zeta) l_mesh, rho_mesh = np.meshgrid(l, rho, indexing='ij') nu0 = 5/6 rho_c0 = 1.202 x = (1.0/(1.0 + 10**acst2 * np.power(l_mesh, - 10**omega2) ) ) * np.power(l_mesh, 1. / nu) * (rho_mesh - rho_c0) y = (1.0/(1.0 + 10**acst * np.power(l_mesh, - 10**omega) ) ) * np.power(l_mesh, - zeta / nu) * a dy = (1.0/(1.0 + 10**acst * np.power(l_mesh, - 10**omega) ) ) * np.power(l_mesh, - zeta / nu) * da #y = (1.0/(1.0 + acst * np.power(l_mesh, - omega) ) ) * np.power(l_mesh, - zeta ) * a #dy = (1.0/(1.0 + acst * np.power(l_mesh, - omega) ) ) * np.power(l_mesh, - zeta) * da return ScaledData(x, y, dy) def _wls_linearfit_predict(x, w, wx, wy, wxx, wxy, select): """ Predict a point according to a weighted least squares linear fit of the data This function is a helper function for :py:func:`quality`. It is not supposed to be called directly. 
Parameters ---------- x : float The position for which to predict the function value w : ndarray The pre-calculated weights :math:`w_l` wx : ndarray The pre-calculated weighted `x` data :math:`w_l x_l` wy : ndarray The pre-calculated weighted `y` data :math:`w_l y_l` wxx : ndarray The pre-calculated weighted :math:`x^2` data :math:`w_l x_l^2` wxy : ndarray The pre-calculated weighted `x y` data :math:`w_l x_l y_l` select : indexing array To select the subset from the `w`, `wx`, `wy`, `wxx`, `wxy` data Returns ------- float, float The estimated value of the master curve for the selected subset and the squared standard error """ # linear fit k = w[select].sum() kx = wx[select].sum() ky = wy[select].sum() kxx = wxx[select].sum() kxy = wxy[select].sum() delta = k * kxx - kx ** 2 m = 1. / delta * (k * kxy - kx * ky) b = 1. / delta * (kxx * ky - kx * kxy) b_var = kxx / delta m_var = k / delta bm_covar = - kx / delta # estimation y = b + m * x dy2 = b_var + 2 * bm_covar * x + m_var * x**2 return y, dy2 def _jprimes(x, i, x_bounds=None): """ Helper function to return the j' indices for the master curve fit This function is a helper function for :py:func:`quality`. It is not supposed to be called directly. Parameters ---------- x : mapping to ndarrays The x values. i : int The row index (finite size index) x_bounds : 2-tuple, optional bounds on x values Returns ------- ret : mapping to ndarrays Has the same keys and shape as `x`. Its element ``ret[i'][j]`` is the j' such that :math:`x_{i'j'} \leq x_{ij} < x_{i'(j'+1)}`. If no such j' exists, the element is np.nan. Convert the element to int to use as an index. 
    """
    j_primes = - np.ones_like(x)
    try:
        x_masked = ma.masked_outside(x, x_bounds[0], x_bounds[1])
    except (TypeError, IndexError):
        x_masked = ma.asanyarray(x)
    k, n = x.shape
    # indices of lower and upper bounds
    edges = ma.notmasked_edges(x_masked, axis=1)
    x_lower = np.zeros(k, dtype=int)
    x_upper = np.zeros(k, dtype=int)
    x_lower[edges[0][0]] = edges[0][-1]
    x_upper[edges[-1][0]] = edges[-1][-1]
    for i_prime in range(k):
        if i_prime == i:
            j_primes[i_prime][:] = np.nan
            continue
        jprimes = np.searchsorted(
            x[i_prime], x[i], side='right'
        ).astype(float) - 1
        jprimes[
            np.logical_or(
                jprimes < x_lower[i_prime],
                jprimes >= x_upper[i_prime]
            )
        ] = np.nan
        j_primes[i_prime][:] = jprimes
    return j_primes


def _select_mask(j, j_primes):
    """
    Return a boolean mask for selecting the data subset according to the j'

    Parameters
    ----------
    j : int
        current j index
    j_primes : ndarray
        result from _jprimes call
    """
    ret = np.zeros_like(j_primes, dtype=bool)
    my_iprimes = np.invert(np.isnan(j_primes[:, j])).nonzero()[0]
    my_jprimes = j_primes[my_iprimes, j]
    # the np.int alias was removed from NumPy; use the builtin int instead
    my_jprimes = my_jprimes.astype(int)
    ret[my_iprimes, my_jprimes] = True
    ret[my_iprimes, my_jprimes + 1] = True
    return ret


def quality(x, y, dy, x_bounds=None):
    r'''
    Quality of data collapse onto a master curve defined by the data

    This is the reduced chi-square statistic for a data fit except that the
    master curve is fitted from the data itself.

    Parameters
    ----------
    x, y, dy : 2-D array_like
        output from :py:func:`scaledata`, scaled data `x`, `y` with standard
        errors `dy`
    x_bounds : tuple of floats, optional
        lower and upper bound for scaled data `x` to consider

    Returns
    -------
    float
        the quality of the data collapse

    Raises
    ------
    ValueError
        if not all arrays `x`, `y`, `dy` have dimension 2, or if not all
        arrays are of the same shape, or if `x` is not sorted along rows
        (``axis=1``), or if `dy` does not have only positive entries

    Notes
    -----
    This is the implementation of the reduced :math:`\chi^2` quality function
    :math:`S` by Houdayer & Hartmann [6]_.
It should attain a minimum of around :math:`1` for an optimal fit, and be much larger otherwise. For further information, see the :ref:`quality-function` section in the manual. References ---------- .. [6] <NAME> and <NAME>, Physical Review B 70, 014418+ (2004) `doi:10.1103/physrevb.70.014418 <http://dx.doi.org/doi:10.1103/physrevb.70.014418>`_ ''' # arguments should be 2-D array_like x = np.asanyarray(x) y = np.asanyarray(y) dy = np.asanyarray(dy) args = {"x": x, "y": y, "dy": dy} for arg_name, arg in args.items(): if arg.ndim != 2: raise ValueError("{} should be 2-D array_like".format(arg_name)) # arguments should have all the same shape if not x.shape == y.shape == dy.shape: raise ValueError("arguments should be of same shape") # x should be sorted for all system sizes l if not np.array_equal(x, np.sort(x, axis=1)): raise ValueError("x should be sorted for each system size") # dy should have only positive entries if not np.all(dy > 0.0): raise ValueError("dy should have only positive values") # first dimension: system sizes l # second dimension: parameter values rho k, n = x.shape # pre-calculate weights and other matrices w = dy ** (-2) wx = w * x wy = w * y wxx = w * x * x wxy = w * x * y # calculate master curve estimates master_y = np.zeros_like(y) master_y[:] = np.nan master_dy2 = np.zeros_like(dy) master_dy2[:] = np.nan # loop through system sizes for i in range(k): j_primes = _jprimes(x=x, i=i, x_bounds=x_bounds) # loop through x values for j in range(n): # discard x value if it is out of bounds try: if not x_bounds[0] <= x[i][j] <= x_bounds[1]: continue except: pass # boolean mask for selected data x_l, y_l, dy_l select = _select_mask(j=j, j_primes=j_primes) if not select.any(): # no data to select # master curve estimate Y_ij remains undefined continue # master curve estimate master_y[i, j], master_dy2[i, j] = _wls_linearfit_predict( x=x[i, j], w=w, wx=wx, wy=wy, wxx=wxx, wxy=wxy, select=select ) # average within finite system sizes first return 
np.nanmean( np.nanmean( (y - master_y) ** 2 / (dy ** 2 + master_dy2), axis=1 ) ) def _neldermead_errors(sim, fsim, fun): """ Estimate the errors from the final simplex of the Nelder--Mead algorithm This is a helper function and not supposed to be called directly. Parameters ---------- sim : ndarray the final simplex fsim : ndarray the function values at the vertices of the final simplex fun : callable the goal function to minimize """ # fit quadratic coefficients n = len(sim) - 1 ymin = fsim[0] sim = np.copy(sim) fsim = np.copy(fsim) centroid = np.mean(sim, axis=0) fcentroid = fun(centroid) # enlarge distance of simplex vertices from centroid until all have at # least an absolute function value distance of 0.1 for i in range(n + 1): while np.abs(fsim[i] - fcentroid) < 0.01: sim[i] += sim[i] - centroid fsim[i] = fun(sim[i]) # the vertices and the midpoints x_ij x = 0.5 * ( sim[np.mgrid[0:n + 1, 0:n + 1]][1] + sim[np.mgrid[0:n + 1, 0:n + 1]][0] ) y = np.nan * np.ones(shape=(n + 1, n + 1)) for i in range(n + 1): y[i, i] = fsim[i] for j in range(i + 1, n + 1): y[i, j] = y[j, i] = fun(x[i, j]) y0i = y[np.mgrid[0:n + 1, 0:n + 1]][0][1:, 1:, 0] y0j = y[np.mgrid[0:n + 1, 0:n + 1]][0][0, 1:, 1:] b = 2 * (y[1:, 1:] + y[0, 0] - y0i - y0j) q = (sim - sim[0])[1:].T varco = ymin * np.dot(q, np.dot(np.linalg.inv(b), q.T)) return np.sqrt(np.diag(varco)), varco def autoscale3(l, rho, a, da, rho_c0, nu0, zeta0, omega0, acst0, omega20, acst20, x_bounds=None, **kwargs): """ Automatically scale finite-size data and fit critical point and exponents Parameters ---------- l, rho, a, da : array_like input for the :py:func:`scaledata` function rho_c0, nu0, zeta0 : float initial guesses for the critical point and exponents x_bounds : tuple of floats, optional lower and upper bound for scaled data `x` to consider Returns ------- res : OptimizeResult res['success'] : bool Indicates whether the optimization algorithm has terminated successfully. 
res['x'] : ndarray res['rho'], res['nu'], res['zeta'] : float The fitted critical point and exponents, ``res['x'] == [res['rho'], res['nu'], res['zeta']]`` res['drho'], res['dnu'], res['dzeta'] : float The respective standard errors derived from fitting the curvature at the minimum, ``res['errors'] == [res['drho'], res['dnu'], res['dzeta']]``. res['errors'], res['varco'] : ndarray The standard errors as a vector, and the full variance--covariance matrix (the diagonal entries of which are the squared standard errors), ``np.sqrt(np.diag(res['varco'])) == res['errors']`` See also -------- scaledata For the `l`, `rho`, `a`, `da` input parameters quality The goal function of the optimization scipy.optimize.minimize The optimization wrapper routine scipy.optimize.OptimizeResult The return type Notes ----- This implementation uses the quality function by Houdayer & Hartmann [8]_ which measures the quality of the data collapse, see the sections :ref:`data-collapse-method` and :ref:`quality-function` in the manual. This function and the whole fssa package have been inspired by Oliver Melchert and his superb **autoScale** package [9]_. The critical point and exponents, including its standard errors and (co)variances, are fitted by the Nelder--Mead algorithm, see the section :ref:`neldermead` in the manual. References ---------- .. [8] <NAME> and <NAME>, Physical Review B 70, 014418+ (2004) `doi:10.1103/physrevb.70.014418 <http://dx.doi.org/doi:10.1103/physrevb.70.014418>`_ .. [9] <NAME>, `arXiv:0910.5403 <http://arxiv.org/abs/0910.5403>`_ (2009) Examples -------- >>> # generate artificial scaling data from master curve >>> # with rho_c == 1.0, nu == 2.0, zeta == 0.0 >>> import fssa >>> l = [ 10, 100, 1000 ] >>> rho = np.linspace(0.9, 1.1) >>> l_mesh, rho_mesh = np.meshgrid(l, rho, indexing='ij') >>> master_curve = lambda x: 1. / (1. + np.exp( - x)) >>> x = np.power(l_mesh, 0.5) * (rho_mesh - 1.) >>> y = master_curve(x) >>> dy = y / 100. 
>>> y += np.random.randn(*y.shape) * dy >>> a = y >>> da = dy >>> >>> # run autoscale >>> res = fssa.autoscale(l=l, rho=rho, a=a, da=da, rho_c0=0.9, nu0=2.0, zeta0=0.0) """ def goal_function(x): my_x, my_y, my_dy = scaledata3( rho=rho, l=l, a=a, da=da, rho_c=x[0], nu=x[1], zeta=x[2], omega = x[3], acst = x[4], omega2 = x[5], acst2 = x[6] ) return quality( my_x, my_y, my_dy, x_bounds=x_bounds, ) ret = scipy.optimize.minimize( goal_function, [rho_c0, nu0, zeta0, omega0, acst0, omega20, acst20], method='Nelder-Mead', options={ 'xtol': 1e-4, 'ftol': 1e-4, } ) errors, varco = _neldermead_errors( sim=ret['final_simplex'][0], fsim=ret['final_simplex'][1], fun=goal_function, ) ret['varco'] = varco ret['errors'] = errors ret['rho'], ret['nu'], ret['zeta'], ret['omega'], ret['acst'], ret['omega2'], ret['acst2'] = ret['x'] ret['drho'], ret['dnu'], ret['dzeta'], ret['domega'], ret['dacst'], ret['domega2'], ret['dacst2']= ret['errors'] return ret # + #### #scaling data #### # #!/usr/bin/env python # encoding: utf-8 r""" Low-level routines for finite-size scaling analysis See Also -------- fssa : The high-level module Notes ----- The **fssa** package provides routines to perform finite-size scaling analyses on experimental data [10]_ [11]_. It has been inspired by <NAME> and his superb **autoScale** package [3]_. References ---------- .. [10] <NAME> and <NAME>, Monte Carlo Methods in Statistical Physics (Oxford University Press, 1999) .. [11] <NAME> and <NAME>, `Monte Carlo Simulation in Statistical Physics <http://dx.doi.org/10.1007/978-3-642-03163-2>`_ (Springer, Berlin, Heidelberg, 2010) .. 
[3] <NAME>, `arXiv:0910.5403 <http://arxiv.org/abs/0910.5403>`_ (2009) """ # Python 2/3 compatibility from __future__ import (absolute_import, division, print_function, unicode_literals) import warnings from builtins import * from collections import namedtuple import numpy as np import numpy.ma as ma import scipy.optimize #from .optimize import _minimize_neldermead class ScaledData(namedtuple('ScaledData', ['x', 'y', 'dy'])): """ A :py:func:`namedtuple <collections.namedtuple>` for :py:func:`scaledata` output """ # set this to keep memory requirements low, according to # http://docs.python.org/3/library/collections.html#namedtuple-factory-function-for-tuples-with-named-fields __slots__ = () def scaledata4(l, rho, a, da, omega, acst, omega2, acst2): rho_c = 1.205 nu = 5/6 zeta = 1/3 r''' Scale experimental data according to critical exponents Parameters ---------- l, rho : 1-D array_like finite system sizes `l` and parameter values `rho` a, da : 2-D array_like of shape (`l`.size, `rho`.size) experimental data `a` with standard errors `da` obtained at finite system sizes `l` and parameter values `rho`, with ``a.shape == da.shape == (l.size, rho.size)`` rho_c : float in range [rho.min(), rho.max()] (assumed) critical parameter value with ``rho_c >= rho.min() and rho_c <= rho.max()`` nu, zeta : float (assumed) critical exponents Returns ------- :py:class:`ScaledData` scaled data `x`, `y` with standard errors `dy` x, y, dy : ndarray two-dimensional arrays of shape ``(l.size, rho.size)`` Notes ----- Scale data points :math:`(\varrho_j, a_{ij}, da_{ij})` observed at finite system sizes :math:`L_i` and parameter values :math:`\varrho_i` according to the finite-size scaling ansatz .. math:: L^{-\zeta/\nu} a_{ij} = \tilde{f}\left( L^{1/\nu} (\varrho_j - \varrho_c) \right). The output is the scaled data points :math:`(x_{ij}, y_{ij}, dy_{ij})` with .. 
math:: x_{ij} & = L_i^{1/\nu} (\varrho_j - \varrho_c) \\ y_{ij} & = L_i^{-\zeta/\nu} a_{ij} \\ dy_{ij} & = L_i^{-\zeta/\nu} da_{ij} such that all data points :ref:`collapse <data-collapse-method>` onto the single curve :math:`\tilde{f}(x)` with the right choice of :math:`\varrho_c, \nu, \zeta` [4]_ [5]_. Raises ------ ValueError If `l` or `rho` is not 1-D array_like, if `a` or `da` is not 2-D array_like, if the shape of `a` or `da` differs from ``(l.size, rho.size)`` References ---------- .. [4] <NAME> and <NAME>, Monte Carlo Methods in Statistical Physics (Oxford University Press, 1999) .. [5] <NAME> and <NAME>, `Monte Carlo Simulation in Statistical Physics <http://dx.doi.org/10.1007/978-3-642-03163-2>`_ (Springer, Berlin, Heidelberg, 2010) ''' # l should be 1-D array_like l = np.asanyarray(l) if l.ndim != 1: raise ValueError("l should be 1-D array_like") # rho should be 1-D array_like rho = np.asanyarray(rho) if rho.ndim != 1: raise ValueError("rho should be 1-D array_like") # a should be 2-D array_like a = np.asanyarray(a) if a.ndim != 2: raise ValueError("a should be 2-D array_like") # a should have shape (l.size, rho.size) if a.shape != (l.size, rho.size): raise ValueError("a should have shape (l.size, rho.size)") # da should be 2-D array_like da = np.asanyarray(da) if da.ndim != 2: raise ValueError("da should be 2-D array_like") # da should have shape (l.size, rho.size) if da.shape != (l.size, rho.size): raise ValueError("da should have shape (l.size, rho.size)") # rho_c should be float rho_c = float(rho_c) # rho_c should be in range if rho_c > rho.max() or rho_c < rho.min(): warnings.warn("rho_c is out of range", RuntimeWarning) # nu should be float nu = float(nu) # zeta should be float zeta = float(zeta) l_mesh, rho_mesh = np.meshgrid(l, rho, indexing='ij') x = (1.0/(1.0 + acst2 * np.power(l_mesh, - omega2) ) ) * np.power(l_mesh, 1. 
/ nu) * (rho_mesh - rho_c) y = (1.0/(1.0 + acst * np.power(l_mesh, - omega) ) ) * np.power(l_mesh, - zeta / nu) * a dy = (1.0/(1.0 + acst * np.power(l_mesh, - omega) ) ) * np.power(l_mesh, - zeta / nu) * da return ScaledData(x, y, dy) def _wls_linearfit_predict(x, w, wx, wy, wxx, wxy, select): """ Predict a point according to a weighted least squares linear fit of the data This function is a helper function for :py:func:`quality`. It is not supposed to be called directly. Parameters ---------- x : float The position for which to predict the function value w : ndarray The pre-calculated weights :math:`w_l` wx : ndarray The pre-calculated weighted `x` data :math:`w_l x_l` wy : ndarray The pre-calculated weighted `y` data :math:`w_l y_l` wxx : ndarray The pre-calculated weighted :math:`x^2` data :math:`w_l x_l^2` wxy : ndarray The pre-calculated weighted `x y` data :math:`w_l x_l y_l` select : indexing array To select the subset from the `w`, `wx`, `wy`, `wxx`, `wxy` data Returns ------- float, float The estimated value of the master curve for the selected subset and the squared standard error """ # linear fit k = w[select].sum() kx = wx[select].sum() ky = wy[select].sum() kxx = wxx[select].sum() kxy = wxy[select].sum() delta = k * kxx - kx ** 2 m = 1. / delta * (k * kxy - kx * ky) b = 1. / delta * (kxx * ky - kx * kxy) b_var = kxx / delta m_var = k / delta bm_covar = - kx / delta # estimation y = b + m * x dy2 = b_var + 2 * bm_covar * x + m_var * x**2 return y, dy2 def _jprimes(x, i, x_bounds=None): """ Helper function to return the j' indices for the master curve fit This function is a helper function for :py:func:`quality`. It is not supposed to be called directly. Parameters ---------- x : mapping to ndarrays The x values. i : int The row index (finite size index) x_bounds : 2-tuple, optional bounds on x values Returns ------- ret : mapping to ndarrays Has the same keys and shape as `x`. 
    Its element ``ret[i'][j]`` is the j' such that
    :math:`x_{i'j'} \leq x_{ij} < x_{i'(j'+1)}`.
    If no such j' exists, the element is np.nan.
    Convert the element to int to use as an index.
    """
    j_primes = - np.ones_like(x)
    try:
        x_masked = ma.masked_outside(x, x_bounds[0], x_bounds[1])
    except (TypeError, IndexError):
        x_masked = ma.asanyarray(x)
    k, n = x.shape
    # indices of lower and upper bounds
    edges = ma.notmasked_edges(x_masked, axis=1)
    x_lower = np.zeros(k, dtype=int)
    x_upper = np.zeros(k, dtype=int)
    x_lower[edges[0][0]] = edges[0][-1]
    x_upper[edges[-1][0]] = edges[-1][-1]
    for i_prime in range(k):
        if i_prime == i:
            j_primes[i_prime][:] = np.nan
            continue
        jprimes = np.searchsorted(
            x[i_prime], x[i], side='right'
        ).astype(float) - 1
        jprimes[
            np.logical_or(
                jprimes < x_lower[i_prime],
                jprimes >= x_upper[i_prime]
            )
        ] = np.nan
        j_primes[i_prime][:] = jprimes
    return j_primes


def _select_mask(j, j_primes):
    """
    Return a boolean mask for selecting the data subset according to the j'

    Parameters
    ----------
    j : int
        current j index
    j_primes : ndarray
        result from _jprimes call
    """
    ret = np.zeros_like(j_primes, dtype=bool)
    my_iprimes = np.invert(np.isnan(j_primes[:, j])).nonzero()[0]
    my_jprimes = j_primes[my_iprimes, j]
    # the np.int alias was removed from NumPy; use the builtin int instead
    my_jprimes = my_jprimes.astype(int)
    ret[my_iprimes, my_jprimes] = True
    ret[my_iprimes, my_jprimes + 1] = True
    return ret


def quality(x, y, dy, x_bounds=None):
    r'''
    Quality of data collapse onto a master curve defined by the data

    This is the reduced chi-square statistic for a data fit except that the
    master curve is fitted from the data itself.
Parameters ---------- x, y, dy : 2-D array_like output from :py:func:`scaledata`, scaled data `x`, `y` with standard errors `dy` x_bounds : tuple of floats, optional lower and upper bound for scaled data `x` to consider Returns ------- float the quality of the data collapse Raises ------ ValueError if not all arrays `x`, `y`, `dy` have dimension 2, or if not all arrays are of the same shape, or if `x` is not sorted along rows (``axis=1``), or if `dy` does not have only positive entries Notes ----- This is the implementation of the reduced :math:`\chi^2` quality function :math:`S` by Houdayer & Hartmann [6]_. It should attain a minimum of around :math:`1` for an optimal fit, and be much larger otherwise. For further information, see the :ref:`quality-function` section in the manual. References ---------- .. [6] <NAME> and <NAME>, Physical Review B 70, 014418+ (2004) `doi:10.1103/physrevb.70.014418 <http://dx.doi.org/doi:10.1103/physrevb.70.014418>`_ ''' # arguments should be 2-D array_like x = np.asanyarray(x) y = np.asanyarray(y) dy = np.asanyarray(dy) args = {"x": x, "y": y, "dy": dy} for arg_name, arg in args.items(): if arg.ndim != 2: raise ValueError("{} should be 2-D array_like".format(arg_name)) # arguments should have all the same shape if not x.shape == y.shape == dy.shape: raise ValueError("arguments should be of same shape") # x should be sorted for all system sizes l #print(x==np.sort(x, axis=1)) #if not np.array_equal(x, np.sort(x, axis=1)): # raise ValueError("x should be sorted for each system size") # dy should have only positive entries if not np.all(dy > 0.0): raise ValueError("dy should have only positive values") # first dimension: system sizes l # second dimension: parameter values rho k, n = x.shape # pre-calculate weights and other matrices w = dy ** (-2) wx = w * x wy = w * y wxx = w * x * x wxy = w * x * y # calculate master curve estimates master_y = np.zeros_like(y) master_y[:] = np.nan master_dy2 = np.zeros_like(dy) master_dy2[:] = np.nan 
# loop through system sizes for i in range(k): j_primes = _jprimes(x=x, i=i, x_bounds=x_bounds) # loop through x values for j in range(n): # discard x value if it is out of bounds try: if not x_bounds[0] <= x[i][j] <= x_bounds[1]: continue except: pass # boolean mask for selected data x_l, y_l, dy_l select = _select_mask(j=j, j_primes=j_primes) if not select.any(): # no data to select # master curve estimate Y_ij remains undefined continue # master curve estimate master_y[i, j], master_dy2[i, j] = _wls_linearfit_predict( x=x[i, j], w=w, wx=wx, wy=wy, wxx=wxx, wxy=wxy, select=select ) # average within finite system sizes first return np.nanmean( np.nanmean( (y - master_y) ** 2 / (dy ** 2 + master_dy2), axis=1 ) ) def _neldermead_errors(sim, fsim, fun): """ Estimate the errors from the final simplex of the Nelder--Mead algorithm This is a helper function and not supposed to be called directly. Parameters ---------- sim : ndarray the final simplex fsim : ndarray the function values at the vertices of the final simplex fun : callable the goal function to minimize """ # fit quadratic coefficients n = len(sim) - 1 ymin = fsim[0] sim = np.copy(sim) fsim = np.copy(fsim) centroid = np.mean(sim, axis=0) fcentroid = fun(centroid) # enlarge distance of simplex vertices from centroid until all have at # least an absolute function value distance of 0.1 for i in range(n + 1): while np.abs(fsim[i] - fcentroid) < 0.01: sim[i] += sim[i] - centroid fsim[i] = fun(sim[i]) # the vertices and the midpoints x_ij x = 0.5 * ( sim[np.mgrid[0:n + 1, 0:n + 1]][1] + sim[np.mgrid[0:n + 1, 0:n + 1]][0] ) y = np.nan * np.ones(shape=(n + 1, n + 1)) for i in range(n + 1): y[i, i] = fsim[i] for j in range(i + 1, n + 1): y[i, j] = y[j, i] = fun(x[i, j]) y0i = y[np.mgrid[0:n + 1, 0:n + 1]][0][1:, 1:, 0] y0j = y[np.mgrid[0:n + 1, 0:n + 1]][0][0, 1:, 1:] b = 2 * (y[1:, 1:] + y[0, 0] - y0i - y0j) q = (sim - sim[0])[1:].T varco = ymin * np.dot(q, np.dot(np.linalg.inv(b), q.T)) return 
np.sqrt(np.diag(varco)), varco def autoscale4(l, rho, a, da, omega0, acst0, omega20, acst20, x_bounds=None, **kwargs): (rho_c0, nu0, zeta0) = (1.205, 5/6, 1/3) """ Automatically scale finite-size data and fit critical point and exponents Parameters ---------- l, rho, a, da : array_like input for the :py:func:`scaledata` function rho_c0, nu0, zeta0 : float initial guesses for the critical point and exponents x_bounds : tuple of floats, optional lower and upper bound for scaled data `x` to consider Returns ------- res : OptimizeResult res['success'] : bool Indicates whether the optimization algorithm has terminated successfully. res['x'] : ndarray res['rho'], res['nu'], res['zeta'] : float The fitted critical point and exponents, ``res['x'] == [res['rho'], res['nu'], res['zeta']]`` res['drho'], res['dnu'], res['dzeta'] : float The respective standard errors derived from fitting the curvature at the minimum, ``res['errors'] == [res['drho'], res['dnu'], res['dzeta']]``. res['errors'], res['varco'] : ndarray The standard errors as a vector, and the full variance--covariance matrix (the diagonal entries of which are the squared standard errors), ``np.sqrt(np.diag(res['varco'])) == res['errors']`` See also -------- scaledata For the `l`, `rho`, `a`, `da` input parameters quality The goal function of the optimization scipy.optimize.minimize The optimization wrapper routine scipy.optimize.OptimizeResult The return type Notes ----- This implementation uses the quality function by Houdayer & Hartmann [8]_ which measures the quality of the data collapse, see the sections :ref:`data-collapse-method` and :ref:`quality-function` in the manual. This function and the whole fssa package have been inspired by Oliver Melchert and his superb **autoScale** package [9]_. The critical point and exponents, including its standard errors and (co)variances, are fitted by the Nelder--Mead algorithm, see the section :ref:`neldermead` in the manual. References ---------- .. 
[8] <NAME> and <NAME>, Physical Review B 70, 014418+ (2004) `doi:10.1103/physrevb.70.014418 <http://dx.doi.org/doi:10.1103/physrevb.70.014418>`_ .. [9] <NAME>, `arXiv:0910.5403 <http://arxiv.org/abs/0910.5403>`_ (2009) Examples -------- >>> # generate artificial scaling data from master curve >>> # with rho_c == 1.0, nu == 2.0, zeta == 0.0 >>> import fssa >>> l = [ 10, 100, 1000 ] >>> rho = np.linspace(0.9, 1.1) >>> l_mesh, rho_mesh = np.meshgrid(l, rho, indexing='ij') >>> master_curve = lambda x: 1. / (1. + np.exp( - x)) >>> x = np.power(l_mesh, 0.5) * (rho_mesh - 1.) >>> y = master_curve(x) >>> dy = y / 100. >>> y += np.random.randn(*y.shape) * dy >>> a = y >>> da = dy >>> >>> # run autoscale >>> res = fssa.autoscale(l=l, rho=rho, a=a, da=da, rho_c0=0.9, nu0=2.0, zeta0=0.0) """ def goal_function(x): my_x, my_y, my_dy = scaledata4( rho=rho, l=l, a=a, da=da, omega = x[0], acst = x[1], omega2 = x[2], acst2 = x[3] ) return quality( my_x, my_y, my_dy, x_bounds=x_bounds, ) ret = scipy.optimize.minimize( goal_function, [omega0, acst0, omega20, acst20], method='Nelder-Mead', options={ 'xtol': 1e-4, 'ftol': 1e-4, } ) errors, varco = _neldermead_errors( sim=ret['final_simplex'][0], fsim=ret['final_simplex'][1], fun=goal_function, ) ret['varco'] = varco ret['errors'] = errors ret['omega'], ret['acst'], ret['omega2'], ret['acst2'] = ret['x'] ret['domega'], ret['dacst'], ret['domega2'], ret['dacst2']= ret['errors'] return ret # + # #!/usr/bin/env python # coding: utf-8 from __future__ import division import numpy as np from numpy.random import rand from numpy import linalg as LA import matplotlib import matplotlib.pyplot as plt import matplotlib.patches as patches from matplotlib.lines import Line2D import matplotlib.lines as mlines import math import sys import os from random import shuffle from scipy.interpolate import UnivariateSpline from scipy.interpolate import InterpolatedUnivariateSpline from scipy.optimize import fmin from scipy.optimize import fsolve from scipy 
import interpolate from scipy.optimize import curve_fit import scipy.optimize as opt import matplotlib.colors as colors import matplotlib.cm as cmx from scipy.signal import savgol_filter from random import gauss import matplotlib.ticker as ticker from matplotlib.ticker import (MultipleLocator, FormatStrFormatter, AutoMinorLocator) #ipython specific # %matplotlib inline from matplotlib import rc rc('font',**{'family':'sans-serif', 'size' : 10}) #, 'sans-serif':['Arial']}) ## for Palatino and other serif fonts use: #rc('font',**{'family':'serif','serif':['Palatino']}) rc('text', usetex=True) color_red = (0.73, 0.13869999999999993, 0.) color_orange = (1., 0.6699999999999999, 0.) color_green = (0.14959999999999996, 0.43999999999999995, 0.12759999999999994) color_blue = (0.06673600000000002, 0.164512, 0.776) color_purple = (0.25091600000000003, 0.137378, 0.29800000000000004) color_ocker = (0.6631400000000001, 0.71, 0.1491) color_pink = (0.71, 0.1491, 0.44730000000000003) color_brown = (0.651, 0.33331200000000005, 0.054683999999999955) color_all = [color_red, color_orange, color_green, color_blue, color_purple, color_ocker,color_pink, color_brown] from funcfssa import autoscale, scaledata #usual scaling from funcfssa2 import autoscale2, scaledata2 #correction in y #from funcfssa3 import autoscale3, scaledata3 #added correction in x #from funcfssa4 import autoscale4, scaledata4 #added correction in x j2 = 1.0 j6 = 1.0 Kc = 0.0 lambda3 = 2.1 q_Q = 1 offset = 0 N_og = [40, 60, 80, 100, 120, 140, 160, 180, 200, 240, 260, 340, 380] #N_og = [40, 60, 80, 100, 120, 140, 160, 180, 200] N_list = N_og #N_fit_cv = 200 N_fit_cv = 300 #### #code ### cv_ind = 1 chi_ind = 10 #for chi_sigma m_ind = 9 #for m_sigma data = np.load('Delta1_data_v2.npy',allow_pickle=True) ###### #----------------------------------------------------------------------------------------------------------------------- ####### #parameters of the code ###### 
#----------------------------------------------------------------------------------------------------------------------- ###### #size_investigate = [20, 32] Blocks = ['*','o', 'v', 'd','s','+', 'x'] #colors_size = ['blue', 'red', 'green', 'brown'] cNorm = colors.Normalize(vmin=0, vmax=1) scalarMap = cmx.ScalarMappable(norm=cNorm, cmap='brg_r') #scalarMap = cmx.ScalarMappable(norm=cNorm, cmap='viridis_r') colors_size = [scalarMap.to_rgba(i/(len(N_list)-1)) for i in range(len(N_list))] #colors_size = [scalarMap.to_rgba(i/(len(N_og)-1)) for i in range(len(N_og))] ### scaling of specific heat ### ls = N_list a_arr = [] da_arr = [] flat_temp = [] ind = cv_ind #here choose your fit scaleC = 1 #take temp range of L=380 as the basic range of temperature for all of them rhos = np.array(data[-1][0]) #for each system size L, download the data, and creat interpolation function. #use rhos (the T list) for all of them to get the interpolated values for CV (a_arr) #da_arr is a bogus list - could reduce 0.01 by a lot if needed. 
for i in range(len(N_list)): i_og = i + offset prelim_y = (N_list[i]**(2*scaleC))*data[i_og][2*ind + 1] prelim_x = data[i_og][0] spl_alt = InterpolatedUnivariateSpline(prelim_x, prelim_y, k=5) a_arr.append(spl_alt(rhos)) da_arr.append(0.01*spl_alt(rhos)) a_arr = np.array(a_arr) da_arr = np.array(da_arr) # scaling function ansatz autoscale (no corrections to scaling) or autoscale2 (corrections to scaling on x or y) or autoscale3 (corrections to scaling on both x and y) print('\nSpecific heat: initial values') print('(Tc, nu, beta, beta/nu) = (1.2035, 0.83, 0.33, 0.4)') #choose how you want to do the scaling scale = 3 do_fit = True #values obtained by previous fits vals = [ 1.202, 1.8, 0.76, 0.31, 100, 0.0, 0.0] #good fit, alpha/nu ~ 0.4 like it should #if you want the actual values for nu and alpha, but try to only fit the corrections, you probably get #vals = [ 1.202, 0.83, 0.33, 0.85, 23.0, 0.0, 0.0] #or this #vals = [ 1.202, 0.9, 0.4, 0.7, 20.0, 0.0, 0.0] print(ls) if do_fit: if scale == 3: # [Tc, nu, \gamma, a1, \omega_1, a2, \omega_2] init_guess = [1.2, 5/6, 1/3, 0.1, 0.1, 0.1, 0.1] noisy_ret = autoscale3(ls, rhos, a_arr, da_arr, *init_guess) print('CV:') print('values: Tc, nu, zeta, omega1, a1, omega2, a2') print(noisy_ret['x']) print('errors') print(noisy_ret['errors']) noisy_auto_scaled_data =scaledata3(ls, rhos, a_arr, da_arr, *noisy_ret['x']) elif scale == 2: init_guess = [1.2, 5/6, 1/3, 0.1, 0.1] noisy_ret = autoscale2(ls, rhos, a_arr, da_arr, *init_guess) print('CV:') print('values: Tc, nu, zeta, omega, a') print(noisy_ret['x']) print('errors') print(noisy_ret['errors']) noisy_auto_scaled_data =scaledata2(ls, rhos, a_arr, da_arr, *noisy_ret['x']) elif scale == 1: init_guess = [1.2, 5/6, 1/3] noisy_ret = autoscale(ls, rhos, a_arr, da_arr, *init_guess) print('CV:') print('values: Tc, nu, zeta') print(noisy_ret['x']) print('errors') print(noisy_ret['errors']) noisy_auto_scaled_data =scaledata(ls, rhos, a_arr, da_arr, *noisy_ret['x']) elif scale == 4: 
init_guess = [0.1, 0.1, 0.1, 0.1] noisy_ret = autoscale4(ls, rhos, a_arr, da_arr, *init_guess) print('CV:') print('values: omega1, a1, omega2, a2') print(noisy_ret['x']) print('errors') print(noisy_ret['errors']) noisy_auto_scaled_data =scaledata4(ls, rhos, a_arr, da_arr, *noisy_ret['x']) else: noisy_auto_scaled_data = scaledata3(ls, rhos, a_arr, da_arr, *vals) #the values obtained by the fit #[Tc, nu, alpha] = noisy_ret['x'][0:3] ######### #-------- ######### #the figure environment fig = plt.figure(figsize = (10,10) ) ax1 = plt.subplot() #specific heat for i in range(len(N_list)): #hand test #if you want to use your own values from vals, use next line #noisy_auto_scaled_data = scaledata3(ls, rhos, a_arr, da_arr, *vals) y_val = (noisy_auto_scaled_data.y.T)[:,i] x_val = (noisy_auto_scaled_data.x.T)[:,i] ax1.plot(x_val, y_val, marker ='o', markersize = 5, linestyle = '-', linewidth = 0.5, color=colors_size[i]) #ax1.set_xlabel(r"$(T - T_{3})L^{1/\nu}/(1 + a_2 L^{-\omega_2})$", fontsize=10); ax1.set_ylabel(r'$c_{v}L^{\alpha/\nu}/(1 + a_1 L^{-\omega_1})$ ', fontsize=10); # ax1.set_xlabel(r"$\tilde{t} = (T - T_{potts})L^{1/\nu} (1 + a_2 L^{-\omega_2})^{-1}$", fontsize=10); # ax1.set_ylabel(r'Scaled Specific Heat $\tilde{c}_{v} = L^{\beta/\nu} c_{v} (1 + a_1 L^{-\omega_1})^{-1}$ ', fontsize=10); #do the legend and append at the end # patches_lg = [] # for i in range(len(N_list)): # patches_lg.append(patches.Patch(color=colors_size[i], label='L='+str(int(N_list[i])))) #ax.legend(handles=patches_lg, loc='upper right', bbox_to_anchor=(0.98,0.95), ncol=1,fontsize = 10) patches_lg = [] for i in range(len(N_list)): patches_lg.append(Line2D([0], [0], color=colors_size[i], linewidth = 2, linestyle = '-', label='$'+str(int(N_list[i]))+'$') ) ax1.legend(handles=patches_lg, loc='upper left', title = '$L = $', title_fontsize = 9, ncol=1, fontsize = 9) ax1.xaxis.set_minor_locator(MultipleLocator(5)) ax1.grid(which='major', axis='both', linestyle='-', alpha = 0.4) 
ax1.grid(which='minor', axis='both', linestyle='-', alpha = 0.2) ax1.tick_params(axis='both', which='major', labelsize=10) ax1.tick_params(axis='both', which='minor', labelsize=10) #plt.savefig('./scaleCV.png', format='png', dpi = 100, bbox_inches='tight') plt.show() # - rhos # + code_folding=[51] #a function that returns the cv scaling plot only #easy to play with vals and see the result def figure_noscale(): fig = plt.figure(figsize = (10,10) ) ax1 = plt.subplot() #specific heat for i in range(len(N_list)): #hand test y_val = a_arr[i] x_val = rhos ax1.plot(x_val, y_val, marker ='o', markersize = 5, linestyle = '-', linewidth = 0.5, color=colors_size[i]) ax1.set_xlabel(r"$(T - T_{3})L^{1/\nu}/(1 + a_2 L^{-\omega_2})$", fontsize=10); ax1.set_ylabel(r'$c_{v}L^{\alpha/\nu}/(1 + a_1 L^{-\omega_1})$ ', fontsize=10); # ax1.set_xlabel(r"$\tilde{t} = (T - T_{potts})L^{1/\nu} (1 + a_2 L^{-\omega_2})^{-1}$", fontsize=10); # ax1.set_ylabel(r'Scaled Specific Heat $\tilde{c}_{v} = L^{\beta/\nu} c_{v} (1 + a_1 L^{-\omega_1})^{-1}$ ', fontsize=10); #do the legend and append at the end # patches_lg = [] # for i in range(len(N_list)): # patches_lg.append(patches.Patch(color=colors_size[i], label='L='+str(int(N_list[i])))) #ax.legend(handles=patches_lg, loc='upper right', bbox_to_anchor=(0.98,0.95), ncol=1,fontsize = 10) patches_lg = [] for i in range(len(N_list)): patches_lg.append(Line2D([0], [0], color=colors_size[i], linewidth = 2, linestyle = '-', label='$'+str(int(N_list[i]))+'$') ) ax1.legend(handles=patches_lg, loc='upper left', title = '$L = $', title_fontsize = 9, ncol=1, fontsize = 9) ax1.xaxis.set_minor_locator(MultipleLocator(5)) ax1.grid(which='major', axis='both', linestyle='-', alpha = 0.4) ax1.grid(which='minor', axis='both', linestyle='-', alpha = 0.2) ax1.tick_params(axis='both', which='major', labelsize=10) ax1.tick_params(axis='both', which='minor', labelsize=10) #plt.savefig('./scaleCV.png', format='png', dpi = 100, bbox_inches='tight') plt.show() return def 
figure_fit(vals): fig = plt.figure(figsize = (10,10) ) ax1 = plt.subplot() #specific heat for i in range(len(N_list)): #hand test noisy_auto_scaled_data =scaledata3(ls, rhos, a_arr, da_arr, *vals) y_val = (noisy_auto_scaled_data.y.T)[:,i] x_val = (noisy_auto_scaled_data.x.T)[:,i] ax1.plot(x_val, y_val, marker ='o', markersize = 5, linestyle = '-', linewidth = 0.5, color=colors_size[i]) ax1.set_xlabel(r"$(T - T_{3})L^{1/\nu}/(1 + a_2 L^{-\omega_2})$", fontsize=10); ax1.set_ylabel(r'$c_{v}L^{\alpha/\nu}/(1 + a_1 L^{-\omega_1})$ ', fontsize=10); # ax1.set_xlabel(r"$\tilde{t} = (T - T_{potts})L^{1/\nu} (1 + a_2 L^{-\omega_2})^{-1}$", fontsize=10); # ax1.set_ylabel(r'Scaled Specific Heat $\tilde{c}_{v} = L^{\beta/\nu} c_{v} (1 + a_1 L^{-\omega_1})^{-1}$ ', fontsize=10); #do the legend and append at the end # patches_lg = [] # for i in range(len(N_list)): # patches_lg.append(patches.Patch(color=colors_size[i], label='L='+str(int(N_list[i])))) #ax.legend(handles=patches_lg, loc='upper right', bbox_to_anchor=(0.98,0.95), ncol=1,fontsize = 10) patches_lg = [] for i in range(len(N_list)): patches_lg.append(Line2D([0], [0], color=colors_size[i], linewidth = 2, linestyle = '-', label='$'+str(int(N_list[i]))+'$') ) ax1.legend(handles=patches_lg, loc='upper left', title = '$L = $', title_fontsize = 9, ncol=1, fontsize = 9) ax1.xaxis.set_minor_locator(MultipleLocator(5)) ax1.grid(which='major', axis='both', linestyle='-', alpha = 0.4) ax1.grid(which='minor', axis='both', linestyle='-', alpha = 0.2) ax1.tick_params(axis='both', which='major', labelsize=10) ax1.tick_params(axis='both', which='minor', labelsize=10) #plt.savefig('./scaleCV.png', format='png', dpi = 100, bbox_inches='tight') plt.show() return # - #no scaling figure_noscale() #Tc, nu, alpha, omega1, a1, omega2, a2 #testV = [ 1.202, 1.8, 0.76, 0.31, 100, 0.0, 0.0] testV = [ 1.202, 0.83, 0.33, 1.0, 30.0, 0.0, 0.0] #print("alpha/nu = ", testV[2]/testV[1]) figure_fit(testV)
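For readers who want to experiment outside the `funcfssa` machinery, the collapse that `scaledata` performs can be sketched in a few lines. This is a minimal sketch assuming the leading finite-size ansatz `c_v(T, L) = L**(alpha/nu) * f((T - Tc) * L**(1/nu))` with no corrections to scaling; `scale_cv` is a hypothetical helper written for illustration, not part of `funcfssa`.

```python
import numpy as np

def scale_cv(L, T, cv, Tc, nu, alpha):
    """Rescale one specific-heat curve onto the universal scaling function.

    Assumes the leading finite-size ansatz (no corrections to scaling):
        c_v(T, L) = L**(alpha/nu) * f((T - Tc) * L**(1/nu))
    so that y = c_v / L**(alpha/nu) plotted against x = (T - Tc) * L**(1/nu)
    collapses curves for all system sizes L onto the single function f.
    """
    x = (np.asarray(T) - Tc) * L ** (1.0 / nu)
    y = np.asarray(cv) / L ** (alpha / nu)
    return x, y

# Synthetic check: data generated from a known scaling function
# collapses exactly for every system size.
f = lambda x: np.exp(-x ** 2)          # toy scaling function
Tc, nu, alpha = 1.2, 5 / 6, 1 / 3      # Potts-like trial exponents
T = np.linspace(1.0, 1.4, 201)
for L in (40, 80, 160):
    cv = L ** (alpha / nu) * f((T - Tc) * L ** (1 / nu))
    x, y = scale_cv(L, T, cv, Tc, nu, alpha)
    assert np.allclose(y, f(x))
```

With `nu = 5/6` and `alpha = 1/3` this gives `alpha/nu = 0.4`, the value the fit comments above expect; a fitting routine like `autoscale` essentially minimizes the spread of the rescaled curves over `(Tc, nu, alpha)`.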
.ipynb_checkpoints/test_scaling_cv-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # <a id="top"></a> # + [markdown] hideCode=false hidePrompt=false # # Db2 11.5 SQL Enhancements # Updated: 2019-10-03 # + [markdown] hideCode=false hidePrompt=false # ### SQL Enhancements # Moving from one database vendor to another can sometimes be difficult due to syntax differences between data types, functions, and language elements. Db2 already has a high degree of compatibility with Oracle PLSQL along with some of the Oracle data types. # # Db2 11.5 introduces some additional data type and function compatibility that will reduce some of the migration effort required when porting from other systems. # - # ### Load Db2 Extensions and Connect to the Database # The `connection` notebook contains the `CONNECT` statement which allows access to the `SAMPLE` database. If you need to modify the connection information, edit the `connection.ipynb` notebook. # + hideCode=false hideOutput=false hidePrompt=false # %run ../db2.ipynb # %run ../connection.ipynb # - # We populate the database with the EMPLOYEE and DEPARTMENT tables so that we can run the various examples. # %sql -sampledata # # Table of Contents # # * [If NOT EXISTS](#ifnotexists) # # * [If EXISTS](#ifexists) # # * [Columnar Data Types](#blobclob) # # * [Quote Identifiers and Literals](#quotes) # # * [Oracle Compatibility Libraries](#oracle) # <a id='ifnotexists'></a> # + [markdown] hideCode=false hidePrompt=false # ## IF NOT EXISTS # # The `IF NOT EXISTS` clause is used during table creation. This SQL specifies that no error message is shown when the table cannot be created because a table with the specified name already exists in the current database and schema. 
Note that the existing table and the specified table in the command are not compared so the application must ensure that the target table and rows are as expected. # ```sql # CREATE TABLE IF NOT EXISTS CUSTOMER … # ``` # # Use this option for scripted applications that are running SQL commands. This syntax will suppress the **Table already exists** error message, so the scripted application is not impacted or halted. # # You cannot use the `IF NOT EXISTS` option with the `AS SELECT` clause. # # Unless other errors prevent the creation of the table, a `CREATE TABLE` message is returned although no table is created. An error code is ignored if a table with the specified name already exists. # # This SQL will delete an existing table and then re-create it. # + hideCode=false hidePrompt=false language="sql" # DROP TABLE DEMO_TABLE; # CREATE TABLE DEMO_TABLE( SOMEVALUE VARCHAR(100) ); # + [markdown] hideCode=false hidePrompt=false # A normal `CREATE TABLE` statement will fail with an error since the table already exists. # + hideCode=false hidePrompt=false # %sql CREATE TABLE DEMO_TABLE( SOMEVALUE VARCHAR(100) ); # + [markdown] hideCode=false hidePrompt=false # Using the `IF NOT EXISTS` clause will ignore the fact that the table already exists and complete successfully. # + hideCode=false hidePrompt=false # %sql CREATE TABLE IF NOT EXISTS DEMO_TABLE( SOMEVALUE VARCHAR(100) ); # - # [Back to Top](#top) # <a id='ifexists'></a> # + [markdown] hideCode=false hidePrompt=false # ## IF EXISTS # # The `IF EXISTS` is used when dropping a table and specifies that no error message is shown when the table is not dropped because it does not currently exist in the database. # ```sql # DROP TABLE CUSTOMER IF EXISTS # ``` # Use this option for scripted applications that are running SQL commands. This option will suppress the **Table not found** error message so the scripted application is not impacted or halted. # # The `DROP TABLE` message is returned although no table was dropped. 
Failure is ignored if a table with the specified name does not exist. # # # First we create a table (**Note:** The Db2 magic commands ignore errors!) # + hideCode=false hidePrompt=false magic_args="-q" language="sql" # DROP TABLE DEMO_TABLE; # CREATE TABLE DEMO_TABLE( SOMEVALUE VARCHAR(100) ); # - # This first `DROP` statement will drop the table without an error message. # + hideCode=false hidePrompt=false language="sql" # DROP TABLE DEMO_TABLE; # - # If we try dropping it again using the normal syntax, we will get an error message. # + hideCode=false hidePrompt=false language="sql" # DROP TABLE DEMO_TABLE; # - # Adding the `IF EXISTS` clause to the `DROP` statement will ignore the error message. # + hideCode=false hidePrompt=false language="sql" # DROP TABLE IF EXISTS DEMO_TABLE; # - # [Back to Top](#top) # <a id='blobclob'></a> # ## Columnar Data Types # # Db2 11.5 adds two new data types to columnar tables: CLOB and BLOB. These two data types can contain any combination # of characters (CLOB) or binary values (BLOB). BLOBs are not affected by the codepage of the server that the values are stored on. # # A CLOB or BLOB data type can have a maximum length of 2 Gb. # # This example shows the use of these two data types in a columnar table. # + language="sql" # DROP TABLE CLOBBLOB; # # CREATE TABLE CLOBBLOB # ( # AUDIO_TITLE CLOB(1024), # AUDIO_TRACK BLOB(1 M) # ) ORGANIZE BY COLUMN; # - # When using a CLOB or BLOB object, an `INLINE LENGTH` specification should be used to try and place as much of the data on the data page to take advantage of the performance advantage provided by the buffer pool caching effect. If you do not specify an inline length for large objects, the data will not reside in the buffer pool and searching and retrieval of this data will take an additional I/O operation. # The following SQL will recreate the table specifying an inline length for the columns. 
# + language="sql"
# DROP TABLE CLOBBLOB;
#
# CREATE TABLE CLOBBLOB
# (
# AUDIO_TITLE CLOB(1024) INLINE LENGTH 1024,
# AUDIO_TRACK BLOB(1 M) INLINE LENGTH 1024
# ) ORGANIZE BY COLUMN;
# -

# [Back to Top](#top)

# <a id='quotes'></a>

# ## QUOTE Identifier and Quote Literal
#
# The `QUOTE_IDENT` function returns a string that can be used as an identifier in an SQL statement. The input value is a string expression which will have the proper quotes placed around it so that it can be used as a valid name in an SQL statement. The `QUOTE_LITERAL` function returns a string that can be used as a string literal in an SQL statement. The input value is a string expression which will have the proper quotes placed around it so that it can be used as a valid literal in an SQL statement.
#
# These functions can be used when constructing dynamic SQL statements, or for generating values for the new JSON functions. Db2 (the SQL language) requires special handling of quotes, which is different from many programming languages.
#
# A Db2 column name *without* special characters does not require any quotes surrounding it. The SQL standard folds unquoted identifiers to uppercase, so consider the following SQL.

# + language="sql"
# DROP TABLE DEMO;
# CREATE TABLE DEMO
# (
# firstname varchar(10),
# lastname varchar(10)
# );
# DESCRIBE TABLE DEMO;
# -

# Describing the table structure will show that the column names are in uppercase.

# %sql DESCRIBE TABLE DEMO

# If we wanted to use lowercase column names then we would need to delimit them with double quotes, as shown in the following SQL.

# + language="sql"
# DROP TABLE DEMO;
# CREATE TABLE DEMO
# (
# "firstname" varchar(10),
# "lastname" varchar(10)
# );
# DESCRIBE TABLE DEMO;
# -

# Describing the table structure will now show that the column names are in lowercase.

# %sql DESCRIBE TABLE DEMO

# When generating column names we can surround the input strings with the `QUOTE_IDENT` function to make sure that the quotes are properly placed around the names.
Note how uppercase identifiers that **do not** contain special characters do not require quotes to be placed around them.

# + magic_args="-grid" language="sql"
# VALUES ('HELLO WORLD', QUOTE_IDENT('HELLO WORLD'))
# UNION ALL
# VALUES ('HELLOWORLD', QUOTE_IDENT('HELLOWORLD'))
# UNION ALL
# VALUES ('HELLO_WORLD', QUOTE_IDENT('HELLO_WORLD'))
# UNION ALL
# VALUES ('hello world', QUOTE_IDENT('hello world'))
# UNION ALL
# VALUES ('hello"world', QUOTE_IDENT('hello"world'))
# UNION ALL
# VALUES ('hello''world', QUOTE_IDENT('hello''world'))
# UNION ALL
# VALUES ('', QUOTE_IDENT(''))
# -

# The `QUOTE_LITERAL` function makes sure that a string has the proper quotes placed inside the string. Db2 requires that single quotes within a string be escaped with two quote characters, so the following is a valid Db2 string:
# ```
# 'This is a single quote '' within a Db2 string'
# ```
#
# All Db2 strings must start and end with the single quote character `'`, so any embedded quotes must be duplicated (i.e. you need two of them `''`) to represent one quote character.
#
# This first example shows how a numeric value will be quoted.

# %sql VALUES QUOTE_LITERAL(42.5)

# When dealing with application variables, you don't always know if the quotes are properly delimited. This example has multiple quote characters in the string.

quotes = "O'Brian went to O'Hare airport for a flight to Hawai'i"

# results = %sql -r VALUES QUOTE_LITERAL(:quotes)
results[1][0]

# [Back to Top](#top)

# <a id='oracle'></a>

# ## Oracle Compatibility Libraries
#
# Db2 continues to update the Oracle compatibility modules that are available in the database. In order to use these libraries, you must turn on the Oracle compatibility vector **before creating a database**. You do not need to have complete Oracle compatibility on to use these functions. The following Db2 setting should be turned on to get these compatibility libraries to work.
# ```sql # DB2_COMPATIBILITY_VECTOR=400 # ``` # # **Note:** If the database you are connecting to does not have this compatibility vector set, then none of the functions below will work. # # The libraries that are new in this release are: # * `DBMS_APPLICATION_INFO` # # Includes procedures that set custom client info exported through some of the table functions in the database and helps identify the targeted sessions upon executing the procedure. # # # * `DBMS_LOCK` (Currently in beta) # # Provides lock management functionality that allows SQL PL developers to control concurrent access to critical resources in their applications. # # # * `UTL_RAW` # # Provides a set of routines for manipulating binary data of the data type VARBINARY. These routines perform various functions including data conversion, casting, comparison, concatenation, substring, xrange and translation. # # The schema for all procedures and functions in these modules is `SYSIBMADM` # ### DBMS_APPLICATION_INFO # The `DBMS_APPLICATION_INFO` library is used for tracking and debugging routines that are running in the database. 
The functions found in this library include:
#
# * `READ_CLIENT_INFO ( client_info )`
#
#    Reads and returns the value of the client information field from the current session
#
# * `READ_MODULE ( module_name, action_name )`
#
#    Reads and returns the values of the module and action fields from the current session
#
# * `SET_CLIENT_INFO ( client_info )`
#
#    Sets and registers the value of the client information field for the current session
#
# * `SET_MODULE ( module_name, action_name )`
#
#    Sets and registers the module name that is currently being executed
#
# * `SET_ACTION ( action_name )`
#
#    Sets and registers the action name within the current module
#
# * `SET_SESSION_LONGOPS( rindex, slno, op_name, target, context, sofar, totalwork, target_desc, units )`
#
#    Sets and registers a row in the SYSTOOLS.SESSION_LONGOPS table, to store progress information for long operations

# The following call will place information into a catalog table about the module called `customer_maintenance`.

# %sql CALL DBMS_APPLICATION_INFO.SET_MODULE('customer_maintenance','update address');

# To retrieve this information you would use the `READ_MODULE` function.

# %sql CALL DBMS_APPLICATION_INFO.READ_MODULE( ?, ? )

# To reset the information we set the values to null.

# %sql CALL DBMS_APPLICATION_INFO.SET_MODULE('null','null');

# ### UTL_RAW
# The `UTL_RAW` library provides a set of routines for manipulating binary data of the data type `VARBINARY`.
These routines perform various functions including data conversion, casting, comparison, concatenation, substring, xrange and translation. The functions found in this library include:
#
# * `BIT_AND ( x,y )` − bitwise logical AND operation against x, y
# * `BIT_OR ( x,y )` − bitwise logical OR operation against x, y
# * `BIT_XOR ( x,y )` − bitwise logical EXCLUSIVE OR operation against x,y
# * `BIT_COMPLEMENT ( x )` − bitwise logical COMPLEMENT operation against x
# * `COMPARE ( x, y, pad )` − compares two values with optional padding
# * `CONCAT ( x1, x2, …, x12 )` − concatenates up to twelve (12) VARBINARY values into a single value
# * `COPIES ( x, n )` − returns the concatenated results of the VARBINARY value a specified number of times
# * `LENGTH ( x )` − returns the length of a VARBINARY value
# * `REVERSE ( x )` − reverses the order of digits of a VARBINARY value
# * `SUBSTR ( x, start, length )` − returns a specified portion of a VARBINARY value
# * `CAST_TO_RAW` − casts a VARCHAR value to a VARBINARY value
# * `CAST_TO_VARCHAR2` − casts a VARBINARY value to a VARCHAR2 value
# * `CAST_FROM_NUMBER` − casts a DECFLOAT value to a VARBINARY value
# * `CAST_TO_NUMBER` − casts a VARBINARY value to a DECFLOAT value
# * `CAST_FROM_BINARY_DOUBLE` − casts a DOUBLE value to a VARBINARY value
# * `CAST_FROM_BINARY_FLOAT` − casts a FLOAT value to a VARBINARY value
# * `CAST_FROM_BINARY_INTEGER` − casts an INTEGER value to a VARBINARY value
# * `CAST_TO_BINARY_DOUBLE` − casts a VARBINARY value to a DOUBLE value
# * `CAST_TO_BINARY_FLOAT` − casts a VARBINARY value to a FLOAT value
# * `CAST_TO_BINARY_INTEGER` − casts a VARBINARY value to an INTEGER value
#
# **AND** performs a bitwise logical AND operation against x, y

import binascii

# result = %sql -r VALUES UTL_RAW.BIT_AND( bx'0D', bx'0B')
print(binascii.hexlify(result[1][0]))

# **OR** performs a bitwise logical OR operation against x, y

# result = %sql -r VALUES UTL_RAW.BIT_OR( bx'0D', bx'0B' )
print(binascii.hexlify(result[1][0])) # **XOR** performs a bitwise logical EXCLUSIVE OR operation against x,y # result = %sql -r VALUES UTL_RAW.BIT_XOR( bx'0D', bx'0B' ) print(binascii.hexlify(result[1][0])) # **COMPLEMENT** performs a bitwise logical COMPLEMENT operation against x # result = %sql -r VALUES UTL_RAW.BIT_COMPLEMENT( bx'0D') print(binascii.hexlify(result[1][0])) # **COMPARE** compares two values with optional padding. If the two values are equal, the result is zero. If the values differ, the returned value is the byte of the first value that differs from the second. # %sql VALUES UTL_RAW.COMPARE ( bx'010D', bx'010C' ) # **CONCAT** concatenates up to twelve (12) VARBINARY values into a single value. # result = %sql -r VALUES UTL_RAW.CONCAT ( bx'010D', bx'010C' ) print(binascii.hexlify(result[1][0])) # **COPIES** returns the concatenated results of the VARBINARY value a specified number times. # result = %sql -r VALUES UTL_RAW.COPIES ( bx'010D', 3 ) print(binascii.hexlify(result[1][0])) # **LENGTH** returns the length of a VARBINARY value in bytes. # %sql VALUES UTL_RAW.LENGTH ( bx'010D010CFFFF') # **REVERSE** reverses the order of digits of a VARBINARY value. Note that this doesn't flip the bits in the byte values. # result = %sql -r VALUES UTL_RAW.REVERSE ( bx'010D010CFFFF') print(binascii.hexlify(result[1][0])) # **SUBSTR** returns a specified portion of a VARBINARY value. Byte positions start at 1. # result = %sql -r VALUES UTL_RAW.SUBSTR( bx'010D010CFFFF', 2, 2) print(binascii.hexlify(result[1][0])) # [Back to Top](#top) # <a id='function'></a> # #### Credits: IBM 2019, <NAME> [<EMAIL>]
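The `UTL_RAW` bit routines shown above map directly onto Python's integer bitwise operators, which is handy for checking expected results locally without a Db2 connection. This is a pure-Python sketch with hypothetical helper names, not Db2 code.

```python
# Pure-Python analogues of UTL_RAW.BIT_AND / BIT_OR / BIT_XOR / BIT_COMPLEMENT,
# operating byte-by-byte on VARBINARY-like bytes values.
def bit_and(x: bytes, y: bytes) -> bytes:
    return bytes(a & b for a, b in zip(x, y))

def bit_or(x: bytes, y: bytes) -> bytes:
    return bytes(a | b for a, b in zip(x, y))

def bit_xor(x: bytes, y: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(x, y))

def bit_complement(x: bytes) -> bytes:
    # Flip every bit of every byte (one's complement within 8 bits).
    return bytes(a ^ 0xFF for a in x)

# Same operands as the UTL_RAW examples above: bx'0D' and bx'0B'.
print(bit_and(b"\x0d", b"\x0b").hex())        # 09
print(bit_or(b"\x0d", b"\x0b").hex())         # 0f
print(bit_xor(b"\x0d", b"\x0b").hex())        # 06
print(bit_complement(b"\x0d").hex())          # f2
```

The printed values match what `binascii.hexlify` shows for the corresponding `UTL_RAW` calls, so this is a quick way to sanity-check an expected result before running it against the database.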
Db2_11.5_Features/Db2_11.5_SQL_Enhancements.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import json import altair as alt from altair import expr, datum import matplotlib.pyplot as plt import numpy as np import pandas as pd import requests colors = ["#FFC759", "#FF7B9C", "#607196", "#BABFD1"] alt.themes.enable("dark") n = 24 w = 15 x = [i for i in range(8, n + 1)] y = [w / (n / 24) * (n - i) for i in x] df = pd.DataFrame([x, y], index=["x", "y"]).T # + slope_slider = alt.binding_range(min=10, max=25, step=1) slope_selection = alt.selection_single( bind=slope_slider, fields=["Wage"], name="slope", init={"Wage": 15} ) base_slider = alt.binding_range(min=0, max=100, step=10) base_selection = alt.selection_single( bind=base_slider, fields=["Income"], name="base", init={"Income": 0} ) ic_slider = alt.binding_range(min=0.25, max=0.9, step=0.01) ic_selection = alt.selection_single( bind=ic_slider, fields=["Curve"], name="indifference", init={"Curve": 0.4} ) ic_check = alt.binding_range(min=0, max=1, step=1) icc_selection = alt.selection_single( bind=ic_check, fields=["On"], name="indifferencec", init={"On": 0} ) base = ( alt.Chart(df) .mark_line(color=colors[1], strokeWidth=3) .encode( x=alt.X( "x:Q", scale=alt.Scale(domain=[8, 24]), axis=alt.Axis( labelFontSize=16, tickCount=7, grid=False, title="Free time (hours)", titleFontSize=16, titleY=-10, titleBaseline="bottom", titleAngle=0, ), ), y=alt.Y( "c:Q", scale=alt.Scale(domain=[0, 400]), axis=alt.Axis( labelFontSize=16, tickCount=7, grid=False, title="Consumption (£)", titleFontSize=16, titleX=10, titleY=5, titleBaseline="bottom", titleAngle=0, titleAlign="left", ), ), ) ) line1 = base.mark_line(color=colors[3], strokeDash=[5, 5], strokeWidth=1).encode( x="cx:Q", y="cy:Q" ) line2 = base.mark_line(color=colors[3], strokeDash=[5, 5], strokeWidth=1).encode( y="y:Q" ) ic1 = 
base.mark_line(color=colors[0], strokeWidth=1, clip=True).encode( x="ix:Q", y=alt.Y( "i1:Q", scale=alt.Scale(domain=[0, 400]), ), opacity=alt.condition( "indifferencec.On==0", alt.value(0), alt.value(1), ), ) point1 = ( alt.Chart(pd.DataFrame([{"x": 16, "y": 120}])) .mark_point(color=colors[3], opacity=1, fill=colors[3], size=40) .encode(x="x:Q", y="y:Q") ) pline1 = ( alt.Chart(pd.DataFrame([{"x": 16, "y": 120}, {"x": 8, "y": 120}])) .mark_line(color=colors[3], strokeDash=[5, 5], strokeWidth=1) .encode(x="x:Q", y="y:Q") ) pline2 = ( alt.Chart(pd.DataFrame([{"x": 16, "y": 0}, {"x": 16, "y": 120}])) .mark_line(color=colors[3], strokeDash=[5, 5], strokeWidth=1) .encode(x="x:Q", y="y:Q") ) atext1 = ( alt.Chart(pd.DataFrame([{"x": 16, "y": 120,"t":"A"}])) .mark_text(color=colors[2], fontSize=14,xOffset=0,yOffset=-12) .encode(x="x:Q", y="y:Q",text="t:N") ) marks1 = (point1 + pline1 + pline2 +atext1).encode( opacity=alt.condition( "(indifference.Curve==0.54)&(slope.Wage==15)&(base.Income==0)&(indifferencec.On==1)", alt.value(1), alt.value(0), ) ) layer1 = ( (line1 + line2 + base + ic1 + marks1) .add_selection(ic_selection) .add_selection(icc_selection) .add_selection(slope_selection) .add_selection(base_selection) .transform_calculate(cx="24") .transform_calculate(op1="4") .transform_calculate(cy="min(datum.y,base.Income)") .transform_calculate(c="((24-datum.x)*slope.Wage)+1*base.Income") .transform_calculate( i1="10*indifference.Curve+100/((datum.x-8)*(indifference.Curve/5))" ) .transform_calculate(ix="datum.x/1") .transform_calculate(c3="300*slope.Wage/15-pow(slope.Wage/7,(datum.x*2/4.62))") ) layer=(layer1.properties(height=350, width=500)).configure_view(stroke=None).properties( title="", background="rgba(0,0,0,0)" ) f='1' layer.save("visualisation/" + f + ".json") layer # + slope_slider = alt.binding_range(min=10, max=25, step=1) slope_selection = alt.selection_single( bind=slope_slider, fields=["Wage"], name="slope", init={"Wage": 15} ) base_slider = 
alt.binding_range(min=0, max=100, step=10) base_selection = alt.selection_single( bind=base_slider, fields=["Income"], name="base", init={"Income": 0} ) ic_slider = alt.binding_range(min=0.25, max=0.9, step=0.01) ic_selection = alt.selection_single( bind=ic_slider, fields=["Curve"], name="indifference", init={"Curve": 0.54} ) ic_check = alt.binding_range(min=0, max=1, step=1) icc_selection = alt.selection_single( bind=ic_check, fields=["On"], name="indifferencec", init={"On": 1} ) base = ( alt.Chart(df) .mark_line(color=colors[1], strokeWidth=3) .encode( x=alt.X( "x:Q", scale=alt.Scale(domain=[8, 24]), axis=alt.Axis( labelFontSize=16, tickCount=7, grid=False, title="Free time (hours)", titleFontSize=16, titleY=-10, titleBaseline="bottom", titleAngle=0, ), ), y=alt.Y( "c:Q", scale=alt.Scale(domain=[0, 400]), axis=alt.Axis( labelFontSize=16, tickCount=7, grid=False, title="Consumption (£)", titleFontSize=16, titleX=10, titleY=5, titleBaseline="bottom", titleAngle=0, titleAlign="left", ), ), ) ) line1 = base.mark_line(color=colors[3], strokeDash=[5, 5], strokeWidth=1).encode( x="cx:Q", y="cy:Q" ) line2 = base.mark_line(color=colors[3], strokeDash=[5, 5], strokeWidth=1).encode( y="y:Q" ) ic1 = base.mark_line(color=colors[0], strokeWidth=1, clip=True).encode( x="ix:Q", y=alt.Y( "i1:Q", scale=alt.Scale(domain=[0, 400]), ), opacity=alt.condition( "indifferencec.On==0", alt.value(0), alt.value(1), ), ) point1 = ( alt.Chart(pd.DataFrame([{"x": 16, "y": 120}])) .mark_point(color=colors[3], opacity=1, fill=colors[3], size=40) .encode(x="x:Q", y="y:Q") ) pline1 = ( alt.Chart(pd.DataFrame([{"x": 16, "y": 120}, {"x": 8, "y": 120}])) .mark_line(color=colors[3], strokeDash=[5, 5], strokeWidth=1) .encode(x="x:Q", y="y:Q") ) pline2 = ( alt.Chart(pd.DataFrame([{"x": 16, "y": 0}, {"x": 16, "y": 120}])) .mark_line(color=colors[3], strokeDash=[5, 5], strokeWidth=1) .encode(x="x:Q", y="y:Q") ) point2 = ( alt.Chart(pd.DataFrame([{"x": 15.5, "y": 212.5}])) .mark_point(color=colors[3], 
opacity=1, fill=colors[3], size=40) .encode(x="x:Q", y="y:Q") ) atext1 = ( alt.Chart(pd.DataFrame([{"x": 16, "y": 120,"t":"A"}])) .mark_text(color=colors[2], fontSize=14,xOffset=0,yOffset=-12) .encode(x="x:Q", y="y:Q",text="t:N") ) pline21 = ( alt.Chart(pd.DataFrame([{"x": 15.5, "y": 212.5}, {"x": 8, "y": 212.5}])) .mark_line(color=colors[3], strokeDash=[5, 5], strokeWidth=1) .encode(x="x:Q", y="y:Q") ) pline22 = ( alt.Chart(pd.DataFrame([{"x": 15.5, "y": 0}, {"x": 15.5, "y": 212.5}])) .mark_line(color=colors[3], strokeDash=[5, 5], strokeWidth=1) .encode(x="x:Q", y="y:Q") ) pline23 = ( alt.Chart(pd.DataFrame([{"x": 15.5, "y": 250}, {"x": 16, "y": 250}])) .mark_line(color=colors[2], strokeWidth=1) .encode(x="x:Q", y="y:Q") ) ptext2 = ( alt.Chart(pd.DataFrame([{"x": 15.9, "y": 213,"t":"⬅"}, {"x": 18, "y": 213,"t":"Substitution effect"}])) .mark_text(color=colors[2], fontSize=14) .encode(x="x:Q", y="y:Q",text="t:N") ) atext2 = ( alt.Chart(pd.DataFrame([{"x": 15.5, "y": 212.5,"t":"D"}])) .mark_text(color=colors[2], fontSize=14,xOffset=0,yOffset=-12) .encode(x="x:Q", y="y:Q",text="t:N") ) point3 = ( alt.Chart(pd.DataFrame([{"x": 18, "y": 160}])) .mark_point(color=colors[3], opacity=1, fill=colors[3], size=40) .encode(x="x:Q", y="y:Q") ) pline31 = ( alt.Chart(pd.DataFrame([{"x": 18, "y": 160}, {"x": 8, "y": 160}])) .mark_line(color=colors[3], strokeDash=[5, 5], strokeWidth=1) .encode(x="x:Q", y="y:Q") ) pline32 = ( alt.Chart(pd.DataFrame([{"x": 18, "y": 0}, {"x": 18, "y": 160}])) .mark_line(color=colors[3], strokeDash=[5, 5], strokeWidth=1) .encode(x="x:Q", y="y:Q") ) pline33 = ( alt.Chart(pd.DataFrame([{"x": 18, "y": 160}, {"x": 16, "y": 160}])) .mark_line(color=colors[2], strokeWidth=1) .encode(x="x:Q", y="y:Q") ) atext3 = ( alt.Chart(pd.DataFrame([{"x": 18, "y": 160,"t":"C"}])) .mark_text(color=colors[2], fontSize=14,xOffset=0,yOffset=-12) .encode(x="x:Q", y="y:Q",text="t:N") ) marks1 = (point1 + pline1 + pline2+atext1).encode( opacity=alt.condition( "1==1", 
alt.value(1), alt.value(0), ) ) marks2 = (point2 + pline21 + pline22+atext2).encode( opacity=alt.condition( "(indifference.Curve==0.32)&(slope.Wage==25)&(base.Income==0)&(indifferencec.On==1)", alt.value(1), alt.value(0), ) ) marks3 = (point3 + pline31 + pline32+atext3).encode( opacity=alt.condition( "(indifference.Curve==0.32)&(slope.Wage==15)&(base.Income==70)&(indifferencec.On==1)", alt.value(1), alt.value(0), ) ) layer1 = ( (line1 + line2 + base + ic1 + marks1 + marks2+marks3) .add_selection(ic_selection) .add_selection(icc_selection) .add_selection(slope_selection) .add_selection(base_selection) .transform_calculate(cx="24") .transform_calculate(op1="4") .transform_calculate(cy="min(datum.y,base.Income)") .transform_calculate(c="((24-datum.x)*slope.Wage)+1*base.Income") .transform_calculate( i1="10*indifference.Curve+100/((datum.x-8)*(indifference.Curve/5))" ) .transform_calculate(ix="datum.x/1") .transform_calculate(c3="300*slope.Wage/15-pow(slope.Wage/7,(datum.x*2/4.62))") ) layer=(layer1.properties(height=350, width=500)).configure_view(stroke=None).properties( title="", background="rgba(0,0,0,0)" ) f='2' layer.save("visualisation/" + f + ".json") layer # + slope_slider = alt.binding_range(min=10, max=25, step=1) slope_selection = alt.selection_single( bind=slope_slider, fields=["Wage"], name="slope", init={"Wage": 25} ) base_slider = alt.binding_range(min=0, max=100, step=10) base_selection = alt.selection_single( bind=base_slider, fields=["Income"], name="base", init={"Income": 0} ) ic_slider = alt.binding_range(min=0.25, max=0.9, step=0.01) ic_selection = alt.selection_single( bind=ic_slider, fields=["Curve"], name="indifference", init={"Curve": 0.32} ) ic_check = alt.binding_range(min=0, max=1, step=1) icc_selection = alt.selection_single( bind=ic_check, fields=["On"], name="indifferencec", init={"On": 1} ) base = ( alt.Chart(df) .mark_line(color=colors[1], strokeWidth=3) .encode( x=alt.X( "x:Q", scale=alt.Scale(domain=[8, 24]), axis=alt.Axis( 
labelFontSize=16, tickCount=7, grid=False, title="Free time (hours)", titleFontSize=16, titleY=-10, titleBaseline="bottom", titleAngle=0, ), ), y=alt.Y( "c:Q", scale=alt.Scale(domain=[0, 400]), axis=alt.Axis( labelFontSize=16, tickCount=7, grid=False, title="Consumption (£)", titleFontSize=16, titleX=10, titleY=5, titleBaseline="bottom", titleAngle=0, titleAlign="left", ), ), ) ) line1 = base.mark_line(color=colors[3], strokeDash=[5, 5], strokeWidth=1).encode( x="cx:Q", y="cy:Q" ) line2 = base.mark_line(color=colors[3], strokeDash=[5, 5], strokeWidth=1).encode( y="y:Q" ) ic1 = base.mark_line(color=colors[0], strokeWidth=1, clip=True).encode( x="ix:Q", y=alt.Y( "i1:Q", scale=alt.Scale(domain=[0, 400]), ), opacity=alt.condition( "indifferencec.On==0", alt.value(0), alt.value(1), ), ) point1 = ( alt.Chart(pd.DataFrame([{"x": 16, "y": 120}])) .mark_point(color=colors[3], opacity=1, fill=colors[3], size=40) .encode(x="x:Q", y="y:Q") ) pline1 = ( alt.Chart(pd.DataFrame([{"x": 16, "y": 120}, {"x": 8, "y": 120}])) .mark_line(color=colors[3], strokeDash=[5, 5], strokeWidth=1) .encode(x="x:Q", y="y:Q") ) pline2 = ( alt.Chart(pd.DataFrame([{"x": 16, "y": 0}, {"x": 16, "y": 120}])) .mark_line(color=colors[3], strokeDash=[5, 5], strokeWidth=1) .encode(x="x:Q", y="y:Q") ) atext1 = ( alt.Chart(pd.DataFrame([{"x": 16, "y": 120,"t":"A"}])) .mark_text(color=colors[2], fontSize=14,xOffset=0,yOffset=-12) .encode(x="x:Q", y="y:Q",text="t:N") ) point2 = ( alt.Chart(pd.DataFrame([{"x": 15.5, "y": 212.5}])) .mark_point(color=colors[3], opacity=1, fill=colors[3], size=40) .encode(x="x:Q", y="y:Q") ) pline21 = ( alt.Chart(pd.DataFrame([{"x": 15.5, "y": 212.5}, {"x": 8, "y": 212.5}])) .mark_line(color=colors[3], strokeDash=[5, 5], strokeWidth=1) .encode(x="x:Q", y="y:Q") ) pline22 = ( alt.Chart(pd.DataFrame([{"x": 15.5, "y": 0}, {"x": 15.5, "y": 212.5}])) .mark_line(color=colors[3], strokeDash=[5, 5], strokeWidth=1) .encode(x="x:Q", y="y:Q") ) pline23 = ( alt.Chart(pd.DataFrame([{"x": 15.5, 
"y": 250}, {"x": 16, "y": 250}])) .mark_line(color=colors[2], strokeWidth=1) .encode(x="x:Q", y="y:Q") ) ptext2 = ( alt.Chart(pd.DataFrame([{"x": 16.3, "y": 213,"t":"D⬅C"}, {"x": 18.8, "y": 213,"t":"Substitution effect"}])) .mark_text(color=colors[2], fontSize=14) .encode(x="x:Q", y="y:Q",text="t:N") ) atext2 = ( alt.Chart(pd.DataFrame([{"x": 15.5, "y": 212.5,"t":"D"}])) .mark_text(color=colors[2], fontSize=14,xOffset=0,yOffset=-12) .encode(x="x:Q", y="y:Q",text="t:N") ) point3 = ( alt.Chart(pd.DataFrame([{"x": 18, "y": 160}])) .mark_point(color=colors[3], opacity=1, fill=colors[3], size=40) .encode(x="x:Q", y="y:Q") ) pline31 = ( alt.Chart(pd.DataFrame([{"x": 18, "y": 160}, {"x": 8, "y": 160}])) .mark_line(color=colors[3], strokeDash=[5, 5], strokeWidth=1) .encode(x="x:Q", y="y:Q") ) pline32 = ( alt.Chart(pd.DataFrame([{"x": 18, "y": 0}, {"x": 18, "y": 160}])) .mark_line(color=colors[3], strokeDash=[5, 5], strokeWidth=1) .encode(x="x:Q", y="y:Q") ) pline33 = ( alt.Chart(pd.DataFrame([{"x": 18, "y": 160}, {"x": 16, "y": 160}])) .mark_line(color=colors[2], strokeWidth=1) .encode(x="x:Q", y="y:Q") ) ptext3 = ( alt.Chart(pd.DataFrame([{"x": 17.3, "y": 60,"t":"A➡C"}, {"x": 15.3, "y": 60,"t":"Income effect"}, {"x": 14.8, "y": 90,"t":"D⬅A"}, {"x": 12.7, "y": 90,"t":"Overall effect"}])) .mark_text(color=colors[2], fontSize=14) .encode(x="x:Q", y="y:Q",text="t:N") ) marks1 = (point1 + pline1 + pline2+atext1).encode( opacity=alt.condition( "1==1", alt.value(1), alt.value(0), ) ) marks2 = (point2 + pline21 + pline22+atext2).encode( opacity=alt.condition( "1==1", alt.value(1), alt.value(0), ) ) atext3 = ( alt.Chart(pd.DataFrame([{"x": 18, "y": 160,"t":"C"}])) .mark_text(color=colors[2], fontSize=14,xOffset=0,yOffset=-12) .encode(x="x:Q", y="y:Q",text="t:N") ) marks3 = (point3 + pline31 + pline32 + ptext2+ ptext3+atext3).encode( opacity=alt.condition( "(indifference.Curve==0.32)&(slope.Wage==15)&(base.Income==70)&(indifferencec.On==1)", alt.value(1), alt.value(0), ) ) layer1 = 
( (line1 + line2 + base + ic1 + marks1 + marks2+marks3) .add_selection(ic_selection) .add_selection(icc_selection) .add_selection(slope_selection) .add_selection(base_selection) .transform_calculate(cx="24") .transform_calculate(op1="4") .transform_calculate(cy="min(datum.y,base.Income)") .transform_calculate(c="((24-datum.x)*slope.Wage)+1*base.Income") .transform_calculate( i1="10*indifference.Curve+100/((datum.x-8)*(indifference.Curve/5))" ) .transform_calculate(ix="datum.x/1") .transform_calculate(c3="300*slope.Wage/15-pow(slope.Wage/7,(datum.x*2/4.62))") ) layer=(layer1.properties(height=350, width=500)).configure_view(stroke=None).properties( title="", background="rgba(0,0,0,0)" ) f='3' layer.save("visualisation/" + f + ".json") layer # + slope_slider = alt.binding_range(min=10, max=25, step=1) slope_selection = alt.selection_single( bind=slope_slider, fields=["Wage"], name="slope", init={"Wage": 15} ) base_slider = alt.binding_range(min=0, max=100, step=10) base_selection = alt.selection_single( bind=base_slider, fields=["Income"], name="base", init={"Income": 70} ) ic_slider = alt.binding_range(min=0.25, max=0.9, step=0.01) ic_selection = alt.selection_single( bind=ic_slider, fields=["Curve"], name="indifference", init={"Curve": 0.32} ) ic_check = alt.binding_range(min=0, max=1, step=1) icc_selection = alt.selection_single( bind=ic_check, fields=["On"], name="indifferencec", init={"On": 1} ) base = ( alt.Chart(df) .mark_line(color=colors[1], strokeWidth=3) .encode( x=alt.X( "x:Q", scale=alt.Scale(domain=[8, 24]), axis=alt.Axis( labelFontSize=16, tickCount=7, grid=False, title="Free time (hours)", titleFontSize=16, titleY=-10, titleBaseline="bottom", titleAngle=0, ), ), y=alt.Y( "c:Q", scale=alt.Scale(domain=[0, 400]), axis=alt.Axis( labelFontSize=16, tickCount=7, grid=False, title="Consumption (£)", titleFontSize=16, titleX=10, titleY=5, titleBaseline="bottom", titleAngle=0, titleAlign="left", ), ), ) ) line1 = base.mark_line(color=colors[3], strokeDash=[5, 
5], strokeWidth=1).encode( x="cx:Q", y="cy:Q" ) line2 = base.mark_line(color=colors[3], strokeDash=[5, 5], strokeWidth=1).encode( y="y:Q" ) ic1 = base.mark_line(color=colors[0], strokeWidth=1, clip=True).encode( x="ix:Q", y=alt.Y( "i1:Q", scale=alt.Scale(domain=[0, 400]), ), opacity=alt.condition( "indifferencec.On==0", alt.value(0), alt.value(1), ), ) point1 = ( alt.Chart(pd.DataFrame([{"x": 16, "y": 120}])) .mark_point(color=colors[3], opacity=1, fill=colors[3], size=40) .encode(x="x:Q", y="y:Q") ) pline1 = ( alt.Chart(pd.DataFrame([{"x": 16, "y": 120}, {"x": 8, "y": 120}])) .mark_line(color=colors[3], strokeDash=[5, 5], strokeWidth=1) .encode(x="x:Q", y="y:Q") ) pline2 = ( alt.Chart(pd.DataFrame([{"x": 16, "y": 0}, {"x": 16, "y": 120}])) .mark_line(color=colors[3], strokeDash=[5, 5], strokeWidth=1) .encode(x="x:Q", y="y:Q") ) atext1 = ( alt.Chart(pd.DataFrame([{"x": 16, "y": 120,"t":"A"}])) .mark_text(color=colors[2], fontSize=14,xOffset=0,yOffset=-12) .encode(x="x:Q", y="y:Q",text="t:N") ) point2 = ( alt.Chart(pd.DataFrame([{"x": 15.5, "y": 212.5}])) .mark_point(color=colors[3], opacity=1, fill=colors[3], size=40) .encode(x="x:Q", y="y:Q") ) pline21 = ( alt.Chart(pd.DataFrame([{"x": 15.5, "y": 212.5}, {"x": 8, "y": 212.5}])) .mark_line(color=colors[3], strokeDash=[5, 5], strokeWidth=1) .encode(x="x:Q", y="y:Q") ) pline22 = ( alt.Chart(pd.DataFrame([{"x": 15.5, "y": 0}, {"x": 15.5, "y": 212.5}])) .mark_line(color=colors[3], strokeDash=[5, 5], strokeWidth=1) .encode(x="x:Q", y="y:Q") ) pline23 = ( alt.Chart(pd.DataFrame([{"x": 15.5, "y": 250}, {"x": 16, "y": 250}])) .mark_line(color=colors[2], strokeWidth=1) .encode(x="x:Q", y="y:Q") ) ptext2 = ( alt.Chart(pd.DataFrame([{"x": 16.3, "y": 213,"t":"D⬅C"}, {"x": 18.8, "y": 213,"t":"Substitution effect"}])) .mark_text(color=colors[2], fontSize=14) .encode(x="x:Q", y="y:Q",text="t:N") ) atext2 = ( alt.Chart(pd.DataFrame([{"x": 15.5, "y": 212.5,"t":"D"}])) .mark_text(color=colors[2], fontSize=14,xOffset=0,yOffset=-12) 
.encode(x="x:Q", y="y:Q",text="t:N") ) point3 = ( alt.Chart(pd.DataFrame([{"x": 18, "y": 160}])) .mark_point(color=colors[3], opacity=1, fill=colors[3], size=40) .encode(x="x:Q", y="y:Q") ) pline31 = ( alt.Chart(pd.DataFrame([{"x": 18, "y": 160}, {"x": 8, "y": 160}])) .mark_line(color=colors[3], strokeDash=[5, 5], strokeWidth=1) .encode(x="x:Q", y="y:Q") ) pline32 = ( alt.Chart(pd.DataFrame([{"x": 18, "y": 0}, {"x": 18, "y": 160}])) .mark_line(color=colors[3], strokeDash=[5, 5], strokeWidth=1) .encode(x="x:Q", y="y:Q") ) pline33 = ( alt.Chart(pd.DataFrame([{"x": 18, "y": 160}, {"x": 16, "y": 160}])) .mark_line(color=colors[2], strokeWidth=1) .encode(x="x:Q", y="y:Q") ) ptext3 = ( alt.Chart(pd.DataFrame([{"x": 17.3, "y": 60,"t":"A➡C"}, {"x": 15.3, "y": 60,"t":"Income effect"}, {"x": 14.8, "y": 90,"t":"D⬅A"}, {"x": 12.7, "y": 90,"t":"Overall effect"}])) .mark_text(color=colors[2], fontSize=14) .encode(x="x:Q", y="y:Q",text="t:N") ) atext3 = ( alt.Chart(pd.DataFrame([{"x": 18, "y": 160,"t":"C"}])) .mark_text(color=colors[2], fontSize=14,xOffset=0,yOffset=-12) .encode(x="x:Q", y="y:Q",text="t:N") ) marks1 = (point1 + pline1 + pline2+atext1).encode( opacity=alt.condition( "1==1", alt.value(1), alt.value(0), ) ) marks2 = (point2 + pline21 + pline22+atext2).encode( opacity=alt.condition( "1==1", alt.value(1), alt.value(0), ) ) marks3 = (point3 + pline31 + pline32 + ptext2+ ptext3+atext3).encode( opacity=alt.condition( "(indifference.Curve==0.32)&(slope.Wage==15)&(base.Income==70)&(indifferencec.On==1)", alt.value(1), alt.value(0), ) ) layer1 = ( (line1 + line2 + base + ic1 + marks1 + marks2+marks3) .add_selection(ic_selection) .add_selection(icc_selection) .add_selection(slope_selection) .add_selection(base_selection) .transform_calculate(cx="24") .transform_calculate(op1="4") .transform_calculate(cy="min(datum.y,base.Income)") .transform_calculate(c="((24-datum.x)*slope.Wage)+1*base.Income") .transform_calculate( 
i1="10*indifference.Curve+100/((datum.x-8)*(indifference.Curve/5))" ) .transform_calculate(ix="datum.x/1") .transform_calculate(c3="300*slope.Wage/15-pow(slope.Wage/7,(datum.x*2/4.62))") ) layer=(layer1.properties(height=350, width=500)).configure_view(stroke=None).properties( title="", background="rgba(0,0,0,0)" ) f='4' layer.save("visualisation/" + f + ".json") layer # -
test/bristol/freetime.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 401 CNN # # View more, visit my tutorial page: https://morvanzhou.github.io/tutorials/ # My Youtube Channel: https://www.youtube.com/user/MorvanZhou import torch import torch.nn as nn from torch.autograd import Variable import torch.utils.data as Data import torchvision import matplotlib.pyplot as plt # %matplotlib inline torch.manual_seed(1) # reproducible # Hyper Parameters EPOCH = 1 # train the training data n times, to save time, we just train 1 epoch BATCH_SIZE = 50 LR = 0.001 # learning rate DOWNLOAD_MNIST = False # set to False if you have downloaded - this has been downloaded in /mnist folder # The following cell downloads the data and create 'raw' and 'processed' subfolder # Mnist digits dataset train_data = torchvision.datasets.MNIST( root='./mnist/', train=True, # this is training data transform=torchvision.transforms.ToTensor(), # Converts a PIL.Image or numpy.ndarray to # torch.FloatTensor of shape (C x H x W) and normalize in the range [0.0, 1.0] download=DOWNLOAD_MNIST, # download it if you don't have it ) # plot one example print(train_data.train_data.size()) # (60000, 28, 28) print(train_data.train_labels.size()) # (60000) print(train_data.train_data[0].numpy()) plt.imshow(train_data.train_data[0].numpy(), cmap='gray') plt.title('%i' % train_data.train_labels[0]) plt.show() # Data Loader for easy mini-batch return in training, the image batch shape will be (50, 1, 28, 28) train_loader = Data.DataLoader(dataset=train_data, batch_size=BATCH_SIZE, shuffle=True) # convert test data into Variable, pick 2000 samples to speed up testing test_data = torchvision.datasets.MNIST(root='./mnist/', train=False) test_x = Variable(torch.unsqueeze(test_data.test_data, dim=1)).type(torch.FloatTensor)[:2000]/255. 
# shape from (2000, 28, 28) to (2000, 1, 28, 28), value in range(0,1) test_y = test_data.test_labels[:2000] class CNN(nn.Module): def __init__(self): super(CNN, self).__init__() self.conv1 = nn.Sequential( # input shape (1, 28, 28) nn.Conv2d( in_channels=1, # input height out_channels=16, # n_filters kernel_size=5, # filter size stride=1, # filter movement/step padding=2, # if want same width and length of this image after conv2d, padding=(kernel_size-1)/2 if stride=1 ), # output shape (16, 28, 28) nn.ReLU(), # activation nn.MaxPool2d(kernel_size=2), # choose max value in 2x2 area, output shape (16, 14, 14) ) self.conv2 = nn.Sequential( # input shape (16, 14, 14) nn.Conv2d(16, 32, 5, 1, 2), # output shape (32, 14, 14) nn.ReLU(), # activation nn.MaxPool2d(2), # output shape (32, 7, 7) ) self.out = nn.Linear(32 * 7 * 7, 10) # fully connected layer, output 10 classes def forward(self, x): x = self.conv1(x) x = self.conv2(x) x = x.view(x.size(0), -1) # flatten the output of conv2 to (batch_size, 32 * 7 * 7) output = self.out(x) return output, x # return x for visualization cnn = CNN() print(cnn) # net architecture optimizer = torch.optim.Adam(cnn.parameters(), lr=LR) # optimize all cnn parameters loss_func = nn.CrossEntropyLoss() # the target label is not one-hotted # + # following function (plot_with_labels) is for visualization, can be ignored if not interested from matplotlib import cm try: from sklearn.manifold import TSNE; HAS_SK = True except: HAS_SK = False; print('Please install sklearn for layer visualization') def plot_with_labels(lowDWeights, labels): plt.cla() X, Y = lowDWeights[:, 0], lowDWeights[:, 1] for x, y, s in zip(X, Y, labels): c = cm.rainbow(int(255 * s / 9)); plt.text(x, y, s, backgroundcolor=c, fontsize=9) plt.xlim(X.min(), X.max()); plt.ylim(Y.min(), Y.max()); plt.title('Visualize last layer'); plt.show(); plt.pause(0.01) plt.ion() # training and testing for epoch in range(EPOCH): for step, (x, y) in enumerate(train_loader): # gives batch data, 
normalize x when iterate train_loader b_x = Variable(x) # batch x b_y = Variable(y) # batch y output = cnn(b_x)[0] # cnn output loss = loss_func(output, b_y) # cross entropy loss optimizer.zero_grad() # clear gradients for this training step loss.backward() # backpropagation, compute gradients optimizer.step() # apply gradients if step % 100 == 0: test_output, last_layer = cnn(test_x) pred_y = torch.max(test_output, 1)[1].data.squeeze() accuracy = (pred_y == test_y).sum().item() / float(test_y.size(0)) print('Epoch: ', epoch, '| train loss: %.4f' % loss.item(), '| test accuracy: %.2f' % accuracy) if HAS_SK: # Visualization of trained flatten layer (T-SNE) tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000) plot_only = 500 low_dim_embs = tsne.fit_transform(last_layer.data.numpy()[:plot_only, :]) labels = test_y.numpy()[:plot_only] plot_with_labels(low_dim_embs, labels) plt.ioff() # - # print 10 predictions from test data test_output, _ = cnn(test_x[:10]) pred_y = torch.max(test_output, 1)[1].data.numpy().squeeze() print(pred_y, 'prediction number') print(test_y[:10].numpy(), 'real number')
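The shape comments in the `CNN` class above can be double-checked with the standard convolution output-size formula, floor((size + 2·padding − kernel) / stride) + 1. This small sketch verifies that the flattened feature size feeding the final `nn.Linear` layer really is 32 * 7 * 7:

```python
def conv_out(size, kernel, stride=1, padding=0):
    """Spatial output size of a convolution: floor((size + 2p - k) / s) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

s = conv_out(28, 5, stride=1, padding=2)  # conv1: 28 -> 28 ("same" padding)
s //= 2                                   # 2x2 max-pool: 28 -> 14
s = conv_out(s, 5, stride=1, padding=2)   # conv2: 14 -> 14
s //= 2                                   # 2x2 max-pool: 14 -> 7
assert s == 7                             # so the flattened size is 32 * 7 * 7 = 1568
```

This is also why `padding=2` with `kernel_size=5` leaves the image size unchanged: padding = (kernel_size − 1) / 2 whenever stride is 1.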
tutorial-contents-notebooks/401_CNN.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %load_ext autoreload # %autoreload 2 import numpy as np import matplotlib.pyplot as plt import scipy from scipy.interpolate import interp2d # ### Import functions # See core.py for further details import core # ### Make some data # Three use cases # - partial nans # - all nan # - full data = np.random.random((100,100)) data[10:20,10:20] = np.nan plt.colorbar(plt.imshow(data,vmin=0,vmax=1)) positions = [(9,10), (15,15), (40,40)] plt.colorbar(plt.imshow(data,vmin=0,vmax=1)) plt.scatter(*zip(*positions), c='r') # ### Test interpolation methods # - Nearest Neighbor # - Inverse Distance Weighting # - linear # - cubic # - quintic # + dx=3 dy=3 spacing = 1 IDW_power = core.find_optimal_IDW_power(positions,data,dx=dx,dy=dy,spacing=spacing,powers= np.arange(0,20,0.1)) for p in positions: #slice out window window = data[p[0]-dx:p[0]+dx, p[1]-dy:p[1]+dy] # get center point coordinate in window center = dx-spacing/2,dy-spacing/2 # extract window coordinates coords = [] for x in np.arange(0,window.shape[0],1): for y in np.arange(0,window.shape[1],1): coords.append((x,y)) coords_x = np.array(coords)[:,0] coords_y = np.array(coords)[:,1] plt.colorbar(plt.imshow(window,vmin=0,vmax=1)) plt.scatter(center[0],center[1], c='r') plt.scatter(coords_x,coords_y, c='b') plt.show() if np.isnan(window).all(): print('No data in window. 
Consider increasing window size.') else: print('mean:',np.nanmean(window)) print('IDW:', core.IDW(window,coords_x,coords_y,center,IDW_power)) print('nearest:', core.nearest_neighbor(window,coords_x,coords_y,center)) print('scipy linear:',core.scipy_methods(window,coords_x,coords_y,center,kind= 'linear')) print('scipy cubic:',core.scipy_methods(window,coords_x,coords_y,center,kind= 'cubic')) print('scipy quintic:',core.scipy_methods(window,coords_x,coords_y,center,kind= 'quintic')) # - # ### Try to interpolate nans with scipy.interpolate.griddata # + dx=3 dy=3 spacing = 1 IDW_power = core.find_optimal_IDW_power(positions,data,dx=dx,dy=dy,spacing=spacing,powers= np.arange(0,20,0.1)) for p in positions: #slice out window window = data[p[0]-dx:p[0]+dx, p[1]-dy:p[1]+dy] # get center point coordinate in window center = dx-spacing/2,dy-spacing/2 # extract window coordinates points = [] values = [] coords = [] for x in np.arange(0,window.shape[0],1): for y in np.arange(0,window.shape[1],1): if not np.isnan(window[x,y]): points.append((x,y)) values.append(window[x,y]) coords.append((x,y)) points = np.array(points) values = np.array(values) coords = np.array(coords) coords_x = np.array(coords)[:,0] coords_y = np.array(coords)[:,1] if np.isnan(window).all(): print('\nNo data in window. 
Consider increasing window size.') else: window = scipy.interpolate.griddata(points,values,(coords_x,coords_y), method='cubic',).reshape(6,6) plt.colorbar(plt.imshow(window,vmin=0,vmax=1)) plt.scatter(center[0],center[1], c='r') plt.scatter(coords_x,coords_y, c='b') plt.show() print('mean:',np.nanmean(window)) print('IDW:', core.IDW(window,coords_x,coords_y,center,power=IDW_power)) print('nearest:', core.nearest_neighbor(window,coords_x,coords_y,center)) print('scipy linear:',core.scipy_methods(window,coords_x,coords_y,center,kind= 'linear')) print('scipy cubic:',core.scipy_methods(window,coords_x,coords_y,center,kind= 'cubic')) print('scipy quintic:',core.scipy_methods(window,coords_x,coords_y,center,kind= 'quintic')) # - # ## Takeaways: # # - mean is never really accurate # - IDW is robust to nans, similar to scipy linear, and better than nearest neighbor # - scipy.interpolate.griddata fills in some data, but not enough to use scipy point interpolation methods
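core.py is not included here, so the exact implementation of the helpers is unknown. A minimal inverse distance weighting sketch with the same call shape as the `core.IDW` calls above (window, sample coordinates, centre point, power) might look like the following; the signature and the NaN handling are assumptions, not the project's actual code:

```python
import numpy as np

def idw(window, coords_x, coords_y, center, power=2.0):
    """Inverse-distance-weighted estimate of `window` at `center`, skipping NaNs.

    Hypothetical stand-in for core.IDW; the real helper is not shown here.
    """
    xs = np.asarray(coords_x)
    ys = np.asarray(coords_y)
    vals = window[xs, ys].astype(float)       # sample values at the coordinates
    mask = ~np.isnan(vals)                    # ignore NaN samples entirely
    dist = np.hypot(xs - center[0], ys - center[1])[mask]
    weights = 1.0 / np.maximum(dist, 1e-12) ** power  # guard against zero distance
    return float(np.sum(weights * vals[mask]) / np.sum(weights))
```

Because each sample's weight decays as 1/distance^power and NaN cells simply drop out of the weighted average, this kind of estimator is robust to partially-NaN windows, consistent with the takeaways above.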
friedrich/synthetic_data_example.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernel_info: # name: python3 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # Dependencies import pandas as pd # File Import and Read csvfile = "HeroesOfPymoli/Resources/purchase_data.csv" df = pd.read_csv(csvfile) # + # Player Count #Retrieve Values from Data playerCount = df.SN.nunique() playerCounttable = pd.DataFrame() playerCounttable ["Total Players"] = [playerCount] #Display Table playerCounttable # + #Purchasing Analysis (Total) #Retrieve Values from Data uniqueItems = df["Item Name"].nunique() averagePrice = df["Price"].mean() totalPurchase = len(df["Purchase ID"]) totalSum = df["Price"].sum() #Create DataFrame itemsTable = pd.DataFrame() itemsTable["Count of Unique Items"] = [uniqueItems] itemsTable["Average Price"] = averagePrice itemsTable["Count of Purchases"]= [totalPurchase] itemsTable["Total Sum of Purchases"] = [totalSum] #Format Data Appropriately itemsTable["Average Price"]=itemsTable["Average Price"].map("${:,.2f}".format) itemsTable["Total Sum of Purchases"]=itemsTable["Total Sum of Purchases"].map("${:,.2f}".format) #Display Table itemsTable # + #Gender Demographics #Retrieve Values from Data people_df = df.groupby(['SN','Gender']).size().reset_index().rename(columns={0:'count'}) genderCount = people_df.Gender.value_counts() genderPercent=genderCount/len(people_df) #Create DataFrame genderTable=pd.DataFrame() genderTable["Total Count"] = genderCount #Format Data Appropriately genderTable["Percentage of Players"] = (genderPercent*100).map("{:.2f}%".format) #Display Table genderTable # + #Purchasing Analysis (Gender) #Retrieve Values from Data for Columns a genAverageperperson = df.groupby("Gender").Price.sum() / genderTable["Total Count"] #Create DataFrame and Format Data Appropriately genderPurchaseTable=pd.DataFrame() genderPurchaseTable["Purchase Count"] = 
df.groupby("Gender").size() genderPurchaseTable["Average Purchase Price"] = df.groupby("Gender").Price.mean().map("${:,.2f}".format) genderPurchaseTable["Total Purchase Value"] = df.groupby("Gender").Price.sum().map("${:,.2f}".format) genderPurchaseTable["Avg Total Purchase per Person"] = genAverageperperson.map("${:,.2f}".format) #Display Table genderPurchaseTable # + #Age Demographics (Age) #Create Bins and Bin Labels bins =[0, 9, 14, 19, 24, 29, 34, 39, 50] group_names = [" <10", "10-14","15-19","20-24","25-29","30-34","35-39","40+"] #Retrieve Data for Columns (setting bin names to strings) ageRange = pd.cut(df["Age"], bins, labels=group_names) df["Age Range"] = pd.cut(df["Age"], bins, labels=group_names).astype(str) age_df = df.groupby(['SN','Age Range']).size().reset_index().rename(columns={0:'count'}) #Retrieve Data and Sort ageGrouptable = age_df.groupby("Age Range").size().reset_index().rename(columns={0:'Total Count'}) #Calculate % ageGrouptable["Percentage of Players"] = ageGrouptable["Total Count"] / len(people_df) * 100 #Format Data Appropriately ageGrouptable["Percentage of Players"] = ageGrouptable["Percentage of Players"].map("{:.2f}%".format) ageGrouptable = ageGrouptable.set_index("Age Range") #Display Table ageGrouptable # + #Purchasing Analysis (Age) #Retrieve Values from Data for Columns ageSize = df.groupby("Age Range").size() ageAvgPrice = df.groupby("Age Range").Price.mean() ageTotPrice = df.groupby("Age Range").Price.sum() ageAvgPerPerson = df.groupby("Age Range").Price.sum()/ageGrouptable["Total Count"] #Create DataFrame and Format Data Appropriately agePurchasetable = pd.DataFrame() agePurchasetable["Purchase Count"] = ageSize agePurchasetable["Average Purchase Price"] = ageAvgPrice.map("${:,.2f}".format) agePurchasetable["Total Purchase Value"] = ageTotPrice.map("${:,.2f}".format) agePurchasetable["Avg Total Purchase per Person"] = ageAvgPerPerson.map("${:,.2f}".format) #Sort and Display Table agePurchasetable.sort_index() # + #Top Spenders #Retrieve Values from Data for Columns topSpenders = 
df.groupby("SN").size() topSpendersaverage = df.groupby("SN").Price.mean() topSpenderssum = df.groupby("SN").Price.sum() #Create DataFrame topSpenders_df = pd.DataFrame() topSpenders_df["SN"]=topSpenders topSpenders_df["Average Purchase Price"]= topSpendersaverage topSpenders_df["Total Purchase Value"]= topSpenderssum #Sort Data topSpenders_df=topSpenders_df.sort_values(by="Total Purchase Value", ascending=False) #Format Data Appropriately topSpenders_df["Average Purchase Price"]=topSpenders_df["Average Purchase Price"].map("${:,.2f}".format) topSpenders_df["Total Purchase Value"]=topSpenders_df["Total Purchase Value"].map("${:,.2f}".format) #Display Table (Sample) topSpenders_df.head() # + #Most Popular Items #Retrieve Values from Data for Columns topItems = df.groupby(["Item ID", "Item Name"]).size() topItemssaverage = df.groupby(["Item ID", "Item Name"]).Price.mean() topItemsssum = df.groupby(["Item ID", "Item Name"]).Price.sum() #Create DataFrame topItems_df = pd.DataFrame() topItems_df["Purchase Count"] = topItems topItems_df["Item Price"] = topItemssaverage topItems_df["Total Purchase Value"] = topItemsssum #Sort Data topItems_df=topItems_df.sort_values(by="Purchase Count", ascending=False) #Format Data Appropriately topItems_df["Item Price"]=topItems_df["Item Price"].map("${:,.2f}".format) topItems_df["Total Purchase Value"]=topItems_df["Total Purchase Value"].map("${:,.2f}".format) #Display Table (Sample) topItems_df.head() # + #Most Profitable Items (NOTE: Same as Most Popular Items except sorted to Purchase Value) #Retrieve Values from Data for Columns topItems = df.groupby(["Item ID", "Item Name"]).size() topItemssaverage = df.groupby(["Item ID", "Item Name"]).Price.mean() topItemsssum = df.groupby(["Item ID", "Item Name"]).Price.sum() #Create DataFrame topItems_df = pd.DataFrame() topItems_df["Purchase Count"]=topItems topItems_df["Item Price"]= topItemssaverage topItems_df["Total Purchase Value"]= topItemsssum #Sort Data 
topItems_df=topItems_df.sort_values(by="Total Purchase Value", ascending=False) #Format Data Appropriately topItems_df["Item Price"]=topItems_df["Item Price"].map("${:,.2f}".format) topItems_df["Total Purchase Value"]=topItems_df["Total Purchase Value"].map("${:,.2f}".format) #Display Table (Sample) topItems_df.head()
Heroes of Pymoli/Instructions/HeroesOfPymoli (JWORLEIN).ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/Isiumlord/ProjetoFinal-AcidentesTerrestres/blob/main/DataSet-CasosCovid-Pandas.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + id="FMfPgT1MO1rl" #1 - IMPORTING LIBRARIES import pandas as pd import numpy as np # + id="Bd5xdCCRXyBu" colab={"base_uri": "https://localhost:8080/"} outputId="770c4644-bbcb-4aba-c26a-c770379afd14" #2 - INTEGRATING WITH GCP from google.colab import auth from google.cloud import storage auth.authenticate_user() project_id = 'projeto-final-grupo03' # !gcloud config set project projeto-final-grupo03 # + id="R0zcHFLqYEAP" colab={"base_uri": "https://localhost:8080/"} outputId="6954ca6d-52e4-4946-f2d6-7d8468e0a783" #3 - COPYING THE FILE FROM GCP INTO COLAB # !gsutil cp gs://datasets_gp03/casos-covid.csv /tmp/casos-covid.csv # + id="GOAe-guKYShU" colab={"base_uri": "https://localhost:8080/", "height": 617} outputId="e1c9d558-3b40-4bab-f033-3d2f4d4eabdd" #3 - READING AND DISPLAYING THE DATAFRAME df = pd.read_csv('/tmp/casos-covid.csv') df # + id="7sYBBzP5aiK9" #4 - CHECKING THE DATA TYPES df.dtypes # + id="MoC5D-V3atyp" colab={"base_uri": "https://localhost:8080/", "height": 617} outputId="b98fd946-f668-499f-9c31-d97052a9893d" #5 - CREATING A BACKUP IN CASE OF EDITING ERRORS df_backup = df.copy() # + id="GE7IerODa5Qx" #6 - TRANSLATING THE TABLE COLUMN HEADERS df.rename(columns={"date": "Data"}, inplace=True) df.rename(columns={"state": "Estado"}, inplace=True) df.rename(columns={"newDeaths": "Novas_Mortes"}, inplace=True) df.rename(columns={"deaths": "Mortes"}, inplace=True) df.rename(columns={"newCases": "Novos_Casos"}, inplace=True) df.rename(columns={"totalCases": 
"Total_De_Casos"}, inplace=True) # + id="lCEWzhzLa_uc" colab={"base_uri": "https://localhost:8080/", "height": 313} outputId="7dd32946-cd7e-4c59-9b0c-764a0dd5008c" #7 - VERIFICANDO TRADUÇÃO df.head() # + id="RqXQPSN8bc3h" colab={"base_uri": "https://localhost:8080/"} outputId="56648815-5d03-4c5f-e752-226774c350d2" #8 - ANALIZANDO DADOS DA COLUNA pd.unique(df["country"]) # + id="qEe0B9rvcsxI" colab={"base_uri": "https://localhost:8080/"} outputId="a0c333dc-cefd-4631-e1ab-8b0d6980dd7e" #8 - ANALIZANDO DADOS DA COLUNA pd.unique(df["city"]) # + id="xRyNhGJGd342" colab={"base_uri": "https://localhost:8080/"} outputId="ad0088f0-bd20-4479-a8c6-75bc23634265" #8 - ANALIZANDO DADOS DA COLUNA pd.unique(df["Estado"]) # + id="aotzg9SpeVuo" #9 - DROPANDO LINHAS COM VALOR 'ESTADO = TOTAL' df.drop(df.loc[df['Estado'] == 'TOTAL'].index, inplace=True) # + id="i0Im-7cmelSj" colab={"base_uri": "https://localhost:8080/"} outputId="631daff7-98f5-4095-d446-ecc537571d12" #10 - VERIFICANDO DROP 'ESTADO = TOTAL' pd.unique(df["Estado"]) # + id="x5kyBvAqV-6-" colab={"base_uri": "https://localhost:8080/"} outputId="32f88c47-6cb5-4abe-c341-1cffd3b57fc8" #11 - VERFICANDO NÚMERO DE ESTADOS cont = 0 for i in pd.unique(df["Estado"]): cont += 1 print(cont) # + id="EG9vRLXKcxIL" #12 - DROPANDO COLUNAS PARA MELHOR VISUALIZAÇÃO df.drop(["city"], axis=1, inplace=True) df.drop(["epi_week"], axis=1, inplace=True) df.drop(["country"], axis=1, inplace=True) df.drop(["totalCases_per_100k_inhabitants"], axis=1, inplace=True) df.drop(["deaths_per_100k_inhabitants"], axis=1, inplace=True) df.drop(["deathsMS"], axis=1, inplace=True) df.drop(["totalCasesMS"], axis=1, inplace=True) df.drop(["deaths_by_totalCases"], axis=1, inplace=True) df.drop(["recovered"], axis=1, inplace=True) df.drop(["suspects"], axis=1, inplace=True) df.drop(["tests"], axis=1, inplace=True) df.drop(["tests_per_100k_inhabitants"], axis=1, inplace=True) df.drop(["vaccinated"], axis=1, inplace=True) df.drop(["vaccinated_per_100_inhabitants"], 
axis=1, inplace=True) df.drop(["vaccinated_second"], axis=1, inplace=True) df.drop(["vaccinated_second_per_100_inhabitants"], axis=1, inplace=True) df.drop(["vaccinated_single"], axis=1, inplace=True) df.drop(["vaccinated_single_per_100_inhabitants"], axis=1, inplace=True) df.drop(["vaccinated_third"], axis=1, inplace=True) df.drop(["vaccinated_third_per_100_inhabitants"], axis=1, inplace=True) # + id="9UtPIhvYdic8" colab={"base_uri": "https://localhost:8080/", "height": 423} outputId="1cb772c3-1179-43ab-9ccf-a79aa3658d16" #13 - VERIFYING THE COLUMN DROPS df # + id="tDWyum11fGen" colab={"base_uri": "https://localhost:8080/", "height": 423} outputId="4c60da6f-dace-4639-9a77-616002d28936" #14 - DROPPING POSSIBLE DUPLICATES df = df.drop_duplicates() df # + id="DG4P9MRfh0H6" colab={"base_uri": "https://localhost:8080/"} outputId="f83c33ff-823c-4ea4-af85-c7c58ced3219" #15 - CHECKING FOR POSSIBLE NULL VALUES print(df.isnull().sum()) # + id="fwj8gjtHujBi" #16 - CONVERTING THE FORMAT OF THE 'Data' AND 'Estado' COLUMNS df["Data"] = pd.to_datetime(df["Data"], dayfirst=True) df.Estado = df.Estado.convert_dtypes(convert_string=True) # + id="ppfVoF1xkiQC" colab={"base_uri": "https://localhost:8080/", "height": 53} outputId="00e98cc7-31e1-4884-8056-0889ff40ce00" #17 - SAVING THE NORMALIZED FILE TO THE BUCKET df.to_csv('casos-covid_tratado.csv') # !gsutil cp casos-covid_tratado.csv gs://notebooks_pandas_gp03/ # !gsutil cat gs://notebooks_pandas_gp03/casos-covid_tratado.csv
Pandas-DataSet-CasosCovid.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] tags=["remove_cell"] # # Quantum Fourier Transform # - # In this tutorial, we introduce the quantum Fourier transform (QFT), derive the circuit, and implement it using Qiskit. We show how to run QFT on a simulator and a five-qubit device. # # ## Contents # 1. [Introduction](#introduction) # 2. [Intuition](#intuition) # 2.1 [Counting in the Fourier Basis](#counting-fourier) # 3. [Example 1: 1-qubit QFT](#example1) # 4. [The Quantum Fourier transform](#qfteqn) # 5. [The Circuit that Implements the QFT](#circuit) # 6. [Example 2: 3-qubit QFT](#example2) # 7. [Some Notes About the Form of the QFT Circuit](#formnote) # 8. [Qiskit Implementation](#implementation) # 8.1 [Example on 3 Qubits](#threeqft) # 8.2 [General QFT Function](#generalqft) # 8.3 [Running QFT on a Real Quantum Device](#implementationdev) # 9. [Problems](#problems) # 10. [References](#references) # ## 1. Introduction <a id='introduction'></a> # # The Fourier transform occurs in many different versions throughout classical computing, in areas ranging from signal processing to data compression to complexity theory. The quantum Fourier transform (QFT) is the quantum implementation of the discrete Fourier transform over the amplitudes of a wavefunction. It is part of many quantum algorithms, most notably Shor's factoring algorithm and quantum phase estimation. # The discrete Fourier transform acts on a vector $(x_0, ..., x_{N-1})$ and maps it to the vector $(y_0, ..., y_{N-1})$ according to the formula # # # $$y_k = \frac{1}{\sqrt{N}}\sum_{j=0}^{N-1}x_j\omega_N^{jk}$$ # # # where $\omega_N^{jk} = e^{2\pi i \frac{jk}{N}}$. 
# # Similarly, the quantum Fourier transform acts on a quantum state $\vert X\rangle = \sum_{j=0}^{N-1} x_j \vert j \rangle$ and maps it to the quantum state $\vert Y\rangle = \sum_{k=0}^{N-1} y_k \vert k \rangle$ according to the formula # # # $$y_k = \frac{1}{\sqrt{N}}\sum_{j=0}^{N-1}x_j\omega_N^{jk}$$ # # # with $\omega_N^{jk}$ defined as above. Note that only the amplitudes of the state were affected by this transformation. # # This can also be expressed as the map: # # # $$\vert j \rangle \mapsto \frac{1}{\sqrt{N}}\sum_{k=0}^{N-1}\omega_N^{jk} \vert k \rangle$$ # # # # Or the unitary matrix: # # # $$ U_{QFT} = \frac{1}{\sqrt{N}} \sum_{j=0}^{N-1} \sum_{k=0}^{N-1} \omega_N^{jk} \vert k \rangle \langle j \vert$$ # # # ## 2. Intuition <a id="intuition"></a> # # The quantum Fourier transform (QFT) transforms between two bases: the computational (Z) basis and the Fourier basis. The H-gate is the single-qubit QFT, and it transforms the Z-basis states $|0\rangle$ and $|1\rangle$ to the X-basis states $|{+}\rangle$ and $|{-}\rangle$. In the same way, all multi-qubit states in the computational basis have corresponding states in the Fourier basis. The QFT is simply the function that transforms between these bases. # # $$ # |\text{State in Computational Basis}\rangle \quad \xrightarrow[]{\text{QFT}} \quad |\text{State in Fourier Basis}\rangle # $$ # # $$ # \text{QFT}|x\rangle = |\widetilde{x}\rangle # $$ # # (We often denote states in the Fourier basis using the tilde (~)). # # ### 2.1 Counting in the Fourier basis: <a id="counting-fourier"></a> # # In the computational basis, we store numbers in binary using the states $|0\rangle$ and $|1\rangle$: # # ![zbasiscounting](images/zbasis-counting.gif) # # Note the frequency with which the different qubits change; the leftmost qubit flips with every increment in the number, the next with every 2 increments, the third with every 4 increments, and so on.
In the Fourier basis, we store numbers using different rotations around the Z-axis: # # ![fbasiscounting](images/fourierbasis-counting.gif) # # The number we want to store dictates the angle at which each qubit is rotated around the Z-axis. In the state $|\widetilde{0}\rangle$, all qubits are in the state $|{+}\rangle$. As seen in the example above, to encode the state $|\widetilde{5}\rangle$ on 4 qubits, we rotated the leftmost qubit by $\tfrac{5}{2^n} = \tfrac{5}{16}$ full turns ($\tfrac{5}{16}\times 2\pi$ radians). The next qubit is turned double this ($\tfrac{10}{16}\times 2\pi$ radians, or $10/16$ full turns); this angle is then doubled for the qubit after, and so on. # # Again, note the frequency with which each qubit changes. The leftmost qubit (`qubit 0`) in this case has the lowest frequency, and the rightmost the highest. # # ## 3. Example 1: 1-qubit QFT <a id='example1'></a> # # Consider how the QFT operator as defined above acts on a single-qubit state $\vert\psi\rangle = \alpha \vert 0 \rangle + \beta \vert 1 \rangle$. In this case, $x_0 = \alpha$, $x_1 = \beta$, and $N = 2$.
Then, # # # # $$y_0 = \frac{1}{\sqrt{2}}\left( \alpha \exp\left(2\pi i\frac{0\times0}{2}\right) + \beta \exp\left(2\pi i\frac{1\times0}{2}\right) \right) = \frac{1}{\sqrt{2}}\left(\alpha + \beta\right)$$ # # # # and # # # # $$y_1 = \frac{1}{\sqrt{2}}\left( \alpha \exp\left(2\pi i\frac{0\times1}{2}\right) + \beta \exp\left(2\pi i\frac{1\times1}{2}\right) \right) = \frac{1}{\sqrt{2}}\left(\alpha - \beta\right)$$ # # # # such that the final result is the state # # # # $$U_{QFT}\vert\psi\rangle = \frac{1}{\sqrt{2}}(\alpha + \beta) \vert 0 \rangle + \frac{1}{\sqrt{2}}(\alpha - \beta) \vert 1 \rangle$$ # # # # This operation is exactly the result of applying the Hadamard operator ($H$) on the qubit: # # # # $$H = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$$ # # # # If we apply the $H$ operator to the state $\vert\psi\rangle = \alpha \vert 0 \rangle + \beta \vert 1 \rangle$, we obtain the new state: # # $$\frac{1}{\sqrt{2}}(\alpha + \beta) \vert 0 \rangle + \frac{1}{\sqrt{2}}(\alpha - \beta) \vert 1 \rangle # \equiv \tilde{\alpha}\vert 0 \rangle + \tilde{\beta}\vert 1 \rangle$$ # # Notice how the Hadamard gate performs the discrete Fourier transform for $N = 2$ on the amplitudes of the state. # ## 4. The Quantum Fourier transform<a id="qfteqn"></a> # So what does the quantum Fourier transform look like for larger $N$? Let's derive a transformation for $N=2^n$, $QFT_N$ acting on the state $\vert x \rangle = \vert x_1\ldots x_n \rangle$ where $x_1$ is the most significant bit. This maths is here for those who find it useful. If you struggle with it, don’t worry; as long as you understand the intuition in section 2, you can continue straight to the next section.
# # $$ # \begin{aligned} # QFT_N\vert x \rangle & = \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1}\omega_N^{xy} \vert y \rangle # \\ # & = \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1} e^{2 \pi i xy / 2^n} \vert y \rangle ~\text{since}\: \omega_N^{xy} = e^{2\pi i \frac{xy}{N}} \:\text{and}\: N = 2^n # \\ # & = \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1} e^{2 \pi i \left(\sum_{k=1}^n y_k/2^k\right) x} \vert y_1 \ldots y_n \rangle \:\text{rewriting in fractional binary notation}\: y = y_1\ldots y_n, y/2^n = \sum_{k=1}^n y_k/2^k # \\ # & = \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1} \prod_{k=1}^n e^{2 \pi i x y_k/2^k } \vert y_1 \ldots y_n \rangle \:\text{after expanding the exponential of a sum to a product of exponentials} # \\ # & = \frac{1}{\sqrt{N}} \bigotimes_{k=1}^n \left(\vert0\rangle + e^{2 \pi i x /2^k } \vert1\rangle \right) \:\text{after rearranging the sum and products, and expanding} # \sum_{y=0}^{N-1} = \sum_{y_1=0}^{1}\sum_{y_2=0}^{1}\ldots\sum_{y_n=0}^{1} # \\ # & = \frac{1}{\sqrt{N}} # \left(\vert0\rangle + e^{\frac{2\pi i}{2}x} \vert1\rangle\right) # \otimes # \left(\vert0\rangle + e^{\frac{2\pi i}{2^2}x} \vert1\rangle\right) # \otimes # \ldots # \otimes # \left(\vert0\rangle + e^{\frac{2\pi i}{2^{n-1}}x} \vert1\rangle\right) # \otimes # \left(\vert0\rangle + e^{\frac{2\pi i}{2^n}x} \vert1\rangle\right) # \end{aligned} # $$ # # This is a mathematical description of the animation we saw in the intuition section: # # ![fbasiscounting](images/fourierbasis-counting.gif) # ## 5. The Circuit that Implements the QFT <a name="circuit"></a> # # The circuit that implements QFT makes use of two gates. The first one is a single-qubit Hadamard gate, $H$, that you already know. 
From the discussion in [Example 1](#example1) above, you have already seen that the action of $H$ on the single-qubit state $\vert x_k\rangle$ is # # # # $$H\vert x_k \rangle = \frac{1}{\sqrt{2}}\left(\vert0\rangle + \exp\left(\frac{2\pi i}{2}x_k\right)\vert1\rangle\right)$$ # # # # The second is a two-qubit controlled rotation $CROT_k$ given in block-diagonal form as # # $$CROT_k = \left[\begin{matrix} # I&0\\ # 0&UROT_k\\ # \end{matrix}\right]$$ # # where # # $$UROT_k = \left[\begin{matrix} # 1&0\\ # 0&\exp\left(\frac{2\pi i}{2^k}\right)\\ # \end{matrix}\right]$$ # # The action of $CROT_k$ on a two-qubit state $\vert x_l x_j\rangle$ where the first qubit is the control and the second is the target is given by # # # # $$CROT_k\vert 0x_j\rangle = \vert 0x_j\rangle$$ # # # and # # # $$CROT_k\vert 1x_j\rangle = \exp\left( \frac{2\pi i}{2^k}x_j \right)\vert 1x_j\rangle$$ # # # # Given these two gates, a circuit that implements [an n-qubit QFT](#qfteqn) is shown below. # # ![image1](images/qft.png) # # The circuit operates as follows. We start with an n-qubit input state $\vert x_1x_2\ldots x_n\rangle$. 
# # <ol> # <li> After the first Hadamard gate on qubit 1, the state is transformed from the input state to # # $$ # H_1\vert x_1x_2\ldots x_n\rangle = # \frac{1}{\sqrt{2}} # \left[\vert0\rangle + \exp\left(\frac{2\pi i}{2}x_1\right)\vert1\rangle\right] # \otimes # \vert x_2x_3\ldots x_n\rangle # $$ # # <li> After the $UROT_2$ gate on qubit 1 controlled by qubit 2, the state is transformed to # # $$ # \frac{1}{\sqrt{2}} # \left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^2}x_2 + \frac{2\pi i}{2}x_1\right)\vert1\rangle\right] # \otimes # \vert x_2x_3\ldots x_n\rangle # $$ # # <li> After the application of the last $UROT_n$ gate on qubit 1 controlled by qubit $n$, the state becomes # # $$ # \frac{1}{\sqrt{2}} # \left[\vert0\rangle + # \exp\left( # \frac{2\pi i}{2^n}x_n + # \frac{2\pi i}{2^{n-1}}x_{n-1} + # \ldots + # \frac{2\pi i}{2^2}x_2 + # \frac{2\pi i}{2}x_1 # \right) # \vert1\rangle\right] # \otimes # \vert x_2x_3\ldots x_n\rangle # $$ # # Noting that # # $$ # x = 2^{n-1}x_1 + 2^{n-2}x_2 + \ldots + 2^1x_{n-1} + 2^0x_n # $$ # # we can write the above state as # # $$ # \frac{1}{\sqrt{2}} # \left[\vert0\rangle + # \exp\left( # \frac{2\pi i}{2^n}x # \right) # \vert1\rangle\right] # \otimes # \vert x_2x_3\ldots x_n\rangle # $$ # # <li> After the application of a similar sequence of gates for qubits $2\ldots n$, we find the final state to be: # # $$ # \frac{1}{\sqrt{2}} # \left[\vert0\rangle + # \exp\left( # \frac{2\pi i}{2^n}x # \right) # \vert1\rangle\right] # \otimes # \frac{1}{\sqrt{2}} # \left[\vert0\rangle + # \exp\left( # \frac{2\pi i}{2^{n-1}}x # \right) # \vert1\rangle\right] # \otimes # \ldots # \otimes # \frac{1}{\sqrt{2}} # \left[\vert0\rangle + # \exp\left( # \frac{2\pi i}{2^{2}}x # \right) # \vert1\rangle\right] # \otimes # \frac{1}{\sqrt{2}} # \left[\vert0\rangle + # \exp\left( # \frac{2\pi i}{2^{1}}x # \right) # \vert1\rangle\right] # $$ # # which is exactly the QFT of the input state as derived <a href="#qfteqn">above</a> with the caveat that the order 
of the qubits is reversed in the output state. # </ol> # ## 6. Example 2: 3-qubit QFT <a id='example2'></a> # # The steps to creating the circuit for $\vert y_3y_2y_1\rangle = QFT_8\vert x_3x_2x_1\rangle$ would be: # # <ol> # <li> Apply a Hadamard gate to $\vert x_1 \rangle$ # # $$ # |\psi_1\rangle = # \vert x_3\rangle # \otimes # \vert x_2\rangle # \otimes # \frac{1}{\sqrt{2}} # \left[ # \vert0\rangle + # \exp\left(\frac{2\pi i}{2}x_1\right) # \vert1\rangle\right] # $$ # # <li> Apply a $UROT_2$ gate to $\vert x_1\rangle$ depending on $\vert x_2\rangle$ # # $$ # |\psi_2\rangle = # \vert x_3\rangle # \otimes # \vert x_2\rangle # \otimes # \frac{1}{\sqrt{2}} # \left[ # \vert0\rangle + # \exp\left( # \frac{2\pi i}{2^2}x_2 + \frac{2\pi i}{2}x_1 # \right) # \vert1\rangle\right] # $$ # # <li> Apply a $UROT_3$ gate to $\vert x_1\rangle$ depending on $\vert x_3\rangle$ # # $$ # |\psi_3\rangle = # \vert x_3\rangle # \otimes # \vert x_2\rangle # \otimes # \frac{1}{\sqrt{2}} # \left[ # \vert0\rangle + # \exp\left( # \frac{2\pi i}{2^3}x_3 + \frac{2\pi i}{2^2}x_2 + \frac{2\pi i}{2}x_1 # \right) # \vert1\rangle\right] # $$ # # <li> Apply a Hadamard gate to $\vert x_2 \rangle$ # # $$ # |\psi_4\rangle = # \vert x_3\rangle # \otimes # \frac{1}{\sqrt{2}} # \left[ # \vert0\rangle + # \exp\left( # \frac{2\pi i}{2}x_2 # \right) # \vert1\rangle\right] # \otimes # \frac{1}{\sqrt{2}} # \left[ # \vert0\rangle + # \exp\left( # \frac{2\pi i}{2^3}x_3 + \frac{2\pi i}{2^2}x_2 + \frac{2\pi i}{2}x_1 # \right) # \vert1\rangle\right] # $$ # # <li> Apply a $UROT_2$ gate to $\vert x_2\rangle$ depending on $\vert x_3\rangle$ # # $$ # |\psi_5\rangle = # \vert x_3\rangle # \otimes # \frac{1}{\sqrt{2}} # \left[ # \vert0\rangle + # \exp\left( # \frac{2\pi i}{2^2}x_3 + \frac{2\pi i}{2}x_2 # \right) # \vert1\rangle\right] # \otimes # \frac{1}{\sqrt{2}} # \left[ # \vert0\rangle + # \exp\left( # \frac{2\pi i}{2^3}x_3 + \frac{2\pi i}{2^2}x_2 + \frac{2\pi i}{2}x_1 # \right) # \vert1\rangle\right] # $$ # # <li> 
Apply a Hadamard gate to $\vert x_3\rangle$ # # $$ # |\psi_6\rangle = # \frac{1}{\sqrt{2}} # \left[ # \vert0\rangle + # \exp\left( # \frac{2\pi i}{2}x_3 # \right) # \vert1\rangle\right] # \otimes # \frac{1}{\sqrt{2}} # \left[ # \vert0\rangle + # \exp\left( # \frac{2\pi i}{2^2}x_3 + \frac{2\pi i}{2}x_2 # \right) # \vert1\rangle\right] # \otimes # \frac{1}{\sqrt{2}} # \left[ # \vert0\rangle + # \exp\left( # \frac{2\pi i}{2^3}x_3 + \frac{2\pi i}{2^2}x_2 + \frac{2\pi i}{2}x_1 # \right) # \vert1\rangle\right] # $$ # # # <li> Keep in mind the reverse order of the output state relative to the desired QFT. Therefore, we must reverse the order of the qubits (in this case swap $y_1$ and $y_3$). # </ol> # ## 7. Some Notes About the Form of the QFT Circuit <a id="formnote"></a> # The example above demonstrates a very useful form of the QFT for $N=2^n$. Note that only the last qubit depends on the values of all the other input qubits and each further bit depends less and less on the input qubits. This becomes important in physical implementations of the QFT, where nearest-neighbor couplings are easier to achieve than distant couplings between qubits. # # Additionally, as the QFT circuit becomes large, an increasing amount of time is spent doing increasingly slight rotations. It turns out that we can ignore rotations below a certain threshold and still get decent results; this is known as the approximate QFT. This is also important in physical implementations, as reducing the number of operations can greatly reduce decoherence and potential gate errors. # ## 8. Qiskit Implementation<a id='implementation'></a> # # In Qiskit, the implementation of the $CROT$ gate used in the discussion above is a controlled phase rotation gate.
This gate is defined in [OpenQASM](https://github.com/QISKit/openqasm) as # # $$ # CP(\theta) = # \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & e^{i\theta}\end{bmatrix} # $$ # # Hence, the mapping from the $CROT_k$ gate in the discussion above into the $CP$ gate is found from the equation # # $$ # \theta = 2\pi/2^k = \pi/2^{k-1} # $$ # # ### 8.1 Example on 3 Qubits <a id="threeqft"></a> import numpy as np from numpy import pi # importing Qiskit from qiskit import QuantumCircuit, transpile, assemble, Aer, IBMQ from qiskit.providers.ibmq import least_busy from qiskit.tools.monitor import job_monitor from qiskit.visualization import plot_histogram, plot_bloch_multivector # It is useful to work out the relevant code for the 3-qubit case before generalizing to the $n$-qubit case. First, we must define our quantum circuit: qc = QuantumCircuit(3) # **Note**: Remember that Qiskit's least significant bit has the lowest index (0); thus the circuit will be mirrored through the horizontal in relation to the image in section 5. First, we apply an H-gate to qubit 2: qc.h(2) qc.draw() # Next, we want to rotate this a further quarter turn if qubit 1 is in the state $|1\rangle$: qc.cp(pi/2, 1, 2) # CROT from qubit 1 to qubit 2 qc.draw() # And another eighth turn if the least significant qubit (0) is $|1\rangle$: qc.cp(pi/4, 0, 2) # CROT from qubit 0 to qubit 2 qc.draw() # With that qubit taken care of, we can now ignore it and repeat the process, using the same logic for qubits 0 and 1: qc.h(1) qc.cp(pi/2, 0, 1) # CROT from qubit 0 to qubit 1 qc.h(0) qc.draw() # Finally, we must swap the qubits 0 and 2 to complete the QFT: qc.swap(0,2) qc.draw() # ### 8.2 General QFT Function <a id="generalqft"></a> # # We will now create a general circuit for the QFT in Qiskit. Creating large general circuits like this is really where Qiskit shines.
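# As a sanity check on the circuit we just built (a sketch in plain NumPy, independent of Qiskit, using the same little-endian qubit ordering), we can multiply out the gate matrices and compare the result to the $8\times 8$ DFT matrix:

```python
import numpy as np

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def h(q):
    """Hadamard on qubit q of 3 (qubit 0 least significant, as in Qiskit)."""
    ops = [I2, I2, I2]
    ops[q] = H
    return np.kron(np.kron(ops[2], ops[1]), ops[0])

def cp(theta, a, b):
    """Controlled phase between qubits a and b (symmetric in a and b)."""
    U = np.eye(8, dtype=complex)
    for i in range(8):
        if (i >> a) & 1 and (i >> b) & 1:
            U[i, i] = np.exp(1j * theta)
    return U

def swap(a, b):
    """Swap qubits a and b."""
    U = np.zeros((8, 8))
    for i in range(8):
        ba, bb = (i >> a) & 1, (i >> b) & 1
        j = (i & ~(1 << a) & ~(1 << b)) | (ba << b) | (bb << a)
        U[j, i] = 1
    return U

# Gates in circuit order; the matrix product applies the rightmost first
qft3 = (swap(0, 2) @ h(0) @ cp(np.pi/2, 0, 1) @ h(1)
        @ cp(np.pi/4, 0, 2) @ cp(np.pi/2, 1, 2) @ h(2))

N = 8
dft = np.array([[np.exp(2j * np.pi * j * k / N) for j in range(N)]
                for k in range(N)]) / np.sqrt(N)
assert np.allclose(qft3, dft)
```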
# # It is easier to build a circuit that implements the QFT with the qubits upside down, then swap them afterwards; we will start off by creating the function that rotates our qubits correctly. Let’s start as we did with the 3-qubit example, by correctly rotating the most significant qubit (the qubit with the highest index): def qft_rotations(circuit, n): if n == 0: # Exit function if circuit is empty return circuit n -= 1 # Indexes start from 0 circuit.h(n) # Apply the H-gate to the most significant qubit for qubit in range(n): # For each less significant qubit, we need to do a # smaller-angled controlled rotation: circuit.cp(pi/2**(n-qubit), qubit, n) # Let’s see how this looks: qc = QuantumCircuit(4) qft_rotations(qc,4) qc.draw() # We can use the widget below to see how this circuit scales with the number of qubits in our circuit: from qiskit_textbook.widgets import scalable_circuit scalable_circuit(qft_rotations) # Great! This is the first part of our QFT. Now that we have correctly rotated the most significant qubit, we need to correctly rotate the second most significant qubit. Then we must deal with the third most significant, and so on. But why write more code? When we get to the end of our `qft_rotations()` function, we can use the same code to repeat the process on the next `n-1` qubits: # + def qft_rotations(circuit, n): """Performs qft on the first n qubits in circuit (without swaps)""" if n == 0: return circuit n -= 1 circuit.h(n) for qubit in range(n): circuit.cp(pi/2**(n-qubit), qubit, n) # At the end of our function, we call the same function again on # the next qubits (we reduced n by one earlier in the function) qft_rotations(circuit, n) # Let's see how it looks: qc = QuantumCircuit(4) qft_rotations(qc,4) qc.draw() # - # That was easy! A process in which a function calls itself, directly or indirectly, is called _recursion_. It can greatly simplify code.
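# To see how much work the recursion does without drawing the circuit, here is a small stand-in, `qft_rotation_gates` (a hypothetical helper, not part of Qiskit), that mirrors the recursion in `qft_rotations` but records the gates instead of applying them. It confirms that an $n$-qubit QFT uses $n$ Hadamards and $n(n-1)/2$ controlled rotations:

```python
from math import pi

def qft_rotation_gates(n, gates=None):
    """List the gates qft_rotations(circuit, n) applies, mirroring its recursion."""
    if gates is None:
        gates = []
    if n == 0:
        return gates
    n -= 1
    gates.append(("h", n))
    for qubit in range(n):
        gates.append(("cp", pi / 2**(n - qubit), qubit, n))
    return qft_rotation_gates(n, gates)

gates = qft_rotation_gates(4)
assert sum(g[0] == "h" for g in gates) == 4   # one Hadamard per qubit
assert sum(g[0] == "cp" for g in gates) == 6  # n(n-1)/2 controlled rotations
```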
We can again see how this scales using the widget below: scalable_circuit(qft_rotations) # Finally, we need to add the swaps at the end of the QFT function to match the definition of the QFT. We will combine this into the final function `qft()`: # + def swap_registers(circuit, n): for qubit in range(n//2): circuit.swap(qubit, n-qubit-1) return circuit def qft(circuit, n): """QFT on the first n qubits in circuit""" qft_rotations(circuit, n) swap_registers(circuit, n) return circuit # Let's see how it looks: qc = QuantumCircuit(4) qft(qc,4) qc.draw() # - # This is the generalised circuit for the quantum Fourier transform. We can again see how this scales using the widget below: scalable_circuit(qft) # We now want to demonstrate that this circuit works correctly. To do this, we must first encode a number in the computational basis. We can see the number 5 in binary is `101`: bin(5) # (The `0b` just reminds us this is a binary number). Let's encode this into our qubits: # + # Create the circuit qc = QuantumCircuit(3) # Encode the state 5 qc.x(0) qc.x(2) qc.draw() # - # And let's check the qubits' states using the Aer simulator: sim = Aer.get_backend("aer_simulator") qc_init = qc.copy() qc_init.save_statevector() statevector = sim.run(qc_init).result().get_statevector() plot_bloch_multivector(statevector) # Finally, let's use our QFT function and view the final state of our qubits: qft(qc,3) qc.draw() qc.save_statevector() statevector = sim.run(qc).result().get_statevector() plot_bloch_multivector(statevector) # We can see our QFT function has worked correctly. Compared to the state $|\widetilde{0}\rangle = |{+}{+}{+}\rangle$, qubit 0 has been rotated by $\tfrac{5}{8}$ of a full turn, qubit 1 by $\tfrac{10}{8}$ full turns (equivalent to $\tfrac{1}{4}$ of a full turn), and qubit 2 by $\tfrac{20}{8}$ full turns (equivalent to $\tfrac{1}{2}$ of a full turn).
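# The rotation fractions quoted above follow from the product form in section 4: qubit $m$ accumulates $2^m x/2^n$ full turns. A few lines of arithmetic confirm this (a sketch under the same conventions):

```python
x, n = 5, 3

# Fraction of a full turn accumulated by each qubit after QFT|x>
turns = [(2**m * x / 2**n) % 1 for m in range(n)]
print(turns)  # [0.625, 0.25, 0.5], i.e. 5/8, 10/8 mod 1, 20/8 mod 1
```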
# ### 8.3 Running QFT on a Real Quantum Device<a id="implementationdev"></a> # If we tried running the circuit at the end of section 8.2 on a real device, the results would be completely random, since all qubits are in equal superposition of $|0\rangle$ and $|1\rangle$. If we want to demonstrate and investigate the QFT working on real hardware, we can instead create the state $|\widetilde{5}\rangle$ seen at the end of section 8.2, run the QFT in reverse, and verify the output is the state $|5\rangle$ as expected. # # Firstly, let’s use Qiskit to easily reverse our QFT operation: def inverse_qft(circuit, n): """Does the inverse QFT on the first n qubits in circuit""" # First we create a QFT circuit of the correct size: qft_circ = qft(QuantumCircuit(n), n) # Then we take the inverse of this circuit invqft_circ = qft_circ.inverse() # And add it to the first n qubits in our existing circuit circuit.append(invqft_circ, circuit.qubits[:n]) return circuit.decompose() # .decompose() allows us to see the individual gates # Now let's put our qubits in the state $|\widetilde{5}\rangle$: # + nqubits = 3 number = 5 qc = QuantumCircuit(nqubits) for qubit in range(nqubits): qc.h(qubit) qc.p(number*pi/4,0) qc.p(number*pi/2,1) qc.p(number*pi,2) qc.draw() # - # And we can see this does indeed result in the Fourier state $|\widetilde{5}\rangle$: qc_init = qc.copy() qc_init.save_statevector() sim = Aer.get_backend("aer_simulator") statevector = sim.run(qc_init).result().get_statevector() plot_bloch_multivector(statevector) # Finally, let's apply our inverse QFT: qc = inverse_qft(qc, nqubits) qc.measure_all() qc.draw() # + tags=["uses-hardware"] # Load our saved IBMQ accounts and get the least busy backend device with less than or equal to nqubits IBMQ.load_account() provider = IBMQ.get_provider(hub='ibm-q') backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= nqubits and not x.configuration().simulator and x.status().operational==True)) print("least 
busy backend: ", backend) # + tags=["uses-hardware"] shots = 2048 transpiled_qc = transpile(qc, backend, optimization_level=3) job = backend.run(transpiled_qc, shots=shots) job_monitor(job) # + tags=["uses-hardware"] counts = job.result().get_counts() plot_histogram(counts) # - # We (hopefully) see that the highest probability outcome is $101$. # ## 9. Problems<a id="problems"></a> # 1. The [above implementation](#implementationdev) of QFT was tested by preparing the Fourier state $|\widetilde{5}\rangle$ for which $\text{QFT}^{\dagger}|\widetilde{5}\rangle = |101\rangle$. Try to find the state $|a\rangle$ such that $\text{QFT}^{\dagger}|a\rangle = |100\rangle$. # 2. Find the state $|b\rangle$ such that $\text{QFT}^{\dagger}|b\rangle = |011\rangle$. # 3. Try to write the QFT function without recursion. Use Qiskit's unitary simulator to verify your results. # ## 10. References<a id="references"></a> # 1. M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information, Cambridge Series on Information and the Natural Sciences (Cambridge University Press, Cambridge, 2000). import qiskit.tools.jupyter # %qiskit_version_table
content/ch-algorithms/quantum-fourier-transform.ipynb