# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## TRANSFORMERLESS POWER SUPPLY
# +
from IPython.display import display_latex
from sympy import latex  # needed to render sympy expressions below
# Usage: display_full_latex('u_x')
def display_full_latex(idx):
    if isinstance(idx, str):
        eqn = '\\[' + idx + '\\]'
    else:
        eqn = '\\[' + latex(idx) + '\\]'
    display_latex(eqn, raw=True)
    return
# -
eqn = 'X_c = \\frac{ V_{rms}\\sqrt{2} - V_{ref} }{I_{out}}'
display_full_latex(eqn)
# Where <br>
# V<sub>ref</sub> is the required input voltage at the input node of the rectifier, and <br>
# I<sub>out</sub> is the desired output current.
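# A quick numeric check of the formula above, as a minimal sketch. The supply values (230 V / 50 Hz mains, a 30 mA load, 5.6 V at the rectifier input) are assumed for illustration only and are not taken from the text.

```python
import math

# Assumed example values, not from the text:
V_rms = 230.0   # mains RMS voltage
V_ref = 5.6     # required voltage at the input node of the rectifier
I_out = 0.03    # desired output current (30 mA)
f = 50.0        # mains frequency

# X_c from the formula above, then the dropper capacitor that provides
# that reactance at mains frequency: C = 1 / (2*pi*f*X_c).
X_c = (V_rms * math.sqrt(2) - V_ref) / I_out
C = 1 / (2 * math.pi * f * X_c)
print(f"X_c = {X_c:.0f} ohm -> C = {C * 1e6:.2f} uF")
```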
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# # *MoMA's* artists and their collaboration network
# ## Data project for "2021W 136010-1 Introduction to DH Tools and Methods".
# ##### <NAME> (01315533) 2022-01-23
#
# # Introduction
# *The Museum of Modern Art* in New York provides two datasets on *github*, one containing the artists that are represented in their collections and the other one the artworks themselves.
# I will use the datasets, particularly the artwork dataset, to conduct a social network analysis of the artists represented in the *MoMA* collections. In doing so, I will assume that there is a social connection between the artists who collaborated in the creation of an artwork.
#
# # Research question and objective
# The goal of this project is to examine the dataset and create a network analysis based on the shared authorship of artworks by artists.
# A graph object will be created and centrality measures (Betweenness Centrality, Closeness Centrality, Degree distribution) will be calculated and displayed in plots (histograms and graphs). The size of the nodes in the respective diagrams should depend on the displayed centrality measures. Based on one of the centrality measures, a subgraph containing only the 100 most important nodes should be computed. A nice representation of this subgraph is the goal of the network investigation, where also some of the node attributes like gender or origin should be reflected.
#
# # Source
# *MoMA* notes in the README file on Github that some of the data is not complete and that other information is not "approved by the curator." They also make it clear that use of the data is at the user's risk.
# MoMA planned to update the records on Github monthly, but the last update was done in January 2021. This is also the version (1.62) that I downloaded to use for this project.
#
# ## Reference:
# - Data collected by: MoMA – Museum of Modern Art.
# - Dataset: 15,222 records. Encoding: UTF-8.
# - Data format: .csv and JSON.
# - Data distribution via [Github](https://github.com/MuseumofModernArt/collection) by the users <NAME> and momadm.
# - Licensing: CC0 License.
# - Digital object identifier DOI: 10.5281/zenodo.4408594
# - [Persistent URL](https://zenodo.org/record/4408594#.YcGKvC1h1pR)
# - Version: v1.62, release date: 2021/01/01
# - Day and time of download: 2021/12/21 12:43 PM
#
# ## Short citation:
# MoMA – Museum of Modern Art (2021/01/01), Artists (Data file in CSV Format). DOI: 10.5281/zenodo.4408594. Retrieved from https://github.com/MuseumofModernArt/collection
#
# MoMA – Museum of Modern Art (2021/01/01), Artwork (Data file in CSV Format). DOI: 10.5281/zenodo.4408594. Retrieved from https://github.com/MuseumofModernArt/collection
#
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# # Importing the dataset and getting a first overview
# Setting up the working environment:
# -
#Setting up the working environment
import pandas as pd
# Importing the files and displaying the first overview:
#Import the file downloaded from github
artworksAll = pd.read_csv("/Users/linsernora/Pyhton_Course/Python_CourseNorLins/MoMAartworks/Artworks.csv")
artworksAll.head(15)
# My first attempt to find out if there are objects listed with more than one artist in the Artist column was not very helpful.
# I thought that by just printing a couple of rows I might get an impression of the data.
#investing the artist column
print(artworksAll["Artist"].head(30))
# Getting all the column labels.
#listing all the column headers of the dataset:
list(artworksAll.columns.values)
# ## Finding multiple artist occurrences
# I tried to find out if there are rows with more than one artist by using the .isin() method.
# At first I thought that the given result would suggest that there are no commas in the column.
# To use this approach I would have needed to take further steps with this result and to find out in which rows the result is "True".
#checking if there are rows with multiple artists in the Artist column
artists = artworksAll["Artist"]
multiple = artists.isin([","])
multiple.head()
# The next approach was more useful. Subsetting the rows that contain a "," in the "Artist" column and assigning the result to a new DataFrame *multipleArtists*.
# The .info() method shows that there are 8059 entries in the dataset where there is more than one artist mentioned in the "Artist" column.
#Subsetting the rows with more than one artist.
multipleArtists = artworksAll[artworksAll["Artist"].str.contains(",")==True]
multipleArtists.info()
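# As a side note, the comparison `== True` above works because `str.contains()` returns NaN for missing entries; passing `na=False` gives the same mask. A minimal sketch on a toy Series (the names are invented):

```python
import pandas as pd

# Toy stand-in for the "Artist" column, including a missing value.
artists_toy = pd.Series(["Artist One", "Artist Two, Artist Three", None])

# str.contains() returns NaN for missing entries; comparing the result
# to True (as above) or passing na=False both exclude those rows.
mask = artists_toy.str.contains(",", na=False)
print(mask.tolist())  # [False, True, False]
```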
# Before going any further I wanted to check if the dataset is restricted to show one object per row. As the result shows, there are no duplicate "ObjectID"s in the dataset.
#checking if there are objectsIds mentioned more than once:
objectIds = artworksAll["ObjectID"]
artworksAll[objectIds.isin(objectIds[objectIds.duplicated()])]
# + [markdown] tags=[]
# ## Cleaning the data
# Reducing the columns that are not needed for the task.
# -
#reducing the dataframe, only keeping the columns that are needed:
multipleArtists = multipleArtists[["Title", "Artist", "ConstituentID", "ObjectID"]]
#checking if it worked:
multipleArtists.head()
multipleArtists.info()
# ### Splitting up the values
# Next step is to split up the values from the columns "Artist" and "ConstituentID" into separate rows,
# so that each artist occurrence is mentioned in its own row. The data in the columns "Title" and "ObjectID" needs to be copied to the new observations.
#
# With a little help of *stackoverflow* I created the following approach of stacking and unstacking that provides me with the desired output. I created a new dataframe with the result called *singleArtist*.
#Stacking and unstacking the data to move every artist occurrence from the "Artist" column into separate lines,
#without losing the data that needs to be copied.
singleArtist = (multipleArtists.set_index(['Title', 'ObjectID'])
                .stack()
                .str.split(',', expand=True)
                .stack()
                .unstack(-2)
                .reset_index(-1, drop=True)
                .reset_index()
                )
#calling singleArtist to see if it worked:
singleArtist
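# To convince myself what the stack/unstack chain actually does, a toy run on a made-up two-artist object (title, names and IDs are invented):

```python
import pandas as pd

# Minimal toy frame with the same columns as multipleArtists.
toy = pd.DataFrame({
    "Title": ["Duo piece"],
    "ObjectID": [1],
    "Artist": ["A. One, B. Two"],
    "ConstituentID": ["10, 20"],
})

exploded = (toy.set_index(["Title", "ObjectID"])
               .stack()                       # long format: one row per column value
               .str.split(",", expand=True)   # split the comma lists into columns
               .stack()                       # one row per list element
               .unstack(-2)                   # Artist/ConstituentID back to columns
               .reset_index(-1, drop=True)    # drop the element-position level
               .reset_index())
print(exploded)
```

Each artist ends up in its own row, with "Title" and "ObjectID" copied along (the split values keep a leading space, which is harmless for the IDs used here).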
# Investigating if the first rows do not hold any data in the Title column or if something went wrong there.
#checking if the object does not have a title or if I made a mistake:
multipleArtists[multipleArtists["ObjectID"] == 136435]
# The artwork does not hold any data as Title.
# Making sure the stacking worked and the constituent IDs were split up according to the artist names.
#checking if I split up the Constitutent IDs correctly
singleArtist[singleArtist["ObjectID"] == 81]
singleArtist[singleArtist["Artist"] == "<NAME>"]
# The multiple occurrences in the "Artist" column are split up into separate lines, the metadata was copied into the lines as wanted.
# Before adding the additional rows for each artist, the dataframe contained 8059 elements. Now, after splitting up the multiple occurrences, the dataframe holds 24900 elements.
# This might be too big to comfortably work with in my environment, but I will try it out.
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# ## Creating a Graph object
# To investigate the network and to be able to draw graphs
# -
# Creating a nodelist with unique values
#creating a unique list of all the possible nodes, I might need that later.
objectnodes = singleArtist["ObjectID"].unique()
objectnodes
#how many objects are we dealing with in the dataset singleArtist?
print("Count of objectnodes in the dataframe singleArtist:", len(objectnodes))
#creating a unique list of nodes for the ArtistID's
artistnodes = singleArtist["ConstituentID"].unique()
#how many unique artists are in the singleArtist dataframe?
print("Count of artistnodes in the dataframe singleArtist:", len(artistnodes))
# Importing networkx and matplotlib packages for further investigation and visualization of the social network.
import networkx as nx
from networkx import Graph as NXGraph
from networkx.drawing.nx_agraph import graphviz_layout
import matplotlib.pyplot as plt
import matplotlib.colors as mcolors
# Creating a undirected graph object and drawing it
#creating an empty graph
G = nx.Graph()
#creating an edgelist containing only the ObjectIDs and the ConstituentIDs, might be useful later.
edgelist = multipleArtists[["ObjectID", "ConstituentID"]]
#adding the edges:
G= nx.from_pandas_edgelist(edgelist, source="ObjectID", target="ConstituentID")
# +
#don't run again if not needed, takes a long time.
#nx.draw(G)
# -
# Running this takes up to 10 minutes. To comfortably work with the dataset, I need to reduce the size.
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# # Reducing the size of the graph object
# The graph object holding the complete data needs to be reduced, it slows down my computer too much.
# For this project I will limit the dataset based on the acquisition date of the object.
#
# Only the objects that were acquired between the years 1980 and 2000 stay in the dataset. To do this I subset the original dataframe with a slice on the "DateAcquired" column.
# To do so I set the "DateAcquired" column as index and sort the dataframe based on those values.
# -
#Setting the DateAcquired Colum as index and sorting it in ascending order.
artworks_ind = artworksAll.set_index("DateAcquired").sort_index()
artworks_ind.head()
# Next I slice the df based on the timeframe between 1980 and 2000.
#Slicing the DataFrame so we only keep objects with an acquisition date ("DateAcquired") between 1980 and 2000:
artworks_sliced1 = artworks_ind.loc["1980-01-01":"2000-01-01"]
#getting the info on the new dataset
artworks_sliced1.info()
print("The new dataframe contains", len(artworks_sliced1), "elements.")
# This slice holds 21682 elements and is still too big. I shorten the timeframe again to only work with objects that were acquired between the years 1980 and 1990.
#reducing the slice to the timeframe 1980 until 1990
artworks_sliced = artworks_ind.loc["1980-01-01":"1990-01-01"]
artworks_sliced.info()
print("The sliced dataframe contains", len(artworks_sliced), "elements")
# With this dataset of 10187 lines I repeat the steps from above.
# + [markdown] tags=[]
# # Prepping the smaller dataset
# Repeating the steps from above: Subsetting the rows with multiple artists, removing unnecessary columns, restacking the data so every artist is mentioned in its own row, adding columns with the IDs and the prefix "artist" or "object" respectively.
# -
## Subsetting the rows with more than one artist.
multiArtists = artworks_sliced[artworks_sliced["Artist"].str.contains(",")==True]
multiArtists.info()
#only keeping the columns that are needed:
multiArtists = multiArtists[["Title", "Artist", "ConstituentID", "ObjectID"]]
#removing the multiple values and adding them as new rows:
single_Artist = (multiArtists.set_index(['Title', 'ObjectID'])
                 .stack()
                 .str.split(',', expand=True)
                 .stack()
                 .unstack(-2)
                 .reset_index(-1, drop=True)
                 .reset_index()
                 )
single_Artist.info()
single_Artist.head()
print("Number of elements in the dataframe single_Artist:", len(single_Artist))
# This leaves us with 1559 entries, a manageable size for this social network project.
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# ## Adding prefixes to the Ids to distinguish them:
# Before I go any further I am adding new columns to the dataset where I store a combination of the ObjectID and the ConstituentID together with the prefixes "object" and "artist". The numbers alone would not be distinguishable.
# -
#adding a new column where the ObjectId has the prefix "object"
single_Artist["ObjectID_unique"] = "object" + single_Artist["ObjectID"].astype(str)
single_Artist.head()
#adding another new column where the ConstitutentID has the prefix "artist"
single_Artist["ConstituentID_unique"] = "artist" + single_Artist["ConstituentID"].astype(str)
single_Artist.head()
#removing the white spaces:
single_Artist["ConstituentID_unique"] = single_Artist["ConstituentID_unique"].str.replace(' ', '')
single_Artist.head()
#To make sure there are no whitespaces in the "ObjectID_unique" column
single_Artist["ObjectID_unique"] = single_Artist["ObjectID_unique"].str.replace(' ', '')
single_Artist.head()
# With the cleaned up data I count the unique ObjectIDs and unique ConstituentIDs again.
#
#counting the unique artist Ids in the new DataFrame:
artist_Ids = single_Artist["ConstituentID_unique"].unique()
print("There are", len(artist_Ids), "unique artists mentioned in the dataset.")
#counting the unique object Ids in the new DataFrame:
object_Ids = single_Artist["ObjectID_unique"].unique()
print("There are", len(object_Ids), "unique objects mentioned in the dataset.")
# The new smaller DataFrame contains 575 artists and 615 different objects. Let's find out how they are connected to each other.
# + [markdown] tags=[]
# ## Adding metadata to the dataframe.
# I will now add metadata on the artist entities from the second dataset MoMA provides.
# -
#Importing the file.
#Import the file from github
artists_complete = pd.read_csv("/Users/linsernora/Pyhton_Course/Python_CourseNorLins/MoMAartworks/Artists.csv")
artists_complete.head(15)
single_Artist.head()
#just to remember the size of the single_Artist DataFrame:
single_Artist.shape
# +
#adding the metadata based on the ConstituentID (as unique value in both dfs)
#at first try I received the error "You are trying to merge on object and int64 columns. If you wish to proceed you should use pd.concat".
#So I convert the ConstituentID values to type str (since there is also "NaN" in there).
single_Artist["ConstituentID"]=single_Artist["ConstituentID"].astype(str)
artists_complete["ConstituentID"]=artists_complete["ConstituentID"].astype(str)
# +
#merging the two dataframes based on the ConstituentID column
single_Artistextended = pd.merge(single_Artist, artists_complete, on="ConstituentID", how = "left")
single_Artistextended.head()
# -
#checking the shape of the df and making sure only columns were added:
single_Artistextended.shape
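# As an aside, pandas can make this check explicit with the `indicator=True` flag of `pd.merge()`, which adds a `_merge` column showing which rows found a match. A sketch on a tiny made-up pair of frames:

```python
import pandas as pd

# Invented toy frames standing in for single_Artist and artists_complete.
left = pd.DataFrame({"ConstituentID": ["1", "2"], "Artist": ["A", "B"]})
right = pd.DataFrame({"ConstituentID": ["1"], "Gender": ["female"]})

# indicator=True adds a _merge column ("both" / "left_only"), a quick
# sanity check that the left join only added columns, not rows.
merged = pd.merge(left, right, on="ConstituentID", how="left", indicator=True)
print(merged["_merge"].value_counts())
```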
# + [markdown] tags=[]
# # Creating graph object
# With the smaller dataframe a new graph object is created (Gnew).
#
# The graph is set up as undirected, since there is no direction in the relation between artists and objects.
# -
#initiate an empty undirected graph
Gnew = nx.Graph()
type(Gnew)
#adding the edges to the graph. I am setting the unique object Id columns as the source and the unique artist Ids as the target.
Gnew= nx.from_pandas_edgelist(single_Artistextended, source="ObjectID_unique", target="ConstituentID_unique")
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# # Drawing the first graph
# -
#plotting the first graph without any specifications:
nx.draw(Gnew)
plt.show()
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# ## Refining the plots
# By setting some arguments the graph plot can be refined and made more readable.
# -
#plotting the same graph but with some arguments
pos = nx.spring_layout(Gnew, scale=1)
fig = plt.figure(1, figsize=(20,20))
nx.draw(Gnew, with_labels=False, node_color="seagreen", node_size = 15)
# + [markdown] tags=[]
# # Exploring the network
# -
#Getting feeling for the size of the graph
#Count of the nodes:
print("There are", len(Gnew.nodes()), "nodes in the network.")
#Count of the edges:
print("There are", len(Gnew.edges()), "edges in the network.")
# + [markdown] tags=[]
# # Adding node attributes
# To do so we need to get the data from the artists dataset. We will then compute different colours for the gender values in the dataset. This column will then be appended to the graph object where it will be used to plot the node colour accordingly.
# -
# ## Adding gender attribute to artists nodes
# + [markdown] tags=[]
# Getting the gender data from the artists dataset. To have a unique identifier we need to adapt the ConstituentID like we did for the graph object and add "artist" to the values as prefix. We also remove unwanted whitespaces.
# -
#Add the "ConstituentID_unique" column to the artists df, so that we have a unique identifier.
artists_complete["ConstituentID_unique"] = "artist" + artists_complete["ConstituentID"].astype(str)
#remove the whitespaces
artists_complete["ConstituentID_unique"] = artists_complete["ConstituentID_unique"].str.replace(' ', '')
artists_complete.head()
#getting the count of the unique values
len(artists_complete["Gender"].unique())
#getting the unique values
artists_complete["Gender"].unique()
# We need to unify the gender entries to four unique values: "non-binary", "male", "female", "nan".
#
#unify the gender values:
artists_complete["Gender"] = artists_complete["Gender"].str.replace('Male', 'male')
artists_complete["Gender"] = artists_complete["Gender"].str.replace('Female', 'female')
artists_complete["Gender"] = artists_complete["Gender"].str.replace('Non-Binary', 'non-binary')
artists_complete["Gender"] = artists_complete["Gender"].str.replace('Non-binary', 'non-binary')
#checking how many unique values we have now:
artists_complete["Gender"].unique()
#checking if we now have 4 unique values.
len(artists_complete["Gender"].unique())
#Replace NaN values with the value "notspecified"
artists_complete["Gender"] = artists_complete["Gender"].fillna("notspecified")
artists_complete["Gender"].unique()
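# As an aside, the chain of `str.replace()` calls above could also be collapsed by lower-casing the whole column once, since all the variants only differ in capitalisation. A sketch on an invented toy Series:

```python
import pandas as pd

# Toy stand-in for artists_complete["Gender"], values invented.
gender = pd.Series(["Male", "male", "Female", "Non-Binary", None])

# Lower-casing unifies the capitalisation variants in one step;
# fillna supplies the "notspecified" bucket for missing values.
norm = gender.str.lower().fillna("notspecified")
print(sorted(norm.unique()))
```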
# We now use the numpy package to add a new column "GenderColour" to the dataframe artists_complete. The column stores colours according to the 4 gender values. We assign the following colours to the gender values: green = male, blue = female, red = non-binary, gray = unspecified.
# +
#https://www.dataquest.io/blog/tutorial-add-column-pandas-dataframe-based-on-if-else-condition/
#create list of conditions
import numpy as np
conditions = [
    (artists_complete["Gender"] == "male"),
    (artists_complete["Gender"] == "female"),
    (artists_complete["Gender"] == "non-binary"),
    (artists_complete["Gender"] == "notspecified")
    #(artists_complete["Gender"] == "NaN")
]
#create a list of the values we want to assign for each condition (order is important)
values = ["green", "blue", "red", "gray"]
#green = male, blue = female, red = non-binary, gray = unspecified.
#create a new column and use np.select to assign values to it using our lists as arguments
artists_complete["GenderColour"] = np.select(conditions,values)
#inspect the df
artists_complete.head()
# -
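# The same condition-to-colour mapping could also be expressed with `Series.map()`, which avoids keeping two parallel lists in sync. A sketch, using toy values rather than the real dataframe:

```python
import pandas as pd

# Toy stand-in for the unified Gender column, values invented.
gender = pd.Series(["male", "female", "non-binary", "notspecified"])

# One dictionary instead of parallel conditions/values lists;
# unmapped values would become NaN rather than "0".
colour_map = {"male": "green", "female": "blue",
              "non-binary": "red", "notspecified": "gray"}
gender_colour = gender.map(colour_map)
print(gender_colour.tolist())
```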
# Now that we have the GenderColour column in the dataframe we will add this column as attribute to the nodes of our graph object.
# Before we do that we check the size of the graph and the node count.
#GenderColoursunique = artists_complete[artists_complete["GenderColour"]]
artists_complete.info()
#checking if all of the items in the dateframe have a GenderColour assigned.
artists_complete["GenderColour"].unique()
# We need to change the data in the "GenderColour" column from type "object" to type "string".
# +
#did not work:
#artists_complete["GenderColour"] = artists_complete["GenderColour"].astype("string")
#artists_complete.info()
# -
#what if I change the type to "float"?
artists_complete["GenderColour"] = artists_complete["GenderColour"].astype("object")
artists_complete.info()
Gnew.number_of_nodes()
# +
#if we add one value as attribute to all nodes first and then add the gender specific colours, we are not missing any node.
#nx.set_node_attributes(Gnew, values="['black']", name="GenderColour") #was an attempt to add the data with the same brackets as the other colour values, did not work.
nx.set_node_attributes(Gnew, values='black', name= "GenderColour")
# +
#set_node_attributes() takes a dictionary as input. To assign multiple attributes to the nodes at once, we have to create a dictionary of dictionaries.
#the outer dictionary represents the nodes, the inner the keys corresponding to the attributes.
#good thing: nodes that are not in the graph are ignored.
#we start with one attribute (gender):
artists_gender= artists_complete[["ConstituentID_unique", "GenderColour"]]
#setting the index
artists_gender_dic = artists_gender.set_index("ConstituentID_unique").T.to_dict("list") # the argument of to_dict() determines the shape of the output.
#The values are stored as lists
#how can I prevent the brackets?
# -
#adding the "gender"data to the node attributes
nx.set_node_attributes(Gnew, values=artists_gender_dic, name="GenderColour")
#creating a list of the gender attribute data:
#getting the attributes
GenderColour_attributes = nx.get_node_attributes(Gnew, "GenderColour")
#getting the values out of the dictionary
GenderColourValues = GenderColour_attributes.values()
#making it a list and checking if the length is as expected
GenderColourList = list(GenderColourValues)
len(GenderColourList)
# +
#Gnew.nodes(data=True)
# -
len(list(nx.get_node_attributes(Gnew, name = "GenderColour")))
# +
#Defining the function unique:
def unique(listofattributes):
    '''Function unique prints the unique values in a list'''
    # initialize an empty list
    unique_list = []
    # traverse all elements
    for x in listofattributes:
        # check if x exists in unique_list or not
        if x not in unique_list:
            unique_list.append(x)
    # print the list
    for x in unique_list:
        print(x)
#getting the unique values of our attribute list
unique(GenderColourList)
# -
# We got the attributes, but the ones we added from the dictionary are wrapped in additional brackets. This means we can't iterate over the node attributes in order to plot the node colours accordingly. Example: {'object5334': 'black',
# 'artist437': ['gray']}
# I tried removing the brackets from the dictionary but failed. I tried RegEx to remove the brackets but failed. I tried to manipulate the "black" entry by adding the same marks and brackets, but it did not work.
#
# I am missing something very basic here, but can't figure out what it is.
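# For documentation, a sketch of what is likely going on: `to_dict("list")` on the transposed frame wraps every value in a one-element list, which is where the brackets come from. Selecting the single column as a Series before calling plain `to_dict()` yields scalar strings instead (the IDs below are invented):

```python
import pandas as pd

# Invented toy frame with the same shape as artists_gender above.
artists_gender_toy = pd.DataFrame({
    "ConstituentID_unique": ["artist437", "artist10"],
    "GenderColour": ["gray", "green"],
})

# to_dict("list") wraps every value in a list ({'artist437': ['gray'], ...}).
# Taking the column as a Series first gives plain strings.
plain = artists_gender_toy.set_index("ConstituentID_unique")["GenderColour"].to_dict()
print(plain)  # {'artist437': 'gray', 'artist10': 'green'}
```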
# +
#remove the brackets from the node attributes?
#GenderColourValues2 = GenderColourValues.str.replace('[]', '')
# -
# The error is here for documentation.
# +
#nx.draw(Gnew, node_color = [nx.get_node_attributes(Gnew,'GenderColour')[g] for g in Gnew.nodes()])
# -
#changes all attributes to black... don't run again (was used to add black to all nodes, but was replaced; can be deleted at the end.)
#for node in Gnew.nodes:
#if node in nx.get_node_attributes(Gnew, "GenderColour") == "grey":
# nx.set_node_attributes(Gnew, values= "black", name="GenderColour")
#if node in nx.get_node_attributes(Gnew, "GenderColour") != "blue":
# nx.set_node_attributes(Gnew, values= "black", name="GenderColour")
#if node in nx.get_node_attributes(Gnew, "GenderColour") != "red":
# nx.set_node_attributes(Gnew, values= "black", name="GenderColour")
#if node in nx.get_node_attributes(Gnew, "GenderColour") != "green":
# nx.set_node_attributes(Gnew, values= "black", name="GenderColour")
#Gnew.nodes(data=True)
# +
#gender_nodes = nx.get_node_attributes(Gnew, "GenderColour")
# -
#adding a column to the dataframe that holds the colour in which the object nodes should be displayed in the graph
single_Artist["object_colour"] = "lightgray"
single_Artist.head()
list(single_Artist.columns)
# +
#Adding the colour data to the graph object:
#creating a dataframe containing only the relevant data:
objectcol = single_Artist[["ObjectID_unique", "object_colour"]]
#Creating a dictionary and setting the index.
objectcoldic = objectcol.set_index("ObjectID_unique").T.to_dict("list")
#adding the "colour" data as node attribute
nx.set_node_attributes(Gnew, objectcoldic, "GenderColour")
# +
#Gnew.nodes(data=True)
# -
# The graph object now holds a colour attribute based on the gender of the artists. For objects the colour "lightgray" was chosen. Next up is plotting the graph with the node colours rendered respectively.
#getting the attributes
GC = nx.get_node_attributes(Gnew, "GenderColour")
#getting the values out of the dictionary
GSV = GC.values()
#making it a list
GCLIST = list(GSV)
len(GCLIST)
# Somehow we are losing one node. I don't know why. In the next step we check which unique values are stored in the list.
# +
#finding the unique values in the colors list:
def unique(listofattributes):
    # initialize an empty list
    unique_list = []
    # traverse all elements
    for x in listofattributes:
        # check if x exists in unique_list or not
        if x not in unique_list:
            unique_list.append(x)
    # print the list
    for x in unique_list:
        print(x)
unique(GCLIST)
# +
#throws error: ValueError: 'c' argument must be a color, a sequence of colors, or a sequence of numbers, not [['lightgray'], ['gray'], ...
#nx.draw(Gnew, node_color = GCLIST)
# +
#throws an keyError: 'artistnan'
#nx.draw(Gnew, node_color = [nx.get_node_attributes(Gnew,'GenderColour')[g] for g in Gnew.nodes()])
# +
#throws this error: ValueError: 'c' argument must be a color, a sequence of colors, or a sequence of numbers, not {'object5334': ['lightgray'], 'artist437': ['gray'],
#nx.draw(Gnew, node_color = nx.get_node_attributes(Gnew, "GenderColour"))
# +
#Error: float() argument must be a string or a number, not 'dict_values'
#nx.draw(Gnew, node_color = GSV)
# -
df = pd.Series((i[0] for i in GCLIST))
df.unique()
Gnew.number_of_nodes()
# Here I am cheating and appending a new row with one "lightgray" value, so that the number of nodes and attributes add up.
# I don't know where I lost the one node attribute.
# Or maybe I can find the node that has no attribute? There should be no node without one.
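# Such a node can be found directly: `nx.get_node_attributes()` only returns nodes that carry the attribute, so a set difference against all nodes exposes the ones without it. A sketch on a tiny made-up graph:

```python
import networkx as nx

# Tiny invented graph: "c" never receives the attribute.
G = nx.Graph()
G.add_nodes_from(["a", "b", "c"])
nx.set_node_attributes(G, {"a": "green", "b": "blue"}, "GenderColour")

# get_node_attributes only lists nodes that have the attribute,
# so the set difference is exactly the nodes missing it.
missing = set(G.nodes()) - set(nx.get_node_attributes(G, "GenderColour"))
print(missing)  # {'c'}
```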
# +
GCLIST.append("lightgray")
len(GCLIST)
# +
#nx.draw(Gnew, node_color = df)
#error: 'c' argument has 1189 elements, which is inconsistent with 'x' and 'y' with size 1190.
# +
#c = [random.random()] * len(Gnew) # random color...
#nx.draw(Gnew, node_colour = c)
# +
#did not work as a chain of separate ifs: every node whose colour was not "red"
#also hit the final else, so the colour list grew longer than the node list.
#Rewritten as an elif chain (checking "lightgray" before "gray", since "gray"
#is a substring of "lightgray" for the plain string values):
node_colorTest = []
for node in Gnew.nodes(data=True):
    if "lightgray" in node[1]["GenderColour"]:
        node_colorTest.append("lightgray")
    elif "gray" in node[1]["GenderColour"]:
        node_colorTest.append("gray")
    elif "green" in node[1]["GenderColour"]:
        node_colorTest.append("green")
    elif "blue" in node[1]["GenderColour"]:
        node_colorTest.append("blue")
    elif "red" in node[1]["GenderColour"]:
        node_colorTest.append("red")
    else:
        node_colorTest.append("black")
nx.draw(Gnew, node_color=node_colorTest)
# +
#colors3 = nx.get_node_attributes(Gnew, "GenderColour")
#nx.get_node_attributes(Gnew, 'GenderColour').values()
# +
#https://stackoverflow.com/questions/28910766/python-networkx-set-node-color-automatically-based-on-number-of-attribute-opt
import matplotlib.pyplot as plt
from itertools import count
genCol = set(nx.get_node_attributes(Gnew, 'GenderColour'))
# +
mapping = dict(zip(sorted(genCol), count()))
nodes = Gnew.nodes()
colors = [mapping[Gnew.nodes[n]["GenderColour"]] for n in nodes]
colors4 = [nx.get_node_attributes(Gnew, "GenderColour")[g] for g in Gnew.nodes()]
#drawing (the function names are draw_networkx_edges/draw_networkx_nodes,
#and draw_networkx_nodes takes node_color, not node_colour or with_labels)
pos = nx.spring_layout(Gnew)
ec = nx.draw_networkx_edges(Gnew, pos, alpha=0.2)
nc = nx.draw_networkx_nodes(Gnew, pos, nodelist=nodes, node_color=colors4, node_size=100, cmap=plt.cm.jet)
plt.colorbar(nc)
plt.axis("off")
plt.show()
# +
#nx.get_node_attributes creates a dictionary (key = nodelabel, value = gendercolour value)
#example: {'object5334': ['lightgray'], 'artist437': ['gray']}
colAttribute = nx.get_node_attributes(Gnew, "GenderColour")
colAttributesValues = colAttribute.values()
colAttributesList = list(colAttribute.values())
# +
#It does not work because there are nodes without a GenderColour attribute:
colors5 = list()
for color in nx.get_node_attributes(Gnew, "GenderColour").items():
colors5.append(color)
colors6 = list(colAttribute.values())
#colors7 = [nx.get_node_attributes(Gnew,'GenderColour')[g] for g in Gnew.nodes()]
colors8 = [u[1] for u in Gnew.nodes(data="GenderColour")]
#list(colors8)
#nx.draw(Gnew, node_color = colors8)
# +
#finding the unique values in the colors list:
def unique(colors8):
    # initialize an empty list
    unique_list = []
    # traverse all elements
    for x in colors8:
        # check if x exists in unique_list or not
        if x not in unique_list:
            unique_list.append(x)
    # print the list
    for x in unique_list:
        print(x)
unique(colors8)
#There are nodes in there without a value
# -
colors11 = pd.Series(colors8).fillna("['black']").tolist()
unique(colors11)
nx.draw(Gnew, node_color=colors11)
# +
colors9 = [u[1] for u in Gnew.nodes(data="GenderColour")]
not_colors = [c for c in colors9 if c not in ("green", "lightgray", "gray", "blue")]
if not_colors:
    print("TEST FAILED:", not_colors)
#If you have None in any of your nodes' GenderColour attribute, this will print those nodes in black:
# +
#(change *colors* to):
colors10 = []
for u in Gnew.nodes(data="GenderColour"):
    if u[1] in ("green", "lightgray", "gray", "blue"):
        colors10.append(u[1])
    elif u[1] is None:
        colors10.append("lightgray")
    else:
        #do something?
        print("ERROR: Should be green, lightgray, gray, blue")
# -
colour = [nx.get_node_attributes(Gnew,'GenderColour')[g] for g in Gnew.nodes()]
nx.draw(Gnew, node_color = colAttributesList)
#trying to find the nodes where the colour is missing
# +
#plot graph Gnew with the node colour based on the gender.
#plotting Gnew with node_size depending on betweenness centrality
npos = nx.spring_layout(Gnew, scale=1)
fig = plt.figure(1, figsize=(20,20))
gendercolours = [nx.get_node_attributes(Gnew, "GenderColour")[g] for g in Gnew.nodes()]
between_centrality = nx.betweenness_centrality(Gnew)
nx.draw(Gnew, pos=npos,
        node_size=[v * 10000 for v in between_centrality.values()],
#also adding labels
with_labels=True,
node_color=gendercolours,
#node_color = [nx.get_node_attributes(Gnew,'GenderColour')[g] for g in Gnew.nodes()],
edge_color="black")
plt.show()
# + [markdown] tags=[]
# # Investigating the centrality measure of the network:
# + [markdown] tags=[]
# ## Degree centrality
# Finding the network centrality measures.
# Since the network is not directed, we don't need to take in-degree and out-degree into account.
# -
#computing the degree centrality
centrality = nx.degree_centrality(Gnew)
#adding the degree_centrality values as node attributes
nx.set_node_attributes(Gnew, centrality, "DegreeCentrality")
#plotting the degree distribution of the network
plt.title("Histogram of degree centrality distribution")
plt.hist(list(centrality.values()))
plt.show()
#importing matplotlib
import matplotlib.cm as cm
#drawing the graph with node sizes based on the centrality measure
pos = nx.spring_layout(Gnew, scale=1)
fig = plt.figure(1, figsize=(30,30))
nx.draw(Gnew, pos=pos, node_size=[v * 1000 for v in centrality.values()])
plt.show()
# + [markdown] tags=[]
# #### Defining the function *find_nodes_with_highest_deg_cent*
# The function returns the nodes with the highest degree centrality in the graph.
# +
# Defining find_nodes_with_highest_deg_cent()
def find_nodes_with_highest_deg_cent(G):
    '''returns the nodes with the highest degree centrality in the graph G'''
    # Computing the degree centrality of G: deg_cent
    deg_cent = nx.degree_centrality(G)
    # Computing the maximum degree centrality: max_dc
    max_dc = max(list(deg_cent.values()))
    nodes = set()
    # Iterating over the degree centrality dictionary
    for k, v in deg_cent.items():
        # Checking if the current value has the maximum degree centrality
        if v == max_dc:
            # Adding the current node to the set of nodes
            nodes.add(k)
    return nodes
# Find the node(s) that has the highest degree centrality in G: top_dc
top_dc = find_nodes_with_highest_deg_cent(Gnew)
print("Those are the nodes with the highest degree centrality: ", top_dc)
# -
# Assertion statement that checks that the node(s) is/are correctly identified.
for node in top_dc:
assert nx.degree_centrality(Gnew)[node] == max(nx.degree_centrality(Gnew).values())
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# ## Closeness Centrality:
# Assuming that important nodes are close to other nodes.
# *Closeness centrality* is calculated from the sum of the shortest-path lengths from the given node to all other nodes: it is the reciprocal of the node's average distance to everyone else.
# -
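# A minimal sketch on a toy path graph (toy data, assumed for illustration): `nx.closeness_centrality` returns `(n - 1)` divided by the sum of shortest-path distances, so more central nodes score higher.

```python
import networkx as nx

# Toy graph 0 - 1 - 2: closeness of v is (n - 1) / sum(dist(v, u)).
P = nx.path_graph(3)
cc = nx.closeness_centrality(P)
print(cc[1])  # 1.0 -- the middle node is one hop from everyone
print(cc[0])  # 2/3 -- the end node is 1 + 2 hops away in total
```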
#calculating the closeness centrality:
close_centrality = nx.closeness_centrality(Gnew)
#adding the measure as node attribute to the graph object
nx.set_node_attributes(Gnew, close_centrality, "ClosenessCentrality")
#plotting the closeness centrality of the network as a histogram
plt.hist(list(close_centrality.values()))
plt.title("Histogram of the Closeness Centrality Distribution")
plt.show()
#plotting the graph with the node sizes depending on the closeness centrality measure of the nodes.
fig = plt.figure(1, figsize=(30,30))
nx.draw(Gnew, pos = pos, nodelist=close_centrality.keys(), node_color="seagreen", node_size=[v * 10000 for v in close_centrality.values()])
plt.show()
# + [markdown] tags=[]
# ## Betweenness Centrality:
# *Betweenness centrality* is computed under the assumption that important nodes connect other nodes. Nodes with a high betweenness centrality act like bridges in the network.
# -
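# A minimal sketch on a toy path graph (toy data, assumed for illustration) shows the "bridge" idea: the middle node lies on the only shortest path between the two ends, so it receives the maximal normalized score.

```python
import networkx as nx

# Toy graph 0 - 1 - 2: node 1 is the only bridge between 0 and 2.
P = nx.path_graph(3)
bc = nx.betweenness_centrality(P)  # normalized by default
print(bc)  # {0: 0.0, 1: 1.0, 2: 0.0}
```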
#calculating the betweenness centrality:
bet_centrality = nx.betweenness_centrality(Gnew)
#adding the betweenness centrality as node attribute to the graph object:
nx.set_node_attributes(Gnew, bet_centrality, "BetweennessCentrality")
#plotting the betweenness centrality distribution of the network
plt.hist(list(bet_centrality.values()))
plt.title("Histogram of the Betweenness Centrality Distribution")
plt.show()
# ## Finding the nodes with the highest betweenness centrality
# + [markdown] tags=[]
# ### Defining the function *find_node_with_highest_bet_cent*
# The function returns the nodes with the highest betweenness centrality in the graph G.
# +
# Define find_node_with_highest_bet_cent()
def find_node_with_highest_bet_cent(G):
    '''returns the nodes with the highest betweenness centrality in the graph G.'''
# Computing betweenness centrality: bet_cent
bet_cent = nx.betweenness_centrality(G)
# Computing maximum betweenness centrality: max_bc
max_bc = max(list(bet_cent.values()))
nodes = set()
# Iterating over the betweenness centrality dictionary
for k, v in bet_cent.items():
# Checking if the current value has the maximum betweenness centrality
if v == max_bc:
# Adding the current node to the set of nodes
nodes.add(k)
return nodes
# Using that function to find the node(s) that has the highest betweenness centrality in the network: top_bc
top_bc = find_node_with_highest_bet_cent(Gnew)
print("These are the node(s) with the highest betweenness centrality: ", top_bc)
# Assertion statement that checks that the node(s) is/are correctly identified.
for node in top_bc:
assert nx.betweenness_centrality(Gnew)[node] == max(nx.betweenness_centrality(Gnew).values())
# + [markdown] tags=[]
# ## Defining the function *summary*
# Defining the function *summary* that provides us with a dataframe of all the nodes and their betweenness centrality measure in descending order.
# -
# defining the function summary
def summary(G):
    '''
    Returns a dataframe of the nodes and their betweenness centrality in descending order
    '''
    #computing the betweenness centrality of the graph G
    bet_cent = nx.betweenness_centrality(G)
    #use from_dict() to create a dataframe with the keys and values of bet_cent
    df = pd.DataFrame.from_dict({
        'node': list(bet_cent.keys()),
        'between_centrality': list(bet_cent.values())
    })
    #sort the values by centrality in descending order:
    return df.sort_values('between_centrality', ascending=False)
#calling the function on our graph object
top_bet_cent = summary(Gnew)
top_bet_cent.head()
# ## Subsetting the graph based on betweenness centrality
# We will use the betweenness centrality to subset our complete graph object into a smaller one. The top 100 nodes in terms of betweenness centrality will be in the subset.
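# The same top-k idea can be sketched without a dataframe (toy scores, assumed for illustration): `heapq.nlargest` picks the k best-scoring nodes straight from the centrality dictionary, and `subgraph()` keeps only those nodes.

```python
import heapq
import networkx as nx

# Stand-in centrality scores; in the notebook these come from bet_centrality.
scores = {"a": 0.9, "b": 0.1, "c": 0.5, "d": 0.7}
top_nodes = heapq.nlargest(2, scores, key=scores.get)  # here k=2; the notebook uses 100

G = nx.path_graph(["a", "b", "c", "d"])
T = G.subgraph(top_nodes)
print(sorted(T.nodes()))  # ['a', 'd']
```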
#subsetting the dataframe, keeping only the top 100 nodes with the highest betweenness centrality.
topbet = top_bet_cent.iloc[0:100]
topbet.head()
# + [markdown] tags=[]
# We create a new graph object *T* with the subgraph() function. We will draw a first plot of this subgraph to get a sense for the structure.
# -
#Creating a new graph object as a subgraph of Gnew with only the top 100 nodes:
topbet_nodes = topbet["node"]
T = Gnew.subgraph(topbet_nodes)
#Plotting the new graph object:
npos = nx.spring_layout(T, scale=1)
fig = plt.figure(1, figsize=(30,30))
nx.draw(T, pos=npos, node_color="seagreen", node_size=500, edge_color="black")
plt.show()
# ### Refining the plot of T
# We plot the graph object T with node sizes depending on the betweenness centrality measure. We also add the node labels.
#calculating the betweenness centrality of the graph object T
between_centrality_T = nx.betweenness_centrality(T)
#plotting T with node_size depending on betweenness centrality
npos = nx.spring_layout(T, scale=1)
fig = plt.figure(1, figsize=(20,20))
nx.draw(T, pos=npos, node_color="seagreen",
node_size=[v * 10000 for v in between_centrality_T.values()],
#also adding labels
with_labels=True,
edge_color="blue")
plt.show()
# # Try on 4.2.22:
# Adding metadata to the nodes. Can we subset our graph object with the dataframe of artists?
#I would like to change the node shape based on the beginning of the node label. Artist... should be round, object... should be rectangular.
#getting all the nodes
T.nodes(data=True)
npos = nx.spring_layout(T, scale=1)
fig = plt.figure(1, figsize=(20,20))
nx.draw(T, pos=npos, node_color="seagreen",
node_size=[v * 10000 for v in between_centrality_T.values()],
#also adding labels
with_labels=True,
edge_color="blue")
plt.show()
# + [markdown] tags=[]
# ## Neighbors of nodes
# Getting the degree of every node in the graph. Plotting a histogram of the degree distribution.
# -
#computing the degree of every node: degrees
degrees = [len(list(Gnew.neighbors(n))) for n in Gnew.nodes()]
print(degrees)
#Plot a histogram of the degree distribution of the graph
plt.figure()
plt.hist(degrees)
plt.title("Histogram of the degree distribution of the graph")
plt.show()
#Plot a scatter plot of the centrality distribution and the degree distribution
plt.figure()
plt.scatter(degrees, list(centrality.values()))
plt.title("Scatter Plot of the centrality and the degree distribution")
plt.show()
# #### Defining the function *nodes_neighbors*
# The function *nodes_neighbors* returns the set of distinct neighbor counts (degrees) occurring among the nodes.
#defining the function nodes_neighbors
def nodes_neighbors(G):
    """
    Returns the set of distinct neighbor counts (degrees) occurring in G.
    """
    counts = set()
    #iterate over all nodes in G
    for n in G.nodes():
        #counting the neighbors of n and adding the count to the set:
        counts.add(len(list(G.neighbors(n))))
    #return the set of distinct neighbor counts
    return counts
neighbors = nodes_neighbors(Gnew)
print(neighbors)
# #### Defining the function *nodes_with_m_nbrs*
# The function returns all nodes in a graph that have a specific count of (m) neighbors.
# Define nodes_with_m_nbrs()
def nodes_with_m_nbrs(G, m):
"""
Returns all nodes in graph G that have m neighbors.
"""
nodes = set()
# Iterate over all nodes in G
for n in G.nodes():
# Check if the number of neighbors of n matches m
if len(list(G.neighbors(n))) == m:
# Add the node n to the set
nodes.add(n)
# Return the nodes with m neighbors
return nodes
# Compute and print all nodes in Gnew that have 6 neighbors
six_nbrs = nodes_with_m_nbrs(Gnew, 6)
print(six_nbrs)
# Compute and print all nodes in Gnew that have 72 neighbors
seventytwo_nbrs = nodes_with_m_nbrs(Gnew, 72)
print(seventytwo_nbrs)
#checking if there really is no node with 14 neighbors
#(as the result of the nodes_neighbors function on Gnew suggests):
fourteen_nbrs = nodes_with_m_nbrs(Gnew, 14)
print(fourteen_nbrs)
# + [markdown] tags=[]
# ## Investigate Triangles in the graph
# Finding nodes that are involved in triangles. nx.triangles() returns a dictionary where the nodes are the keys and the values are the numbers of triangles each node is part of.
# -
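# A minimal sketch on toy graphs (assumed for illustration): in a complete triangle every node belongs to one triangle, while a path graph — like the collaboration network here — has no closed triads at all.

```python
import networkx as nx

# K3 has one closed triad; a path graph has none, so every count is 0.
K3 = nx.complete_graph(3)
P = nx.path_graph(3)
print(nx.triangles(K3))  # {0: 1, 1: 1, 2: 1}
print(nx.triangles(P))   # {0: 0, 1: 0, 2: 0}
```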
#creating a dictionary with the triangle counts of the nodes.
triangles = nx.triangles(Gnew)
triangles.get(1)
#returns None: .get(1) looks up the node with key 1 (which does not exist here), not nodes with one triangle.
triangles_values = list(triangles.values())
print(sorted(triangles_values))
#a more complicated way to check whether there are any triangles
#(note: the filter must be i > 0, otherwise nodes in exactly one triangle would be missed):
triangles_values = list(triangles.values())
triangles2 = [i for i in triangles_values if i > 0]
print(triangles2)
# There are no triangles in the graph object Gnew. Therefore, the following function is not useful in this case, since there are no nodes in triangle relationships.
# Code was:
# from itertools import combinations
# #Function that identifies all nodes in a triangle relationship.
# def nodes_in_triangle(G, n):
# """
# Returns the nodes in a graph G that are involved in a triangle relationship.
# """
# triangle_nodes = set([n])
# # Iterating over all possible triangle relationship combinations
# for n1, n2 in combinations(G.neighbors(n), 2):
# # Checking if n1 and n2 have an edge between them
# if G.has_edge(n1, n2) == True:
# # Adding n1 to triangle_nodes
# triangle_nodes.add(n1)
# # Adding n2 to triangle_nodes
# triangle_nodes.add(n2)
# return triangle_nodes
#
# nodes_in_triangle(Gnew, 1)
# #shouldn't the result show at least two nodes?
# + [markdown] tags=[]
# ## Cliques
# -
#finding all cliques. Not very informative in this case, as the cliques only hold pairs of two nodes.
cliques = list(nx.find_cliques(Gnew))
print(cliques[:5])
# ##### Finding cliques.
# Cliques are "groups of nodes that are fully connected to one another", while a maximal clique is a clique that cannot be extended by adding another node from the graph.
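# A minimal sketch on a toy graph (assumed for illustration) of what `nx.find_cliques` returns — the *maximal* cliques: a triangle 0-1-2 with a pendant edge 2-3 yields the triangle itself plus the edge as a two-node clique.

```python
import networkx as nx

# Triangle 0-1-2 plus pendant edge 2-3.
G = nx.Graph([(0, 1), (1, 2), (0, 2), (2, 3)])
maximal = sorted(sorted(c) for c in nx.find_cliques(G))
print(maximal)  # [[0, 1, 2], [2, 3]]
```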
#finding the cliques (find_cliques returns a generator, so materialize it into a list first;
#calling len(list(...)) on an already exhausted generator is what yields a count of 0):
cliques = list(nx.find_cliques(Gnew))
print("There are", len(cliques), "cliques in the graph")
largest_clique = sorted(nx.find_cliques(Gnew), key=lambda x:len(x))[-1]
print("The largest clique consists of the nodes: ", largest_clique)
# When the largest clique consists of only two nodes, there are no real cliques.
| Ass4_040222.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Signal Processing
#
# [back to main page](../index.ipynb)
#
# [audiolazy](audiolazy.ipynb)
# ## Links
#
# http://scikits.appspot.com/talkbox
#
# http://bmcfee.github.io/librosa/
#
# http://thomas-cokelaer.info/software/spectrum/html/contents.html
#
# http://nipy.org/nitime/index.html
#
# https://github.com/MTG/sms-tools
#
# http://pythonhosted.org/audiolazy/
#
# http://ajaxsoundstudio.com/software/pyo/
#
# Bob (signal-processing and machine learning toolbox)
# http://idiap.github.io/bob/
#
# Speech Signal Processing:
# https://github.com/idiap/ssp
#
# http://siggigue.github.io/pyfilterbank/, https://github.com/SiggiGue/pyfilterbank
#
# https://github.com/fakufaku/AudioTools
#
# Signal Processing for Communications [Jupyter notebooks with examples](https://github.com/LCAV/DSPNumex), [free PDF book](http://www.sp4comm.org/download.html), [MOOC](https://www.coursera.org/course/dsp)
#
# MIT opencourseware video lectures:
# [Signals and Systems (Alan V. Oppenheim)](http://ocw.mit.edu/resources/res-6-007-signals-and-systems-spring-2011/),
# [Digital Signal Processing (Alan V. Oppenheim)](http://ocw.mit.edu/resources/res-6-008-digital-signal-processing-spring-2011/)
#
# Soundpipe (C library): http://paulbatchelor.github.io/proj/soundpipe.html, https://github.com/PaulBatchelor/Soundpipe
#
# libaudioverse, A (C++) library for 3D and environmental audio:
# http://camlorn.github.io/libaudioverse/docs/branches/master/libaudioverse_manual.html
# https://github.com/camlorn/libaudioverse
# Python bindings: http://camlorn.github.io/libaudioverse/docs/branches/master/python/
#
# https://github.com/ronnyandersson/zignal
#
# https://github.com/crabl/HeadSpace
#
# http://willdrevo.com/fingerprinting-and-audio-recognition-with-python/
#
# https://github.com/Parisson/TimeSide/
# http://files.parisson.com/timeside/doc/index.html
# https://github.com/thomasfillon/Timeside-demos
#
# https://github.com/pierre-rouanet/dtw
# https://github.com/pierre-rouanet/aupyom
#
# https://github.com/micknoise/Maximilian (C++)
#
# https://github.com/endolith/waveform_analysis
# https://gist.github.com/endolith/148112
# http://mathworks.com/matlabcentral/fileexchange/69-octave
#
# https://github.com/astorfi/speechpy
#
# signal (C++11): https://github.com/ideoforms/signal, http://libsignal.io/
#
# https://github.com/everdrone/libsnd (C++)
#
# https://github.com/LancePutnam/Gamma (C++)
#
# https://madmom.readthedocs.io/, https://github.com/CPJKU/madmom (Python audio and music signal processing library)
#
# Automatic headphone equalization: https://github.com/jaakkopasanen/AutoEq
#
# Introduction to Digital Filters (with Audio Applications): https://ccrma.stanford.edu/~jos/filters/
# Jupyter Notebooks: https://github.com/khiner/notebooks/tree/master/introduction_to_digital_filters
#
# C++ Library for Audio Analysis: http://www.adamstark.co.uk/project/gist/, https://github.com/adamstark/Gist
#
# The Synthesis ToolKit in C++ (STK): https://ccrma.stanford.edu/software/stk/, https://github.com/thestk/stk
#
# Fast, modern C++ DSP framework, FFT, Audio Sample Rate Conversion, FIR/IIR/Biquad Filters (SSE, AVX, ARM NEON): https://github.com/kfrlib/kfr (GPL-3.0)
#
# A Collection of Useful C++ Classes for Digital Signal Processing: https://github.com/vinniefalco/DSPFilters
#
# Phase Vocoder In Python: https://github.com/haoyu987/phasevocoder
#
# Learning DSP Illustrated: https://dspillustrations.com/
#
# https://rust.audio/articles/useful-resources/
#
# PySDR: A Guide to SDR and DSP using Python: https://pysdr.org/
#
# The Scientist and Engineer's Guide to Digital Signal Processing: http://www.dspguide.com/pdfbook.htm
#
# Intuitive Guide to Convolution: https://betterexplained.com/articles/intuitive-convolution/
#
# Q Audio DSP Library (C++): https://cycfi.github.io/q/, https://github.com/cycfi/q
#
# https://github.com/olilarkin/awesome-musicdsp
#
# https://github.com/c4dm/qm-dsp (C++)
# ### Filter Design
#
# http://www.dsprelated.com/showarticle/164.php
# http://www.dsprelated.com/showarticle/170.php
# http://www.dsprelated.com/showarticle/194.php
#
# http://www.dsprelated.com/showcode/270.php
#
# http://scipy-cookbook.readthedocs.org/items/FIRFilter.html
#
# http://scipy-cookbook.readthedocs.org/items/FiltFilt.html
#
# https://github.com/chipmuenk/pyFDA
# ### Fourier Transform
#
# #### Relation of Fourier Transform with Epicycles
#
# https://www.youtube.com/watch?v=QVuU2YCwHjw
#
# https://www.youtube.com/watch?v=qS4H6PEcCCA
#
# https://mathematica.stackexchange.com/questions/118834/how-to-graph-the-contour-of-a-resulting-manipulate-curve
#
# https://mathematica.stackexchange.com/questions/171755/how-can-i-draw-a-homer-with-epicycloids
#
# https://bl.ocks.org/jinroh/7524988
#
# Drawing an elephant with four complex parameters: https://doi.org/10.1119/1.3254017
# ### IIR
#
# http://www.earlevel.com/main/2012/11/26/biquad-c-source-code/
#
# http://www.earlevel.com/main/2013/10/13/biquad-calculator-v2/
#
# IIR filters can be evaluated in parallel:
# https://raphlinus.github.io/audio/2019/02/14/parallel-iir.html
#
# https://github.com/berndporr/iir1 (C++)
# ### Fixed Point
#
# http://www.dsprelated.com/showarticle/580.php
# ### MP3 Encoder
#
# http://nbviewer.ipython.org/github/LCAV/MP3Lab/blob/master/mp3python.ipynb
# ### Digital Media
#
# http://xiph.org/video/
# ### Time-Frequency Transforms
#
# https://en.wikipedia.org/wiki/Constant_Q_transform
#
# https://grrrr.org/research/software/nsgt/
# https://github.com/grrrr/nsgt
# ### Voice Activity Detection (VAD)
#
# https://github.com/wiseman/py-webrtcvad
# ### Music Information Retrieval
#
# http://musicinformationretrieval.com/
# https://github.com/stevetjoa/stanford-mir
# ### Synthesizer
#
# https://github.com/irmen/synthesizer
#
# https://github.com/VCVRack/Rack
# <p xmlns:dct="http://purl.org/dc/terms/">
# <a rel="license"
# href="http://creativecommons.org/publicdomain/zero/1.0/">
# <img src="http://i.creativecommons.org/p/zero/1.0/88x31.png" style="border-style: none;" alt="CC0" />
# </a>
# <br />
# To the extent possible under law,
# <span rel="dct:publisher" resource="[_:publisher]">the person who associated CC0</span>
# with this work has waived all copyright and related or neighboring
# rights to this work.
# </p>
| signal-processing/index.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/jigardp/tensorflow/blob/master/Copy_of_Celsius_to_Fahrenheit.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] colab_type="text" id="HnKx50tv5aZD"
# ##### Copyright 2018 The TensorFlow Authors.
# + cellView="form" colab_type="code" id="IwtS_OXU5cWG" colab={}
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] colab_type="text" id="YHI3vyhv5p85"
# # The Basics: Training Your First Model
# + [markdown] colab_type="text" id="_wJ2E7jV5tN5"
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/tensorflow/examples/courses/udacity_intro_to_tensorflow_for_deep_learning/l02c01_celsius_to_fahrenheit.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l02c01_celsius_to_fahrenheit.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
# </td>
# </table>
# + [markdown] colab_type="text" id="F8YVA_634OFk"
# Welcome to this Colab where you will train your first Machine Learning model!
#
# We'll try to keep things simple here, and only introduce basic concepts. Later Colabs will cover more advanced problems.
#
# The problem we will solve is to convert from Celsius to Fahrenheit, where the approximate formula is:
#
# $$ f = c \times 1.8 + 32 $$
#
#
# Of course, it would be simple enough to create a conventional Python function that directly performs this calculation, but that wouldn't be machine learning.
#
#
# Instead, we will give TensorFlow some sample Celsius values (0, 8, 15, 22, 38) and their corresponding Fahrenheit values (32, 46, 59, 72, 100).
# Then, we will train a model that figures out the above formula through the training process.
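# The "conventional Python function" mentioned above can be written down directly — not machine learning, just a handy ground truth for sanity-checking the trained model's predictions later:

```python
# Exact conversion formula the model will have to discover on its own.
def celsius_to_fahrenheit(c):
    return c * 1.8 + 32

for c in [0, 8, 15, 22, 38]:
    print(c, "->", round(celsius_to_fahrenheit(c), 1))
```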
# + [markdown] colab_type="text" id="fA93WUy1zzWf"
# ## Import dependencies
#
# First, import TensorFlow. Here, we're calling it `tf` for ease of use. We also tell it to only display errors.
#
# Next, import [NumPy](http://www.numpy.org/) as `np`. NumPy helps us to represent our data as highly performant arrays.
# + colab_type="code" id="X9uIpOS2zx7k" colab={}
from __future__ import absolute_import, division, print_function
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
import numpy as np
# + [markdown] colab_type="text" id="AC3EQFi20buB"
# ## Set up training data
#
# As we saw before, supervised Machine Learning is all about figuring out an algorithm given a set of inputs and outputs. Since the task in this Codelab is to create a model that can give the temperature in Fahrenheit when given the degrees in Celsius, we create two lists `celsius_q` and `fahrenheit_a` that we can use to train our model.
# + colab_type="code" id="gg4pn6aI1vms" colab={}
celsius_q = np.array([-40, -10, 0, 8, 15, 22, 38], dtype=float)
fahrenheit_a = np.array([-40, 14, 32, 46, 59, 72, 100], dtype=float)
for i,c in enumerate(celsius_q):
print("{} degrees Celsius = {} degrees Fahrenheit".format(c, fahrenheit_a[i]))
# + [markdown] colab_type="text" id="wwJGmDrQ0EoB"
# ### Some Machine Learning terminology
#
# - **Feature** — The input(s) to our model. In this case, a single value — the degrees in Celsius.
#
# - **Labels** — The output our model predicts. In this case, a single value — the degrees in Fahrenheit.
#
# - **Example** — A pair of inputs/outputs used during training. In our case a pair of values from `celsius_q` and `fahrenheit_a` at a specific index, such as `(22,72)`.
#
#
# + [markdown] colab_type="text" id="VM7_9Klvq7MO"
# ## Create the model
#
# Next, create the model. We will use the simplest possible model we can, a Dense network. Since the problem is straightforward, this network will require only a single layer, with a single neuron.
#
# ### Build a layer
#
# We'll call the layer `l0` and create it by instantiating `tf.keras.layers.Dense` with the following configuration:
#
# * `input_shape=[1]` — This specifies that the input to this layer is a single value. That is, the shape is a one-dimensional array with one member. Since this is the first (and only) layer, that input shape is the input shape of the entire model. The single value is a floating point number, representing degrees Celsius.
#
# * `units=1` — This specifies the number of neurons in the layer. The number of neurons defines how many internal variables the layer has to try to learn how to solve the problem (more later). Since this is the final layer, it is also the size of the model's output — a single float value representing degrees Fahrenheit. (In a multi-layered network, the size and shape of the layer would need to match the `input_shape` of the next layer.)
#
# + colab_type="code" id="pRllo2HLfXiu" colab={}
l0 = tf.keras.layers.Dense(units=1, input_shape=[1])
# + [markdown] colab_type="text" id="_F00_J9duLBD"
# ### Assemble layers into the model
#
# Once layers are defined, they need to be assembled into a model. The Sequential model definition takes a list of layers as argument, specifying the calculation order from the input to the output.
#
# This model has just a single layer, l0.
# + colab_type="code" id="cSp-GpLSuMRq" colab={}
model = tf.keras.Sequential([l0])
# + [markdown] colab_type="text" id="t7pfHfWxust0"
# **Note**
#
# You will often see the layers defined inside the model definition, rather than beforehand:
#
# ```python
# model = tf.keras.Sequential([
# tf.keras.layers.Dense(units=1, input_shape=[1])
# ])
# ```
# + [markdown] colab_type="text" id="kiZG7uhm8qCF"
# ## Compile the model, with loss and optimizer functions
#
# Before training, the model has to be compiled. When compiled for training, the model is given:
#
# - **Loss function** — A way of measuring how far off predictions are from the desired outcome. (The measured difference is called the "loss".)
#
# - **Optimizer function** — A way of adjusting internal values in order to reduce the loss.
#
# + colab_type="code" id="m8YQN1H41L-Y" colab={}
model.compile(loss='mean_squared_error',
optimizer=tf.keras.optimizers.Adam(0.1))
# + [markdown] colab_type="text" id="17M3Pqv4P52R"
# These are used during training (`model.fit()`, below) to first calculate the loss at each point, and then improve it. In fact, the act of calculating the current loss of a model and then improving it is precisely what training is.
#
# During training, the optimizer function is used to calculate adjustments to the model's internal variables. The goal is to adjust the internal variables until the model (which is really a math function) mirrors the actual equation for converting Celsius to Fahrenheit.
#
# TensorFlow uses numerical analysis to perform this tuning, and all this complexity is hidden from you so we will not go into the details here. What is useful to know about these parameters are:
#
# The loss function ([mean squared error](https://en.wikipedia.org/wiki/Mean_squared_error)) and the optimizer ([Adam](https://machinelearningmastery.com/adam-optimization-algorithm-for-deep-learning/)) used here are standard for simple models like this one, but many others are available. It is not important to know how these specific functions work at this point.
#
# One part of the Optimizer you may need to think about when building your own models is the learning rate (`0.1` in the code above). This is the step size taken when adjusting values in the model. If the value is too small, it will take too many iterations to train the model. Too large, and accuracy goes down. Finding a good value often involves some trial and error, but the range is usually between 0.001 (the default) and 0.1.
# + [markdown] colab_type="text" id="c-Jk4dG91dvD"
# ## Train the model
#
# Train the model by calling the `fit` method.
#
# During training, the model takes in Celsius values, performs a calculation using the current internal variables (called "weights") and outputs values which are meant to be the Fahrenheit equivalent. Since the weights are initially set randomly, the output will not be close to the correct value. The difference between the actual output and the desired output is calculated using the loss function, and the optimizer function directs how the weights should be adjusted.
#
# This cycle of calculate, compare, adjust is controlled by the `fit` method. The first argument is the inputs, the second argument is the desired outputs. The `epochs` argument specifies how many times this cycle should be run, and the `verbose` argument controls how much output the method produces.
# + colab_type="code" id="lpRrl7WK10Pq" colab={}
history = model.fit(celsius_q, fahrenheit_a, epochs=500, verbose=False)
print("Finished training the model")
# + [markdown] colab_type="text" id="GFcIU2-SdCrI"
# In later videos, we will go into more details on what actually happens here and how a Dense layer actually works internally.
# + [markdown] colab_type="text" id="0-QsNCLD4MJZ"
# ## Display training statistics
#
# The `fit` method returns a history object. We can use this object to plot how the loss of our model goes down after each training epoch. A high loss means that the Fahrenheit degrees the model predicts are far from the corresponding values in `fahrenheit_a`.
#
# We'll use [Matplotlib](https://matplotlib.org/) to visualize this (you could use another tool). As you can see, our model improves very quickly at first, and then has a steady, slow improvement until it is very near "perfect" towards the end.
#
#
# + colab_type="code" id="IeK6BzfbdO6_" colab={}
import matplotlib.pyplot as plt
plt.xlabel('Epoch Number')
plt.ylabel("Loss Magnitude")
plt.plot(history.history['loss'])
# + [markdown] colab_type="text" id="LtQGDMob5LOD"
# ## Use the model to predict values
#
# Now you have a model that has been trained to learn the relationship between `celsius_q` and `fahrenheit_a`. You can use the predict method to have it calculate the Fahrenheit degrees for previously unseen Celsius values.
#
# So, for example, if the Celsius value is 100, what do you think the Fahrenheit result will be? Take a guess before you run this code.
# + colab_type="code" id="oxNzL4lS2Gui" colab={}
print(model.predict([100.0]))
# + [markdown] colab_type="text" id="jApk6tZ1fBg1"
# The correct answer is $100 \times 1.8 + 32 = 212$, so our model is doing really well.
#
# ### To review
#
#
# * We created a model with a Dense layer
# * We trained it with 3500 examples (7 pairs, over 500 epochs).
#
# Our model tuned the variables (weights) in the Dense layer until it was able to return the correct Fahrenheit value for any Celsius value. (Remember, 100 Celsius was not part of our training data.)
#
#
#
# + [markdown] colab_type="text" id="zRrOky5gm20Z"
# ## Looking at the layer weights
#
# Finally, let's print the internal variables of the Dense layer.
# + colab_type="code" id="kmIkVdkbnZJI" colab={}
print("These are the layer variables: {}".format(l0.get_weights()))
# + [markdown] colab_type="text" id="RSplSnMvnWC-"
# The first variable is close to ~1.8 and the second to ~32. These values (1.8 and 32) are the actual variables in the real conversion formula.
#
# This is really close to the values in the conversion formula. We'll explain this in an upcoming video where we show how a Dense layer works, but for a single neuron with a single input and a single output, the internal math looks the same as [the equation for a line](https://en.wikipedia.org/wiki/Linear_equation#Slope%E2%80%93intercept_form), $y = mx + b$, which has the same form as the conversion equation, $f = 1.8c + 32$.
#
# Since the form is the same, the variables should converge on the standard values of 1.8 and 32, which is exactly what happened.
#
# With additional neurons, additional inputs, and additional outputs, the formula becomes much more complex, but the idea is the same.
#
# ### A little experiment
#
# Just for fun, what if we created more Dense layers with different units, which therefore also has more variables?
# + colab_type="code" id="Y2zTA-rDS5Xk" colab={}
l0 = tf.keras.layers.Dense(units=4, input_shape=[1])
l1 = tf.keras.layers.Dense(units=4)
l2 = tf.keras.layers.Dense(units=1)
model = tf.keras.Sequential([l0, l1, l2])
model.compile(loss='mean_squared_error', optimizer=tf.keras.optimizers.Adam(0.1))
model.fit(celsius_q, fahrenheit_a, epochs=500, verbose=False)
print("Finished training the model")
print(model.predict([100.0]))
print("Model predicts that 100 degrees Celsius is: {} degrees Fahrenheit".format(model.predict([100.0])))
print("These are the l0 variables: {}".format(l0.get_weights()))
print("These are the l1 variables: {}".format(l1.get_weights()))
print("These are the l2 variables: {}".format(l2.get_weights()))
# + [markdown] colab_type="text" id="xrpFFlgYhCty"
# As you can see, this model is also able to predict the corresponding Fahrenheit value really well. But when you look at the variables (weights) in the `l0` and `l1` layers, they are nothing even close to ~1.8 and ~32. The added complexity hides the "simple" form of the conversion equation.
#
# Stay tuned for the upcoming video on how Dense layers work for the explanation.
| Copy_of_Celsius_to_Fahrenheit.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
from datasets import load_dataset
from pipelines import pipeline
from nlgeval import compute_metrics
from tqdm import tqdm
import json
train_dataset, valid_dataset = load_dataset('squad', split=['train', 'validation'])
nlp = pipeline("question-generation")
# +
hyp = './results/eval_test/hyp.txt'
ref = './results/eval_test/ref1.txt'
ctx = './results/eval_test/ref2.txt'
res = './results/eval_test/res.txt'
dev_mode = True
squad_size = 200 if dev_mode else len(train_dataset)
#corpus = [text, text2,text3, text4]
c_t = None #Current title
c_q = [] #Current QG set
cqc = "" #Current concatenated questions
ccc = "" #Current context
h_q = [] #Lines of predicted questions (concatenated per context)
r_q = [] #Lines of actual questions (concatenated per context)
c_c = [] #Lines of context (one line per distinct context)
# +
def wq(ta, xt=hyp):
with open(xt, 'w+', encoding='utf-8') as f:
for t in tqdm(ta):
nt = nlp(t)
f.writelines([' '.join(nt), '\n'])
def wc(ta, xt=ref):
with open(xt, 'w+', encoding='utf-8') as f:
f.writelines('\n'.join(ta))
# +
print("QG for {0} records: ".format(squad_size))
for i in tqdm(range(0, squad_size)):
t_d = train_dataset[i]
#tdt = t_d["title"]
#Fill in first context
if i == 0:
ccc = t_d["context"]
#c_t = tdt
    #No separate end-of-loop flush is needed: the `i < squad_size - 1`
    #check below falls through to the write step on the last record.
#print(len(ccc), len(t_d["context"]))
#Skip if no context swap
if t_d["context"] == ccc:
cqc = cqc + "{} ".format(t_d["question"])
if i < squad_size - 1:
continue
    #Context has been swapped. Retrieve predicted questions
c_q = nlp(ccc)
h_q.append(' '.join(cq["question"] for cq in c_q))
r_q.append(cqc)
c_c.append(ccc)
#Swap context
ccc = t_d["context"]
#Clear question segment
cqc = ""
print("Distinct contexts found: {0}".format(len(h_q)))
print("Writing {0}...".format(hyp))
wc(h_q, xt=hyp)
print("Writing {0}...".format(ref))
wc(r_q, xt=ref)
print("Writing {0}...".format(ctx))
wc(c_c, xt=ctx)
# -
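The loop above collapses consecutive records that share a context into one (context, concatenated-questions) pair. Assuming records arrive grouped by context, as they do in SQuAD, the same idea can be sketched with `itertools.groupby` on toy records (the field names mirror the dataset; the values are made up):

```python
from itertools import groupby

# Toy records in SQuAD-like shape: consecutive entries share a context.
records = [
    {"context": "ctx A", "question": "Q1?"},
    {"context": "ctx A", "question": "Q2?"},
    {"context": "ctx B", "question": "Q3?"},
]

contexts, grouped_questions = [], []
for ctx, grp in groupby(records, key=lambda r: r["context"]):
    contexts.append(ctx)
    grouped_questions.append(" ".join(r["question"] for r in grp))

print(contexts)           # ['ctx A', 'ctx B']
print(grouped_questions)  # ['Q1? Q2?', 'Q3?']
```

Like the loop above, `groupby` only merges *adjacent* equal keys, which is exactly the behavior the context-swap check relies on.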
metrics_dict = compute_metrics(hypothesis=hyp, references=[ref, ctx], no_skipthoughts=True, no_glove=True)
# +
print("Writing result to {0}...".format(res))
json_res = json.dumps(metrics_dict, indent = 4)
with open(res, 'w', encoding='utf-8') as f:
f.writelines(json_res)
# -
| zz3b_e2e_squad_minimal.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Note
# * Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
# +
# Dependencies and Setup
import pandas as pd
import numpy as np
# File to Load (Remember to Change These)
purchase_file = "Resources/purchase_data.csv"
# Read Purchasing File and store into Pandas data frame
purchase_data_df = pd.read_csv(purchase_file)
purchase_data_df.head()
# +
#Display statistical overview of DataFrame for later reference
purchase_data_df.describe()
# -
# ## Player Count
# * Display the total number of players
#
#Calculation for Players Count
total_players=(purchase_data_df["SN"].unique())
player_count=len(total_players)
player_count
# ## Purchasing Analysis (Total)
# * Run basic calculations to obtain number of unique items, average price, etc.
#
#
# * Create a summary data frame to hold the results
#
#
# * Optional: give the displayed data cleaner formatting
#
#
# * Display the summary data frame
#
#Calculation of Total Unique Items
unique_items = purchase_data_df["Item ID"].nunique()
unique_items
#Calculation of Average Purchase Price
average_price = purchase_data_df["Price"].mean()
average_price
#Calculation of Total Number of Purchases
total_num_purchases = purchase_data_df["Purchase ID"].count()
total_num_purchases
#Calculation of Total Revenue
total_price = purchase_data_df["Price"].sum()
total_price
# +
#Calculations for Purchase Summary
purchase_summary = pd.DataFrame({"Number of Unique Items":unique_items,
"Average Price":[average_price],
"Number of Purchases":[total_num_purchases],
"Total Price":[total_price]})
purchase_summary["Average Price"] = purchase_summary["Average Price"].map("${:.2f}".format)
purchase_summary["Total Price"] = purchase_summary["Total Price"].map("${:.2f}".format)
purchase_summary
# -
# ## Gender Demographics
# * Percentage and Count of Male Players
#
#
# * Percentage and Count of Female Players
#
#
# * Percentage and Count of Other / Non-Disclosed
#
#
#
# +
#Calculation of Gender Demographics and Percentages
player_groupby = purchase_data_df.groupby('Gender')['SN'].nunique().reset_index()
player_groupby['Percentage of Players'] = 100 * player_groupby['SN']/player_groupby['SN'].sum()
player_summary = player_groupby[['Gender', 'Percentage of Players', 'SN']].sort_values(['Percentage of Players'])
player_summary = player_summary.reset_index(drop=True)
#formatting columns
player_summary['Percentage of Players'] = player_summary['Percentage of Players'].map("{:,.1f}%".format)
#setting index to Gender
player_demo_summary = player_summary.set_index('Gender')
player_demo_summary = player_demo_summary.rename(columns = {'SN': 'Total Count'})
player_demo_summary
# -
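The cell above relies on one pandas pattern: count unique players per group with `groupby(...)["SN"].nunique()`, then normalize by the grand total. The same pattern on a toy frame (illustrative names, not the real data):

```python
import pandas as pd

# Toy purchase log: one row per purchase, so players can appear multiple times.
df = pd.DataFrame({
    "SN": ["a", "a", "b", "c", "d"],
    "Gender": ["Male", "Male", "Male", "Female", "Female"],
})

# Unique players per gender, then each gender's share of all unique players.
players = df.groupby("Gender")["SN"].nunique()
share = 100 * players / players.sum()

print(players.to_dict())  # {'Female': 2, 'Male': 2}
print(share.to_dict())    # {'Female': 50.0, 'Male': 50.0}
```

Counting rows instead of unique `SN` values here would count purchases, not players, which is why `nunique()` matters.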
#
# ## Purchasing Analysis (Gender)
# * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender
#
#
#
#
# * Create a summary data frame to hold the results
#
#
# * Optional: give the displayed data cleaner formatting
#
#
# * Display the summary data frame
# +
#Calculate the metrics for Purchasing Analysis
gender_df = purchase_data_df.groupby(['Gender'])
pur_count = gender_df["Price"].count()
price_mean = gender_df["Price"].mean()
total_pur = gender_df["Price"].sum()
avg_price = gender_df["Price"].sum()/gender_df["SN"].nunique()
#Create dataframe using above metrics
purchase_analysis = pd.DataFrame({'Purchase Count': pur_count.map('{:,}'.format),
                                  'Average Purchase Price': price_mean.astype(float).map('${:,.2f}'.format),
                                  'Total Purchase Value': total_pur.astype(float).map('${:,.2f}'.format),
                                  'Avg Total Purchase per Person': avg_price.astype(float).map('${:,.2f}'.format)})
purchase_analysis
# -
# ## Age Demographics
# * Establish bins for ages
#
#
# * Categorize the existing players using the age bins. Hint: use pd.cut()
#
#
# * Calculate the numbers and percentages by age group
#
#
# * Create a summary data frame to hold the results
#
#
# * Optional: round the percentage column to two decimal points
#
#
# * Display Age Demographics Table
#
# +
# Create the bins in which Data will be held
age_bins = [0, 9.90, 14.90, 19.90, 24.90, 29.90, 34.90, 39.90, 99999]
# Create the names for the eight bins
group_names = ['<10', '10-14','15-19', '20-24', '25-29', '30-34', '35-39', '40+']
by_age_purchase_data_df = purchase_data_df
by_age_purchase_data_df["Age Group"] = pd.cut(by_age_purchase_data_df["Age"], age_bins, labels=group_names)
by_age_purchase_data_df
#Create Group based on Bins
by_age_purchase_data_df = by_age_purchase_data_df.groupby("Age Group")
by_age_purchase_data_df.count()
group_by_age_df = pd.DataFrame(by_age_purchase_data_df.count())
group_by_age_df
#Count unique players per age bin, then express them as a percentage of all players
group_by_age_df["SN"] = purchase_data_df.groupby("Age Group")["SN"].nunique()
group_by_age_df["Purchase ID"] = (group_by_age_df["SN"]/player_count) * 100
group_by_age_df
#Format the percentage column
group_by_age_df["Purchase ID"] = group_by_age_df["Purchase ID"].map("{:,.2f}%".format)
group_by_age_df
orig_group_by_age_df = group_by_age_df[["Purchase ID", "SN"]]
orig_group_by_age_df
#Final Calculations and renaming of columns
final_group_by_age_df = orig_group_by_age_df.rename(columns={"Purchase ID": "Percentage of Players", "SN": "Total Count"})
final_group_by_age_df
# -
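The binning step hinges on `pd.cut`, which assigns each age to its (left-exclusive, right-inclusive) bracket. A small standalone illustration with made-up ages and a shortened bin list:

```python
import pandas as pd

# Made-up ages and a reduced version of the bin edges used above.
ages = pd.Series([7, 12, 23, 41])
bins = [0, 9.90, 14.90, 24.90, 99999]
labels = ["<10", "10-14", "15-24", "25+"]

# Each value falls into the bracket (left, right]; labels replace the intervals.
print(pd.cut(ages, bins, labels=labels).tolist())  # ['<10', '10-14', '15-24', '25+']
```

The `9.90`-style edges in the notebook exist to make integer ages unambiguous: age 10 falls above 9.90 and so lands in the 10-14 bin.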
# ## Purchasing Analysis (Age)
# * Bin the purchase_data data frame by age
#
#
# * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below
#
#
# * Create a summary data frame to hold the results
#
#
# * Optional: give the displayed data cleaner formatting
#
#
# * Display the summary data frame
# +
#Count purchases by ages
purchase_data_df["Age Group"] = pd.cut(purchase_data_df["Age"], age_bins, labels=group_names)
purchase_data_df
#Run basic calculations
age_purchase_total = purchase_data_df.groupby(["Age Group"]).sum()["Price"].rename("Total Purchase Value")
average_age = purchase_data_df.groupby(["Age Group"]).mean()["Price"].rename("Average Purchase Price")
age_counts = purchase_data_df.groupby(["Age Group"]).count()["Price"].rename("Purchase Count")
normalized_total = age_purchase_total/final_group_by_age_df["Total Count"]
age_data = pd.DataFrame({"Purchase Count": age_counts, "Average Purchase Price": average_age, "Total Purchase Value": age_purchase_total, "Normalized Totals": normalized_total})
age_data["Average Purchase Price"] = age_data["Average Purchase Price"].map("{:,.2f}".format)
age_data["Normalized Totals"] = age_data["Normalized Totals"].map("{:,.2f}".format)
age_data = age_data.loc[:,["Purchase Count", "Average Purchase Price", "Total Purchase Value", "Normalized Totals"]]
age_data
# ## Top Spenders
# * Run basic calculations to obtain the results in the table below
#
#
# * Create a summary data frame to hold the results
#
#
# * Sort the total purchase value column in descending order
#
#
# * Optional: give the displayed data cleaner formatting
#
#
# * Display a preview of the summary data frame
#
#
# +
#group purchase data by screen names
spender_stats = purchase_data_df.groupby("SN")
purchase_count_spender = spender_stats["Purchase ID"].count()
average_purchase_price_spender = spender_stats["Price"].mean()
purchase_total_spender = spender_stats["Price"].sum()
top_spenders = pd.DataFrame({"Purchase Count": purchase_count_spender,
"Average Purchase Price": average_purchase_price_spender,
"Total Purchase Value": purchase_total_spender})
total_spenders = top_spenders.sort_values("Total Purchase Value", ascending = False)
total_spenders.head().style.format({"Average Purchase Price": "${:,.2f}",
                                    "Total Purchase Value": "${:,.2f}"})
# -
# ## Most Popular Items
# * Retrieve the Item ID, Item Name, and Item Price columns
#
#
# * Group by Item ID and Item Name. Perform calculations to obtain purchase count, item price, and total purchase value
#
#
# * Create a summary data frame to hold the results
#
#
# * Sort the purchase count column in descending order
#
#
# * Optional: give the displayed data cleaner formatting
#
#
# * Display a preview of the summary data frame
#
#
# +
#Create new data frame
purchase_data_df.columns
popular_items_grouped = purchase_data_df.groupby(["Item ID", "Item Name"])
purchase_count = popular_items_grouped["Item Name"].count()
unit_price = popular_items_grouped["Price"].mean()  # each item has a single price
purchase_value = popular_items_grouped["Price"].sum()
most_popular_df = pd.DataFrame({"Purchase Count": purchase_count,
"Item Price": unit_price,
"Total Purchase Value": purchase_value})
most_popular_df = most_popular_df.sort_values("Purchase Count", ascending=False)
most_popular_df.head().style.format({"Item Price": "${:,.2f}",
                                     "Total Purchase Value": "${:,.2f}"})
# -
# ## Most Profitable Items
# * Sort the above table by total purchase value in descending order
#
#
# * Optional: give the displayed data cleaner formatting
#
#
# * Display a preview of the data frame
#
#
# +
#Most Profitable Items
purchase_data_df.columns
most_profitable_df = purchase_data_df.groupby(["Item ID", "Item Name"])
purchase_count = most_profitable_df["Item Name"].count()
unit_price = most_profitable_df["Price"].mean()  # each item has a single price
purchase_value = most_profitable_df["Price"].sum()
most_profitable_df= pd.DataFrame({"Purchase Count": purchase_count,
"Item Price": unit_price,
"Total Purchase Value": purchase_value})
most_profitable_df
most_profitable_df = most_profitable_df.sort_values("Total Purchase Value", ascending=False).head()
most_profitable_df["Item Price"] = most_profitable_df["Item Price"].astype(float).map("${:,.2f}".format)
most_profitable_df["Total Purchase Value"] = most_profitable_df["Total Purchase Value"].astype(float).map("${:,.2f}".format)
most_profitable_df.head(5)
# -
| Instructions/HeroesOfPymoli/HeroesOfPymoli_starter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# #Python Libraries
import numpy as np
import scipy as sp
import pandas as pd
import statsmodels
import pandas_profiling
# %matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import os
import sys
import time
import requests
import datetime
import missingno as msno
import gc
# from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
# #X_trainval, X_test, y_trainval, y_test = train_test_split(X, y)
# #X_train, X_val, y_train, y_val = train_test_split(X_trainval, y_trainval)
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
import xgboost as xgb
# -
# # Ensemble - My XGBoost Model with Kaggle LB Train subset and Custome Features with https://www.kaggle.com/cttsai/blend-app-channel-and-app-mean/code
# !wc -l model_3_xbg_submission_v4.csv
# !wc -l model_kaggle_lb_submission_appmean.csv
df_1 = pd.read_csv("model_3_xbg_submission_v4.csv")
df_2 = pd.read_csv("model_kaggle_lb_submission_appmean.csv")
df_1.head()
df_2.head()
sub = pd.DataFrame()
sub['click_id'] = df_1['click_id']
weight_df1 = 0.5
weight_df2 = 0.5
sub['is_attributed'] = (df_1['is_attributed']*weight_df1 + df_2['is_attributed']*weight_df2)
sub.head()
sub.to_csv('model_4_ensemble_xbg_kagglelbappmean_subv1.csv',index=False)
# !head model_4_ensemble_xbg_kagglelbappmean_subv1.csv
# !wc -l model_4_ensemble_xbg_kagglelbappmean_subv1.csv
# # Ensemble 2
# !wc -l ensemble_data/*
df_1 = pd.read_csv("ensemble_data/ftrl_submission.csv")
df_2 = pd.read_csv("ensemble_data/kartik_ensemble_1.csv")
df_3 = pd.read_csv("ensemble_data/lgb_sub_tint.csv")
df_4 = pd.read_csv("ensemble_data/sub_lgb_balanced99.csv")
df_5 = pd.read_csv("ensemble_data/sub_mix.csv")
sub = pd.DataFrame()
sub['click_id'] = df_2['click_id']
weight_df1 = 0.2
weight_df2 = 0.2
weight_df3 = 0.2
weight_df4 = 0.2
weight_df5 = 0.2
sub['is_attributed'] = (df_1['is_attributed']*weight_df1 +
df_2['is_attributed']*weight_df2 +
df_3['is_attributed']*weight_df3 +
df_4['is_attributed']*weight_df4 +
df_5['is_attributed']*weight_df5)
sub.to_csv('model_5_ensemble_5csvs.csv',index=False)
# !head model_5_ensemble_5csvs.csv
# # Ensemble 3
# +
"""
xgb - https://www.kaggle.com/pranav84/xgboost-on-hist-mode-ip-addresses-dropped
ftrl - https://www.kaggle.com/ogrellier/ftrl-in-chunck
nn - https://www.kaggle.com/shujian/mlp-starter?scriptVersionId=2754301
lgb - https://www.kaggle.com/pranav84/lightgbm-fixing-unbalanced-data-val-auc-0-977?scriptVersionId=2761828
usam - https://www.kaggle.com/cartographic/undersampler
means - https://www.kaggle.com/prashantkikani/weighted-app-chanel-os
"""
# #LOGIT_WEIGHT = .5 #0.9640
LOGIT_WEIGHT = .8
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
from scipy.special import expit, logit
almost_zero = 1e-10
almost_one = 1 - almost_zero
"""
Ensemble 3.1 - 0.9640
models = {
'xgb ': "ensemble_data/sub_xgb_hist_pos_weight_9591.csv",
'ftrl1': "ensemble_data/ftrl_submission.csv",
'nn ': "ensemble_data/sub_mlp_9502.csv",
'lgb ': "ensemble_data/sub_lgb_balanced99_9631.csv",
'usam ': "ensemble_data/pred_9562.csv",
'means': "ensemble_data/subnew_9565.csv",
'ftrl2': "ensemble_data/ftrl_submission.csv"
}
weights = {
'xgb ': .08,
'ftrl1': .04,
'nn ': .05,
'lgb ': .65,
'usam ': .05,
'means': .07,
'ftrl2': .06
}
"""
"""
Ensemble 3.2 - 0.9642
models = {
'xgb ': "ensemble_data/xgb_sub_9610.csv",
'ftrl1': "ensemble_data/ftrl_submission_9606.csv",
'nn ': "ensemble_data/sub_mlp_9502.csv",
'lgb ': "ensemble_data/sub_lgb_balanced99_9631.csv",
'usam ': "ensemble_data/pred_9562.csv",
'means': "ensemble_data/subnew_9565.csv",
'ftrl2': "ensemble_data/ftrl_submission_9606.csv"
}
weights = {
'xgb ': .10,
'ftrl1': .04,
'nn ': .05,
'lgb ': .60,
'usam ': .05,
'means': .07,
'ftrl2': .09
}
"""
"""
Ensemble 3.3 - Same as the one above without the Neural Network - 0.9651
models = {
'xgb ': "ensemble_data/xgb_sub_9610.csv",
'ftrl1': "ensemble_data/ftrl_submission_9606.csv",
'lgb ': "ensemble_data/sub_lgb_balanced99_9631.csv",
'usam ': "ensemble_data/pred_9562.csv",
'means': "ensemble_data/subnew_9565.csv",
'ftrl2': "ensemble_data/ftrl_submission_9606.csv"
}
weights = {
'xgb ': .15,
'ftrl1': .04,
'lgb ': .60,
'usam ': .05,
'means': .07,
'ftrl2': .09
}
"""
"""
Ensemble 3.4 - Same as the one above, modified weights - 0.9653
models = {
'xgb ': "ensemble_data/xgb_sub_9610.csv",
'ftrl1': "ensemble_data/ftrl_submission_9606.csv",
'lgb ': "ensemble_data/sub_lgb_balanced99_9631.csv",
'means': "ensemble_data/subnew_9565.csv",
'ftrl2': "ensemble_data/ftrl_submission_9606.csv"
}
weights = {
'xgb ': .20,
'ftrl1': .06,
'lgb ': .60,
'means': .05,
'ftrl2': .09
}
"""
"""
Ensemble 3.5 - Same as the one above, modified weights - 0.9680
models = {
'xgb ': "ensemble_data/xgb_sub_9635.csv",
'ftrl1': "ensemble_data/ftrl_submission_9606.csv",
'lgb ': "ensemble_data/sub_lgb_balanced99_9667.csv",
'ftrl2': "ensemble_data/ftrl_submission_9606.csv"
}
weights = {
'xgb ': .25,
'ftrl1': .06,
'lgb ': .60,
'ftrl2': .09
}
"""
""" - 0.9684
LOGIT_WEIGHT = .8
models = {
'xgb': "ensemble_data/xgb_sub_9645.csv",
'ftrl1': "ensemble_data/wordbatch_fm_ftrl_9615.csv",
'lgb': "ensemble_data/sub_lgb_balanced99_9675.csv",
'dl_support': "ensemble_data/dl_support_9653.csv"
}
weights = {
'xgb': .10,
'ftrl1': .10,
'lgb': .60,
'dl_support': .20
}
"""
LOGIT_WEIGHT = .2
models = {
'xgb': "ensemble_data/xgb_sub_9645.csv",
'ftrl1': "ensemble_data/wordbatch_fm_ftrl_9615.csv",
'lgb': "ensemble_data/sub_lgb_balanced99_9675.csv",
'dl_support': "ensemble_data/dl_support_9653.csv"
}
weights = {
'xgb': .10,
'ftrl1': .10,
'lgb': .60,
'dl_support': .20
}
print(sum(weights.values()))
subs = {m:pd.read_csv(models[m]) for m in models}
first_model = list(models.keys())[0]
n = subs[first_model].shape[0]
ranks = {s:subs[s]['is_attributed'].rank()/n for s in subs}
logits = {s:subs[s]['is_attributed'].clip(almost_zero,almost_one).apply(logit) for s in subs}
logit_avg = 0
rank_avg = 0
for m in models:
s = logits[m].std()
print(m, s)
logit_avg = logit_avg + weights[m]*logits[m] / s
rank_avg = rank_avg + weights[m]*ranks[m]
logit_rank_avg = logit_avg.rank()/n
final_avg = LOGIT_WEIGHT*logit_rank_avg + (1-LOGIT_WEIGHT)*rank_avg
final_sub = pd.DataFrame()
final_sub['click_id'] = subs[first_model]['click_id']
final_sub['is_attributed'] = final_avg
final_sub.to_csv("sub_kartik_mix_v6.csv", index=False)
# -
# !wc -l sub_kartik_mix_v6.csv
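The blend in the cell above mixes a rank average with a std-normalized logit average, weighted by `LOGIT_WEIGHT`. A self-contained NumPy sketch of the same idea on toy model outputs (illustrative scores, not the real submission files; assumes no tied scores):

```python
import numpy as np

def blend(preds, weights, logit_weight=0.2, eps=1e-10):
    """Blend probability vectors: rank average + std-normalized logit average."""
    preds = [np.clip(p, eps, 1 - eps) for p in preds]
    n = len(preds[0])
    # Rank each model's scores to [0, 1]; ranks ignore calibration differences.
    ranks = [p.argsort().argsort() / (n - 1) for p in preds]
    logits = [np.log(p / (1 - p)) for p in preds]
    rank_avg = sum(w * r for w, r in zip(weights, ranks))
    # Divide each logit vector by its std so no model dominates by scale alone.
    logit_avg = sum(w * lg / lg.std() for w, lg in zip(weights, logits))
    # Re-rank the logit average so both terms share the same [0, 1] scale.
    logit_rank = logit_avg.argsort().argsort() / (n - 1)
    return logit_weight * logit_rank + (1 - logit_weight) * rank_avg

# Toy outputs from two models over four click ids (made-up probabilities).
p1 = np.array([0.10, 0.80, 0.40, 0.90])
p2 = np.array([0.20, 0.70, 0.30, 0.95])
final = blend([p1, p2], weights=[0.5, 0.5])
print(final)  # both models agree on the ordering, so the last item scores highest
```

Because AUC depends only on the ordering of predictions, rank-based blending like this is a common choice for combining differently-calibrated submissions.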
# +
weights = {
'xgb ': .08,
'ftrl1': .04,
'nn ': .05,
'lgb ': .65,
'usam ': .05,
'means': .07,
'ftrl2': .06
}
print(sum(weights.values()))
# -
weights = {
'xgb ': .10,
'ftrl1': .04,
'nn ': .05,
'lgb ': .60,
'usam ': .05,
'means': .07,
'ftrl2': .09
}
print(sum(weights.values()))
weights = {
'xgb ': .15,
'ftrl1': .04,
'lgb ': .60,
'usam ': .05,
'means': .07,
'ftrl2': .09
}
print(sum(weights.values()))
# +
weights = {
'xgb ': .15,
'ftrl1': .06,
'lgb ': .60,
'means': .10,
'ftrl2': .09
}
print(sum(weights.values()))
# -
| 07_TalkingDataAdTrackingFraudDetectionChallenge/02_Ensemble_Models.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import quandl
import matplotlib.pyplot as plt
import statsmodels.formula.api as sm
quandl.ApiConfig.api_key = '<KEY>'
goog_table = quandl.get('WIKI/GOOG')
amzn_table = quandl.get('WIKI/AMZN')
ebay_table = quandl.get('WIKI/EBAY')
wal_table = quandl.get('WIKI/WMT')
aapl_table = quandl.get('WIKI/AAPL')
goog = goog_table.loc['2016',['Close']]
amzn = amzn_table.loc['2016',['Close']]
ebay = ebay_table.loc['2016',['Close']]
wal = wal_table.loc['2016',['Close']]
aapl = aapl_table.loc['2016',['Close']]
goog_log = np.log(goog.Close).diff().dropna()
amzn_log = np.log(amzn.Close).diff().dropna()
ebay_log = np.log(ebay.Close).diff().dropna()
wal_log = np.log(wal.Close).diff().dropna()
aapl_log = np.log(aapl.Close).diff().dropna()
df = pd.concat([goog_log,amzn_log,ebay_log,wal_log,aapl_log],axis = 1).dropna()
df.columns = ['goog','amzn','ebay','wal','aapl']
df.tail()
model = sm.ols(formula = 'amzn~goog+ebay+wal+aapl',data = df).fit()
print(model.summary())
simple = sm.ols(formula = 'amzn ~ goog',data = df).fit()
print(simple.summary())
from datetime import datetime
url = 'https://www.quantconnect.com/tutorials/wp-content/uploads/2017/08/F-F_Research_Data_5_Factors_2x3_daily.csv'
fama_table = pd.read_csv(url)
index = [datetime.strptime(str(x), "%Y%m%d") for x in fama_table.iloc[:,0]]
fama_table.index = index
fama_table = fama_table.iloc[:,1:]
fama = fama_table.loc['2016']
fama = fama.rename(columns = {'Mkt-RF':'MKT'})
fama = fama.apply(lambda x: x/100)
fama_df = pd.concat([fama,amzn_log],axis = 1)
fama_model = sm.ols(formula = 'Close~MKT+SMB+HML+RMW+CMA',data = fama_df).fit()
print(fama_model.summary())
result = pd.DataFrame({'simple regression':simple.predict(),'fama_french':fama_model.predict(),'sample':df.amzn},index = df.index)
plt.figure(figsize = (15,7.5))
plt.plot(result['2016-7':'2016-9'].index,result.loc['2016-7':'2016-9','simple regression'])
plt.plot(result['2016-7':'2016-9'].index,result.loc['2016-7':'2016-9','fama_french'])
plt.legend()
plt.show()
plt.figure()
simple.resid.plot.density()
plt.show()
print('residual mean: ', np.mean(fama_model.resid))
print('residual variance: ', np.var(fama_model.resid))
plt.figure(figsize = (20,10))
plt.scatter(df.goog,simple.resid)
plt.axhline(0.05,color = 'r')
plt.axhline(-0.05,color = 'r')
plt.axhline(0,color = 'black')
plt.xlabel('x value')
plt.ylabel('residual')
plt.show()
from statsmodels.stats import diagnostic as dia
het = dia.het_breuschpagan(fama_model.resid,fama_df[['MKT','SMB','HML','RMW','CMA']][1:])
print('p-value of Heteroskedasticity: ', het[-1])
dia.het_breuschpagan(simple.resid,pd.DataFrame(df.goog))
from math import factorial
print((float(factorial(10))/(factorial(7)*factorial(10-7)))*(0.7**7)*(0.3**3))
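That last expression is the binomial probability P(X = 7) for X ~ Binomial(n=10, p=0.7). A compact cross-check using Python 3's `math.comb` instead of the explicit factorial ratio:

```python
from math import comb

# P(X = 7) for X ~ Binomial(n=10, p=0.7): C(10, 7) * 0.7^7 * 0.3^3
p = comb(10, 7) * 0.7**7 * 0.3**3
print(round(p, 6))  # 0.266828
```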
| 05 Introduction to Financial Python[]/10 Multiple Linear Regression/10 Multiple Linear Regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + slideshow={"slide_type": "-"}
# @IMPORT-MERGE
import numpy as np
import pandas as pd
from munch import Munch
from plaster.tools.zplots import zplots
from plaster.run.plots import plots
from plaster.run.plots import plots_dev as pdev
from plaster.run.plots import plots_dev_pro as ppro
from plaster.run.plots import plots_dev_ptm as pptm
from plaster.run.run import RunResult
from plaster.run.job import JobResult,MultiJobResult
from plaster.tools.ipynb_helpers.displays import hd
from plaster.tools.log.log import error, debug
from plaster.tools.utils.utils import json_print,np_safe_divide,munch_abbreviation_string
# +
# @REMOVE-FROM-TEMPLATE
#
z = zplots.setup()
job = JobResult("../../../jobs_folder/yoda_small_multi_2__survey/")
# If you are running this report by dropping it into a job folder,
# then comment the above and uncomment this line before running:
# job = JobResult("./")
# -
# ## Optionally change proteins or PTMs of interest
# +
# You typically do not need to edit this cell, just execute it.
# Your job will have defined proteins-of-interest (POI) as well as
# any PTMs for proteins in the job. You can however set this here,
# and it will affect how the survey decides which protease/label-schemes
# are "best". With POI and PTMs set, whether from the original job definition
# or in this cell, you can further *reduce* this domain with pro_subset and
# ptm_subset filters in the next cell.
# job.set_pros_of_interest( protein_ids=[ 'P10636-8', 'P2798'] ) # can be empty list to set none
# job.set_pro_ptm_locs( protein_id='P10636-8', ptms='181;182;185') # can be empty string or ;-delimited list.
# job.get_pro_ptm_locs( protein_id='P10636-8' ) # to see the current setting
# Here we'll print the current proteins of interest - this will include
# any PTMs that are set on them.
print( "Proteins of Interest")
poi_df = job.get_pros_of_interest().drop( ['run_i','run_name'],axis='columns').drop_duplicates('pro_id')
display(poi_df)
# -
# ## Edit Filters and Find Best Runs
# +
def best_runs_for_objective(filters,title_extra=''):
best_runs = job.get_nn_stats_df( filters=filters )
hd('h1',f'Best runs for objective: {filters.objective} {title_extra}')
hd('h3', 'Filters' )
json_print(filters)
print()
pd.set_option('display.max_columns',None)
display(best_runs)
return best_runs
# Edit the filters here, then run this cell
#
filters = Munch(
allow_proline_at_2=True, # True or False
exclude_runs=[], # [] or List of runs to exclude, e.g. ['gluc_ph4_c_k_de_y_9880']
include_runs=[], # [] or List of runs to consider, e.g. ['gluc_ph4_c_k_de_y_9880']
max_dyes_per_ch=5, # None, or integer
max_pep_len=50, # None, or integer
max_ptms_per_pep=None, # None, or integer
multi_peptide_metric='dist_min',# None, 'dist_min', or 'dist_avg'
n_best_schemes=10, # integer - display top n best protease/label schemes
n_peps_per_scheme=1, # integer - display top n peps per best scheme found
objective='protein_id', # 'protein_id', 'coverage', or 'ptms'
poi_only=True, # limit to 'proteins of interest'?
pro_subset=[], # Reduce domain of proteins to consider, e.g. ['Q14997']
ptm_subset=[], # Reduce domain of ptms to consider, e.g. [181,184]
verbose=0, # set to 1 for various info on filtering (dev)
)
best_runs = best_runs_for_objective(filters)
# The following line saves your best_runs dataframe to a CSV named for the filter settings.
# Uncomment to save your csv.
# user = ''
# best_runs.to_csv(f'./survey_best_runs_{user}_{munch_abbreviation_string(filters)}.csv',index=False,float_format="%g")
# +
# The following is an example of how you might choose to look at best runs for protein
# identification for two proteins, first individually to see which runs are the very
# best for each protein individually, and then together to see which runs produce the
# best combined result via composite nearest-neighbor distance for their best peptides.
# This example uses the yoda_small_multi_2__survey job (or similar) which seeks to identify
# two proteins in the mixture.
if False:
filters.poi_only = True # only look at proteins of interest, which we'll further limit below
filters.multi_peptide_metric = 'dist_min' # it's ok if this is on even when doing 1 protein
filters.pro_subset = ['Q14997'] # find best runs for this protein
best_runs = best_runs_for_objective(filters, 'Q14997' )
filters.pro_subset = ['P40306'] # find best runs for this protein
best_runs = best_runs_for_objective(filters, 'P40306')
filters.pro_subset = [] # remove specific subset, so we'll be looking at all proteins of interest (those two)
best_runs = best_runs_for_objective(filters, 'Both together')
filters.multi_peptide_metric=None # Now just get best peptides from *any* POI to see which proteins need help
filters.n_best_schemes=50
best_runs = best_runs_for_objective(filters, 'Best peps either protein')
# +
# The following is an example of how you might choose to look at best runs for ptm
# identification for a handful of PTM locations - first by individual location and
# then together. It depends on what question you are asking. Do you want to find
# the best individual runs per PTM location (like the train_and_test_template_ptm
# will illustrate for you), or do you want to find runs that, while probably not as
# optimal for any given location, will give you some measure of the "best" result
# for all locations combined?
if False:
filters.ptm_subset = [181] # find best runs for this ptm
best_runs = best_runs_for_objective(filters, 'PTM 181' )
filters.ptm_subset = [199] # find best runs for this ptm
best_runs = best_runs_for_objective(filters, 'PTM 199')
filters.ptm_subset = [181,199] # find best runs if you want one run to see both PTMs
best_runs = best_runs_for_objective(filters, 'Both together')
filters.multi_peptide_metric=None # Now just get best PTM peptides across runs to see which PTMs need help
filters.n_best_schemes=50
best_runs = best_runs_for_objective(filters, 'Best runs either PTM')
# -
| plaster/gen/nb_templates/survey_template.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## This notebook will help you train a latent Point-Cloud GAN.
#
# (Assumes latent_3d_points is in the PYTHONPATH and that a trained AE model exists)
# +
import numpy as np
import os.path as osp
import matplotlib.pylab as plt
from latent_3d_points.src.point_net_ae import PointNetAutoEncoder
from latent_3d_points.src.autoencoder import Configuration as Conf
from latent_3d_points.src.neural_net import MODEL_SAVER_ID
from latent_3d_points.src.in_out import snc_category_to_synth_id, create_dir, PointCloudDataSet, \
load_all_point_clouds_under_folder
from latent_3d_points.src.general_utils import plot_3d_point_cloud
from latent_3d_points.src.tf_utils import reset_tf_graph
from latent_3d_points.src.vanilla_gan import Vanilla_GAN
from latent_3d_points.src.w_gan_gp import W_GAN_GP
from latent_3d_points.src.generators_discriminators import latent_code_discriminator_two_layers,\
latent_code_generator_two_layers
# -
# %load_ext autoreload
# %autoreload 2
# %matplotlib inline
# Specify where the raw point-clouds and the pre-trained AE are.
# +
# Top-dir of where point-clouds are stored.
top_in_dir = '../data/shape_net_core_uniform_samples_2048/'
ae_configuration = '../data/single_class_ae/configuration'
# +
# Where to save GANs check-points etc.
top_out_dir = '../data/'
experiment_name = 'latent_gan_with_chamfer_ae'
ae_epoch = 500 # Epoch of AE to load.
bneck_size = 128 # Bottleneck-size of the AE
n_pc_points = 2048 # Number of points per model.
class_name = input('Give me the class name (e.g. "chair"): ').lower()
# -
# Load point-clouds.
syn_id = snc_category_to_synth_id()[class_name]
class_dir = osp.join(top_in_dir , syn_id)
all_pc_data = load_all_point_clouds_under_folder(class_dir, n_threads=8, file_ending='.ply', verbose=True)
print('Shape of DATA =', all_pc_data.point_clouds.shape)
# Load pre-trained AE
reset_tf_graph()
ae_conf = Conf.load(ae_configuration)
ae_conf.encoder_args['verbose'] = False
ae_conf.decoder_args['verbose'] = False
ae = PointNetAutoEncoder(ae_conf.experiment_name, ae_conf)
ae.restore_model(ae_conf.train_dir, ae_epoch, verbose=True)
# Use AE to convert raw pointclouds to latent codes.
latent_codes = ae.get_latent_codes(all_pc_data.point_clouds)
latent_data = PointCloudDataSet(latent_codes)
print('Shape of DATA =', latent_data.point_clouds.shape)
# Check that the decoded AE latent-codes look decent.
L = ae.decode(latent_codes)
i = 0
plot_3d_point_cloud(L[i][:, 0], L[i][:, 1], L[i][:, 2], in_u_sphere=True);
i = 20
plot_3d_point_cloud(L[i][:, 0], L[i][:, 1], L[i][:, 2], in_u_sphere=True);
# +
# Set GAN parameters.
use_wgan = True # Wasserstein with gradient penalty, or not?
n_epochs = 1 # Epochs to train.
plot_train_curve = True
save_gan_model = False
saver_step = np.hstack([np.array([1, 5, 10]), np.arange(50, n_epochs + 1, 50)])
# If true, every 'saver_step' epochs we produce & save synthetic pointclouds.
save_synthetic_samples = True
# How many synthetic samples to produce at each save step.
n_syn_samples = latent_data.num_examples
# Optimization parameters
init_lr = 0.0001
batch_size = 50
noise_params = {'mu':0, 'sigma': 0.2}
noise_dim = bneck_size
beta = 0.5 # ADAM's momentum.
n_out = [bneck_size] # Dimensionality of generated samples.
if save_synthetic_samples:
synthetic_data_out_dir = osp.join(top_out_dir, 'OUT/synthetic_samples/', experiment_name)
create_dir(synthetic_data_out_dir)
if save_gan_model:
train_dir = osp.join(top_out_dir, 'OUT/latent_gan', experiment_name)
create_dir(train_dir)
# +
reset_tf_graph()
if use_wgan:
lam = 10 # lambda of W-GAN-GP
gan = W_GAN_GP(experiment_name, init_lr, lam, n_out, noise_dim, \
latent_code_discriminator_two_layers,
latent_code_generator_two_layers,\
beta=beta)
else:
gan = Vanilla_GAN(experiment_name, init_lr, n_out, noise_dim,
latent_code_discriminator_two_layers, latent_code_generator_two_layers,
beta=beta)
# -
accum_syn_data = []
train_stats = []
# Train the GAN.
for _ in range(n_epochs):
loss, duration = gan._single_epoch_train(latent_data, batch_size, noise_params)
epoch = int(gan.sess.run(gan.increment_epoch))
print(epoch, loss)
if save_gan_model and epoch in saver_step:
checkpoint_path = osp.join(train_dir, MODEL_SAVER_ID)
gan.saver.save(gan.sess, checkpoint_path, global_step=gan.epoch)
if save_synthetic_samples and epoch in saver_step:
syn_latent_data = gan.generate(n_syn_samples, noise_params)
syn_data = ae.decode(syn_latent_data)
np.savez(osp.join(synthetic_data_out_dir, 'epoch_' + str(epoch)), syn_data)
        for k in range(3):  # plot three (synthetic) random examples.
plot_3d_point_cloud(syn_data[k][:, 0], syn_data[k][:, 1], syn_data[k][:, 2],
in_u_sphere=True)
train_stats.append((epoch, ) + loss)
if plot_train_curve:
x = range(len(train_stats))
d_loss = [t[1] for t in train_stats]
g_loss = [t[2] for t in train_stats]
plt.plot(x, d_loss, '--')
plt.plot(x, g_loss)
plt.title('Latent GAN training. (%s)' %(class_name))
plt.legend(['Discriminator', 'Generator'], loc=0)
plt.tick_params(axis='x', which='both', bottom='off', top='off')
plt.tick_params(axis='y', which='both', left='off', right='off')
plt.xlabel('Epochs.')
plt.ylabel('Loss.')
| latent_3d_points_py3/notebooks/train_latent_gan.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Apriori Algorithm: Supermarket Example
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from apyori import apriori
store_data = pd.read_csv('../../../datasets/unsupervised_ML/Apriori_Algorithm/store_data.csv', header=None)
store_data.head(10)
store_data.columns
# +
## preprocess
# -
records = []
for i in range(0, 7501):  # the store dataset has 7501 transactions of up to 20 items each
records.append([str(store_data.values[i,j]) for j in range(0, 20)])
from mlxtend.preprocessing import TransactionEncoder
te = TransactionEncoder()
te_ary = te.fit(records).transform(records)
store_data = pd.DataFrame(te_ary, columns=te.columns_)
store_data
# +
## Applying apriori
# -
#association_rules = apriori(records, min_support=0.0045, min_confidence=0.2, min_lift=3, min_length=2)
association_rules = apriori(records, min_support=0.0045, min_lift=3)
association_results = list(association_rules)
# +
## Viewing Results
# -
print(len(association_results))
print(association_results[0])
for item in association_results:
# first index of the inner list
# Contains base item and add item
pair = item[0]
items = [x for x in pair]
print("Rule: " + items[0] + " -> " + items[1])
#second index of the inner list
print("Support: " + str(item[1]))
#third index of the list located at 0th
#of the third index of the inner list
print("Confidence: " + str(item[2][0][2]))
print("Lift: " + str(item[2][0][3]))
print("=====================================")
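# The printed fields above can also be flattened into a sortable DataFrame. The sketch below is illustrative: it uses stand-in namedtuples whose field names mirror apyori's `RelationRecord`/`OrderedStatistic` result types, and the demo record is made up; with apyori installed, `rules_to_df(association_results)` should work directly.

```python
from collections import namedtuple
import pandas as pd

# Stand-ins mirroring apyori's result types (field names match apyori's API;
# the demo record below is fabricated purely for illustration).
OrderedStatistic = namedtuple("OrderedStatistic",
                              ["items_base", "items_add", "confidence", "lift"])
RelationRecord = namedtuple("RelationRecord",
                            ["items", "support", "ordered_statistics"])

def rules_to_df(results):
    """Flatten association results into one row per rule, sorted by lift."""
    rows = []
    for record in results:
        for stat in record.ordered_statistics:
            rows.append({
                "antecedent": ", ".join(sorted(stat.items_base)),
                "consequent": ", ".join(sorted(stat.items_add)),
                "support": record.support,
                "confidence": stat.confidence,
                "lift": stat.lift,
            })
    return pd.DataFrame(rows).sort_values("lift", ascending=False)

demo = [RelationRecord(frozenset({"pasta", "shrimp"}), 0.005,
                       [OrderedStatistic(frozenset({"pasta"}),
                                         frozenset({"shrimp"}), 0.30, 4.2)])]
df = rules_to_df(demo)
print(df)
```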
| notebooks/unsupervised_ML/Apriori_Algorithm/Apriori_Algorithm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + id="coastal-manual"
import numpy as np
from numpy import loadtxt
import pylab as pl
from IPython import display
from RcTorch import *
from matplotlib import pyplot as plt
from scipy.integrate import odeint
# %matplotlib inline
# + id="4e-rXL3fLBDU"
#pip install rctorch==0.7
# + id="needed-panel"
# This ensures the notebook can use multiprocessing on JupyterHub or any other Linux-based system.
import multiprocessing as mp
try:
    mp.set_start_method("spawn")
except RuntimeError:
    pass
torch.set_default_tensor_type(torch.FloatTensor)
# %matplotlib inline
# + id="limiting-albert"
#helper functions
def pltTr(x,y,clr='cyan', mark='o'):
plt.plot(x.detach().numpy(), y.detach().numpy(),
marker=mark, color=clr, markersize=8, label='truth', alpha = 0.9)
def pltPred(x,y,clr='red', linS='-'):
plt.plot(x.detach().numpy(), y.detach().numpy(),
color=clr, marker='.', linewidth=2, label='RC')
from decimal import Decimal
import pandas as pd  # needed by convert2pd below
def convert2pd(tensor1, tensor2):
pd_ = pd.DataFrame(np.hstack((tensor1.detach().cpu().numpy(), tensor2.detach().cpu().numpy())))
pd_.columns = ["t", "y"]
return pd_
'%.2E' % Decimal('40800000000.00000000000000')
def param(t,N,y0):
f = 1 - torch.exp(-t)
f_dot = 1 - f
#f = t
#f_dot=1
return y0 + f*N
#define a reparameterization function
def reparam(t, y0 = None, N = None, dN_dt = None, t_only = False):
f = 1 - torch.exp(-t)
f_dot = 1 - f
if t_only:
return f, f_dot
y = y0 + N*f
if dN_dt:
ydot = dN_dt * f + f_dot * N
else:
ydot = None
return y, ydot
def reparam(t, order = 1):  # note: this redefinition shadows the reparam defined above
exp_t = torch.exp(-t)
derivatives_of_g = []
g = 1 - exp_t
#0th derivative
derivatives_of_g.append(g)
g_dot = 1 - g
return g, g_dot
# + id="enhanced-prescription"
def force(X, A = 0):
return torch.zeros_like(X)
lam =1
def hamiltonian(x, p, lam = lam):
return (1/2)*(x**2 + p**2) + lam*x**4/4
def custom_loss(X , y, ydot, out_weights, f = force,
reg = True, ode_coefs = None, mean = True,
enet_strength = None, enet_alpha = None, init_conds = None, lam = 1):
y, p = y[:,0].view(-1,1), y[:,1].view(-1,1)
ydot, pdot = ydot[:,0].view(-1,1), ydot[:,1].view(-1,1)
    # with reparametrization
L = (ydot - p)**2 + (pdot + y + lam * y**3 - force(X))**2
#if mean:
L = torch.mean(L)
if reg:
#assert False
weight_size_sq = torch.mean(torch.square(out_weights))
weight_size_L1 = torch.mean(torch.abs(out_weights))
L_reg = enet_strength*(enet_alpha * weight_size_sq + (1- enet_alpha) * weight_size_L1)
L = L + 0.1 * L_reg
y0, p0 = init_conds
ham = hamiltonian(y, p)
ham0 = hamiltonian(y0, p0)
L_H = (( ham - ham0).pow(2)).mean()
assert L_H >0
L = L + 0.1 * L_H
#print("L1", hi, "L_elastic", L_reg, "L_H", L_H)
return L
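# As a standalone sanity check of the physics encoded in `custom_loss` (an illustrative sketch; the initial condition (1.3, 1) and lam = 1 match values used later in this notebook), the Hamiltonian penalized by `L_H` should be conserved along the true trajectory of the unforced system:

```python
import numpy as np
from scipy.integrate import odeint

lam = 1.0

def hamiltonian(x, p, lam=lam):
    # Same form as in the loss above: H = (x**2 + p**2)/2 + lam * x**4 / 4
    return 0.5 * (x**2 + p**2) + lam * x**4 / 4

def f(u, t, lam=lam):
    # Unforced nonlinear oscillator: x' = p, p' = -x - lam * x**3
    x, p = u
    return [p, -x - lam * x**3]

t = np.linspace(0, 4 * np.pi, 2000)
traj = odeint(f, [1.3, 1.0], t, args=(lam,))
H = hamiltonian(traj[:, 0], traj[:, 1])
drift = float(np.max(np.abs(H - H[0])))
print(drift)  # tiny: energy is conserved up to integrator tolerance
```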
# + id="practical-preparation"
lineW = 3
lineBoxW=2
def plot_result(esn, xtrain, v0s = [1], y0s = [1.3], plot_gt = True, loglog = False,
ode_coefs = None, force_k = 0, fileName=None, backprop_f = None, ax = None,
solve = None , out_weights = None, epochs = None, reg = None, gamma_cyclic = None
):
RC = esn
if not ax:
fig, ax = plt.subplots(1,1, figsize = (8, 6))
t_pow = 0
for i, v0 in enumerate(v0s):
y0 = y0s[i]
train_args = {"burn_in" : int(BURN_IN),
"ODE_order" : 1,
"force" : force,
"reparam_f" : reparam,
"init_conditions" : [float(y0), float(v0)],
"ode_coefs" : ode_coefs,
"y" : None,
"X" : xtrain.view(-1,1),
"eq_system" : True,
#"out_weights" : out_weights
}
if not i:
y, ydot = esn.fit(**train_args, SOLVE = solve, out_weights = out_weights)
ode_coefs_copy = ode_coefs.copy()
states_dict = {"s" : RC.states.clone(),
"s1" : RC.states_dot.clone(),
"G" : RC.G,
"ex" : RC.extended_states.clone(),
"sb1": RC.sb1,
"sb" : RC.sb
}
if esn.ODE_order == 2:
states_dict["s2"] = RC.states_dot2.clone()
states_dict["sb2"] = RC.sb2.clone()
#t2 = time.perf_counter()
else:
y, ydot = RC.fit(preloaded_states_dict = states_dict, SOLVE = solve,
**train_args, out_weights = out_weights)
if not out_weights:
if backprop_f:
weight_dict = backprop_f(esn, epochs = epochs,reg = reg)
#y, ydot = esn.fit(**train_args, out_weights = weight_dict, SOLVE = False)
y,ydot = weight_dict["y"], weight_dict["ydot"]
esn = weight_dict["RC"]
ode_coefs_copy = ode_coefs.copy()
if ode_coefs[0] == "t**2":
sp = esn.X**2
t_pow = 2
ode_coefs_copy[0] = sp
def f(u, t ,lam=0,A=0,W=1):
x, px = u # unpack current values of u
derivs = [px, -x - lam*x**3 +A*np.sin(W*t)] # you write the derivative here
return derivs
# Scipy Solver
def NLosc_solution(t, x0, px0, lam=0, A=0,W=1):
u0 = [x0, px0]
# Call the ODE solver
solPend = odeint(f, u0, t.cpu(), args=(lam,A,W,))
xP = solPend[:,0]; pxP = solPend[:,1];
return xP, pxP
y_truth, v_truth = NLosc_solution(esn.X.squeeze().data,1.3,1,lam=1, A=0, W= 0)
p = y[:,1].cpu()# + v0
yy = y[:,0].cpu()# + y0
X = esn.X.cpu()
#y_truth = odeint(ODE_numSolver,y0,np.array(esn.X.cpu().view(-1,)))
if y0==1:
extraWidth = 2; color = 'k'
else: extraWidth=0; color = 'b'
if not i:
ax.plot(X, yy, color, linewidth=lineW+extraWidth, label = "x_hat", color = "blue" )
ax.plot(X, p, color, linewidth=lineW+extraWidth, label = "p_hat", color = "red" )
#ax.plot(X, torch.cos(X),'--', linewidth=lineW, alpha=0.85, label = "p_gt", color = "red")
#ax.plot(X, torch.sin(X),'--', linewidth=lineW, alpha=0.85, label = "x_gt", color = "blue")
ax.plot(X, v_truth,'--', linewidth=lineW, alpha=0.85, label = "p_gt_", color = "red")
ax.plot(X, y_truth,'--', linewidth=lineW, alpha=0.85, label = "x_gt_", color = "blue")
else:
ax.plot(X, yy, color, linewidth=lineW+extraWidth, color = "blue")
ax.plot(X, p,'--r', linewidth=lineW, alpha=0.85, color = "red")
ax.plot(X, v_truth,'--', linewidth=lineW, alpha=0.85, color = "red")
ax.plot(X, y_truth,'--', linewidth=lineW, alpha=0.85, color = "blue")
## Formating Figure
# Changing spine style
ax = plt.gca()
for ps in ['top','bottom','left','right']:
ax.spines[ps].set_linewidth(lineBoxW)
plt.xlabel(r'$t$')
plt.ylabel(r'$y(t)$')
plt.legend()
return esn
def optimize_last_layer(esn,
SAVE_AFTER_EPOCHS = 1,
epochs = 45000,
custom_loss = custom_loss,
EPOCHS_TO_TERMINATION = None,
f = force,
lr = 0.05,
reg = None,
plott = True,
plot_every_n_epochs = 2000):#gamma 0.1, spikethreshold 0.07 works
with torch.enable_grad():
#define new_x
new_X = esn.extended_states.detach()
spikethreshold = esn.spikethreshold
#force detach states_dot
esn.states_dot = esn.states_dot.detach().requires_grad_(False)
#define criterion
criterion = torch.nn.MSELoss()
#assert esn.LinOut.weight.requires_grad and esn.LinOut.bias.requires_grad
#assert not new_X.requires_grad
#define previous_loss (could be used to do a convergence stop)
previous_loss = 0
#define best score so that we can save the best weights
best_score = 0
#define the optimizer
optimizer = optim.Adam(esn.parameters(), lr = lr)
#optimizer = torch.optim.SGD(model.parameters(), lr=100)
if esn.gamma_cyclic:
cyclic_scheduler = torch.optim.lr_scheduler.CyclicLR(optimizer, 10**-6, 0.01,
gamma = esn.gamma_cyclic,#0.9999,
mode = "exp_range", cycle_momentum = False)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=esn.gamma)
lrs = []
#define the loss history
loss_history = []
if plott:
#use pl for live plotting
fig, ax = pl.subplots(1,3, figsize = (16,4))
t = esn.X#.view(*N.shape).detach()
g, g_dot = esn.G
y0 = esn.init_conds[0]
flipped = False
flipped2 = False
pow_ = -4
floss_last = 0
try:
assert esn.LinOut.weight.requires_grad and esn.LinOut.bias.requires_grad
except:
esn.LinOut.weight.requires_grad_(True)
esn.LinOut.bias.requires_grad_(True)
#bail
#begin optimization loop
for e in range(epochs):
optimizer.zero_grad()
N = esn.forward( esn.extended_states )
N_dot = esn.calc_Ndot(esn.states_dot)
y = g *N
ydot = g_dot * N + g * N_dot
y[:,0] = y[:,0] + esn.init_conds[0]
y[:,1] = y[:,1] + esn.init_conds[1]
assert N.shape == N_dot.shape, f'{N.shape} != {N_dot.shape}'
#assert esn.LinOut.weight.requires_grad and esn.LinOut.bias.requires_grad
#total_ws = esn.LinOut.weight.shape[0] + 1
#weight_size_sq = torch.mean(torch.square(esn.LinOut.weight))
loss = custom_loss(esn.X, y, ydot, esn.LinOut.weight, reg = reg, ode_coefs = esn.ode_coefs,
init_conds = esn.init_conds, enet_alpha= esn.enet_alpha, enet_strength = esn.enet_strength)
loss.backward()
optimizer.step()
if esn.gamma_cyclic and e > 100 and e <5000:
cyclic_scheduler.step()
lrs.append(optimizer.param_groups[0]["lr"])
floss = float(loss)
loss_history.append(floss)
if e == 10**3:
if floss > 10**(5):
EPOCHS_TO_TERMINATION = e + 50
if e == 10**4:
if floss > 10**(2.5):
EPOCHS_TO_TERMINATION = e + 50
if e > 0:
loss_delta = float(np.log(floss_last) - np.log(floss))
if loss_delta > esn.spikethreshold:# or loss_delta < -3:
lrs.append(optimizer.param_groups[0]["lr"])
scheduler.step()
if not e and not best_score:
best_bias, best_weight, best_fit = esn.LinOut.bias.detach(), esn.LinOut.weight.detach(), y.clone()
if e > SAVE_AFTER_EPOCHS:
if not best_score:
best_score = min(loss_history)
if floss < best_score:
best_bias, best_weight = esn.LinOut.bias.detach(), esn.LinOut.weight.detach()
best_score = float(loss)
best_fit = y.clone()
best_ydot = ydot.clone()
# else:
# if floss < best_score:
# best_bias, best_weight = esn.LinOut.bias.detach(), esn.LinOut.weight.detach()
# best_score = float(loss)
# best_fit = y.clone()
# best_ydot = ydot.clone()
if e >= EPOCHS_TO_TERMINATION:
return {"weights": best_weight, "bias" : best_bias, "y" : best_fit,
"loss" : {"loss_history" : loss_history}, "best_score" : torch.tensor(best_score),
"RC" : esn}
floss_last = floss
if plott and e:
if e % plot_every_n_epochs == 0:
for param_group in optimizer.param_groups:
print('lr', param_group['lr'])
ax[0].clear()
logloss_str = 'Log(L) ' + '%.2E' % Decimal((loss).item())
delta_loss = ' delta Log(L) ' + '%.2E' % Decimal((loss-previous_loss).item())
print(logloss_str + ", " + delta_loss)
ax[0].plot(y.detach().cpu(), label = "exact")
ax[0].set_title(f"Epoch {e}" + ", " + logloss_str)
ax[0].set_xlabel("t")
ax[1].set_title(delta_loss)
ax[1].plot(N_dot.detach().cpu())
#ax[0].plot(y_dot.detach(), label = "dy_dx")
ax[2].clear()
#weight_size = str(weight_size_sq.detach().item())
#ax[2].set_title("loss history \n and "+ weight_size)
ax[2].loglog(loss_history)
ax[2].set_xlabel("t")
[ax[i].legend() for i in range(3)]
previous_loss = loss.item()
                # clear the plot output and then re-plot
display.clear_output(wait=True)
display.display(pl.gcf())
return {"weights": best_weight, "bias" : best_bias, "y" : best_fit, "ydot" : best_ydot,
"loss" : {"loss_history" : loss_history}, "best_score" : torch.tensor(best_score),
"RC" : esn}
# + id="expensive-contractor"
#y0s = array([-1. , -0.25, 0.5 , 1.25])
torch.set_default_dtype(torch.float32)
# + colab={"base_uri": "https://localhost:8080/"} id="artificial-exclusive" outputId="2e10c59c-592a-4273-b0c2-e54754cbe860"
log_vars = ['connectivity', 'llambda', 'llambda2', 'noise', 'regularization', 'dt', 'enet_strength']
#trained to 20*pi
hps = {'dt': 0.001,
'n_nodes': 500,
'connectivity': 0.019946997092875757,
'spectral_radius': 2.4289157390594482,
'regularization': 49.04219249279563,
'leaking_rate': 0.0032216429244726896,
'bias': 0.3808490037918091,
'enet_alpha': 0.2040003091096878,
'enet_strength': 0.07488961475845243,
'spikethreshold': 0.4231834411621094,
'gamma': .09350859373807907,
'gamma_cyclic' : 0.9999}
for key, val in hps.items():
if key in log_vars:
print(key, np.log10(val))
else:
print(key, val)
# + colab={"base_uri": "https://localhost:8080/"} id="historic-liberal" outputId="fcde4903-229d-4aa5-c85f-0298a3e62362"
BURN_IN = 500
#declare the bounds dict. See above for which variables are optimized in linear vs logarithmic space.
bounds_dict = {"connectivity" : (-2, -1.4), #(-2, -0.5),
"spectral_radius" : (2.2, 2.6),#(0.01, 1),
"n_nodes" : 500,
"regularization" : 1.69, #(-4.4, 2.6),
"leaking_rate" : (0.00322 - 0.002, 0.00322 + 0.002),
"dt" : -3,#-3,
"bias": (-0.5, 0.5),
"enet_alpha": (0.18, 0.22), #(0,1.0),
"enet_strength": (-1.32,-0.92),
"spikethreshold" : (0.35,0.45),
"gamma" : (0.08,0.12),
"gamma_cyclic" : (float(np.log10(0.9997)), float(np.log10(0.99999))),#(-0.002176919254274547, 0)
}
#set up data
x0, xf = 0, 4*np.pi
nsteps = int(abs(xf - x0)/(10**bounds_dict["dt"]))
xtrain = torch.linspace(x0, xf, nsteps, requires_grad=False).view(-1,1)
int(xtrain.shape[0])
# + colab={"base_uri": "https://localhost:8080/", "height": 695} id="living-coordination" outputId="c5218826-e571-48e5-d189-02b880096345"
#declare the esn_cv optimizer: this class will run bayesian optimization to optimize the bounds dict.
esn_cv = EchoStateNetworkCV(bounds = bounds_dict,
interactive = True,
batch_size = 1,
cv_samples = 1,
initial_samples = 100, #200
subsequence_length = int(xtrain.shape[0] * 0.98),
validate_fraction = 0.5,
random_seed = 209,
success_tolerance = 10,
ODE_order = 1,
length_min = 2 **(-8),
esn_burn_in = BURN_IN,
log_score = True,
activation_function = torch.sin,
act_f_prime = torch.cos,
)
#optimize:
opt = True
if opt:
opt_hps = esn_cv.optimize(y = None,
x = xtrain.view(-1,1),
reparam_f = reparam,
ODE_criterion = custom_loss,
init_conditions = [[1.1, 1.3], 1],#[[0,1], [0,1]],
force = force,
ode_coefs = [1, 1],
rounds =1,
backprop_f = optimize_last_layer,
solve = True,
eq_system = True,
n_outputs = 2,
epochs = 5000,
reg_type = "ham",
tr_score_prop = 0.2)
# + id="instrumental-oxford"
if opt:
opt_hps
# + id="9FfE_WCWW4GA"
esn_cv.n_outputs
# + id="moderate-story"
#opt_hps
#new hps
hps = {'dt': 10**-2.2, #0.00630957344480193,
'n_nodes': 500,
'connectivity': 0.0032730501495831926,
'spectral_radius': 8, #1.4158440828323364,
'regularization': 1.5068021807798724,
'leaking_rate': 0.059490688145160675,
'bias': -0.048827290534973145}
new_hps = {'dt': 0.01,
'n_nodes': 500,
'connectivity': 0.0012518575764582111,
'spectral_radius': 1.1966601610183716,
'regularization': 16.545863672039996,
'leaking_rate': 0.06009502336382866,
'bias': 0.3623389005661011,
'enet_alpha': 0.8732492327690125,
'enet_strength': 0.011039982688091154}
new_new_hps = {'dt': 0.015848931924611134,
'n_nodes': 500,
'connectivity': 0.019411325024276192,
'spectral_radius': 1.0023764371871948,
'regularization': 0.01620633637515373,
'leaking_rate': 0.064253069460392,
'bias': 0.42768096923828125,
'enet_alpha': 0.6743161678314209,
'enet_strength': 0.8529825590176218}
#trained to 20*pi
hps = {'dt': 0.015848931924611134,
'n_nodes': 500,
'connectivity': 0.011412976296653454,
'spectral_radius': 1.5883185863494873,
'regularization': 0.00017807099501162684,
'leaking_rate': 0.13014408946037292,
'bias': 0.9991035461425781,
'enet_alpha': 0.3216418921947479,
'enet_strength': 4.858497457864491,
'spikethreshold': 0.3982628881931305,
'gamma': 0.09541413187980652}
#trained to 20*pi round 2
hps = {'dt': 0.015848931924611134,
'n_nodes': 500,
'connectivity': 0.07016350849568936,
'spectral_radius': 1.2355562448501587,
'regularization': 1.9761536690744939,
'leaking_rate': 0.03428209573030472,
'bias': 0.9089397192001343,
'enet_alpha': 0.2660914659500122,
'enet_strength': 3.898602924275761,
'spikethreshold': 0.4618821144104004,
'gamma': 0.0948069617152214}
afternoon_hps = {'dt': 0.01, #0.001, #0.01
'n_nodes': 500,
'connectivity': 0.020193996324265714,
'spectral_radius': 1.418228268623352,
'regularization': 13.826029502079747,
'leaking_rate': 0.06767291575670242,
'bias': -1.1795610189437866,
'enet_alpha': 0.2708361744880676,
'enet_strength': 0.015112827558814506,
'spikethreshold': 0.4739722013473511,
'gamma': 0.05922722443938255}
# + [markdown] id="described-brass"
# #esn_cv.Y_turbo.detach().cpu())
# Y_turbo = esn_cv.Y_turbo.data.cpu()
# plt.plot(Y_turbo)
# + id="three-performance"
plt.plot(xtrain)
# + id="naughty-knife"
# plot_result(esn, xtrain, v0s = np.array([1]),
# y0s = [1.3],plot_gt = True, ode_coefs = [1,1],
# force_k = 0,
# backprop_f = optimize_last_layer,solve = True, epochs = 80000, reg = False)
# plt.plot(esn.states[:,1:10].detach().cpu());
# + id="silver-maryland"
def fit_and_test(RC, xtrain, xtest, y0 = 1.3, v0 = 1, ode_coefs = [1,1],
solve = None, epochs = None, reg = None, plott = None):
train_args = {"burn_in" : int(BURN_IN),
"ODE_order" : 1,
"force" : force,
"reparam_f" : reparam,
"init_conditions" : [float(y0), float(v0)],
"ode_coefs" : ode_coefs,
"y" : None,
"X" : xtrain.view(-1,1),
"eq_system" : True,
#"out_weights" : out_weights
}
#fit
y, ydot = RC.fit(**train_args, SOLVE = solve)#, out_weights = out_weights)
states_dict = {"s" : RC.states.clone(),
"s1" : RC.states_dot.clone(),
"G" : RC.G,
"ex" : RC.extended_states.clone(),
"sb1": RC.sb1,
"sb" : RC.sb
}
weight_dict = optimize_last_layer(RC, epochs = epochs,reg = reg, plott = plott)
RC = weight_dict["RC"]
#y, ydot = esn.fit(**train_args, preloaded_states_dict = states_dict, out_weights = weight_dict, SOLVE = False)
#test
score, pred, _ = RC.test(y = torch.ones_like(xtest.to(esn.device)), X = xtest.to(esn.device), reparam = reparam, ODE_criterion = custom_loss)
    return esn.X.cpu().data, weight_dict["y"].cpu().data, ydot.cpu().data, pred.cpu().data, weight_dict  # optimize_last_layer returns the fit under the "y" key
def integrator_sol(esn):
def f(u, t ,lam=0,A=0,W=1):
x, px = u # unpack current values of u
derivs = [px, -x - lam*x**3 +A*np.sin(W*t)] # you write the derivative here
return derivs
# Scipy Solver
def NLosc_solution(t, x0, px0, lam=0, A=0,W=1):
u0 = [x0, px0]
# Call the ODE solver
solPend = odeint(f, u0, t.cpu(), args=(lam,A,W,))
xP = solPend[:,0]; pxP = solPend[:,1];
return xP, pxP
y_truth, v_truth = NLosc_solution(esn.X.squeeze().data,1.3,1,lam=1, A=0, W= 0)
return y_truth, v_truth
def plot_sol(X, yy, gt, xtest, pred, train_lim = None):
plt.figure(figsize = (12, 5))
print(yy[0,:].shape)
plt.plot(X, yy[:,0].cpu(), label = "pred", color = "red")
plt.plot(X, gt[0], '--', color = 'r')
plt.axvline(train_lim, label = "train_limit")
plt.plot(X, yy[:,1].cpu(), label = "pred", color = "b", linewidth = 5, alpha = 0.5)
plt.plot(X, gt[1], '--', color = 'b', linewidth = 5, alpha = 0.5)
plt.plot(xtest, pred, color = "green")
# + id="better-liabilities"
may12hps = {'dt': 0.001,
'n_nodes': 500,
'connectivity': 0.019946997092875757,
'spectral_radius': 2.4289157390594482,
'regularization': 49.04219249279563,
'leaking_rate': 0.0032216429244726896,
'bias': 0.3808490037918091,
'enet_alpha': 0.2040003091096878,
'enet_strength': 0.07488961475845243,
'spikethreshold': 0.4231834411621094,
'gamma': .09350859373807907,
'gamma_cyclic' : 0.9999}
may13hps ={'gamma_cyclic': 1,#0.9998,
'spikethreshold': 0.25,
'enet_alpha': 0.2,
'dt': 0.01,
'regularization': 13.803842646028846,
'n_nodes': 600,
'connectivity': 0.01344268178203971,
'spectral_radius': 2.459860324859619,
'leaking_rate': 0.0045151556842029095,
#'input_scaling': 0.7782557606697083,
'bias': -0.7429814338684082,
'enet_strength': 0.04331694643272608,
'gamma': 0.08337975293397903}
may15hps = {'dt': 0.001,
'regularization': 48.97788193684461,
'n_nodes': 500,
'connectivity': 0.017714821964432213,
'spectral_radius': 2.3660330772399902,
'leaking_rate': 0.0024312976747751236,
'bias': 0.37677669525146484,
'enet_alpha': 0.2082211971282959,
'enet_strength': 0.118459548397668,
'spikethreshold': 0.43705281615257263,
'gamma': 0.09469877928495407,
'gamma_cyclic': 0.999860422666841}
hp_set = may15hps
# + id="southeast-chorus"
esn = EchoStateNetwork(**hp_set,
random_state = 209,
feedback = False,
id_ = 10,
activation_f = torch.sin,
act_f_prime = torch.cos,
dtype = torch.float32, n_outputs = 2)
factor = 1#0.6*0.7
base = 10*np.pi
factor = 1.2
x0, xf, xf2 = 0, base, base*factor
nsteps = int(abs(xf - x0)/(hp_set["dt"]))
nsteps2 = int(abs(xf2 - xf)/(hp_set["dt"]))
xtrain = torch.linspace(x0, xf, nsteps, requires_grad=False).view(-1,1)
xtest = torch.cat((xtrain,xtrain+xtrain[-1]), axis = 0)[len(xtrain):]
dt1, dt2 = float(xtest[1] - xtest[0]), float(xtrain[1]- xtrain[0])
#assert dt1 == dt2, f'{dt1} != {dt2}'
xx, yy, yydot, yypred, weight_dict = fit_and_test(esn, xtrain, xtest, epochs = 50000, reg = False, plott = True, solve = True)
# + id="stopped-unknown"
plt.plot(esn.states[:, 1:10]);
# + id="completed-devices"
xtrain[1]- xtrain[0], xtest[1]- xtest[0]
# + id="educated-candy"
gt = integrator_sol(esn)
plot_sol(xx, yy,gt, xtest, yypred, xf)
# + id="emerging-container"
# + id="advance-roman"
import seaborn as sns  # seaborn is not imported above
sns.heatmap(esn.laststate.cpu().data.numpy().reshape(-1,1))
# + id="suburban-newark"
xtest.shape, weight_dict["weights"].shape
# + id="peaceful-adult"
xtest2 = torch.linspace(x0, xf2, nsteps2, requires_grad=False).view(-1,1)
# + id="dental-spank"
esn = EchoStateNetwork(**may12hps,
random_state = 209,
feedback = False,
id_ = 10,
backprop = False,
activation_f = torch.sin,
act_f_prime = torch.cos,
dtype = torch.float32, n_outputs = 2)
train_args = {"burn_in" : int(BURN_IN),
"ODE_order" : 1,
"force" : force,
"reparam_f" : reparam,
"init_conditions" : [float(1.3), float(1)],
"ode_coefs" : [1,1],
"y" : None,
"X" : xtrain.view(-1,1),#
"eq_system" : True,
"out_weights" : weight_dict,
"SOLVE" : False
}
y, ydot = esn.fit(**train_args)
# + id="welsh-tamil"
yhat = esn.predict(None, x = xtest.cuda(), continuation = True, #torch.ones_like(xtest2.cuda()), ,
continue_force = True)
# + id="naked-chess"
xtrain.shape
# + id="decent-marshall"
plt.plot(esn.X.cpu(), y.cpu().detach())
plt.plot(xtest,yhat[1].cpu().detach())
# + id="upset-begin"
esn.X_val
# + id="killing-married"
esn.laststate
# + id="hearing-score"
import pickle
#t2, ys, gts, ws, bs, Ls = result
plot_data = {"time": xx,
"ypreds" : yy,
"extrapolation" : yypred,
"gts" : gt}
with open('nonlinear_oscillator_plot.pickle', 'wb') as handle:
pickle.dump(plot_data, handle, protocol=pickle.HIGHEST_PROTOCOL)
with open('nonlinear_oscillator_plot.pickle', 'rb') as handle:
b = pickle.load(handle)
repr_data = {"time": xx,
"hyper_params" : may12hps,
"out_weights" : {"weights": [weight_dict["weights"]],
"bias": [weight_dict["bias"]]},
"burn_in" : BURN_IN,
"epochs" : 30000,
"learning_rate": 0.04,
"loss_history" : weight_dict["loss"],
"info" : "run on 30k epochs with both lr schedulers.",
"v0" : 1,
"y0" : 1.3}
with open('nonlinear_oscillator_reproduce.pickle', 'wb') as handle:
pickle.dump(repr_data, handle, protocol=pickle.HIGHEST_PROTOCOL)
with open('nonlinear_oscillator_reproduce.pickle', 'rb') as handle:
b = pickle.load(handle)
# + id="cleared-jungle"
esn = EchoStateNetwork(**afternoon_hps,
random_state = 209,
id_ = 10,
activation_f = torch.sin,
act_f_prime = torch.cos,
dtype = torch.float32, n_outputs = 2)
extrapolate(esn, 0 , (np.pi * 20), (np.pi * 20)*1.2, epochs = 100000, solve = True, reg = False)
# + id="unlike-corruption"
orig_BO_train_len_pi_prop = 0.6*0.7
extrapolate(esn, 0, (np.pi * 4), (np.pi * 4)*1.2, epochs = 20000)
# + id="waiting-moore"
assert False
best_weights = {"weights" : esn.LinOut.weight.data,
"bias": esn.LinOut.bias.data}
# + id="quarterly-tattoo"
xf0,xf1, dt = 0, (np.pi * 20), esn.dt
# + id="wrapped-cleaners"
train_args = {"burn_in" : int(BURN_IN),
"ODE_order" : 1,
"force" : force,
"reparam_f" : reparam,
"init_conditions" : [float(1.3), float(1)],
"ode_coefs" : [1,1],
"y" : None,
"X" : xtrain.view(-1,1),
"eq_system" : True,
"out_weights" : best_weights
}
y, ydot = esn.fit(**train_args, SOLVE = False)
#nsteps_test = int((xf2 - x0)/dt_)
#nsteps_test2 = int((xf2 - xf1)/dt_)
#print(f'dt = {dt_}')
#xtest = torch.linspace(x0, xf2, steps = nsteps_test, requires_grad=False).view(-1,1)
#xtest2 = torch.linspace(xf1, xf2, steps = nsteps_test2, requires_grad=False).view(-1,1)
# + id="clear-memorabilia"
# + id="inappropriate-deadline"
def f(u, t ,lam=0,A=0,W=1):
x, px = u # unpack current values of u
derivs = [px, -x - lam*x**3 +A*np.sin(W*t)] # you write the derivative here
return derivs
# Scipy Solver
def NLosc_solution(t, x0, px0, lam=0, A=0,W=1):
u0 = [x0, px0]
# Call the ODE solver
solPend = odeint(f, u0, t.cpu(), args=(lam,A,W,))
xP = solPend[:,0]; pxP = solPend[:,1];
return xP, pxP
y_truth, v_truth = NLosc_solution(esn.X.squeeze().data,1.3,1,lam=1, A=0, W= 0.5)
#p = y[:,1].cpu()# + v0
#yy = y[:,0].cpu()# + y0
# + id="attempted-asthma"
plt.plot((y[:,1].cpu()))
plt.plot(v_truth)
# + id="working-building"
x, p = esn.yfit[:,0].view(-1,1), esn.yfit[:,1].view(-1,1)
xdot, pdot = esn.ydot[:,0].view(-1,1), esn.ydot[:,1].view(-1,1)
plt.plot(custom_loss(esn.X, esn.yfit, esn.ydot, None, mean = False))
plt.plot(x, label = "x")
plt.plot(p, label = "p")
plt.legend();
| final_notebooks/.ipynb_checkpoints/final_systems_BO-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: py36_pyro
# language: python
# name: py36_pyro
# ---
# +
# %load_ext autoreload
# %autoreload 2
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np
import os
import pickle
import time
from tqdm.notebook import tqdm
import torch
torch.set_default_tensor_type(torch.DoubleTensor)
from spatial_scene_grammars.nodes import *
from spatial_scene_grammars.rules import *
from spatial_scene_grammars.scene_grammar import *
from spatial_scene_grammars.visualization import *
from spatial_scene_grammars_examples.packages.all_observable_grammar import *
from spatial_scene_grammars.parsing import *
from spatial_scene_grammars.sampling import *
from spatial_scene_grammars.parameter_estimation import *
from spatial_scene_grammars.dataset import *
import meshcat
import meshcat.geometry as meshcat_geom
# -
if 'vis' not in globals():
vis = meshcat.Visualizer()
vis.delete()
base_url = "http://127.0.0.1"
meshcat_url = base_url + ":" + vis.url().split(":")[-1]
print("Meshcat url: ", meshcat_url)
from IPython.display import HTML
HTML("""
<div style="height: 400px; width: 100%; overflow-x: auto; overflow-y: hidden; resize: both">
<iframe src="{url}" style="width: 100%; height: 100%; border: none"></iframe>
</div>
""".format(url=meshcat_url))
# +
# Draw a random sample from the grammar with its initial params and visualize it.
#torch.random.manual_seed(5)
torch.random.manual_seed(46)
grammar = SpatialSceneGrammar(
root_node_type = BoxGroup,
root_node_tf = drake_tf_to_torch_tf(RigidTransform(p=[0.0, 0., 0.]))
)
pre_projection_tree = grammar.sample_tree(detach=True)
nlp_projected_tree = project_tree_to_feasibility(deepcopy(pre_projection_tree), jitter_q=0.01)
tree = project_tree_to_feasibility(deepcopy(pre_projection_tree), do_forward_sim=True, zmq_url=vis.window.zmq_url)
vis["sample"].delete()
draw_scene_tree_contents_meshcat(pre_projection_tree, zmq_url=vis.window.zmq_url, prefix="sample/pre_projected/contents")
draw_scene_tree_structure_meshcat(pre_projection_tree, zmq_url=vis.window.zmq_url, prefix="sample/pre_projected/structure")
draw_scene_tree_contents_meshcat(nlp_projected_tree, zmq_url=vis.window.zmq_url, prefix="sample/nlp_projected/contents")
draw_scene_tree_structure_meshcat(nlp_projected_tree, zmq_url=vis.window.zmq_url, prefix="sample/nlp_projected/structure")
draw_scene_tree_contents_meshcat(tree, zmq_url=vis.window.zmq_url, prefix="sample/sim_projected/contents")
draw_scene_tree_structure_meshcat(tree, zmq_url=vis.window.zmq_url, prefix="sample/sim_projected/structure")
# +
#samples = do_fixed_structure_hmc_with_constraint_penalties(
# grammar, tree, num_samples=100, subsample_step=2,
# with_nonpenetration=True, zmq_url=vis.window.zmq_url
#)
# +
#for k, tree in enumerate(samples):
# draw_scene_tree_contents_meshcat(tree, zmq_url=vis.window.zmq_url, prefix="samples/contents/%d/structure" % k)
# draw_scene_tree_structure_meshcat(tree, zmq_url=vis.window.zmq_url, prefix="samples/structure/%d/structure" % k)
# -
# # Simulated Langevin dynamics
# +
simulation_tree = deepcopy(tree)
builder, mbp, scene_graph, node_to_free_body_ids_map, body_id_to_node_map = compile_scene_tree_to_mbp_and_sg(simulation_tree)
mbp.Finalize()
visualizer = ConnectMeshcatVisualizer(builder, scene_graph,
zmq_url=vis.window.zmq_url, prefix="simulation")
langevin_source = builder.AddSystem(
StochasticLangevinForceSource(mbp, simulation_tree, node_to_free_body_ids_map, body_id_to_node_map)
)
builder.Connect(mbp.get_state_output_port(),
langevin_source.get_input_port(0))
builder.Connect(langevin_source.get_output_port(0),
mbp.get_applied_spatial_force_input_port())
diagram = builder.Build()
diag_context = diagram.CreateDefaultContext()
sim = Simulator(diagram)
sim.set_target_realtime_rate(1.)
visualizer.start_recording()
sim.AdvanceTo(5.)
visualizer.stop_recording()
visualizer.publish_recording(play=True, repetitions=1)
| spatial_scene_grammars_examples/packages/demo_sampling.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # The Harmonic Oscillator Strikes Back
# *Note:* Much of this is adapted/copied from https://flothesof.github.io/harmonic-oscillator-three-methods-solution.html
# This week we continue our adventures with the harmonic oscillator.
#
# 
# The harmonic oscillator is a system that, when displaced from its equilibrium position, experiences a restoring force F proportional to the displacement x:
#
# $$F=-kx$$
#
# The potential energy of this system is
#
# $$V = {1 \over 2}k{x^2}$$
# These are sometimes rewritten as
#
# $$ F=- \omega_0^2 m x, \text{ } V(x) = {1 \over 2} m \omega_0^2 {x^2}$$
#
# Where $\omega_0 = \sqrt {{k \over m}} $
# If the equilibrium value of the harmonic oscillator is not zero, then
#
# $$ F=- \omega_0^2 m (x-x_{eq}), \text{ } V(x) = {1 \over 2} m \omega_0^2 (x-x_{eq})^2$$
# ## 1. Harmonic oscillator from last time (with some better defined conditions)
# Applying the harmonic oscillator force to Newton's second law leads to the following second order differential equation
#
# $$ F = m a $$
#
# $$ F= -m \omega_0^2 (x-x_{eq}) $$
#
# $$ a = - \omega_0^2 (x-x_{eq}) $$
#
# $$ x(t)'' = - \omega_0^2 (x-x_{eq}) $$
# The final expression can be rearranged into a second-order homogeneous differential equation and solved using the methods we used above
# This is already solved to remind you how we found these values
import sympy as sym
sym.init_printing()
# **Note** that this time we define some of the properties of the symbols. Namely, that the frequency is always positive and real and that the positions are always real
omega0,t=sym.symbols("omega_0,t",positive=True,nonnegative=True,real=True)
xeq=sym.symbols("x_{eq}",real=True)
x=sym.Function("x",real=True)
x(t),omega0
dfeq=sym.Derivative(x(t),t,2)+omega0**2*(x(t)-xeq)
dfeq
sol = sym.dsolve(dfeq)
sol
sol,sol.args[0],sol.args[1]
# **Note** this time we define the initial positions and velocities as real
x0,v0=sym.symbols("x_0,v_0",real=True)
ics=[sym.Eq(sol.args[1].subs(t, 0), x0),
sym.Eq(sol.args[1].diff(t).subs(t, 0), v0)]
ics
solved_ics=sym.solve(ics)
solved_ics
# ### 1.1 Equation of motion for $x(t)$
full_sol = sol.subs(solved_ics[0])
full_sol
# ### 1.2 Equation of motion for $p(t)$
m=sym.symbols("m",positive=True,nonnegative=True,real=True)
p=sym.Function("p")
sym.Eq(p(t),m*sol.args[1].subs(solved_ics[0]).diff(t))
# ## 2. Time average values for a harmonic oscillator
# If we want to understand the average value of a time dependent observable, we need to solve the following integral
#
#
# $${\left\langle {A(t)} \right\rangle}_t = \lim_{\tau \to \infty} \frac{1}{\tau }\int\limits_0^\tau {A(t)\,dt} $$
# ### 2.1 Average position ${\left\langle {x} \right\rangle}_t$ for a harmonic oscillator
tau=sym.symbols("tau",nonnegative=True,real=True)
xfunc=full_sol.args[1]
xavet=(xfunc.integrate((t,0,tau))/tau).limit(tau,sym.oo)
xavet
# The computer does not always make the best choices the first time. If you treat each sum individually this is not a hard limit to do by hand. The computer is not smart. We can help it by inserting an `expand()` function in the statement
xavet=(xfunc.integrate((t,0,tau))/tau).expand().limit(tau,sym.oo)
xavet
# ### 2.2 Exercise: Calculate the average momentum ${\left\langle {p} \right\rangle}_t$ for a harmonic oscillator
import sympy as sym
sym.init_printing()
m=sym.symbols("m",positive=True,nonnegative=True,real=True)
p=sym.Function("p")
sym.Eq(p(t),m*sol.args[1].subs(solved_ics[0]).diff(t))
tau=sym.symbols("tau",nonnegative=True,real=True)
pfunc=sym.Eq(p(t),m*sol.args[1].subs(solved_ics[0]).diff(t)).args[1]
pavet=(pfunc.integrate((t,0,tau))/tau).limit(tau,sym.oo)
pavet
# ### 2.3 Exercise: Calculate the average kinetic energy of a harmonic oscillator
kefunc=(sym.Eq(p(t),m*sol.args[1].subs(solved_ics[0]).diff(t)).args[1])**2/(2*m)
keavt=(kefunc.integrate((t,0,tau))/tau).expand().limit(tau,sym.oo)
keavt
# +
# sym.AccumBounds?
# -
sym.Eq(p(t),m*sol.args[1].subs(solved_ics[0]).diff(t)).args[1]**2/(2*m)
((m*sol.args[1].subs(solved_ics[0]).diff(t)**2/(2*m)))
# ## 3. Ensemble (Thermodynamic) Average values for a harmonic oscillator
# If we want to understand the thermodynamic ensemble average value of an observable, we need to solve the following integral.
#
#
# $${\left\langle {A(t)} \right\rangle}_{T} = \frac{\int{A e^{-\beta H}dqdp}}{\int{e^{-\beta H}dqdp} } $$
#
# You can think of this as a Temperature average instead of a time average.
#
# Here $\beta=\frac{1}{k_B T}$ and the classical Hamiltonian, $H$ is
#
# $$ H = \frac{p^2}{2 m} + V(q)$$
#
# **Note** that the factors of $1/h$ found in the classical partition function cancel out when calculating average values
# ### 3.1 Average position ${\left\langle {x} \right\rangle}_T$ for a harmonic oscillator
# For a harmonic oscillator with equilibrium value $x_{eq}$, the Hamiltonian is
# $$ H = \frac{p^2}{2 m} + \frac{1}{2} m \omega_0^2 (x-x_{eq})^2 $$
# First we will calculate the partition function $\int{e^{-\beta H}dqdp}$
k,T=sym.symbols("k,T",positive=True,nonnegative=True,real=True)
xT,pT=sym.symbols("x_T,p_T",real=True)
ham=sym.Rational(1,2)*(pT)**2/m + sym.Rational(1,2)*m*omega0**2*(xT-xeq)**2
beta=1/(k*T)
bolz=sym.exp(-beta*ham)
z=sym.integrate(bolz,(xT,-sym.oo,sym.oo),(pT,-sym.oo,sym.oo))
z
# Then we can calculate the numerator $\int{A e^{-\beta H}dqdp}$
#
numx=sym.integrate(xT*bolz,(xT,-sym.oo,sym.oo),(pT,-sym.oo,sym.oo))
numx
# And now the average value
xaveT=numx/z
xaveT
# ### 3.2 Exercise: Calculate the average momentum ${\left\langle {p} \right\rangle}_T$ for a harmonic oscillator
#
# After calculating the value, explain why you think you got this number
nump=sym.integrate(pT*bolz,(xT,-sym.oo,sym.oo),(pT,-sym.oo,sym.oo))
nump
paveT=nump/z
paveT
# ### 3.3 Exercise: Calculate the average kinetic energy
#
# The answer you get here is a well-known result related to the energy equipartition theorem
keaveT=sym.integrate(pT**2/(2*m)*bolz,(xT,-sym.oo,sym.oo),(pT,-sym.oo,sym.oo))/z
keaveT
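# The value just computed can be checked against the equipartition theorem, which assigns an average energy of $\frac{1}{2}kT$ to each quadratic degree of freedom; for the kinetic term (with $k$ standing in for $k_B$, as in the code above) this reads

```latex
\left\langle \frac{p^{2}}{2m} \right\rangle_{T} = \frac{1}{2} k T
```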
# # Back to the lecture
# ## 4. Exercise Verlet integrators
# In this exercise we will write a routine to solve the equations of motion for a harmonic oscillator.
#
# Plot the positions and momenta (separate plots) of the harmonic oscillator as functions of time.
#
# Calculate trajectories using the following methods:
# 1. Exact solution
# 2. Simple taylor series expansion
# 3. Predictor-corrector method
# 4. Verlet algorithm
# 5. Leapfrog algorithm
# 6. Velocity Verlet algorithm
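# As a reference point for the methods below, here is a minimal sketch of the last one — a velocity Verlet integrator for the unit oscillator ($m = \omega_0 = 1$, so $a = -x$). The function name, step size, and step count are illustrative choices only:

```python
def velocity_verlet(x0, v0, dt, n_steps):
    """Integrate x'' = -x (unit mass and unit frequency) with velocity Verlet."""
    xs = []
    x, v = x0, v0
    a = -x  # acceleration at the starting position
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * a * dt * dt  # position update
        a_new = -x                          # acceleration at the new position
        v = v + 0.5 * (a + a_new) * dt      # velocity from averaged accelerations
        a = a_new
        xs.append(x)
    return xs, v

positions, v_final = velocity_verlet(x0=1.0, v0=0.0, dt=0.1, n_steps=10)
```

Because the velocity update averages the accelerations before and after the position step, this scheme conserves energy far better than the naive Taylor update.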
# %matplotlib inline
#1 Exact Solution
##Position
omega0,t=sym.symbols("omega_0,t",positive=True,nonnegative=True,real=True)
xeq=sym.symbols("x_{eq}",real=True)
x=sym.Function("x",real=True)
full_sol1 = sym.simplify(full_sol.subs({x0:10, xeq:0 , v0:10, omega0:1}))
sym.plot(full_sol1.rhs,(t,-10,10))
#1 Exact Solution
##Momentum
m=sym.symbols("m",positive=True,nonnegative=True,real=True)
p=sym.Function("p")
sym.Eq(p(t),m*sol.args[1].subs(solved_ics[0]).diff(t))
momentum=sym.Eq(p(t),m*sol.args[1].subs(solved_ics[0]).diff(t))
momentum1=sym.simplify(momentum.subs({x0:10, xeq:0, v0:10, omega0:1, m:1}))
sym.plot(momentum1.rhs,(t,-10,10))
#2 Simple Taylor Series Expansion
import sympy as sym
import numpy as np
import matplotlib.pyplot as plt
x_t0=0
t=0.25
v_t0=1
xlist=[]
for i in range(0,500):
    a=-(x_t0)                        # harmonic acceleration with k=m=1
    x_t=x_t0+v_t0*t+(1/2)*t**2*a     # second-order Taylor step for position
    v_t0=v_t0+a*t                    # first-order Taylor step for velocity
    x_t0=x_t
    xlist.append(x_t)
plt.plot(xlist)
plt.xlabel('x')
plt.ylabel('y')
plt.grid(True)
plt.title('Taylor series approximation')
# +
#3 Was told to skip by fellow classmates!
# -
#4 Verlet algorithm
import sympy as sym
import numpy as np
import matplotlib.pyplot as plt
x_t0=0
x_t1=1
t=0.4
a=1
xlist=[]
for i in range(0,100):
x_t2=2*x_t1-x_t0+t**2*(-(x_t1))
x_t0=x_t1
x_t1=x_t2
xlist.append(x_t2)
plt.plot(xlist)
plt.xlabel('x')
plt.ylabel('y')
plt.grid(True)
plt.title('Verlet Algorithm-Position')
#4 Verlet algorithm
import sympy as sym
import numpy as np
import matplotlib.pyplot as plt
x_t0=0
x_t2=1
t=2
xlist=[]
for i in range(0,100):
    v_t=(x_t2-x_t0)/(2*t)  # central-difference velocity
x_t0=x_t2
x_t2=v_t
xlist.append(v_t)
plt.plot(xlist)
plt.xlabel('x')
plt.ylabel('y')
plt.grid(True)
plt.title('Verlet Algorithm-Velocity')
#5 Leapfrog algorithm
##Position
import sympy as sym
import numpy as np
import matplotlib.pyplot as plt
x_t0=1
v_minushalft=0
t=0.2
xlist=[]
for i in range(0,100):
v_halft=v_minushalft+(t)*(-(x_t0))
v_minushalft=v_halft
x_t1=x_t0+(t)*(v_halft)
x_t0=x_t1
xlist.append(x_t1)
plt.plot(xlist)
plt.xlabel('x')
plt.ylabel('y')
plt.grid(True)
plt.title('Leapfrog Algorithm-Position')
#5 Leapfrog algorithm
##Position
import sympy as sym
import numpy as np
import matplotlib.pyplot as plt
x_t0=1
t=0.3
v_minushalft=1
v_halft=2
xlist=[]
for i in range(0,100):
v_t=(1/2)*((v_halft)+(v_minushalft))
v_minushalft=v_t
v_halft=v_minushalft+(t)*(-(x_t0))
v_minushalft=v_halft
x_t1=x_t0+(t)*(v_halft)
x_t0=x_t1
xlist.append(v_t)
plt.plot(xlist)
plt.xlabel('x')
plt.ylabel('y')
plt.grid(True)
plt.title('Leapfrog Algorithm-Velocity')
#6 Velocity Verlet Algorithm
import sympy as sym
import numpy as np
import matplotlib.pyplot as plt
dt=0.1
x_0=1
v_0=1
xlist=[]
for i in range(0,100):
    a_0 = -x_0                           # acceleration at the current position
    x_1 = x_0 + v_0 * dt + 0.5 * a_0 * dt * dt
    a_1 = -x_1                           # acceleration at the updated position
    v_1 = v_0 + 0.5 * (a_0 + a_1) * dt   # average the two accelerations
    x_0, v_0 = x_1, v_1
    xlist.append(x_1)
#print(xlist)
plt.plot(xlist)
plt.xlabel('x')
plt.ylabel('y')
plt.grid(True)
plt.title('Velocity Verlet-Position Approx')
#6 Velocity Verlet Algorithm
import sympy as sym
import numpy as np
import matplotlib.pyplot as plt
x_t0=1
t=2
v_t0=1
xlist=[]
for i in range(0,50):
v_halft=v_t0+(1/2)*t*(-x_t0)
x_t0=v_halft
xlist.append(v_halft)
plt.plot(xlist)
plt.xlabel('x')
plt.ylabel('y')
plt.grid(True)
plt.title('Velocity Verlet-Velocity Approximation')
| harmonic_student.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 2A.eco - Putting sessions 1 and 2 into practice - Using pandas and visualization
#
# Three exercises on data manipulation: text handling, city data, and Vélib data.
from jyquickhelper import add_notebook_menu
add_notebook_menu()
# ## Data
#
# The data can be downloaded from this address: [td2a_eco_exercices_de_manipulation_de_donnees.zip](https://github.com/sdpython/ensae_teaching_cs/raw/master/_doc/notebooks/td2a_eco/data/td2a_eco_exercices_de_manipulation_de_donnees.zip). The following code downloads it automatically.
from pyensae.datasource import download_data
files = download_data("td2a_eco_exercices_de_manipulation_de_donnees.zip",
url="https://github.com/sdpython/ensae_teaching_cs/raw/master/_doc/notebooks/td2a_eco/data/")
files
# ## Exercise 1 - Text manipulation
#
# Duration: 20 minutes
#
# 1. Import the dataset on the 2014 World Cup players >> `Players_WC2014.xlsx`
# 2. Determine the number of players in each team and build a dictionary {team: number of players}
# 3. Determine the 3 players who covered the most distance. Is there a selection bias?
# 4. Among the players in the top decile of the fastest players, who spent most of their time running without the ball?
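# A minimal sketch of step 2 (players per team), using a hypothetical stand-in for `Players_WC2014.xlsx` — the column names here are assumptions, not the actual spreadsheet schema:

```python
import pandas as pd

# Hypothetical stand-in for the players spreadsheet.
players = pd.DataFrame({
    "Player": ["A", "B", "C", "D", "E"],
    "Team": ["France", "France", "Brazil", "Brazil", "Brazil"],
})

# Number of players per team, as a {team: count} dictionary.
players_per_team = players["Team"].value_counts().to_dict()
print(players_per_team)  # {'Brazil': 3, 'France': 2}
```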
# ## Exercise 2 - Cities
#
# Duration: 40 minutes
#
# 1. Import the cities dataset `villes.xls`
# 2. The variable names and the observations contain unnecessary spaces (example: 'MAJ '): start by cleaning all the strings (both in the column names and in the observations)
# 3. Find the number of distinct INSEE codes (beware of duplicates)
# 4. How can you quickly compute the mean, the count, and the maximum for every numeric variable? (one line of code)
# 5. Count the number of cities in each region and turn it into a dictionary where the key is the region and the value is the number of cities
# 6. Plot the municipalities using
#
# * matplotlib
# * a mapping library (e.g. folium)
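# A minimal sketch of the whitespace cleanup in step 2, on a toy frame (the column names and values here are illustrative assumptions):

```python
import pandas as pd

# Toy frame with the kind of stray whitespace described above.
villes = pd.DataFrame({" MAJ ": [" PARIS ", " LYON "], "insee ": ["75056", "69123"]})

# Strip whitespace from the column names...
villes.columns = villes.columns.str.strip()
# ...and from every string observation.
villes = villes.apply(lambda col: col.str.strip() if col.dtype == "object" else col)
print(list(villes.columns))  # ['MAJ', 'insee']
```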
# ## Exercise 3 - Vélib availability
#
# Duration: 30 minutes
#
# 1. Import the data as a DataFrame
#
# - `velib_t1.txt` - station data at time $t$
# - `velib_t2.txt` - station data at time $t + 1$
#
# 2. Plot the locations of the Vélib stations in Paris
#
# - represent the stations with a color gradient based on the number of docks
#
# 3. For a given station, compare how availability changed (by merging the two datasets $t$ and $t+1$)
#
# - plot the stations that changed significantly (more than 5 changes) with a color gradient
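# A minimal sketch of the merge in step 3, using tiny hypothetical stand-ins for `velib_t1.txt` / `velib_t2.txt` (the column names are assumptions):

```python
import pandas as pd

# Hypothetical snapshots of station availability at t and t+1.
t1 = pd.DataFrame({"station": [1, 2], "available": [10, 3]})
t2 = pd.DataFrame({"station": [1, 2], "available": [4, 3]})

# Merge on station id and compute the change between the two snapshots.
merged = t1.merge(t2, on="station", suffixes=("_t1", "_t2"))
merged["delta"] = (merged["available_t2"] - merged["available_t1"]).abs()
significant = merged[merged["delta"] > 5]
print(significant["station"].tolist())  # [1]
```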
| _doc/notebooks/td2a_eco/td2a_eco_exercices_de_manipulation_de_donnees.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import time
import datetime
import pandas as pd
import numpy as np
from scipy import stats
from pytrends.request import TrendReq
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
# +
countries_list = {
'usa': {'code': 'US', 'eu': 'US'},
'uk': {'code': 'GB', 'eu': 'UK'},
'canada': {'code': 'CA', 'eu': 'CA'},
#'australia': {'code': 'AU', 'eu': 'AU'},
#'nigeria': {'code': 'NG', 'eu': 'NG'},
#'south africa': {'code': 'ZA', 'eu': 'ZA'},
'ireland': {'code': 'IE', 'eu': 'IE'},
#'new zealand': {'code': 'NZ', 'eu': 'NZ'},
#'jamaica': {'code': 'JM', 'eu': 'JM'}
}
phrase_list = {
'a': 'loss of smell',
'b': 'lost sense of smell',
'c': 'no sense of smell',
'd': 'can\'t smell'
}
# +
pytrend = TrendReq(timeout=(10,60))
def get_trend_data_stacked(countries, phrases):
result = pd.DataFrame()
counter = 0
for c in countries:
#Payload variables
geo_id = countries[c]['code']
eu_id = countries[c]['eu']
for p in phrases:
#Set phrase
phrase = phrases[p]
pytrend.build_payload([phrase], timeframe='2019-12-28 2020-03-27', geo=geo_id)
trends_result = pytrend.interest_over_time()
trends_result = trends_result.rename(columns={phrase: 'trend_index'})
#Cleanup
if 'isPartial' in trends_result.columns:
cleanup = trends_result.drop(columns=['isPartial'])
cleanup['geo_id'] = geo_id
cleanup['eu_id'] = eu_id
cleanup['phrase'] = phrase
cleanup['phrase_id'] = p
#Set first df if empty
if result.empty:
result = cleanup
else:
result = result.append(cleanup)
#WHY?! Only adding this sleep makes google trends work
time.sleep(7)
counter += 1
print(counter, ' / ', (len(countries) * len(phrases)))
result['year_week'] = result.index.year.astype(str) + "_" + result.index.week.astype(str)
return result
# +
#Uncomment the line below to refresh the phrase trend data...
#...however there is no need to do this, as the data is limited from 23rd March onwards
#phrase_results = get_trend_data_stacked(countries_list, phrase_list)
# +
#phrase_results.to_csv('phrase_results.csv')
# -
raw_corona = pd.read_excel('https://www.ecdc.europa.eu/sites/default/files/documents/COVID-19-geographic-disbtribution-worldwide.xlsx',
index_col=None, usecols=['dateRep', 'cases', 'deaths', 'countriesAndTerritories', 'geoId'],
dtype={'cases': int, 'deaths': int, 'countriesAndTerritories': str, 'geoId': str},
parse_dates=['dateRep'])
# +
#Offline file from previous set
df_csv = pd.read_csv('phrase_results.csv', parse_dates=['date']).reset_index()
df_csv = df_csv.loc[df_csv['geo_id'].isin(['US', 'GB', 'CA', 'IE'])]
#Add baseline data
df_csv['pre_corona_trend_baseline'] = df_csv.apply(
lambda x: df_csv['trend_index'].loc[(df_csv['geo_id'] == x['geo_id']) & (df_csv['phrase_id'] == x['phrase_id']) & (df_csv['date'] < '2020-01-31')].mean(), axis=1)
df_csv['baseline_diff'] = df_csv.apply(lambda x: 0 if(x['pre_corona_trend_baseline'] == 0)\
else (x['trend_index'] / x['pre_corona_trend_baseline']), axis=1)
phrase_trends_merge = df_csv.merge(raw_corona, left_on=['eu_id', 'date'], right_on=['geoId', 'dateRep'])
phrase_trends_merge['cases_index'] = phrase_trends_merge.apply(
lambda x: 100 * (x['cases'] / phrase_trends_merge['cases'].loc[phrase_trends_merge['geo_id'] == x['geo_id']].max()), axis=1)
phrase_trends_merge['deaths_index'] = phrase_trends_merge.apply(
lambda x: 100 * (x['deaths'] / phrase_trends_merge['deaths'].loc[phrase_trends_merge['geo_id'] == x['geo_id']].max()), axis=1)
# -
def plot_trends():
df = phrase_trends_merge.loc[phrase_trends_merge['date'] > '2020-02-15']
fig, ax = plt.subplots(3, 4, figsize=(25,11))
myFmt = mdates.DateFormatter('%m / %d')
for p in phrase_list:
ax[0,0].plot(df['date'].loc[(df['phrase_id'] == p) & (df['geo_id'] == 'US')],
df['trend_index'].loc[(df['phrase_id'] == p) & (df['geo_id'] == 'US')], label=df['geo_id'], lw=2)
ax[0,0].xaxis.set_major_locator(plt.MaxNLocator(10))
ax[0,0].tick_params(axis='y', labelsize=20)
ax[0,0].set_ylim(0, 100)
ax[0,0].set_ylabel('Search interest \n(raw)', size=22)
ax[0,0].set_title('USA', size=24)
ax[0,0].legend(phrase_list.values(), loc='upper left', fontsize=16)
for p in phrase_list:
ax[1,0].plot(df['date'].loc[(df['phrase_id'] == p) & (df['geo_id'] == 'US')],
df['baseline_diff'].loc[(df['phrase_id'] == p) & (df['geo_id'] == 'US')], label=df['geo_id'], lw=2)
ax[1,0].xaxis.set_major_locator(plt.MaxNLocator(10))
ax[1,0].tick_params(axis='y', labelsize=20)
ax[1,0].set_ylim(0, 100)
ax[1,0].set_ylabel('Search interest \n(baseline normalised)', size=22)
ax[1,0].legend(phrase_list.values(), loc='upper left', fontsize=16)
ax[2,0].plot(df['date'].loc[df['geo_id'] == 'US'], df['deaths'].loc[df['geo_id'] == 'US'],\
label=df['geo_id'], color='purple', lw=2)
ax[2,0].xaxis.set_major_locator(plt.MaxNLocator(10))
ax[2,0].tick_params(axis='x', labelrotation=50, labelsize=20)
ax[2,0].tick_params(axis='y', labelsize=20)
ax[2,0].set_ylim(0, 140)
ax[2,0].set_ylabel('deaths', size=22)
ax[2,0].xaxis.set_major_formatter(myFmt)
for p in phrase_list:
ax[0,1].plot(df['date'].loc[(df['phrase_id'] == p) & (df['geo_id'] == 'GB')],
df['trend_index'].loc[(df['phrase_id'] == p) & (df['geo_id'] == 'GB')], label=df['geo_id'], lw=2)
ax[0,1].xaxis.set_major_locator(plt.MaxNLocator(10))
ax[0,1].set_ylim(0, 100)
ax[0,1].set_title('UK', size=24)
for p in phrase_list:
ax[1,1].plot(df['date'].loc[(df['phrase_id'] == p) & (df['geo_id'] == 'GB')],
df['baseline_diff'].loc[(df['phrase_id'] == p) & (df['geo_id'] == 'GB')], label=df['geo_id'], lw=2)
ax[1,1].xaxis.set_major_locator(plt.MaxNLocator(10))
ax[1,1].set_ylim(0, 100)
ax[2,1].plot(df['date'].loc[df['geo_id'] == 'GB'], df['deaths'].loc[df['geo_id'] == 'GB'],\
label=df['geo_id'], color='purple', lw=2)
ax[2,1].xaxis.set_major_locator(plt.MaxNLocator(10))
ax[2,1].tick_params(labelrotation=50, labelsize=20)
ax[2,1].set_ylim(0, 140)
ax[2,1].xaxis.set_major_formatter(myFmt)
for p in phrase_list:
ax[0,2].plot(df['date'].loc[(df['phrase_id'] == p) & (df['geo_id'] == 'CA')],
df['trend_index'].loc[(df['phrase_id'] == p) & (df['geo_id'] == 'CA')], label=df['geo_id'], lw=2)
ax[0,2].xaxis.set_major_locator(plt.MaxNLocator(10))
ax[0,2].set_ylim(0, 100)
ax[0,2].set_title('Canada', size=24)
for p in phrase_list:
ax[1,2].plot(df['date'].loc[(df['phrase_id'] == p) & (df['geo_id'] == 'CA')],
df['baseline_diff'].loc[(df['phrase_id'] == p) & (df['geo_id'] == 'CA')], label=df['geo_id'], lw=2)
ax[1,2].xaxis.set_major_locator(plt.MaxNLocator(10))
ax[1,2].set_ylim(0, 100)
ax[2,2].plot(df['date'].loc[df['geo_id'] == 'CA'], df['deaths'].loc[df['geo_id'] == 'CA'],\
label=df['geo_id'], color='purple', lw=2)
ax[2,2].xaxis.set_major_locator(plt.MaxNLocator(10))
ax[2,2].tick_params(labelrotation=50, labelsize=20)
ax[2,2].set_ylim(0, 140)
ax[2,2].xaxis.set_major_formatter(myFmt)
for p in phrase_list:
ax[0,3].plot(df['date'].loc[(df['phrase_id'] == p) & (df['geo_id'] == 'IE')],
df['trend_index'].loc[(df['phrase_id'] == p) & (df['geo_id'] == 'IE')], label=df['geo_id'], lw=2)
ax[0,3].xaxis.set_major_locator(plt.MaxNLocator(10))
ax[0,3].set_ylim(0, 100)
ax[0,3].set_title('Ireland', size=24)
for p in phrase_list:
ax[1,3].plot(df['date'].loc[(df['phrase_id'] == p) & (df['geo_id'] == 'IE')],
df['baseline_diff'].loc[(df['phrase_id'] == p) & (df['geo_id'] == 'IE')], label=df['geo_id'], lw=2)
ax[1,3].xaxis.set_major_locator(plt.MaxNLocator(10))
ax[1,3].set_ylim(0, 100)
ax[2,3].plot(df['date'].loc[df['geo_id'] == 'IE'], df['deaths'].loc[df['geo_id'] == 'IE'],\
label=df['geo_id'], color='purple', lw=2)
ax[2,3].xaxis.set_major_locator(plt.MaxNLocator(10))
ax[2,3].tick_params(labelrotation=50, labelsize=20)
ax[2,3].set_ylim(0, 140)
ax[2,3].xaxis.set_major_formatter(myFmt)
for ax in fig.get_axes():
ax.label_outer()
fig.suptitle('Daily deaths and search interest for "anosmia" over time by country', fontsize=25, y=1)
fig.tight_layout(rect=(0,0,1,0.94))
fig.savefig('no_smell.png')
plot_trends()
def spearman():
df = phrase_trends_merge
for country in ['GB', 'US', 'CA', 'IE']:
for p in phrase_list:
res = stats.spearmanr(df['deaths'].loc[(df['phrase_id'] == p) & (df['geo_id'] == country)],\
df['trend_index'].loc[(df['phrase_id'] == p) & (df['geo_id'] == country)])
res_correlation = round(res.correlation, 3)
if (res.pvalue < 0.001):
p_res = '<0.001'
else:
p_res = round(res.pvalue, 3)
print(country, ' ', phrase_list[p], ' ', res_correlation, ' ', p_res)
spearman()
| loss_of_smell.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import matplotlib.pyplot as plt
import pandas as pd
df = pd.read_csv('/Users/sanjana/Downloads/f1-comp - Sheet1.csv')
fig, (ax0, ax1) = plt.subplots(nrows=1, ncols=2, gridspec_kw={'wspace': 0.3}, figsize=(15,5), dpi=100)
ax0.plot(df['iter'], df['rnn_keyword_f1_macro'], color='r', label='WestRNN')
ax0.plot(df['iter'], df['cnn_keyword_f1_macro'], color='b', label='WestCNN')
ax0.legend()
ax0.set_xlabel('Iterations')
ax0.set_ylabel('f1 macro')
ax1.plot(df['iter'], df['rnn_keyword_f1_micro'], color='r', label='WestRNN')
ax1.plot(df['iter'], df['cnn_keyword_f1_micro'], color='b', label='WestCNN')
ax1.legend()
ax1.set_xlabel('Iterations')
ax1.set_ylabel('f1 weighted')
plt.savefig('comp1.png')
fig, (ax0, ax1) = plt.subplots(nrows=1, ncols=2, gridspec_kw={'wspace': 0.3}, figsize=(15,5), dpi=100)
ax0.plot(df['iter'], df['rnn_docs_f1_macro'], color='r', label='WestRNN')
ax0.plot(df['iter'], df['cnn_docs_f1_macro'], color='b', label='WestCNN')
ax0.legend()
ax0.set_xlabel('Iterations')
ax0.set_ylabel('f1 macro')
ax1.plot(df['iter'], df['rnn_docs_f1_micro'], color='r', label='WestRNN')
ax1.plot(df['iter'], df['cnn_docs_f1_micro'], color='b', label='WestCNN')
ax1.legend()
ax1.set_xlabel('Iterations')
ax1.set_ylabel('f1 weighted')
plt.savefig('comp2.png')
df = pd.read_csv('/Users/sanjana/Downloads/num_docs - Sheet1.csv')
fig, ax = plt.subplots()
ax.plot(df['num_docs'], df['cnn_tfidf'], color='b', label='WestCNN-Tfidf', marker='o')
ax.plot(df['num_docs'], df['cnn_ranking'], color='b', label='WestCNN-Ranking', marker='o', linestyle='--')
ax.plot(df['num_docs'], df['rnn_tfidf'], color='r', label='WestRNN-Tfidf', marker='o')
ax.plot(df['num_docs'], df['rnn_ranking'], color='r', label='WestRNN-Ranking', marker='o', linestyle='--')
ax.legend()
ax.set_xlabel('Number of documents per class')
ax.set_ylabel('f1 macro')
plt.savefig('num_docs.png')
# + pycharm={"name": "#%%\n"}
df = pd.read_csv('/Users/sanjana/Downloads/num_keywords - Sheet1.csv')
fig, ax = plt.subplots()
ax.plot(df['num_keywords'], df['rnn_f1_macro'], color='r', label='WestRNN', marker='o')
ax.plot(df['num_keywords'], df['cnn_f1_macro'], color='b', label='WestCNN', marker='o')
ax.legend()
ax.set_xlabel('Number of keywords per class')
ax.set_ylabel('f1 macro')
plt.savefig('num_keywords.png')
# + pycharm={"name": "#%%\n"}
df = pd.read_csv('/Users/sanjana/Downloads/alpha - Sheet1.csv')
fig, ax = plt.subplots()
ax.plot(df['alpha'], df['rnn'], color='r', label='WestRNN', marker='o')
ax.plot(df['alpha'], df['cnn'], color='b', label='WestCNN', marker='o')
ax.legend()
ax.set_xlabel('alpha')
ax.set_ylabel('f1 macro')
plt.savefig('alpha.png')
# + pycharm={"name": "#%%\n"}
df = pd.read_csv('/Users/sanjana/Downloads/beta - Sheet1.csv')
fig, ax = plt.subplots()
ax.plot(df['beta'], df['rnn'], color='r', label='WestRNN', marker='o')
ax.plot(df['beta'], df['cnn'], color='b', label='WestCNN', marker='o')
ax.legend()
ax.set_xlabel('beta')
ax.set_ylabel('f1 macro')
plt.savefig('beta.png')
# + pycharm={"name": "#%%\n"}
df = pd.read_csv('/Users/sanjana/Downloads/gamma - Sheet1.csv')
fig, ax = plt.subplots()
ax.plot(df['gamma'], df['rnn'], color='r', label='WestRNN', marker='o')
ax.plot(df['gamma'], df['cnn'], color='b', label='WestCNN', marker='o')
ax.legend()
ax.set_xlabel('gamma')
ax.set_ylabel('f1 macro')
plt.savefig('gamma.png')
# + pycharm={"name": "#%%\n"}
df = pd.read_csv('/Users/sanjana/Downloads/class_imbalance - Sheet1.csv')
fig, (ax0, ax1) = plt.subplots(nrows=1, ncols=2, figsize=(15,5), dpi=100)
ax0.plot(df['iter'], df['cnn_f1-normal'], label='normal')
ax0.plot(df['iter'], df['cnn_f1-spam'], label='spam')
ax0.plot(df['iter'], df['cnn_f1-abusive'], label='abusive')
ax0.plot(df['iter'], df['cnn_f1-hateful'], label='hateful')
ax0.set_xlabel('Iterations')
ax0.set_ylabel('f1 scores')
ax0.set_ylim([0,1])
ax0.set_title('WestCNN')
ax0.legend()
ax1.plot(df['iter'], df['rnn_f1-normal'], label='normal')
ax1.plot(df['iter'], df['rnn_f1-spam'], label='spam')
ax1.plot(df['iter'], df['rnn_f1-abusive'], label='abusive')
ax1.plot(df['iter'], df['rnn_f1-hateful'], label='hateful')
ax1.set_xlabel('Iterations')
ax1.set_ylabel('f1 scores')
ax1.set_title('WestRNN')
ax1.set_ylim([0,1])
ax1.legend()
plt.savefig('class_f1.png')
# -
| analysis/plots.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="L0J138mj7p1s"
# # Document AI Form Parser Visualizer
# This notebook shows you how to analyze a PDF using the Google Cloud Document AI API
# + colab={"base_uri": "https://localhost:8080/"} id="_v0XtSwn7fmN" outputId="373b9379-f6ac-451a-e428-ad9219fb31d0"
# Install necessary Python libraries
# !pip install google-cloud-documentai
# !pip install wand
# !pip install pillow
# !pip install tabulate
# !apt-get update
# !apt-get install libmagickwand-dev
# + id="Y8eO6Kcp7v2x"
from google.cloud import documentai_v1beta3 as documentai
from wand.image import Image as WImage
from PIL import Image, ImageDraw
import os
import pandas as pd
from tabulate import tabulate
# + [markdown] id="q83GOFsUHlYA"
# ## Download our sample pdf from GCS
# + id="0iBwYfBtHHBr"
PDF_URI = "gs://cesummit_workshop_data/invoice.pdf" #@param {type: "string"}
# + colab={"base_uri": "https://localhost:8080/"} id="LiEgCOrHIL7Q" outputId="edbe5adb-40cc-44bd-abd4-4dc9b615224f"
# Download the doc
# !gsutil cp $PDF_URI ./invoice.pdf
# -
# ## Set your processor variables
# + id="k3c1mTa6IOk3"
PROJECT_ID = "YOUR_PROJECT_ID_HERE"
LOCATION = "us" # Format is 'us' or 'eu'
PROCESSOR_ID = "PROCESSOR_ID" # Create processor in Cloud Console
# -
# The following code calls the synchronous API and parses the form fields and values.
# + id="hO3yJpDoJ3Zf"
def process_document_sample():
# Instantiates a client
client = documentai.DocumentProcessorServiceClient()
# The full resource name of the processor, e.g.:
    # projects/project-id/locations/location/processors/processor-id
# You must create new processors in the Cloud Console first
name = f"projects/{PROJECT_ID}/locations/{LOCATION}/processors/{PROCESSOR_ID}"
with open('invoice.pdf', "rb") as image:
image_content = image.read()
# Read the file into memory
document = {"content": image_content, "mime_type": "application/pdf"}
# Configure the process request
request = {"name": name, "document": document}
# Recognizes text entities in the PDF document
result = client.process_document(request=request)
document = result.document
entities = document.entities
print("Document processing complete.\n\n")
if result.human_review_operation:
print ("Triggered HITL operation: {}".format(result.human_review_operation))
# For a full list of Document object attributes, please reference this page: https://googleapis.dev/python/documentai/latest/_modules/google/cloud/documentai_v1beta3/types/document.html#Document
types = []
values = []
confidence = []
# Grab each key/value pair and their corresponding confidence scores.
for entity in entities:
types.append(entity.type_)
values.append(entity.mention_text)
confidence.append(round(entity.confidence,4))
# Create a Pandas Dataframe to print the values in tabular format.
df = pd.DataFrame({'Type': types, 'Value': values, 'Confidence': confidence})
print(tabulate(df, headers='keys', tablefmt='psql'))
return document
def get_text(doc_element: dict, document: dict):
"""
Document AI identifies form fields by their offsets
in document text. This function converts offsets
to text snippets.
"""
response = ""
# If a text segment spans several lines, it will
# be stored in different text segments.
for segment in doc_element.text_anchor.text_segments:
# start_index defaults to 0 when the segment starts at the beginning of the text
start_index = int(segment.start_index)
end_index = int(segment.end_index)
response += document.text[start_index:end_index]
return response
# -
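# The offset-based lookup that `get_text` performs can be illustrated without the
# Document AI protos. The sketch below uses plain dictionaries with hypothetical
# offsets in place of real `text_segments`; only the slicing logic is the same.

```python
# Hypothetical document text and text-anchor segments (stand-ins for the
# Document AI proto objects used in get_text above).
document_text = "Invoice #1234\nTotal: $56.78\n"

# Two segments, as Document AI might emit for a field spanning two lines.
segments = [
    {"start_index": 0, "end_index": 13},   # "Invoice #1234"
    {"start_index": 14, "end_index": 27},  # "Total: $56.78"
]

response = ""
for segment in segments:
    # start_index defaults to 0 when the segment starts at the beginning.
    start = int(segment.get("start_index", 0))
    end = int(segment["end_index"])
    response += document_text[start:end]

print(response)  # -> Invoice #1234Total: $56.78
```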
doc = process_document_sample()
# + [markdown] id="rjxTY06qnFnU"
# ## Draw the bounding boxes
# We will now download the pdf above as a jpg and use the spatial data to mark our values.
# + id="HjfbWczcnE_V"
JPG_URI = "gs://cesummit_workshop_data/invoice.jpg" #@param {type: "string"}
# -
# Download the doc
# !gsutil cp $JPG_URI ./invoice.jpg
# + id="NyHpYBN8_45g"
im = Image.open('invoice.jpg')
draw = ImageDraw.Draw(im)
for entity in doc.entities:
# Draw the bounding box around the entities
vertices = []
for vertex in entity.page_anchor.page_refs[0].bounding_poly.normalized_vertices:
vertices.append({'x': vertex.x * im.size[0], 'y': vertex.y * im.size[1]})
draw.polygon([
vertices[0]['x'], vertices[0]['y'],
vertices[1]['x'], vertices[1]['y'],
vertices[2]['x'], vertices[2]['y'],
vertices[3]['x'], vertices[3]['y']], outline='blue')
# + id="Kt6jzmisRefT"
im
# -
# # Human in the loop (HITL) Operation
# **Only complete this section if a HITL Operation is triggered**
lro_id = "HITL Operation" # ex. projects/660199673046/locations/us/operations/174674963333130330
# +
client = documentai.DocumentProcessorServiceClient()
operation = client._transport.operations_client.get_operation(lro_id)
if operation.done:
print("HITL location: {} ".format(str(operation.response.value)[5:-1]))
else:
print('Waiting on human review.')
# -
# !gsutil cp "HITL_LOCATION" response.json
with open("response.json", "r") as file:
import json
entities = {}
data = json.load(file)
for entity in data['entities']:
entities[entity['type']] = entity['mentionText']
for t in entities:
print("{} : {}\n ".format(t, entities[t]))
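# The entity-extraction step above can be sketched in isolation. This assumes a
# response.json shaped like Document AI output with `entities`, `type`, and
# `mentionText` keys; the sample values below are hypothetical, and the real file
# carries many more fields per entity.

```python
import json

# A minimal, hypothetical slice of a HITL response payload.
raw = json.dumps({
    "entities": [
        {"type": "invoice_id", "mentionText": "1234"},
        {"type": "total_amount", "mentionText": "$56.78"},
    ]
})

data = json.loads(raw)
# Collapse the entity list into a type -> mention-text mapping.
entities = {e["type"]: e["mentionText"] for e in data["entities"]}
for t, v in entities.items():
    print(f"{t} : {v}")
```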
| specialized_form_parser.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import random
import os
import shutil
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.optim as optim
import torchvision.transforms as transforms
import torch.nn.functional as F
from torch.autograd import Variable
import numpy as np
import matplotlib.pyplot as plt
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
import torchvision.datasets as dsets
import torchvision
from scipy.ndimage import gaussian_filter
import PIL
from PIL import Image
random.seed(42)
# +
class resBlock(nn.Module):
def __init__(self, in_channels=64, out_channels=64, k=3, s=1, p=1):
super(resBlock, self).__init__()
self.conv1 = nn.Conv2d(in_channels, out_channels, k, stride=s, padding=p)
self.bn1 = nn.BatchNorm2d(out_channels)
self.conv2 = nn.Conv2d(out_channels, out_channels, k, stride=s, padding=p)
self.bn2 = nn.BatchNorm2d(out_channels)
def forward(self, x):
y = F.relu(self.bn1(self.conv1(x)))
return self.bn2(self.conv2(y)) + x
class resTransposeBlock(nn.Module):
def __init__(self, in_channels=64, out_channels=64, k=3, s=1, p=1):
super(resTransposeBlock, self).__init__()
self.conv1 = nn.ConvTranspose2d(in_channels, out_channels, k, stride=s, padding=p)
self.bn1 = nn.BatchNorm2d(out_channels)
self.conv2 = nn.ConvTranspose2d(out_channels, out_channels, k, stride=s, padding=p)
self.bn2 = nn.BatchNorm2d(out_channels)
def forward(self, x):
y = F.relu(self.bn1(self.conv1(x)))
return self.bn2(self.conv2(y)) + x
class VGG19_extractor(nn.Module):
def __init__(self, cnn):
super(VGG19_extractor, self).__init__()
self.features1 = nn.Sequential(*list(cnn.features.children())[:3])
self.features2 = nn.Sequential(*list(cnn.features.children())[:5])
self.features3 = nn.Sequential(*list(cnn.features.children())[:12])
def forward(self, x):
return self.features1(x), self.features2(x), self.features3(x)
# -
vgg19_exc = VGG19_extractor(torchvision.models.vgg19(pretrained=True))
vgg19_exc = vgg19_exc.cuda()
# ### Designing Encoder (E)
# +
class Encoder(nn.Module):
def __init__(self, n_res_blocks=5):
super(Encoder, self).__init__()
self.n_res_blocks = n_res_blocks
self.conv1 = nn.Conv2d(3, 64, 3, stride=2, padding=1)
for i in range(n_res_blocks):
self.add_module('residual_block_1' + str(i+1), resBlock(in_channels=64, out_channels=64, k=3, s=1, p=1))
self.conv2 = nn.Conv2d(64, 32, 3, stride=2, padding=1)
for i in range(n_res_blocks):
self.add_module('residual_block_2' + str(i+1), resBlock(in_channels=32, out_channels=32, k=3, s=1, p=1))
self.conv3 = nn.Conv2d(32, 8, 3, stride=1, padding=1)
for i in range(n_res_blocks):
self.add_module('residual_block_3' + str(i+1), resBlock(in_channels=8, out_channels=8, k=3, s=1, p=1))
self.conv4 = nn.Conv2d(8, 1, 3, stride=1, padding=1)
def forward(self, x):
y = F.relu(self.conv1(x))
for i in range(self.n_res_blocks):
y = F.relu(self.__getattr__('residual_block_1'+str(i+1))(y))
y = F.relu(self.conv2(y))
for i in range(self.n_res_blocks):
y = F.relu(self.__getattr__('residual_block_2'+str(i+1))(y))
y = F.relu(self.conv3(y))
for i in range(self.n_res_blocks):
y = F.relu(self.__getattr__('residual_block_3'+str(i+1))(y))
y = self.conv4(y)
return y
E1 = Encoder(n_res_blocks=10)
# -
# ### Designing Decoder (D)
# +
class Decoder(nn.Module):
def __init__(self, n_res_blocks=5):
super(Decoder, self).__init__()
self.n_res_blocks = n_res_blocks
self.conv1 = nn.ConvTranspose2d(1, 8, 3, stride=1, padding=1)
for i in range(n_res_blocks):
self.add_module('residual_block_1' + str(i+1), resTransposeBlock(in_channels=8, out_channels=8, k=3, s=1, p=1))
self.conv2 = nn.ConvTranspose2d(8, 32, 3, stride=1, padding=1)
for i in range(n_res_blocks):
self.add_module('residual_block_2' + str(i+1), resTransposeBlock(in_channels=32, out_channels=32, k=3, s=1, p=1))
self.conv3 = nn.ConvTranspose2d(32, 64, 3, stride=2, padding=1)
for i in range(n_res_blocks):
self.add_module('residual_block_3' + str(i+1), resTransposeBlock(in_channels=64, out_channels=64, k=3, s=1, p=1))
self.conv4 = nn.ConvTranspose2d(64, 3, 3, stride=2, padding=1)
def forward(self, x):
y = F.relu(self.conv1(x))
for i in range(self.n_res_blocks):
y = F.relu(self.__getattr__('residual_block_1'+str(i+1))(y))
y = F.relu(self.conv2(y))
for i in range(self.n_res_blocks):
y = F.relu(self.__getattr__('residual_block_2'+str(i+1))(y))
y = F.relu(self.conv3(y))
for i in range(self.n_res_blocks):
y = F.relu(self.__getattr__('residual_block_3'+str(i+1))(y))
y = self.conv4(y)
return y
D1 = Decoder(n_res_blocks=10)
# -
# ### Putting it in box, AE
class AE(nn.Module):
def __init__(self, encoder, decoder):
super(AE, self).__init__()
self.E = encoder
self.D = decoder
def forward(self, x):
h_enc = self.E(x)
# print('encoder out checking for nan ', np.isnan(h_enc.data.cpu()).any())
y = self.D(h_enc)
# print('decoder out checking for nan ', np.isnan(y.data.cpu()).any())
return y
A = AE(E1, D1)
A = A.cuda()
# ### Dataloading and stuff
# ##### The autoencoder accepts 181x181 inputs and outputs 181x181 images; the bottleneck output (i.e. that of the encoder) is much smaller
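# The bottleneck size can be sanity-checked with the standard convolution output
# formula, using the encoder's layer parameters defined above (k=3, p=1, strides
# 2, 2, 1, 1 for conv1..conv4; the residual blocks preserve spatial size):

```python
# out = floor((in + 2*p - k) / s) + 1 for a square conv layer
def conv_out(size, k=3, s=1, p=1):
    return (size + 2 * p - k) // s + 1

size = 181
for stride in (2, 2, 1, 1):  # conv1..conv4 strides in Encoder
    size = conv_out(size, s=stride)
print(size)  # -> 46
```

This matches the 1x46x46 encoded space visualised further down.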
# +
def mynorm2(x):
m1 = torch.min(x)
m2 = torch.max(x)
if m2-m1 < 1e-6:
return x-m1
else:
# return x-m1
return (x-m1)/(m2-m1)
mytransform2 = transforms.Compose(
[transforms.RandomCrop((181,181)),
# transforms.Lambda( lambda x : Image.fromarray(gaussian_filter(x, sigma=(10,10,0)) )),
# transforms.Resize((41,41)),
transforms.ToTensor(),
transforms.Lambda( lambda x : mynorm2(x) )])
# ])
trainset = dsets.ImageFolder(root='../sample_dataset/train/',transform=mytransform2)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True, num_workers=2)
testset = dsets.ImageFolder(root='../sample_dataset/test/',transform=mytransform2)
testloader = torch.utils.data.DataLoader(testset, batch_size=4, shuffle=True, num_workers=2)
# functions to show an image
def imshow(img):
#img = img / 2 + 0.5
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
def imshow2(img):
m1 = torch.min(img)
m2 = torch.max(img)
# img = img/m2
if m2-m1 < 1e-6:
img = img/m2
else:
img = (img-m1)/(m2-m1)
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter) #all the images under the same 'unlabeled' folder
# print(labels)
# show images
print('a training batch looks like ...')
imshow(torchvision.utils.make_grid(images))
# -
# ### Some more visualisation
dataiter = iter(trainloader)
images, labels = next(dataiter) #all the images under the same 'unlabeled' folder
imshow(torchvision.utils.make_grid(images[0,:,:,:]))
# ### training thingy
def save_model(model, model_name):
try:
os.makedirs('../saved_models')
except OSError:
pass
torch.save(model.state_dict(), '../saved_models/'+model_name)
print('model saved at '+'../saved_models/'+model_name)
# dataloader = iter(trainloader)
testiter = iter(testloader)
def eval_model(model):
testX, _ = next(testiter)
model.cpu()
X = testX
print('input looks like ...')
plt.figure()
imshow(torchvision.utils.make_grid(X))
X = Variable(X)
Y = model(X)
print('output looks like ...')
plt.figure()
imshow2(torchvision.utils.make_grid(Y.data.cpu()))
return X
A.load_state_dict(torch.load('../saved_models/camelyon16_AE_181_last.pth'))
A = A.cuda()
shownX = eval_model(A)
# #### The encoded space is shown below; its size is 1x46x46
plt.figure()
imshow(torchvision.utils.make_grid(shownX.data))
Y1 = A.E(Variable(shownX.data))
plt.figure()
imshow2(torchvision.utils.make_grid(Y1.data))
Z1 = A.D(Y1)
plt.figure()
imshow2(torchvision.utils.make_grid(Z1.data))
tis = np.zeros((4,1,46,46))
for i in range(4):
t = Y1.data.numpy()[i,0,:,:]
m = np.min(t)
M = np.max(t)
tis[i,0,:,:] = (t-m)/(M-m)
tis
plt.imshow(tis[0,0,:,:], cmap='gray')
# +
from skimage import data, img_as_float
from skimage import exposure
plt.rcParams['font.size'] = 8
ti_rescale = np.zeros((4,1,46,46))
ti_eq = np.zeros((4,1,46,46))
ti_adapteq = np.zeros((4,1,46,46))
for i in range(4):
img = tis[i,0,:,:]
p2, p98 = np.percentile(img, (2, 98))
img_rescale = exposure.rescale_intensity(img, in_range=(p2, p98))
img_eq = exposure.equalize_hist(img)
img_adapteq = exposure.equalize_adapthist(img, clip_limit=0.03)
ti_rescale[i,0,:,:] = img_rescale
ti_eq[i,0,:,:] = img_eq
ti_adapteq[i,0,:,:] = img_adapteq
# -
imshow2(torchvision.utils.make_grid(torch.from_numpy(ti_rescale)))
imshow2(torchvision.utils.make_grid(torch.from_numpy(ti_eq)))
imshow2(torchvision.utils.make_grid(torch.from_numpy(ti_adapteq)))
A.E
| notebooks/AE_load_testbed.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import json
import requests
import numpy as np
import cv2
import matplotlib.pyplot as plt
# +
# Consumir API
url='https://randomuser.me/api/'
respuesta=requests.get(url)
data=json.loads(respuesta.content)
results=data["results"]
src=results[0]["picture"]["large"]
print(src)
# Traer imagen del API
respuesta = requests.get(src)
file="result/api.jpg"
if respuesta.status_code == 200:
with open(file, 'wb') as f:
f.write(respuesta.content)
print("imagen guardada")
imagen = cv2.imread(file)
rgb= cv2.cvtColor(imagen, cv2.COLOR_BGR2RGB)
print(rgb.shape)
plt.imshow(rgb)
plt.axis('off')
# -
| Client-API.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="3hGq9yG4Uawz"
# !curl -L "https://public.roboflow.com/ds/uFKCkSNz3Q?key=Tj06QPwBIk" > roboflow.zip; unzip roboflow.zip; rm roboflow.zip
# + id="XeNRRWxtUpVu" executionInfo={"status": "ok", "timestamp": 1641186784804, "user_tz": -540, "elapsed": 544, "user": {"displayName": "\uc815\uc9c0\ud6c8", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "12609373033132338332"}}
# !mkdir data
# + colab={"base_uri": "https://localhost:8080/"} id="Dhb5pz5xUx6C" executionInfo={"status": "ok", "timestamp": 1641186859506, "user_tz": -540, "elapsed": 1429, "user": {"displayName": "\uc815\uc9c0\ud6c8", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "12609373033132338332"}} outputId="8280bae9-5b7c-42aa-8c1b-df572f0b5f77"
# !git clone https://github.com/ultralytics/yolov5.git
# + id="M7gpG8QKVMkM"
# !pip install -r /content/yolov5/requirements.txt
# + colab={"base_uri": "https://localhost:8080/"} id="6P14IOsSVXbw" executionInfo={"status": "ok", "timestamp": 1641191752356, "user_tz": -540, "elapsed": 4856313, "user": {"displayName": "\uc815\uc9c0\ud6c8", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "12609373033132338332"}} outputId="ceda0414-3507-4bde-d61b-7116f6cf9028"
# !python /content/yolov5/train.py --img 416 --batch 2 --epochs 20 --data /content/data/data.yaml --cfg /content/yolov5/models/yolov5s.yaml --weights yolov5s.pt --name data_results
# + colab={"base_uri": "https://localhost:8080/"} id="xjnmJwfFV8X6" executionInfo={"status": "ok", "timestamp": 1641193388093, "user_tz": -540, "elapsed": 7369, "user": {"displayName": "\uc815\uc9c0\ud6c8", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "12609373033132338332"}} outputId="2c0e6cf3-afa4-4875-e71c-c8099c6ef723"
# !python /content/yolov5/detect.py --source 0
# + id="2aq_jno9v2Wb" executionInfo={"status": "ok", "timestamp": 1641193574082, "user_tz": -540, "elapsed": 4, "user": {"displayName": "\uc815\uc9c0\ud6c8", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "12609373033132338332"}}
from glob import glob
train_img_list = glob('/content/data/train/images/*.jpg')
valid_img_list = glob('/content/data/valid/images/*.jpg')
# + colab={"base_uri": "https://localhost:8080/"} id="3OQx-lx_woqM" executionInfo={"status": "ok", "timestamp": 1641193585397, "user_tz": -540, "elapsed": 2, "user": {"displayName": "\uc815\uc9c0\ud6c8", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "12609373033132338332"}} outputId="29c87da7-fff1-4c91-b6f9-c5ce820e7ec1"
train_img_list[:3]
# + id="KN-GqhWxwrbH" executionInfo={"status": "ok", "timestamp": 1641193808373, "user_tz": -540, "elapsed": 3, "user": {"displayName": "\uc815\uc9c0\ud6c8", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "12609373033132338332"}}
with open('/content/data/train.txt', 'w') as f:
f.write('\n'.join(train_img_list) + '\n')
with open('/content/data/valid.txt', 'w') as f:
f.write('\n'.join(valid_img_list) + '\n')
# + colab={"base_uri": "https://localhost:8080/"} id="EsMETonoxh22" executionInfo={"status": "ok", "timestamp": 1641194129751, "user_tz": -540, "elapsed": 517, "user": {"displayName": "\uc815\uc9c0\ud6c8", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "12609373033132338332"}} outputId="cf2e92b8-31f0-4486-b6d8-f84061b5b913"
import yaml
with open('/content/data/data.yaml', 'r') as f:
data = yaml.full_load(f)
print(data)
# + id="INz9KQFNyHS0"
from IPython.display import Image
import os
# !python /content/yolov5/detect.py --weights /content/data/99_train_data_results/weights/best.pt --img 416 --conf 0.5 --source /content/data/valid/images
# + colab={"base_uri": "https://localhost:8080/", "height": 1000, "output_embedded_package_id": "1p2EG-of2_H1M8lsj04I4QlOYvHqw2OBM"} id="Ka9A2G7Iz_Pz" executionInfo={"status": "ok", "timestamp": 1641198446382, "user_tz": -540, "elapsed": 8666, "user": {"displayName": "\uc815\uc9c0\ud6c8", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "12609373033132338332"}} outputId="7b63dfee-aa30-4b44-ac78-fd38a5678d3a"
import random
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
# %matplotlib inline
pred_img_list = glob('/content/yolov5/runs/detect/exp3/*.jpg')
samples = random.choices(population=range(0,len(pred_img_list)), k=25)
samples
plt.figure(figsize=(20,20))
for idx, n in enumerate(samples):
plt.subplot(5, 5, idx+1)
image = plt.imread(f"{pred_img_list[n]}")
plt.imshow(image)
plt.tight_layout()
plt.show()
# + id="TBOQlHvQDGrt"
| Localization and Classification_Fruitdata/yolov5_Fruitdata.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
# Import 3rd party libraries
import os
import json
import requests
import numpy as np
import pandas as pd
import seaborn as sns
import geopandas as gpd
import matplotlib.pyplot as plt
from shapely.geometry import Point, LineString, Polygon
# Configure Notebook
# %matplotlib inline
#plt.style.use('dark_background')
plt.style.use('seaborn-colorblind')
#plt.style.use('ggplot')
sns.set_context("talk")
import warnings
warnings.filterwarnings('ignore')
# +
# import datasets
## import LTS data
lts_gpd = gpd.read_file('lts data/centerline_LTS_July2021.shp')
## import centrelines data
centrelines_gpd = gpd.read_file('centreline data/Centreline - Version 2.geojson')
## import all modes volume data
allmodes = pd.read_csv('all modes traffic data/tmcs_2010_2019.csv')
## Import the neighbourhood data and format it correctly.
fp_k, fp_k2 = 'Neighbourhood Data/neighbourhood-profiles-2016-csv (1).csv', \
'Neighbourhood Data/Neighbourhoods.geojson'
neighbourhood_boundaries = gpd.read_file(fp_k2)
# Import bikelanes dataset
path_b = 'raw_data/bike-network-data-4326.geojson'
bikelane_gpd = gpd.read_file(path_b)
# +
# sum turning movements for total intersection volume by mode
allmodes['car volume']=(allmodes['sb_cars_r']+allmodes['sb_cars_t']+allmodes['sb_cars_l']+
allmodes['nb_cars_r']+allmodes['nb_cars_t']+allmodes['nb_cars_l']+
allmodes['wb_cars_r']+allmodes['wb_cars_t']+allmodes['wb_cars_l']+
allmodes['eb_cars_r']+allmodes['eb_cars_t']+allmodes['eb_cars_l'])
allmodes['truck volume']=(allmodes['sb_truck_r']+allmodes['sb_truck_t']+allmodes['sb_truck_l']+
allmodes['nb_truck_r']+allmodes['nb_truck_t']+allmodes['nb_truck_l']+
allmodes['wb_truck_r']+allmodes['wb_truck_t']+allmodes['wb_truck_l']+
allmodes['eb_truck_r']+allmodes['eb_truck_t']+allmodes['eb_truck_l'])
allmodes['ped volume']=(allmodes['nx_peds']+allmodes['sx_peds']+allmodes['ex_peds']+
allmodes['wx_peds'])
allmodes['bike volume']=(allmodes['nx_bike']+allmodes['sx_bike']+allmodes['ex_bike']+
allmodes['wx_bike'])
# -
# plot distribution of volumes
print(allmodes[['car volume','truck volume','ped volume','bike volume']].describe())
allmodes[['car volume','truck volume','ped volume','bike volume']].boxplot()
# Large spread in volumes is expected - busy intersections will have significantly higher volumes than idle intersections. No clear outliers.
# check what percentage of volumes are zero
print('car volume:', allmodes['car volume'].tolist().count(0)/len(allmodes)*100, '%')
print('truck volume:', allmodes['truck volume'].tolist().count(0)/len(allmodes)*100, '%')
print('ped volume', allmodes['ped volume'].tolist().count(0)/len(allmodes)*100, '%')
print('bike volume', allmodes['bike volume'].tolist().count(0)/len(allmodes)*100, '%')
# We have lots of car volume data. No truck volume at 7% of intersections and no pedestrian volume at 6% of intersections - these numbers seem reasonable as we don't expect trucks and pedestrians at every intersection. No bike volume at 48% of intersections - this seems high. Could just be that people don't bike at these intersections, but we would probably expect intersections with pedestrians to have bikes as well. Let's see if we can model using car, truck, and pedestrian volumes given we have lots of data for these modes.
# Our traffic volume dataset has multiple 15min readings for each intersection. Let's quickly check how many intersections we have data for.
len(allmodes['location'].unique())
# Looks like we have data for a decent number of intersections.
# Our volume data was collected over 10 years from 2010 to 2019. It wouldn't be fair to compare/average data collected over different years - there is likely to be an inflationary effect i.e. traffic volumes increase each year. We need to convert volumes to "2019 equivalent" before we can compare them.
# create a duplicate dataframe we can manipulate
allmodes_dt = allmodes
# extract the year of data collection
allmodes_dt['year'] = pd.to_datetime(allmodes_dt['count_date']).dt.year
# let's try plotting all car volumes by year
ax = sns.jointplot(data=allmodes_dt,
x='year', y='car volume',
kind='reg', scatter_kws={"s": 4}, height=10, x_jitter=0.12, color='springgreen')
plt.show()
# There doesn't seem to be a clear trend. Since we have plotted all car volume data it is misleading - we could have 2010 data from a busy intersection and 2019 data from an idle intersection and without context this would imply traffic was decreasing.
#
# Instead, let's assume traffic increases at the same rate as population. Between 2011 and 2016 census, Toronto's population increased from 5,583,064 to 5,928,040 (https://www12.statcan.gc.ca/census-recensement/2016/dp-pd/prof/details/page.cfm?Lang=E&Geo1=CMACA&Code1=535&Geo2=PR&Code2=35&Data=Count&SearchText=Caledon%20East&SearchType=Begins&SearchPR=01&B1=All). We can calculate the compound annual growth rate (CAGR):
CAGR = (5928040/5583064)**(1/(2016-2011))-1
CAGR
# Therefore Toronto's population (and we assume traffic volume) increase at a rate of 1.2% per year. Let's calculate the 2019 equivalent traffic volumes for each mode.
allmodes_dt['car volume 2019eq'] = allmodes_dt.apply(lambda x: x['car volume']*(1+CAGR)**(2019-x['year']), axis=1)
allmodes_dt['truck volume 2019eq'] = allmodes_dt.apply(lambda x: x['truck volume']*(1+CAGR)**(2019-x['year']), axis=1)
allmodes_dt['ped volume 2019eq'] = allmodes_dt.apply(lambda x: x['ped volume']*(1+CAGR)**(2019-x['year']), axis=1)
# Given we have multiple traffic volume readings for each intersection, let's use the median traffic volume for each intersection.
volumes = allmodes_dt.groupby('centreline_id').median().reset_index()[['centreline_id',
'car volume 2019eq',
'truck volume 2019eq',
'ped volume 2019eq']]
# We have our volumes dataset. Our LTS dataset includes properties for each road segment, including the FNODE (from intersection) and TNODE (to intersection). For each road, let's append the traffic volumes at the from and to intersections. The FNODE and TNODE values in the LTS dataset will line up with the centreline_id in our volumes dataset.
volumes
# +
# append volumes at from intersection
lts_volumes = pd.merge(lts_gpd, #left
volumes, #right
how='left',
left_on='FNODE',
right_on='centreline_id',
)
# append volumes at to intersection
lts_volumes = pd.merge(lts_volumes, #left
volumes, #right
how='left',
left_on='TNODE',
right_on='centreline_id',
)
# Convert LTS to str column for better plotting
lts_volumes['LTS str'] = lts_volumes['LTS'].astype('str')
# -
# Let's rename the columns so we remember which is which. "x" columns are the from intersection data, and "y" columns are the to intersection data. While we're at it, we can drop the centreline_id columns - these are redundant since we have the FNODE and TNODE.
lts_volumes = lts_volumes.rename(columns={"car volume 2019eq_x": "car volume from",
"car volume 2019eq_y": "car volume to",
"truck volume 2019eq_x": "truck volume from",
"truck volume 2019eq_y": "truck volume to",
"ped volume 2019eq_x": "ped volume from",
"ped volume 2019eq_y": "ped volume to",
})
lts_volumes = lts_volumes.drop(columns=['centreline_id_x', 'centreline_id_y'])
# Let's drop rows where we don't have volume data at both intersections.
lts_volumes = lts_volumes.dropna(subset=['car volume from', 'car volume to']).reset_index(drop=True)
# The LTS dataset we imported classified each road into one of four LTS classifications: 1, 2, 3, 4. For practical purposes, we don't really care about the exact classification, but rather whether the road is high accessibility (LTS 1 and 2) or low accessibility (LTS 3 and 4). Let's add a binary classifier column that tells us if a road is high or low accessibility.
lts_volumes['high access'] = lts_volumes['LTS'].apply(lambda x: 1 if x <= 2 else 0)
lts_volumes
# Before train/test splitting our data, we want to add the neighbourhood/k-means cluster labels to the data so that spatial CV can be performed. While we're at it, let's add the population density of the neighbourhood each road is in as a feature.
# +
# extract the neighbourhood id - we will use this to join the population densities
neighbourhood_boundaries['neigh_id'] = neighbourhood_boundaries['AREA_NAME'].str.extract('(\d+)')
neighbourhood_boundaries = neighbourhood_boundaries[['geometry','AREA_ID','neigh_id']]
# Make sure we have a consistent coordinate reference for our spatial join
lts_volumes_metre = lts_volumes.to_crs("EPSG:26917")
neighbourhood_metre = neighbourhood_boundaries.to_crs("EPSG:26917")
# -
# Merge the two datasets, drop irrelevant information, fillnas if needed
lts_volumes_metre = gpd.sjoin(lts_volumes_metre, #left
neighbourhood_metre, #right
how='left',
op='within',
)
lts_volumes_metre['AREA_ID'] = lts_volumes_metre['AREA_ID'].fillna(101)
lts_volumes_metre['neigh_id'] = lts_volumes_metre['neigh_id'].fillna(0)
lts_volumes_metre = lts_volumes_metre.drop('index_right',axis=1)
# +
# append population densities
densities = gpd.read_file(fp_k).iloc[[0,7],6:-1].T.reset_index().drop(columns=['index']).rename(columns={0: "neigh", 7: "pop_density"})
lts_volumes_metre = pd.merge(lts_volumes_metre, #left
densities, #right
how='left',
left_on='neigh_id',
right_on='neigh',
).drop(columns=['neigh_id', 'neigh'])
# convert pop_density string to float
lts_volumes_metre['pop_density'] = lts_volumes_metre['pop_density'].str.replace(',', '').astype(float)
# -
lts_volumes_metre
# We have some NaN population densities. We will fill these in with the median, but only after the train/test split to avoid data leakage.
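# The leakage-free imputation described here can be sketched with toy numbers:
# the median is computed from the training rows only and then applied to both
# splits. The values below are hypothetical, not the real pop_density column.

```python
from statistics import median

# Toy density values; None stands in for NaN.
train_density = [1200.0, None, 3400.0, 2800.0]
test_density = [None, 1500.0]

# Fit the imputation on the training split only ...
train_median = median(v for v in train_density if v is not None)

# ... then apply it to both splits, so no test information leaks into training.
train_filled = [train_median if v is None else v for v in train_density]
test_filled = [train_median if v is None else v for v in test_density]

print(train_median)  # -> 2800.0
```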
# Some roads are likely to have bike lanes, which will impact bike accessibility. Let's add a binary label that indicates whether a road has a bike lane or not.
# +
# Merge the tags; We already know where the trails are, we just want the bikelanes on roads
bikelane_gpd['CNPCLASS'].unique()
streets = ['Fast Busy Street', 'Quiet Street', 'Fasy Busy Street']
bikelane_gpd = bikelane_gpd[bikelane_gpd['CNPCLASS'].isin(streets)]
print(bikelane_gpd.shape)
bikelane_gpd.head()
bikelane_gpd['INFRA_HIGHORDER'].unique()
# -
# The dataset includes multiple types of bike lanes. We will only consider bike lanes and cycle tracks - the other types are unlikely to have a meaningful effect on accessibility.
bikelane_gpd = bikelane_gpd[bikelane_gpd['INFRA_HIGHORDER'].isin(['Bike Lane','Bike Lane - Buffered','Cycle Track','Contraflow Cycle Track'])]
# +
# Merge the bikelane features info to the features frame.
bikelane_gpd_m = bikelane_gpd[['INFRA_HIGHORDER','geometry']].to_crs("EPSG:26917")
bikelane_gpd_m['geometry'] = bikelane_gpd_m['geometry'].buffer(20)
lts_volumes_metre = gpd.sjoin(lts_volumes_metre, #left
bikelane_gpd_m, #right
how='left',
op='within',
).drop(columns=['index_right']).rename(columns={"INFRA_HIGHORDER": "bike lane"})
lts_volumes_metre['bike lane']=lts_volumes_metre['bike lane'].apply(lambda x: 0 if pd.isnull(x) else 1)
lts_volumes_metre
# -
# This dataset is ready for modelling. Let's split the dataset into the train and test datasets.
# +
# Do a 80/20 test train split and save these to a csv
from sklearn.model_selection import train_test_split
# stratify the split across the LTS labels
train, test = train_test_split(lts_volumes_metre, test_size=0.2, stratify=lts_volumes_metre['LTS'])
# write to csv files
train.to_csv('allmodes_train.csv')
test.to_csv('allmodes_test.csv')
| edanalysis/eda_allmodes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] pycharm={"name": "#%% md\n"}
# <h1>Distances</h1>
# <p>In this notebook, we will use sktime for time series distance computation</p>
# <h3>Preliminaries</h3>
# + pycharm={"name": "#%%\n"}
import matplotlib.pyplot as plt
import numpy as np
from sktime.datasets import load_macroeconomic
from sktime.distances import distance
# + [markdown] pycharm={"name": "#%% md\n"}
# <h2>Distances</h2>
# The goal of a distance computation is to measure the similarity between the time series
# 'x' and 'y'. A distance function should take x and y as parameters and return a float
# that is the computed distance between x and y. The value returned should be 0.0 when
# the time series are the exact same, and a value greater than 0.0 that is a measure of
# distance between them, when they are different.
#
# Take the following two time series:
# + pycharm={"name": "#%%\n"}
X = load_macroeconomic()
country_d, country_c, country_b, country_a = np.split(X["realgdp"].to_numpy()[3:], 4)
plt.plot(country_a, label="Country A")
plt.plot(country_b, label="Country B")
plt.plot(country_c, label="Country C")
plt.plot(country_d, label="Country D")
plt.xlabel("Quarters from 1959")
plt.ylabel("GDP")
plt.legend()
# + [markdown] pycharm={"name": "#%% md\n"}
# The above shows a made-up scenario comparing the GDP growth of four countries (country
# A, B, C and D) by quarter from 1959. If our task is to determine how different country
# C is from the other countries, one way to do this is to measure the distance between
# each pair of countries.
# <br>
#
# How to use the distance module to perform tasks such as these, will now be outlined.
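# The contract described above (0.0 for identical series, larger values for more
# dissimilar ones) can be sketched with a hand-rolled euclidean distance,
# independent of sktime's implementation:

```python
import math

# A distance function obeying the contract: takes x and y, returns a float,
# 0.0 when the series are identical and > 0.0 otherwise.
def euclidean(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

a = [1.0, 2.0, 3.0]
print(euclidean(a, a))                 # -> 0.0
print(euclidean(a, [1.0, 2.0, 5.0]))   # -> 2.0
```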
# + [markdown] pycharm={"name": "#%% md\n"}
# <h2>Distance module</h2>
# To begin using the distance module we need at least two time series, x and y and they
# must be numpy arrays. We've established the various time series we'll be using for this
# example above as country_a, country_b, country_c and country_d. To compute the distance
# between x and y we can use a euclidean distance as shown:
# + pycharm={"name": "#%%\n"}
# Simple euclidean distance
distance(country_a, country_b, metric="euclidean")
# + [markdown] pycharm={"name": "#%% md\n"}
# As shown above, taking the distance between country_a and country_b returns a single
# float that represents their similarity (distance). We can do the same again but compare
# country_d to country_a:
# + pycharm={"name": "#%%\n"}
distance(country_a, country_d, metric="euclidean")
# + [markdown] pycharm={"name": "#%% md\n"}
# Now we can compare the result of the distance computation and we find that country_a is
# closer to country_b than country_d (27014.7 < 58340.1).
#
# We can further confirm this result by looking at the graph above: the orange line
# (country_b) is closer to the blue line (country_a) than the red line (country_d) is.
# <br>
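# The ranking above can be reproduced with plain numpy, since the euclidean metric is
# just the L2 norm of the pointwise differences. The series below are made-up
# stand-ins for the variables above (not the macroeconomic dataset), chosen so that
# the first two are close:

```python
import numpy as np

# Hypothetical stand-ins for country_a, country_b and country_d above
rng = np.random.RandomState(0)
country_a = np.linspace(2700, 4000, 50)
country_b = country_a + 30 * rng.randn(50)  # similar to country_a
country_d = np.linspace(6000, 9000, 50)     # far from country_a

def euclidean(x, y):
    # Plain Euclidean (L2) distance between two equal-length series
    return np.sqrt(np.sum((x - y) ** 2))

d_ab = euclidean(country_a, country_b)
d_ad = euclidean(country_a, country_d)
print(d_ab, d_ad)
```

# Comparing d_ab and d_ad this way gives the same kind of ranking as the `distance`
# calls above.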
# <h3>Different metric parameters</h3>
# Above we used the metric "euclidean". While euclidean distance is appropriate for
# simple examples such as the one above, it has been shown to be inadequate when we
# have larger and more complex time series (particularly multivariate ones). While the
# merits of each distance won't be described here (see the documentation for
# descriptions of each), a large number of specialised time series distances have been
# implemented to achieve better accuracy in distance computation. These are:
# <br><br>
# 'euclidean', 'squared', 'dtw', 'ddtw', 'wdtw', 'wddtw', 'lcss', 'edr', 'erp'
# <br><br>
#
# All of the above can be used as a metric parameter. This will now be demonstrated:
# + pycharm={"name": "#%%\n"}
print("Euclidean distance: ", distance(country_a, country_d, metric="euclidean"))
print("Squared euclidean distance: ", distance(country_a, country_d, metric="squared"))
print("Dynamic time warping distance: ", distance(country_a, country_d, metric="dtw"))
print(
"Derivative dynamic time warping distance: ",
distance(country_a, country_d, metric="ddtw"),
)
print(
"Weighted dynamic time warping distance: ",
distance(country_a, country_d, metric="wdtw"),
)
print(
"Weighted derivative dynamic time warping distance: ",
distance(country_a, country_d, metric="wddtw"),
)
print(
"Longest common subsequence distance: ",
distance(country_a, country_d, metric="lcss"),
)
print(
"Edit distance for real sequences distance: ",
distance(country_a, country_d, metric="edr"),
)
print(
"Edit distance for real penalty distance: ",
distance(country_a, country_d, metric="erp"),
)
# + [markdown] pycharm={"name": "#%% md\n"}
# While many of the above use euclidean distance at their core, they change how it is
# used to account for various problems we encounter with time series data, such as
# alignment, phase, shape and dimensionality. As mentioned, for specific details on how
# best to use each distance and what it does, see the documentation for that distance.
#
# <h3>Custom parameters for distances</h3>
# In addition, each distance has a different set of parameters. How these are passed to
# the 'distance' function will now be outlined using the 'dtw' example. As stated, for
# the specific parameters of each distance please refer to the documentation.
# <br><br>
# Dtw is an O(n^2) algorithm, and as such a point of focus has been optimising the
# algorithm. One proposal to improve performance is to restrict the potential alignment
# path by putting a 'bound' on the values to consider when looking for an alignment.
# While many bounding algorithms have been proposed, the two most popular are
# Sakoe-Chiba bounding and Itakura parallelogram bounding. How these two work will
# briefly be outlined using the LowerBounding class:
# + pycharm={"name": "#%%\n"}
from sktime.distances import LowerBounding
x = np.zeros((6, 6))
y = np.zeros((6, 6)) # Create dummy data to show the matrix
LowerBounding.NO_BOUNDING.create_bounding_matrix(x, y)
# + [markdown] pycharm={"name": "#%% md\n"}
# Above shows a matrix that maps each index in 'x' to each index in 'y'. Dtw without
# bounding will consider all of these indexes (in-bound indexes are those with finite
# values (0.0)). However, we can change the indexes that are considered using
# Sakoe-Chiba bounding like so:
# + pycharm={"name": "#%%\n"}
LowerBounding.SAKOE_CHIBA.create_bounding_matrix(x, y, sakoe_chiba_window_radius=0.5)
# + [markdown] pycharm={"name": "#%% md\n"}
# The matrix that is produced follows the same concept as no bounding, where each pair
# of indexes between x and y is assigned a value. If the value is finite (0.0) it is
# considered in bound, and if infinite, out of bounds. Using the Sakoe-Chiba bounding
# matrix with the window radius given above, we get a diagonal band from 0,0 to 5,5
# where values inside the window are 0.0 and values outside are infinite. This reduces
# the compute time of dtw as we are considering 12 fewer potential indexes (12 values
# are infinite).
# <br><br>
# As mentioned, there are other bounding techniques that use different 'shapes' over
# the matrix, such as the Itakura parallelogram, which, as the name implies, produces
# a parallelogram shape over the matrix.
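# As a rough, self-contained sketch (not sktime's implementation), a Sakoe-Chiba band
# matrix can be built by marking cells within a given radius of the diagonal as
# in-bound (0.0) and everything else as out of bounds (infinite); the function name
# and the integer-radius convention below are illustrative assumptions:

```python
import numpy as np

def sakoe_chiba_matrix(n, m, radius):
    # Cells within `radius` of the (interpolated) diagonal are in-bound (0.0);
    # everything else is out of bounds (np.inf), mirroring the convention above.
    bm = np.full((n, m), np.inf)
    for i in range(n):
        j_mid = int(round(i * (m - 1) / max(n - 1, 1)))
        lo, hi = max(0, j_mid - radius), min(m, j_mid + radius + 1)
        bm[i, lo:hi] = 0.0
    return bm

print(sakoe_chiba_matrix(6, 6, 1))
```

# Widening the radius trades compute time for a larger set of alignments considered.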
# + pycharm={"name": "#%%\n"}
LowerBounding.ITAKURA_PARALLELOGRAM.create_bounding_matrix(x, y, itakura_max_slope=0.3)
# + [markdown] pycharm={"name": "#%% md\n"}
# With that basic introduction to bounding algorithms and why we may want to use them,
# how do we use them in our distance computation? There are two ways:
# + pycharm={"name": "#%%\n"}
# Create two random unaligned time series to better illustrate the difference
rng = np.random.RandomState(42)
n_timestamps, n_features = 10, 19
x = rng.randn(n_timestamps, n_features)
y = rng.randn(n_timestamps, n_features)
# First we can specify the bounding matrix to use either via enum or int (see
# documentation for potential values):
print(
"Dynamic time warping distance with Sakoe-Chiba: ",
distance(x, y, metric="dtw", lower_bounding=LowerBounding.SAKOE_CHIBA, window=1.0),
) # Sakoe chiba
print(
"Dynamic time warping distance with Itakura parallelogram: ",
distance(x, y, metric="dtw", lower_bounding=2, itakura_max_slope=0.2),
) # Itakura parallelogram using int to specify
print(
    "Dynamic time warping distance with no bounding: ",
    distance(x, y, metric="dtw", lower_bounding=LowerBounding.NO_BOUNDING),
)  # No bounding
# + pycharm={"name": "#%%\n"}
| examples/distances.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import math as math
import scipy as sci
# +
df = pd.read_csv(r'C:\Users\<NAME>\Documents\Python Scripts\Data Science\DS take home challenge\DataSet\Spanish Translation A\Translation_Test\test_table.csv')
df2 = pd.read_csv(r'C:\Users\<NAME>\Documents\Python Scripts\Data Science\DS take home challenge\DataSet\Spanish Translation A\Translation_Test\user_table.csv')
# -
df.head()
df2.head()
##DF ID is unique
dfunique=df.groupby('user_id').nunique()
dfunique[dfunique['user_id']!=1]
##DF2 ID is unique
dfunique=df2.groupby('user_id').nunique()
dfunique[dfunique['user_id']!=1]
# another way to get counts
df.user_id.value_counts()
dfc = pd.merge(df,df2, on = 'user_id', how = 'inner')
dfc.head(10)
df.shape
df2.shape
#looks like some visitor information is in the user_table
dfc.shape
nospain= dfc[dfc['country'] != 'Spain']
old = dfc[dfc['test'] == 0]
oldcon=old.groupby('country')['conversion'].mean()
new = dfc[dfc['test'] == 1]
newcon = new.groupby('country')['conversion'].mean()
conv = pd.merge(oldcon,newcon,on='country',how='left')
conv
# +
plt.figure(figsize=(14,5))
sns.barplot(x = 'country', y = 'conversion', hue = 'test', data = nospain)
##8 out of 16 markets experienced a decrease in conversion rate after the change
## Increase: Mexico, Uruguay, Nicaragua, Peru, Chile, Paraguay, Panama, Costa Rica
## Decrease: Venezuela, Bolivia, Colombia, El Salvador, Argentina, Ecuador, Guatemala, Honduras
# +
##t-test
##small p-value rejects the null that the 2 means are the same, confirming the effect is negative
sci.stats.ttest_ind(old[old['country'] != 'Spain']['conversion'],new[new['country'] != 'Spain']['conversion'])
# +
## Doesn't necessarily mean the local translation is bad; there might be other factors, so we need to segment further to see
# -
#time didn't have an effect; the same effect appears on different dates
plt.figure(figsize=(14,5))
sns.barplot(x = 'date', y = 'conversion', hue = 'test', data = nospain)
# +
plt.figure(figsize=(14,5))
sns.barplot(x = 'country', y = 'test', data = nospain)
## looks like Uruguay and Argentina have a lot more test than control: an uneven split
# -
nospain.groupby('country')['test'].mean()
# +
noSUA = nospain[-nospain['country'].isin (['Uruguay','Argentina'])]
# -
plt.figure(figsize=(14,5))
sns.barplot(x = 'country', y = 'test', data = noSUA)
# +
noSUA.groupby('test')['conversion'].mean()
##after deleting those countries, the conversion rate is actually higher in the test group
# +
sci.stats.ttest_ind(noSUA[noSUA['test']==0]['conversion'], noSUA[noSUA['test']==1]['conversion'])
## t-test showing that localizing the language has no effect
# -
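# Since conversion is a binary outcome, a two-proportion z-test is a common
# alternative to the t-test used above. The sketch below is self-contained (stdlib
# only), and the conversion counts are made up, not taken from this dataset:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    # Two-sided z-test for equality of two conversion rates,
    # using the pooled-proportion standard error.
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# made-up counts: 490 conversions out of 10000 vs 430 out of 10000
z, p = two_proportion_ztest(490, 10000, 430, 10000)
print(z, p)
```

# The same function could be applied to the test/control conversion counts per country.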
plt.figure(figsize=(14,5))
sns.barplot(x = 'browser_language', y = 'conversion', hue = 'test', data = noSUA)
| 02_Spanish Translation .ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # MULTILINGUAL STATISTICAL TERMINOLOGY DATASET
# ___
# ### About Notebook
# In this notebook we will do an analysis of the languages; mostly it will be visuals/graphs.
# ___
# ### Load Packages
# Let's load the packages that we need to achieve the goal above
import os
import re
import numpy as np
import pandas as pd
from os import path
from PIL import Image
from io import StringIO
import matplotlib.pyplot as plt
from wordcloud import WordCloud, STOPWORDS
# %matplotlib inline
pd.options.display.max_colwidth = 100
# ### Load Data
# ___
# Let's load the data from CSV
df = pd.read_csv('../data/csv/Multilingual_Statistical_-terminology_2013.csv')
# ### Partial View of data
# ___
# Let's check how the data looks
df.head()
# #### Categories
# From the previous exercise we know our data had 32 categories **(they were called chapters in the PDF)**
### Let's display the number of categories; we will start by removing leading spaces
#print("Number of Categories: ", len(df.Category.str.strip().unique()))
print("Number of Categories: ", len(df.Category.unique()))
### Let's remove leading and trailing spaces
df.Category = df.Category.str.strip()
# #### Visualize Categories
# ___
# Visual presentation helps in analyzing data; pictures are better than looking at long texts and numbers
# +
import seaborn as sns
from textwrap import wrap
def horizontal_bar_chart(df, x, y, label, figsize=(16, 16)):
"""
    This customizes a horizontal bar chart from seaborn (sns as aliased above)
Args:
df: dataframe
x: x-axis column
y: y-axis column
label: string to label the graph
figsize: figure size to make chart small or big
Returns:
None
"""
sns.set(style="whitegrid")
fig, ax = plt.subplots(figsize=figsize)
ax = sns.barplot(x=x, y=y, data=df,
label=label, color="b", palette=["#00008b"])
total = df.values[:, 1].sum()
for i, v in enumerate(df.values[:, 1]):
ax.text(v + 3, i + .25, str(format(v / total * 100, '.2f')) + '% (' + str(v) + ')')
labels = [ '\n'.join(wrap(l, 20)) for l in df.values[:, 0]]
ax.set_yticklabels(labels)
x_value=['{:,.0f}'.format(x/total * 100) + '%' for x in ax.get_xticks()]
plt.xticks(list(plt.xticks()[0]) + [10])
ax.set_xticklabels(x_value)
plt.ylabel('')
plt.xlabel('')
sns.despine(left=True, bottom=True)
### Word cloud Custom function ####
def plot_wordcloud(df, column, name):
""" This is custom word cloud that is ready to
be used with any picture you would like to use for word cloud
Args:
df: pandas dataframe
column: str -> target column
        name: str -> name of the picture file without extension (.png preferred)
Return:
None
"""
d = path.dirname(__file__) if "__file__" in locals() else os.getcwd()
mask = np.array(Image.open(path.join(d, '../images/' + name + ".png")))
stopwords = set(STOPWORDS)
wc = WordCloud(background_color="white", max_words=10000, mask=mask,
stopwords=stopwords, contour_width=3, contour_color='white')
wc.generate(' '.join(df[column]))
plt.figure(figsize=(16, 12))
plt.imshow(wc, interpolation='bilinear')
plt.axis("off")
plt.show()
# -
horizontal_bar_chart(pd.DataFrame(df.Category.value_counts(ascending=True)[::-1].reset_index()),
'Category', 'index', 'Category')
df[df.Category == 'Social conditions / Personal services'].head()
df.Sepedi = df.Sepedi.astype(str)
plot_wordcloud(df[df.Category == 'Social conditions / Personal services'], 'Sepedi', 'Comment')
plot_wordcloud(df[df.Category == 'Social conditions / Personal services'], 'English_Term', 'Comment')
# ## Classification
# ___
# Let's start with a simple classification and see what kind of results we get
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, accuracy_score, auc, confusion_matrix
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.model_selection import train_test_split
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(df.Sepedi)
X_train, X_val, y_train, y_val = train_test_split(X, df.Category, test_size=0.20)
vectorizer.get_feature_names()[:30]
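# As a rough illustration of what `CountVectorizer` is doing under the hood
# (tokenize, build a vocabulary, count tokens per document), here is a minimal
# pure-Python sketch; it mimics, but is not, sklearn's implementation:

```python
import re
from collections import Counter

def count_vectorize(docs):
    # Minimal bag-of-words: tokenize, build a sorted vocabulary,
    # then count each token per document.
    token = re.compile(r"\b\w\w+\b")  # tokens of two or more word characters
    tokenized = [token.findall(d.lower()) for d in docs]
    vocab = sorted({t for doc in tokenized for t in doc})
    rows = []
    for doc in tokenized:
        counts = Counter(doc)
        rows.append([counts.get(t, 0) for t in vocab])
    return vocab, rows

# made-up two-document corpus
vocab, X = count_vectorize(["go sepedi go", "learn sepedi"])
print(vocab)
print(X)
```

# Each row of X is one document's term counts over the shared vocabulary, which is
# exactly the shape of matrix the classifiers below are trained on.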
# ### Training and comparing
# ___
# Train and compare two classifiers. We will start by fitting the two classifiers, namely `LogisticRegression` and `RandomForestClassifier`, and compare the results before we do proper translation in the next notebook
clf_lg = LogisticRegression().fit(X_train, y_train)
clf_rf = RandomForestClassifier().fit(X_train, y_train)
# #### Making predictions
lg_y_pred = clf_lg.predict(X_val)
rf_y_pred = clf_rf.predict(X_val)
print('Logistic Regresion Report', accuracy_score(y_val, lg_y_pred))
print('Random Forest Report', accuracy_score(y_val, rf_y_pred))
import warnings
warnings.filterwarnings('ignore')
print('Logistic Regression')
print(classification_report(y_val, lg_y_pred))
print('Random Forest')
print(classification_report(y_val, rf_y_pred))
df_cm = pd.DataFrame(confusion_matrix(y_val, rf_y_pred))
plt.figure(figsize = (16,12))
sns.heatmap(df_cm, annot=True)
# ### This is just a start; go ahead and do more. Stay tuned for the next translation notebook
| notebooks/16-12-2019-analysing-languages.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# @[TOC]
# ## Libraries
import os
import csv
import requests
import xlwt
import re
import json
import time
# ### Configuration
# + jupyter={"source_hidden": true}
#modify according to your own browser information
headers = {
'User-Agent':'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Mobile Safari/537.36'
,
'Cookie': '_T_WM=67706607048; WEIBOCN_FROM=1110006030; ALF=1582777481; SCF=AqQddu0eGCw6Wh1xPsTyigWBFJH-P0ACsyLUFzNakys5tF6kBCjVpv4O6BDEGM4gShv5JHfuyjMoLBKfT5-Xwsc.; SUB=_2A25zK8jDDeRhGeNP41UT9yjIyj6IHXVQ1-iLrDV6PUJbktAKLUHSkW1NTk4PgJoxaitdQXaQL6znAIMdvJJs4-5l; SUBP=0033WrSXqPxfM725Ws9jqgMF55529P9D9W5q.Hx0pIs7PKpACzdnFYSZ5JpX5K-hUgL.Fo-p1hMES0qXeKz2dJLoIpUeBc8EdFH8SC-4BbHFSFH81F-RSF-4Sntt; SUHB=0qjEKc2Va_YMLH; SSOLoginState=1580185747; MLOGIN=1; XSRF-TOKEN=<PASSWORD>; M_WEIBOCN_PARAMS=luicode%3D10000011%26lfid%3D2304135671786192_-_WEIBO_SECOND_PROFILE_WEIBO%26fid%3D2304135671786192_-_WEIBO_SECOND_PROFILE_WEIBO%26uicode%3D10000011'
#'ALF=1581501545; _T_WM=67706607048; H5_wentry=H5; backURL=https%3A%2F%2Fm.weibo.cn%2Fapi%2Fcomments%2Fshow%3Fid%3DIr5j4iRXW%26page%3D3; XSRF-TOKEN=<PASSWORD>; WEIBOCN_FROM=1110006030; MLOGIN=1; SSOLoginState=1580006602; SCF=AqQddu0eGCw6Wh1xPsTyigWBFJH-P0ACsyLUFzNakys5zFt06rZeA1gEI0iP7HfWxZntbpMr8WTWhrxEdSVGB58.; SUB=_2A25zKIyaDeRhGeNP41UT9yjIyj6IHXVQ0hTSrDV6PUJbktAKLRL-kW1NTk4PgHLYgtoeuxFzuGDIDcybzoEoXvq9; SUBP=0033WrSXqPxfM725Ws9jqgMF55529P9D9W5q.Hx0pIs7PKpACzdnFYSZ5JpX5KzhUgL.Fo-p1hMES0qXeKz2dJLoIpUeBc8EdFH8SC-4BbHFSFH81F-RSF-4Sntt; SUHB=0IIlrfWMMkVsTI; M_WEIBOCN_PARAMS=uicode%3D20000174'
}
# -
#file save path
addrRoot='C:/Users/cascara/Desktop/seedcup/csv/blog/fans/'
# +
#whether to fetch the reposters' detailed personal info
getConcreteInfoList=True#False#True#
isLogin=False#True#True
#whether to log in when collecting personal info
#string printed when no info exists
infoNoExistStr='未知'
# -
#whether to process the weibo text content
processText = True
# ### Build the table and choose the data to collect (modify here to get the info you want)
# + active=""
# The blogger's info is collected separately. For a repost: reposts_count, comments_count, number of likes attitudes_count, number of followers followers_count
#
# For the original post retweeted_status: reposts_count, comments_count, number of likes attitudes_count
# For the original user user: username screen_name, id, number of followers followers_count
# + active=""
# Scope and ordering of the detailed personal info to fetch
# + jupyter={"source_hidden": true}
#scope and ordering of the detailed personal info to fetch
infoRangeDict={
    '性别':True,       # gender
    '所在地':True,     # location
    '生日':False,      # birthday
    '家乡':False,      # hometown
    '公司':True,       # company
    '大学':True,       # university
    '昵称':False,      # nickname
    '简介':False,      # bio
    '注册时间':False,  # registration date
    '阳光信用':False,  # Sunshine Credit score
    #shown when no info exists
    'infoNoExist':'未知'
}
# + active=""
# Scope and ordering of the blogger info to fetch
# + jupyter={"source_hidden": true}
#scope and ordering of the blogger info to fetch
userRangeDict={
'id':True,# 1323527941
'screen_name': True,#"Vista看天下"
'profile_image_url': False,#"https://tva2.sinaimg.cn/crop.0.0.180.180.180/4ee36f05jw1e8qgp5bmzyj2050050aa8.jpg?KID=imgbed,tva&Expires=1580290462&ssig=xPIoKDRR56"
'profile_url':False,# "https://m.weibo.cn/u/1323527941?uid=1323527941&luicode=10000011&lfid=1076031323527941"
    'statuses_count': False,#number of weibo posts 77256
'verified': False,#true
'verified_type':False,# 3
'verified_type_ext': False,#0
'verified_reason': False,#"《Vista看天下》官方微博"
'close_blue_v': False,#false
'description': True,#"一个有趣的蓝V"
'gender': True,# "m"
'mbtype': False,#12
'urank': False,#48
'mbrank': False,#6
'follow_me':False,# false
'following':False,# false
'followers_count': True,#19657897
'follow_count': True,#1809
'cover_image_phone': False,#"https://tva1.sinaimg.cn/crop.0.0.640.640.640/549d0121tw1egm1kjly3jj20hs0hsq4f.jpg"
'avatar_hd': False,#"https://ww2.sinaimg.cn/orj480/4ee36f05jw1e8qgp5bmzyj2050050aa8.jpg"
'like': False,#false
'like_me': False,#false
'badge': False,#{enterprise: 1, gongyi_level: 1, bind_taobao: 1, dzwbqlx_2016: 1, follow_whitelist_video: 1,…}
    #shown when no info exists
'infoNoExist':'未知'
}
# -
# ### File naming
# + active=""
# Usage example:
# tweeter='王'
# fp = open(addrFile(tweeter,''),'w+',newline='',encoding='utf-16')
# fp.close()
#
# Library functions used:
# os
# -
def addrFile(tweeter,suffix):
path=addrRoot+str(tweeter)+'/'
if os.path.exists(path) is False:
os.makedirs(path)
address=path+tweeter+suffix+'.csv'
return address
# ### Generate info titles
# + jupyter={"source_hidden": true} active=""
# Generates a title for each entry in Dict whose value is True, prefixed with prefix
#
# Usage example:
# print(getInfoTitle(blogRangeDict,'原文'))
# Printed result:
# ['原文created_at', '原文text', '原文reposts_count', '原文comments_count', '原文attitudes_count']
# + jupyter={"source_hidden": true}
def getInfoTitle(Dict,prefix):
titleList=[]
for item in Dict:
if(Dict.get(item) is True):
titleList.append(prefix+item)
return (titleList)
# -
# ## Utility class for removing unwanted links, tags, etc. from the scraped text
# + jupyter={"source_hidden": true}
#utility class for removing unwanted links, tags, etc. from the scraped text
class Tool:
deleteImg = re.compile('<img.*?>')
newLine =re.compile('<tr>|<div>|</tr>|</div>')
deleteAite = re.compile('//.*?:')
deleteAddr = re.compile('<a.*?>.*?</a>|<a href='+'\'https:')
deleteTag = re.compile('<.*?>')
deleteWord = re.compile('回复@|回覆@|回覆|回复')
@classmethod
def replace(cls,x):
x = re.sub(cls.deleteWord,'',x)
x = re.sub(cls.deleteImg,'',x)
x = re.sub(cls.deleteAite,'',x)
x = re.sub(cls.deleteAddr, '', x)
x = re.sub(cls.newLine,'',x)
x = re.sub(cls.deleteTag,'',x)
return x.strip()
# -
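# To see what the cleanup does, here is a self-contained demo on a made-up snippet;
# it re-declares the same regexes as the Tool class above so it runs on its own:

```python
import re

class Tool:
    # Self-contained copy of the cleanup regexes defined above
    deleteImg = re.compile('<img.*?>')
    newLine = re.compile('<tr>|<div>|</tr>|</div>')
    deleteAite = re.compile('//.*?:')
    deleteAddr = re.compile('<a.*?>.*?</a>|<a href=' + '\'https:')
    deleteTag = re.compile('<.*?>')
    deleteWord = re.compile('回复@|回覆@|回覆|回复')

    @classmethod
    def replace(cls, x):
        # Apply the substitutions in the same order as above
        x = re.sub(cls.deleteWord, '', x)
        x = re.sub(cls.deleteImg, '', x)
        x = re.sub(cls.deleteAite, '', x)
        x = re.sub(cls.deleteAddr, '', x)
        x = re.sub(cls.newLine, '', x)
        x = re.sub(cls.deleteTag, '', x)
        return x.strip()

# made-up snippet mixing a reply prefix, a link, an image and a div
sample = '回复@user:<a href="https://t.cn/abc">link</a> hello <img src="pic.jpg"> world<div>!</div>'
print(Tool.replace(sample))
```

# The reply prefix, anchor, image and div markup are all stripped, leaving plain text.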
# ### Build the urls of the followers pages
###followers pages of a given Weibo account
def contentURL(id,pages):
    urls=[]
    for page in pages:
        if page not in [0,1]:
            urls+=['https://m.weibo.cn/api/container/getIndex?containerid=231051_-_followers_-_'+str(id)+'&page='+str(page)]
    return urls
# + jupyter={"source_hidden": true}
#convert the dict-format info into the required info list
def getInfoList(infoDict,rangeDict):
infoList=[]
for item in rangeDict:
if rangeDict.get(item) is True:
content=infoDict.get(item)
if content is not None:
                #process the weibo text content
if item =='text':
if processText is True:
content=Tool.replace(content)
infoList.append(content)
else:
infoList.append(rangeDict['infoNoExist'])
return infoList
# -
# ### Collect all the followers
###operate on the given list of urls
###filter out the reposted weibo content to operate on
def reRatio(urls,csvWriter):
notEnd= True
    fansUserTitle=getInfoTitle(userRangeDict,'')#follower info titles
    infoTitle=getInfoTitle(infoRangeDict,'')#profile page info titles of the original blogger
    #write the table header
if getConcreteInfoList is True:
csvWriter.writerow(fansUserTitle+infoTitle)
else:
csvWriter.writerow(fansUserTitle)
for url in urls:
response = requests.get(url,headers=headers)
resjson = json.loads(response.text)
if resjson['ok'] ==0:
print(url)
notEnd=False
break
cards=resjson['data']['cards']
if(len(cards)==1):
try:
cards=cards[0]['card_group']
except:
print(url)
print(cards)
notEnd=False
break
        #iterate over all the entries on one page
for card in cards:
try:
                #the follower's personal info
fansUserInfoDict=card['user']
infoList=[]
                #the follower's data
fansUserInfoList=getInfoList(fansUserInfoDict,userRangeDict)
infoList+=fansUserInfoList
fansUserID=fansUserInfoDict['id']
                #fansUserID is the id of the follower's account
                #info collection for this id can be done here
if getConcreteInfoList is True:
infoDict=getInfo(isLogin,fansUserID)
otherInfoList=getInfoList(infoDict,infoRangeDict)
infoList+=otherInfoList
#print(infoList)
                #save the data to csv
csvWriter.writerow(infoList)
                #keep collecting this blogger's influence
                #break
except:
pass
        #delay to avoid triggering anti-scraping measures
time.sleep(3)
return notEnd
# ### Extract the info from a profile page
# + jupyter={"source_hidden": true} active=""
# Usage example:
# response = requests.get(url)
# txt=response.text
# print(drillInfo(txt))
#
# The result is as follows:
# {'昵称': '甘肃华熙文化',
# '简介': '【马丛珊.禅绣艺术,世界纹绣大师学院甘肃分院】服务生命之美;践行匠心为本,艺心创造,慈心发扬校训,微信mashan5374,☎13109439909',
# '性别': '女',
# '所在地': '甘肃 兰州'}
# + jupyter={"source_hidden": true}
def drillInfo(txt):
keyInfo={}
try:
resjson = json.loads(txt)
infodata = resjson.get('data')
cards = infodata.get('cards')
for l in range(0,len(cards)):
temp = cards[l]
card_group = temp.get('card_group')
            #determine the type of info obtained
for card in card_group:
                #put the info into the dict
name=card.get('item_name')
if name is not None:
content=card.get('item_content')
keyInfo[name]=content
except:
pass
return keyInfo
# -
# ### Build the url for accessing a profile page by id
# + jupyter={"source_hidden": true}
def infoUrl(id):
url = "https://m.weibo.cn/api/container/getIndex?containerid=230283"+str(id)+"_-_INFO"
return url
# -
# ## Scrape the personal info of the blogger with a given id
# + jupyter={"source_hidden": true} active=""
# To avoid repeated scraping, the raw response is saved to a file named as info-card length (2 or 5) + 'id' + blogger id
# Without login (2) it contains gender and location
# With login (5) it contains the full info: gender, location, zodiac sign, university, company, etc.
# If the needed file exists, the info is read from the file; otherwise it is scraped and the file is saved
#
# If scraping fails, -1 is returned
#
# Library functions used:
# os
# + jupyter={"source_hidden": true}
def getInfo(state,id):
address=addrRoot+'info/'+str(state)+'id'+str(id)+'.txt'
path=addrRoot+'info/'
if os.path.exists(path) is False:
os.makedirs(path)
try:
        #the file already exists
if(os.path.exists(address)==True):
fp = open(address,'r',encoding='utf-16')
txt=fp.read()
info=drillInfo(txt)
fp.close()
else:
fp = open(address,'w+',encoding='utf-16')
url=infoUrl(id)
if state is True:
response = requests.get(url,headers=headers)
else:
response = requests.get(url)
txt=response.text
fp.write(response.text)
info=drillInfo(txt)
fp.close()
except:
info=-1
return info
# + active=""
# Fetch a specific piece of personal info
# + jupyter={"source_hidden": true}
def getExatInfo(item,state,id):
info=getInfo(state,id)
content=info.get(item)
if content is not None:
return content
else:
return infoNoExistStr
# +
### Build access to the trending page
# -
def downloadData(id):
tweeter=getExatInfo('昵称',2,int(id))
batch=0
while(1):
fileAddr=addrFile(tweeter,'batch'+str(batch))
if os.path.exists(fileAddr) is True:
            print(fileAddr+' already exists, skipping collection')
else:
            print('File will be written to: '+fileAddr)
fp = open(fileAddr,'w+',newline='',encoding='utf-16')
writer=csv.writer(fp)
if reRatio(contentURL(id,range(20*batch,20*(batch+1))),writer) is False:
fp.close()
break
fp.close()
            print('Batch '+str(batch)+' of data recorded')
batch+=1
# + active=""
#
# #陈赫
# id=1574684061
# #MorningGlory_肖战资源博
# id=5735501478
#
# #靳东
# id=1093897112
# #李健
# id=1744395855
#
# #干部
# id=6472269230
#
# #陶勇
# id=5899876484
#
# #姚晨
# id=1266321801
#
# #鞠婧祎
# id=3669102477
#
# #韩红
# #id=1922542315
#
#
# #穿帮君
# id=5671786192
#
# #汉堡爸爸
# id=2784421224
#
# #蔡徐坤
#
# id=1776448504
#
#
# #林书豪
# id=2106855375
#
# #干部
# id=6472269230
#
# #任嘉伦
# id=3800468188
#
# #肖战
# id=1792951112
#
#
# #迪丽热巴
# id=1669879400
#
#
# #科比
# id=3264072325
#
# #雷军
# 1749127163
# +
id=input('Blogger id: ')
downloadData(id)
# -
| fansMap/acquaireData.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + outputHidden=false inputHidden=false
# %load_ext autoreload
# %autoreload 2
# + outputHidden=false inputHidden=false
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.signal import convolve2d
from scipy.optimize import minimize
from scipy.special import gamma
import scipy.stats as st
from sklearn.metrics import mean_squared_error
from gzbuilder_analysis.parsing import sanitize_param_dict
import gzbuilder_analysis.rendering as rendering
from gzbuilder_analysis.rendering.sersic import sersic2d, oversampled_sersic_component, _b
from gzbuilder_analysis.fitting.jupyter import UpdatableDisplay
import lib.galaxy_utilities as gu
from tqdm import tqdm
from numba import jit, prange, vectorize
# + outputHidden=false inputHidden=false
sid_list = np.loadtxt('lib/subject-id-list.csv', dtype='u8')
simard_df = pd.read_csv('lib/simard-catalog.csv', index_col=0)
best_models = pd.read_pickle('lib/best_individual.pickle')
# + outputHidden=false inputHidden=false
PARAMS = np.array(('roll', 'rEff', 'axRatio', 'i0'))
def comp_to_p(d):
mu = np.array(d['mu'])
return np.concatenate((mu, [d[k] for k in PARAMS]))
def comp_from_p(p):
return dict(mu=p[:2], **{k: v for k, v in zip(PARAMS, p[2:])})
# + outputHidden=false inputHidden=false
s0 = 'float32('+('float32,'*9)+')'
s1 = 'float64('+('float64,'*9)+')'
@vectorize([s0, s1], target='parallel')
def _sersic_func(x, y, mux, muy, roll, rEff, axRatio, i0, n):
out = 0.0
ds = [-0.4, -0.2, 0.0, 0.2, 0.4]
for dx in ds:
for dy in ds:
out += sersic2d(
x + dx, y + dy,
mu=(mux, muy), roll=roll, rEff=rEff,
axRatio=axRatio, i0=i0, c=2,
n=n
) / 25
return out
# + outputHidden=false inputHidden=false
def bulge_disk_model(p, cx, cy):
disk = _sersic_func(cx, cy, *p[:6], 1)
bulge = _sersic_func(cx, cy, *p[6:12], 4)
# disk = sersic2d(cx, cy, mu=p[:2], roll=p[2], rEff=p[3],
# axRatio=p[4], c=2, i0=p[5], n=1.0)
# bulge = sersic2d(cx, cy, mu=p[6:8], roll=p[8], rEff=p[9],
# axRatio=p[10], c=2, i0=p[11], n=4.0)
return disk + bulge
# + outputHidden=false inputHidden=false
def f(p, target, cx, cy, psf=np.ones((1,1)), pixel_mask=np.ones(1),
loss=mean_squared_error):
im = bulge_disk_model(p, cx, cy)
im_psf = convolve2d(
im, psf, mode='same', boundary='symm'
)
return loss(target * pixel_mask, im_psf * pixel_mask)
# + outputHidden=false inputHidden=false
display(_m)
display(p0)
# + outputHidden=false inputHidden=false
plt.imshow(oversampled_sersic_component(m['disk'], image_size))
# + outputHidden=false inputHidden=false
i = best_models.index[0]
m = best_models.loc[i]['Model']
p0 = np.concatenate((comp_to_p(m['disk']), comp_to_p(m['bulge'])))
_m = {'disk': dict(m['disk']), 'bulge': dict(m['bulge']), 'bar': None, 'spiral': np.array([])}
_m['bulge']['n'] = 4
image_size = gu.get_diff_data(i)['width']
psf = gu.get_psf(i)
cx, cy = np.mgrid[0:image_size, 0:image_size]
im = bulge_disk_model(p0, cy, cx)
im2 = rendering.calculate_model(
_m, psf=None, image_size=image_size
)
f, ax = plt.subplots(ncols=3, figsize=(20, 5))
ax[0].imshow(im)
ax[1].imshow(im2)
c = ax[2].imshow(im - im2)
plt.colorbar(c, ax=ax)
# + outputHidden=false inputHidden=false
BOUNDS = [
(0, 512), (0, 512), # mux, muy
(-np.inf, np.inf), (-np.inf, np.inf), # roll, rEff
(0, np.inf), # axRatio
(0, np.inf), # i0
] * 2
# + outputHidden=false inputHidden=false
def get_fitted_bd_model(subject_id, progress=True):
psf = gu.get_psf(subject_id)
diff_data = gu.get_diff_data(subject_id)
pixel_mask = 1 - np.array(diff_data['mask'])[::-1]
galaxy_data = np.array(diff_data['imageData'])[::-1]
m = best_models.loc[subject_id]['Model']
if m['disk'] is None or m['bulge'] is None:
return None
    p0 = np.concatenate((comp_to_p(m['disk']), comp_to_p(m['bulge'])))
image_size = galaxy_data.shape[0]
cx, cy = np.mgrid[0:image_size, 0:image_size]
if progress:
with tqdm(desc='Fitting model', leave=False) as pbar:
def update_bar(p):
pbar.update(1)
res = minimize(
f, p0, callback=update_bar,
args=(galaxy_data, cx, cy, psf, pixel_mask),
bounds=BOUNDS
)
    else:
        res = minimize(
            f, p0,
            args=(galaxy_data, cx, cy, psf, pixel_mask),
            bounds=BOUNDS
        )
fit_disk = sanitize_param_dict(
comp_from_p(res['x'][:6].tolist())
)
fit_bulge = sanitize_param_dict(
comp_from_p(res['x'][6:].tolist())
)
return fit_disk, fit_bulge, res
# + outputHidden=false inputHidden=false
out = pd.Series([], dtype=object).rename('fit_results')
# + outputHidden=false inputHidden=false
d = UpdatableDisplay('')
for i, subject_id in enumerate(sid_list):
d('{} / {} : {}'.format(i, len(sid_list), subject_id))
if subject_id in out.index:
continue
try:
out[subject_id] = get_fitted_bd_model(subject_id)
except KeyboardInterrupt:
break
# -
# Calculate bulge to total fractions:
# + outputHidden=false inputHidden=false
def get_flux(**comp):
re = comp['rEff'] / 3
Se = comp['i0'] / 2
n = comp['n']
k = _b(n)
q = comp['axRatio']
return 2 * np.pi * re**2 * Se * np.exp(k) * n * k**(-2*n) * gamma(2*n) / q
def get_bt(disk, bulge):
b = get_flux(n=4, **bulge)
t = b + get_flux(n=1, **disk)
return b / t
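# As a quick sanity check of the bulge-to-total calculation, the sketch below applies
# the same total-flux formula with the common b_n approximation standing in for `_b`;
# the component values are made up, not fitted results:

```python
import math

def sersic_b(n):
    # Widely used approximation to the Sersic b_n parameter
    return 2 * n - 1 / 3 + 4 / (405 * n)

def sersic_flux(r_eff, i0, n, ax_ratio):
    # Same total-flux formula as get_flux above, with re = r_eff / 3 and
    # Se = i0 / 2, using the b_n approximation in place of _b.
    re, se = r_eff / 3, i0 / 2
    k = sersic_b(n)
    return (2 * math.pi * re**2 * se * math.exp(k) * n
            * k**(-2 * n) * math.gamma(2 * n) / ax_ratio)

# made-up disk (n=1) and bulge (n=4) components
disk = sersic_flux(r_eff=30, i0=0.6, n=1, ax_ratio=0.8)
bulge = sersic_flux(r_eff=8, i0=1.2, n=4, ax_ratio=0.9)
bt = bulge / (bulge + disk)
print(bt)
```

# Since both fluxes are positive, B/T always lands strictly between 0 and 1, which is
# a useful check before comparing against the Simard catalogue values.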
# + outputHidden=false inputHidden=false
bt = out.dropna().apply(lambda r: get_bt(r[0], r[1]))
# + outputHidden=false inputHidden=false
st.pearsonr(bt, simard_df['__B_T_r'].loc[bt.index])
# + outputHidden=false inputHidden=false
plt.figure(figsize=(6, 6), dpi=75)
plt.plot(simard_df['__B_T_r'].loc[bt.index], bt, '.')
plt.gca().add_artist(plt.Line2D((0, 1), (0, 1), color='k', alpha=0.2))
plt.axis('equal')
plt.xlim(0, .9)
plt.ylim(0, .9)
plt.tight_layout()
# + outputHidden=false inputHidden=false
plt.figure(figsize=(10, 8), dpi=100)
subject_id = np.random.choice(out.dropna().index)
psf = gu.get_psf(subject_id)
diff_data = gu.get_diff_data(subject_id)
pixel_mask = 1 - np.array(diff_data['mask'])[::-1]
galaxy_data = np.array(diff_data['imageData'])[::-1]
r = np.concatenate((comp_to_p(out.loc[subject_id][0]), comp_to_p(out.loc[subject_id][1])))
image_size = galaxy_data.shape[0]
cx, cy = np.mgrid[0:image_size, 0:image_size]
im_psf = convolve2d(
bulge_disk_model(r, cx, cy), psf, mode='same', boundary='symm'
)
plt.subplot(231)
plt.imshow(galaxy_data, vmin=0, vmax=1)
plt.subplot(232)
s = mean_squared_error(galaxy_data * pixel_mask, im_psf * pixel_mask)
plt.title('{}: {:.4f}'.format(subject_id, s))
plt.imshow(im_psf, vmin=0, vmax=1)
plt.subplot(233)
full_model = rendering.calculate_model(
best_models.loc[subject_id]['Model'],
psf=gu.get_psf(subject_id),
image_size=gu.get_diff_data(subject_id)['width'],
)
s2 = mean_squared_error(galaxy_data * pixel_mask, full_model * pixel_mask / 0.8)
plt.title(s2)
plt.imshow(full_model, vmin=0, vmax=1)
plt.subplot(235)
d = galaxy_data * pixel_mask - im_psf * pixel_mask
plt.imshow(d / 0.8, cmap='RdGy', vmin=-np.abs(d).max(), vmax=np.abs(d).max())
plt.subplot(236)
d2 = rendering.compare_to_galaxy(
full_model, galaxy_data, psf=None, pixel_mask=pixel_mask, stretch=False
)
plt.imshow(d2, cmap='RdGy', vmin=-np.abs(d).max(), vmax=np.abs(d).max())
# + outputHidden=false inputHidden=false
out.loc[subject_id][2]
# + outputHidden=false inputHidden=false
best_models.loc[20902040]
# + outputHidden=false inputHidden=false
simard_df.loc[subject_id].keys()
# + outputHidden=false inputHidden=false
simard_df.loc[subject_id]['__B_T_r']
# + outputHidden=false inputHidden=false
sanitize_param_dict(fit_disk)['axRatio'], 1 / np.arctan(simard_df.loc[subject_id]['i'])
# + outputHidden=false inputHidden=false
sanitize_param_dict(fit_bulge)['axRatio'], 1 - simard_df.loc[subject_id]['e']
| sersic_fit_test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# 
# <center><em>Copyright! This material is protected, please do not copy or distribute. by:<NAME></em></center>
# ***
# <h1 align="center">Udemy course : Python Bootcamp for Data Science 2021 Numpy Pandas & Seaborn</h1>
#
# ***
# ## 9.1 Handling Missing Data
# First we import numpy and pandas libraries:
# + hide_input=false
import numpy as np
import pandas as pd
# -
# Here we create a series that contains missing values:
# + hide_input=false
ser1 = pd.Series([23, 54, np.nan, None])
ser1
# -
# We can check for missing values using the function **isnull()**:
# + hide_input=false
ser1.isnull()
# -
# We can also check for missing values using the function **isna()**:
# + hide_input=false
ser1.isna()
# -
# The same functions can also be applied to **string objects**, like this example:
# + hide_input=false
ser2 = pd.Series(['green','black', 'white', None ,'red'])
ser2
# + hide_input=false
ser2.isnull()
# -
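# The complement of **isnull()** is **notnull()** (alias **notna()**), and summing the boolean mask gives a quick count of missing entries. A small sketch, not part of the original course material:

```python
import pandas as pd

ser = pd.Series(['green', 'black', 'white', None, 'red'])

# notnull() is the element-wise complement of isnull()
mask_valid = ser.notnull()

# Summing the boolean mask counts the missing entries
n_missing = ser.isnull().sum()
print(n_missing)  # 1
```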
# ***
#
# <h1 align="center">Thank You</h1>
#
# ***
| 9.1 Handling Missing Data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# + [markdown] origin_pos=0
# # Transposed Convolution
# :label:`sec_transposed_conv`
#
# The layers we introduced so far for convolutional neural networks, including
# convolutional layers (:numref:`sec_conv_layer`) and pooling layers (:numref:`sec_pooling`), often reduce the input width and height, or keep them unchanged. Applications such as semantic segmentation (:numref:`sec_semantic_segmentation`) and generative adversarial networks (:numref:`sec_dcgan`), however, require predicting values for each pixel and therefore need to increase the input width and height. Transposed convolution, also named fractionally-strided convolution :cite:`Dumoulin.Visin.2016` or deconvolution :cite:`Long.Shelhamer.Darrell.2015`, serves this purpose.
#
# + origin_pos=1 tab=["mxnet"]
from mxnet import init, np, npx
from mxnet.gluon import nn
from d2l import mxnet as d2l
npx.set_np()
# + [markdown] origin_pos=3
# ## Basic 2D Transposed Convolution
#
# Let us consider a basic case that both input and output channels are 1, with 0 padding and 1 stride. :numref:`fig_trans_conv` illustrates how transposed convolution with a $2\times 2$ kernel is computed on the $2\times 2$ input matrix.
#
# 
# :label:`fig_trans_conv`
#
# We can implement this operation by giving matrix kernel $K$ and matrix input $X$.
#
# + origin_pos=4 tab=["mxnet"]
def trans_conv(X, K):
h, w = K.shape
Y = np.zeros((X.shape[0] + h - 1, X.shape[1] + w - 1))
for i in range(X.shape[0]):
for j in range(X.shape[1]):
Y[i:i + h, j:j + w] += X[i, j] * K
return Y
# + [markdown] origin_pos=5
# Remember that the convolution computes results by `Y[i, j] = (X[i: i + h, j: j + w] * K).sum()` (refer to `corr2d` in :numref:`sec_conv_layer`), which summarizes input values through the kernel, while the transposed convolution broadcasts input values through the kernel, resulting in a larger output shape.
#
# Verify the results in :numref:`fig_trans_conv`.
#
# + origin_pos=6 tab=["mxnet"]
X = np.array([[0., 1], [2, 3]])
K = np.array([[0., 1], [2, 3]])
trans_conv(X, K)
# + [markdown] origin_pos=7 tab=["mxnet"]
# Or we can use `nn.Conv2DTranspose` to obtain the same results. As with `nn.Conv2D`, both the input and the kernel should be 4-D tensors.
#
# + origin_pos=9 tab=["mxnet"]
X, K = X.reshape(1, 1, 2, 2), K.reshape(1, 1, 2, 2)
tconv = nn.Conv2DTranspose(1, kernel_size=2)
tconv.initialize(init.Constant(K))
tconv(X)
# + [markdown] origin_pos=11
# ## Padding, Strides, and Channels
#
# We apply padding elements to the input in convolution, while they are applied to the output in transposed convolution. A $1\times 1$ padding means we first compute the output as normal, then remove the first/last rows and columns.
#
# + origin_pos=12 tab=["mxnet"]
tconv = nn.Conv2DTranspose(1, kernel_size=2, padding=1)
tconv.initialize(init.Constant(K))
tconv(X)
# + [markdown] origin_pos=14
# Similarly, strides are applied to outputs as well.
#
# + origin_pos=15 tab=["mxnet"]
tconv = nn.Conv2DTranspose(1, kernel_size=2, strides=2)
tconv.initialize(init.Constant(K))
tconv(X)
# + [markdown] origin_pos=17
# The multi-channel extension of the transposed convolution is the same as the convolution. When the input has multiple channels, denoted by $c_i$, the transposed convolution assigns a $k_h\times k_w$ kernel matrix to each input channel. If the output has a channel size $c_o$, then we have a $c_i\times k_h\times k_w$ kernel for each output channel.
#
#
# As a result, if we feed $X$ into a convolutional layer $f$ to compute $Y=f(X)$ and create a transposed convolution layer $g$ with the same hyperparameters as $f$ except for the output channel set to be the channel size of $X$, then $g(Y)$ should have the same shape as $X$. Let us verify this statement.
#
# + origin_pos=18 tab=["mxnet"]
X = np.random.uniform(size=(1, 10, 16, 16))
conv = nn.Conv2D(20, kernel_size=5, padding=2, strides=3)
tconv = nn.Conv2DTranspose(10, kernel_size=5, padding=2, strides=3)
conv.initialize()
tconv.initialize()
tconv(conv(X)).shape == X.shape
# + [markdown] origin_pos=20
# ## Analogy to Matrix Transposition
#
# The transposed convolution takes its name from the matrix transposition. In fact, convolution operations can also be achieved by matrix multiplication. In the example below, we define a $3\times 3$ input $X$ with a $2\times 2$ kernel $K$, and then use `corr2d` to compute the convolution output.
#
# + origin_pos=21 tab=["mxnet"]
X = np.arange(9.0).reshape(3, 3)
K = np.array([[0, 1], [2, 3]])
Y = d2l.corr2d(X, K)
Y
# + [markdown] origin_pos=22
# Next, we rewrite convolution kernel $K$ as a matrix $W$. Its shape will be $(4, 9)$, where the $i^\mathrm{th}$ row represents applying the kernel to the input to generate the $i^\mathrm{th}$ output element.
#
# + origin_pos=23 tab=["mxnet"]
def kernel2matrix(K):
k, W = np.zeros(5), np.zeros((4, 9))
k[:2], k[3:5] = K[0, :], K[1, :]
W[0, :5], W[1, 1:6], W[2, 3:8], W[3, 4:] = k, k, k, k
return W
W = kernel2matrix(K)
W
# + [markdown] origin_pos=24
# Then the convolution operator can be implemented by matrix multiplication with proper reshaping.
#
# + origin_pos=25 tab=["mxnet"]
Y == np.dot(W, X.reshape(-1)).reshape(2, 2)
# + [markdown] origin_pos=27
# We can implement transposed convolution as a matrix multiplication as well by reusing `kernel2matrix`. To reuse the generated $W$, we construct a $2\times 2$ input, so the corresponding weight matrix will have a shape $(9, 4)$, which is $W^\top$. Let us verify the results.
#
# + origin_pos=28 tab=["mxnet"]
X = np.array([[0, 1], [2, 3]])
Y = trans_conv(X, K)
Y == np.dot(W.T, X.reshape(-1)).reshape(3, 3)
# + [markdown] origin_pos=30
# ## Summary
#
# * Compared to convolutions that reduce inputs through kernels, transposed convolutions broadcast inputs.
# * If a convolution layer reduces the input width and height by factors of $n_w$ and $n_h$, respectively, then a transposed convolution layer with the same kernel sizes, padding, and strides will increase the input width and height by $n_w$ and $n_h$, respectively.
# * We can implement convolution operations by matrix multiplication; the corresponding transposed convolutions can then be done by multiplication with the transposed matrix.
#
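# The relationship between the two operations can also be checked numerically: viewing the convolution as a matrix $W$, the transposed convolution is its adjoint, so $\langle \mathrm{corr2d}(X, K), Y\rangle = \langle X, \mathrm{trans\_conv}(Y, K)\rangle$ for compatible shapes. A self-contained sketch in plain NumPy (independent of the MXNet code above, not part of the original chapter):

```python
import numpy as np

def corr2d(X, K):
    # Valid cross-correlation, as in sec_conv_layer
    h, w = K.shape
    Y = np.zeros((X.shape[0] - h + 1, X.shape[1] - w + 1))
    for i in range(Y.shape[0]):
        for j in range(Y.shape[1]):
            Y[i, j] = (X[i:i + h, j:j + w] * K).sum()
    return Y

def trans_conv(X, K):
    # Broadcasts each input value through the kernel
    h, w = K.shape
    Y = np.zeros((X.shape[0] + h - 1, X.shape[1] + w - 1))
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            Y[i:i + h, j:j + w] += X[i, j] * K
    return Y

rng = np.random.default_rng(0)
K = rng.standard_normal((2, 2))
X = rng.standard_normal((3, 3))   # input of the convolution
Y = rng.standard_normal((2, 2))   # same shape as corr2d(X, K)

lhs = (corr2d(X, K) * Y).sum()      # <conv(X), Y>
rhs = (X * trans_conv(Y, K)).sum()  # <X, trans_conv(Y)>
print(np.isclose(lhs, rhs))  # True
```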
# ## Exercises
#
# 1. Is it efficient to use matrix multiplication to implement convolution operations? Why?
#
# + [markdown] origin_pos=31 tab=["mxnet"]
# [Discussions](https://discuss.d2l.ai/t/376)
#
| scripts/d21-en/mxnet/chapter_computer-vision/transposed-conv.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Lasso Scenario Creation Quickstart
#
# In this notebook we will run through:
#
# 1. Using a configuration file to run lasso
# 2. Setting up a base scenario and applying projects
# 3. Transforming the standard network format to the MetCouncil expected format
# 4. Exporting the network to a shapefile and csvs
# +
import os
import sys
import yaml
import pandas as pd
from network_wrangler import RoadwayNetwork
from network_wrangler import TransitNetwork
from network_wrangler import ProjectCard
from network_wrangler import Scenario
from network_wrangler import WranglerLogger
from lasso import ModelRoadwayNetwork
from lasso import StandardTransit
# -
# %load_ext autoreload
# %autoreload 2
import logging
logger = logging.getLogger("WranglerLogger")
logger.handlers[0].stream = sys.stdout
# if you don't want to see so much detail, set to logging.INFO or DEBUG
logger.setLevel(logging.DEBUG)
# ## Read a Config File
#
# Let's examine the configuration file and store it as `my_config` variable.
#
# Configuration files are written in YAML and read in as python dictionaries.
# +
MY_CONFIG_FILE = os.path.join(
os.path.dirname(os.path.abspath('')), "examples", "settings","my_config.yaml"
)
with open(MY_CONFIG_FILE) as f:
my_config = yaml.safe_load(f)
import json
print(json.dumps(my_config, indent=2))
## Alternatively this could be written in the notebook or selected via a notebook GUI
# -
# ## Create a Base Scenario
#
# Base scenarios must at the least specify a highway network but can also specify a directory where transit networks can be found.
#
# In this step the highway and transit networks are read in and validated to each other.
#
# In some cases, you may want to override the validation (after reviewing the errors) using the flag: `validate = False`.
# +
base_wrangler_path = os.path.join(os.path.dirname((os.path.dirname(os.path.abspath('')))),"network_wrangler")
WranglerLogger.info("Base Wrangler Path: {}".format(base_wrangler_path))
base_scenario = Scenario.create_base_scenario(
my_config["base_scenario"]["shape_file_name"],
my_config["base_scenario"]["link_file_name"],
my_config["base_scenario"]["node_file_name"],
roadway_dir=os.path.join(base_wrangler_path,my_config["base_scenario"]["input_dir"]),
transit_dir=os.path.join(base_wrangler_path,my_config["base_scenario"]["input_dir"])
)
# -
base_wrangler_path = os.path.join(os.path.dirname((os.path.dirname(os.path.abspath('')))),"network_wrangler")
WranglerLogger.info("Base Wrangler Path: {}".format(base_wrangler_path))
base_scenario = Scenario.create_base_scenario(
my_config["base_scenario"]["shape_file_name"],
my_config["base_scenario"]["link_file_name"],
my_config["base_scenario"]["node_file_name"],
roadway_dir=os.path.join(base_wrangler_path,my_config["base_scenario"]["input_dir"]),
transit_dir=os.path.join(base_wrangler_path,my_config["base_scenario"]["input_dir"]),
validate = False,
)
# #### Create project cards from projects that are explicitly specified in config
#
if len(my_config["scenario"]["project_cards_filenames"]) > 0:
project_cards_list = [
ProjectCard.read(filename, validate=False)
for filename in my_config["scenario"]["project_cards_filenames"]
]
else:
project_cards_list = []
project_cards_list
# ## Create Scenario
#
# A scenario is constructed with a base scenario and then selecting project cards to be added to that base scenario to create the new scenario.
#
# Projects can be added a variety of ways:
#
# 1. `card_directory` + `tags` will search a directory and add projects whose tags match *at least one* of the tags in the keyword.
# 2. `card_directory` + `glob_search` will search a directory and add projects whose file names match the [glob search text](https://docs.python.org/3/library/glob.html)
# 3. `project_cards_list` is a list of ProjectCard objects
#
# Optionally, you may specify that project card formats are not validated by setting the keyword:
# `validate = False`
#
# Projects that are not added in the initial scenario development can be added by using the following methods:
#
# - `add_project_card_from_file()`
# - `add_project_cards_from_directory()`
# - `add_project_cards_from_tags`
#
# Or by directly adding the project to the scenario's project attribute by running:
#
# ```python
# my_project = ProjectCard.read(path_to_card)
# my_scenario.projects += my_project
#
# ```
#
# +
my_scenario=None
my_scenario = Scenario.create_scenario(
base_scenario=base_scenario,
card_directory=os.path.join(base_wrangler_path,my_config["scenario"]["card_directory"]),
tags=my_config["scenario"]["tags"],
project_cards_list=project_cards_list,
glob_search=my_config["scenario"]["glob_search"],
validate_project_cards=False,
)
# -
# ### Apply all projects in scenario
# +
WranglerLogger.info("\nProjects in queue to be applied: \n - {}".format("\n - ".join(my_scenario.get_project_names())))
WranglerLogger.info("\n[Before] Applied Projects: \n - {}".format("\n - ".join(my_scenario.applied_projects)))
my_scenario.apply_all_projects()
WranglerLogger.info("\n[After] Applied Projects: \n - {}".format("\n - ".join(my_scenario.applied_projects)))
# -
# # Write out as MetCouncil Model Roadway Network
# Everything above was done in "pure wrangler" rather than lasso. However, we will need Lasso in order to add the MetCouncil specific variables. You can create a lasso ModelRoadwayNetwork object from the roadway network object and feed it any additional parameters from that `my_config` variable.
#
# You can see that the link variables for this network are the same as the standard roadway network at this point but that will change.
#
# Since this is a GeoDataFrame you can also use built-in Geopandas features to make simple plots based on these variables.
# +
model_road_net = ModelRoadwayNetwork.from_RoadwayNetwork(
my_scenario.road_net, parameters=my_config.get("my_parameters", {})
)
WranglerLogger.info("\nmodel_road_net columns:\n - {}".format("\n - ".join(model_road_net.links_df.columns)))
# -
model_road_net.links_df.plot("bike_access")
# ## Add MetCouncil variables
# At this point, we need to calculate all the variables into what MetCouncil's model is expecting. The method `roadway_standard_to_met_council_network()` broadly does the following:
#
# - creates a parallel managed lane network
# - calculates additional variables based on geography or other variables (i.e. county, assignment group, area type, etc)
# - flattens variables stored as continuous time values and determines their value by time period (i.e. lanes_am)
# - reprojects into MetCouncil's projection
model_road_net.roadway_standard_to_met_council_network()
WranglerLogger.info("\nmodel_road_net **links_metcouncil** columns:\n - {}".format("\n - ".join(model_road_net.links_metcouncil_df.columns)))
model_road_net.links_metcouncil_df.plot("lanes_AM")
# ## Export to shapefile
#
# As a last step, the network can be exported to a shapefile and paired CSVs after removing extraneous variables.
#
# (note that this step will also run the `roadway_standard_to_met_council_network()` method but I wanted to show it to you piecewise)
model_road_net.write_roadway_as_shp()
# # Export to fixed width file
model_road_net.write_roadway_as_fixedwidth()
# # Write out as MetCouncil Model Transit Network
#
# Similar to the roadway network, the first step is to convert it to a Lasso object, and then write it to a cube line file. Optionally, you could also export it to a shapefile to inspect using other means.
standard_transit = StandardTransit.fromTransitNetwork(my_scenario.transit_net)
standard_transit.feed
# Write out the StandardTransit Lasso object to a cube line file:
standard_transit.write_as_cube_lin()
| notebooks/Lasso Scenario Creation Quickstart.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import numpy as np
import os
import h5py
import time
from matplotlib import pyplot
if 0:
alt = 70
folder_in = '/home/saturn/caph/mpp228/HESS_data/HESS_data_MC/sim_telarray/\
phase2d/sim_telarray/phase2d/alt%i_h5/' % alt
else:
folder_in = '/home/saturn/caph/mpp228/HESS_data/HESS_data_MC/sim_telarray/\
phase2d/NSB1.00/Desert/Proton/20deg/180deg/0.0deg-ws0/Data/Data_h5/'
fns_in = os.listdir(folder_in)
pfns_in = list(np.sort([fn for fn in fns_in if fn.find('proton') > -1]))
gfns_in = list(np.sort([fn for fn in fns_in if fn.find('gamma') > -1]))
print(len(pfns_in), len(gfns_in), len(fns_in))
# +
#pfns_in[0], gfns_in[0]
# -
from dl1_data_handler.reader import DL1DataReaderSTAGE1, DL1DataReaderDL1DH
if 1:
channels_list = ['image']
tel_pars_list = ['obs_id',]
mc_info_list = ["true_alt", "true_az"] # mc parameters
else:
channels_list = ['image', 'image_mask', 'peak_time',]
tel_pars_list = ['obs_id', 'event_id', 'tel_id',
'camera_frame_hillas_intensity', 'camera_frame_hillas_x', 'camera_frame_hillas_y',
'camera_frame_hillas_width', 'camera_frame_hillas_length',
'camera_frame_hillas_psi', 'camera_frame_hillas_skewness', 'camera_frame_hillas_kurtosis',
'camera_frame_hillas_r', 'camera_frame_hillas_phi',
'morphology_num_pixels', 'morphology_num_medium_islands', 'morphology_num_large_islands',
'leakage_pixels_width_1', 'leakage_pixels_width_2',
'leakage_intensity_width_1', 'leakage_intensity_width_2',
'intensity_mean'] # tel parameters in cta-pipe .h5
mc_info_list = ["true_energy", "true_alt", "true_az"] # mc parameters
groups = ['mc_pars', 'tel_pars', 'tel_pars_extra']
mapping_settings = {'camera_types':['HESS-I']}
event_selection = None
def fn2reader(fn, mode='mono'):
return DL1DataReaderSTAGE1([fn],
mode=mode,
#example_identifiers_file="./examples.h5",
#selected_telescope_types=selected_telescope_types,
mapping_settings=mapping_settings,
#selected_telescope_ids={"LST_LST_LSTCam": LSTcams},
#image_channels = ['image', 'peak_time', 'image_mask'],
image_channels=channels_list,
parameter_list=tel_pars_list,
event_info=mc_info_list,
event_selection=event_selection)
fn = folder_in + pfns_in[0]
reader = fn2reader(fn)
# %%time
alt_az_p = np.array([r[2:] for r in reader]).T
pyplot.hist(alt_az_p[0], bins=100);
pyplot.title('Proton')
pyplot.hist(alt_az_p[1], bins=100);
pyplot.title('Proton')
# %%time
fn = folder_in + gfns_in[0]
reader = fn2reader(fn)
alt_az_g = np.array([r[2:] for r in reader]).T
pyplot.hist(alt_az_g[0], bins=100);
pyplot.title('Gamma')
pyplot.hist(alt_az_g[1], bins=100);
pyplot.title('Gamma')
image = reader[0][0].T[0]
pyplot.imshow(image)
| general/notebooks/view_hess_events.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.linear_model import Ridge, Lasso
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from scipy.stats import spearmanr
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
# -
# %cd ..
train_meta = pd.read_csv("../train.csv", index_col=0)
valid_meta = pd.read_csv("../valid.csv", index_col=0)
train_meta
correlations = pd.concat([pd.read_csv(f"data/correlationd{i}.csv", index_col=0) for i in range(0,9)], ignore_index=True)
correlations = correlations.dropna()
correlations["corr_abs"] = correlations.correlation.abs()
correlations = correlations.sort_values("corr_abs", ascending=False)
correlations[:30]
| aux/model-playground/notebooks/gapped.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Differentially Private Histograms
# ## Plotting the distribution of ages in `Adult`
import numpy as np
from diffprivlib import tools as dp
import matplotlib.pyplot as plt
# We first read in the list of ages in the Adult UCI dataset (the first column).
ages_adult = np.loadtxt("https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data",
usecols=0, delimiter=", ")
# Using Numpy's native `histogram` function, we can find the distribution of ages, as determined by ten equally-spaced bins calculated by `histogram`.
hist, bins = np.histogram(ages_adult)
hist = hist / hist.sum()
# Using `matplotlib.pyplot`, we can plot a barchart of the histogram distribution.
plt.bar(bins[:-1], hist, width=(bins[1]-bins[0]) * 0.9)
plt.show()
# ## Differentially private histograms
# Using `diffprivlib`, we can calculate a differentially private version of the histogram. For this example, we use the default settings:
# - `epsilon` is 1.0
# - `range` is not specified, so is calculated by the function on-the-fly. This throws a warning, as it leaks privacy about the data (from `dp_bins`, we know that there are people in the dataset aged 17 and 90).
# +
dp_hist, dp_bins = dp.histogram(ages_adult)
dp_hist = dp_hist / dp_hist.sum()
plt.bar(dp_bins[:-1], dp_hist, width=(dp_bins[1] - dp_bins[0]) * 0.9)
plt.show()
# -
# **Privacy Leak:** In this setting, we know for sure that at least one person in the dataset is aged 17, and another is aged 90.
dp_bins[0], dp_bins[-1]
# **Mirroring the behaviour of `np.histogram`:** We can see that the bins returned by `diffprivlib.tools.histogram` are identical to those given by `numpy.histogram`.
np.all(dp_bins == bins)
# **Error:** We can see very little difference in the values of the histogram. In fact, we see an aggregate absolute error across all bins of the order of 0.01%. This is expected, due to the large size of the dataset (`n=48842`).
print("Total histogram error: %f" % np.abs(hist - dp_hist).sum())
# **Effect of `epsilon`:** If we decrease `epsilon` (i.e. **increase** the privacy guarantee), the error will increase.
# +
dp_hist, dp_bins = dp.histogram(ages_adult, epsilon=0.001)
dp_hist = dp_hist / dp_hist.sum()
print("Total histogram error: %f" % np.abs(hist - dp_hist).sum())
# -
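# Under the hood, this kind of differentially private histogram perturbs each bin count with additive Laplace noise whose scale grows as `epsilon` shrinks. A minimal sketch of the Laplace mechanism, assuming per-bin sensitivity 1 (one individual changes one bin count by 1); `diffprivlib`'s actual implementation may differ in details:

```python
import numpy as np

def laplace_histogram(data, bins=10, range=None, epsilon=1.0, seed=None):
    # Plain histogram, then additive Laplace noise on each bin count.
    # Histogram sensitivity is 1: one individual changes one bin by 1,
    # so the noise scale is sensitivity / epsilon = 1 / epsilon.
    counts, edges = np.histogram(data, bins=bins, range=range)
    rng = np.random.default_rng(seed)
    noisy = counts + rng.laplace(loc=0.0, scale=1.0 / epsilon, size=counts.shape)
    return noisy, edges

data = np.concatenate([np.full(500, 30.0), np.full(500, 60.0)])
noisy, edges = laplace_histogram(data, range=(0, 100), epsilon=1.0, seed=42)
# Smaller epsilon -> larger noise scale -> larger expected error.
```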
# ## Deciding on the `range` parameter
# We know from the [dataset description](https://archive.ics.uci.edu/ml/datasets/adult) that everyone in the dataset is at least 17 years of age. We don't know off-hand what the upper bound is, so for this example we'll set the upper bound to `100`. As of 2019, less than 0.005% of the world's population is [aged over 100](https://en.wikipedia.org/wiki/Centenarian), so this is an appropriate simplification. Values in the dataset above 100 will be excluded from calculations.
# An `epsilon` of 0.1 still preserves the broad structure of the histogram.
# +
dp_hist2, dp_bins2 = dp.histogram(ages_adult, epsilon=0.1, range=(17, 100))
dp_hist2 = dp_hist2 / dp_hist2.sum()
plt.bar(dp_bins2[:-1], dp_hist2, width=(dp_bins2[1] - dp_bins2[0]) * 0.9)
plt.show()
# -
# ## Error for smaller datasets
# Let's repeat the first experiments above with a smaller dataset, this time the [Cleveland heart disease dataset](https://archive.ics.uci.edu/ml/datasets/heart+Disease) from the UCI Repository. This dataset has 303 samples, a small fraction of the Adult dataset processed previously.
ages_heart = np.loadtxt("https://archive.ics.uci.edu/ml/machine-learning-databases/heart-disease/processed.cleveland.data",
usecols=0, delimiter=",")
# We first find the histogram distribution using `numpy.histogram`.
heart_hist, heart_bins = np.histogram(ages_heart)
heart_hist = heart_hist / heart_hist.sum()
# And then find the histogram distribution using `diffprivlib.histogram`, using the defaults as before (with the accompanying warning).
dp_heart_hist, dp_heart_bins = dp.histogram(ages_heart)
dp_heart_hist = dp_heart_hist / dp_heart_hist.sum()
# And double-check that the bins are the same.
np.all(heart_bins == dp_heart_bins)
# We then see that the error this time is 3%, a 100-fold increase in error.
print("Total histogram error: %f" % np.abs(heart_hist - dp_heart_hist).sum())
# ## Mirroring Numpy's behaviour
# We can evaluate `diffprivlib.tools.histogram` without any privacy by setting `epsilon = float("inf")`. This should give the exact same result as running `numpy.histogram`.
# +
heart_hist, _ = np.histogram(ages_heart)
dp_heart_hist, _ = dp.histogram(ages_heart, epsilon=float("inf"))
np.all(heart_hist == dp_heart_hist)
| notebooks/histograms.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Self-Driving Car Engineer Nanodegree
#
# ## Deep Learning
#
# ## Project: Build a Traffic Sign Recognition Classifier
#
# In this notebook, a template is provided for you to implement, in stages, the functionality required to successfully complete this project. If additional code is needed that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission. Sections that begin with **'Implementation'** in the header indicate where you should begin your implementation for your project. Note that some implementation sections are optional and will be marked with **'Optional'** in the header.
#
# In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a **'Question'** header. Carefully read each question and provide thorough answers in the following text boxes that begin with **'Answer:'**. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
#
# >**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.
# ---
# ## Step 0: Load The Data
# +
# Load pickled data
import pickle
# TODO: Fill this in based on where you saved the training and testing data
training_file = "data/train.p"
testing_file = "data/test.p"
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_test, y_test = test['features'], test['labels']
# -
# ---
#
# ## Step 1: Dataset Summary & Exploration
#
# The pickled data is a dictionary with 4 key/value pairs:
#
# - `'features'` is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
# - `'labels'` is a 2D array containing the label/class id of the traffic sign. The file `signnames.csv` contains id -> name mappings for each id.
# - `'sizes'` is a list containing tuples, (width, height) representing the original width and height of the image.
# - `'coords'` is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. **THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES**
#
# Complete the basic data summary below.
# +
### Replace each question mark with the appropriate value.
# TODO: Number of training examples
n_train = len(X_train)
# TODO: Number of testing examples.
n_test = len(X_test)
# TODO: What's the shape of a traffic sign image?
image_shape = X_train[0].shape
# TODO: How many unique classes/labels there are in the dataset.
n_classes = 43
print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
# -
# Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc.
#
# The [Matplotlib](http://matplotlib.org/) [examples](http://matplotlib.org/examples/index.html) and [gallery](http://matplotlib.org/gallery.html) pages are a great resource for doing visualizations in Python.
#
# **NOTE:** It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections.
# +
### Data exploration visualization goes here.
### Feel free to use as many code cells as needed.
import random
import numpy as np
import matplotlib.pyplot as plt
import cv2
# Visualizations will be shown in the notebook.
# %matplotlib inline
index = random.randint(0, len(X_train))
image = X_train[index].squeeze()
plt.figure(figsize=(1,1))
plt.imshow(image, cmap="gray")
print(y_train[index])
# +
sample_size=15
sample_idx= np.random.randint(len(X_train), size=sample_size)
sp=1
for r in range(3):
for c in range(5):
ax=plt.subplot(3,5,sp)
sample=X_train[sample_idx[sp-1]]
ax.imshow(sample.reshape(32,32,3), cmap="gray")
ax.axis('off')
sp+=1
plt.show()
# -
# ----
#
# ## Step 2: Design and Test a Model Architecture
#
# Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the [German Traffic Sign Dataset](http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset).
#
# There are various aspects to consider when thinking about this problem:
#
# - Neural network architecture
# - Play around preprocessing techniques (normalization, rgb to grayscale, etc)
# - Number of examples per label (some have more than others).
# - Generate fake data.
#
# Here is an example of a [published baseline model on this problem](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf). It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these.
#
# **NOTE:** The LeNet-5 implementation shown in the [classroom](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!
# ### Implementation
#
# Use the code cell (or multiple code cells, if necessary) to implement the first step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.
# +
### Preprocess the data here.
### Feel free to use as many code cells as needed.
import cv2
def normalize_color(image_channel):
a=0.1
b=0.9
scale_min=0
scale_max=255
return a+(((image_channel-scale_min)*(b-a))/(scale_max-scale_min))
def split_f(image):
c_r, c_g, c_b= cv2.split(image)
return cv2.merge((normalize_color(c_r), normalize_color(c_g), normalize_color(c_b)))
# -
X_train=[ split_f(x) for x in X_train]
print("Image data shape =", X_train[0].shape)
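# As a quick sanity check on the min-max scaling above (a sketch, not part of the original submission): with the same formula as `normalize_color`, raw values 0 and 255 should map to approximately 0.1 and 0.9.

```python
import numpy as np

def normalize_channel(channel, a=0.1, b=0.9, lo=0.0, hi=255.0):
    # Same min-max formula as normalize_color above.
    return a + ((channel - lo) * (b - a)) / (hi - lo)

demo = np.array([0.0, 128.0, 255.0])
out = normalize_channel(demo)
print(out[0], out[-1])  # approximately 0.1 and 0.9
```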
# ### Question 1
#
# _Describe how you preprocessed the data. Why did you choose that technique?_
# **Answer:**
# I normalize the data by min-max scaling each color channel from the raw range [0, 255] into [0.1, 0.9] (the `normalize_color` function above).
# The idea is to reduce the influence of the overall brightness of an image, so that the network does not focus on it, and to keep the inputs in a small, well-conditioned range for training.
# +
### Generate data additional data (OPTIONAL!)
### and split the data into training/validation/testing sets here.
### Feel free to use as many code cells as needed.
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split
X_train, X_validation, y_train, y_validation= train_test_split(X_train, y_train, test_size=0.2, random_state=0)
X_train, y_train = shuffle(X_train, y_train)
# -
# ### Question 2
#
# _Describe how you set up the training, validation and testing data for your model. **Optional**: If you generated additional data, how did you generate the data? Why did you generate the data? What are the differences in the new dataset (with generated data) from the original dataset?_
# **Answer:**
# By using the train_test_split function provided by sklearn, the training data is split into two samples, one for training and one for validating the trained results. The validation set differs from the test sample: the validation set is used within the training session of the CNN, whereas the test data is used only once, after the CNN model has been trained completely. As a rule of thumb (according to the training video), 20% of the training data is a good size for the validation set.
#
# It is possible to generate new data by rotating the pictures by small angles using the skimage.transform.rotate function (+/- 20 degrees of the original picture). Moreover, new pictures can be generated by brightening/darkening the original pictures.
# The code is like:
#
# from skimage.transform import rotate
# X_train2=[rotate(x,20) for x in X_train]
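# The 80/20 split that `train_test_split` performs can be sketched in plain Python (a simplified stand-in, not sklearn's implementation):

```python
import random

def split_train_validation(data, labels, test_frac=0.2, seed=0):
    # shuffle indices reproducibly, then cut off the last test_frac for validation
    idx = list(range(len(data)))
    random.Random(seed).shuffle(idx)
    cut = int(len(idx) * (1 - test_frac))
    train, val = idx[:cut], idx[cut:]
    return ([data[i] for i in train], [data[i] for i in val],
            [labels[i] for i in train], [labels[i] for i in val])

X_tr, X_val, y_tr, y_val = split_train_validation(list(range(10)), list(range(10)))
```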
import tensorflow as tf
EPOCHS = 7
BATCH_SIZE = 128
# +
### Define your architecture here.
### Feel free to use as many code cells as needed.
from tensorflow.contrib.layers import flatten
def LeNet(x):
# Hyperparameters
mu = 0
sigma = 0.1
# SOLUTION: Layer 1: Convolutional. Input = 32x32x3. Output = 28x28x6.
conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 3, 6), mean = mu, stddev = sigma))
conv1_b = tf.Variable(tf.zeros(6))
conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b
# SOLUTION: Activation.
conv1 = tf.nn.relu(conv1)
# SOLUTION: Pooling. Input = 28x28x6. Output = 14x14x6.
conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# SOLUTION: Layer 2: Convolutional. Output = 10x10x16.
conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma))
conv2_b = tf.Variable(tf.zeros(16))
conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
# SOLUTION: Activation.
conv2 = tf.nn.relu(conv2)
# SOLUTION: Pooling. Input = 10x10x16. Output = 5x5x16.
conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# SOLUTION: Flatten. Input = 5x5x16. Output = 400.
fc0 = flatten(conv2)
# SOLUTION: Layer 3: Fully Connected. Input = 400. Output = 120.
fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma))
fc1_b = tf.Variable(tf.zeros(120))
fc1 = tf.matmul(fc0, fc1_W) + fc1_b
# SOLUTION: Activation.
fc1 = tf.nn.relu(fc1)
# SOLUTION: Layer 4: Fully Connected. Input = 120. Output = 84.
fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))
fc2_b = tf.Variable(tf.zeros(84))
fc2 = tf.matmul(fc1, fc2_W) + fc2_b
# SOLUTION: Activation.
fc2 = tf.nn.relu(fc2)
# SOLUTION: Layer 5: Fully Connected. Input = 84. Output = 43.
fc3_W = tf.Variable(tf.truncated_normal(shape=(84, 43), mean = mu, stddev = sigma))
fc3_b = tf.Variable(tf.zeros(43))
logits = tf.matmul(fc2, fc3_W) + fc3_b
return logits
# -
# Features and Labels
x = tf.placeholder(tf.float32, (None, 32, 32, 3))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, 43)
# +
# Training pipeline
rate = 0.002
logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=one_hot_y)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
# +
# Model Evaluation
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
# -
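# The weighting by `len(batch_x)` in `evaluate` matters because the last batch can be smaller than `BATCH_SIZE`. A plain-Python sketch of the same weighted average:

```python
def batched_mean(values, batch_size):
    # mirror the evaluate() loop: per-batch mean, weighted by batch length
    total, n = 0.0, len(values)
    for offset in range(0, n, batch_size):
        batch = values[offset:offset + batch_size]
        total += (sum(batch) / len(batch)) * len(batch)
    return total / n

print(batched_mean([1, 0, 1, 1, 0], 2))  # 0.6
```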
# ### Question 3
#
# _What does your final architecture look like? (Type of model, layers, sizes, connectivity, etc.) For reference on how to build a deep neural network using TensorFlow, see [Deep Neural Network in TensorFlow
# ](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/b516a270-8600-4f93-a0a3-20dfeabe5da6/concepts/83a3a2a2-a9bd-4b7b-95b0-eb924ab14432) from the classroom._
#
# **Answer:**
# The model is based on the LeNet architecture and comprises the following steps:
#
# - Layer 1: Convolutional. Input = 32x32x3. Output = 28x28x6.
# - Pooling step: Input = 28x28x6. Output = 14x14x6
# - Layer 2: Convolutional. Output = 10x10x16.
# - Pooling step: Input = 10x10x16. Output = 5x5x16
# - Layer 3: Fully Connected. Input = 400. Output = 120
# - Layer 4: Fully Connected. Input = 120. Output = 84
# - Layer 5: Fully Connected. Input = 84. Output = 43
#
# After each step/layer (except the last, Layer 5) there is an activation using a ReLU function in order to add non-linearities to the model. This function tends to give much better classification accuracy due to a number of desirable properties (in contrast to the tanh function used in the original LeNet architecture).
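# A quick sanity check of the layer sizes listed above: counting weights plus biases per layer gives the total number of trainable parameters.

```python
params = {
    "conv1": 5*5*3*6 + 6,     # 5x5 kernels, 3 input channels, 6 filters, + biases
    "conv2": 5*5*6*16 + 16,
    "fc1":   400*120 + 120,
    "fc2":   120*84 + 84,
    "fc3":   84*43 + 43,
}
total = sum(params.values())
print(total)  # 64811
```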
# +
### Train your model here.
### Feel free to use as many code cells as needed.
save_file = 'Model/model.ckpt'
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
print("Training...")
print()
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
validation_accuracy = evaluate(X_validation, y_validation)
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
saver.save(sess, save_file)
print("Model saved")
# -
# ### Question 4
#
# _How did you train your model? (Type of optimizer, batch size, epochs, hyperparameters, etc.)_
#
# **Answer:**
#
# - Type of optimizer: As an optimizer I used the AdamOptimizer, which is more efficient than the GradientDescent optimizer
# - Batch size: I varied the batch size between 128 and 500
# - Learning rate: as this rate controls the magnitude of the weight updates during training, I ended up with a learning rate of 0.002. A smaller rate makes learning take longer, but it can help the overall precision.
# - Hyperparameters: I used the standard parameters mu = 0 and sigma = 0.1 to initialize the weights.
# ### Question 5
#
#
# _What approach did you take in coming up with a solution to this problem? It may have been a process of trial and error, in which case, outline the steps you took to get to the final solution and why you chose those steps. Perhaps your solution involved an already well known implementation or architecture. In this case, discuss why you think this is suitable for the current problem._
# **Answer:**
#
# I used LeNet as it is a network architecture that is, on the one hand, very efficient, and on the other hand, small enough to fit this classification problem.
#
# Within the training procedure I used an approach in which I tested many different values of the parameters, such as
# - learning rate: 0.001, batch size: 128/256/512, epochs: 15, hyperparameters: mu=0, sigma=0.1
# - learning rate: 0.001, batch size: 128/256, epochs: 15, hyperparameters: mu=0, sigma=1
# - learning rate: 0.001/ 0.002/ 0.005/ 0.01, batch size: 128/256, epochs: 15, hyperparameters: mu=0, sigma=0.1
# - learning rate: 0.002, batch size: 128/256/512, epochs: 15, hyperparameters: mu=0, sigma=0.1
#
# As the validation accuracy of the model varied across these tests, I ended up with the following configuration of the model:
#
# - learning rate: 0.002, batch size 128, epochs: 7, hyperparameters: mu=0, sigma=0.1
#
# With this model the validation accuracy is 96.6%.
# ---
#
# ## Step 3: Test a Model on New Images
#
# Take several pictures of traffic signs that you find on the web or around you (at least five), and run them through your classifier on your computer to produce example results. The classifier might not recognize some local signs but it could prove interesting nonetheless.
#
# You may find `signnames.csv` useful as it contains mappings from the class id (integer) to the actual sign name.
# ### Implementation
#
# Use the code cell (or multiple code cells, if necessary) to implement the first step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.
# +
### Evaluate your model here.
with tf.Session() as sess:
saver.restore(sess, save_file)
test_accuracy = evaluate(X_test, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy))
# +
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import os
os.listdir("test_images/")
# -
def display_image():
dirname = 'test_images'
for name in os.listdir(dirname):
if name.endswith(".jpg"):
image = mpimg.imread(os.path.join(dirname, name))
print(name)
plt.figure(figsize=(1,1))
plt.imshow(image, cmap="gray")
display_image()
# ### Question 6
#
# _Choose five candidate images of traffic signs and provide them in the report. Are there any particular qualities of the image(s) that might make classification difficult? It could be helpful to plot the images in the notebook._
#
#
# **Answer:**
# Some of the pictures have a purely white background, as they are taken directly from the web. Since the pictures are clear, I would not expect any classification difficulties with them.
# +
### Run the predictions here.
### Feel free to use as many code cells as needed.
from PIL import Image
features1 = []
labels1 = []
dirname = 'test_images'
for name in os.listdir(dirname):
if name.endswith(".jpg"):
image = Image.open(os.path.join(dirname, name))
image.load()
feature = np.array(image, dtype=np.float32)
features1.append(feature)
label = os.path.split(name)[1][:2]
labels1.append(label)
features1=[split_f(x) for x in features1]
features1, labels1 = shuffle(features1, labels1)
with tf.Session() as sess:
saver.restore(sess, save_file)
predictions = sess.run(correct_prediction, feed_dict={x: features1, y: labels1})
print(predictions)
# -
# ### Question 7
#
# _Is your model able to perform equally well on captured pictures when compared to testing on the dataset? The simplest way to do this is to check the accuracy of the predictions. For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate._
#
# _**NOTE:** You could check the accuracy manually by using `signnames.csv` (same directory). This file has a mapping from the class id (0-42) to the corresponding sign name. So, you could take the class id the model outputs, lookup the name in `signnames.csv` and see if it matches the sign from the image._
#
# **Answer:**
# Based on the output, the model is 60% accurate on the captured pictures. This is lower than the accuracy on the dataset (87.2% on the test data).
# +
### Visualize the softmax probabilities here.
### Feel free to use as many code cells as needed.
output = None
with tf.Session() as sess:
saver.restore(sess, save_file)
output=sess.run(logits, feed_dict={x: features1})
print(sess.run(tf.nn.top_k(tf.nn.softmax(output), k=3)))
# -
# ### Question 8
#
# *Use the model's softmax probabilities to visualize the **certainty** of its predictions, [`tf.nn.top_k`](https://www.tensorflow.org/versions/r0.12/api_docs/python/nn.html#top_k) could prove helpful here. Which predictions is the model certain of? Uncertain? If the model was incorrect in its initial prediction, does the correct prediction appear in the top k? (k should be 5 at most)*
#
# `tf.nn.top_k` will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the corresponding class ids.
#
# Take this numpy array as an example:
#
# ```
# # (5, 6) array
# a = np.array([[ 0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497,
# 0.12789202],
# [ 0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401,
# 0.15899337],
# [ 0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 ,
# 0.23892179],
# [ 0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 ,
# 0.16505091],
# [ 0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137,
# 0.09155967]])
# ```
#
# Running it through `sess.run(tf.nn.top_k(tf.constant(a), k=3))` produces:
#
# ```
# TopKV2(values=array([[ 0.34763842, 0.24879643, 0.12789202],
# [ 0.28086119, 0.27569815, 0.18063401],
# [ 0.26076848, 0.23892179, 0.23664738],
# [ 0.29198961, 0.26234032, 0.16505091],
# [ 0.34396535, 0.24206137, 0.16240774]]), indices=array([[3, 0, 5],
# [0, 1, 4],
# [0, 5, 1],
# [1, 3, 5],
# [1, 4, 3]], dtype=int32))
# ```
#
# Looking just at the first row we get `[ 0.34763842, 0.24879643, 0.12789202]`, you can confirm these are the 3 largest probabilities in `a`. You'll also notice `[3, 0, 5]` are the corresponding indices.
# **Answer:**
# For the 1st picture (label 02 - Speed limit 50km) and the third picture (27 - Pedestrians.jpg) the model is very uncertain, as the top-3 predicted labels do not include the correct label:
# - picture 1: the top-3 predicted labels are [40, 18, 21] --> label 2 is not included
# - picture 3: the top-3 predicted labels are [3, 15, 2] --> label 27 is not included
# > **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.
| Traffic_Sign_Classifier.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
'''
Merge Excel spreadsheets into one
'''
import pandas as pd
import os
# -
# Directory containing all the Excel files to merge
workDir = "/Users/mac/Documents/workspaces/github/python/office/test01/"
# Get the list of all files in that directory
fileNames = os.listdir(workDir)
# List to collect the individual DataFrames
frames = []
# Loop over the files, reading each Excel file in turn
for fileName in fileNames:
    # Skip anything that is not an Excel file, including the merged output of a previous run
    if not fileName.endswith((".xls", ".xlsx")) or fileName == "test.xls":
        continue
    # Build the absolute path of the current file
    filePath = os.path.join(workDir, fileName)
    # Read the Excel file with pandas
    df = pd.read_excel(filePath)
    frames.append(df)
# Concatenate the frames and write them out to a single Excel file
result = pd.concat(frames)
result.to_excel(workDir + "test.xls", index=False)
| office/01-ExcelUtils.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
x=int(input("enter the first number"))
if(x>=0):
if(x==0):
print("x=0")
else:
print("x is positive")
else:
print("x is negative")
x=int(input("enter the principal amount"))
y=int(input("enter the number of years"))
# simple interest: the rate is 10% if the principal exceeds 50000 or the term exceeds 6 years, otherwise 8%
if x > 50000 or y > 6 :
    r = 0.10
else :
    r = 0.08
a = x + x * y * r
print(a)
x="we are second years BBA student"
x[0]
x[1]
x.upper()
x.lower()
x.partition("BBA")
x.split("BBA")
x.startswith("b")
x.count("B")
x="this is Ernamkulam"
x.count("i")
x.count("i",3,6)
x.index("E")
x.index("a")
x.index("i")
x="abcd123"
x.isalnum()
| python4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Lecture 5: Conditioning Continued, Law of Total Probability
#
#
# ## Stat 110, Prof. <NAME>, Harvard University
#
# ----
# > Thinking conditionally is a condition for thinking.
# # _How can we solve a problem?_
#
# 1. Try simple and/or extreme cases.
# 1. Try to break the problem up into simpler pieces; recurse as needed.
#
# 
#
# Let $A_1, A_2, \dots, A_n$ partition sample space $S$ into disjoint regions that sum up to $S$. Then
#
# \begin{align}
# P(B) &= P(B \cap A_1) + P(B \cap A_2) + \dots + P(B \cap A_n) \\
# &= P(B|A_1)P(A_1) + P(B|A_2)P(A_2) + \dots + P(B|A_n)P(A_n)
# \end{align}
#
# Note that statistics is as much of an art as it is a science, and choosing the right partitioning is key. Poor choices of partitions may result in many sub-problems that are even more difficult to solve.
#
# This is known as the __Law of Total Probability__. Conditional probability is important in its own right, and sometimes we use conditional probability to solve problems of unconditional probability, as above with $P(B)$.
#
# But conditional probability can be very subtle. You really need to think when using conditional probability.
# ## Ex. 1
#
# Let's consider a 2-card hand drawn from a standard playing deck. What is the probability of drawing 2 aces, given that we know one of the cards is an ace?
#
# \begin{align}
# P(\text{both are aces | one is ace}) &= \frac{P(\text{both are aces})}{P(\text{one is ace})} \\
# &= \frac{P(\text{both are aces})}{1 - P(\text{neither is ace})} \\
# &= \frac{\binom{4}{2}/\binom{52}{2}}{1 - \binom{48}{2}/\binom{52}{2}} \\
# &= \frac{1}{33}
# \end{align}
#
# But now think about this: What is the probability of drawing 2 aces, knowing that one of the cards __is the ace of spades__?
#
# \begin{align}
# P(\text{both are aces | ace of spades}) &= P(\text{other card is also an ace}) \\
# &= \frac{3}{51} \\
# &= \frac{1}{17}
# \end{align}
#
# _Notice how the fact that we know we have the ace of spades nearly doubles the probability of having 2 aces._
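# Both conditional probabilities above can be checked exactly with `math.comb`:

```python
from math import comb

p_both = comb(4, 2) / comb(52, 2)       # both cards are aces
p_one = 1 - comb(48, 2) / comb(52, 2)   # at least one card is an ace
print(p_both / p_one)   # 1/33, roughly 0.0303
print(3 / 51)           # 1/17, roughly 0.0588
```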
# ## Ex. 2
#
# Suppose there is a test for a disease, and this test is touted as being "95% accurate". The disease in question afflicts 1% of the population. Now say that there is a patient who tests positive for this disease under this test.
#
# First we define the events in question:
#
# Let $D$ be the event that the patient actually has the disease.
#
# Let $T$ be the event that the patient tests positive.
#
# Since that phrase "95% accurate" is ambiguous, we need to clarify that.
#
# \begin{align}
# P(T|D) = P(T^c|D^c) = 0.95
# \end{align}
#
# In other words, __conditioning on whether or not the patient has the disease__, we will assume that the test is 95% accurate.
#
# _What exactly are we trying to find?_
#
# What the patient really wants to know is not $P(T|D)$, which is the accuracy of the test; but rather $P(D|T)$, or the probability she has the disease given that the test returns positive. Fortunately, we know how $P(T|D)$ relates to $P(D|T)$.
#
# \begin{align}
# P(D|T) &= \frac{P(T|D)P(D)}{P(T)} ~~~~ & &\text{... Bayes Rule} \\
# &= \frac{P(T|D)P(D)}{P(T|D)P(D) + P(T|D^c)P(D^c)} ~~~~ & & \text{... by the Law of Total Probability} \\
#     &= \frac{(0.95)(0.01)}{(0.95)(0.01) + (0.05)(0.99)} ~~~~ & & \text{... the rarity of the disease competes with the rarity of false positives}\\
# &\approx 0.16
# \end{align}
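# Plugging the numbers into Bayes' rule confirms the surprising result:

```python
p_d = 0.01        # prior: disease prevalence
p_t_d = 0.95      # sensitivity, P(T|D)
p_t_not_d = 0.05  # false-positive rate, P(T|D^c)

posterior = p_t_d * p_d / (p_t_d * p_d + p_t_not_d * (1 - p_d))
print(round(posterior, 3))  # 0.161
```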
# ## Common Pitfalls
#
# 1. Mistaking $P(A|B)$ for $P(B|A)$. This is also known as the [Prosecutor's Fallacy](https://en.wikipedia.org/wiki/Prosecutor%27s_fallacy), where instead of asking about the _probability of guilt (or innocence) given all the evidence_, we make the mistake of concerning ourselves with the _probability of the evidence given guilt_. An example of the Prosecutor's Fallacy is the [case of Sally Clark](https://en.wikipedia.org/wiki/Sally_Clark).
#
# 1. Confusing _prior_ $P(A)$ with _posterior_ $P(A|B)$. Observing that event $A$ occurred does __not__ mean that $P(A) = 1$. Rather, $P(A|A) = 1$, while $P(A)$ is in general not 1.
#
# 1. Confusing _independence_ with __conditional independence__. This is more subtle than the other two.
#
# ### Definition
#
# Events $A$ and $B$ are __conditionally independent__ given event $C$, if
#
# \begin{align}
# P(A \cap B | C) = P(A|C)P(B|C)
# \end{align}
#
# In other words, conditioning on event $C$ does not give us any additional information on $A$ or $B$.
# ### _Does conditional independence given $C$ imply unconditional independence?_
#
# ### Ex. Chess Opponent of Unknown Strength
#
# Short answer, _no_.
#
# Consider playing a series of 5 games against a chess opponent of unknown strength. If we won all 5 games, then we would have a pretty good idea that we are the better chess player. So winning each successive game actually is providing us with information about the strength of our opponent.
#
# If we had prior knowledge about the strength of our opponent, meaning we condition on the strength of our opponent, then winning one game would not provide us with any additional information on the probability of winning the next.
#
# But if we do not condition on the strength of our opponent, meaning that we have no prior knowledge about our opponent, then successively winning a string of games actually _does_ give us information about the probability of winning the next game.
#
# So the games are conditionally independent given the strength of our opponent, but _not_ independent unconditionally.
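# A small simulation illustrates this. The win probabilities (0.2 against a strong opponent, 0.8 against a weak one) are made-up numbers: games are independent *given* the opponent's strength, yet winning game 1 raises the unconditional chance of winning game 2.

```python
import random

rng = random.Random(0)
outcomes = []
for _ in range(100_000):
    p_win = rng.choice([0.2, 0.8])   # opponent is strong or weak, 50/50
    g1 = rng.random() < p_win        # game outcomes are independent given p_win
    g2 = rng.random() < p_win
    outcomes.append((g1, g2))

p_g2 = sum(g2 for _, g2 in outcomes) / len(outcomes)
p_g2_given_g1 = (sum(1 for g1, g2 in outcomes if g1 and g2)
                 / sum(1 for g1, _ in outcomes if g1))
print(p_g2, p_g2_given_g1)  # roughly 0.5 vs roughly 0.68
```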
# ### _Does unconditional independence imply conditional independence given $C$?_
#
# ### Ex. Popcorn and the Fire Alarm
#
# Again, short answer is _no_.
#
# You can see this in the case of some phenomenon with multiple causes.
#
# Let $A$ be the event of the fire alarm going off.
#
# Let $F$ be the event of a fire.
#
# Let $C$ be the event of someone making popcorn.
#
# Suppose that either $F$ (an actual fire) or $C$ (the guy downstairs popping corn) will result in $A$, the fire alarm going off.
# Further suppose that $F$ and $C$ are independent: knowing that there's a fire $F$ doesn't tell me anything about anyone making popcorn $C$; and vice versa.
#
# But the probability of a fire given that the alarm goes off __and__ no one is making any popcorn is given by $P(F|A,C^c) = 1$. After all, if the fire alarm goes off and no one is making popcorn, there can only be one explanation: _there must be a fire_.
#
# So $F$ and $C$ may be independent, but they are not _conditionally independent_ when we condition on event $A$. Knowing that nobody is making any popcorn when the alarm goes off can only mean that there is a fire.
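# The same point can be checked by direct enumeration, with made-up marginals $P(F)=0.01$ and $P(C)=0.1$:

```python
pF, pC = 0.01, 0.1                       # independent causes (assumed values)

pA = 1 - (1 - pF) * (1 - pC)             # alarm rings iff fire or popcorn
p_F_given_A = pF / pA                    # every fire outcome triggers the alarm
# alarm with no popcorn can only mean fire, so this ratio is 1
p_F_given_A_notC = (pF * (1 - pC)) / (pF * (1 - pC))
print(p_F_given_A, p_F_given_A_notC)
```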
#
# ----
# View [Lecture 5: Conditioning Continued, Law of Total Probability | Statistics 110](http://bit.ly/2PncEYX) on YouTube.
| Lecture_05.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
# %matplotlib inline
import astropy.coordinates as coord
from astropy.table import Table, vstack
from astropy.io import fits
import astropy.units as u
import gala.coordinates as gc
# -
plt.style.use('notebook')
t = Table(fits.getdata('../data/pal5_ls_lite_grz.fits'))
points = np.array([t['g'] - t['r'], t['g']]).T
# +
ceq = coord.SkyCoord(ra=t['ra']*u.deg, dec=t['dec']*u.deg, frame='icrs')
cpal = ceq.transform_to(gc.Pal5)
wangle = 180*u.deg
# -
ra0 = 229.022083*u.deg
dec0 = -0.111389*u.deg
cluster_mask = np.sqrt((ceq.ra-ra0)**2 + (ceq.dec-dec0)**2)<0.5*u.deg
# ## Isochrones
# iso = Table.read('../data/mist_12.0_-1.50.cmd', format='ascii.commented_header', header_start=12)
iso = Table.read('../data/mist_11.5_-1.30.cmd', format='ascii.commented_header', header_start=12)
phasecut = (iso['phase']>=0) & (iso['phase']<5)
iso = iso[phasecut]
# +
# distance modulus
distance_app = 22.5*u.kpc
# distance_app = 18.6*u.kpc
dm = 5*np.log10((distance_app.to(u.pc)).value)-5
# main sequence + rgb
i_gr = iso['DECam_g']-iso['DECam_r'] + 0.06
i_rz = iso['DECam_r']-iso['DECam_z'] + 0.06
i_g = iso['DECam_g']+dm
i_r = iso['DECam_r']+dm
i_z = iso['DECam_z']+dm
i_left = i_gr - 0.12*(i_g/28)**5
i_right = i_gr + 0.12*(i_g/28)**5
# poly = np.hstack([np.array([i_left, i_g]), np.array([i_right[::-1], i_g[::-1]])]).T
# ind = (poly[:,1]<23.7) & (poly[:,1]>20)
# poly_main = poly[ind]
# path_main = mpl.path.Path(poly_main)
# -
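# A quick check of the distance modulus formula used above (pure Python, same formula):

```python
import math

def distance_modulus(d_pc):
    # m - M = 5 log10(d / 10 pc), with d in parsecs
    return 5 * math.log10(d_pc) - 5

print(round(distance_modulus(22.5e3), 2))  # 16.76 for 22.5 kpc
```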
# ### Plot CMDs
# +
fig, ax = plt.subplots(1,3,figsize=(10,10))
plt.sca(ax[0])
plt.plot(t['g'][cluster_mask] - t['r'][cluster_mask], t['g'][cluster_mask], 'k.', ms=1, alpha=0.5, zorder=0)
plt.plot(i_gr, i_g, 'r-', lw=1, zorder=1)
plt.xlim(-1,2)
plt.ylim(24,16)
plt.gca().set_aspect('equal')
plt.xlabel('(g - r)$_0$')
plt.ylabel('$g_0$')
plt.sca(ax[1])
plt.plot(t['r'][cluster_mask] - t['z'][cluster_mask], t['r'][cluster_mask], 'k.', ms=1, alpha=0.5, zorder=0)
plt.plot(i_rz, i_r, 'r-', lw=1, zorder=1)
plt.xlim(-1,2)
plt.ylim(24,16)
plt.gca().set_aspect('equal')
plt.xlabel('(r - z)$_0$')
plt.ylabel('$r_0$')
plt.sca(ax[2])
plt.plot(t['g'][cluster_mask] - t['r'][cluster_mask], t['r'][cluster_mask], 'k.', ms=1, alpha=0.5, zorder=0)
plt.plot(i_gr, i_r, 'r-', lw=1, zorder=1)
plt.xlim(-1,2)
plt.ylim(24,16)
plt.gca().set_aspect('equal')
plt.xlabel('(g - r)$_0$')
plt.ylabel('$r_0$')
plt.tight_layout()
# -
# ### Filter
# +
i_left = i_gr - 0.12*(i_g/28)**5
i_right = i_gr + 0.12*(i_g/28)**5
poly = np.hstack([np.array([i_left, i_g]), np.array([i_right[::-1], i_g[::-1]])]).T
ind = (poly[:,1]<23.7) & (poly[:,1]>20)
poly_main = poly[ind]
path_main = mpl.path.Path(poly_main)
# -
cmd_mask = path_main.contains_points(points)
plt.plot(t['g'][cmd_mask] - t['r'][cmd_mask], t['g'][cmd_mask], 'k.')
# +
gr_grid = np.linspace(-0.5, 2, 100)
p = np.array([1.7,-0.17])
grz_poly = np.poly1d(p)
np.save('../data/grz_poly', grz_poly)
gz_grid = grz_poly(gr_grid)
fig, ax = plt.subplots(1,2,figsize=(13,6), sharex=True, sharey=True)
plt.sca(ax[0])
plt.plot(t['g'][::1000] - t['r'][::1000], t['g'][::1000] - t['z'][::1000], 'k.', ms=1)
plt.plot(gr_grid, gz_grid+0.13 + 0.7*gr_grid**4 + 0.01*gr_grid, 'r:', zorder=10)
plt.plot(gr_grid, gz_grid-0.13 - 0.35*gr_grid**4 - 0.01*gr_grid, 'r:', zorder=10)
# plt.plot(gr_grid, gz_grid, 'r-', zorder=10)
plt.sca(ax[1])
plt.plot(t['g'][::1000] - t['r'][::1000], t['g'][::1000] - t['z'][::1000], 'k.', ms=1)
plt.plot(t['g'][cmd_mask] - t['r'][cmd_mask], t['g'][cmd_mask] - t['z'][cmd_mask], 'r.', ms=1, alpha=0.05)
plt.xlim(-0.5,2)
plt.ylim(-1,4)
plt.tight_layout()
# -
gz = t['g'] - t['z']
gr = t['g'] - t['r']
grz_mask = (gz<grz_poly(gr)+0.1) & (gz>grz_poly(gr)-0.1)
# +
fig, ax = plt.subplots(1,1,figsize=(10,15))
plt.plot(ceq.ra[cmd_mask & grz_mask], ceq.dec[cmd_mask & grz_mask], 'k.', ms=1, alpha=0.15)
plt.xlabel('RA [deg]')
plt.ylabel('Dec [deg]')
plt.gca().invert_xaxis()
plt.gca().set_aspect('equal')
# plt.savefig('../plots/decals_pal5_masks.png', dpi=150)
# -
| notebooks/pal5_grz_filter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Device
# ## Device types
# The DeviceManager supports different types of devices. At the time of writing this example, these are the supported device types:
#
# - USB: `USBDevice`
# - Ethernet/LAN: `LANDevice`
#
# All currently supported device types can be found in the `DeviceType` enumeration. If you want to extend the DeviceManager-project by another device type, it is absolutely essential to add this type to the `DeviceType` enumeration; otherwise the device scanners and the manager will not support that device type.
# +
from device_manager import DeviceType
list(DeviceType)
# -
# The USB-devices are represented by the class `USBDevice` and ethernet-devices are represented by the `LANDevice`-class. Both are subclasses of the abstract base class `Device`.
# +
from device_manager import USBDevice, LANDevice
usb_device = USBDevice()
lan_device = LANDevice()
# -
# The `DeviceType` enumeration is used in different places in the code, especially in the form of `DeviceTypeDict`s. Those are dictionaries that use a `DeviceType` as key. You can find them in the classes `DeviceScanner` or `DeviceManager`. Using a `DeviceTypeDict` is very easy, because you do not have to pass a `DeviceType`-object as key; you could also use a string, a `Device`-type or a `Device`-object. Internally the `DeviceType`-constructor is used for this feature:
print("'usb' :=", DeviceType("usb"))
print("USBDevice :=", DeviceType(USBDevice))
print("USBDevice() :=", DeviceType(USBDevice()))
# If you already have a `Device`-object, you could also use its `device_type`-property, to get the corresponding `DeviceType`-object:
usb_device.device_type
# ## Basic functionality
# ### Device properties
#
# As already mentioned above, all device types are based on the abstract class `Device`. All `Device`-objects have properties for their main address/port and for additional address aliases, if any.
# +
lan_device.address = "192.168.10.12"
lan_device.address_aliases = ["192.168.10.13", "mydevice.domain.com"]
usb_device.address = "/devices/pci0000:00/0000:00:02.0/usb1"
usb_device.address_aliases = "/dev/bus/usb/001/001"
# -
# All devices have a property returning their corresponding `DeviceType`.
print("lan_device.device_type:", lan_device.device_type)
print("usb_device.device_type:", usb_device.device_type)
# Additionally, each device has a `unique_identifier`-property, which contains the device's unique identifiers, as the name suggests. This property can be used to compare different `Device`-objects. If their `unique_identifier`s are equal, you can be sure it is the same device, which may have multiple connections.
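# Using a minimal, hypothetical stand-in class (not the real `Device` API), the comparison idea looks like this:

```python
class StubDevice:
    """Hypothetical stand-in exposing only a unique_identifier."""
    def __init__(self, vendor_id, product_id, serial):
        self.unique_identifier = (vendor_id, product_id, serial)

def same_device(a, b):
    # two device objects describe the same hardware iff their identifiers match
    return a.unique_identifier == b.unique_identifier

usb_a = StubDevice(0x3923, 0x12D0, "0123456789AB")  # seen on one connection
usb_b = StubDevice(0x3923, 0x12D0, "0123456789AB")  # same hardware, another scan
print(same_device(usb_a, usb_b))  # True
```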
# ### Serialization functions
#
# All devices are serializable into a python dictionary. This is used by the `DeviceManager`, for example, which uses those dictionaries to serialize its devices into a JSON file.
#
# The serialization functions can also be used, to have a look into the `Device`-objects. All relevant properties of the device, that are not `None`, are contained by the dictionary:
usb_device.to_dict()
# ## Proper device types
# ### USB devices
#
# USB devices are represented by the `USBDevice`-class. Besides the inherited attributes, the `USBDevice` contains further attributes:
#
# - `vendor_id` (`int`): The manufacturer code, assigned by the USB committee (USB-IF)
# - `product_id` (`int`): The product/model code, assigned by the device's manufacturer
# - `revision_id` (`int`): The revision code
# - `serial` (`str`): The device's serial number
# +
usb_device.vendor_id = 0x3923
usb_device.product_id = 0x12d0
usb_device.revision_id = 0x0100
usb_device.serial = "0123456789AB"
usb_device.to_dict()
# -
# Because the `USBDevice` is a USB device, the `device_type`-property always returns `DeviceType.USB`. As `unique_identifier`, the `USBDevice`-class uses its `vendor_id`, `product_id` and `serial`.
usb_device.unique_identifier
# To easily recognize the connected usb devices, you can look at the attributes `vendor_name` and `product_name`. These properties look up the `vendor_id` and `product_id` of the device to get the manufacturer and model name:
print("Vendor (0x{:04X}):".format(usb_device.vendor_id), usb_device.vendor_name)
print("Product (0x{:04X}):".format(usb_device.product_id), usb_device.product_name)
# ### Ethernet/LAN devices
#
# Ethernet devices are represented by the class `LANDevice`. It only has one additional attribute, which is the `mac_address` (the device's physical address). This is also the device's `unique_identifier`.
# +
lan_device.mac_address = "01-23-45-67-89-ab"
lan_device.to_dict()
# -
# As seen in the code lines above, the device automatically converts its MAC address into a standardized format. This is done to keep the devices comparable. The MAC address is always stored with colon separators (`:`) and uppercase hexadecimal components.
lan_device.unique_identifier
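# A sketch of such a normalization (the library's actual implementation may differ):

```python
import re

def normalize_mac(mac):
    # accept '-', ':' or '.' separators; emit the uppercase colon-separated form
    hexdigits = re.sub(r"[^0-9A-Fa-f]", "", mac).upper()
    if len(hexdigits) != 12:
        raise ValueError("invalid MAC address: " + mac)
    return ":".join(hexdigits[i:i + 2] for i in range(0, 12, 2))

print(normalize_mac("01-23-45-67-89-ab"))  # 01:23:45:67:89:AB
```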
| docs/source/examples/Device.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/sagorbrur/bnlp/blob/master/notebook/bnlp_colab_testing.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="xqHRllMHSiDH"
# # BNLP
# BNLP is a natural language processing toolkit for the Bengali language. This tool will help you tokenize Bengali text, embed Bengali words, perform Bengali POS tagging, and construct neural models for Bengali NLP purposes.
#
# Here we provide an A-to-Z, API-level tour of **BNLP**.
# + [markdown] id="SXurTR8rSzdT"
# ## Installation
# + id="EqRr2C1sSag_" outputId="b5ede389-79d2-4732-dc60-dd385475db00" colab={"base_uri": "https://localhost:8080/", "height": 462}
# !pip install -U bnlp_toolkit
# + [markdown] id="aYS0EYaLS-OC"
# ## Downloading Pretrained model
#
# NB: The POS tag and NER models may need to be downloaded from https://github.com/sagorbrur/bnlp/blob/master/model/bn_pos_model.pkl and then uploaded to Colab.
#
# Otherwise an error will be raised.
# + id="4VHCHzC7TgQZ" outputId="4944c588-3105-46ef-ae1d-3c22b541b202" colab={"base_uri": "https://localhost:8080/", "height": 34}
# !mkdir models
# %cd models
# + id="3d0TozrSTm0x" outputId="34b41f51-a368-44e1-bd81-301a6a8d3d3f" colab={"base_uri": "https://localhost:8080/", "height": 935}
# !wget https://github.com/sagorbrur/bnlp/raw/master/model/bn_spm.model
# !wget https://github.com/sagorbrur/bnlp/raw/master/model/bn_spm.vocab
# !wget https://github.com/sagorbrur/bnlp/raw/master/model/bn_pos.pkl
# !wget https://github.com/sagorbrur/bnlp/raw/master/model/bn_ner.pkl
# + id="WV6m0AF7S5IC" outputId="7e261686-a136-4a5f-85f9-9fd525868b46" colab={"base_uri": "https://localhost:8080/", "height": 85}
# !pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
downloaded = drive.CreateFile({'id':"1DxR8Vw61zRxuUm17jzFnOX97j7QtNW7U"})
downloaded.GetContentFile('bengali_word2vec.zip')
# !unzip bengali_word2vec.zip
# !rm -rf bengali_word2vec.zip
# + id="4X8Rb7wwU-dJ" outputId="4ed82852-8344-429e-d881-eae791401307" colab={"base_uri": "https://localhost:8080/", "height": 51}
# !pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
downloaded = drive.CreateFile({'id':"1CFA-SluRyz3s5gmGScsFUcs7AjLfscm2"})
downloaded.GetContentFile('bengali_fasttext_wiki.zip')
# !unzip bengali_fasttext_wiki.zip
# !rm -rf bengali_fasttext_wiki.zip
# + id="L-Csx3qMV6bz" outputId="30b1e6c4-2009-40b0-8a68-03058b197c9b" colab={"base_uri": "https://localhost:8080/", "height": 34}
# %cd ..
# + [markdown] id="89_dmCXJWRWc"
# ## Tokenization
#
#
# + [markdown] id="VjI8S58uXL3K"
# ### Sentencepiece Tokenizer
# + id="yTnY9_gWWP7z" outputId="7ffe1df2-7aa1-4a67-9e81-6f5d92af207e" colab={"base_uri": "https://localhost:8080/", "height": 68}
from bnlp import SentencepieceTokenizer
bsp = SentencepieceTokenizer()
model_path = "./models/bn_spm.model"
input_text = "আমি ভাত খাই। সে বাজারে যায়।"
tokens = bsp.tokenize(model_path, input_text)
print(tokens)
text2id = bsp.text2id(model_path, input_text)
print(text2id)
id2text = bsp.id2text(model_path, text2id)
print(id2text)
# + [markdown] id="DhhO65MvXC2C"
# ### Basic Tokenizer
# + id="NvX910bCW7t6" outputId="d2e83cdb-6548-4bcb-b226-e6f8708575f1" colab={"base_uri": "https://localhost:8080/", "height": 34}
from bnlp import BasicTokenizer
basic_tokenizer = BasicTokenizer()
raw_text = "আমি বাংলায় গান গাই।"
tokens = basic_tokenizer.tokenize(raw_text)
print(tokens)
# + [markdown] id="pTOHycMkXNoC"
# ### NLTK Tokenizer
# + id="0DT-H9W0XFGC" outputId="eac0c079-ca6c-46b1-d311-5a40140161f4" colab={"base_uri": "https://localhost:8080/", "height": 51}
from bnlp import NLTKTokenizer
bnltk = NLTKTokenizer()
text = "আমি ভাত খাই। সে বাজারে যায়। তিনি কি সত্যিই ভালো মানুষ?"
word_tokens = bnltk.word_tokenize(text)
sentence_tokens = bnltk.sentence_tokenize(text)
print(word_tokens)
print(sentence_tokens)
# + [markdown] id="puSrp9bBXklx"
# ## Word Embedding
# + [markdown] id="k4JTRdBIXmmV"
# ### Bengali Word2Vec
# + id="aTPKDulmXT5B" outputId="00f715df-da14-48ec-be4a-ed27e887b472" colab={"base_uri": "https://localhost:8080/", "height": 1000}
from bnlp import BengaliWord2Vec
bwv = BengaliWord2Vec()
model_path = "models/bengali_word2vec.model"
word = 'আমার'
vector = bwv.generate_word_vector(model_path, word)
print(vector.shape)
print(vector)
# + id="TXN7oFCHYRdB" outputId="0711aec5-3fc6-4dc9-fdf5-68ba0b63ec0a" colab={"base_uri": "https://localhost:8080/", "height": 122}
from bnlp import BengaliWord2Vec
bwv = BengaliWord2Vec()
model_path = "models/bengali_word2vec.model"
word = 'গ্রাম'
similar = bwv.most_similar(model_path, word)
print(similar)
# + [markdown] id="FpQsPWd5ZcwR"
# ### Bengali Fasttext
# Install fasttext and restart runtime
# + id="m-ElsxXXIJHh" outputId="1ba5bef1-9c4c-471a-d6ab-b9829d5771cb" colab={"base_uri": "https://localhost:8080/", "height": 258}
# !pip install fasttext
# + id="ICGKCZbRYuue" outputId="70b19b82-958b-4f92-f259-4f1b23dd21e4" colab={"base_uri": "https://localhost:8080/", "height": 340}
from bnlp.embedding.fasttext import BengaliFasttext
bft = BengaliFasttext()
word = "গ্রাম"
model_path = "models/bengali_fasttext_wiki.bin"
word_vector = bft.generate_word_vector(model_path, word)
print(word_vector.shape)
print(word_vector)
# + [markdown] id="8tEPivPeZhqF"
# ## Bengali POS Tagging
# + id="FCMr4ZkbZPZG" outputId="ae236c52-05fb-4509-9a16-34c2f3a303e8" colab={"base_uri": "https://localhost:8080/", "height": 34}
from bnlp import POS
bn_pos = POS()
model_path = "models/bn_pos.pkl"
text = "আমি ভাত খাই।"
res = bn_pos.tag(model_path, text)
print(res)
# + [markdown] id="x4lzRvVaqtw7"
# ## Bengali Name Entity Recognition
# + id="Kr23_fZbZkul" outputId="4a61ae37-be2d-40d2-d9ff-cc7bfe0e8997" colab={"base_uri": "https://localhost:8080/", "height": 34}
from bnlp import NER
bn_ner = NER()
model_path = "models/bn_ner.pkl"
text = "সে ঢাকায় থাকে।"
result = bn_ner.tag(model_path, text)
print(result)
# + [markdown] id="86HNdJIIJmoY"
# # Bengali Corpus Class
# + [markdown] id="Ub43jYHZJ291"
# ## Stopwords and Punctuations
# + id="GMfaIogmtPnk" outputId="2937f07e-11cb-465a-b5d0-c31adcf3baf8" colab={"base_uri": "https://localhost:8080/", "height": 71}
from bnlp.corpus import stopwords, punctuations
stopwords = stopwords()
print(stopwords)
print(punctuations)
# + [markdown] id="KWtWAE7SJ9CL"
# ## Remove stopwords from text
# + id="FFmusO3OJ0YG" outputId="086c0a69-7b70-43a6-9020-9ecd1db41277" colab={"base_uri": "https://localhost:8080/", "height": 34}
from bnlp.corpus import stopwords
from bnlp.corpus.util import remove_stopwords
stopwords = stopwords()
raw_text = 'আমি ভাত খাই।'
result = remove_stopwords(raw_text, stopwords)
print(result)
# ['ভাত', 'খাই', '।']
# + id="DKK0lDFoJ_c6"
| notebook/bnlp_colab_testing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] tags=[]
# ## A stateless oracle: introduction
# #### 09.0 Winter School on Smart Contracts
# ##### <NAME> (<EMAIL>)
# 2022-03-24
#
# * Part 0: Theoretical introduction
# * Parts 1-4 are only relevant if you want to **create** an Oracle
# * Parts 5-6 are needed to **use** the oracle.
# -
# ## Introduction
# The distinction between "stateless" and "stateful" smart contracts is often defined in such a way that
# * Stateless Smart Contracts only evaluate the merits of a proposed transaction (without interacting with the blockchain), while
# * Stateful Smart Contracts fully interact with the blockchain
#
# Strictly speaking, however, **there are no stateless payment transactions**. Every payment transaction must be checked for a possible overspend, which requires a look-up on the blockchain.
#
# In the following, we will show how this fact can be used to create an oracle that is based entirely on "stateless" smart contracts.
# ## Goal
# The goal of this chapter is to create an oracle that is completely on chain and does not use stateful smart contracts. The principle is illustrated with an oracle for the ALGO-USD exchange rate, but it is not limited to exchange rates or trading assets.
# ## In a nutshell
# 1. We create an Oracle coin and two accounts called `Price` and `Reserve`.
# 1. The (external) oracle will rebalance the holdings of the account such that **the holdings of `Price` will reflect the USD/ALGO exchange rate**.
# 2. We then create one Smart Signature for each `Price` and `Reserve` such that these accounts can transfer some or all of their coins exclusively to *themselves*.
# 3. We then create a Smart Signature for an exchange that uses the oracle to authorize atomic swaps that correctly reflect the exchange rate.
# ## Framework
#
# While a stateless contract (or transaction group of stateless contracts) has no access to the holdings of an address, all individual transaction amounts are available to all transactions in the transaction group. **The goal is hence to find a way to encode the holdings of an address in the form of a transaction amount.**
#
# Capital letters denote holdings and lower case letters denote amounts in asset transfer transactions.
#
# 1. The supply of an ASA is fixed and known without referring to the blockchain, formally $$T = const$$
# 2. If the creator of the ASA transfers all coins exclusively to two trusted accounts $P$ ("price") and $R$ ("reserve"), which are governed by smart contracts that only allow (a) transfers to oneself or (b) transfers to the other holding account, then their holdings add up to the total, formally $$P + R = T$$
# 3. Any payment amount must be smaller or equal to the total holding of the payer, formally $$p \leq P$$ and $$r \leq R$$
# 4. The total amount transferred in one transaction group is hence $$p +r \leq P + R = T$$
# 5. If we now require (remembering $T=const$) $$p +r = T,$$ then the inequalities become equalities and we have $$p =P$$ and $$r=R$$
# 6. Hence the transaction amount $p$ reflects the holdings $P$. $\square$
#
# **Remark:** Condition (5.) also protects against a potential malicious actor $S$ obtaining some oracle coins.
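# The argument above can be checked numerically. This is an illustrative sketch in plain Python, not part of any smart signature; the supply and holdings are made-up numbers.

```python
T = 1_000   # fixed total supply of the oracle coin (illustrative)
P = 750     # holdings of the Price account (encodes the exchange rate)
R = T - P   # holdings of the Reserve account

def group_valid(p, r):
    # feasible: neither account overspends; accepted: the amounts sum to the supply
    return p <= P and r <= R and p + r == T

# the only accepted pair is p == P, r == R: the amount p reveals the holding P
accepted = [(p, T - p) for p in range(T + 1) if group_valid(p, T - p)]
print(accepted)  # → [(750, 250)]
```

# Any deviation from p = P forces the other amount past its account's holding, so the group is rejected.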
# #### The Smart Signature for the oracle
# The third line of the oracle_condition ensures that $p+r = T$
oracle_condition = And (
Gtxn[0].sender() == Addr(price_sig['public_addr']), # p
Gtxn[1].sender() == Addr(reserve_sig['public_addr']), # r
Gtxn[0].asset_amount() + Gtxn[1].asset_amount() == Int(int( 1e3 * 1e6 )) # p + r = T (small units)
)
# safety conditions omitted for simplicity, see 09.6 for entire smart signature
# #### Using the oracle
# To use the oracle in an atomic swap, we need a transaction group of 4 transactions:
# * `Txn[0]` a transaction of the *Price* account with itself to obtain the price
# * `Txn[1]` a transaction of the *Reserve* account to verify the price
# * Criterion: the amounts of `Txn[0]` and `Txn[1]` must add up exactly to the total supply of Oracle Coins
# * `Txn[2]` the ALGO transaction
# * `Txn[3]` the USDC transaction
# * Criterion: the amounts of `Txn[2]` and `Txn[3]` must correctly reflect the exchange rate
# * The exchange rate is obtained from `Txn[0]`
#
# **Note** transactions 0 and 1 are the oracle part, transactions 2 and 3 are the actual atomic swap
#
#
#
# +
oracle_condition = And (
Gtxn[0].sender() == Addr(price_sig['public_addr']), # p
Gtxn[1].sender() == Addr(reserve_sig['public_addr']), # r
Gtxn[0].asset_amount() + Gtxn[1].asset_amount() == Int(int( 1e3 * 1e6 )) # p + r = T (small units)
)
exchange_condition = And (
Gtxn[2].xfer_asset() == Int(0), # Txn2 is in ALGOs
Gtxn[3].xfer_asset() == Int(USDC_id), # Txn3 is in USDC
# Exchange rate in small units (note: Algo, Oracle and USDC *all* have 6 decimals)
# Exchange rate is taken from Gtxn[0].asset_amount()
# ALGO_amount * USD_per_ALGO == USD_amount
Gtxn[2].amount() * Gtxn[0].asset_amount() / Int(int(1e6)) == Gtxn[3].asset_amount(),
)
# safety conditions omitted for simplicity, see 09.6 for entire smart signature
exchange_pyteal = And(
exchange_condition,
oracle_condition,
safety_condition
)
# -
# ## Example
# A working example of the Oracle Coin, reflecting the USD/ALGO exchange rate can be found on the testnet:
# https://testnet.algoexplorer.io/asset/77534697
| ClassMaterial/09 - Oracles/09 - code/09.0_WSC_Oracle.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
import os
import re
import nltk
nltk.download('punkt')
nltk.download('stopwords')
nltk.download('wordnet')
from nltk.tokenize import word_tokenize
# import these modules
from nltk.stem import WordNetLemmatizer
# # !pip install nltk
from numba import jit ,cuda
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize
# + tags=[]
#@jit
def Gathering_data():
# file paths for the sarc/notsarc datasets
not_sarcastic_d1 = "Datset2/Sample/GEN-sarc-notsarc.csv"
not_sarcastic_d2 = "Datset2/Sample/HYP-sarc-notsarc.csv"
not_sarcastic_d3 = "Datset2/Sample/RQ-sarc-notsarc.csv"
# Reading each Dataframe
not_sar_d1_df = pd.read_csv(not_sarcastic_d1)
not_sar_d2_df = pd.read_csv(not_sarcastic_d2)
not_sar_d3_df = pd.read_csv(not_sarcastic_d3)
#drop extra column from each dataframe
not_sar_d1_df.drop(columns=['id'],inplace=True)
not_sar_d2_df.drop(columns=['id'],inplace=True)
not_sar_d3_df.drop(columns=['id'],inplace=True)
# labeling the non-sarcastic dataset
non_sarcastic_dic = {
'text' : [],
'class':[]
}
dir_path = os.listdir("Notebook/Datasets/Dataset1/notsarc")
# print(dir_path)
for file_name in dir_path:
try:
file = open("Notebook/Datasets/Dataset1/notsarc/"+file_name,"r+")
# dumping text of each file in the dictionary
non_sarcastic_dic["text"].append(file.read())
non_sarcastic_dic["class"].append('notsarc')
except:
continue
# turning dic to dataframe
not_sar_d4_df = pd.DataFrame.from_dict(non_sarcastic_dic)
# labeling the sarcastic dataset
sarcastic_dic = {
'text' : [],
'class':[]
}
dir_path = os.listdir("Notebook/Datasets/Dataset1/sarc")
# print(dir_path)
for file_name in dir_path:
try:
file = open("Notebook/Datasets/Dataset1/sarc/"+file_name,"r+")
# dumping text of each file in the dictionary
sarcastic_dic["text"].append(file.read())
sarcastic_dic["class"].append('sarc')
except:
continue
# turning dic to dataframe
sar_d5_df = pd.DataFrame.from_dict(sarcastic_dic)
frames = [not_sar_d1_df,
not_sar_d2_df,
not_sar_d3_df,
not_sar_d4_df,
sar_d5_df]
# combine frames (both sarcastic and non-sarcastic records)
combined_df = pd.concat(frames)
combined_df.to_csv("pre-processed_datasets/pre-processed_sarcastic_data.csv")
def Preprocessing():
# file path
filePath = "pre-processed_datasets/pre-processed_sarcastic_data.csv"
df = pd.read_csv(filePath)
df.drop(columns=['Unnamed: 0'],inplace=True)
# Tokenization
for index , text in df.iterrows():
# case 0 : converting into the lower case:
text[1] = text[1].lower()
# print("Convering to lower case : \n",text[1])
# cleaning: remove useless or special characters
# case 1 : remove extra and white spaces from the string
# print(type(text[1]))
text[1] = text[1].strip()
text[1] = re.sub("[0-9!(),~@#.$%^&*/'()_=?-]*", '', text[1])
# print("Removed special character : \n",text[1])
stopword = stopwords.words('english')
text[1] = word_tokenize(text[1])
# print("Tokenization : \n" ,text[1])
# case 3 : stemming
ps = PorterStemmer()
# case 4 : lemmitization :
lemmatizer = WordNetLemmatizer()
for w in range(0,len(text[1])):
text[1][w] = ps.stem(text[1][w])
text[1][w] = lemmatizer.lemmatize(text[1][w])
# case 2: remove the stopwords
for sptword in stopword:
if sptword in text[1]:
text[1].remove(sptword)
print("Removed stopwords : \n", text[1], " ", text[0])
if index >10:
break
# head
print(df.head(10))
df.to_csv("pre-processed_datasets/pre-processing_text.csv",index=False)
if __name__=="__main__":
#Gathering_data()
Preprocessing()
# +
hel = ['if', 'thats', 'true', 'then', 'freedom', 'of', 'speech', 'is', 'doomed', 'harassment', 'is', 'subjective', 'now', 'i', 'can', 'claim', 'that', 'a', 'book', 'i', 'dont', 'like', 'is', 'harassing', 'me', 'and', 'it', 'will', 'be', 'banned']
ps = PorterStemmer()
for w in range(0,len(hel)):
hel[w] = ps.stem(hel[w])
print(hel)
# -
l1 = ['if', 'that', 'true', 'then', 'freedom', 'of', 'speech', 'is', 'doom', 'harass', 'is', 'subject', 'now', 'i', 'can', 'claim', 'that', 'a', 'book', 'i', 'dont', 'like', 'is', 'harass', 'me', 'and', 'it', 'will', 'be', 'ban']
l2 = ['if', 'that', 'true', 'then', 'freedom', 'of', 'speech', 'is', 'doom', 'harass', 'is', 'subject', 'now', 'i', 'can', 'claim', 'that', 'a', 'book', 'i', 'dont', 'like', 'is', 'harass', 'me', 'and', 'it', 'will', 'be', 'ban']
print(l1==l2)
# +
# import these modules
from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
print("rocks :", lemmatizer.lemmatize("rocks"))
print("corpora :", lemmatizer.lemmatize("corpora"))
# a denotes adjective in "pos"
print("better :", lemmatizer.lemmatize("better", pos ="a"))
# -
| .ipynb_checkpoints/SarcasticDetecton-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import numpy.linalg
# +
V = 0.75
E = 0.25
M__1=np.matrix([[1,-V], [0, 1]])
M_0=np.matrix([[E, -1], [1, 0]])
print(f'V: {V}, E: {E}')
print(M__1)
print(M_0)
# -
def commutator(A, B):
return A*B-B*A
M_1=M__1*np.linalg.matrix_power(M_0, 3)
A = np.linalg.matrix_power(commutator(M_1, M_0*np.linalg.matrix_power(M_1, 4)), 2)
print(A, np.power(V,2))
np.asmatrix(A)
| Proposition2-3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Hyperparameter tuning
#
# In the previous section, we did not discuss the parameters of random forest
# and gradient-boosting. However, there are a couple of things to keep in mind
# when setting these.
#
# This notebook gives crucial information regarding how to set the
# hyperparameters of both random forest and gradient boosting decision tree
# models.
#
# <div class="admonition caution alert alert-warning">
# <p class="first admonition-title" style="font-weight: bold;">Caution!</p>
# <p class="last">For the sake of clarity, no cross-validation will be used to estimate the
# testing error. We are only showing the effect of the parameters
# on the validation set of what should be the inner cross-validation.</p>
# </div>
#
# ## Random forest
#
# The main parameter to tune for random forest is the `n_estimators` parameter.
# In general, the more trees in the forest, the better the generalization
# performance will be. However, it will slow down the fitting and prediction
# time. The goal is to balance computing time and generalization performance when
# setting the number of estimators when putting such learner in production.
#
# Then, we could also tune a parameter that controls the depth of each tree in
# the forest. Two parameters are important for this: `max_depth` and
# `max_leaf_nodes`. They differ in the way they control the tree structure.
# Indeed, `max_depth` will enforce to have a more symmetric tree, while
# `max_leaf_nodes` does not impose such constraint.
#
# Be aware that with random forest, trees are generally deep since we are
# seeking to overfit each tree on each bootstrap sample because this will be
# mitigated by combining them altogether. Assembling underfitted trees (i.e.
# shallow trees) might also lead to an underfitted forest.
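# The structural difference between `max_depth` and `max_leaf_nodes` can be illustrated on a single decision tree. This is a small sketch on synthetic data; the dataset and variable names are illustrative assumptions, not part of the exercise.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X = rng.uniform(size=(200, 1))
y = np.sin(4 * X.ravel()) + 0.1 * rng.normal(size=200)

# max_depth bounds every path from the root, giving a more symmetric tree
sym_tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)
# max_leaf_nodes bounds the total leaf count, letting some branches grow deeper
asym_tree = DecisionTreeRegressor(max_leaf_nodes=8, random_state=0).fit(X, y)

print(sym_tree.get_depth(), sym_tree.get_n_leaves())
print(asym_tree.get_depth(), asym_tree.get_n_leaves())
```

# Both trees may end up with a similar number of leaves, but only the `max_leaf_nodes` tree is free to spend its leaf budget on the regions of the feature space that need it.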
# +
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
data, target = fetch_california_housing(return_X_y=True, as_frame=True)
target *= 100 # rescale the target in k$
data_train, data_test, target_train, target_test = train_test_split(
data, target, random_state=0)
# +
import pandas as pd
from sklearn.model_selection import RandomizedSearchCV
from sklearn.ensemble import RandomForestRegressor
param_distributions = {
"n_estimators": [1, 2, 5, 10, 20, 50, 100, 200, 500],
"max_leaf_nodes": [2, 5, 10, 20, 50, 100],
}
search_cv = RandomizedSearchCV(
RandomForestRegressor(n_jobs=2), param_distributions=param_distributions,
scoring="neg_mean_absolute_error", n_iter=10, random_state=0, n_jobs=2,
)
search_cv.fit(data_train, target_train)
columns = [f"param_{name}" for name in param_distributions.keys()]
columns += ["mean_test_error", "std_test_error"]
cv_results = pd.DataFrame(search_cv.cv_results_)
cv_results["mean_test_error"] = -cv_results["mean_test_score"]
cv_results["std_test_error"] = cv_results["std_test_score"]
cv_results[columns].sort_values(by="mean_test_error")
# -
# We can observe in our search that we are required to have a large
# number of leaves and thus deep trees. This parameter seems particularly
# impactful in comparison to the number of trees for this particular dataset:
# with at least 50 trees, the generalization performance will be driven by the
# number of leaves.
#
# Now we will estimate the generalization performance of the best model by
# refitting it with the full training set and using the test set for scoring on
# unseen data. This is done by default when calling the `.fit` method.
error = -search_cv.score(data_test, target_test)
print(f"On average, our random forest regressor makes an error of {error:.2f} k$")
# ## Gradient-boosting decision trees
#
# For gradient-boosting, parameters are coupled, so we cannot set the parameters
# one after the other anymore. The important parameters are `n_estimators`,
# `learning_rate`, and `max_depth` or `max_leaf_nodes` (as previously discussed
# random forest).
#
# Let's first discuss the `max_depth` (or `max_leaf_nodes`) parameter. We saw
# in the section on gradient-boosting that the algorithm fits the error of the
# previous tree in the ensemble. Thus, fitting fully grown trees would be
# detrimental. Indeed, the first tree of the ensemble would perfectly fit
# (overfit) the data and thus no subsequent tree would be required, since there
# would be no residuals. Therefore, the tree used in gradient-boosting should
# have a low depth, typically between 3 to 8 levels, or few leaves ($2^3=8$ to
# $2^8=256$). Having very weak learners at each step will help reduce
# overfitting.
#
# With this consideration in mind, the deeper the trees, the faster the
# residuals will be corrected and less learners are required. Therefore,
# `n_estimators` should be increased if `max_depth` is lower.
#
# Finally, we have overlooked the impact of the `learning_rate` parameter until
# now. When fitting the residuals, we would like the tree to try to correct all
# possible errors or only a fraction of them. The learning-rate allows you to
# control this behaviour. A small learning-rate value would only correct the
# residuals of very few samples. If a large learning-rate is set (e.g., 1), we
# would fit the residuals of all samples. So, with a very low learning-rate, we
# will need more estimators to correct the overall error. However, a too large
# learning-rate tends to obtain an overfitted ensemble, similar to having a too
# large tree depth.
# +
from scipy.stats import loguniform
from sklearn.ensemble import GradientBoostingRegressor
param_distributions = {
"n_estimators": [1, 2, 5, 10, 20, 50, 100, 200, 500],
"max_leaf_nodes": [2, 5, 10, 20, 50, 100],
"learning_rate": loguniform(0.01, 1),
}
search_cv = RandomizedSearchCV(
GradientBoostingRegressor(), param_distributions=param_distributions,
scoring="neg_mean_absolute_error", n_iter=20, random_state=0, n_jobs=2
)
search_cv.fit(data_train, target_train)
columns = [f"param_{name}" for name in param_distributions.keys()]
columns += ["mean_test_error", "std_test_error"]
cv_results = pd.DataFrame(search_cv.cv_results_)
cv_results["mean_test_error"] = -cv_results["mean_test_score"]
cv_results["std_test_error"] = cv_results["std_test_score"]
cv_results[columns].sort_values(by="mean_test_error")
# -
#
# <div class="admonition caution alert alert-warning">
# <p class="first admonition-title" style="font-weight: bold;">Caution!</p>
# <p class="last">Here, we tune the <tt class="docutils literal">n_estimators</tt> but be aware that using early-stopping as
# in the previous exercise will be better.</p>
# </div>
#
# In this search, we see that the `learning_rate` is required to be large
# enough, i.e. > 0.1. We also observe that for the best-ranked models, a
# smaller `learning_rate` requires more trees or a larger number of
# leaves for each tree. However, it is particularly difficult to draw
# more detailed conclusions since the best value of one hyperparameter depends
# on the other hyperparameter values.
# Now we estimate the generalization performance of the best model
# using the test set.
error = -search_cv.score(data_test, target_test)
print(f"On average, our GBDT regressor makes an error of {error:.2f} k$")
# The mean test score in the held-out test set is slightly better than the score
# of the best model. The reason is that the final model is refitted on the whole
# training set and therefore, on more data than the inner cross-validated models
# of the grid search procedure.
| sklearn/notes/ensemble_hyperparameters.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Ridge regularization
#
# Ridge regularization is an alternative to LASSO. It penalizes the squared L2 norm of the weights instead of the L1 norm, which means that the weights are kept under control but not necessarily driven to zero. This offers a smoother alternative to LASSO, as the modelled function might still need to be high-dimensional in certain areas.
#
# Ridge regularization and its gradient are defined as follows:
import numpy as np

class Ridge:
    def __init__(self, _lambda):
        self._lambda = _lambda
    def __call__(self, theta):
        # penalty term added to the loss: (lambda / 2) * ||theta||^2
        return self._lambda * 0.5 * np.sum(theta**2)
    def gradient(self, theta):
        # gradient of the penalty term with respect to theta
        return self._lambda * theta
# The regularization is controlled by a factor typically referred to as Lambda. If Lambda equals 0, there is no regularization, otherwise, the larger the value of Lambda, the more the regularization term will impact the loss function.
#
# 
# As can be seen in the animation above, Ridge regularization is “smoother”, as it maintains some of the complexity of the function while tuning it down.
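# A minimal usage sketch of the regularizer in gradient descent (restating the `Ridge` class for self-containment; the data, learning rate, and lambda values are illustrative assumptions):

```python
import numpy as np

class Ridge:  # as defined above
    def __init__(self, _lambda):
        self._lambda = _lambda
    def __call__(self, theta):
        return self._lambda * 0.5 * np.sum(theta**2)
    def gradient(self, theta):
        return self._lambda * theta

rng = np.random.RandomState(0)
X = rng.normal(size=(100, 5))
w_true = np.array([3.0, -2.0, 0.0, 0.0, 1.0])
y = X @ w_true + 0.1 * rng.normal(size=100)

def fit(reg, lr=0.05, steps=500):
    # plain gradient descent on mean squared error plus the regularization term
    theta = np.zeros(5)
    for _ in range(steps):
        grad = 2 * X.T @ (X @ theta - y) / len(y) + reg.gradient(theta)
        theta -= lr * grad
    return theta

weak = fit(Ridge(_lambda=0.01))
strong = fit(Ridge(_lambda=5.0))
# stronger regularization shrinks the weights toward zero without zeroing them out
print(np.round(weak, 2))
print(np.round(strong, 2))
```

# Increasing Lambda uniformly shrinks the weight vector, which is the “smoother” behaviour compared to LASSO's hard zeroing.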
| notebooks/Learning Units/Linear Regression/Linear Regression - Chapter 5 - Ridge regularization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %matplotlib inline
# %config InlineBackend.figure_format = "retina"
from matplotlib import rcParams
rcParams["savefig.dpi"] = 200
rcParams["font.size"] = 8
import warnings
warnings.filterwarnings('ignore')
# -
# # Create numpy region mask
#
# In this tutorial we will show how to create a mask for arbitrary latitude and longitude grids.
# Import regionmask and check the version:
import regionmask
regionmask.__version__
# We define a lon/ lat grid with a 1° grid spacing, where the points define the middle of the grid. Additionally we create a grid that spans the edges of the grid for the plotting.
# +
import numpy as np
# define a longitude latitude grid
lon = np.arange(-179.5, 180)
lat = np.arange(-89.5, 90)
# for the plotting
lon_edges = np.arange(-180, 181)
lat_edges = np.arange(-90, 91)
# -
# Again we use the SREX regions. `regionmask` returns a `xarray.Dataset` - this can be converted to a `numpy` array by using `values`
mask = regionmask.defined_regions.srex.mask(lon, lat).values
# `mask` is now a `n_lon x n_lat` numpy array. Gridpoints that do not fall in a region are NaN, the gridpoints that fall in a region are encoded with the number of the region (here 1 to 26).
#
# The function `mask` determines whether each combination of the points given in `lon` and `lat` lies within the polygons making up the regions.
#
# We can now plot the `mask`:
# +
import cartopy.crs as ccrs
import matplotlib.pyplot as plt
ax = plt.subplot(111, projection=ccrs.PlateCarree())
# pcolormesh does not handle NaNs, requires masked array
mask_ma = np.ma.masked_invalid(mask)
h = ax.pcolormesh(lon_edges, lat_edges, mask_ma, transform=ccrs.PlateCarree(), cmap='viridis')
ax.coastlines()
plt.colorbar(h, orientation='horizontal', pad=0.04);
# -
# Finally the `mask` can now be used to mask out all data that is not in a specific region.
# +
# create random data
data = np.random.randn(*lat.shape + lon.shape)
# only retain data in the Central Europe
data_ceu = np.ma.masked_where(mask != 12, data)
# -
# Plot the selected data
# +
# load cartopy
import cartopy.crs as ccrs
# choose a good projection for regional maps
proj=ccrs.LambertConformal(central_longitude=15)
# plot the outline of the central European region
ax = regionmask.defined_regions.srex.plot(regions=12, add_ocean=False, resolution='50m',
proj=proj, add_label=False)
ax.pcolormesh(lon_edges, lat_edges, data_ceu, transform=ccrs.PlateCarree())
# fine tune the extent
ax.set_extent([-15, 45, 40, 65], crs=ccrs.PlateCarree())
# -
# Finally we can obtain the region mean:
print('Global mean: ', np.mean(data))
print('Central Europe:', np.mean(data_ceu))
# ## Create a mask with a different lon/ lat grid
#
# The interesting thing about `regionmask` is that you can use any lon/lat grid.
# Use a 5° x 5° grid:
# +
# define a longitude latitude grid
lon5 = np.arange(-177.5, 180, 5)
lat5 = np.arange(-87.5, 90, 5)
# for the plotting
lon5_edges = np.arange(-180, 181, 5)
lat5_edges = np.arange(-90, 91, 5)
mask5_deg = regionmask.defined_regions.srex.mask(lon5, lat5, xarray=False)
# +
ax = plt.subplot(111, projection=ccrs.PlateCarree())
# pcolormesh does not handle NaNs, requires masked array
mask5_ma = np.ma.masked_invalid(mask5_deg)
h = ax.pcolormesh(lon5_edges, lat5_edges, mask5_ma, transform=ccrs.PlateCarree(), cmap='viridis')
ax.coastlines()
plt.colorbar(h, orientation='horizontal', pad=0.04);
# -
# Now the grid cells are much larger.
| docs/notebooks/mask_numpy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#Program Name: Graphing ODS Databases - SQL Conn 01 - AppData display inline graph-w-PX-color-export2html_v2
#Purpose: Chart scatter, line, box, aggr, violin, or bar with table data
#Author: <NAME>, Director, Data Governance
#Date: 2020.08.30 - 2021.06.30
#Errata: 0.1 Improvements can be made to the script by looping through the databases
import os
import time, datetime
import sqlalchemy as db
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
import pyodbc
import matplotlib
import plotly.express as px
import plotly.io as pio
#matplotlib.rcParams['figure.figsize'] = (30,5)
# %config InlineBackend.figure_format = 'retina'
from dotenv import load_dotenv # add this line
load_dotenv() # add this line
user = os.getenv('MySQLeUser')
password = os.getenv('<PASSWORD>')
host = os.getenv('MySQLeHOST')
db = os.getenv('MySQLeDB')
# %matplotlib inline
# %load_ext sql
dt1 = time.strftime("%d/%m/%Y")
dt2 = time.strftime("%d.%m.%Y")
dt3 = pd.to_datetime('today')
dt4 = pd.Timestamp('today').strftime("%Y%m%d")
# +
# This segment builds the appropriate file system structure as a variable-driven exercise
# Take time to set your 'eeeeeeeeee' number as variable 'pn' below
# and set the program directory variable called 'programDirectory' before running
# ===================================================================================================================
pn = r'eeeeeeeeee' #This represents the Windows system employee login folder - the IBM team uses a 9-digit number
# ===================================================================================================================
#r'C:\Users\eeeeeeeeee\Documents\Py\Daily\Charts_px\ApplicationData\{}.html'.format(FileName))
programDirectory = 'Daily' # Update this variable to wherever you want the program subfolder/files to be located
un = r'C:\Users'
cn = r'Documents\Py'
pc = r'Charts_px'
tn = r'ApplicationData'
#Set a parent directory
parentDirectory = os.path.join(un, pn, cn)  # os.path.join avoids the invalid "\{" escape in "{}\{}\{}".format()
print('Parent Directory is: ', parentDirectory)
mode = 0o666
#Set path location for working with local file(s)
path = os.path.join(parentDirectory, programDirectory,)
pathT = os.path.join(parentDirectory, programDirectory, pc, tn)
try:
if not os.path.exists(path):
os.makedirs(path, mode)
print('Program Directory subfolder has been created: ', programDirectory)
else:
print('Directory:', programDirectory, '>>> Note ---- this folder already exists <<<')
except OSError as error:
print(error)
pass
try:
if not os.path.exists(pathT):
os.makedirs(pathT, mode)
print('Program Directory subfolder has been created: ', pathT)
else:
print('Directory:', pathT, '>>> Note ---- this folder already exists <<<')
except OSError as error:
print(error)
pass
now01 = datetime.datetime.now()
print('1.1.0 Processing Step Complete: ',now01.strftime("%Y-%m-%d %H:%M:%S"))
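The try/except blocks above guard against failures when a folder already exists; since Python 3.2 the same idempotent behavior is available in a single call via `exist_ok=True`. A minimal sketch (using a throwaway temp directory rather than the script's real paths):

```python
import os
import tempfile

# exist_ok=True makes makedirs idempotent: no error if the folder already exists
base = tempfile.mkdtemp()
target = os.path.join(base, "Daily", "Charts_px", "ApplicationData")
os.makedirs(target, exist_ok=True)  # creates the whole chain of subfolders
os.makedirs(target, exist_ok=True)  # second call is a no-op, no exception raised
print(os.path.isdir(target))        # -> True
```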
# +
conn = pyodbc.connect('Driver={SQL Server};'
'Server=DEVODSSQL;'
'Database=Greg;'
'Trusted_Connection=yes;')
sql_query01 = pd.read_sql_query('''
select DISTINCT TableName
FROM [dbo].[tableRowCountApplicationData]
ORDER BY TableName;
'''
,conn) # Load the list of distinct tables to graph
sql_query02 = pd.read_sql_query('''
select *
FROM [dbo].[tableRowCountApplicationData]
ORDER BY TableName;
'''
,conn) # Load the list of distinct tables to graph
# -
df = pd.DataFrame(sql_query02)
#df = df.astype({'TodaysDate':np.int64,'RecordCount':np.int64})
df['TodaysDate'] = pd.to_datetime(df['TodaysDate'].astype(str), format='%Y%m%d')
df = df.sort_values(by=['TableName','SchemaName','DatabaseName','TodaysDate'], ascending=[True,True,True,True])
df = df.reset_index(drop=True)
dfappdata0001 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'ABC') & (df['TableName'] == 'ClasStuABCYrSum')]
dfappdata0002 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'dbo') & (df['TableName'] == 'AcademicContract')]
dfappdata0003 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'dbo') & (df['TableName'] == 'AcademicContractIntervention')]
dfappdata0004 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'dbo') & (df['TableName'] == 'HSSElectives')]
dfappdata0005 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'dbo') & (df['TableName'] == 'MHSElectives')]
dfappdata0006 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'dbo') & (df['TableName'] == 'ProdStudentTrendingGrades')]
dfappdata0007 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'dbo') & (df['TableName'] == 'ROSTER_GSC')]
dfappdata0008 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'dbo') & (df['TableName'] == 'ROSTER_GSS')]
dfappdata0009 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'dbo') & (df['TableName'] == 'Survey_Report_Codes')]
dfappdata0010 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'dbo') & (df['TableName'] == 'Survey_Report_Summary')]
dfappdata0011 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'dbo') & (df['TableName'] == 'Survey_Respondent_Report')]
dfappdata0012 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'dbo') & (df['TableName'] == 'sysdiagrams')]
dfappdata0013 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'ExternalStudents') & (df['TableName'] == 'Log4Net')]
dfappdata0014 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'GCPSMaxScan') & (df['TableName'] == 'Log4Net')]
dfappdata0015 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'GTID') & (df['TableName'] == 'AuditLog')]
dfappdata0016 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'GTID') & (df['TableName'] == 'G_GBATCH')]
dfappdata0017 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'GTID') & (df['TableName'] == 'G_GTID_HISTORY')]
dfappdata0018 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'GTID') & (df['TableName'] == 'G_GTID_IMPORT')]
dfappdata0019 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'GTID') & (df['TableName'] == 'G_GTID_IMPORT_ERROR_CODES')]
dfappdata0020 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'GTID') & (df['TableName'] == 'G_GTID_IMPORT_ERRORS')]
dfappdata0021 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'GTID') & (df['TableName'] == 'G_GTID_STU_BATCH')]
dfappdata0022 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'GTID') & (df['TableName'] == 'Log4Net')]
dfappdata0023 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'LogView') & (df['TableName'] == 'AccessRights')]
dfappdata0024 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'LogView') & (df['TableName'] == 'ApplicationLog')]
dfappdata0025 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'LogView') & (df['TableName'] == 'Log4Net')]
dfappdata0026 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'MockElection') & (df['TableName'] == 'BallotMeasure')]
dfappdata0027 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'MockElection') & (df['TableName'] == 'Candidate')]
dfappdata0028 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'MockElection') & (df['TableName'] == 'CandidateResults')]
dfappdata0029 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'MockElection') & (df['TableName'] == 'Cluster')]
dfappdata0030 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'MockElection') & (df['TableName'] == 'Defaults')]
dfappdata0031 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'MockElection') & (df['TableName'] == 'ElectionType')]
dfappdata0032 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'MockElection') & (df['TableName'] == 'Log4Net')]
dfappdata0033 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'MockElection') & (df['TableName'] == 'MeasureCluster')]
dfappdata0034 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'MockElection') & (df['TableName'] == 'MeasureResults')]
dfappdata0035 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'MockElection') & (df['TableName'] == 'Office')]
dfappdata0036 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'MockElection') & (df['TableName'] == 'OfficeCluster')]
dfappdata0037 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'MockElection') & (df['TableName'] == 'PartyType')]
dfappdata0038 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'MockElection') & (df['TableName'] == 'School')]
dfappdata0039 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'RunTime') & (df['TableName'] == 'Application')]
dfappdata0040 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'RunTime') & (df['TableName'] == 'ApplicationAvailibility')]
dfappdata0041 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'RunTime') & (df['TableName'] == 'ApplicationSchemaVersion')]
dfappdata0042 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'SP') & (df['TableName'] == 'AttendanceByMonth')]
dfappdata0043 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'SP') & (df['TableName'] == 'ClusterDemogrSummary')]
dfappdata0044 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'SP') & (df['TableName'] == 'ClusterProfileSummary')]
dfappdata0045 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'SP') & (df['TableName'] == 'EdLevelDemogrSummary')]
dfappdata0046 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'SP') & (df['TableName'] == 'EdLevelProfile')]
dfappdata0047 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'SP') & (df['TableName'] == 'EnvSettings')]
dfappdata0048 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'SP') & (df['TableName'] == 'GradeLevel')]
dfappdata0049 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'SP') & (df['TableName'] == 'HSGradeSummary')]
dfappdata0050 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'SP') & (df['TableName'] == 'MSGradeSummary')]
dfappdata0051 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'SP') & (df['TableName'] == 'SchoolDemogrSummary')]
dfappdata0052 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'SP') & (df['TableName'] == 'SchoolProfileSummary')]
dfappdata0053 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'SP') & (df['TableName'] == 'SchoolTchStuCounts')]
dfappdata0054 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'SP') & (df['TableName'] == 'StudentCurrentYearSummary')]
dfappdata0055 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'SP') & (df['TableName'] == 'TeacherStudentSummary')]
dfappdata0056 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'SSIS_RunTime') & (df['TableName'] == 'RUN')]
dfappdata0057 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'SSIS_RunTime') & (df['TableName'] == 'RUN_ITEM')]
dfappdata0058 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'SSIS_RunTime') & (df['TableName'] == 'RUN_PACKAGE_OVERRIDES')]
dfappdata0059 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StemRegistration') & (df['TableName'] == 'Cluster')]
dfappdata0060 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StemRegistration') & (df['TableName'] == 'Course')]
dfappdata0061 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StemRegistration') & (df['TableName'] == 'CourseCategory')]
dfappdata0062 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StemRegistration') & (df['TableName'] == 'Language')]
dfappdata0063 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StemRegistration') & (df['TableName'] == 'Registration')]
dfappdata0064 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StemRegistration') & (df['TableName'] == 'RegistrationStatus')]
dfappdata0065 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StemRegistration') & (df['TableName'] == 'School')]
dfappdata0066 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StemRegistration') & (df['TableName'] == 'UserComments')]
dfappdata0067 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentCheckInOut') & (df['TableName'] == 'CheckInCheckOutDetail')]
dfappdata0068 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentCheckInOut') & (df['TableName'] == 'LocationType')]
dfappdata0069 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentCheckInOut') & (df['TableName'] == 'LocationTypeOption')]
dfappdata0070 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentCheckInOut') & (df['TableName'] == 'Log4Net')]
dfappdata0071 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentCheckInOut') & (df['TableName'] == 'MediaCheckInType')]
dfappdata0072 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentCheckInOut') & (df['TableName'] == 'School')]
dfappdata0073 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentCheckInOut') & (df['TableName'] == 'SchoolConfiguration')]
dfappdata0074 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentCheckInOut') & (df['TableName'] == 'SchoolLocation')]
dfappdata0075 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentCheckInOut') & (df['TableName'] == 'SchoolSetting')]
dfappdata0076 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentCheckInOut') & (df['TableName'] == 'TypeCode')]
dfappdata0077 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'Addendum_Log')]
dfappdata0078 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'BasePicture')]
dfappdata0079 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'DIM_AbsenceType')]
dfappdata0080 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'DIM_Calendar')]
dfappdata0081 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'DIM_ClassSchedule')]
dfappdata0082 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'DIM_CodeTable')]
dfappdata0083 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'DIM_Course')]
dfappdata0084 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'DIM_CourseCategory')]
dfappdata0085 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'DIM_CourseOffering')]
dfappdata0086 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'DIM_CourseSearch')]
dfappdata0087 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'DIM_DisciplineInfractionRules')]
dfappdata0088 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'DIM_Educator')]
dfappdata0089 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'DIM_EducatorSearch')]
dfappdata0090 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'DIM_Employee')]
dfappdata0091 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'DIM_JobCode')]
dfappdata0092 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'DIM_Location')]
dfappdata0093 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'DIM_School')]
dfappdata0094 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'DIM_ServiceProgramDefinition')]
dfappdata0095 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'DIM_Student')]
dfappdata0096 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'DIM_StudentDistrictDetail')]
dfappdata0097 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'DIM_StudentEmail')]
dfappdata0098 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'DIM_StudentNext')]
dfappdata0099 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'DIM_StudentPicture')]
dfappdata0100 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'DIM_StudentSearch')]
dfappdata0101 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'DIM_Term')]
dfappdata0102 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_A504')]
dfappdata0103 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_AcademicContract')]
dfappdata0104 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_AlternateTransportation')]
dfappdata0105 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_Assessment')]
dfappdata0106 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_AttendanceByPeriod')]
dfappdata0107 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_AttendanceByType')]
dfappdata0108 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_AttendanceDaily')]
dfappdata0109 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_AttendanceDaily_History')]
dfappdata0110 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_BookFines')]
dfappdata0111 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_Books')]
dfappdata0112 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_BusRoute')]
dfappdata0113 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_Cafeteria')]
dfappdata0114 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_CCRPI')]
dfappdata0115 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_CheckInCheckOutDetail')]
dfappdata0116 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_ClinicVisit')]
dfappdata0117 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_CourseHistoryDetails')]
dfappdata0118 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_CurrentLetterScore')]
dfappdata0119 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_DisciplineIncident')]
dfappdata0120 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_DisciplineTotal')]
dfappdata0121 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_DistrictCourseMetrics')]
dfappdata0122 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_EducatorClass')]
dfappdata0123 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_EducatorClassMetrics')]
dfappdata0124 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_EmergencyContact')]
dfappdata0125 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_Enrollment')]
dfappdata0126 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_EnrollmentHistory')]
dfappdata0127 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_Fees')]
dfappdata0128 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_GraduationAssessment')]
dfappdata0129 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_Guardian')]
dfappdata0130 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_IEPExport')]
dfappdata0131 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_MedicalAlert')]
dfappdata0132 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_PPortal_LastLogin')]
dfappdata0133 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_SEI')]
dfappdata0134 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_ServiceProgramParticipation')]
dfappdata0135 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_SPG_Export')]
dfappdata0136 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_SPortal_LastLogin')]
dfappdata0137 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_StudentAssignmentScore')]
dfappdata0138 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_StudentClass')]
dfappdata0139 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_StudentClassMetrics')]
dfappdata0140 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_StudentClassScore')]
dfappdata0141 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_StudentEngagementInstrument')]
dfappdata0142 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_StudentEnrollmentPeriods')]
dfappdata0143 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_StudentMetaData')]
dfappdata0144 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_StudentRegisterDaily')]
dfappdata0145 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_StudentSchoolYear')]
dfappdata0146 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_StudentSibling')]
dfappdata0147 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_StudentTrendingGrades')]
dfappdata0148 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_StudentTrendingGrades_20180101')]
dfappdata0149 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_StudentTrendingGrades_20180328')]
dfappdata0150 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_TeacherViewTrendingGrades')]
dfappdata0151 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'FACT_YearlyAttendance')]
dfappdata0152 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'ImageSource')]
dfappdata0153 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'Log4Net')]
dfappdata0154 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'MAP_EducatorStudentCourse')]
dfappdata0155 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'SecurityToken')]
dfappdata0156 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfile') & (df['TableName'] == 'State')]
dfappdata0157 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfileStaging') & (df['TableName'] == 'D2L_AssignmentGrades')]
dfappdata0158 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfileStaging') & (df['TableName'] == 'D2L_FinalGrades')]
dfappdata0159 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentProfileStaging') & (df['TableName'] == 'FACT_TeacherViewTrendingGrades')]
dfappdata0160 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentRegistration') & (df['TableName'] == 'AP_SP')]
dfappdata0161 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentRegistration') & (df['TableName'] == 'Configuration')]
dfappdata0162 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentRegistration') & (df['TableName'] == 'Counselors')]
dfappdata0163 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentRegistration') & (df['TableName'] == 'CTI')]
dfappdata0164 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentRegistration') & (df['TableName'] == 'Defaults')]
dfappdata0165 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentRegistration') & (df['TableName'] == 'DIM_CoursePathways')]
dfappdata0166 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentRegistration') & (df['TableName'] == 'DIM_CoursePathways_12202018')]
dfappdata0167 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentRegistration') & (df['TableName'] == 'DIM_PathwaySpecs')]
dfappdata0168 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentRegistration') & (df['TableName'] == 'DisplayTableMap')]
dfappdata0169 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentRegistration') & (df['TableName'] == 'ESOL_SP')]
dfappdata0170 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentRegistration') & (df['TableName'] == 'FACT_StudentAcademies')]
dfappdata0171 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentRegistration') & (df['TableName'] == 'FACT_StudentCoursesTaken')]
dfappdata0172 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentRegistration') & (df['TableName'] == 'FACT_StudentCreditsEarned')]
dfappdata0173 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentRegistration') & (df['TableName'] == 'GTRegistrationSP')]
dfappdata0174 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentRegistration') & (df['TableName'] == 'HomeSchoolCounselorsSP')]
dfappdata0175 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentRegistration') & (df['TableName'] == 'Log4Net')]
dfappdata0176 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentRegistration') & (df['TableName'] == 'MasterListSP')]
dfappdata0177 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentRegistration') & (df['TableName'] == 'OpenHouse')]
dfappdata0178 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentRegistration') & (df['TableName'] == 'ProgramChoices')]
dfappdata0179 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentRegistration') & (df['TableName'] == 'RegistrationSP')]
dfappdata0180 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentRegistration') & (df['TableName'] == 'RejectReason')]
dfappdata0181 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentRegistration') & (df['TableName'] == 'Role')]
dfappdata0182 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentRegistration') & (df['TableName'] == 'RolePermission')]
dfappdata0183 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentRegistration') & (df['TableName'] == 'School')]
dfappdata0184 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentRegistration') & (df['TableName'] == 'Submission')]
dfappdata0185 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentRegistration') & (df['TableName'] == 'Submission_Original')]
dfappdata0186 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentRegistration') & (df['TableName'] == 'SubmissionHistory')]
dfappdata0187 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentRegistration') & (df['TableName'] == 'SubmissionState')]
dfappdata0188 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentRegistration') & (df['TableName'] == 'User')]
dfappdata0189 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentRegistration') & (df['TableName'] == 'UserComments')]
dfappdata0190 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentRegistration') & (df['TableName'] == 'UserPermission')]
dfappdata0191 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentRegistration') & (df['TableName'] == 'UserRolePermission')]
dfappdata0192 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'StudentRegistration') & (df['TableName'] == 'UserRoles')]
dfappdata0193 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'TCAccommodation') & (df['TableName'] == 'AccommodationToColMap')]
dfappdata0194 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'TCAccommodation') & (df['TableName'] == 'CombinedReportData')]
dfappdata0195 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'TCAccommodation') & (df['TableName'] == 'IEPAccommodations')]
dfappdata0196 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'TCAccommodation') & (df['TableName'] == 'StudentTeacherSchedule')]
dfappdata0197 = df[(df['DatabaseName'] == 'ApplicationData') & (df['SchemaName'] == 'TCAccommodation') & (df['TableName'] == 'TPP_Notes')]
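As the errata note at the top of the script suggests, the long run of hand-written filters above could be generated in one pass with `groupby`, keyed by the (database, schema, table) triple. A sketch on a toy frame (the column names mirror the script's frame; the rows here are made up):

```python
import pandas as pd

# Toy stand-in for the real query result; columns mirror the script's frame
df = pd.DataFrame({
    'DatabaseName': ['ApplicationData'] * 3,
    'SchemaName':   ['ABC', 'dbo', 'dbo'],
    'TableName':    ['ClasStuABCYrSum', 'AcademicContract', 'HSSElectives'],
    'RecordCount':  [10, 20, 30],
})

# One sub-frame per (database, schema, table) triple, keyed by that tuple
frames = {key: grp for key, grp in
          df.groupby(['DatabaseName', 'SchemaName', 'TableName'])}

key = ('ApplicationData', 'dbo', 'AcademicContract')
print(len(frames))        # one entry per distinct triple
print(frames[key]['RecordCount'].iloc[0])
```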
d = dfappdata0001[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
d = d['TableName'].values[0]
print (d)
d = dfappdata0001[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
#print (d)
#FileName = ''.join(['iAppData,(f),(g)'])
print ('AppData_{}_{}'.format(f,g))
FileName = ('AppData_{}_{}'.format(f,g))
print (FileName)
print(pathT)
# +
d = dfappdata0001[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0001, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(os.path.join(pathT, "{}.html".format(FileName)))
plt.close()
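Each chart cell repeats the same metadata extraction (`d`/`e`/`f`/`g`) before styling the figure. That part at least can be factored into one function; a sketch on a toy frame (the `px.scatter` and styling calls would be wrapped the same way, omitted here to keep the example dependency-free):

```python
import pandas as pd

def chart_meta(frame):
    """Return (title, html_filename) for one table's row-count history frame."""
    db_, schema, table = (
        frame[['DatabaseName', 'SchemaName', 'TableName']]
        .drop_duplicates()
        .iloc[0]
    )
    title = '{0},{1},{2}'.format(db_, schema, table)
    filename = 'AppData_{}_{}.html'.format(schema, table)
    return title, filename

toy = pd.DataFrame({'DatabaseName': ['ApplicationData'] * 2,
                    'SchemaName':   ['dbo'] * 2,
                    'TableName':    ['AcademicContract'] * 2,
                    'RecordCount':  [5, 6]})
print(chart_meta(toy))
# -> ('ApplicationData,dbo,AcademicContract', 'AppData_dbo_AcademicContract.html')
```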
# +
d = dfappdata0002[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0002, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName))
plt.close()
# +
d = dfappdata0003[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0003, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName))
plt.close()
# +
d = dfappdata0004[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0004, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName))
plt.close()
# +
d = dfappdata0005[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0005, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName))
plt.close()
# +
d = dfappdata0006[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0006, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName))
plt.close()
# +
d = dfappdata0007[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0007, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName))
plt.close()
# +
d = dfappdata0008[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0008, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName))
plt.close()
# +
d = dfappdata0009[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0009, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName))
plt.close()
# +
d = dfappdata0010[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0010, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName))
plt.close()
# +
d = dfappdata0011[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0011, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName))
plt.close()
# +
d = dfappdata0012[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0012, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName))
plt.close()
# +
d = dfappdata0013[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0013, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName))
plt.close()
# +
d = dfappdata0014[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0014, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName))
plt.close()
# +
d = dfappdata0015[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0015, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName))
plt.close()
# +
d = dfappdata0016[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0016, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName))
plt.close()
# +
d = dfappdata0017[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0017, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName))
plt.close()
# +
d = dfappdata0018[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0018, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName))
plt.close()
# +
d = dfappdata0019[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0019, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName))
plt.close()
# +
d = dfappdata0020[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0020, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName))
plt.close()
# +
d = dfappdata0021[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0021, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName))
plt.close()
# +
d = dfappdata0022[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0022, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName))
plt.close()
# +
d = dfappdata0023[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0023, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName))
plt.close()
# +
d = dfappdata0024[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0024, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName))
plt.close()
# +
d = dfappdata0025[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0025, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName))
plt.close()
# +
d = dfappdata0026[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0026, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName))
plt.close()
# +
d = dfappdata0027[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0027, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName))
plt.close()
# +
d = dfappdata0028[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0028, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName))
plt.close()
# +
d = dfappdata0029[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0029, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName))
plt.close()
# +
d = dfappdata0030[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0030, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName))
plt.close()
# +
d = dfappdata0031[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0031, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName))
plt.close()
# +
# The repeated per-table cells (dfappdata0032 through dfappdata0064) were
# identical apart from the source dataframe, so they are consolidated into a
# single helper and a loop; the printed output and the HTML files produced are
# unchanged.
def plot_table_activity(df):
    d = df[['DatabaseName', 'SchemaName', 'TableName']].drop_duplicates()
    e = d['DatabaseName'].values[0]  # database
    f = d['SchemaName'].values[0]    # schema
    g = d['TableName'].values[0]     # table
    print(d)
    file_name = 'AppData_{}_{}'.format(f, g)
    fig = px.scatter(df, x='TodaysDate', y='RecordCount', size='RecordCount',
                     color='RecordCount', title='{0},{1},{2}'.format(e, f, g))
    fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
    fig.update_xaxes(title_font=dict(size=18, color='crimson'), ticks="inside",
                     tickangle=45, tickfont=dict(color='crimson', size=14),
                     showline=True, linewidth=2, linecolor='black', mirror=True,
                     gridcolor='Pink')
    fig.update_yaxes(title_font=dict(size=18, color='crimson'), ticks="inside",
                     showline=True, linewidth=2, linecolor='black', mirror=True,
                     gridcolor='Red')
    # "\\" = one literal backslash as a Windows path separator; the original
    # "\{" relied on an invalid escape sequence that Python only tolerates
    # with a warning.
    fig.write_html(pathT + "\\{}.html".format(file_name))
    plt.close()


for df in (dfappdata0032, dfappdata0033, dfappdata0034, dfappdata0035,
           dfappdata0036, dfappdata0037, dfappdata0038, dfappdata0039,
           dfappdata0040, dfappdata0041, dfappdata0042, dfappdata0043,
           dfappdata0044, dfappdata0045, dfappdata0046, dfappdata0047,
           dfappdata0048, dfappdata0049, dfappdata0050, dfappdata0051,
           dfappdata0052, dfappdata0053, dfappdata0054, dfappdata0055,
           dfappdata0056, dfappdata0057, dfappdata0058, dfappdata0059,
           dfappdata0060, dfappdata0061, dfappdata0062, dfappdata0063,
           dfappdata0064):
    plot_table_activity(df)
# +
d = dfappdata0065[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0065, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_SRSchool_file.html')
plt.close()
# +
d = dfappdata0066[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0066, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_UserComments_file.html')
plt.close()
# +
d = dfappdata0067[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0067, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_CheckInCheckOutDetail_file.html')
plt.close()
# +
d = dfappdata0068[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0068, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_LocationType_file.html')
plt.close()
# +
d = dfappdata0069[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0069, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_LocationTypeOption_file.html')
plt.close()
# +
d = dfappdata0070[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0070, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_SCLog4Net_file.html')
plt.close()
# +
d = dfappdata0071[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0071, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_MediaCheckInType_file.html')
plt.close()
# +
d = dfappdata0072[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0072, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_SCSchool_file.html')
plt.close()
# +
d = dfappdata0073[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0073, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_SchoolConfiguration_file.html')
plt.close()
# +
d = dfappdata0074[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0074, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_SchoolLocation_file.html')
plt.close()
# +
d = dfappdata0075[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0075, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_SchoolSetting_file.html')
plt.close()
# +
d = dfappdata0076[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0076, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_TypeCode_file.html')
plt.close()
# +
d = dfappdata0077[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0077, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_Addendum_Log_file.html')
plt.close()
# +
d = dfappdata0078[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0078, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_BasePicture_file.html')
plt.close()
# +
d = dfappdata0079[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0079, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_BasePicture_file.html')
plt.close()
# +
d = dfappdata0080[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0080, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_DIM_Calendar_file.html')
plt.close()
# +
d = dfappdata0081[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0081, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_DIM_ClassSchedule_file.html')
plt.close()
# +
d = dfappdata0082[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0082, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_DIM_CodeTable_file.html')
plt.close()
# +
d = dfappdata0083[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0083, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_DIM_Course_file.html')
plt.close()
# +
d = dfappdata0084[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0084, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_DIM_CourseCategory_file.html')
plt.close()
# +
d = dfappdata0085[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0085, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_DIM_CourseOffering_file.html')
plt.close()
# +
d = dfappdata0086[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0086, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_DIM_CourseSearch_file.html')
plt.close()
# +
d = dfappdata0087[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0087, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_DIM_DisciplineInfractionRules_file.html')
plt.close()
# +
d = dfappdata0088[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0088, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_DIM_Educator_file.html')
plt.close()
# +
d = dfappdata0089[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0089, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_DIM_EducatorSearch_file.html')
plt.close()
# +
d = dfappdata0090[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0090, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_DIM_Employee_file.html')
plt.close()
# +
d = dfappdata0091[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0091, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_DIM_JobCode_file.html')
plt.close()
# +
d = dfappdata0092[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0092, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_DIM_Location_file.html')
plt.close()
# +
d = dfappdata0093[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0093, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT + "\\{}.html".format(FileName))  # interactive_appdata_DIM_School_file.html
plt.close()
# +
d = dfappdata0094[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0094, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT + "\\{}.html".format(FileName))  # interactive_appdata_DIM_ServiceProgramDefinition_file.html
plt.close()
# +
d = dfappdata0095[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0095, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT + "\\{}.html".format(FileName))  # interactive_appdata_DIM_Student_file.html
plt.close()
# +
d = dfappdata0096[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0096, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT + "\\{}.html".format(FileName))  # interactive_appdata_DIM_StudentDistrictDetail_file.html
plt.close()
# +
d = dfappdata0097[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0097, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT + "\\{}.html".format(FileName))  # interactive_appdata_DIM_StudentEmail_file.html
plt.close()
# +
d = dfappdata0098[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0098, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT + "\\{}.html".format(FileName))  # interactive_appdata_DIM_StudentNext_file.html
plt.close()
# +
d = dfappdata0099[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0099, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT + "\\{}.html".format(FileName))  # interactive_appdata_DIM_StudentPicture_file.html
plt.close()
# +
d = dfappdata0100[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0100, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT + "\\{}.html".format(FileName))  # interactive_appdata_DIM_StudentSearch_file.html
plt.close()
# +
d = dfappdata0101[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0101, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT + "\\{}.html".format(FileName))  # interactive_appdata_DIM_Term_file.html
plt.close()
# +
d = dfappdata0102[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0102, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT + "\\{}.html".format(FileName))  # interactive_appdata_FACT_A504_file.html
plt.close()
# +
d = dfappdata0103[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0103, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT + "\\{}.html".format(FileName))  # interactive_appdata_FACT_AcademicContract_file.html
plt.close()
# +
d = dfappdata0104[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0104, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT + "\\{}.html".format(FileName))  # interactive_appdata_FACT_AlternateTransportation_file.html
plt.close()
# +
d = dfappdata0105[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0105, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT + "\\{}.html".format(FileName))  # interactive_appdata_FACT_Assessment_file.html
plt.close()
# +
d = dfappdata0106[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0106, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT + "\\{}.html".format(FileName))  # interactive_appdata_FACT_AttendanceByPeriod_file.html
plt.close()
# +
d = dfappdata0107[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0107, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT + "\\{}.html".format(FileName))  # interactive_appdata_FACT_AttendanceByType_file.html
plt.close()
# +
d = dfappdata0108[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0108, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT + "\\{}.html".format(FileName))  # interactive_appdata_FACT_AttendanceDaily_file.html
plt.close()
# +
d = dfappdata0109[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0109, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT + "\\{}.html".format(FileName))  # interactive_appdata_FACT_AttendanceDaily_History_file.html
plt.close()
# +
d = dfappdata0110[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0110, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT + "\\{}.html".format(FileName))  # interactive_appdata_FACT_BookFines_file.html
plt.close()
# +
d = dfappdata0111[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0111, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT + "\\{}.html".format(FileName))  # interactive_appdata_FACT_Books_file.html
plt.close()
# +
d = dfappdata0112[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0112, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT + "\\{}.html".format(FileName))  # interactive_appdata_FACT_BusRoute_file.html
plt.close()
# +
d = dfappdata0113[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0113, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT + "\\{}.html".format(FileName))  # interactive_appdata_FACT_Cafeteria_file.html
plt.close()
# +
d = dfappdata0114[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0114, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT + "\\{}.html".format(FileName))  # interactive_appdata_FACT_CCRPI_file.html
plt.close()
# +
d = dfappdata0115[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0115, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT + "\\{}.html".format(FileName))  # interactive_appdata_FACT_CheckInCheckOutDetail_file.html
plt.close()
# +
d = dfappdata0116[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0116, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT + "\\{}.html".format(FileName))  # interactive_appdata_FACT_ClinicVisit_file.html
plt.close()
# +
d = dfappdata0117[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0117, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT + "\\{}.html".format(FileName))  # interactive_appdata_FACT_CourseHistoryDetails_file.html
plt.close()
# +
d = dfappdata0118[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0118, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT + "\\{}.html".format(FileName))  # interactive_appdata_FACT_CurrentLetterScore_file.html
plt.close()
# +
d = dfappdata0119[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0119, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT + "\\{}.html".format(FileName))  # interactive_appdata_FACT_DisciplineIncident_file.html
plt.close()
# +
d = dfappdata0120[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0120, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT + "\\{}.html".format(FileName))  # interactive_appdata_FACT_DisciplineTotal_file.html
plt.close()
# +
d = dfappdata0121[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0121, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT + "\\{}.html".format(FileName))  # interactive_appdata_FACT_DistrictCourseMetrics_file.html
plt.close()
# +
d = dfappdata0122[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0122, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_FACT_EducatorClass_file.html')
plt.close()
# +
d = dfappdata0123[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0123, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_FACT_EducatorClassMetrics_file.html')
plt.close()
# +
d = dfappdata0124[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0124, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_FACT_EmergencyContact_file.html')
plt.close()
# +
d = dfappdata0125[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0125, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_FACT_Enrollment_file.html')
plt.close()
# +
d = dfappdata0126[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0126, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_FACT_EnrollmentHistory_file.html')
plt.close()
# +
d = dfappdata0127[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0127, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_FACT_Fees_file.html')
plt.close()
# +
d = dfappdata0128[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0128, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_FACT_GraduationAssessment_file.html')
plt.close()
# +
d = dfappdata0129[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0129, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_FACT_Guardian_file.html')
plt.close()
# +
d = dfappdata0130[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0130, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_FACT_IEPExport_file.html')
plt.close()
# +
d = dfappdata0131[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0131, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_FACT_MedicalAlert_file.html')
plt.close()
# +
d = dfappdata0132[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0132, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_FACT_PPortal_LastLogin_file.html')
plt.close()
# +
d = dfappdata0133[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0133, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_FACT_SEI_file.html')
plt.close()
# +
d = dfappdata0134[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0134, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_FACT_ServiceProgramParticipation_file.html')
plt.close()
# +
d = dfappdata0135[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0135, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_FACT_SPG_Export_file.html')
plt.close()
# +
d = dfappdata0136[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0136, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_FACT_SPortal_FACT_SPortal_LastLogin_file.html')
plt.close()
# +
d = dfappdata0137[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0137, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_FACT_StudentAssignmentScore_file.html')
plt.close()
# +
d = dfappdata0138[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0138, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_FACT_StudentClass_file.html')
plt.close()
# +
d = dfappdata0139[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0139, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_FACT_StudentClassMetrics_file.html')
plt.close()
# +
d = dfappdata0140[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0140, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_FACT_StudentClassScore_file.html')
plt.close()
# +
d = dfappdata0141[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0141, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_FACT_StudentEngagementInstrument_file.html')
plt.close()
# +
d = dfappdata0142[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0142, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_FACT_StudentEnrollmentPeriods_file.html')
plt.close()
# +
d = dfappdata0143[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0143, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_FACT_StudentMetaData_file.html')
plt.close()
# +
d = dfappdata0144[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0144, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_FACT_StudentRegisterDaily_file.html')
plt.close()
# +
d = dfappdata0145[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0145, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_FACT_StudentSchoolYear_file.html')
plt.close()
# +
d = dfappdata0146[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0146, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_FACT_StudentSibling_file.html')
plt.close()
# +
d = dfappdata0147[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0147, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_FACT_StudentTrendingGrades_file.html')
plt.close()
# +
d = dfappdata0148[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0148, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_FACT_StudentTrendingGrades_20180101_file.html')
plt.close()
# +
d = dfappdata0149[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0149, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_FACT_StudentTrendingGrades_20180328_file.html')
plt.close()
# +
import os  # portable output-path construction


def write_table_activity_plot(df):
    """Save an interactive table-activity scatter plot for one table dataframe."""
    # Each dataframe covers a single table, so the de-duplicated
    # (DatabaseName, SchemaName, TableName) triple has exactly one row.
    meta = df[['DatabaseName', 'SchemaName', 'TableName']].drop_duplicates()
    db_name, schema_name, table_name = meta.iloc[0]
    print(meta)
    file_name = 'AppData_{}_{}'.format(schema_name, table_name)
    fig = px.scatter(df, x='TodaysDate', y='RecordCount',
                     size='RecordCount', color='RecordCount',
                     title='{0},{1},{2}'.format(db_name, schema_name, table_name))
    # Alternative fixed title:
    # fig.update_layout(title='Risk Assessment: Table activity recorded')
    fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
    fig.update_xaxes(title_font=dict(size=18, color='crimson'), ticks='inside',
                     tickangle=45, tickfont=dict(color='crimson', size=14),
                     showline=True, linewidth=2, linecolor='black',
                     mirror=True, gridcolor='Pink')
    fig.update_yaxes(title_font=dict(size=18, color='crimson'), ticks='inside',
                     showline=True, linewidth=2, linecolor='black',
                     mirror=True, gridcolor='Red')
    # fig.show()  # uncomment to display the figure inline
    fig.write_html(os.path.join(pathT, '{}.html'.format(file_name)))


# Render and save one plot per table dataframe.
for table_df in (dfappdata0150, dfappdata0151, dfappdata0152, dfappdata0153,
                 dfappdata0154, dfappdata0155, dfappdata0156, dfappdata0157,
                 dfappdata0158, dfappdata0159, dfappdata0160, dfappdata0161,
                 dfappdata0162, dfappdata0163, dfappdata0164, dfappdata0165,
                 dfappdata0166, dfappdata0167, dfappdata0168, dfappdata0169,
                 dfappdata0170, dfappdata0171, dfappdata0172, dfappdata0173,
                 dfappdata0174, dfappdata0175, dfappdata0176, dfappdata0177,
                 dfappdata0178, dfappdata0179, dfappdata0180, dfappdata0181,
                 dfappdata0182):
    write_table_activity_plot(table_df)
# +
d = dfappdata0183[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0183, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_School_file.html')
plt.close()
# +
d = dfappdata0184[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0184, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_Submission_file.html')
plt.close()
# +
d = dfappdata0185[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0185, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_Submission_Original_file.html')
plt.close()
# +
d = dfappdata0186[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0186, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_SubmissionHistory_file.html')
plt.close()
# +
d = dfappdata0187[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0187, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_SubmissionState_file.html')
plt.close()
# +
d = dfappdata0188[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0188, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_User_file.html')
plt.close()
# +
d = dfappdata0189[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0189, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_UserComments_file.html')
plt.close()
# +
d = dfappdata0190[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0190, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_UserPermission_file.html')
plt.close()
# +
d = dfappdata0191[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0191, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_UserRolePermission_file.html')
plt.close()
# +
d = dfappdata0192[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0192, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_UserRoles_file.html')
plt.close()
# +
d = dfappdata0193[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0193, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_AccommodationToColMap_file.html')
plt.close()
# +
d = dfappdata0194[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0194, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_CombinedReportData_file.html')
plt.close()
# +
d = dfappdata0195[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0195, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_IEPAccommodations_file.html')
plt.close()
# +
d = dfappdata0196[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0196, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_StudentTeacherSchedule_file.html')
plt.close()
# +
d = dfappdata0197[['DatabaseName', 'SchemaName','TableName']]
d = d.drop_duplicates()
e = d['DatabaseName'].values[0]
f = d['SchemaName'].values[0]
g = d['TableName'].values[0]
print (d)
FileName = ('AppData_{}_{}'.format(f,g))
fig = px.scatter(dfappdata0197, x = 'TodaysDate', y='RecordCount', size = 'RecordCount', color = 'RecordCount', title = '{0},{1},{2}'.format(e,f,g))
#fig.update_layout(title='Risk Assessment: Table activity recorded',
fig.update_layout(yaxis_zeroline=False, xaxis_zeroline=False)
fig.update_xaxes(title_font=dict(size=18, color='crimson'))
fig.update_yaxes(title_font=dict(size=18, color='crimson'))
fig.update_xaxes(ticks="inside")
fig.update_yaxes(ticks="inside")
fig.update_xaxes(tickangle=45, tickfont=dict(color='crimson', size=14))
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Pink')
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True, gridcolor='Red')
#fig.show()
fig.write_html(pathT+"\{}.html".format(FileName)) #interactive_appdata_TPP_Notes_file.html')
plt.close()
# -
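# The chart cells above repeat the same logic with only the `dfappdataNNNN` frame changing (which is also how the `dfappdata0050`/`dfappdata0193` mix-up crept in). A hedged refactoring sketch, not the script's actual code: the name-building step is shown as plain functions, and the `px.scatter`, styling, and `write_html` calls would wrap the same way so each table needs a single call.

```python
import os

def chart_names(database, schema, table):
    """Rebuild the chart title and output filename each repeated cell derives from e, f, g."""
    title = '{0},{1},{2}'.format(database, schema, table)
    filename = 'AppData_{}_{}'.format(schema, table)
    return title, filename

def output_path(path_t, filename):
    """Join the output directory and filename; os.path.join avoids hard-coding a backslash separator."""
    return os.path.join(path_t, '{}.html'.format(filename))

# Example: chart_names('AppData', 'dbo', 'User') -> ('AppData,dbo,User', 'AppData_dbo_User')
```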
now01 = datetime.datetime.now()
print('Interactive Chart Creation - Process Complete: ', now01.strftime("%Y-%m-%d %H:%M:%S"))
| SQL Conn 01 - AppData display inline graph-w-PX-color-export2html.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Overnight returns
# [Overnight Returns and Firm-Specific Investor Sentiment](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2554010)
#
# > **Abstract**: We explore the possibility that overnight returns can serve as a measure of firm-specific investor sentiment by analyzing whether they exhibit characteristics expected of a sentiment measure. First, we document short-term persistence in overnight returns, consistent with existing evidence of short-term persistence in share demand of sentiment-influenced retail investors. Second, we find that short-term persistence is stronger for harder-to-value firms, consistent with evidence that sentiment plays a larger role when there is less objective data available for valuation. Third, we show that stocks with high (low) overnight returns underperform (outperform) over the longer-term, consistent with evidence of temporary sentiment-driven mispricing.
#
# > **p 2, I**: The recent work of Berkman, Koch, Tuttle, and Zhang (2012) suggests that a stock’s
# overnight (close-to-open) return can serve as a measure of firm-level sentiment.
#
# > **p 3, I**: Specifically, Berkman et al. (2012) find that attention-generating events (high absolute returns or
# strong net buying by retail investors) on one day lead to higher demand by individual investors,
# concentrated near the open of the next trading day...This creates temporary price pressure at the
# open, resulting in elevated overnight returns that are reversed during the trading day.
#
# > **p 3, I**: We conduct three sets of analyses. **In the first
# we test for short-run persistence in overnight returns.** The basis for expecting this from a
# measure of sentiment is the evidence in Barber et al. (2009) that the order imbalances of retail
# investors, who are the investors most likely to exhibit sentiment, persist for periods extending
# over several weeks...In the third analysis we
# examine whether stocks with high overnight returns underperform those with low overnight
# returns over the long term.
# ## Install packages
import sys
# !{sys.executable} -m pip install -r requirements.txt
import cvxpy as cvx
import numpy as np
import pandas as pd
import time
import os
import quiz_helper
import matplotlib.pyplot as plt
# %matplotlib inline
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = (14, 8)
# ### data bundle
import os
import quiz_helper
from zipline.data import bundles
os.environ['ZIPLINE_ROOT'] = os.path.join(os.getcwd(), '..', '..','data','module_4_quizzes_eod')
ingest_func = bundles.csvdir.csvdir_equities(['daily'], quiz_helper.EOD_BUNDLE_NAME)
bundles.register(quiz_helper.EOD_BUNDLE_NAME, ingest_func)
print('Data Registered')
# ### Build pipeline engine
# +
from zipline.pipeline import Pipeline
from zipline.pipeline.factors import AverageDollarVolume
from zipline.utils.calendars import get_calendar
universe = AverageDollarVolume(window_length=120).top(500)
trading_calendar = get_calendar('NYSE')
bundle_data = bundles.load(quiz_helper.EOD_BUNDLE_NAME)
engine = quiz_helper.build_pipeline_engine(bundle_data, trading_calendar)
# -
# ### View Data
# With the pipeline engine built, let's get the stocks at the end of the period in the universe we're using. We'll use these tickers to generate the returns data for our risk model.
# +
universe_end_date = pd.Timestamp('2016-01-05', tz='UTC')
universe_tickers = engine\
.run_pipeline(
Pipeline(screen=universe),
universe_end_date,
universe_end_date)\
.index.get_level_values(1)\
.values.tolist()
universe_tickers
# -
# # Get Returns data
# +
from zipline.data.data_portal import DataPortal
data_portal = DataPortal(
bundle_data.asset_finder,
trading_calendar=trading_calendar,
first_trading_day=bundle_data.equity_daily_bar_reader.first_trading_day,
equity_minute_reader=None,
equity_daily_reader=bundle_data.equity_daily_bar_reader,
adjustment_reader=bundle_data.adjustment_reader)
# -
# ## Get pricing data helper function
from quiz_helper import get_pricing
# ## get pricing data into a dataframe
# +
returns_df = \
get_pricing(
data_portal,
trading_calendar,
universe_tickers,
universe_end_date - pd.DateOffset(years=5),
universe_end_date)\
.pct_change()[1:].fillna(0) #convert prices into returns
returns_df
# -
# ## Sector data helper function
# We'll create an object for you, which defines a sector for each stock. The sectors are represented by integers. We inherit from the Classifier class. [Documentation for Classifier](https://www.quantopian.com/posts/pipeline-classifiers-are-here), and the [source code for Classifier](https://github.com/quantopian/zipline/blob/master/zipline/pipeline/classifiers/classifier.py)
from zipline.pipeline.classifiers import Classifier
from zipline.utils.numpy_utils import int64_dtype
class Sector(Classifier):
dtype = int64_dtype
window_length = 0
inputs = ()
missing_value = -1
def __init__(self):
self.data = np.load('../../data/project_4_sector/data.npy')
def _compute(self, arrays, dates, assets, mask):
return np.where(
mask,
self.data[assets],
self.missing_value,
)
sector = Sector()
# ## We'll use 2 years of data to calculate the factor
# **Note:** Going back 2 years falls on a day when the market is closed. Pipeline package doesn't handle start or end dates that don't fall on days when the market is open. To fix this, we went back 2 extra days to fall on the next day when the market is open.
factor_start_date = universe_end_date - pd.DateOffset(years=2, days=2)
factor_start_date
# ## Walk through "Returns" class
#
# We'll walk through how the `Returns` class works, because we'll create a new class that inherits from `Returns` in order to calculate a customized return.
#
# ### Returns inherits from CustomFactor
# The zipline package has a class [zipline.pipeline.factors.Returns](https://www.zipline.io/appendix.html?highlight=returns#zipline.pipeline.factors.Returns) which inherits from class [zipline.pipeline.CustomFactor](https://www.zipline.io/appendix.html?highlight=custom%20factor#zipline.pipeline.CustomFactor). The [source code for Returns is here](https://www.zipline.io/_modules/zipline/pipeline/factors/basic.html#Returns), and the [source code for CustomFactor is here.](https://www.zipline.io/_modules/zipline/pipeline/factors/factor.html#CustomFactor)
#
# **Please open the links to the documentation and source code and follow along with our notes about the code**
# ### Inputs variable
# The CustomFactor class takes the `inputs` as a parameter of the constructor for the class, otherwise it looks for a class-level variable named `inputs`. `inputs` takes a list of BoundColumn instances. These help us choose what kind of price-volume data to use as input. The `Returns` class sets this to
# ```
# inputs = [USEquityPricing.close]
# ```
# ### USEquityPricing class
# The class [USEquityPricing](https://www.zipline.io/appendix.html?highlight=usequitypricing#zipline.pipeline.data.USEquityPricing) has a couple BoundColumn instances that we can choose from.
# ```
# close = USEquityPricing.close
# high = USEquityPricing.high
# low = USEquityPricing.low
# open = USEquityPricing.open
# volume = USEquityPricing.volume
# ```
# ## Quiz 1
# If we wish to calculate close to open returns, which columns from USEquityPricing do you think we'll want to put into the list and set as `inputs`?
# ## Quiz 1 Answer here
# ### window_length variable
# The CustomFactor class takes `window_length` (an integer) as a constructor parameter, otherwise it looks for a class-level variable named `window_length`. If we chose a `window_length = 2` then this means that we'll be passing two days' worth of data (two rows) into the `compute` function.
# ## Quiz 2
# What window length would you choose if you were calculating daily close to open returns? Assume we have daily data.
# ## Answer 2 here
#
# ### Compute function
# The function definition of the `Returns` class includes the `compute` function
# ```
# def compute(self, today, assets, out, close):
# out[:] = (close[-1] - close[0]) / close[0]
#
# ```
# * `today`: this is handled by parent classes; it has the datetime for the "today" row for the given subset of data. We won't use it for this function implementation.
# * `assets`: this is handled by parent classes: it has the column header names for the "out" and "close". We won't use it for this function implementation.
# * `out`: this points to a numpy array that will store the result of our compute. It stores our "return" value of the `compute` function instead of explicitly returning a variable.
# * `*inputs`: a tuple of numpy arrays that contain input data that we'll use to compute a signal. In the `Returns` definition of `compute`, the input is a single array `close`, but we can list more if we need additional columns of data to compute a return.
#
#
# If we set the `window_length=2`, then the `compute` function gets two rows worth of data from `close`. The index 1 value is the most recent value, and the index 0 value is the earliest in time. Recall that in Python, the -1 index is the same as getting the highest indexed value, so with a numpy array of just length two, -1 gives us the value at index 1.
#
# So the line of code is calculating the one-day return using the close price, and storing that into the `out` variable.
#
# $ Return = \frac{close_1 - close_0}{close_0} $
# ## Quiz 3
# Given a numpy array for open prices called `open` and a numpy array for close prices called `close`, what code would you write to get the most recent open price? Assume that you have 2 days of data.
# ## Answer 3 here
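# A hedged sketch of the indexing involved (plain Python lists standing in for the 2-day numpy arrays; not necessarily the official solution):

```python
# Two days of data: index 0 is the earlier day, index -1 (same as index 1 here) is the most recent.
open_prices = [99.0, 101.0]   # stands in for the `open` array
most_recent_open = open_prices[-1]
print(most_recent_open)  # 101.0
```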
#
# ## Close To Open Returns (Overnight Returns)
#
# The close-to-open return is the change in price between when the market closed on one day and when it opened on the next. So it's
#
# $ CloseToOpen = \frac{open_1 - close_0}{close_0}$
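# The same formula on made-up numbers, to make the indexing concrete (a sketch with plain Python lists, not zipline arrays):

```python
closes = [100.0, 102.0]  # index 0 = yesterday's close
opens = [99.0, 101.0]    # index -1 = today's open
close_to_open = (opens[-1] - closes[0]) / closes[0]  # (101 - 100) / 100
print(close_to_open)  # 0.01
```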
#
# We'll now create a class `CloseToOpenReturns` that inherits from `Returns` and overrides the `compute` function.
# ## Quiz 4
# Create a customized class `CloseToOpenReturns` that inherits from the `Returns` class. Define the `compute` function to calculate overnight returns.
from zipline.pipeline.data import USEquityPricing
from zipline.pipeline.factors import Returns
class CloseToOpenReturns(Returns):
"""
"""
# TODO: Set window_length (we're calculating daily returns)
window_length = # ...
# TODO: set inputs
inputs = #[ ..., ...]
# The compute method is passed the current day, the assets list, a pre-allocated out vector, and the
# factor's items in the list `inputs`
def compute(self, today, assets, out, opens, closes):
#TODO: calculate close-to-open return and save into out[:]
out[:] = # ...
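# Not the quiz solution itself, but a hedged, zipline-free mimic of how `compute` would behave with a 2-day window: the pre-allocated `out` vector is filled in place, one value per stock (column).

```python
def mimic_close_to_open(out, opens, closes):
    """Fill `out` in place with (latest open - earliest close) / earliest close, per column."""
    out[:] = [(o - c) / c for o, c in zip(opens[-1], closes[0])]

out = [0.0, 0.0]
mimic_close_to_open(out,
                    opens=[[99.0, 50.5], [101.0, 51.0]],    # rows = days, columns = stocks
                    closes=[[100.0, 50.0], [102.0, 52.0]])
print(out)  # [0.01, 0.02]
```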
# ## Trailing overnight returns
#
# The cumulative overnight returns over a week may be predictive of future returns; hence it's a kind of momentum signal.
#
# $ TrailingOvernightReturns = \sum_{t=1}^{Days}CloseToOpen_t$
# Where $Days$ could be 5 if we are looking at a weekly window.
#
# So we want to take the `CloseToOpenReturns` as our input into another class, `TrailingOvernightReturns`, which also inherits from `Returns`.
# ### mask
# Note that we're going to create another class that inherits from `Returns`. Recall that `Returns` inherits from [CustomFactor](https://www.zipline.io/appendix.html?highlight=factor#zipline.pipeline.CustomFactor), which has a `mask` parameter for its constructor. The `mask` parameter takes in a `Filter` object, which determines which stock series get passed to the `compute` function. Note that when we used `AverageDollarVolume` and stored its output in the variable `universe`, this `universe` variable is of type `Filter`.
# ## Quiz 5
# If you wanted to create an object of type CloseToOpen, and also define the object so that it only computes returns on the set of stocks in universe that we selected earlier in this notebook, what code would you write?
# ## Answer 5 here
# ## numpy.nansum
# Numpy has a `nansum` function that treats NaN (not a number) as zero. Note that by default, if we give `numpy.nansum` a 2D numpy array, it will calculate a single sum across all rows and columns. For our purposes, we want to compute a sum over 5 days (5 rows), where each column has daily close-to-open returns for a single stock. It helps to think of a matrix (2D numpy array) as a nested list of lists. This makes it easier to decide whether to set `axis=0` or `axis=1`.
# ```
# tmp =
# [
# [stock1day1, stock2day1 ]
# [stock1day2, stock2day2 ]
# ...
# ]
# ```
# If we look at the outermost list, each element is a list that represents one day's worth of data. If we used `np.nansum(tmp,axis=0)`, this would sum across the days for each stock. If we think of this as a 2D matrix, setting `axis=0` is like calculating a sum for each column.
#
# If we set `axis=0`, this applies `nansum` to the outermost list (axis 0), so that we end up with:
# ```
# [
# sum_of_stock_1, sum_of_stock_2
# ]
# ```
# Alternatively, if we set `axis=1`, this applies `nansum` to the lists nested inside the outermost list. Each of these nested lists represents data for a single day, for all stocks, so that we get:
# ```
# [
# sum_of_day_1,
# sum_of_day_2,
# ]
# ```
# ## Example using numpy.nansum
# +
tmp = np.array([
[1, 2, 3],
[np.nan, np.nan, np.nan],
[1, 1, 1]
])
print(f"Sum across rows and columns: numpy.nansum(tmp) \n{np.nansum(tmp)}")
print(f"Sum for each column: numpy.nansum(tmp,axis=0) \n{np.nansum(tmp,axis=0)}")
print(f"Sum for each row: numpy.nansum(tmp,axis=1) \n{np.nansum(tmp,axis=1)}")
# -
# ## Quiz 6
# For our purposes, we want a sum for each stock series. Which axis do you think we should choose?
# ## Answer 6 here
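# A hedged illustration of the per-column (per-stock) sum with missing values treated as zero, using plain Python lists in place of numpy (`None` stands in for NaN):

```python
tmp = [
    [1, 2, 3],           # day 1, three stocks
    [None, None, None],  # a day of missing data
    [1, 1, 1],           # day 3
]
# Summing down each column (numpy's axis=0) yields one total per stock:
per_stock = [sum(v for v in col if v is not None) for col in zip(*tmp)]
print(per_stock)  # [2, 3, 4]
```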
#
# ## Quiz 7
# Create a class TrailingOvernightReturns that inherits from Returns and takes the cumulative weekly sum of overnight returns.
class TrailingOvernightReturns(Returns):
"""
Sum of trailing close-to-open returns; we expect sentiment persistence at short horizons, so we
    look at the 5-day (i.e., 1 week) window
"""
# TODO: choose a window_length to calculate a weekly return
window_length = # ...
# TODO: set inputs to a list containing the daily close to open returns
# Filter the close to open returns by our stock universe
inputs = #[...]
def compute(self, today, assets, out, close_to_open):
#TODO: calculate the sum of close_to_open
#choose the axis so that there is a sum for each stock (each column)
#treat NaN as zeros
out[:] = # ...
# ## Quiz 8
# Create a factor by instantiating the TrailingOvernightReturns class that you just defined. Demean by sector, rank, and convert to a zscore.
# +
# TODO: create an overnight_returns_factor variable
# create a pipeline called p
p = Pipeline(screen=universe)
p.add(overnight_returns_factor, 'Overnight_Sentiment')
# -
# ## Visualize pipeline
p.show_graph(format='png')
# ## run pipeline and view the factor data
df = engine.run_pipeline(p, factor_start_date, universe_end_date)
df.head()
# ## Visualize factor returns
#
# These are returns that a theoretical portfolio would have if its stock weights were determined by a single alpha factor's values.
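# The idea can be sketched with plain numpy. The factor values and returns below are made-up numbers, and scaling the weights so their absolute values sum to 1 is just one common convention, assumed here for illustration:

```python
import numpy as np

# hypothetical factor values (e.g., zscores) for three stocks on one day
factor = np.array([1.2, -0.5, -0.7])

# turn factor values into portfolio weights; here we scale so the
# absolute weights sum to 1 (an assumed convention for illustration)
weights = factor / np.abs(factor).sum()

# next-day returns of the same three stocks
returns = np.array([0.01, -0.02, 0.005])

# the theoretical factor-weighted portfolio return
portfolio_return = weights @ returns
print(round(portfolio_return, 6))  # 0.007708
```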
from quiz_helper import make_factor_plot
make_factor_plot(df, data_portal, trading_calendar, factor_start_date, universe_end_date);
# ## Solutions
# Check out the [solution notebook here.](./overnight_returns_solution.ipynb)
| Quiz/m4_multifactor_models/m4l3/overnight_returns.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python
# language: python
# name: python
# ---
# # Managing assignment files manually
# Distributing assignments to students and collecting them can be a logistical nightmare. If you rely on your institution's existing learning management system to distribute the release version of the assignment, the process of downloading the students' submissions from the learning management system and getting them back into ``nbgrader`` can be simplified by relying on some of nbgrader's built-in functionality.
# + active=""
# .. contents:: Table of Contents
# :depth: 2
# -
# ## Releasing and fetching assignments
# After an assignment has been created using `nbgrader assign`, the instructor must actually release that assignment to students. This section of the documentation assumes you are using your institution's existing learning management system for distributing the release version of the assignment. It is also assumed that the students will fetch the assignments from - and submit their assignments to - the learning management system.
# + active=""
# If this is not the case and you are using nbgrader in a shared server environment, you can do this with the ``nbgrader release`` command (see :doc:`managing_assignment_files`)
# -
# ## Collecting assignments
# + active=""
# .. seealso::
#
# :doc:`/command_line_tools/nbgrader-zip-collect`
# Command line options for ``nbgrader zip_collect``
#
# :doc:`/plugins/zipcollect-plugin`
# Plugins for ``nbgrader zip_collect``
#
# :doc:`philosophy`
# More details on how the nbgrader hierarchy is structured.
#
# :doc:`/configuration/config_options`
# Details on ``nbgrader_config.py``
# -
# Once the students have submitted their assignments and you have downloaded these assignment files from your institution's learning management system, you can get these files back into ``nbgrader`` by using the ``nbgrader zip_collect`` sub-command.
# ### Directory Structure:
# + active=""
# ``nbgrader zip_collect`` makes a few assumptions about how the downloaded assignment files are organized. By default, ``nbgrader zip_collect`` assumes that your downloaded assignment files will be organized with the following directory structure:
# ::
#
# {downloaded}/{assignment_id}/{collect_step}/
#
# where:
#
# * The ``downloaded`` directory is the main directory where your downloaded assignment files are placed.
# This location can also be customized (see the :doc:`configuration options </configuration/config_options>`)
# so that you can run the nbgrader commands from anywhere on your system, but still have them
# operate on the same directory.
#
# * ``assignment_id`` corresponds to the unique name of an assignment.
#
# * The ``collect_step`` sub-directory corresponds to a step in the ``zip_collect`` workflow.
# -
# ### Workflow
# + active=""
# The workflow for using ``nbgrader zip_collect`` is
#
# 1. You, as an instructor, place submitted files/archives in
# ::
#
# {downloaded}/{assignment_id}/{archive_directory}
#
# 2. You run ``nbgrader zip_collect {assignment_id}``, which will:
#
# a. Extract these archive files - or copy non-archive files - to
# ::
#
# {downloaded}/{assignment_id}/{extracted_directory}
#
# b. Copy these extracted files to
# ::
#
# {submitted_directory}/{student_id}/{assignment_id}/{notebook_id}.ipynb
#
# 3. At this point you can use ``nbgrader autograde`` to grade the files in the submitted directory
# (See :ref:`autograde-assignments`).
#
# There are a few subtleties about how ``nbgrader zip_collect`` determines the correct student and notebook ids, which we'll go through in the sections below.
# -
# ### Step 1: Download submission files or archives
# + active=""
# The first step in the collect process is to extract files from any archive (zip) files - and copy any non-archive files - found in the following directory:
# ::
#
# {downloaded}/{assignment_id}/{archive_directory}/
#
# where:
#
# * The ``archive_directory`` contains the actual submission files or archives downloaded from your institution's
# learning management system. ``nbgrader zip_collect`` assumes you have already created this directory and placed all
# downloaded submission files or archives in this directory.
# -
# For demo purposes we have already created the directories needed by the ``nbgrader zip_collect`` sub-command and placed the downloaded assignment submission files and archive (zip) files in there. For example we have one ``.zip`` file and one ``.ipynb`` file:
# + language="bash"
#
# ls -l downloaded/ps1/archive
# -
# But before we can run the ``nbgrader zip_collect`` sub-command we first need to specify a few config options:
# +
# %%file nbgrader_config.py
c = get_config()
# Only set for demo purposes so as to not mess up the other documentation
c.CourseDirectory.submitted_directory = 'submitted_zip'
# Only collect submitted notebooks with valid names
c.ZipCollectApp.strict = True
# Apply this regular expression to the extracted file filename (absolute path)
c.FileNameCollectorPlugin.named_regexp = (
'.*_(?P<student_id>\w+)_attempt_'
'(?P<timestamp>[0-9\-]+)_'
'(?P<file_id>.*)'
)
c.CourseDirectory.db_assignments = [dict(name="ps1", duedate="2015-02-02 17:00:00 UTC")]
c.CourseDirectory.db_students = [
dict(id="bitdiddle", first_name="Ben", last_name="Bitdiddle"),
dict(id="hacker", first_name="Alyssa", last_name="Hacker")
]
# -
# Setting the ``strict`` flag to ``True`` skips any submitted notebooks with invalid names.
#
# By default the ``nbgrader zip_collect`` sub-command uses the ``FileNameCollectorPlugin`` to collect files from the ``extracted_directory``. This is done by sending each filename (**absolute path**) through to the ``FileNameCollectorPlugin``, which in turn applies a named group regular expression (``named_regexp``) to the filename.
#
# The ``FileNameCollectorPlugin`` returns ``None`` if the given file should be skipped or it returns an object that must contain the ``student_id`` and ``file_id`` data, and can optionally contain the ``timestamp``, ``first_name``, ``last_name``, and ``email`` data.
#
# Thus if using the default ``FileNameCollectorPlugin`` you must at least supply the ``student_id`` and ``file_id`` named groups. This plugin assumes all extracted files have a filename or path structure similar to the downloaded notebook:
# + language="bash"
#
# ls -l downloaded/ps1/archive
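# As a quick sanity check, here is how that named-group regular expression picks apart a hypothetical Blackboard-style filename (the path and filename below are assumptions for illustration):

```python
import re

# hypothetical downloaded filename; the pattern mirrors named_regexp above
fname = "/tmp/downloaded/ps1/extracted/ps1_hacker_attempt_2017-01-30-15-30-10_problem1.ipynb"

pattern = (
    r'.*_(?P<student_id>\w+)_attempt_'
    r'(?P<timestamp>[0-9\-]+)_'
    r'(?P<file_id>.*)'
)

m = re.match(pattern, fname)
print(m.group('student_id'))  # hacker
print(m.group('timestamp'))   # 2017-01-30-15-30-10
print(m.group('file_id'))     # problem1.ipynb
```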
# + active=""
# .. note::
#
# When collecting files in assignment sub-folders the ``file_id`` data must include the relative path to
# ``{assignment_id}`` and the filename in order to preserve the assignment directory structure.
#
# If you wish to use a custom plugin for the ``ZipCollectApp`` see :doc:`/plugins/zipcollect-plugin` for more information.
# -
# Before we extract the files, we also need to have run ``nbgrader assign``:
# + language="bash"
#
# nbgrader assign "ps1" --IncludeHeaderFooter.header=source/header.ipynb --force
# -
# ### Step 2: Extract, collect, and copy files
# + active=""
# With the ``nbgrader_config.py`` file created we can now run the ``nbgrader zip_collect`` sub-command. This will:
#
# a. Extract archive - or copy non-archive - files from the ``{archive_directory}`` into
# the following directory:
# ::
#
# {downloaded}/{assignment_id}/{extracted_directory}/
#
#    b. Then collect and copy files from the ``extracted_directory`` above to the students'
# ``submitted_directory``:
# ::
#
# {submitted_directory}/{student_id}/{assignment_id}/{notebook_id}.ipynb
#
# For example:
# + language="bash"
#
# nbgrader zip_collect ps1
# -
# After running the ``nbgrader zip_collect`` sub-command, the archive (zip) files were extracted - and the non-archive files were copied - to the ``extracted_directory``:
# + language="bash"
#
# ls -l downloaded/ps1/extracted/
# ls -l downloaded/ps1/extracted/notebooks/
# + active=""
# By default archive files will be extracted into their own sub-directory in the ``extracted_directory`` and any archive files, within archive files, will also be extracted into their own sub-directory along the path. To change this default behavior you can write your own extractor plugin for ``zip_collect`` (see :doc:`/plugins/zipcollect-plugin`).
#
# These extracted files were then collected and copied into the students' ``submitted_directory``:
# + language="bash"
#
# ls -l submitted_zip
# + language="bash"
#
# ls -l submitted_zip/hacker/ps1/
# -
# ## Custom plugins
# + active=""
# .. seealso::
#
# :doc:`/plugins/zipcollect-plugin`
# Plugins for ``nbgrader zip_collect``
#
# Unfortunately, for the demo above, the timestamp strings from the filenames did not parse correctly:
# + language="bash"
#
# cat submitted_zip/hacker/ps1/timestamp.txt
# -
# This is an issue with the underlying ``dateutils`` package used by ``nbgrader``. But not to worry, we can easily create a custom collector plugin to correct the timestamp strings when the files are collected, for example:
# +
# %%file plugin.py
from nbgrader.plugins import FileNameCollectorPlugin
class CustomPlugin(FileNameCollectorPlugin):
def collect(self, submission_file):
info = super(CustomPlugin, self).collect(submission_file)
if info is not None:
info['timestamp'] = '{}-{}-{} {}:{}:{}'.format(
*tuple(info['timestamp'].split('-'))
)
return info
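# To see the string surgery this plugin performs, here is the same format expression applied to a sample hyphen-separated timestamp (the value is assumed, matching what the ``named_regexp`` would capture):

```python
# sample timestamp as captured by the named_regexp (an assumed value)
raw = "2017-01-30-15-30-10"

# rejoin the six hyphen-separated fields into a date plus a time
fixed = '{}-{}-{} {}:{}:{}'.format(*tuple(raw.split('-')))
print(fixed)  # 2017-01-30 15:30:10
```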
# + language="bash"
#
# # Use force flag to overwrite existing files
# nbgrader zip_collect --force --collector=plugin.CustomPlugin ps1
# -
# The ``--force`` flag is used this time to overwrite existing extracted and submitted files. Now if we check the timestamp we see it parsed correctly:
# + language="bash"
#
# cat submitted_zip/hacker/ps1/timestamp.txt
# -
# Note that there should only ever be *one* instructor who runs the ``nbgrader zip_collect`` command (and there should probably only be one instructor -- the same instructor -- who runs `nbgrader assign`, `nbgrader autograde` and `nbgrader formgrade` as well). However this does not mean that only one instructor can do the grading, it just means that only one instructor manages the assignment files. Other instructors can still perform grading by accessing the formgrader URL.
| nbgrader/docs/source/user_guide/managing_assignment_files_manually.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import upbit_trader_api
import pandas as pd
import numpy as np
print(dir(upbit_trader_api))
upbit_trader_api.get_market_code()[0:3]
upbit_trader_api.get_candle_data("KRW-BTC", 5, None, "5")
upbit_trader_api.get_orderbook_info("KRW-BTC")
# +
# need api keys for request order(buy, sell)/ cancel order/ order status
upbit_trader_api.ACCESS_KEY = ""
upbit_trader_api.SECRET_KEY = ""
# This is an example; you should set your own price and volume before ordering.
price = 7300000
volume = 1.0
# buy request
uuid = upbit_trader_api.create_order(market="KRW-BTC", side='bid', price=price, volume=volume)
# sell request
uuid = upbit_trader_api.create_order(market="KRW-BTC", side='ask', price=price, volume=volume)
# cancel order
upbit_trader_api.cancel_order(uuid)
# check order status
upbit_trader_api.get_order_info(uuid)
| .ipynb_checkpoints/example_upbit_trader_api-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="cx82H4OmEZVR" colab_type="text"
# # Encoder
#
# + [markdown] id="JASz-63lY64O" colab_type="text"
# ## Importing libraries and data
#
# Using our ESIOS_contoller.py library we import our latest dataset and parse it for use. It works both in Google Drive (Colab) and in Jupyter.
# + id="uCkvfteNY-od" colab_type="code" outputId="186701d8-2b25-4ba9-ff8e-2ca8c4e9830b" executionInfo={"status": "ok", "timestamp": 1570212024195, "user_tz": -120, "elapsed": 1384, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCMo4diOvC7X2o3loNf2tTLcnrDlcvQT2ZBFxsZLA=s64", "userId": "10058377044009387405"}} colab={"base_uri": "https://localhost:8080/", "height": 105}
import json, urllib, datetime, pickle, time
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import *
from sklearn.metrics import *
from keras.models import *
from keras.layers import *
from sklearn.preprocessing import *
from keras.optimizers import *
from scipy.stats import *
from importlib.machinery import SourceFileLoader
from math import sqrt
try:
from google.colab import drive
drive.mount('/content/drive')
path = '/content/drive/My Drive/TFM/01.Utils/ESIOS_contoller.py'
in_colab = True
except:
path = '../utils/ESIOS_contoller.py'
in_colab = False
esios_assembler = SourceFileLoader('esios', path).load_module()
esios_controller = esios_assembler.ESIOS(in_colab)
data_consumo = esios_controller.get_data()
# + [markdown] id="CaGx5ORyGZLI" colab_type="text"
# ## Data preparation
# + id="FtIkBQUL74un" colab_type="code" outputId="f93b1de6-aa7b-4a92-c643-7274784a6f7a" executionInfo={"status": "ok", "timestamp": 1570209982554, "user_tz": -120, "elapsed": 3257, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCMo4diOvC7X2o3loNf2tTLcnrDlcvQT2ZBFxsZLA=s64", "userId": "10058377044009387405"}} colab={"base_uri": "https://localhost:8080/", "height": 510}
x_data_grouped = esios_controller.get_df_daily()
y_data_grouped = esios_controller.get_df_daily_all_day_prices()
x_data_grouped.head()
# + id="kCCDBSLzr0dx" colab_type="code" colab={}
columns_array = ['h'+str(i) for i in range(24)]
y_data_grouped = pd.DataFrame(y_data_grouped)
y_data_grouped = pd.DataFrame(y_data_grouped['PVPC_DEF'].values.tolist(), columns=columns_array)
# + id="68aDUNXrs6Hx" colab_type="code" outputId="93d28728-b1a6-4561-cc6c-af97c62e9165" executionInfo={"status": "ok", "timestamp": 1570209994106, "user_tz": -120, "elapsed": 675, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCMo4diOvC7X2o3loNf2tTLcnrDlcvQT2ZBFxsZLA=s64", "userId": "10058377044009387405"}} colab={"base_uri": "https://localhost:8080/", "height": 224}
y_data_grouped.tail()
# + id="btWBVlkhhGro" colab_type="code" outputId="6a512503-5e71-4cef-8663-be744d68e302" executionInfo={"status": "ok", "timestamp": 1570209996965, "user_tz": -120, "elapsed": 963, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCMo4diOvC7X2o3loNf2tTLcnrDlcvQT2ZBFxsZLA=s64", "userId": "10058377044009387405"}} colab={"base_uri": "https://localhost:8080/", "height": 224}
y_data_grouped = y_data_grouped.bfill()
y_data_grouped = y_data_grouped.ffill()
y_data_grouped.tail()
# + id="RdlhPsjAd7Eb" colab_type="code" outputId="4a217d48-846a-4aff-825a-02eb2a8d199b" executionInfo={"status": "ok", "timestamp": 1570211162620, "user_tz": -120, "elapsed": 1047, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCMo4diOvC7X2o3loNf2tTLcnrDlcvQT2ZBFxsZLA=s64", "userId": "10058377044009387405"}} colab={"base_uri": "https://localhost:8080/", "height": 51}
# Split the data
x_data_grouped = pd.DataFrame(x_data_grouped[['PVPC_DEF', 'Holiday']])
x_train, x_valid, y_train, y_valid = train_test_split(x_data_grouped, y_data_grouped, test_size=0.1, shuffle=False)
print('Xtrain_dim:', x_train.shape)
print('Ytrain_dim:', y_train.shape)
# + [markdown] id="24K8XGk7Eirr" colab_type="text"
# ## Model
#
# Note that the output layer has a dimension of 24, corresponding to the 24 hours of the day. The model, which is the encoder part of the autoencoder, is designed to take two input values (the average daily price and whether the day is a holiday) and from them estimate the price for each of the 24 hours of the day.
# + id="w3kB627ilOKG" colab_type="code" outputId="cb716f84-a404-43cc-b2ab-52cc52c21611" executionInfo={"status": "ok", "timestamp": 1570210002905, "user_tz": -120, "elapsed": 961, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCMo4diOvC7X2o3loNf2tTLcnrDlcvQT2ZBFxsZLA=s64", "userId": "10058377044009387405"}} colab={"base_uri": "https://localhost:8080/", "height": 411}
from keras.layers import Input, Dense
from keras.models import Model
from keras.regularizers import l2
encoding_dim = 24
model = Sequential()
model.add(Dense(40, activation='relu',
input_shape=(2,)))
model.add(Dense(24,
                activation='linear'))
opt = SGD(lr=0.01, momentum=0.9, decay=1e-6)
model.compile(optimizer='adam', loss='mse', metrics=['mse','mae','mape'])
model.summary()
# + id="urwkxdGult0Q" colab_type="code" outputId="a88d992d-9ba9-44f7-dbd2-d93d35c619c6" executionInfo={"status": "ok", "timestamp": 1570210058946, "user_tz": -120, "elapsed": 54042, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCMo4diOvC7X2o3loNf2tTLcnrDlcvQT2ZBFxsZLA=s64", "userId": "10058377044009387405"}} colab={"base_uri": "https://localhost:8080/", "height": 1000}
model.fit(x_train, y_train,epochs=300,
shuffle=False,
verbose=1)
# + id="xePErvl4sZYo" colab_type="code" outputId="0a46f1fa-c6eb-4506-cf00-fed1be7992ea" executionInfo={"status": "ok", "timestamp": 1570210076555, "user_tz": -120, "elapsed": 952, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCMo4diOvC7X2o3loNf2tTLcnrDlcvQT2ZBFxsZLA=s64", "userId": "10058377044009387405"}} colab={"base_uri": "https://localhost:8080/", "height": 111}
x_valid[:1]
# + id="5_z3-k0ymVkA" colab_type="code" outputId="8cd899c4-48a2-4eeb-b71d-fdb3f4c87c7a" executionInfo={"status": "ok", "timestamp": 1570210080556, "user_tz": -120, "elapsed": 971, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCMo4diOvC7X2o3loNf2tTLcnrDlcvQT2ZBFxsZLA=s64", "userId": "10058377044009387405"}} colab={"base_uri": "https://localhost:8080/", "height": 85}
print(model.predict(x_valid[2:3]))
# + [markdown] id="76Sr0bQOA6QZ" colab_type="text"
# ## Metrics
# + id="QRIGhoqer_Z1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 170} outputId="9fe9150a-f393-4bcc-8843-74f583c681d6" executionInfo={"status": "ok", "timestamp": 1570211354519, "user_tz": -120, "elapsed": 1313, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCMo4diOvC7X2o3loNf2tTLcnrDlcvQT2ZBFxsZLA=s64", "userId": "10058377044009387405"}}
y_valid = np.array(y_valid).flatten()
y_pred = np.array(model.predict(x_valid)).flatten()
esios_controller.get_metrics(y_valid, y_pred)
# + [markdown] id="chy4IBK45Aqm" colab_type="text"
# ## Save to use
# + id="sKYi2ndH3h7_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="1ceba022-b8b8-4451-cdbf-1579951da7dc" executionInfo={"status": "ok", "timestamp": 1570212252983, "user_tz": -120, "elapsed": 1041, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCMo4diOvC7X2o3loNf2tTLcnrDlcvQT2ZBFxsZLA=s64", "userId": "10058377044009387405"}}
esios_controller.save_keras_model('/content/drive/My Drive/TFM/01.Utils/data', model,'Encoder', False, True)
# + id="cfO9tojA6r-l" colab_type="code" colab={}
| modelos/Encoder.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (pymt)
# language: python
# name: pymt-dev
# ---
# ## CHILD Landscape Evolution Model
# * Link to this notebook: https://github.com/csdms/pymt/blob/master/notebooks/child.ipynb
# * Install command: `$ conda install notebook pymt_child`
#
#
# Import the `Child` component from `pymt`. All of the models available to `pymt` are located in `pymt.models`. The same commands work with every component: for instance, you could instead import `Sedflux3D` with `from pymt.models import Sedflux3D` and repeat this exercise with that model.
# +
from __future__ import print_function
# Some magic to make plots appear within the notebook
# %matplotlib inline
import numpy as np # In case we need to use numpy
# -
# ### Run CHILD in PyMT
# We'll now do the same thing but this time with the Child model. Notice that the commands will be the same. *If you know how to run one PyMT component, you know how to run them all.*
# +
import pymt.models
model = pymt.models.Child()
# -
# You can now see the help information for Child. This time, have a look under the *Parameters* section (you may have to scroll down - it's the section after the citations). The *Parameters* section describes optional keywords that you can pass to the `setup` method. In the previous example we just used defaults. Below we'll see how to set input file parameters programmatically through keywords.
help(model)
# rm -rf _model # Clean up for the next step
# We can change input file parameters through `setup` keywords. The `help` description above gives a brief description of each of these. For this example we'll change the grid spacing, the size of the domain, and the duration of the simulation.
config_file, initdir = model.setup('_model',
grid_node_spacing=750.,
grid_x_size=20000.,
grid_y_size=40000.,
run_duration=1e6)
# The setup folder now only contains the child input file.
# ls _model
# Again, initialize and run the model for 10 time steps.
model.initialize(config_file, initdir)
for t in range(10):
model.update()
print(model.time)
# This time around it's not quite as clear what the units of time are. We can check in the same way as before.
model.time_units
# Update until some time in the future. Notice that, in this case, we update to a partial time step. Child is fine with this; however, some other models may not be. For models that cannot update to times that are not full time steps, PyMT will advance to the next time step and interpolate values to the requested time.
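# The fallback interpolation can be pictured as simple linear interpolation between the two bracketing time steps. This is only a sketch of the idea with made-up values, not PyMT's actual implementation:

```python
# values at two full model time steps (made-up numbers)
t0, t1 = 201.0, 202.0
z0, z1 = 10.0, 12.0

# requested time between the two steps
t = 201.5

# linear interpolation to the requested time
z = z0 + (z1 - z0) * (t - t0) / (t1 - t0)
print(z)  # 11.0
```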
model.update_until(201.5, units='year')
print(model.time)
# Child offers different output variables but we get them in the same way as before.
model.output_var_names
model.get_value('land_surface__elevation')
# We can query each input and output variable. PyMT attaches a dictionary to each component called `var` that provides information about each variable. For instance we can see that `"land_surface__elevation"` has units of meters, is an input and output variable, and is defined on the nodes of grid with id 0.
model.var['land_surface__elevation']
# If we plot this variable, we can visually see the unstructured triangular grid that Child has decomposed its domain into.
model.quick_plot('land_surface__elevation', edgecolors='k', vmin=-200, vmax=200, cmap='BrBG_r')
# As with the `var` attribute, PyMT adds a dictionary, called `grid`, to components that provides a description of each of the model's grids. Here we can see the x and y positions of each grid node and how nodes connect to one another to form faces (the triangles in this case). Grids are described using the UGRID conventions.
model.grid[0]
# Child initializes its elevations with random noise centered around 0. We would instead like to give it elevations that have some land and some sea. First we'll get the x and y coordinates of each node along with their elevations.
x, y = model.get_grid_x(0), model.get_grid_y(0)
z = model.get_value('land_surface__elevation')
# All nodes above `y=y_shore` will be land, and all nodes below `y=y_shore` will be sea.
y_shore = 15000.
z[y < y_shore] -= 100
z[y >= y_shore] += 100
model.set_value('land_surface__elevation', z)
# Just to verify we set things up correctly, we'll create a plot.
model.quick_plot('land_surface__elevation', edgecolors='k', vmin=-200, vmax=200, cmap='BrBG_r')
# To get things going, we'll run the model for 5000 years and see what things look like.
model.update_until(5000.)
model.quick_plot('land_surface__elevation', edgecolors='k', vmin=-200, vmax=200, cmap='BrBG_r')
# We'll have some fun now by adding a simple uplift component. We'll run the component for another 5000 years but this time uplifting a corner of the grid by `dz_dt`.
dz_dt = .02
now = model.time
times, dt = np.linspace(now, now + 5000., 50, retstep=True)
for time in times:
model.update_until(time)
z = model.get_value('land_surface__elevation')
z[(y > 15000.) & (x > 10000.)] += dz_dt * dt
model.set_value('land_surface__elevation', z)
# A portion of the grid was uplifted and channels have begun eroding into it.
model.quick_plot('land_surface__elevation', edgecolors='k', vmin=-200, vmax=200, cmap='BrBG_r')
# We now stop the uplift and run it for an additional 5000 years.
model.update_until(model.time + 5000.)
model.quick_plot('land_surface__elevation', edgecolors='k', vmin=-200, vmax=200, cmap='BrBG_r')
model.get_value('channel_water_sediment~bedload__mass_flow_rate')
| notebooks/child.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Example of grouped regressions
#
# In this section, we demonstrate a slightly advanced example of using Pandas grouped transformations to perform many ordinary least squares (OLS) model fits in parallel. We reuse the weather data and try to predict the temperature of all stations with a very simple model per station.
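# As a tiny preview of the pattern, here is the same idea in plain pandas, using `np.polyfit` as a stand-in model; the stations and observations below are made-up numbers:

```python
import numpy as np
import pandas as pd

# two toy "stations" with four observations each
df = pd.DataFrame({
    'station': ['a'] * 4 + ['b'] * 4,
    'x': [0, 1, 2, 3] * 2,
    'y': [0, 2, 4, 6, 1, 1, 1, 1],
})

def fit(group):
    # fit a straight line per station
    slope, intercept = np.polyfit(group['x'], group['y'], 1)
    return pd.Series({'slope': slope, 'intercept': intercept})

# one model fit per station, collected into a single DataFrame
params = df.groupby('station').apply(fit)
print(params)
```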
# %matplotlib inline
# # 1 Load Data
# First we load data of a single year.
storageLocation = "s3://dimajix-training/data/weather"
# +
from pyspark.sql.functions import *
from pyspark.sql.types import *
rawWeatherData = spark.read.text(storageLocation + "/2003")
weather_all = rawWeatherData.select(
substring(col("value"),5,6).alias("usaf"),
substring(col("value"),11,5).alias("wban"),
to_timestamp(substring(col("value"),16,12),"yyyyMMddHHmm").alias("timestamp"),
to_timestamp(substring(col("value"),16,12),"yyyyMMddHHmm").cast("long").alias("ts"),
substring(col("value"),42,5).alias("report_type"),
substring(col("value"),61,3).alias("wind_direction"),
substring(col("value"),64,1).alias("wind_direction_qual"),
substring(col("value"),65,1).alias("wind_observation"),
(substring(col("value"),66,4).cast("float") / lit(10.0)).alias("wind_speed"),
substring(col("value"),70,1).alias("wind_speed_qual"),
(substring(col("value"),88,5).cast("float") / lit(10.0)).alias("air_temperature"),
substring(col("value"),93,1).alias("air_temperature_qual")
)
# -
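# Note that Spark's `substring` is 1-based. In plain Python, the `usaf` field at positions 5-10 corresponds to a 0-based slice; the record below is a made-up ISD-style prefix for illustration:

```python
# a made-up fixed-width weather record prefix for illustration
line = "0029029070999991901010106004"

# Spark: substring(col("value"), 5, 6)  ->  Python 0-based slice [4:10]
usaf = line[4:10]
print(usaf)  # 029070
```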
# # 2 Analysis of one station
#
# First we only analyse a single station, just to check our approach and the expressiveness of our model. It won't be a very good fit, but it will be good enough for our needs to demonstrate the concept.
#
# So first we pick a single station, and we also only keep those records with a valid temperature measurement.
weather_single = weather_all.where("usaf='954920' and wban='99999'").cache()
pdf = # YOUR CODE HERE
pdf
# ## 2.1 Create Feature Space
#
# Our model will simply predict the temperature depending on the time of day and day of year. We use sin and cos with a day-long period and a year-long period as features for fitting the model.
# +
import math
import numpy as np
import pandas as pd
seconds_per_day = 24*60*60
seconds_per_year = 365*seconds_per_day
# Add sin and cos as features for fitting
pdf['daily_sin'] = np.sin(pdf['ts']/seconds_per_day*2.0*math.pi)
pdf['daily_cos'] = np.cos(pdf['ts']/seconds_per_day*2.0*math.pi)
pdf['yearly_sin'] = np.sin(pdf['ts']/seconds_per_year*2.0*math.pi)
pdf['yearly_cos'] = np.cos(pdf['ts']/seconds_per_year*2.0*math.pi)
# Make a plot, just to check how it looks like
pdf[0:200].plot(x='timestamp', y=['daily_sin','daily_cos','air_temperature'], figsize=[16,6])
# -
# ## 2.2 Fit model
#
# Now that we have the temperature and some features, we fit a simple model.
# +
import statsmodels.api as sm
# define target variable y
y = pdf['air_temperature']
# define feature variables X
X = pdf[['ts', 'daily_sin', 'daily_cos', 'yearly_sin', 'yearly_cos']]
X = sm.add_constant(X)
# fit model
model = sm.OLS(y, X).fit()
# perform prediction
pdf['pred'] = model.predict(X)
# Make a plot of real temperature vs predicted temperature
pdf[0:200].plot(x='timestamp', y=['pred','air_temperature'], figsize=[16,6])
# -
# ## 2.3 Inspect Model
#
# Now let us inspect the model, in order to find a way to store it in a Pandas DataFrame
# +
# YOUR CODE HERE
# -
type(model.params)
# Finally let us create a Pandas DataFrame from the model parameters. This code snippet will be needed later when we want to parallelize the fitting for different weather stations using Spark.
x_columns = X.columns
pd.DataFrame([[model.params[i] for i in x_columns]], columns=x_columns)
# # 3 Perform OLS for all stations
#
# Now we want to create a model for all stations. First we filter the data again, such that we only have valid temperature measurements.
valid_weather = weather_all.filter(weather_all.air_temperature_qual == 1)
# ## 3.1 Feature extraction
#
# Now we generate the same features, but this time we use Spark instead of Pandas operations. This simplifies later model fitting.
# +
import math
seconds_per_day = 24*60*60
seconds_per_year = 365*seconds_per_day
features = valid_weather.select(
valid_weather.usaf,
valid_weather.wban,
valid_weather.air_temperature,
valid_weather.ts,
lit(1.0).alias('const'),
sin(valid_weather.ts * 2.0 * math.pi / seconds_per_day).alias('daily_sin'),
cos(valid_weather.ts * 2.0 * math.pi / seconds_per_day).alias('daily_cos'),
sin(valid_weather.ts * 2.0 * math.pi / seconds_per_year).alias('yearly_sin'),
cos(valid_weather.ts * 2.0 * math.pi / seconds_per_year).alias('yearly_cos')
)
features.limit(10).toPandas()
# -
# ## 3.2 Fit Models
#
# Now we use a Spark Pandas grouped UDF in order to fit models for all weather stations in parallel.
# +
group_columns = ['usaf', 'wban']
y_column = 'air_temperature'
x_columns = ['ts', 'const', 'daily_sin', 'daily_cos', 'yearly_sin', 'yearly_cos']
schema = features.select(*group_columns, *x_columns).schema
@pandas_udf(schema, PandasUDFType.GROUPED_MAP)
def ols(pdf):
# Extract grouping information from appropriate columns
group = # YOUR CODE HERE
# Extract target variable
y = # YOUR CODE HERE
# Extract predictor variables
X = # YOUR CODE HERE
# Create model using Python statsmodel package to fit y to input variables x
model = sm.OLS(y, X).fit()
# Create a Pandas data frame with one row containing the grouping columns and all model parameters
return pd.DataFrame([group + [model.params[i] for i in x_columns]], columns=group_columns + x_columns)
# Now fit model for all weather stations in parallel using Spark
models = # YOUR CODE HERE
# -
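# The grouped UDF above reduces, per weather station, to an ordinary least-squares fit that returns one coefficient row. A plain pandas/NumPy sketch of that per-group step (illustrative only — `fit_group` and the toy frame are made up here, and the exercise itself should still be completed with `sm.OLS` inside the UDF):

```python
import numpy as np
import pandas as pd

def fit_group(pdf, y_column, x_columns, group_columns):
    """Fit OLS on one group's rows; return a one-row coefficient frame."""
    group = [pdf[g].iloc[0] for g in group_columns]      # grouping key of this chunk
    X = pdf[x_columns].to_numpy(dtype=float)             # predictor matrix
    y = pdf[y_column].to_numpy(dtype=float)              # target vector
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)         # least-squares solution
    return pd.DataFrame([group + list(beta)],
                        columns=group_columns + x_columns)

# Toy station whose readings follow y = 2*const + 3*x exactly
df = pd.DataFrame({
    'usaf': ['954920'] * 4,
    'wban': ['99999'] * 4,
    'const': [1.0] * 4,
    'x': [0.0, 1.0, 2.0, 3.0],
})
df['y'] = 2.0 * df['const'] + 3.0 * df['x']

coef = fit_group(df, 'y', ['const', 'x'], ['usaf', 'wban'])
print(coef)
```

# In Spark, this per-group function is what `groupBy(*group_columns).apply(ols)` runs on each station's partition of rows.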
models.limit(10).toPandas()
# ## 3.3 Inspect and compare results
#
# Now let's pick the same station again, and compare the model to the original model.
models.where("usaf='954920' and wban='99999'").toPandas()
model.params
| spark-training/spark-python/jupyter-advanced-udf/Grouped Regression - Skeleton.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import matplotlib as mpl
import matplotlib.pyplot as plt
# %matplotlib inline
import numpy as np
import sklearn
import pandas as pd
import os
import sys
import time
import tensorflow as tf
from tensorflow import keras
print(tf.__version__)
print(sys.version_info)
for module in mpl, np, pd, sklearn, tf, keras:
print(module.__name__, module.__version__)
# +
fashion_mnist = keras.datasets.fashion_mnist
(x_train_all, y_train_all), (x_test, y_test) = fashion_mnist.load_data()
x_valid, x_train = x_train_all[:5000], x_train_all[5000:]
y_valid, y_train = y_train_all[:5000], y_train_all[5000:]
print(x_valid.shape, y_valid.shape)
print(x_train.shape, y_train.shape)
print(x_test.shape, y_test.shape)
# +
# x = (x - u) / std
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
# x_train: [None, 28, 28] -> [None, 784]
x_train_scaled = scaler.fit_transform(
x_train.astype(np.float32).reshape(-1, 1)).reshape(-1, 28, 28)
x_valid_scaled = scaler.transform(
x_valid.astype(np.float32).reshape(-1, 1)).reshape(-1, 28, 28)
x_test_scaled = scaler.transform(
x_test.astype(np.float32).reshape(-1, 1)).reshape(-1, 28, 28)
# +
# tf.keras.models.Sequential()
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
# This time we stack 20 hidden layers
for _ in range(20):
model.add(keras.layers.Dense(100, activation="relu"))
# Output layer
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer = keras.optimizers.SGD(0.001),
metrics = ["accuracy"])
# -
model.summary()
model.layers
# +
# Tensorboard, earlystopping, ModelCheckpoint
logdir = './dnn-callbacks'
if not os.path.exists(logdir):
os.mkdir(logdir)
output_model_file = os.path.join(logdir,
"fashion_mnist_model.h5")
callbacks = [
keras.callbacks.TensorBoard(logdir),
keras.callbacks.ModelCheckpoint(output_model_file,
save_best_only = True),
# keras.callbacks.EarlyStopping(patience=5, min_delta=1e-3),
]
history = model.fit(x_train_scaled, y_train, epochs=100,
validation_data=(x_valid_scaled, y_valid),
callbacks = callbacks)
# -
history.history
# +
def plot_learning_curves(history):
pd.DataFrame(history.history).plot(figsize=(8, 5))
plt.grid(True)
plt.gca().set_ylim(0, 1)
plt.show()
plot_learning_curves(history)
#If the learning curves barely move in the early epochs, likely causes:
# 1. Many parameters, so training is not yet sufficient
# 2. Vanishing gradients -> chain rule -> differentiating the composite f(g(x)) multiplies many small per-layer factors, starving the early layers
# -
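# The vanishing-gradient point above can be made concrete: by the chain rule, the gradient reaching an early layer is a product of one derivative factor per layer, so 20 factors below 1 shrink it exponentially. A tiny numeric sketch with assumed values (sigmoid chosen for its bounded derivative; this notebook uses ReLU, which partly avoids the problem):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# The sigmoid derivative s(x) * (1 - s(x)) peaks at 0.25 (at x = 0).
peak_grad = sigmoid(0.0) * (1.0 - sigmoid(0.0))

# Backpropagating through 20 such layers multiplies 20 of these factors
# (weights ignored for illustration), so the early-layer gradient is tiny:
grad_through_20 = peak_grad ** 20
print(peak_grad, grad_through_20)
```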
model.evaluate(x_test_scaled, y_test, verbose=0)
| tf04_keras_classification_model-dnn.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# 1. Identify all molecules that contain a carboxyl group
# 2. Identify all the torsion angles that involve that carboxyl group
# 3. Plot and see -- would expect there to be two modes separated by 180 degrees, sampled at least a little bit -- what's the relaxation time for this flip?
# 4. If any of this can be automated, put it in `bayes-implicit-solvent`
from bayes_implicit_solvent.solvation_free_energy import smiles_list, db, mol_top_sys_pos_list
for i in range(len(db)):
if db[i][2] == 'methyl formate':
print(i)
print(db[i])
break
smiles = db[i][1]
for j in range(len(smiles_list)):
if smiles_list[j] == smiles:
print(j)
break
smiles_list[j]
len(smiles_list), len(mol_top_sys_pos_list)
mol, top, sys, pos = mol_top_sys_pos_list[j]
atoms = [a.GetName() for a in mol.GetAtoms()]
atoms
len(atoms)
methyl_formate_traj_name = '../bayes_implicit_solvent/vacuum_samples/vacuum_samples_583.h5'
import mdtraj as md
traj = md.load(methyl_formate_traj_name)
traj
# +
def bonds_share_an_atom(bond1, bond2):
return bond1[1] == bond2[0]
#return len(set(bond1 + bond2)) < 4
def get_torsion_tuples(traj):
"""dumb brute-force O(n_bonds^3) iteration"""
bonds = list([(a.index, b.index) for (a,b) in traj.top.bonds]) + list([(b.index, a.index) for (a,b) in traj.top.bonds])
torsions = []
for bond1 in bonds:
for bond2 in bonds:
for bond3 in bonds:
if bonds_share_an_atom(bond1, bond2) and bonds_share_an_atom(bond2, bond3):
putative_torsion = (bond1[0], bond1[1], bond2[1], bond3[1])
if putative_torsion[-1] < putative_torsion[0]:
putative_torsion = putative_torsion[::-1]
if len(set(putative_torsion)) == 4:
torsions.append(putative_torsion)
return sorted(list(set(torsions)))
# -
torsions = get_torsion_tuples(traj)
len(torsions)
import pyemma
feat = pyemma.coordinates.featurizer(traj.topology)
feat.add_dihedrals(torsions, cossin=True)
X = feat.transform(traj)
# +
tica = pyemma.coordinates.tica(X, lag=20)
tica.timescales[0]
# -
X.shape
y = tica.get_output()[0]
import matplotlib.pyplot as plt
# %matplotlib inline
plt.plot(y[:,0][:1000])
feat.describe()
for i in range(len(feat.describe())):
plt.figure()
plt.plot(X[:,i], '.')
plt.title(feat.describe()[i])
for i in range(len(feat.describe())):
plt.figure()
plt.hist(X[:,i], bins=50);
plt.title(feat.describe()[i])
from pyemma.coordinates import acf
acfs = acf.acf(X.T)
acfs[0]
from statsmodels.tsa.stattools import acf
for i in range(len(feat.describe())):
plt.figure()
#plt.hist(X[:,i], bins=50);
plt.plot(acf(X[:,0], nlags=100))
plt.title(feat.describe()[i])
traj = traj.superpose(traj[5000])
traj[::100].save_pdb('methyl_formate_traj.pdb')
# +
# whew! that could be methyl formate...
[a.element for a in traj.top.atoms]
# +
# okay
# -
important_inds = [i for i in range(len(feat.describe())) if ('O1' in feat.describe()[i]) and ('O2' in feat.describe()[i])]
[feat.describe()[i] for i in important_inds]
import numpy as np
for i in important_inds:
plt.figure()
plt.plot(X[:,i], '.')
plt.title(feat.describe()[i])
plt.ylim(-np.pi / 2, np.pi / 2)
plt.hist(X[:, i], bins=50);
| notebooks/check torsion in methyl formate-- didn't sample transition!.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: ml-env
# language: python
# name: ml-env
# ---
# # Data 620 Assignment: High Frequency Words
#
# <NAME>, <NAME>
#
# June 23, 2020
# 1. Choose a corpus of interest.
# 2. How many total unique words are in the corpus? (Please feel free to define unique words in any interesting,
# defensible way).
# 3. Taking the most common words, how many unique words represent half of the total words in the corpus?
# 4. Identify the 200 highest frequency words in this corpus.
# 5. Create a graph that shows the relative frequency of these 200 words.
# 6. Does the observed relative frequency of these words follow Zipf’s law? Explain.
# 7. In what ways do you think the frequency of the words in this corpus differ from “all words in all corpora.”
# %matplotlib inline
import pandas as pd
import plotly as py
import plotly.graph_objs as go
from plotly.offline import init_notebook_mode, plot, iplot
import matplotlib.pyplot as plt
init_notebook_mode(connected=True)
import nltk, re, pprint
from nltk import word_tokenize
from urllib import request
from nltk.probability import FreqDist
import string
import re
from nltk.corpus import wordnet
from nltk.tokenize import sent_tokenize, word_tokenize
from nltk.stem import WordNetLemmatizer
from nltk.tag import pos_tag
from nltk.corpus import stopwords
import numpy as np
# ### 1. Choose a corpus of interest
# We chose the corpus of the book Treasure Island by <NAME>. The corpus is found on the Gutenberg website.
#
# #### Process
#
# For this analysis we'll be cleaning the corpus and removing punctuations as well as other symbols. We'll also lemmatize the corpus. We'll then remove stopwords and look at the frequency of the unique words in the corpus. Hopefully, our data cleaning process will help yield sensible results for our most common words.
from urllib import request
url = "http://www.gutenberg.org/files/120/120-0.txt"
response = request.urlopen(url)
raw = response.read().decode('utf8')
# ### Get slice of corpus which contains the book
print(raw.find("PART ONE--"))
print(raw.rfind("End of Project Gutenberg'"))
raw = raw[4276:372462]
# ### Remove punctuations and symbols
#
# clean up and remove punctuation
raw = raw.replace('-', ' ')
new_raw = re.sub(r'[^\w\s]', '', raw)
# ### Tokenize the lower case of the text and get the count of tokens
#break up the string into words and change all to lower case
tokens = word_tokenize(new_raw.lower())
print('Count of Tokens:', len(tokens))
# ### Create function to lemmatize words.
# +
def get_wordnet_pos(treebank_tag):
if treebank_tag.startswith('J'):
return wordnet.ADJ
elif treebank_tag.startswith('V'):
return wordnet.VERB
elif treebank_tag.startswith('N'):
return wordnet.NOUN
elif treebank_tag.startswith('R'):
return wordnet.ADV
else:
return 'n'
lemmatizer = WordNetLemmatizer()
def lemmatize_word(word):
try:
tag = get_wordnet_pos(nltk.pos_tag([word])[0][1])
return lemmatizer.lemmatize(word, pos=tag)
except:
pass
# -
# ### Lemmatize the tokens and remove numbers
tokens_lem = [lemmatize_word(x) for x in tokens]
tokens_lem = [x for x in tokens_lem if x.isalpha()]
# ### 2. How many total unique words are in the corpus?
#
# Distinct number of tokens will be used as total unique words.
print('Count of Unique Words: ', len(set(tokens_lem)))
# ### Take a look at the first 10 tokens.
tokens_lem[:10]
# ### Get the frequency distribution of the tokens and look at top 10
fdist1 = FreqDist(tokens_lem)
top10 = dict(fdist1.most_common(10))
top10
# ### Removing Stopwords
#
# Stopwords seem to be dominating our corpus so let's remove stopwords and run our analysis again
# +
stop = stopwords.words('english') + ['mr',
'mrs',
'miss',
'say',
'have',
'might',
'thought',
'would',
'could',
'make',
'much',
'dear',
'must',
'know',
'one',
'good',
'every',
'towards',
'give',
'dr',
'none',
'go',
'come',
'upon',
'get',
'see',
'like',
'appear',
'sometimes',
'the',
'and',
'a',
'be',
'i',
'of',
'to',
'have',
'in',
'he',
'that',
'you',
'it',
'his',
'my',
'with',
'for',
'on',
'say',
'but',
'me',
'at',
'we',
'all',
'not',
'this',
'by',
'him',
'one',
'there',
'now',
'man',
'so',
'do',
'out',
'they',
'go',
'well',
'from',
'come',
'if',
'like',
'up',
'see',
'no',
'when',
'put',
'take',
'begin',
'two',
'three',
'u',
'still',
'last',
'never',
'always',
'thing',
'tell']
filtered_tokens = [word for word in tokens_lem if word not in stop]
# -
fdist2 = FreqDist(filtered_tokens)
top10 = dict(fdist2.most_common(10))
top10
# We can see that the top words now take on a more distinct flavor and match the corpus thematically.
# ### 3. Taking the most common words, how many unique words represent half of the total words in the corpus?
# The output below shows that the top 51 words account for 49.91% of the total words in the filtered corpus.
# For comparison, in the non-filtered corpus the top 64 words account for 49.96% of the total words.
# +
samples = list(dict(list(fdist2.most_common(len(fdist2)))).keys())
freqs = [fdist2[sample] for sample in samples]
rel_freqs = [(fdist2[sample] / fdist2.N()) for sample in samples]
df_words = pd.DataFrame()
df_words['frequency'] = freqs
df_words['relfreq'] = rel_freqs
df_words['word'] = samples
df_words['cum_perc'] = df_words['relfreq'].cumsum()
print('Middle Words and Corresponding Frequencies')
df_words.loc[(df_words['cum_perc'] >= 0.495) & (df_words['cum_perc'] <= 0.505)]
# -
# ### 4. Identify the 200 highest frequency words in this corpus.
top200 = dict(fdist2.most_common(200))
print('Treasure Island by <NAME>')
print('Top 200 Words by Frequency')
print('Word Frequency')
print('-------------------')
for word, freq in top200.items():
print('{:<11} {}'.format(word, freq))
# ### 5. Create a graph that shows the relative frequency of these 200 words.
# +
df_words.sort_values(by='relfreq',
inplace=True,
ascending=False)
fig = go.Figure()
fig.add_trace(go.Bar(x = df_words['word'][:200], y = df_words['relfreq'][:200], orientation = 'v'));
fig.update_layout(go.Layout(
title='Relative Frequency of Top 200 Words in Treasure Island by <NAME>',
width=2000,
height=400,
yaxis=dict(
title='Relative Frequency' #, tickangle = 90
),
xaxis=dict(
title='Word', tickfont=dict(size=7)
)
));
fig.show();
# +
samples2 = list(dict(list(fdist1.most_common(len(fdist1)))).keys())
freqs2 = [fdist1[sample] for sample in samples2]
rel_freqs2 = [(fdist1[sample] / fdist1.N()) for sample in samples2]
df_words2 = pd.DataFrame()
df_words2['frequency'] = freqs2
df_words2['relfreq'] = rel_freqs2
df_words2['word'] = samples2
df_words2['cum_perc'] = df_words2['relfreq'].cumsum()
print('Middle Words and Corresponding Frequencies')
df_words2.loc[(df_words2['cum_perc'] >= 0.495) & (df_words2['cum_perc'] <= 0.505)]
# +
df_words2.sort_values(by='relfreq',
inplace=True,
ascending=False)
fig2 = go.Figure();
fig2.add_trace(go.Bar(x = df_words2['word'][:200], y = df_words2['relfreq'][:200], orientation = 'v'));
fig2.update_layout(go.Layout(
title='Relative Frequency of Top 200 Words in Treasure Island by <NAME>',
width=2000,
height=400,
yaxis=dict(
title='Relative Frequency' #, tickangle = 90
),
xaxis=dict(
title='Word', tickfont=dict(size=7)
)
));
fig2.show();
# -
# ### 6. Does the observed relative frequency of these words follow Zipf’s law? Explain.
# According to the text Natural Language Processing with Python:
# Zipf’s Law: Let *f(w)* be the frequency of a word *w* in free text. Suppose that all the words of a text are ranked according to their frequency, with the most frequent word first. Zipf’s Law states that the frequency of a word type is inversely
# proportional to its rank (i.e., *f × r = k*, for some constant *k*).
#
# The plot below of word rank vs. word frequency (both in log scale) is a relatively straight line. Therefore, we conclude that this corpus of Treasure Island follows Zipf's Law.
# +
fig = go.Figure()
fig.add_trace(go.Scatter(y = np.log(df_words['frequency']), x = np.log(df_words.index + 1)))
fig.update_layout(go.Layout(
title='Word Rank vs Word Frequency in Treasure Island<br>by <NAME>',
width=500,
height=500,
xaxis=dict(
title='Log Rank' #, tickangle = 90
),
yaxis=dict(
title='Log Frequency', tickfont=dict(size=7)
)
))
fig.show()
# -
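# The straight-line check can also be quantified: Zipf's law f × r ≈ k means log f ≈ log k − log r, i.e. a slope of about −1 in log-log space. A minimal sketch of estimating that slope with a least-squares fit (synthetic Zipfian counts, not the Treasure Island data):

```python
import numpy as np

# Synthetic frequencies obeying Zipf's law exactly: f(r) = k / r
k = 10_000
ranks = np.arange(1, 201)
freqs = k / ranks

# Fit log f = slope * log r + intercept; Zipf's law predicts slope ~ -1
slope, intercept = np.polyfit(np.log(ranks), np.log(freqs), 1)
print(round(slope, 6))
```

# Running the same fit on the real `df_words` frequencies would give a slope near −1 to the extent the corpus is Zipfian.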
# ### 7. In what ways do you think the frequency of the words in this corpus differ from “all words in all corpora.”
# Typically, once a corpus is cleaned, lemmatized, and filtered for stopwords, you will find words thematically representing the topic of the corpus among the most frequent. Before we filtered out stopwords, the top words were 'the', 'and', 'a', 'be', 'i', 'of', and 'to'. This would most likely be the same if we looked at all words in all corpora. Once stopwords were filtered out, the top words became 'hand', 'captain', 'silver', 'doctor', and 'time'. This makes sense for a novel such as 'Treasure Island'.
#
| Assignment_Wk4Pt2/Wk4Pt2_jit_latest_contributions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Plotting and Visualization
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
PREVIOUS_MAX_ROWS = pd.options.display.max_rows
pd.options.display.max_rows = 20
np.random.seed(12345)
plt.rc('figure', figsize=(15, 9))
np.set_printoptions(precision=4, suppress=True)
# # %matplotlib notebook
# ## A Brief matplotlib API Primer
import matplotlib.pyplot as plt
import numpy as np
data = np.arange(10)
data
plt.plot(data)
# ### Figures and Subplots
fig = plt.figure(figsize=(12, 8))
ax1 = fig.add_subplot(2, 2, 1)
ax2 = fig.add_subplot(2, 2, 2)
ax3 = fig.add_subplot(2, 2, 3)
plt.plot(np.random.randn(50).cumsum(), 'k--')
_ = ax1.hist(np.random.randn(100), bins=20, color='k', alpha=0.3)
ax2.scatter(np.arange(30), np.arange(30) + 3 * np.random.randn(30))
fig, axes = plt.subplots(2, 3)
axes
# #### Adjusting the spacing around subplots
# subplots_adjust(left=None, bottom=None, right=None, top=None,
# wspace=None, hspace=None)
# fig, axes = plt.subplots(2, 2, sharex=True, sharey=True)
# for i in range(2):
# for j in range(2):
# axes[i, j].hist(np.random.randn(500), bins=50, color='k', alpha=0.5)
# plt.subplots_adjust(wspace=0, hspace=0)
fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(12, 8))
for i in range(2):
for j in range(2):
axes[i, j].hist(np.random.randn(500), bins=50, color='k', alpha=0.5)
plt.subplots_adjust(wspace=0, hspace=0)
# ### Colors, Markers, and Line Styles
# ax.plot(x, y, 'g--')
# ax.plot(x, y, linestyle='--', color='g')
from numpy.random import randn
plt.figure()
plt.plot(randn(30).cumsum(), 'ko--')
# plot(randn(30).cumsum(), color='k', linestyle='dashed', marker='o')
data = np.random.randn(30).cumsum()
plt.plot(data, 'k--', label='Default')
plt.plot(data, 'k-', drawstyle='steps-post', label='steps-post')
plt.legend(loc='best')
# ### Ticks, Labels, and Legends
# #### Setting the title, axis labels, ticks, and ticklabels
fig, ax = plt.subplots()
ax.plot(np.random.randn(1000).cumsum())
ax.set_xticks([0, 250, 500, 750, 1000])
ax.set_xticklabels(['one', 'two', 'three', 'four', 'five'],
rotation=30, fontsize='small')
ax.set_title('My first matplotlib plot')
ax.set_xlabel('Stages')
# props = {
# 'title': 'My first matplotlib plot',
# 'xlabel': 'Stages'
# }
# ax.set(**props)
# #### Adding legends
from numpy.random import randn
fig, ax = plt.subplots(figsize=(12, 8))
ax.plot(randn(1000).cumsum(), 'k', label='one')
ax.plot(randn(1000).cumsum(), 'b--', label='two')
ax.plot(randn(1000).cumsum(), 'r.', label='three')
ax.legend(loc='best')
# ### Annotations and Drawing on a Subplot
# ax.text(x, y, 'Hello world!',
# family='monospace', fontsize=10)
# +
from datetime import datetime
# %matplotlib inline
fig, ax = plt.subplots(figsize=(12, 8))
data = pd.read_csv('examples/spx.csv', index_col=0, parse_dates=True)
spx = data['SPX']
spx.plot(ax=ax, style='k-')
crisis_data = [
(datetime(2007, 10, 11), 'Peak of bull market'),
(datetime(2008, 3, 12), 'Bear Stearns Fails'),
(datetime(2008, 9, 15), 'Lehman Bankruptcy')
]
for date, label in crisis_data:
ax.annotate(label, xy=(date, spx.asof(date) + 75),
xytext=(date, spx.asof(date) + 225),
arrowprops=dict(facecolor='black', headwidth=4, width=2,
headlength=4),
horizontalalignment='left', verticalalignment='top')
# Zoom in on 2007-2010
ax.set_xlim(['1/1/2007', '1/1/2011'])
ax.set_ylim([600, 1800])
ax.set_title('Important dates in the 2008-2009 financial crisis')
# -
# fig = plt.figure()
# ax = fig.add_subplot(1, 1, 1)
#
# rect = plt.Rectangle((0.2, 0.75), 0.4, 0.15, color='k', alpha=0.3)
# circ = plt.Circle((0.7, 0.2), 0.15, color='b', alpha=0.3)
# pgon = plt.Polygon([[0.15, 0.15], [0.35, 0.4], [0.2, 0.6]],
# color='g', alpha=0.5)
#
# ax.add_patch(rect)
# ax.add_patch(circ)
# ax.add_patch(pgon)
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(1, 1, 1)
rect = plt.Rectangle((0.2, 0.75), 0.4, 0.15, color='k', alpha=0.3)
circ = plt.Circle((0.7, 0.2), 0.15, color='b', alpha=0.3)
pgon = plt.Polygon([[0.15, 0.15], [0.35, 0.4], [0.2, 0.6]],
color='g', alpha=0.5)
ax.add_patch(rect)
ax.add_patch(circ)
ax.add_patch(pgon)
# ### Saving Plots to File
# plt.savefig('figpath.svg')
# plt.savefig('figpath.png', dpi=400, bbox_inches='tight')
# from io import BytesIO
# buffer = BytesIO()
# plt.savefig(buffer)
# plot_data = buffer.getvalue()
# ### matplotlib Configuration
# plt.rc('figure', figsize=(10, 10))
# font_options = {'family' : 'monospace',
# 'weight' : 'bold',
# 'size' : 'small'}
# plt.rc('font', **font_options)
# ## Plotting with pandas and seaborn
# ### Line Plots
s = pd.Series(np.random.randn(10).cumsum(), index=np.arange(0, 100, 10))
s
s.plot()
df = pd.DataFrame(np.random.randn(10, 4).cumsum(0),
columns=['A', 'B', 'C', 'D'],
index=np.arange(0, 100, 10))
df
df.plot()
# ### Bar Plots
fig, axes = plt.subplots(2, 1)
data = pd.Series(np.random.rand(16), index=list('abcdefghijklmnop'))
data.plot.bar(ax=axes[0], color='k', alpha=0.7)
data.plot.barh(ax=axes[1], color='k', alpha=0.7)
np.random.seed(12348)
df = pd.DataFrame(np.random.rand(6, 4),
index=['one', 'two', 'three', 'four', 'five', 'six'],
columns=pd.Index(['A', 'B', 'C', 'D'], name='Genus'))
df
df.plot.bar()
plt.figure()
df.plot.barh(stacked=True, alpha=0.5)
tips = pd.read_csv('examples/tips.csv')
party_counts = pd.crosstab(tips['day'], tips['size'])
party_counts
# Not many 1- and 6-person parties
party_counts = party_counts.loc[:, 2:5]
party_counts
# Normalize to sum to 1
party_pcts = party_counts.div(party_counts.sum(1), axis=0)
party_pcts
party_pcts.plot.bar()
import seaborn as sns
tips['tip_pct'] = tips['tip'] / (tips['total_bill'] - tips['tip'])
tips.head()
sns.barplot(x='tip_pct', y='day', data=tips, orient='h')
sns.barplot(x='tip_pct', y='day', hue='time', data=tips, orient='h')
sns.set(style="white")
# darkgrid, whitegrid, dark, white, ticks
# ### Histograms and Density Plots
plt.figure()
tips['tip_pct'].plot.hist(bins=50)
plt.figure()
tips['tip_pct'].plot.density()
plt.figure()
comp1 = np.random.normal(0, 1, size=200)
comp2 = np.random.normal(10, 2, size=200)
values = pd.Series(np.concatenate([comp1, comp2]))
sns.distplot(values, bins=100, color='k')
# ### Scatter or Point Plots
macro = pd.read_csv('examples/macrodata.csv')
data = macro[['cpi', 'm1', 'tbilrate', 'unemp']]
trans_data = np.log(data).diff().dropna()
trans_data[-5:]
fig, ax = plt.subplots()
ax = sns.regplot(x='m1', y='unemp', data=trans_data)
ax.set_title('Changes in log %s versus log %s' % ('m1', 'unemp'))
plt.figure()
sns.pairplot(trans_data, diag_kind='kde', plot_kws={'alpha': 0.2})
# ### Facet Grids and Categorical Data
sns.catplot(x='day', y='tip_pct', hue='time', col='smoker',
kind='bar', data=tips[tips.tip_pct < 1])
sns.catplot(x='day', y='tip_pct', row='time',
col='smoker',
kind='bar', data=tips[tips.tip_pct < 1])
sns.catplot(x='tip_pct', y='day', kind='box',
data=tips[tips.tip_pct < 0.5])
# ## Other Python Visualization Tools
pd.options.display.max_rows = PREVIOUS_MAX_ROWS
# ## Conclusion
| ch09.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: CoCoZap
# language: python
# name: cocozap
# ---
# # Heatmap Rio de Janeiro
# +
import pandas as pd
import numpy as np
import folium
from folium import plugins
import branca.colormap
from collections import defaultdict
# -
base_rj = pd.read_csv('base_heatmap_rj.csv')
cidades = pd.read_csv('https://raw.githubusercontent.com/sandeco/CanalSandeco/master/covid-19/cidades_brasil.csv')
# Viewing the data frame
base_rj.head()
cidades.head()
# Viewing the data frame information
base_rj.info()
# Converting the columns to numeric type
base_rj['repr_pret_eleitos'] = base_rj['repr_pret_eleitos'].apply(lambda x: float(x.replace(".","").replace(",",".")))
base_rj['repr_negr_eleitos'] = base_rj['repr_negr_eleitos'].apply(lambda x: float(x.replace(".","").replace(",",".")))
base_rj['inves_negr100'] = base_rj['inves_negr100'].apply(lambda x: float(x.replace(".","").replace(",",".")))
base_rj['repr_negra100'] = base_rj['repr_negra100'].apply(lambda x: float(x.replace(".","").replace(",",".")))
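# The chained `replace` calls above convert Brazilian-formatted numbers — dot as thousands separator, comma as decimal separator — into floats. A minimal illustration of the same transformation (the sample values are made up):

```python
def br_to_float(s):
    # '1.234,56' (pt-BR formatting) -> 1234.56
    return float(s.replace('.', '').replace(',', '.'))

print(br_to_float('1.234,56'))   # 1234.56
print(br_to_float('0,75'))       # 0.75
```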
# Dropping columns
base_rj.drop(['Unnamed: 6', 'Unnamed: 7'], axis=1, inplace=True)
cities_rj = cidades.loc[cidades['codigo_uf'] == 33]
cities_rj.rename(columns={'nome': 'municipios'}, inplace=True)
cities_rj.info()
cities_rj.head()
cod = cities_rj['codigo_ibge'].to_list()
base_rj['codigo_ibge'] = cod
geoloc_base_rj = pd.merge(base_rj, cities_rj, on='codigo_ibge', how='left')
geoloc_base_rj.head()
geoloc_base_rj.info()
geoloc_base_rj.drop(['municipios_y', 'capital', 'codigo_uf_x', 'codigo_uf_y'], axis=1, inplace= True)
geoloc_base_rj.rename(columns={'municipios_x': 'municipios'}, inplace=True)
geoloc_base_rj.to_csv('geoloc_base_rj.csv')
geoloc_base_rj = pd.read_csv('geoloc_base_rj.csv')
# #### Heatmap: Representation of Elected Black (Preto) Candidates
coordenadas1 = geoloc_base_rj[['latitude', 'longitude', 'repr_pret_eleitos']]
coordenadas1
geoloc_base_rj['repr_pret_eleitos'].describe()
map_1 = folium.Map(
width= '100%',
    height = '100%',
location = [-22.0622469,-44.04462],
zoom_start = 8)
steps=20
colormap = branca.colormap.linear.YlOrRd_09.scale(0, 1).to_step(steps)
colormap.caption = 'Representatividade Pretos Eleitos'
gradient_map=defaultdict(dict)
for i in range(steps):
gradient_map[1/steps*i] = colormap.rgb_hex_str(1/steps*i)
colormap.add_to(map_1)
map_1 = map_1.add_child(plugins.HeatMap(coordenadas1,fill_opacity = 0.5, gradient = gradient_map,
show=True))
for i in range(0, len(geoloc_base_rj)):
folium.Circle(
location = [geoloc_base_rj.iloc[i]['latitude'] , geoloc_base_rj.iloc[i]['longitude']],
color = '#00FF69',
fill = '#00A1B3',
tooltip = '<li><bold> MUNICIPIO:' +str(geoloc_base_rj.iloc[i]['municipios']) + "</bold><li>"+
'<li><bold> Representatividade:' + str(geoloc_base_rj.iloc[i]['repr_pret_eleitos']) + "</bold><li>",
radius = (geoloc_base_rj.iloc[i]['repr_pret_eleitos']**1.1)
).add_to(map_1)
map_1
map_1.save('repr_pretos_eleitos_rj.html')
# #### Heatmap: Representation of Elected Black (Negro) Candidates
coordenadas2 = geoloc_base_rj[['latitude', 'longitude', 'repr_negr_eleitos']]
geoloc_base_rj['repr_negr_eleitos'].describe()
map_2 = folium.Map(
width= '100%',
    height = '100%',
location = [-22.0622469,-44.04462],
zoom_start = 8)
steps=20
colormap2 = branca.colormap.linear.PuRd_04.scale(0,1).to_step(steps)
colormap2.caption = 'Representatividade Negros Eleitos'
gradient_map2=defaultdict(dict)
for i in range(steps):
gradient_map2[1/steps*i] = colormap2.rgb_hex_str(1/steps*i)
colormap2.add_to(map_2)
map_2 = map_2.add_child(plugins.HeatMap(coordenadas2,fill_opacity = 0.5, gradient = gradient_map2,
show=True))
for i in range(0, len(geoloc_base_rj)):
folium.Circle(
location = [geoloc_base_rj.iloc[i]['latitude'] , geoloc_base_rj.iloc[i]['longitude']],
color = '#00FF69',
fill = '#00A1B3',
tooltip = '<li><bold> MUNICIPIO:' +str(geoloc_base_rj.iloc[i]['municipios']) + "</bold><li>"+
'<li><bold> Representatividade:' + str(geoloc_base_rj.iloc[i]['repr_negr_eleitos']) + "</bold><li>",
radius = (geoloc_base_rj.iloc[i]['repr_negr_eleitos']**1.1)
).add_to(map_2)
folium.LayerControl(collapsed=False).add_to(map_2)
map_2
map_2.save('repr_negros_eleito_rj.html')
# #### Heatmap: Investment in Black Candidates
coordenadas3 = geoloc_base_rj[['latitude', 'longitude', 'inves_negr100']]
geoloc_base_rj['inves_negr100'].describe()
map_3 = folium.Map(
width= '100%',
    height = '100%',
location = [-22.0622469,-44.04462],
zoom_start = 8)
steps=20
colormap3 = branca.colormap.linear.YlOrBr_09.scale(0, 1).to_step(steps)
colormap3.caption = 'Investimento Negro'
gradient_map3=defaultdict(dict)
for i in range(steps):
gradient_map3[1/steps*i] = colormap3.rgb_hex_str(1/steps*i)
colormap3.add_to(map_3)
map_3 = map_3.add_child(plugins.HeatMap(coordenadas3, fill_opacity = 0.5, gradient = gradient_map3,
show=True))
for i in range(0, len(geoloc_base_rj)):
folium.Circle(
location = [geoloc_base_rj.iloc[i]['latitude'] , geoloc_base_rj.iloc[i]['longitude']],
color = '#00FF69',
fill = '#00A1B3',
tooltip = '<li><bold> MUNICIPIO:' +str(geoloc_base_rj.iloc[i]['municipios']) + "</bold><li>"+
'<li><bold> Investimento:' + str(geoloc_base_rj.iloc[i]['inves_negr100']) + "</bold><li>",
radius = (geoloc_base_rj.iloc[i]['inves_negr100']*1.1)
).add_to(map_3)
map_3
map_3.save('invest_negro_rj.html')
# #### Heatmap: Representation of Black Candidacies
coordenadas4 = geoloc_base_rj[['latitude', 'longitude', 'repr_negra100']]
map_4 = folium.Map(
width= '100%',
    height = '100%',
location = [-22.0622469,-44.04462],
zoom_start = 8)
steps=20
colormap4 = branca.colormap.linear.Set1_09.scale(0, 1).to_step(steps)
colormap4.caption = 'Representatividade Candidaturas Negras'
gradient_map4=defaultdict(dict)
for i in range(steps):
gradient_map4[1/steps*i] = colormap4.rgb_hex_str(1/steps*i)
colormap4.add_to(map_4)
map_4 = map_4.add_child(plugins.HeatMap(coordenadas4, gradient = gradient_map4,
show=True))
for i in range(0, len(geoloc_base_rj)):
folium.Circle(
location = [geoloc_base_rj.iloc[i]['latitude'] , geoloc_base_rj.iloc[i]['longitude']],
color = '#00FF69',
fill = '#00A1B3',
tooltip = '<li><bold> MUNICIPIO:' +str(geoloc_base_rj.iloc[i]['municipios']) + "</bold><li>"+
'<li><bold> Representatividade:' + str(geoloc_base_rj.iloc[i]['repr_negra100']) + "</bold><li>",
radius = (geoloc_base_rj.iloc[i]['repr_negra100']**1.1)
).add_to(map_4)
map_4
map_4.save('repr_cand_negra_rj.html')
| Mapas_Eleitorais/heatmap_sp/.ipynb_checkpoints/heatmap_RJ-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/", "height": 80} colab_type="code" id="Cb3ztjI91g4g" outputId="8105d586-494e-47c8-d2ed-85efac26273a"
import numpy as np
import pandas as pd
np.random.seed(42)
import tensorflow as tf
tf.set_random_seed(42)
from keras.models import Sequential, load_model
from keras.layers import Dense, Activation
from keras.layers import LSTM, Dropout
from keras.layers import TimeDistributed
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.callbacks import EarlyStopping
from keras.layers.core import Dense, Activation, Dropout, RepeatVector
from keras.optimizers import RMSprop
import matplotlib.pyplot as plt
import pickle
import sys
import heapq
import seaborn as sns
from pylab import rcParams
import unicodedata
import json
from sklearn.model_selection import train_test_split
# + colab={} colab_type="code" id="v0u8O2sg1x9D"
from keras.layers import Dense, Embedding, LSTM, SpatialDropout1D
from keras.models import Sequential, Input
from sklearn.feature_extraction.text import CountVectorizer
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from sklearn.model_selection import train_test_split
from keras.utils.np_utils import to_categorical
from keras.callbacks import EarlyStopping
from keras.optimizers import RMSprop, Adam
from keras.layers import Dense, Input, LSTM, Embedding, Dropout, Activation, GRU
from keras.layers import Bidirectional, GlobalMaxPool1D
from keras.models import Model
from keras import initializers, regularizers, constraints, optimizers, layers
# SEQUENCE SIZE PARAMETERS
CHARACTER_NUMBER_PREDICTION = 80
DATA_SET_COLLECTION_ITERATIONS = 50_000
SET_VARIABILITY = 3
# RNN PARAMETERS
EMB_DIM = 256
SEQ_UNITS = 256
DROP = .1
# TRAINING PARAMETERS
TEST_SIZE = 0.2
EPOCHS = 400
BATCH_SIZE = 256
VALIDATION_SPLIT = .2
# + colab={} colab_type="code" id="Ht9k06l411rL"
contentDf = pd.read_csv('bras_cubas_paragraphs.csv')
# + [markdown] colab_type="text" id="D7bzrODFBKsA"
# # Dataset preparation
# -
def strip_accents(text):
try:
text = unicode(text, 'utf-8')
except NameError: # unicode is a default on python 3
pass
text = unicodedata.normalize('NFD', text)\
.encode('ascii', 'ignore')\
.decode("utf-8")
return str(text)
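# A quick self-contained check of what the NFD-normalize → ASCII-encode round trip above does (the sample strings are illustrative):

```python
import unicodedata

def strip_accents(text):
    # Decompose accented characters (NFD), drop the non-ASCII combining
    # marks, and decode back to a plain string.
    return unicodedata.normalize('NFD', text).encode('ascii', 'ignore').decode('utf-8')

print(strip_accents('memórias póstumas'))   # memorias postumas
print(strip_accents('coração'))             # coracao
```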
# + colab={} colab_type="code" id="WKBLSpHGBWIO"
# Arrays we'll use to store our dataset
X_data = []
y_data = []
# Looping through our corpus...
for i in range(DATA_SET_COLLECTION_ITERATIONS):
# ... selecting a random paragraph from our book...
paragraphIndex = np.random.randint(0, len(contentDf.paragraphs))
currentParagraph = contentDf.paragraphs[paragraphIndex]
# ... sampling a slice of the selected paragraph...
# aux = 1
# dontAddThisSample = False
# while(len(currentParagraph) - CHARACTER_NUMBER_PREDICTION + 1 < CHARACTER_NUMBER_PREDICTION + 1):
# # currentParagraph += contentDf.paragraphs[np.random.randint(0, len(contentDf.paragraphs))]
# if (paragraphIndex + aux < len(contentDf.paragraphs)):
# currentParagraph += contentDf.paragraphs[paragraphIndex + aux]
# aux += 1
# else:
# dontAddThisSample = True
# break;
# if (dontAddThisSample):
# continue
if (len(currentParagraph) < CHARACTER_NUMBER_PREDICTION + 1):
continue
paragraphRegion = np.random.randint(0, len(currentParagraph) - CHARACTER_NUMBER_PREDICTION)
# Checking how many different chars are in the selected paragraph region
nChars = len(set(currentParagraph[paragraphRegion : paragraphRegion + CHARACTER_NUMBER_PREDICTION]))
if (nChars < SET_VARIABILITY):
continue
# Adding an excerpt of the paragraph to our X and y data.
X_data.append(currentParagraph[paragraphRegion : paragraphRegion + CHARACTER_NUMBER_PREDICTION].casefold())
y_data.append(currentParagraph[paragraphRegion + CHARACTER_NUMBER_PREDICTION].casefold())
# -
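# The windowing above is easier to see on a toy example: a slice of
# CHARACTER_NUMBER_PREDICTION characters becomes the input and the character right
# after it becomes the target (window size and region are shrunk here for display):

```python
paragraph = "memorias postumas de bras cubas"
window = 10  # stands in for CHARACTER_NUMBER_PREDICTION
region = 0   # fixed instead of random, for a deterministic illustration

x_sample = paragraph[region : region + window]
y_sample = paragraph[region + window]
print(x_sample, '->', repr(y_sample))  # memorias p -> 'o'
```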
len(X_data), len(y_data)
# + [markdown] colab_type="text" id="qFbfudiPN5MP"
# In the cells below, we'll instantiate and fit a tokenizer.
# + colab={} colab_type="code" id="xVFYTOMkHjGJ"
tokenizer = Tokenizer(
num_words=500,
char_level=True,
filters=None,
lower=False,
oov_token=chr(1),
)
# + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="h1ml7sn-LmvV" outputId="846ab1e4-0915-4bbf-ade7-e530ccffd60b"
# %%time
tokenizer.fit_on_texts(X_data)
tokenizer.fit_on_texts(y_data)
word_index = tokenizer.word_index
index_word = tokenizer.index_word
# + colab={} colab_type="code" id="hs3LJUSBL8i0"
X = np.array(tokenizer.texts_to_sequences(X_data), dtype=np.int32)
y = np.array(list(map(word_index.get, y_data)))
# + colab={"base_uri": "https://localhost:8080/", "height": 153} colab_type="code" id="737CDqaGMkjf" outputId="a2fcdcf8-a09e-4a13-eb75-f18584ae39a3"
print(X)
print(y)
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="eeTG9ePCOuPA" outputId="419dcf5e-c5e4-41de-a313-593c8f3bfded"
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=TEST_SIZE)
print(f"X train shape: {X_train.shape}, Y train shape : {y_train.shape}, X test shape: {X_test.shape}, Y test shape: {y_test.shape}")
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="X_VM3plzRsoQ" outputId="73a32fbe-5fd2-4e74-bfd3-4fe2256eaded"
len(tokenizer.word_index)
# + [markdown] colab_type="text" id="PmeWLbt-OMnZ"
# # Model creation
# + colab={} colab_type="code" id="EJZVax2mOO0T"
model = Sequential()
model.add(Embedding(len(tokenizer.word_index) + 1, EMB_DIM, input_length=X_train.shape[1]))
model.add(SpatialDropout1D(DROP))
model.add(Bidirectional(GRU(SEQ_UNITS, return_sequences=True, dropout=DROP,recurrent_dropout=DROP)))
model.add(SpatialDropout1D(DROP))
model.add(Bidirectional(GRU(SEQ_UNITS, return_sequences=True, dropout=DROP,recurrent_dropout=DROP)))
model.add(GlobalMaxPool1D())
model.add(Dense(len(tokenizer.word_index) + 1, activation='softmax'))  # +1: Tokenizer indices start at 1, so class ids run 1..len(word_index)
# + colab={"base_uri": "https://localhost:8080/", "height": 340} colab_type="code" id="PPwvhgBOP9LX" outputId="41b66a9e-0aa6-48af-fde5-a9ca5d7b258a"
model.compile(
optimizer=RMSprop(lr=0.001),
loss='sparse_categorical_crossentropy',
metrics=['sparse_categorical_accuracy']
)
print(model.summary())
# + colab={"base_uri": "https://localhost:8080/", "height": 683} colab_type="code" id="M7TKfzAYQOxj" outputId="6627de62-d5f4-4dc7-d664-9d5eb2473bbf"
# %%time
history = model.fit(X_train, y_train,
epochs=EPOCHS,
batch_size=BATCH_SIZE,
validation_split=VALIDATION_SPLIT,
callbacks=[EarlyStopping(monitor='val_loss',patience=1,
min_delta=1e-7)]
)
# -
# # Model testing
curr = X_test[0].copy().reshape((1, -1))
original = ''.join(map(tokenizer.index_word.get, curr[0]))
next_seq = []
while True:
_next = model.predict_classes(curr)[0]
next_char = tokenizer.index_word[_next]
if next_char == ' ':
break
next_seq.append(next_char)
curr[0, 0 : -1] = curr[0, 1 :]
curr[0, -1] = _next
print(original, ''.join(next_seq))
X_test[0]
for i in range(30):
curr = X_test[i].copy().reshape((1, -1))
original = ''.join(map(tokenizer.index_word.get, curr[0]))
blank_count = 0
next_seq = []
while True:
_next = model.predict_classes(curr)[0]
next_char = tokenizer.index_word[_next]
if next_char == ' ':
blank_count += 1
if blank_count == 6:
break
next_seq.append(next_char)
curr[0, 0 : -1] = curr[0, 1 :]
curr[0, -1] = _next
print(original, '--', ''.join(next_seq))
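# The loops above always take the most probable character (greedy decoding), which tends
# to produce repetitive text. A common alternative is temperature sampling from the
# softmax output; a minimal stdlib sketch (the helper name and temperature values are mine):

```python
import math
import random

def sample_with_temperature(probs, temperature=1.0):
    # rescale log-probabilities by 1/temperature, renormalize, then draw an index;
    # probs must be strictly positive
    logits = [math.log(p) / temperature for p in probs]
    m = max(logits)
    weights = [math.exp(l - m) for l in logits]
    r = random.random() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(probs) - 1

# low temperature -> close to argmax; high temperature -> close to uniform
print(sample_with_temperature([0.01, 0.98, 0.01], temperature=0.1))
```

# Sampling over `model.predict(curr)[0]` instead of calling `predict_classes` would plug
# this into the generation loops above.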
# # Model Saving
model.save('bras_cubas_80-keras_model.h5')
with open('bras_cubas_80-index_word.json', 'w') as f:
json.dump(tokenizer.index_word, f, ensure_ascii=False)
with open('bras_cubas_80-word_index.json', 'w') as f:
json.dump(tokenizer.word_index, f, ensure_ascii=False)
| notebooks/bras_cubas/Data&Model_gen-BrasCubas_80.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <h1 align="center"><font size="5"><b>Exploratory Data Analysis of NMIS Education Facility Data</b></font></h1>
# <p>This notebook is an exploratory data analysis of Nigerian MDGs Information System (NMIS) education facility data. The dataset includes data from schools across the 36 states in Nigeria and the Federal Capital Territory.</p>
# <hr>
# <div class="alert alert-block alert-info" style="margin-top: 20px">
# <h2>Table of Contents</h2>
# <ul>
# <li><a href="#ref0">Data Cleaning</a></li>
# <li><a href="#ref1">Data Visualization</a></li>
# </ul>
# </div>
# <h2>Data Cleaning</h2><a name='ref0'></a>
# <p>First, we import libraries needed for data preparation, exploration and visualizations.</p>
#Import necessary libraries
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib as mpl
import matplotlib.pyplot as plt
# %matplotlib inline
#plt.style.use('bmh')
# Load data into data frame
df = pd.read_csv('raw_data.csv')
df.head()
df.shape
df.columns
# Drop irrelevant columns
df.drop(['education_type', 'community', 'ward', 'facility_type_display', 'formhub_photo_id', 'gps', 'survey_id', 'latitude',
'longitude', 'date_of_survey', 'sector'], axis=1, inplace=True)
# Define a new order of columns
order = ['facility_id', 'facility_name',
'management', 'unique_lga',
'improved_water_supply', 'improved_sanitation', 'phcn_electricity',
'chalkboard_each_classroom_yn', 'num_classrms_total', 'num_toilets_total',
'num_tchrs_with_nce', 'num_tchr_full_time', 'num_students_male',
'num_students_female', 'num_tchrs_male', 'num_tchrs_female', 'num_students_total']
# Re-order columns
df = df[order]
# Rename Columns
df.columns = ['Facility_ID', 'Facility_name', 'Management', 'State',
'Improved_water_supply', 'Improved_sanitation', 'Public_electricity',
'Chalkboard/classroom', 'Classrooms', 'Toilets',
'NCE_teachers', 'Fulltime_teachers', 'Male_students',
'Female_students', 'Male_teachers',
'Female_teachers', 'Total_students']
df.head()
# Check if there are missing values
df.isnull().any()
# Check the total number of missing values for each column
df.isnull().sum()
# Check dataset shape to decide whether to drop missing values on rows or column
df.shape
# Since the dataset has almost 99,000 rows, drop rows with missing values and retain all attributes (columns)
df.dropna(axis=0, inplace=True)
# Check to see if there are still missing values
df.isnull().sum().any()
# Check the new shape of data
df.shape
df.head()
states = ["Abia", "Adamawa", "Akwa_Ibom", "Anambra", "Bauchi", "Bayelsa",
"Benue", "Borno", "Cross_River", "Delta", "Ebonyi", "Edo",
"Ekiti", "Enugu", "fct", "Gombe", "Imo", "Jigawa", "Kaduna",
"Kano", "Katsina", "Kebbi", "Kogi", "Kwara", "Lagos", "Nasarawa",
"Niger", "Ogun", "Ondo", "Osun", "Oyo", "Plateau", "Rivers",
"Sokoto", "Taraba", "Yobe", "Zamfara"]
# Join L.G.A(s) in State column into 37 unique values to represent 36 states in Nigeria plus FCT
for i in states:
df.loc[df['State'].str.contains(i, case=False), 'State'] = i
df.loc[df['State'].str.contains('fct', case=False), 'State'] = 'FCT - Abuja'
df.loc[df['State'].str.contains('Cross_River', case=False), 'State'] = 'Cross River'
df.loc[df['State'].str.contains('Akwa_Ibom', case=False), 'State'] = 'Akwa Ibom'
# Check the unique values
df['State'].unique()
# Check the number of unique values
df['State'].nunique()
# Check for duplicates
df.set_index('Facility_ID', inplace=True)
df.duplicated().value_counts()
# Drop duplicates
df.drop_duplicates(keep='first', inplace=True)
# Check if there's still any duplicate
df.duplicated().any()
df.head()
# Rename booleans to Yes and No to avoid issues
df[['Improved_water_supply', 'Improved_sanitation', 'Public_electricity', 'Chalkboard/classroom']]=\
df[['Improved_water_supply', 'Improved_sanitation', 'Public_electricity', 'Chalkboard/classroom']].replace([True, False], ['Yes', 'No'])
# Check categorical variables for unique values
unique = ['Management', 'Improved_water_supply', 'Improved_sanitation', 'Public_electricity', 'Chalkboard/classroom']
for i in unique:
print(i, df[i].unique())
# Rename faith_based strings in Management column to 'private'
df.loc[df['Management'].str.contains('faith'), 'Management'] = 'private'
# drop rows containing 'none' in Management column
df.reset_index(inplace=True)
none = df['Management'] == 'none'
none_id = df[none]['Facility_ID'].tolist()
df.set_index('Facility_ID', inplace=True)
df.drop(index=none_id, inplace=True)
df.reset_index(inplace=True)
# Check for changes
unique = ['Management', 'Improved_water_supply', 'Improved_sanitation', 'Public_electricity', 'Chalkboard/classroom']
for i in unique:
print(i, df[i].unique())
df.head()
# Descriptive Statistics
df.describe()
# <h2>Data Visualization</h2><a name="ref1"></a>
plt.rcParams['figure.figsize'] = [18, 16]
df.plot(kind="density",subplots=True,layout=(3,3),sharex=False,sharey=False)
plt.show()
plt.rcParams['figure.figsize'] = [18, 16]
df.plot(kind="box",subplots=True,layout=(3,3),sharex=False,sharey=False)
plt.show()
# +
# Univariate Analysis of all categorical variables
fig = plt.figure(figsize=(13,15))
ax0 = fig.add_subplot(3,2,1)
ax1 = fig.add_subplot(3,2,2)
ax2 = fig.add_subplot(3,2,3)
ax3 = fig.add_subplot(3,2,4)
ax4 = fig.add_subplot(3,2,5)
# take labels and heights from value_counts() so they stay aligned
# (unique() order is appearance order and may not match value_counts())
sns.barplot(df['Management'].value_counts().index, df['Management'].value_counts().values, ax=ax0);
sns.barplot(df['Improved_water_supply'].value_counts().index, df['Improved_water_supply'].value_counts().values, ax=ax1);
sns.barplot(df['Improved_sanitation'].value_counts().index, df['Improved_sanitation'].value_counts().values, ax=ax2);
sns.barplot(df['Public_electricity'].value_counts().index, df['Public_electricity'].value_counts().values, ax=ax3);
sns.barplot(df['Chalkboard/classroom'].value_counts().index, df['Chalkboard/classroom'].value_counts().values, ax=ax4);
ax0.set_title('Private and Public Schools'), ax0.set_ylabel('Count')
ax1.set_title('Improved Water Supply'), ax1.set_ylabel('Count')
ax2.set_title('Improved Sanitation'), ax2.set_ylabel('Count')
ax3.set_title('Public Electricity'), ax3.set_ylabel('Count')
ax4.set_title('Chalkboard per Classroom')
plt.savefig('visuals/uni-categ-variables.png')
# -
# Bivariate analysis of Management and Improved water supply
table = pd.crosstab(df['Management'], df['Improved_water_supply'])
table.plot(kind='bar', stacked=True,figsize=(8,6));
plt.ylabel('Frequency Distribution')
plt.savefig('visuals/biv-mgt-water-supply.png')
# Bivariate analysis of Management and Improved sanitation
table = pd.crosstab(df['Management'], df['Improved_sanitation'])
table.plot(kind='bar', stacked=True,figsize=(8,6));
plt.ylabel('Frequency Distribution')
plt.savefig('visuals/biv-mgt-sanitation.png')
# Bivariate analysis of Management and Electricity
table = pd.crosstab(df['Management'], df['Public_electricity'])
table.plot(kind='bar', stacked=True,figsize=(8,6));
plt.ylabel('Frequency Distribution')
plt.savefig('visuals/biv-mgt-electricity.png')
# Bivariate analysis of Management and Chalkboard
table = pd.crosstab(df['Management'], df['Chalkboard/classroom'])
table.plot(kind='bar', stacked=True,figsize=(8,6));
plt.ylabel('Frequency Distribution')
plt.savefig('visuals/biv-mgt-chalkboard.png')
# Bivariate analysis of State and Improved water supply
table = pd.crosstab(df['State'], df['Improved_water_supply'])
table.plot(kind='bar', stacked=True,figsize=(13,6));
plt.ylabel('Frequency Distribution')
plt.savefig('visuals/biv-state-water-supply.png')
# Bivariate analysis of State and Improved sanitation
table = pd.crosstab(df['State'], df['Improved_sanitation'])
table.plot(kind='bar', stacked=True,figsize=(13,6));
plt.ylabel('Frequency Distribution')
plt.savefig('visuals/biv-state-sanitation.png')
# Bivariate analysis of State and Electricity
table = pd.crosstab(df['State'], df['Public_electricity'])
table.plot(kind='bar', stacked=True,figsize=(13,6));
plt.ylabel('Frequency Distribution')
plt.savefig('visuals/biv-state-electricity.png')
# Bivariate analysis of State and Chalkboard
table = pd.crosstab(df['State'], df['Chalkboard/classroom'])
table.plot(kind='bar', stacked=True,figsize=(13,6));
plt.ylabel('Frequency Distribution')
plt.savefig('visuals/biv-state-chalkboard.png')
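# The crosstab-and-plot cells above all repeat the same pattern, so it can be factored
# into a small helper (the function name is mine; plotting is left to the caller):

```python
import pandas as pd

def crosstabs_by(df, index_col, value_cols):
    # one frequency crosstab per categorical column, keyed by column name
    return {col: pd.crosstab(df[index_col], df[col]) for col in value_cols}

# e.g. each table can then be drawn with table.plot(kind='bar', stacked=True)
```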
# Categorical variables comparison by Management
grouped = df.groupby('Management')[['Classrooms','Toilets','NCE_teachers','Fulltime_teachers']].sum().transpose()
grouped.plot(kind='bar', stacked=True, figsize=(13,6));
plt.ylabel('Frequency Distribution')
plt.savefig('visuals/biv-categ-mgt.png')
# Bivariate analysis of Male and Female Teachers
teachers = df[['Male_teachers', 'Female_teachers']].sum()
plt.figure(figsize=(8,5))
teachers.plot(kind='barh', color=['g', 'b']);
plt.title('Distribution of Teachers by Gender')
plt.xlabel('Frequency Distribution')
plt.savefig('visuals/biv-teachers-gender.png')
df_num = df[['Classrooms','Toilets',
'NCE_teachers','Fulltime_teachers',
'Male_teachers','Female_teachers','Total_students']]
df_num.head()
# Heatmap of numerical variables Correlation
num_corr = df_num.corr()
#mask = num_corr < 0.5
plt.figure(figsize=(15,9))
sns.heatmap(num_corr, vmax=1.0, vmin=-1.0, linewidths=0.1,
annot=True, annot_kws={"size": 8}, square=True);
plt.savefig('visuals/heatmap.png')
# Possible variables for further relationship analysis
df_num = df[['Classrooms','Toilets',
'NCE_teachers','Fulltime_teachers',
'Male_teachers','Female_teachers','Total_students']]
corr = df_num.corr()['Total_students'][:-1]
corr_fairly = corr[abs(corr) > 0.4]
#Visualizing variables that correlate with Total students with values greater than 0.4
plt.style.use('bmh')
fig, ax = plt.subplots(figsize=(8,6))
from yellowbrick.target import FeatureCorrelation
X = df_num[corr_fairly.index]
Y = df_num['Total_students']
feature_names = X.columns
visual = FeatureCorrelation(method='pearson', label=feature_names, sort=True).fit(X,Y)
visual.poof();
plt.savefig('visuals/corr-total-students.png')
# Bivarate analysis of fulltime teachers, NCE teachers and total students
df_num.plot(kind='scatter',
x='Fulltime_teachers',
y='Total_students',
c='NCE_teachers',
cmap='viridis', figsize=(18,8))
plt.savefig('visuals/biv-full-nce-students.png')
| data-cleaning-exploration.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: .venv
# language: python
# name: .venv
# ---
# ## Filtering & adding new records
#
# Filtering enables you to zoom in or out within a chart, allowing the viewer to focus on certain selected elements, or get more context. You can also add new records to the data on the chart which makes it easy to work with real-time sources.
#
# **Note:** Currently `Data.filter()` and `Data().set_filter()` only accept a JavaScript expression as a string. Data fields can be accessed via the `record` object, see the examples below.
#
# We add two items from the Genres dimension - using the || operator - to the filter, so the chart elements that belong to the other two items will vanish from the chart.
# +
from ipyvizzu import Chart, Data, Config
chart = Chart()
data = Data()
data.add_dimension('Genres', [ 'Pop', 'Rock', 'Jazz', 'Metal'])
data.add_dimension('Types', [ 'Hard', 'Smooth', 'Experimental' ])
data.add_measure(
'Popularity',
[
[114, 96, 78, 52],
[56, 36, 174, 121],
[127, 83, 94, 58],
]
)
chart.animate(data)
chart.animate(Config({
"channels": {
"y": {
"set": ["Popularity", "Types"]
},
"x": {
"set": "Genres"
},
"label": {
"attach": "Popularity"
}
},
"color": {
"attach": "Types"
},
"title": "Filter by one dimension"
}))
filter1 = Data.filter("record['Genres'] == 'Pop' || record['Genres'] == 'Metal'")
chart.animate(filter1)
snapshot1 = chart.store()
# -
# Now we add a cross-filter that includes items from both the Genres and the Types dimensions. This way we override the filter from the previous state. If we didn't update the filter, Vizzu would keep using it in subsequent states.
# +
chart.animate(snapshot1)
chart.animate(Config({"title": "Filter by two dimensions"}))
filter2 = Data.filter("(record['Genres'] == 'Pop' || record['Genres'] == 'Metal') && record['Types'] == 'Smooth'")
chart.animate(filter2)
snapshot2 = chart.store()
# -
# Switching the filter off to get back to the original view.
# +
chart.animate(snapshot2)
chart.animate(Config({"title": "Filter off"}))
chart.animate(Data.filter(None))
snapshot3 = chart.store()
# -
# Here we add another record to the data set and update the chart accordingly.
# +
chart.animate(snapshot3)
chart.animate(Config({"title": "Adding new records"}))
data2 = Data()
records = [
['Soul', 'Hard', 91],
['Soul', 'Smooth', 57],
['Soul', 'Experimental', 115]
]
data2.add_records(records)
chart.animate(data2)
# -
# Note: combining this option with the store function makes it easy to update previously configured states with fresh data since this function saves the config and style parameters of the chart into a variable but not the data.
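# Since filters are plain JavaScript expressions passed as strings, they can also be
# assembled programmatically on the Python side; a small sketch (the helper name is mine):

```python
def genre_filter(genres):
    # builds e.g. "record['Genres'] == 'Pop' || record['Genres'] == 'Metal'"
    return " || ".join(f"record['Genres'] == '{g}'" for g in genres)

print(genre_filter(['Pop', 'Metal']))
```

# The result can be passed straight to `Data.filter(...)` as in the cells above.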
# Next chapter: [Without coordinates & noop channel](./without_coordinates.ipynb) ----- Previous chapter: [Orientation, split & polar](./orientation.ipynb) ----- Back to the [Table of contents](../index.ipynb#tutorial)
| docs/tutorial/filter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: project3
# language: python
# name: project3
# ---
# +
# importing dependencies here
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import os
# preprocessing
import re
import nltk
from nltk.corpus import stopwords
nltk.download("stopwords")
# lemmatizing
from nltk.stem import WordNetLemmatizer
# vectorization and pipeline
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import Normalizer
# class imbalance
from imblearn.pipeline import make_pipeline as imb_make_pipeline
from imblearn.under_sampling import RandomUnderSampler
from imblearn.over_sampling import RandomOverSampler
# ML models and Cross Validation
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.naive_bayes import GaussianNB
from sklearn import svm
from sklearn.model_selection import cross_validate
# model evaluation
from imblearn.metrics import classification_report_imbalanced
from sklearn.metrics import average_precision_score
from sklearn.metrics import roc_auc_score
# sparse to dense
from sklearn.base import TransformerMixin
class DenseTransformer(TransformerMixin):
def fit(self, X, y=None, **fit_params):
return self
def transform(self, X, y=None, **fit_params):
return X.todense()
# saving the model
from joblib import dump
# performance check
import time
# -
# reading the email data
df = pd.read_csv(os.path.join("..", "input","email-spam-classification-dataset-csv", "emails.csv"))
# checking first 5 records
df.head()
# ## EDA
# checking for class imbalance
df["spam"].value_counts().plot(kind="bar")
# * Class 1 = Spam
# * Class 0 = Ham
# #### The dataset is imbalanced. I will handle class imbalance in the model pipeline.
# ### Checking for Null Values
df.isnull().sum()
# ### Some quick Stats check
print(df.info())
print(df.describe())
# ### Data Cleaning
# +
# converting emails into lower case
df["clean_text"] = df["text"].str.lower()
# dropping URLs
df["clean_text"] = df["clean_text"].str.replace(
    re.compile(r"https?:\/\/(www)?.?([A-Za-z_0-9-]+)([\S])*"), ""
)
# dropping emails
df["clean_text"] = df["clean_text"].str.replace(re.compile(r"\S+@\S+"), "")
# dropping punctuations
df["clean_text"] = df["clean_text"].str.replace(re.compile(r"[^a-z\s]"), " ")
# dropping the word "subject"
df["clean_text"] = df["clean_text"].str.replace("subject", "")
# -
df.head()
# ### Lemmatizing
# +
# lemmatizing (excluding stop words in this step)
t = time.time()
lemmatizer = WordNetLemmatizer()
df["clean_text"] = df["clean_text"].apply(
lambda x: " ".join(
[
lemmatizer.lemmatize(word)
for word in x.split(" ")
if word not in stopwords.words("english")
]
)
)
print(f"Lemmatizing Time: {time.time() - t} seconds")
# -
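# Note that `stopwords.words("english")` above is re-evaluated for every token of every
# email; caching it as a set once makes the membership test O(1) and much faster. A
# stdlib sketch, with an inline set standing in for the NLTK stopword list:

```python
STOPWORDS = {"the", "a", "of", "in", "and"}  # stands in for set(stopwords.words("english"))

def drop_stopwords(text):
    # split on whitespace and keep only non-stopword tokens
    return " ".join(w for w in text.split() if w not in STOPWORDS)

print(drop_stopwords("the spam of a king"))  # -> spam king
```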
# ## Building ML Model
X = df["clean_text"].values
y = df["spam"].values
# +
def build_model(model, X, y):
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42, stratify=y
)
# model training
model.fit(X_train, y_train)
# y_hat
y_pred = model.predict(X_test)
# model evaluation
print(classification_report_imbalanced(y_test, y_pred))
cross_validation_report(model, X, y)
########################################################################################################
def build_proba_model(model, X, y):
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42, stratify=y
)
# model training
model.fit(X_train, y_train)
# y_hat
y_pred = model.predict(X_test)
# y_probability
y_proba = model.predict_proba(X_test)[:, 1]
# precision recall score
average_precision = average_precision_score(y_test, y_proba)
# model evaluation
print(f"ROC-AUC Score: {roc_auc_score(y_test, y_proba):.2f}")
print(f"Average Precision-Recall Score: {average_precision:.2f}")
print(classification_report_imbalanced(y_test, y_pred))
cross_validation_report(model, X, y)
#####################################################################################################
def cross_validation_report(model, X, y):
raw_cv_report = cross_validate(
model, X, y, cv=5, scoring=("accuracy", "precision", "recall")
)
cv_report = {f"Avg {key}": raw_cv_report[key].mean() for key in raw_cv_report}
print("Cross Validation Report:\n--------------------------------")
for key in cv_report:
print(f"{key}: {cv_report[key]}")
return
# -
# ## Naive Bayes using Count Vectorizer
# +
ct_nb = imb_make_pipeline(
CountVectorizer(min_df=25, max_df=0.85, stop_words="english"),
RandomOverSampler(),
MultinomialNB(class_prior=None, fit_prior=True),
)
build_proba_model(ct_nb, X, y)
# -
# ## Naive Bayes using TF-IDF Vectorizer
# +
tf_nb = imb_make_pipeline(
TfidfVectorizer(min_df=25, max_df=0.85, stop_words="english"),
RandomOverSampler(),
MultinomialNB(class_prior=None, fit_prior=True),
)
build_proba_model(tf_nb, X, y)
# -
# ## Gaussian Naive Bayes using TF-IDF Vectorizer
# +
tf_gaussian_nb = imb_make_pipeline(
TfidfVectorizer(min_df=25, max_df=0.85, stop_words="english"),
RandomOverSampler(),
DenseTransformer(),
GaussianNB(),
)
build_proba_model(tf_gaussian_nb, X, y)
# -
# ## SVM with Count Vectorizer
# ### RBF kernel with variations of C
# C = 1 (default)
svm_pipe = imb_make_pipeline(
CountVectorizer(stop_words="english"),
RandomOverSampler(),
svm.SVC(kernel="rbf", C=1),
)
build_model(svm_pipe, X, y)
# C = 10
svm_pipe = imb_make_pipeline(
CountVectorizer(stop_words="english"),
RandomOverSampler(),
svm.SVC(kernel="rbf", C=10),
)
build_model(svm_pipe, X, y)
# C = 100
svm_pipe = imb_make_pipeline(
CountVectorizer(stop_words="english"),
RandomOverSampler(),
svm.SVC(kernel="rbf", C=100),
)
build_model(svm_pipe, X, y)
# ### Linear kernel with variations of C
# C = 1 (default)
svm_pipe = imb_make_pipeline(
CountVectorizer(stop_words="english"),
RandomOverSampler(),
svm.SVC(kernel="linear", C=1),
)
build_model(svm_pipe, X, y)
# C = 10
svm_pipe = imb_make_pipeline(
CountVectorizer(stop_words="english"),
RandomOverSampler(),
svm.SVC(kernel="linear", C=10),
)
build_model(svm_pipe, X, y)
# C = 100
svm_pipe = imb_make_pipeline(
CountVectorizer(stop_words="english"),
RandomOverSampler(),
svm.SVC(kernel="linear", C=100),
)
build_model(svm_pipe, X, y)
# ### Poly kernel with variations of C
# C = 1, default degree = 3
svm_pipe = imb_make_pipeline(
CountVectorizer(stop_words="english"),
RandomOverSampler(),
svm.SVC(kernel="poly", C=1, degree=3),
)
build_model(svm_pipe, X, y)
# C = 10
svm_pipe = imb_make_pipeline(
CountVectorizer(stop_words="english"),
RandomOverSampler(),
svm.SVC(kernel="poly", C=10, degree=3),
)
build_model(svm_pipe, X, y)
# C = 100
svm_pipe = imb_make_pipeline(
CountVectorizer(stop_words="english"),
RandomOverSampler(),
svm.SVC(kernel="poly", C=100, degree=3),
)
build_model(svm_pipe, X, y)
# ### Based on the best scores for accuracy, precision and recall, selecting SVM with rbf kernel and regularization value of 10 as our final model.
# #### Before training the final model on entire dataset, testing its performance with the variations of gamma parameter.
# gamma set to the default value of "scale"
svm_pipe = imb_make_pipeline(
CountVectorizer(stop_words="english"),
RandomOverSampler(),
svm.SVC(kernel="rbf", C=10, gamma="scale"),
)
build_model(svm_pipe, X, y)
# gamma set to "auto"
svm_pipe = imb_make_pipeline(
CountVectorizer(stop_words="english"),
RandomOverSampler(),
svm.SVC(kernel="rbf", C=10, gamma="auto"),
)
build_model(svm_pipe, X, y)
# ### Going ahead with the SVM trained model using default gamma value of scale and training it on the entire dataset
# +
# training the final model
svm_pipe_final = imb_make_pipeline(
CountVectorizer(stop_words="english"),
RandomOverSampler(),
svm.SVC(kernel="rbf", C=10, gamma="scale"),
)
svm_pipe_final.fit(X, y)
# saving the model
dump(svm_pipe_final, os.path.join("..", "model", "svm_spam_classifier.joblib"))
# -
| spamemailclassify.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Risk Aggregation POC - ELTs
#
# This notebook can be used to read in multiple ELTs, roll them up into an aggregate ELT, and generate statistics and EP metrics from the ELTs.
#
# -----
# ### Install necessary libraries
# +
# # %pip install pandas
# # %pip install numpy
# # %pip install pyarrow
# # %pip install fastparquet
# # %pip install rpy2
# # %pip install scipy==1.6.0
# Install to connect to AWS S3
# # %pip install boto3
# # %pip install botocore==1.22.5
# # %pip install s3fs
# # %pip install fsspec
# +
### Import necessary libraries
# -
import os
import sys
import pandas as pd
import numpy as np
import aggregationtools.elt
import aggregationtools
from aggregationtools.elt import ELT
from aggregationtools import elt_calculator
import glob
# ### Import data from AWS S3
# AWS credentials may be provided explicitly with s3fs.S3FileSystem,
# but it is more secure to exclude the credentials from the code.
# Instead use the AWS CLI to `aws configure` credentials.
# +
# import glob
# import s3fs
# s3 = s3fs.S3FileSystem(anon=False)
# files = s3.glob('s3://fannie-mae-phase-3/03 Data Aggregation POC/01 Test/SCENARIO_DATA/EARTHQUAKE_CA_NY_FL/*.parquet')
# elts = pd.concat([pd.read_parquet('s3://' + fp) for fp in files])
# -
# ### Import local data
path = os.path.join('AggregationPOCScenarios', 'EARTHQUAKE_CA_NY_FL', '')
files = glob.glob(path + '*.parquet')
elts = pd.concat([pd.read_parquet(fp) for fp in files])
# ### Create ELT from imported data
# json_test = plts.to_dict(orient='records')
original_elt = aggregationtools.elt.ELT(data = elts)
original_elt.elt.head(5).style
# ### Group ELTs
grouped_elt = elt_calculator.group_elts(original_elt)
grouped_elt.elt.head(5).style
# ### Calculate statistics from grouped ELT
aal = grouped_elt.get_aal()
std = grouped_elt.get_standard_deviation()
grouped_elt.elt.style
covvar = grouped_elt.get_covvar()
print('AAL: ' + str(aal))
print('STD: ' + str(std))
print('CovVar: ' + str(covvar))
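# The coefficient of variation reported above is, as usual, the standard deviation
# divided by the mean loss; with illustrative numbers (not the POC outputs):

```python
aal, std = 100.0, 50.0  # illustrative values
covvar = std / aal
print(covvar)  # -> 0.5
```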
# ### Calculate AEP & OEP metrics from ELT
oep = elt_calculator.calculate_oep_curve(grouped_elt.elt)
standard_rp_oep = oep.get_standard_return_period_ep()
standard_rp_oep = pd.DataFrame.from_dict([standard_rp_oep]).T
standard_rp_oep = standard_rp_oep[standard_rp_oep.index.isin([1 / x for x in oep.RETURN_PERIODS])]
standard_rp_oep = standard_rp_oep.rename_axis("Return Period").sort_index(axis=0, ascending=True)
standard_rp_oep.columns = ["Loss"]
standard_rp_oep.index = np.reciprocal(standard_rp_oep.index).to_series().apply(lambda x: np.round(x,2))
# Return OEP results
standard_rp_oep.style.format("{:,.0f}")
aep = elt_calculator.calculate_aep_curve(grouped_elt.elt)
standard_rp_aep = aep.get_standard_return_period_ep()
standard_rp_aep = pd.DataFrame.from_dict([standard_rp_aep]).T
standard_rp_aep = standard_rp_aep[standard_rp_aep.index.isin([1 / x for x in aep.RETURN_PERIODS])]
standard_rp_aep = standard_rp_aep.rename_axis("Return Period").sort_index(axis=0, ascending=True)
standard_rp_aep.columns = ["Loss"]
standard_rp_aep.index = np.reciprocal(standard_rp_aep.index).to_series().apply(lambda x: np.round(x,2))
# Return AEP results
standard_rp_aep.style.format("{:,.0f}")
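# The index manipulation above turns exceedance probabilities into return periods by
# taking the reciprocal; for example:

```python
probs = [0.01, 0.004, 0.002]  # annual exceedance probabilities
return_periods = [round(1 / p, 2) for p in probs]
print(return_periods)  # -> [100.0, 250.0, 500.0]
```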
| Risk Aggregation POC - ELTs.ipynb |
% -*- coding: utf-8 -*-
% ---
% jupyter:
% jupytext:
% text_representation:
% extension: .m
% format_name: light
% format_version: '1.5'
% jupytext_version: 1.14.4
% kernelspec:
% display_name: Octave
% language: octave
% name: octave
% ---
% # <center>Numerical Integration</center>
% ## <center>Composite midpoint, trapezoid and Simpson formulas</center>
% <br>
% <b>Example.</b> Consider the integral
%
% $$I(f)=\int_{4}^{5.2}\ln(x)\,\text{d}x$$
%
% and approximate the value of $I(f)$ with
%
% + the composite trapezoid formula on $6$ subintervals: $I_{T,6}(f)$
% + the composite Simpson formula on $3$ subintervals: $I_{S,3}(f)$
%
% Let us also estimate the error of each approximation!
a = 4; b = 5.2;
m = 6;
h = (b-a)/m;
x = [a:h:b]
kvadratura(4,5.2,6,'log(x)','erinto')
kvadratura(4,5.2,6,'log(x)','trapez') % for the trapezoid rule, pass m itself
kvadratura(4,5.2,6,'log(x)','Simpson') % for Simpson, 2*m must be passed here
% + magic_args="Osszetett Trapez hibaja"
M_2 = 1/16
trapez_hiba = ((b-a)^3/(12*m^2))*M_2
%% Composite Simpson error
M_4 = 3/128
simpson_hiba = ((b-a)^5/(2880*(m/2)^4))*M_4
% -
% <br>
% <b>Example.</b> Into how many subintervals must the interval $[0,\ \pi/4]$ be divided if we want to approximate the value of the integral
%
% $$I(f)=\int_{0}^{\pi/4}\ln(\cos(x))\,\text{d}x$$
%
% with the composite trapezoid formula so that the error (the user-prescribed tolerance TOL) is smaller than $0.5\cdot 10^{-2}$?
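% Rearranging the composite trapezoid error bound for the number of subintervals $m$ (consistent with the bound used in the code below):
%
% $$\frac{(b-a)^3}{12m^2}M_2 < \text{TOL} \quad\Longrightarrow\quad m > \sqrt{\frac{(b-a)^3\,M_2}{12\,\text{TOL}}}$$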
% +
a = 0; b = pi/4;
TOL = 0.5 * 1e-2;
M_2 = 2;
% Error bound: ((b-a)^3/(12*m^2))*M_2 < TOL
m = sqrt((((b-a)^3/(12))*M_2)/TOL)
m = ceil(m)
% -
%
% ## <center>Gauss quadratures and adaptive quadratures</center>
% <br>
% <b>Example.</b> Consider number 21 from Kahaner's famous collection of quadrature test problems, i.e.
%
% $$I(f)=\int_{0}^{1}\frac{1}{\cosh^2(10x-2)}+\frac{1}{\cosh^4(100x-40)}+\frac{1}{\cosh^6(1000x-600)}\,\text{d}x$$
%
%
% The exact value of the problem to $30$ decimal digits is
%
% $$I(f)=0.210802735500549277375643255709$$
%
% Let us see what happens if we divide into $m$ subintervals with the composite Simpson formula, or use at most $m$ intervals with an adaptive Gauss quadrature.
x = linspace(0,1,3000);
f = 1./((cosh(10*x-2)).^2) + 1./((cosh(100*x-40)).^4) + 1./((cosh(1000*x-600)).^6);
plot(x,f)
% +
m = 100000;
% Simpson
kvadratura(0,1,2*m,'1./((cosh(10*x-2)).^2) + 1./((cosh(100*x-40)).^4) + 1./((cosh(1000*x-600)).^6)','Simpson')
% Adaptive Gauss-Kronrod quadrature
%adaptiv_gauss_konrod = quadgk (@(x) 1./((cosh(10*x-2)).^2) + 1./((cosh(100*x-40)).^4) + 1./((cosh(1000*x-600)).^6), 0, 1, "MaxIntervalCount", m)
% -
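For comparison outside Octave, here is a toy recursive adaptive-Simpson sketch in Python (a simpler scheme than the Gauss-Kronrod rule used by `quadgk`). The `sech` helper avoids the overflow that `cosh(t)**6` would hit for large `|t|`:

```python
import math

def sech(t):
    # Numerically safe 1/cosh: math.cosh overflows for |t| > ~710
    return 0.0 if abs(t) > 700 else 1.0 / math.cosh(t)

def f(x):
    return sech(10*x - 2)**2 + sech(100*x - 40)**4 + sech(1000*x - 600)**6

def adaptive_simpson(f, a, b, tol, depth=40):
    # Classic recursive Simpson with a Richardson-style error estimate
    m = (a + b) / 2
    fa, fm, fb = f(a), f(m), f(b)
    whole = (b - a) / 6 * (fa + 4 * fm + fb)
    left = (m - a) / 6 * (fa + 4 * f((a + m) / 2) + fm)
    right = (b - m) / 6 * (fm + 4 * f((m + b) / 2) + fb)
    if depth <= 0 or abs(left + right - whole) < 15 * tol:
        return left + right + (left + right - whole) / 15
    return (adaptive_simpson(f, a, m, tol / 2, depth - 1)
            + adaptive_simpson(f, m, b, tol / 2, depth - 1))

exact = 0.210802735500549277375643255709
approx = adaptive_simpson(f, 0, 1, 1e-10)
print(approx, abs(approx - exact))
```

With a loose tolerance, exactly as this test problem is designed to show, an adaptive scheme can under-resolve the narrowest spike near $x = 0.6$; the tight tolerance used here forces subdivision deep enough to capture it.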
| Eloadas/Blokk#5.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import cv2
import numpy as np
# Load HAAR face classifier
face_classifier = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
# Load functions
def face_extractor(img):
# Function detects faces and returns the cropped face
# If no face detected, it returns the input image
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
faces = face_classifier.detectMultiScale(gray, 1.3, 5)
if len(faces) == 0:  # 'faces is ()' is never true for arrays; check emptiness instead
return None
# Crop all faces found
for (x,y,w,h) in faces:
cropped_face = img[y:y+h, x:x+w]
return cropped_face
# Initialize Webcam
cap = cv2.VideoCapture(0)
count = 0
# Collect 100 samples of your face from webcam input
while True:
ret, frame = cap.read()
if face_extractor(frame) is not None:
count += 1
face = cv2.resize(face_extractor(frame), (200, 200))
face = cv2.cvtColor(face, cv2.COLOR_BGR2GRAY)
# Save file in the specified directory with a unique name; change the directory to collect the second person
file_name_path = './facetask/harry/' + str(count) + '.jpg'
cv2.imwrite(file_name_path, face)
# Put count on images and display live count
cv2.putText(face, str(count), (50, 50), cv2.FONT_HERSHEY_COMPLEX, 1, (0,255,0), 2)
cv2.imshow('Face Cropper', face)
else:
print("Face not found")
pass
if cv2.waitKey(1) == 13 or count == 100: #13 is the Enter Key
break
cap.release()
cv2.destroyAllWindows()
print("Collecting Samples Complete")
# -
# # 𝐌𝐨𝐝𝐞𝐥 𝐓𝐫𝐚𝐢𝐧𝐢𝐧𝐠 𝐟𝐨𝐫 𝐭𝐰𝐨 𝐏𝐞𝐨𝐩𝐥𝐞.....
# +
import cv2
import numpy as np
from os import listdir
from os.path import isfile, join
# Get the training data we previously made
data_path = './facetask/first/'
data_path1='./facetask/harry/'
onlyfiles = [f for f in listdir(data_path) if isfile(join(data_path, f))]
onlyfiles1 = [f1 for f1 in listdir(data_path1) if isfile(join(data_path1, f1))]
# Create arrays for training data and labels
Training_Data, Labels = [], []
Training_Data1, Labels1 = [], []
# Open training images in our datapath
# Create a numpy array for training data
for i, files in enumerate(onlyfiles):
image_path = data_path + onlyfiles[i]
images = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
Training_Data.append(np.asarray(images, dtype=np.uint8))
Labels.append(i)
for j, files in enumerate(onlyfiles1):
image_path1 = data_path1 + onlyfiles1[j]
images1 = cv2.imread(image_path1, cv2.IMREAD_GRAYSCALE)
Training_Data1.append(np.asarray(images1, dtype=np.uint8))
Labels1.append(j)
# Create a numpy array for both training data and labels
Labels = np.asarray(Labels, dtype=np.int32)
#creating a numpy array for second person data and labels
Labels1 = np.asarray(Labels1, dtype=np.int32)
priyanshu_model= cv2.face_LBPHFaceRecognizer.create()
# Let's train Priyanshu's model
priyanshu_model.train(np.asarray(Training_Data), np.asarray(Labels))
print("******* Model for Priyanshu trained successfully *******")
harry_model= cv2.face_LBPHFaceRecognizer.create()
# Let's train our second model
harry_model.train(np.asarray(Training_Data1), np.asarray(Labels1))
print("******* Model trained for second person successfully *******")
# -
# # 𝐅𝐢𝐧𝐚𝐥 𝐌𝐨𝐯𝐞 𝐅𝐚𝐜𝐞 𝐑𝐞𝐜𝐨𝐠𝐧𝐢𝐭𝐢𝐨𝐧.....
# +
import cv2
import numpy as np
import pywhatkit as py
import os
import pyttsx3
import time
face_classifier = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
def face_detector(img, size=0.5):
# Convert image to grayscale
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
faces = face_classifier.detectMultiScale(gray, 1.3, 5)
if len(faces) == 0:  # 'faces is ()' is never true for arrays; check emptiness instead
return img, []
for (x,y,w,h) in faces:
cv2.rectangle(img,(x,y),(x+w,y+h),(0,255,255),2)
roi = img[y:y+h, x:x+w]
roi = cv2.resize(roi, (200, 200))
return img, roi
# Open Webcam
cap = cv2.VideoCapture(0)
ctr = 0  # nobody recognized yet; avoids a NameError if the loop exits early
while True:
ret, frame = cap.read()
image, face = face_detector(frame)
try:
face = cv2.cvtColor(face, cv2.COLOR_BGR2GRAY)
# "results" is a tuple containing the label and the confidence value
results = priyanshu_model.predict(face)
results1=harry_model.predict(face)
#SETTING UP CONFIDENCE SCORE FOR MY FACE
if results[1]< 500:
confidence = int( 100 * (1 - (results[1])/400) )
display_string = str(confidence) + '% Confident it is User'
#SETTING UP CONFIDENCE SCORE FOR HARRY FACE
if results1[1]<500:
confidence_harry= int( 100 * (1 - (results1[1])/400) )
display_string1 = str(confidence_harry) + '% Confident it is User'
cv2.putText(image, display_string, (100, 120), cv2.FONT_HERSHEY_COMPLEX, 1, (255,120,150), 2)
if confidence> 85:
cv2.putText(image, "Hey Priyanshu !!", (250, 450), cv2.FONT_HERSHEY_COMPLEX, 1, (0,255,0), 2)
cv2.imshow('Face Recognition', image)
ctr=1
#ctr 1 for my face it will send whatsapp and email
break
if confidence_harry>85:
cv2.putText(image, "Hey purva !!", (250, 450), cv2.FONT_HERSHEY_COMPLEX, 1, (0,255,0), 2)
cv2.imshow('Face Recognition', image)
ctr=2
#ctr 2 for harry face it will launch an instance and attach volume
break
else:
cv2.putText(image, "I don't know you", (250, 450), cv2.FONT_HERSHEY_COMPLEX, 1, (0,0,255), 2)
cv2.imshow('Face Recognition', image )
engine=pyttsx3.init()
engine.setProperty("rate",160)
engine.say("Sorry , I know only Priyanshu and Harry Styles")
engine.runAndWait()
#pyttsx3.speak(" I know ,Prriyanshu ,and <NAME> ,only ,sorry")
except:
cv2.putText(image, "No Face Found", (220, 120) , cv2.FONT_HERSHEY_COMPLEX, 1, (0,0,255), 2)
cv2.putText(image, "looking for face", (250, 450), cv2.FONT_HERSHEY_COMPLEX, 1, (0,0,255), 2)
cv2.imshow('Face Recognition', image )
pass
if cv2.waitKey(1) == 13: # 13 is the Enter key
break
cap.release()
cv2.destroyAllWindows()
#email and whatsapp
if ctr==1:
#subprocess.run('engine=pyttsx3.init() engine.setProperty("rate",160) engine.say("hello Priyanshu , sending WhatsApp and Email now") engine.runAndWait()',shell=True)
#pyttsx3.speak("Hello ,Priyanshu , Sending WhatsApp and Email now")
engine=pyttsx3.init()
engine.setProperty("rate",160)
engine.say("hello Priyanshu , sending WhatsApp and Email now")
engine.runAndWait()
py.sendwhatmsg_instantly("+91 7302469023","Face detection")
py.sendMail("<EMAIL>","example2345@##$%","<EMAIL>","FACE DETECTED")
print("mail sent")
#FOR LAUNCHING INSTANCE AND ATTACHING STORAGE
elif ctr==2:
engine=pyttsx3.init()
engine.setProperty("rate",160)
engine.say("hello Harry Styles ,Launching ec2 instance and adding volume to it now")
engine.runAndWait()
#Launching instance commands
os.system("aws ec2 run-instances --image-id ami-0ad704c126371a549 --instance-type t2.micro --count 1 --subnet-id subnet-7b98c237 --security-group-ids sg-0dae0a1d6032375b2 --key-name first_os_key >ec2.txt")
print("Your Instance Had been Launched")
#storage
os.system(" aws ec2 create-volume --availability-zone ap-south-1b --size 5 --volume-type gp2 >ebs.txt")
print("volume created")
time.sleep(30)
engine=pyttsx3.init()
engine.setProperty("rate",160)
engine.say("To attach volume please run the next Cell")
engine.runAndWait()
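The distance-to-confidence mapping used in the loop above can be isolated for clarity. Note that the 400 divisor is this notebook's own heuristic scaling of the LBPH distance, not an OpenCV constant:

```python
def lbph_confidence(distance, scale=400.0):
    # Lower LBPH distance means a better match; map it to a rough percentage.
    # 'scale' is the notebook's heuristic, not an OpenCV API value.
    return int(100 * (1 - distance / scale))

print(lbph_confidence(40))   # 90 -> would pass the >85 threshold above
print(lbph_confidence(200))  # 50 -> would be rejected
```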
# +
#Attaching 5GB volume to created instance
os.system("aws ec2 attach-volume --volume-id vol-07f3bf3d518f2657d --instance-id i-04d3e9e33cf79af78 --device /dev/xvdi")
engine=pyttsx3.init()
engine.setProperty("rate",160)
engine.say("Thank you ,Your volume is attached now")
engine.runAndWait()
# -
# +
#SEPARATE TRAINING FOR MY FACE
'''import cv2
import numpy as np
from os import listdir
from os.path import isfile, join
# Get the training data we previously made
data_path = './facetask/first/'
onlyfiles = [f for f in listdir(data_path) if isfile(join(data_path, f))]
# Create arrays for training data and labels
Training_Data, Labels = [], []
# Open training images in our datapath
# Create a numpy array for training data
for i, files in enumerate(onlyfiles):
image_path = data_path + onlyfiles[i]
images = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
Training_Data.append(np.asarray(images, dtype=np.uint8))
Labels.append(i)
# Create a numpy array for both training data and labels
Labels = np.asarray(Labels, dtype=np.int32)
priyanshu_model= cv2.face_LBPHFaceRecognizer.create()
# Let's train our model
priyanshu_model.train(np.asarray(Training_Data), np.asarray(Labels))
print("Priyanshu model trained successfully")'''
# +
#SEPARATE TRAINING FOR SECOND PERSON(HARRY) FACE
'''import cv2
import numpy as np
from os import listdir
from os.path import isfile, join
# Get the training data we previously made
data_path = './facetask/harry/'
onlyfiles = [f for f in listdir(data_path) if isfile(join(data_path, f))]
# Create arrays for training data and labels
Training_Data, Labels = [], []
# Open training images in our datapath
# Create a numpy array for training data
for i, files in enumerate(onlyfiles):
image_path = data_path + onlyfiles[i]
images = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
Training_Data.append(np.asarray(images, dtype=np.uint8))
Labels.append(i)
# Create a numpy array for both training data and labels
Labels = np.asarray(Labels, dtype=np.int32)
# Initialize facial recognizer
# model = cv2.face.createLBPHFaceRecognizer()
# NOTE: For OpenCV 3.0 use cv2.face.createLBPHFaceRecognizer()
# pip install opencv-contrib-python
# model = cv2.createLBPHFaceRecognizer()
harry_model= cv2.face_LBPHFaceRecognizer.create()
# Let's train our model
harry_model.train(np.asarray(Training_Data), np.asarray(Labels))
print("Harry model trained successfully")'''
| Face task 6/Final_TASK_6 FACE.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: scanpy_env
# language: python
# name: scanpy_env
# ---
# ### Supervised Machine Learning Models for Cross Species comparison of supporting cells
# +
import numpy as np
import pandas as pd
import scanpy as sc
import matplotlib.pyplot as plt
import os
import sys
import anndata
def MovePlots(plotpattern, subplotdir):
os.system('mkdir -p '+str(sc.settings.figdir)+'/'+subplotdir)
os.system('mv '+str(sc.settings.figdir)+'/*'+plotpattern+'** '+str(sc.settings.figdir)+'/'+subplotdir)
sc.settings.verbosity = 3 # verbosity: errors (0), warnings (1), info (2), hints (3)
sc.settings.figdir = '/home/jovyan/Gonads/Flat_SupportVectorMachine_Fetal/SVM/training/'
sc.logging.print_versions()
sc.settings.set_figure_params(dpi=80) # low dpi (dots per inch) yields small inline figures
sys.executable
# -
# **Load our fetal samples**
human = sc.read('/nfs/team292/lg18/with_valentina/FCA-M5-annotatedCluster4Seurat.h5ad')
human = human[[i in ['female'] for i in human.obs['sex']]]
human.obs['stage'].value_counts()
# **Take fine grained annotations from Luz on supporting cells**
# +
supporting = pd.read_csv('/nfs/team292/lg18/with_valentina/supporting_nocycling_annotation.csv', index_col = 0)
print(supporting['annotated_clusters'].value_counts())
supporting = supporting[supporting['annotated_clusters'].isin(['coelEpi', 'sLGR5', 'sPAX8b', 'preGC_III_Notch', 'preGC_II',
'preGC_II_hypoxia', 'preGC_I_OSR1', 'sKITLG',
'ovarianSurf'])]
mapping = supporting['annotated_clusters'].to_dict()
human.obs['supporting_clusters'] = human.obs_names.map(mapping)
# Remove doublets as well as NaNs corresponding to cells from enriched samples
human.obs['supporting_clusters'] = human.obs['supporting_clusters'].astype(str)
human = human[[i not in ['nan'] for i in human.obs['supporting_clusters']]]
human.obs['supporting_clusters'].value_counts(dropna = False)
# -
### Join sub-states of preGC_II and preGC_III
joined = {'coelEpi' : 'coelEpi', 'sLGR5' : 'sLGR5', 'sPAX8b' : 'sPAX8b', 'preGC_III_Notch' : 'preGC_III', 'preGC_II' : 'preGC_II',
'preGC_II_hypoxia' : 'preGC_II', 'preGC_I_OSR1' : 'preGC_I_OSR1', 'sKITLG' : 'sKITLG',
'ovarianSurf' : 'ovarianSurf'}
human.obs['supporting_clusters'] = human.obs['supporting_clusters'].map(joined)
human.obs['supporting_clusters'].value_counts(dropna = False)
# **Intersect genes present in all fetal gonads scRNAseq datasets of human and mouse**
# Mouse ovary
mouse = sc.read("/nfs/team292/vl6/Mouse_Niu2020/supporting_mesothelial.h5ad")
mouse = anndata.AnnData(X= mouse.raw.X, var=mouse.raw.var, obs=mouse.obs)
mouse
# Extract the genes from all datasets
human_genes = human.var_names.to_list()
mouse_genes = mouse.var_names.to_list()
from functools import reduce
inters = reduce(np.intersect1d, (human_genes, mouse_genes))
len(inters)
cell_cycle_genes = [x.strip() for x in open(file='/nfs/users/nfs_v/vl6/regev_lab_cell_cycle_genes.txt')]
cell_cycle_genes = [x for x in cell_cycle_genes if x in list(inters)]
inters = [x for x in list(inters) if x not in cell_cycle_genes]
len(inters)
# **Subset fetal data to keep only these genes**
human = human[:, list(inters)]
human
# **Downsample more frequent classes**
myindex = human.obs['supporting_clusters'].value_counts().index
myvalues = human.obs['supporting_clusters'].value_counts().values
clusters = pd.Series(myvalues, index = myindex)
clusters.values
# +
import random
from itertools import chain
# Find clusters with > n cells
n = 1500
cl2downsample = clusters.index[ clusters.values > n ]
# save all barcode ids from small clusters
holder = []
holder.append( human.obs_names[[ i not in cl2downsample for i in human.obs['supporting_clusters'] ]] )
# randomly sample n cells in the cl2downsample
for cl in cl2downsample:
print(cl)
cl_sample = human[[ i == cl for i in human.obs['supporting_clusters'] ]].obs_names
# n = int(round(len(cl_sample)/2, 0))
cl_downsample = random.sample(set(cl_sample), n )
holder.append(cl_downsample)
# samples to include
samples = list(chain(*holder))
# Filter adata_count
human = human[[ i in samples for i in human.obs_names ]]
human.X.shape
# -
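Stripped of the AnnData bookkeeping, the per-cluster downsampling above follows this pattern (a toy sketch with hypothetical labels and cap):

```python
import random

def downsample(labels, n, seed=0):
    # Keep at most n randomly chosen indices per class label
    rng = random.Random(seed)
    by_class = {}
    for idx, lab in enumerate(labels):
        by_class.setdefault(lab, []).append(idx)
    keep = []
    for idxs in by_class.values():
        keep.extend(idxs if len(idxs) <= n else rng.sample(idxs, n))
    return sorted(keep)

labels = ['big'] * 2000 + ['small'] * 300
kept = downsample(labels, 1500)
print(len(kept))  # 1800: 1500 of 'big' plus all 300 of 'small'
```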
# **Preprocess the data**
# Per cell normalization
sc.pp.normalize_per_cell(human, counts_per_cell_after=1e4)
# Log transformation
sc.pp.log1p(human)
# Filter HVGs --> Select top 300 highly variable genes that will serve as features to the machine learning models
sc.pp.highly_variable_genes(human, n_top_genes = 300)
highly_variable_genes = human.var["highly_variable"]
human = human[:, highly_variable_genes]
# Scale
sc.pp.scale(human, max_value=10)
print('Total number of cells: {:d}'.format(human.n_obs))
print('Total number of genes: {:d}'.format(human.n_vars))
# **Import libraries**
# +
# Required libraries regardless of the model you choose
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix, classification_report, accuracy_score
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline
# Library for Logistic Regression
from sklearn.linear_model import LogisticRegression
# Library for Random Forest
from sklearn.ensemble import RandomForestClassifier
# Library for Support Vector Machine
from sklearn.svm import SVC
# -
print("Loading data")
X = np.array(human.X) # Fetching the count matrix to use as input to the model
print(type(X), X.shape)
# Choose output variable, meaning the labels you want to predict
y = list(human.obs.supporting_clusters.astype('str'))
# Split the training dataset into train and test sets
X_train, X_test, y_train, y_test = train_test_split(
X,
y,
test_size=0.25, # This can be changed, though it makes sense to use 25-30% of the data for test
random_state=1234,
)
# **Option 1: Logistic Regression classifier**
# +
# Instantiate a Logistic Regression Classifier and specify L2 regularization
lr = LogisticRegression(penalty='l2', multi_class="multinomial", max_iter = 2000)
# Instantiate a PCA object
pca = PCA()
# Create pipeline object
pipe = Pipeline(steps=[('pca', pca), ('LogReg', lr)])
print('Hyperparameter tuning with exhaustive grid search')
# Choose a grid of hyperparameters values (these are arbitrary but reasonable as I took reference values from the documentation)
params_lr = {'LogReg__C' : [0.001, 0.01, 0.1, 1, 10, 100], 'LogReg__solver' : ["lbfgs", 'newton-cg', 'sag'],
'pca__n_components' : [0.7, 0.8, 0.9]}
# Use grid search cross validation to span the hyperparameter space and choose the best
grid_lr = RandomizedSearchCV(estimator = pipe, param_distributions = params_lr, cv = 5, n_jobs = -1)
# Fit the model to the training set of the training data
grid_lr.fit(X_train, y_train)
# Report the best parameters
print("Best CV params", grid_lr.best_params_)
# Report the best hyperparameters and the corresponding score
print("Softmax training accuracy:", grid_lr.score(X_train, y_train))
print("Softmax test accuracy:", grid_lr.score(X_test, y_test))
# -
# **Option 2: Support Vector Machine classifier**
# +
# Instantiate an RBF Support Vector Machine
svm = SVC(kernel = "rbf", probability = True)
# Instantiate a PCA
pca = PCA()
# Create pipeline object
pipe = Pipeline(steps=[('pca', pca), ('SVC', svm)])
print('Hyperparameter tuning with exhaustive grid search')
# Choose a grid of hyperparameters values (these are arbitrary but reasonable as I took reference values from the documentation)
params_svm = {'SVC__C':[0.1, 1, 10, 100], 'SVC__gamma':[0.001, 0.01, 0.1], 'pca__n_components': [0.7, 0.8, 0.9]}
# Use grid search cross validation to span the hyperparameter space and choose the best
grid_svm = RandomizedSearchCV(pipe, param_distributions = params_svm, cv=5, verbose =1, n_jobs = -1)
# Fit the model to the training set of the training data
grid_svm.fit(X_train, y_train)
# Report the best hyperparameters and the corresponding score
print("Best CV params", grid_svm.best_params_)
print("Best CV accuracy", grid_svm.best_score_)
# -
# **Option 3: Random Forest classifier**
# +
# Instantiate a Random Forest Classifier
SEED = 123
rf = RandomForestClassifier(random_state = SEED) # set a seed to ensure reproducibility of results
print(rf.get_params()) # Look at the hyperparameters that can be tuned
# Instantiate a PCA object
pca = PCA()
# Create pipeline object
pipe = Pipeline(steps=[('pca', pca), ('RF', rf)])
print('Hyperparameter tuning with exhaustive grid search')
# Choose a grid of hyperparameters values (these are arbitrary but reasonable as I took reference values from the documentation)
params_rf = {"RF__n_estimators": [50, 100, 200, 300], 'RF__min_samples_leaf': [1, 5], 'RF__min_samples_split': [2, 5, 10],
'pca__n_components' : [0.7, 0.8,0.9]}
# Use grid search cross validation to span the hyperparameter space and choose the best
grid_rf = RandomizedSearchCV(estimator = pipe, param_distributions = params_rf, cv = 5, n_jobs = -1)
# Fit the model to the training set of the training data
grid_rf.fit(X_train, y_train)
# Report the best hyperparameters and the corresponding score
print("Best CV params", grid_rf.best_params_)
print("Best CV accuracy", grid_rf.best_score_)
# -
# All 3 models return an object (which I called *grid_lr*, *grid_rf*, *grid_svm*, respectively) that has an attribute called **.best_estimator_** which holds the model with the best hyperparameters that was found using grid search cross validation. This is the model that you will use to make predictions.
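Reduced to a toy example (synthetic data and a hypothetical one-parameter grid), the search-then-`best_estimator_` pattern looks like this:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pipe = Pipeline([('pca', PCA()), ('SVC', SVC(kernel='rbf', probability=True))])
search = RandomizedSearchCV(pipe, {'SVC__C': [0.1, 1, 10]}, n_iter=3, cv=3, random_state=0)
search.fit(X_tr, y_tr)

best = search.best_estimator_   # the pipeline refit with the best hyperparameters
preds = best.predict(X_te)      # use it exactly like any fitted model
print(best.score(X_te, y_te))
```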
# **Evaluating the model's performance on the test set of the training data**
predicted_labels = grid_svm.best_estimator_.predict(X_test) # Here as an example I am using the support vector machine model
report_rf = classification_report(y_test, predicted_labels)
print(report_rf)
print("Accuracy:", accuracy_score(y_test, predicted_labels))
# +
cnf_matrix = confusion_matrix(y_test, predicted_labels)
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
class_names = sorted(set(y_test)) # the actual cluster labels, not just [0, 1]
fig, ax = plt.subplots()
tick_marks = np.arange(len(class_names))
plt.xticks(tick_marks, class_names)
plt.yticks(tick_marks, class_names)
# create heatmap
sns.heatmap(pd.DataFrame(cnf_matrix), annot=True, cmap="YlGnBu" ,fmt='g')
ax.xaxis.set_label_position("top")
plt.tight_layout()
plt.title('Confusion matrix', y=1.1)
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
# -
print("Accuracy:", accuracy_score(y_test, predicted_labels))
grid_svm.best_estimator_.feature_names = list(human.var_names)
# **Predict cell types in the mouse data**
def process_and_subset_data(adata, genes):
# save the log transformed counts as raw
adata.raw = adata.copy()
# Per cell normalization
sc.pp.normalize_per_cell(adata, counts_per_cell_after=1e4)
# Log transformation
sc.pp.log1p(adata)
# Subset data
adata = adata[:, list(genes)]
# Scale
sc.pp.scale(adata, max_value=10)
return adata
def process_data(adata):
# Per cell normalization
sc.pp.normalize_per_cell(adata, counts_per_cell_after=1e4)
# Log transformation
sc.pp.log1p(adata)
# Scale
sc.pp.scale(adata, max_value=10)
import scipy
def make_single_predictions(adata, classifier):
#if scipy.sparse.issparse(adata.X):
#adata.X = adata.X.toarray()
adata_X = np.array(adata.X)
print(type(adata_X), adata_X.shape)
adata_preds = classifier.predict(adata_X)
adata.obs['human_classifier_supporting'] = adata_preds
print(adata.obs.human_classifier_supporting.value_counts(dropna = False))
def make_correspondence(classifier):
corr = {}
for i in range(0,len(classifier.classes_)):
corr[i] = classifier.classes_[i]
return corr
def make_probability_predictions(adata, classifier):
adata_X = np.array(adata.X)
print(type(adata_X), adata_X.shape)
proba_preds = classifier.predict_proba(adata_X)
df_probs = pd.DataFrame(np.column_stack(list(zip(*proba_preds))))
corr = make_correspondence(classifier)
for index in df_probs.columns.values:
celltype = corr[index]
adata.obs['prob_'+celltype] = df_probs[index].to_list()
# Mouse ovary (Niu et al., 2020)
# +
mouse = process_and_subset_data(mouse, grid_svm.best_estimator_.feature_names)
make_single_predictions(mouse, grid_svm.best_estimator_)
# -
make_probability_predictions(mouse, grid_svm.best_estimator_)
mouse
mouse.write('/nfs/team292/vl6/Mouse_Niu2020/supporting_cells_with_human_preds.h5ad')
mouse = sc.read('/nfs/team292/vl6/Mouse_Niu2020/supporting_cells_with_human_preds.h5ad')
mouse
mouse_predictions = mouse.obs[['prob_coelEpi', 'prob_ovarianSurf', 'prob_preGC_II', 'prob_preGC_III', 'prob_preGC_I_OSR1', 'prob_sKITLG', 'prob_sLGR5', 'prob_sPAX8b']]
mouse_predictions.columns = ['prob_coelEpi', 'prob_ovarianSurf', 'prob_preGC-II', 'prob_preGC-II-late', 'prob_preGC-I',
'prob_sKITLG', 'prob_sLGR5', 'prob_sPAX8b']
mouse_predictions.head()
mouse_predictions.to_csv('/nfs/team292/vl6/Mouse_Niu2020/mouse_Niu2020_supporting_predictions.csv')
| mouseNiu2020_SVM_supporting.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# #!pip install tflearn
# -
#Importing required libraries
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Dropout, Flatten, Conv2D, MaxPooling2D
from tensorflow.keras.layers import BatchNormalization
import numpy as np
# +
np.random.seed(1000)
#Get data
import tflearn.datasets.oxflower17 as oxflower17
# -
x,y= oxflower17.load_data(one_hot=True)
x.shape, y.shape
import matplotlib.pyplot as plt
plt.imshow(x[6])
# +
#Sequential model
model=Sequential()
#1st convolutional layer
model.add(Conv2D(filters=96, input_shape=(224, 224, 3), kernel_size=(11, 11), strides=(4, 4), padding='valid'))
model.add(Activation('relu'))
#Pooling
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='valid'))
#Batch normalization before passing it to next layer
model.add(BatchNormalization())
#2nd convolutional layer
model.add(Conv2D(filters=256, kernel_size=(11, 11), strides=(1, 1), padding='valid'))
model.add(Activation('relu'))
#Pooling
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='valid'))
#Batch normalization
model.add(BatchNormalization())
#3rd convolutional layer
model.add(Conv2D(filters=384, kernel_size=(3, 3), strides=(1, 1), padding='valid'))
model.add(Activation('relu'))
#Batch normalization before passing it to next layer
model.add(BatchNormalization())
#4th convolutional layer
model.add(Conv2D(filters=384, kernel_size=(3, 3), strides=(1, 1), padding='valid'))
model.add(Activation('relu'))
#Batch normalization before passing it to next layer
model.add(BatchNormalization())
#5th convolutional layer
model.add(Conv2D(filters=256, kernel_size=(3, 3), strides=(1, 1), padding='valid'))
model.add(Activation('relu'))
#Pooling
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='valid'))
#Batch normalization
model.add(BatchNormalization())
#Passing it to a dense layer
model.add(Flatten())
#1st dense layer
model.add(Dense(4096, input_shape=(224*224*3, )))
model.add(Activation('relu'))
#Add dropout to prevent overfitting
model.add(Dropout(0.4))
#Batch normalization
model.add(BatchNormalization())
#2nd dense layer
model.add(Dense(4096))
model.add(Activation('relu'))
#Add dropout to prevent overfitting
model.add(Dropout(0.4))
#Batch normalization
model.add(BatchNormalization())
#3rd dense layer
model.add(Dense(1000))
model.add(Activation('relu'))
#Add dropout to prevent overfitting
model.add(Dropout(0.4))
#Batch normalization
model.add(BatchNormalization())
#Output layer
model.add(Dense(17))
model.add(Activation('softmax'))
model.summary()
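The spatial sizes reported by `model.summary()` follow from the standard 'valid'-padding formula, floor((W − K)/S) + 1:

```python
def conv_out(w, k, s):
    # Output width for 'valid' padding: floor((W - K) / S) + 1
    return (w - k) // s + 1

print(conv_out(224, 11, 4))  # 54 -> the first Conv2D outputs 54x54x96
print(conv_out(54, 2, 2))    # 27 -> after the first 2x2, stride-2 max-pool
```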
# +
#Compile
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
#Fit the model
model.fit(x, y, batch_size=64, epochs=1, verbose=1, validation_split=0.2, shuffle=True)
#shuffle for shuffling the data in batches
# -
# We may see memory warnings here because the entire image dataset is loaded into memory at once
| Alexnet.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19"
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# + _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a" _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0"
#Read the train and test datasets using pandas, already imported
train = pd.read_csv('/kaggle/input/house-prices-advanced-regression-techniques/train.csv')
test = pd.read_csv('/kaggle/input/house-prices-advanced-regression-techniques/test.csv')
train.head()
# -
#First, check if there are NaN in the train or test data
train[train.isna().any(axis = 1)].head()
test[test.isna().any(axis = 1)].head()
#There are NaN values, so use a strategy to fill them: imputation
#However, first split off the validation set to avoid leakage during imputation
X = train.copy()
y = X.SalePrice
X.drop(['SalePrice'], axis=1, inplace=True)
from sklearn.model_selection import train_test_split
X_train, X_valid, y_train, y_valid = train_test_split(
X, y, train_size=0.8, test_size=0.2, random_state=0)
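The fit-on-train / transform-on-validation discipline mentioned above can be seen on a toy column (hypothetical values):

```python
import numpy as np
from sklearn.impute import SimpleImputer

train_col = np.array([[1.0], [np.nan], [1.0], [2.0]])
valid_col = np.array([[np.nan]])

imp = SimpleImputer(strategy='most_frequent')
imp.fit(train_col)               # fill statistics come from the training split only
print(imp.transform(valid_col))  # [[1.]] -- the training split's most frequent value
```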
# +
from sklearn.impute import SimpleImputer
my_imputer = SimpleImputer(strategy='most_frequent')
imputed_train = pd.DataFrame(my_imputer.fit_transform(X_train))
imputed_valid = pd.DataFrame(my_imputer.transform(X_valid))
#Make sure the columns and indexes stay congruent
imputed_train.columns = X_train.columns
imputed_valid.columns = X_valid.columns
imputed_train.index = X_train.index
imputed_valid.index = X_valid.index
# Take care of categorical variables
# Get list of categorical variables
s = (X_train.dtypes == 'object')
object_cols = list(s[s].index) #Selects the name of columns with previous condition true
#Handle_unknown: When this parameter is set to ‘ignore’ and an unknown category is
#encountered during transform, the resulting one-hot encoded columns for this feature
#will be all zeros.
#Sparse: Will return sparse matrix if set True else will return an array.
from sklearn.preprocessing import OneHotEncoder
OH_encoder = OneHotEncoder(handle_unknown='ignore', sparse=False)
OH_cols_train = pd.DataFrame(OH_encoder.fit_transform(imputed_train[object_cols]))
OH_cols_valid = pd.DataFrame(OH_encoder.transform(imputed_valid[object_cols]))
# One-hot encoding removed index; put it back
OH_cols_train.index = imputed_train.index
OH_cols_valid.index = imputed_valid.index
# Remove categorical columns (will replace with one-hot encoding)
num_train = imputed_train.drop(object_cols, axis=1)
num_valid = imputed_valid.drop(object_cols, axis=1)
# Add one-hot encoded columns to numerical features
OH_train = pd.concat([num_train, OH_cols_train], axis=1)
OH_valid = pd.concat([num_valid, OH_cols_valid], axis=1)
#Imputation discarded the original dtypes, and the network complains; cast back to numeric
OH_train = OH_train.astype('int64')
OH_valid = OH_valid.astype('int64')
# -
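The `handle_unknown='ignore'` behaviour described in the comments above can be demonstrated on a toy category column:

```python
from sklearn.preprocessing import OneHotEncoder

enc = OneHotEncoder(handle_unknown='ignore')  # sparse output by default
enc.fit([['red'], ['blue']])

# A category never seen during fit encodes to all zeros instead of raising
print(enc.transform([['green']]).toarray())  # [[0. 0.]]
print(enc.transform([['red']]).toarray())    # [[0. 1.]] (categories sorted: blue, red)
```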
#Check your results
OH_train.head()
OH_valid.head()
# +
#Using neural networks
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.optimizers import Adam  # stay within tf.keras rather than mixing in standalone keras
model = keras.Sequential([
layers.BatchNormalization(input_dim=OH_train.shape[1]),
layers.Dense(128, activation='sigmoid'),
layers.BatchNormalization(),
layers.Dropout(0.5),
layers.Dense(64, activation='relu'),
layers.BatchNormalization(),
layers.Dropout(0.5),
layers.Dense(16, activation='relu'),
layers.BatchNormalization(),
layers.Dropout(0.5),
layers.Dense(1),
])
optimizer = Adam(learning_rate=0.005)  # 'lr' and 'decay' are deprecated argument names in recent Keras
'''MSLE will treat small differences between small true and predicted values
approximately the same as big differences between large true and predicted values
Use MSLE when doing regression, believing that your target, conditioned on the
input, is normally distributed, and you don’t want large errors to be
significantly more penalized than small ones, in those cases where the range
of the target value is large.'''
model.compile(optimizer=optimizer,
loss='msle',
metrics=['mse'])
from tensorflow.keras.callbacks import EarlyStopping
early_stopping = EarlyStopping(
min_delta=0.001, # minimum amount of change to count as an improvement
patience=10, # how many epochs to wait before stopping
restore_best_weights=True,
)
# -
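The relative-error behaviour of MSLE described in the comment above is easy to verify numerically:

```python
import math

def msle_single(y_true, y_pred):
    # Squared logarithmic error for a single prediction
    return (math.log(1 + y_true) - math.log(1 + y_pred)) ** 2

# A 10% miss on a cheap house is penalized about the same as a 10% miss on
# an expensive one, even though the absolute errors differ by a factor of 10
small = msle_single(100_000, 110_000)
large = msle_single(1_000_000, 1_100_000)
print(small, large)  # nearly identical
```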
history = model.fit(
OH_train, y_train,
validation_data=(OH_valid, y_valid),
batch_size=128,
epochs=10000,
verbose=0,
callbacks=[early_stopping]
)
# +
# Get predictions
imputed_test = pd.DataFrame(my_imputer.transform(test))
imputed_test.columns = test.columns
imputed_test.index = test.index
OH_cols_test = pd.DataFrame(OH_encoder.transform(imputed_test[object_cols]))
# One-hot encoding removed index; put it back
OH_cols_test.index = imputed_test.index
# Remove categorical columns (will replace with one-hot encoding)
num_test = imputed_test.drop(object_cols, axis=1)
# Add one-hot encoded columns to numerical features
OH_test = pd.concat([num_test, OH_cols_test], axis=1)
#Imputation discarded the original dtypes, and the network complains; cast back to numeric
OH_test = OH_test.astype('int64')
sale_price_preds = model.predict(OH_test)
print(sale_price_preds)
output = pd.DataFrame({'Id': OH_test.Id,
'SalePrice': sale_price_preds.T[0]})
output.to_csv('/kaggle/working/submit.csv', index=False)
| initial-steps-neural-network.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---
# + [markdown] id="iVLzlBnyIoIF" colab_type="text"
# # **Schnorr Digital Signature Algorithm (EC SSA)**
# -
# # Setup
#
# btclib is needed: let's install/update it and import straight away some of its functions
# + id="LwhoTndVSaz3" colab_type="code" outputId="5a84f553-09ed-4172-a214-8b3bb9218c9d" executionInfo={"status": "ok", "timestamp": 1551985832713, "user_tz": -60, "elapsed": 4353, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12225439844942765057"}} colab={"base_uri": "https://localhost:8080/", "height": 36}
# !pip install --upgrade btclib
from hashlib import sha256 as hf
from btclib.numbertheory import mod_inv
from btclib.curve import mult
from btclib.curve import secp256k1 as ec
# + [markdown] id="bjYHR3nlIIoC" colab_type="text"
# For this exercise we use secp256k1 as elliptic curve and SHA256 as hash function:
# + id="ByGdee-DFiZb" colab_type="code" outputId="72feb1d0-b659-4ee7-9996-196e5c628d3f" executionInfo={"status": "ok", "timestamp": 1551985832717, "user_tz": -60, "elapsed": 4324, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12225439844942765057"}} colab={"base_uri": "https://localhost:8080/", "height": 184}
print(ec)
# + [markdown] id="XiRpjtu8MwI3" colab_type="text"
# Normally, hf is chosen such that its output size is roughly equal to the size of ec.n, since the overall security of the signature scheme depends on the smaller of the two; however, the ECDSA standard supports all combinations of sizes.
# + id="Jf4YS8X6M26W" colab_type="code" outputId="4979efb8-b321-4916-9bb5-f8bdda1502a0" executionInfo={"status": "ok", "timestamp": 1551985832719, "user_tz": -60, "elapsed": 4303, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12225439844942765057"}} colab={"base_uri": "https://localhost:8080/", "height": 36}
print(hf().digest_size)
print(ec.nsize)
# + [markdown] id="ZR6PrjhMPwkr" colab_type="text"
# # **Digital Signature Protocol**
#
# ## 1. Key generation
#
# Private key (generated elsewhere, a fixed value here):
# + id="XzwPBqVoMZmg" colab_type="code" outputId="caf118ed-4994-40e4-c698-80a3ceb6f2ec" executionInfo={"status": "ok", "timestamp": 1551985832720, "user_tz": -60, "elapsed": 4285, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12225439844942765057"}} colab={"base_uri": "https://localhost:8080/", "height": 54}
q = 0x18E14A7B6A307F426A94F8114701E7C8E774E7F9A47E2C2035DB29A206321725
assert 0 < q < ec.n, "Invalid private key"
print("q:", q)
print("Hex(q):", hex(q))
# + [markdown] id="ON5UFxzaSrM6" colab_type="text"
# Corresponding Public Key:
# + id="1o0xWjJMQ5ST" colab_type="code" outputId="743d1e8c-6aa9-4959-9fce-05d2aff6a968" executionInfo={"status": "ok", "timestamp": 1551985832723, "user_tz": -60, "elapsed": 4268, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12225439844942765057"}} colab={"base_uri": "https://localhost:8080/", "height": 36}
Q = mult(q)
print("PubKey:", "02" if (Q[1] % 2 == 0) else "03", hex(Q[0]))
# + [markdown] id="N-TP47X3WkP3" colab_type="text"
# ##Signature (DSA)
# + [markdown] id="BdMTqEUBTHZs" colab_type="text"
# ###Message
# Message to be signed and its hash value:
# + id="YnDNcblIS4UU" colab_type="code" outputId="6f4140c7-5eb8-4097-dedc-873efc617829" executionInfo={"status": "ok", "timestamp": 1551985832726, "user_tz": -60, "elapsed": 4254, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12225439844942765057"}} colab={"base_uri": "https://localhost:8080/", "height": 36}
msg = "Paolo is afraid of ephemeral random numbers"
h_bytes = hf(msg.encode()).digest()
# hash(msg) must be transformed into an integer modulo ec.n:
h = int.from_bytes(h_bytes, 'big') % ec.n
assert h != 0
print("h:", hex(h))
# + [markdown] id="ZqSVkyLRUuAG" colab_type="text"
# ### Deterministic Ephemeral key
# The ephemeral key k must be kept secret and never reused.
# A good choice is to use a deterministic key:
#
# `k = hf(q||msg)`
#
# It is different for each msg, and private because of q.
# + id="UxKdGjpyUctt" colab_type="code" outputId="7015c287-24b8-480d-b19c-2750434f32e6" executionInfo={"status": "ok", "timestamp": 1551985832728, "user_tz": -60, "elapsed": 4241, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12225439844942765057"}} colab={"base_uri": "https://localhost:8080/", "height": 36}
k_bytes = hf(q.to_bytes(32, 'big') + msg.encode()).digest()
k = int.from_bytes(k_bytes, 'big') % ec.n
assert k != 0
print("eph k:", hex(k))
# + [markdown] id="40K9Y6q1bh0V" colab_type="text"
# ###Signature Algorithm
# + id="uj3GStNgWZIG" colab_type="code" outputId="c7b994cd-5a72-4010-a3e2-834f2f337f5f" executionInfo={"status": "ok", "timestamp": 1551985832730, "user_tz": -60, "elapsed": 4227, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12225439844942765057"}} colab={"base_uri": "https://localhost:8080/", "height": 54}
K = mult(k)
r = K[0] % ec.n
# if r == 0 (extremely unlikely for large ec.n) go back to a different ephemeral key
assert r != 0
s = ((h + r*q)*mod_inv(k, ec.n)) % ec.n
# if s == 0 (extremely unlikely for large ec.n) go back to a different ephemeral key
assert s != 0
print("r:", hex(r))
print("s:", hex(s))
# + [markdown] id="mjNR4BIMZ7Id" colab_type="text"
# ## Signature verification (DSA)
# + id="VnnCnKJeZLJm" colab_type="code" outputId="5c62200b-d474-4a00-af1c-975e15a87fe8" executionInfo={"status": "ok", "timestamp": 1551985832731, "user_tz": -60, "elapsed": 4211, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12225439844942765057"}} colab={"base_uri": "https://localhost:8080/", "height": 36}
w = mod_inv(s, ec.n)
u = (h*w) %ec.n
v = (r*w) %ec.n
assert u != 0
assert v != 0
U = mult(u)
V = mult(v, Q)
x, y = ec.add(U, V)
print(r == x %ec.n)
# + [markdown] id="Em7Y-s6Ranzh" colab_type="text"
# ## Malleated Signature
# + id="YXG1olsbamU_" colab_type="code" outputId="1e8abaac-9c93-4b13-db04-c6f1be265d42" executionInfo={"status": "ok", "timestamp": 1551985832732, "user_tz": -60, "elapsed": 4200, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12225439844942765057"}} colab={"base_uri": "https://localhost:8080/", "height": 54}
sm = ec.n - s
print(" r:", hex(r))
print(" *sm:", hex(sm))
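# Why the flipped signature verifies, written out mod n (with sm = n - s):

```latex
w' = s_m^{-1} \equiv (-s)^{-1} = -w, \qquad u' = h\,w' \equiv -u, \qquad v' = r\,w' \equiv -v \pmod{n}
```

# Hence u'G + v'Q = -(uG + vQ) = (x, -y): negating a point leaves its
# x-coordinate unchanged, so the check r == x % ec.n still passes.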
# + [markdown] id="ibDsuYeBc5Xi" colab_type="text"
# Malleated Signature verification:
# + id="A_vX6UuXc4BB" colab_type="code" outputId="001af29d-bb65-435a-951a-f112638d6173" executionInfo={"status": "ok", "timestamp": 1551985832733, "user_tz": -60, "elapsed": 4183, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12225439844942765057"}} colab={"base_uri": "https://localhost:8080/", "height": 36}
w = mod_inv(sm, ec.n)
u = (h*w) %ec.n
v = (r*w) %ec.n
assert u != 0
assert v != 0
U = mult(u)
V = mult(v, Q)
x, y = ec.add(U, V)
print(r == x %ec.n)
# + [markdown] id="dWP713sQdM12" colab_type="text"
# ##Humongous mistake
# + [markdown] id="lZBMrXuligvc" colab_type="text"
# ### Message
# Message to be signed and its hash value:
# + id="aecLiCr2dNIz" colab_type="code" outputId="f9549324-836f-48a1-e338-2007df35cd2a" executionInfo={"status": "ok", "timestamp": 1551985832734, "user_tz": -60, "elapsed": 4165, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12225439844942765057"}} colab={"base_uri": "https://localhost:8080/", "height": 36}
msg2 = "and Paolo is right to be afraid"
h_bytes = hf(msg2.encode()).digest()
# hash(msg) must be transformed into an integer modulo ec.n:
h2 = int.from_bytes(h_bytes, 'big') % ec.n
assert h2 != 0
print("h2:", hex(h2))
# + [markdown] id="NQi4pK-UevXN" colab_type="text"
# ### The mistake
# Reuse the same ephemeral deterministic key as the previous message:
# + id="Q4siGdMUev3t" colab_type="code" outputId="dcb10c54-18d7-4012-a9c7-6b594be80ace" executionInfo={"status": "ok", "timestamp": 1551985832736, "user_tz": -60, "elapsed": 4149, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12225439844942765057"}} colab={"base_uri": "https://localhost:8080/", "height": 54}
k2 = k #very bad! Never reuse the same ephemeral key!!!
print("eph k :", hex(k))
print("eph k2:", hex(k2))
# + [markdown] id="-wCVmqtgfM4C" colab_type="text"
# ###Signature Algorithm
# + id="QGvDYriVfcvC" colab_type="code" outputId="855b09a6-be0e-4521-9678-16583df6c5dd" executionInfo={"status": "ok", "timestamp": 1551985833052, "user_tz": -60, "elapsed": 4447, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12225439844942765057"}} colab={"base_uri": "https://localhost:8080/", "height": 54}
K2 = mult(k2)
r = K2[0] % ec.n
# if r == 0 (extremely unlikely for large ec.n) go back to a different ephemeral key
assert r != 0
s2 = ((h2 + r*q)*mod_inv(k2, ec.n)) %ec.n
# if s2 == 0 (extremely unlikely for large ec.n) go back to a different ephemeral key
assert s2 != 0
# bitcoin canonical 'low-s' encoding
if s2 > ec.n/2: s2 = ec.n - s2
print(" r:", hex(r))
print("s2:", hex(s2))
# + [markdown] id="xnatEJTMfnhd" colab_type="text"
# ### Signature verification
# + id="mf9EhfeAfmqn" colab_type="code" outputId="d5965cb1-3052-492e-ebdc-f9b1a37f4422" executionInfo={"status": "ok", "timestamp": 1551985833054, "user_tz": -60, "elapsed": 4429, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12225439844942765057"}} colab={"base_uri": "https://localhost:8080/", "height": 36}
w = mod_inv(s2, ec.n)
u = (h2*w) %ec.n
v = (r*w) %ec.n
assert u != 0
assert v != 0
U = mult(u)
V = mult(v, Q)
x, y = ec.add(U, V)
print(r == x %ec.n)
# + [markdown] id="hfQElvlnitgL" colab_type="text"
# ### Exercise
# Because of this mistake, it is now possible to calculate the private key from the 2 signatures.
# + id="RENovvlCi0Ww" colab_type="code" colab={}
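# One possible solution sketch (self-contained arithmetic mod the secp256k1
# group order; the signature values below are illustrative stand-ins, not the
# notebook's actual r, s, h, s2, h2 -- plug those in instead, and since s2
# above was canonicalized to low-s, also try ec.n - s2):

```python
# secp256k1 group order
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def recover_key(r, s1, h1, s2, h2, n):
    """Two signatures (r, s1), (r, s2) on hashes h1, h2 sharing the same k give
    s1 - s2 = (h1 - h2)/k (mod n), hence k, and then q = (s1*k - h1)/r."""
    k = (h1 - h2) * pow(s1 - s2, -1, n) % n
    q = (s1 * k - h1) * pow(r, -1, n) % n
    return k, q

# toy check: build two signatures with a known key and a reused nonce, then recover
q, k, h1, h2, r = 0x1234, 0x5678, 0x9ABC, 0xDEF0, 0x1111  # r stands in for (k*G).x mod n
s1 = (h1 + r * q) * pow(k, -1, n) % n
s2 = (h2 + r * q) * pow(k, -1, n) % n
print([hex(v) for v in recover_key(r, s1, h1, s2, h2, n)])  # -> ['0x5678', '0x1234']
```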
| ipynb/SSA.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="GC7zSrbOWiz0"
# # Week 4 Assignment: Create a VGG network
# -
# In this exercise, you will build a class that implements a [VGG network](https://towardsdatascience.com/vgg-neural-networks-the-next-step-after-alexnet-3f91fa9ffe2c) that can be trained to classify images. The model will look something like this:
#
# <img src='VGG.png'>
#
# It is primarily made up of a series of Conv2D layers followed by a softmax-activated layer to classify the image. As you can see, this will be a handful and the code will look huge if you specify each layer individually. As shown in the lectures, you can instead use model subclassing to build complex architectures. You can encapsulate repeating parts of a network then reuse that code when building the final model. You will get to practice that in this exercise. Let's get started!
# + colab={} colab_type="code" id="Z01I5nj0NAOu"
import tensorflow as tf
import tensorflow_datasets as tfds
import utils
# -
# ## Create named-variables dynamically
#
# In this assignment, you will see the use of the Python function `vars()`. This will allow you to use a for loop to define and set multiple variables with a similar name, such as var1, var2, var3.
#
# Please go through the following examples to get familiar with `vars()`, as you will use it when building the VGG model.
# - You'll start by defining a class `MyClass`
# - It contains one variable `var1`.
# - Create an object of type `MyClass`.
# +
# Define a small class MyClass
class MyClass:
def __init__(self):
# One class variable 'a' is set to 1
self.var1 = 1
# Create an object of type MyClass()
my_obj = MyClass()
# -
# Python classes have an attribute called `__dict__`.
# - `__dict__` is a Python dictionary that contains the object's instance variables and values as key value pairs.
my_obj.__dict__
# If you call `vars()` and pass in an object, it will call the object's `__dict__` attribute, which is a Python dictionary containing the object's instance variables and their values as key-value pairs.
vars(my_obj)
# You may be familiar with adding new variable like this:
# +
# Add a new instance variable and give it a value
my_obj.var2 = 2
# Calls vars() again to see the object's instance variables
vars(my_obj)
# -
# Here is another way that you can add an instance variable to an object, using `vars()`.
# - Retrieve the Python dictionary `__dict__` of the object using vars(my_obj).
# - Modify this `__dict__` dictionary using square bracket notation and passing in the variable's name as a string: `['var3'] = 3`
# +
# Call vars, passing in the object. Then access the __dict__ dictionary using square brackets
vars(my_obj)['var3'] = 3
# Call vars() to see the object's instance variables
vars(my_obj)
# -
# #### Why this is helpful!
# You may be wondering why you would need another way to access an object's instance variables.
# - Notice that when using `vars()`, you can now pass in the name of the variable `var3` as a string.
# - What if you plan to use several variables that are similarly named (`var4`, `var5` ... `var9`) and wanted a convenient way to access them by incrementing a number?
#
# Try this!
# +
# Use a for loop to increment the index 'i'
for i in range(4,10):
# Format the string f'var{i}' and use it as the new variable's name
vars(my_obj)[f'var{i}'] = 0
# View the object's instance variables!
vars(my_obj)
# -
# There are a couple of equivalent ways in Python to format a string. Here are two of those ways:
# - f-string: f"var{i}"
# - .format: "var{}".format(i)
# +
# Format a string using f-string notation
i=1
print(f"var{i}")
# Format a string using .format notation
i=2
print("var{}".format(i))
# -
# You can access the variables of a class inside the class definition using `vars(self)`
# +
# Define a small class MyClass
class MyClass:
def __init__(self):
# Use vars(self) to access the class's dictionary of variables
vars(self)['var1'] = 1
# Create an object of type MyClass()
my_obj = MyClass()
vars(my_obj)
# -
# You'll see this in the upcoming code. Now you'll start building the VGG network!
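# Aside: the same dynamic-attribute pattern can also be written with the
# builtins `setattr`/`getattr`, the more common Python idiom (an equivalent
# sketch, not part of the original assignment):

```python
class MyClass:
    def __init__(self, n_vars=3):
        for i in range(n_vars):
            # same effect as vars(self)[f'var{i}'] = i
            setattr(self, f'var{i}', i)

my_obj = MyClass()
print(my_obj.var0, my_obj.var1, my_obj.var2)  # -> 0 1 2
print(getattr(my_obj, 'var2'))                # -> 2
```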
# + [markdown] colab_type="text" id="k1T1UMw5YAkp"
# ## Create a generic VGG block (TODO)
#
# The VGG Network has blocks of layers, where each block has a varied number of layers.
# - In order to create blocks of layers that have a customizable number of conv2D layers, you'll define a class `Block`, which can generate a customizable block of layers
#
#
# ### `__init__`
# In the constructor `__init__`, store the conv2D parameters and also define the number of conv2D layers using the parameters passed into `__init__`.
# - Store the filters, kernel_size, and repetitions as class variables so that they can be used later in the `call` function.
# - Using a for loop, define a number of Conv2D [Conv2D](https://keras.io/api/layers/convolution_layers/convolution2d/) layers, based on the number of `repetitions` desired for this block.
#   - You can define each conv2D layer using `vars` and string formatting to create conv2D_0, conv2D_1, conv2D_2, etc.
# - Set these four parameters of Conv2D:
# - filters
# - kernel_size
# - activation: set this to 'relu'
#     - padding: set this to 'same' (default padding is 'valid').
#
# - Define the [MaxPool2D](https://keras.io/api/layers/pooling_layers/max_pooling2d/) layer that follows these Conv2D layers.
# - Set the following parameters for MaxPool2D:
# - pool_size: this will be a tuple with two values.
# - strides: this will also be a tuple with two values.
#
# ### `call`
# In `call`, you will connect the layers together.
# - The 0-th conv2D layer, `conv2D_0`, immediately follows the `inputs`.
# - For conv2D layers 1,2 and onward, you can use a for loop to connect conv2D_1 to conv2D_0, and connect conv2D_2 to conv2D_1, and so on.
# - After connecting all of the conv2D_i layers, connect the max_pool layer and return it.
# + colab={} colab_type="code" deletable=false id="WGJGaxVjM00W" nbgrader={"cell_type": "code", "checksum": "7f19295d8925e1d2e60eefd42a6b4dd8", "grade": false, "grade_id": "cell-1449db9892707876", "locked": false, "schema_version": 3, "solution": true, "task": false}
# Please uncomment all lines in this cell and replace those marked with `# YOUR CODE HERE`.
# You can select all lines in this code cell with Ctrl+A (Windows/Linux) or Cmd+A (Mac), then press Ctrl+/ (Windows/Linux) or Cmd+/ (Mac) to uncomment.
class Block(tf.keras.Model):
def __init__(self, filters, kernel_size, repetitions, pool_size=2, strides=2):
super(Block, self).__init__()
self.filters = filters# YOUR CODE HERE
self.kernel_size = kernel_size # YOUR CODE HERE
self.repetitions = repetitions# YOUR CODE HERE
# Define a conv2D_0, conv2D_1, etc based on the number of repetitions
for i in range(repetitions):
# Define a Conv2D layer, specifying filters, kernel_size, activation and padding.
vars(self)[f'conv2D_{i}'] = tf.keras.layers.Conv2D(filters, kernel_size, activation='relu', padding='same')# YOUR CODE HERE
# Define the max pool layer that will be added after the Conv2D blocks
self.max_pool = tf.keras.layers.MaxPooling2D(pool_size, strides)# YOUR CODE HERE
def call(self, inputs):
# access the class's conv2D_0 layer
conv2D_0 = self.conv2D_0 # YOUR CODE HERE
# Connect the conv2D_0 layer to inputs
x = conv2D_0(inputs)# YOUR CODE HERE
# for the remaining conv2D_i layers from 1 to `repetitions` they will be connected to the previous layer
for i in range(1, self.repetitions):
# access conv2D_i by formatting the integer `i`. (hint: check how these were saved using `vars()` earlier)
conv2D_i = vars(self)[f'conv2D_{i}']# YOUR CODE HERE
# Use the conv2D_i and connect it to the previous layer
x = conv2D_i(x)
# Finally, add the max_pool layer
max_pool = self.max_pool(x)# YOUR CODE HERE
return max_pool
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "4027611c9615b1f518a95d76a81bc8d1", "grade": true, "grade_id": "cell-2911e521bce8793b", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
utils.test_block_class(Block)
# + [markdown] colab_type="text" id="peM2GP6uYT0U"
# ## Create the Custom VGG network (TODO)
# This model stack has a series of VGG blocks, which can be created using the `Block` class that you defined earlier.
#
# ### `__init__`
# - Recall that the `__init__` constructor of `Block` takes several function parameters,
# - filters, kernel_size, repetitions: you'll set these.
# - kernel_size and strides: you can use the default values.
# - For blocks a through e, build the blocks according to the following specifications:
# - block_a: 64 filters, kernel_size 3, repetitions 2
# - block_b: 128 filters, kernel_size 3, repetitions 2
# - block_c: 256 filters, kernel_size 3, repetitions 3
# - block_d: 512 filters, kernel_size 3, repetitions 3
# - block_e: 512 filters, kernel_size 3, repetitions 3
#
# After block 'e', add the following layers:
# - flatten: use [Flatten](https://keras.io/api/layers/reshaping_layers/flatten/).
# - fc: create a fully connected layer using [Dense](https://keras.io/api/layers/core_layers/dense/). Give this 256 units, and a `'relu'` activation.
# - classifier: create the classifier using a Dense layer. The number of units equals the number of classes. For multi-class classification, use a `'softmax'` activation.
#
# ### `call`
# Connect these layers together using the functional API syntax:
# - inputs
# - block_a
# - block_b
# - block_c
# - block_d
# - block_e
# - flatten
# - fc
# - classifier
#
# Return the classifier layer.
# + colab={} colab_type="code" deletable=false id="yD-paeGiNGvz" nbgrader={"cell_type": "code", "checksum": "523346a38f53bc31e080114e98e8eca6", "grade": false, "grade_id": "cell-d9e90af0898eb47f", "locked": false, "schema_version": 3, "solution": true, "task": false}
# Please uncomment all lines in this cell and replace those marked with `# YOUR CODE HERE`.
# You can select all lines in this code cell with Ctrl+A (Windows/Linux) or Cmd+A (Mac), then press Ctrl+/ (Windows/Linux) or Cmd+/ (Mac) to uncomment.
class MyVGG(tf.keras.Model):
def __init__(self, num_classes):
super(MyVGG, self).__init__()
# Creating blocks of VGG with the following
# (filters, kernel_size, repetitions) configurations
self.block_a = Block(filters= 64 , kernel_size = 3, repetitions = 2) # YOUR CODE HERE #
self.block_b = Block( filters= 128, kernel_size = 3, repetitions = 2) # YOUR CODE HERE #
self.block_c = Block( filters= 256, kernel_size = 3, repetitions = 3) # YOUR CODE HERE #
self.block_d = Block( filters= 512, kernel_size = 3, repetitions = 3) # YOUR CODE HERE #
self.block_e = Block( filters= 512, kernel_size = 3, repetitions = 3) # YOUR CODE HERE #
# Classification head
# Define a Flatten layer
self.flatten = tf.keras.layers.Flatten() # YOUR CODE HERE
# Create a Dense layer with 256 units and ReLU as the activation function
self.fc = tf.keras.layers.Dense(256, activation='relu')# YOUR CODE HERE
# Finally add the softmax classifier using a Dense layer
self.classifier = tf.keras.layers.Dense(num_classes, activation='softmax')# YOUR CODE HERE
def call(self, inputs):
# Chain all the layers one after the other
x = self.block_a(inputs)
x = self.block_b(x)
x = self.block_c(x)
x = self.block_d(x)
x = self.block_e(x)
x = self.flatten(x)
x = self.fc(x)
x = self.classifier(x)
return x
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "c72733dcd5baccf2728b5b2460caf515", "grade": true, "grade_id": "cell-559ac19437f4f2b2", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
utils.test_myvgg_class(MyVGG, Block)
# -
# ### Load data and train the VGG network (Optional)
#
# If you passed all tests above, then you've successfully built the model for your image classifier. Congratulations! The next steps in the pipeline will be loading the dataset and training your VGG network.
#
# - The code is shown below but it is only for reference and is **not required to complete the assignment**.
# - You can submit your work before starting the training.
#
# We did not enable an accelerator in this lab environment so training will run slow. If you want to make it quicker, one way is to download your notebook (`File -> Download As -> Notebook`), then upload to [Colab](https://colab.research.google.com). From there, you can use a GPU runtime (`Runtime -> Change Runtime type`) prior to running the cells. Just make sure to comment out the imports and calls to `utils.py` so you don't get `File Not Found` errors. Again, this part is only for reference and is not required for grading. For this lab, we will only grade how you built your model using subclassing. You will get to training and evaluating your models in the next courses of this Specialization.
# + colab={} colab_type="code" id="MaF763OKNJxU"
# dataset = tfds.load('cats_vs_dogs', split=tfds.Split.TRAIN, data_dir='data/')
# # Initialize VGG with the number of classes
# vgg = MyVGG(num_classes=2)
# # Compile with losses and metrics
# vgg.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# # Define preprocessing function
# def preprocess(features):
# # Resize and normalize
# image = tf.image.resize(features['image'], (224, 224))
# return tf.cast(image, tf.float32) / 255., features['label']
# # Apply transformations to dataset
# dataset = dataset.map(preprocess).batch(32)
# # Train the custom VGG model
# vgg.fit(dataset, epochs=10)
| Custom vcc .ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + id="bHDShlG_PikW"
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
from scipy.spatial import distance_matrix
from scipy.spatial import distance
from sklearn.decomposition import PCA
import seaborn as sns
import time
from tqdm import tqdm
# + id="T6Yr1UhjPqj-"
test = pd.read_csv("/content/MNIST_test_small.csv")
x_test = test.iloc[:,1:].values
y_test = test.iloc[:,0].values
x_test_normalized = pd.DataFrame(np.apply_along_axis(
    lambda x: 0.0 if x_test.max() == 0.0 else x/x_test.max(), 0, x_test))
train = pd.read_csv("/content/MNIST_train_small.csv")
x_train = train.iloc[:,1:]
y_train = train.iloc[:,0]
x_train_normalized = pd.DataFrame(np.apply_along_axis(
    lambda x: np.zeros(x.shape[0]) if x.max() == 0.0 else np.divide(x, x.max()), 0, x_train.values))
train_big = pd.read_csv("/content/MNIST_train.csv")
x_train_big = train_big.iloc[:,1:]
y_train_big = train_big.iloc[:,0]
x_train_big_normalized = pd.DataFrame(np.apply_along_axis(
    lambda x: 0.0 if x_train_big.values.max() == 0.0 else x/x_train_big.values.max(), 0, x_train_big.values))
test_big = pd.read_csv("/content/MNIST_test.csv")
x_test_big = test_big.iloc[:,1:]
y_test_big = test_big.iloc[:,0]
x_test_big_normalized = pd.DataFrame(np.apply_along_axis(
    lambda x: 0.0 if x_test_big.values.max() == 0.0 else x/x_test_big.values.max(), 0, x_test_big.values))
# + id="85W-vV5JJDnv"
# mode function
def mode(array):
    '''
    Return the most frequent value in `array` and its count.
    Note on ties: np.unique returns the values sorted, so .argmax() picks the
    smallest tied label (the first maximum encountered), not necessarily the
    label of the nearest neighbor.
    '''
    val, cnts = np.unique(array, return_counts=True)
    return val[cnts.argmax()], cnts.max()
def manhattan_matrix(X1, X2, p=0):
return distance.cdist(X1, X2, 'cityblock')
def chebyshev_matrix(X1, X2, p=0):
return distance.cdist(X1, X2, 'chebyshev')
def mahalanobis_matrix(X1, X2, p=0):
return distance.cdist(X1, X2, 'mahalanobis', VI=None)
def cosine_dist(X1, X2, p=0):
mat = np.dot(np.array(X2).T,np.array(X1))
return 2 - 2*mat.T
def minkowski_dist(v, w, p = 2):
D = np.abs(np.subtract(w, v))
Dp = np.power(D, p)
distance_vals = np.sum(Dp, axis = 0)
distance_vals_p = np.power(distance_vals, 1 / p)
return distance_vals_p
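# A quick sanity check of minkowski_dist (the function is repeated here so the
# snippet is self-contained): on the point (3, 4) it should reduce to the
# Euclidean distance 5 for p=2 and the Manhattan distance 7 for p=1.

```python
import numpy as np

def minkowski_dist(v, w, p=2):
    D = np.abs(np.subtract(w, v))
    distance_vals = np.sum(np.power(D, p), axis=0)
    return np.power(distance_vals, 1 / p)

v = np.zeros((2, 1))           # one 2-D point per column, as used above
w = np.array([[3.0], [4.0]])
print(minkowski_dist(v, w, p=2))  # -> [5.]
print(minkowski_dist(v, w, p=1))  # -> [7.]
```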
# + id="T-Fj8w66cKFp"
class KNN():
def __init__(self, p=2, distance='minkowski', distance_f=minkowski_dist):
self.p = p
self.distance_f = distance_f
if(distance=='manhattan'):
self.distance = 'manhattan'
self.distance_matrix = manhattan_matrix
elif(distance=='chebyshev'):
self.distance = 'chebyshev'
self.distance_matrix = chebyshev_matrix
else:
self.distance = 'minkowski'
self.distance_matrix = distance_matrix
pass
def fit(self, X, y, k_max, p_array=[2] ):
"""
Fit on training data
X: x_train dataset
Y: y_train
"""
self.x_train = X
self.y_train = y
self.n_training = X.shape[0]
self.classes = np.unique(y)
self.k_neigh_mtx = self.create_k_matrix(k_max)
self.k_neigh_mtx = self.k_neigh_mtx.reshape(self.k_neigh_mtx.shape[0],
self.k_neigh_mtx.shape[1])
pass
def __str__(self):
return 'Knn with distance parameter p = {}'.format(self.p)
def __repr__(self):
return self.__str__()
def loss(self, y1, y2):
return y1 != y2
def create_k_matrix(self, k):
'''
Reduce the matrix for the cross validation by considering only the k smallest values
'''
return np.apply_along_axis(lambda x: np.argsort(self.distance_matrix(self.x_train.values, [x], p=self.p), axis=0)[:k],1, self.x_train.values)
def cross_validate(self, k, n_fold='loocv'):
"""
k : number of neighbors to consider
n_fold : folds of the cross validation;
if n_fold='loocv' it will just use leave-one-out cross validation, where n_fold = # training samples
"""
if n_fold == 'loocv':
loss_ipk = [self.predict_train(i, k) for i in range(self.n_training)]
loss_cv = np.mean(loss_ipk, axis=0)
return loss_cv
def predict_train(self, i, k):
y_pred = mode(self.y_train[self.k_neigh_mtx[i, 1:(k+1)]])[0]
y_loss = self.loss(y_pred, self.y_train[i])
return y_loss
def predict(self, x_test, k_array):
"""
Predict class for the given observation
x_test: new observations to predict
k_array: array of k number of neighbors to consider
"""
kmax_neigh = np.apply_along_axis(
lambda x: np.argsort(self.distance_matrix(self.x_train.values, [x], p=self.p), axis=0)[:np.max(k_array)],1, x_test)
kmax_neigh = np.squeeze(kmax_neigh, axis=2)
return [np.apply_along_axis(lambda neigh: mode(self.y_train[neigh[:k]])[0], 1, kmax_neigh) for k in k_array]
def compute_loss(self, x_test, y_test, k_array):
"""
Score function: predict the values and compare to the real values given
x_test: input value to predict
y_test: real value of output
k_array: array of k number of neighbors to consider
"""
predictions = self.predict(x_test, k_array)
losses = [self.loss(predictions[kidx],y_test).sum()/y_test.shape[0] for kidx, k in enumerate(k_array)]
return dict(zip(k_array, losses))
# + [markdown] id="EYFGAdyFBwjN"
# A)
# + id="yOnQQowZXpJm" outputId="075d9b7a-8f59-48bf-f84d-5fbfafe7a8b6" colab={"base_uri": "https://localhost:8080/", "height": 282}
kmax=21
knn_a = KNN()
knn_a.fit(x_train,y_train, k_max=kmax-1)
train_loss_a = knn_a.compute_loss(x_train, y_train, np.arange(1,kmax))
test_loss_a = knn_a.compute_loss(x_test, y_test, np.arange(1,kmax))
#plot train_score and test_score
plt.xlabel('K')
plt.ylabel('Empirical loss')
plt.plot(*zip(*sorted(train_loss_a.items())), label="Train set")
plt.plot(*zip(*sorted(test_loss_a.items())), label="Test set")
plt.legend()
plt.show()
# + [markdown] id="_L9fRWx3B32-"
# B)
# + id="AjPXzmMNxqBt" outputId="85513af5-665d-4f5a-aab5-73300e4e14b9" colab={"base_uri": "https://localhost:8080/", "height": 279}
#B
kmax = 21
knn_b = KNN()
knn_b.fit(x_train,y_train, k_max=kmax-1)
k_array = np.arange(1,kmax)
train_loss_b = dict(zip(k_array, [knn_b.cross_validate(k) for k in k_array]))
#plot train_score_crossVal and test_score
plt.xlabel('K')
plt.ylabel('Empirical loss')
plt.plot(*zip(*sorted(train_loss_b.items())), label="Train set")
plt.plot(*zip(*sorted(test_loss_a.items())), label="Test set")
plt.legend()
plt.show()
# + [markdown] id="Uv_I9r0RCJVv"
# C)
# + id="xUblRI0c0zsD"
#C
pmin=1
pmax=16
kmax=21
crossValScore = []
k_array = np.arange(1,kmax-1)
for p in np.arange(pmin,pmax):
knn_c = KNN(p=p)
knn_c.fit(x_train, y_train, k_max=kmax)
crossValScore.append(dict(zip(k_array, [knn_c.cross_validate(k) for k in k_array])))
crossValScore = pd.DataFrame(crossValScore)
# + id="K4kPUWkX0FDV" outputId="2f083c15-435d-4c08-8036-bedd4ad921fd" colab={"base_uri": "https://localhost:8080/", "height": 351}
fig, ax = plt.subplots(figsize=(25,11))
ax.set_title("Empirical loss for (p, k) combinations")
sns.heatmap(crossValScore, ax=ax, annot=True, cmap='Blues', xticklabels='auto', yticklabels=np.arange(1, 16))
# + [markdown] id="oZJKLFGLCTkP"
# D)
# + id="Beihs5O16z9L"
k_array = np.arange(2,21)
knn_d_euclidean = KNN()
knn_d_euclidean.fit(x_train, y_train, k_max=20)
crossValScore_euclidean = dict(zip(k_array, [knn_d_euclidean.cross_validate(k) for k in k_array]))
knn_d_bestMinkowski = KNN(p=11)
knn_d_bestMinkowski.fit(x_train, y_train, k_max=20)
crossValScore_bestMinkowski = dict(zip(k_array, [knn_d_bestMinkowski.cross_validate(k) for k in k_array]))
knn_d_manhattan = KNN(distance='manhattan')
knn_d_manhattan.fit(x_train, y_train, k_max=20)
crossValScore_manhattan = dict(zip(k_array, [knn_d_manhattan.cross_validate(k) for k in k_array]))
knn_d_chebyshev = KNN(distance='chebyshev')
knn_d_chebyshev.fit(x_train, y_train, k_max=20)
crossValScore_chebyshev = dict(zip(k_array, [knn_d_chebyshev.cross_validate(k) for k in k_array]))
# + id="e-BAr9AAdert" outputId="32d16dac-6cff-491c-b7fb-aa8e18b4504d" colab={"base_uri": "https://localhost:8080/", "height": 279}
plt.xlabel('K')
plt.ylabel('Empirical loss')
plt.plot(*zip(*sorted(crossValScore_euclidean.items())), label="Euclidean")
plt.plot(*zip(*sorted(crossValScore_bestMinkowski.items())), label="BestMinkowski")
plt.plot(*zip(*sorted(crossValScore_manhattan.items())), label="Manhattan")
plt.plot(*zip(*sorted(crossValScore_chebyshev.items())), label="Chebyshev")
plt.legend()
plt.show()
# + [markdown] id="LUR10GioClXd"
# Processing with different filters.
# + id="rMGlC1Q30FDh"
def imageFilter(data, method):
    '''
    Preprocessing:
    data -> np.array image or dataset of images
    method:
    1. Gaussian: a Gaussian kernel is used; Gaussian filtering is highly effective
       in removing Gaussian noise from the image.
    2. Averaging: each pixel is replaced by the mean of all pixels under a 5x5
       kernel window.
    '''
    if method == 'gaussian':
        return np.apply_along_axis(lambda img: cv2.GaussianBlur(img.astype(dtype=np.uint8).reshape(28,-1),(5,5),0).flatten(),1,data)
    if method == 'averaging':
        kernel = np.ones((5,5),np.float32)/25
        return np.apply_along_axis(lambda img: cv2.filter2D(img.astype(dtype=np.uint8).reshape(28,-1),-1,kernel).flatten(),1,data)
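# The averaging kernel replaces each pixel with the mean of its neighborhood. A minimal pure-Python sketch of that operation on a tiny image, using a 3x3 window for brevity (the notebook itself uses `cv2.filter2D` with a 5x5 kernel):

```python
def mean_filter(img, k=3):
    """Valid-mode mean filter: each output pixel averages a k x k window."""
    h, w = len(img), len(img[0])
    out = []
    for i in range(h - k + 1):
        row = []
        for j in range(w - k + 1):
            window = [img[i + di][j + dj] for di in range(k) for dj in range(k)]
            row.append(sum(window) / (k * k))
        out.append(row)
    return out

# A 4x4 image with one bright pixel: the filter spreads it over each window.
img = [[0, 0, 0, 0],
       [0, 9, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
print(mean_filter(img))  # every 3x3 window contains the 9, so each output is 1.0
```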
# + id="_ZVkHnMiJcow"
knn = KNN(p=11)
k_array = np.arange(2,21)
start = time.time()
knn.fit(train_gaussian, y_train, k_max=20)
score_gaussian = dict(zip(k_array, [knn.cross_validate(k) for k in k_array]))
print('GaussianBlurring preprocessing:',time.time()-start)
start = time.time()
knn.fit(train_averaging, y_train, k_max=20)
score_median = dict(zip(k_array, [knn.cross_validate(k) for k in k_array]))
print('Averaging kernel preprocessing:',time.time()-start)
start = time.time()
knn.fit(x_train, y_train, k_max=20)
score_normal = dict(zip(k_array, [knn.cross_validate(k) for k in k_array]))
print('No preprocessing:',time.time()-start)
# + id="iJdMFJkxRvmR" outputId="56ca6373-f96f-4ebb-aef2-488fba8ea9f4" colab={"base_uri": "https://localhost:8080/", "height": 51}
from sklearn.preprocessing import MinMaxScaler
knn = KNN(p=11)
k_array = np.arange(1,21)
x_train_n = pd.DataFrame(MinMaxScaler().fit_transform(x_train))
start = time.time()
knn.fit(x_train_n, y_train, k_max=20)
norm_score = dict(zip(k_array, [knn.cross_validate(k) for k in k_array]))
print('Normalized data:',time.time()-start)
start = time.time()
knn.fit(x_train, y_train, k_max=20)
score = dict(zip(k_array, [knn.cross_validate(k) for k in k_array]))
print('Not normalized data:',time.time()-start)
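# `MinMaxScaler` rescales each feature column to [0, 1] via x' = (x - min) / (max - min). A pure-Python sketch of the per-feature transform (illustrative pixel intensities):

```python
def min_max_scale(column):
    """Rescale a single feature column to the [0, 1] range."""
    lo, hi = min(column), max(column)
    return [(x - lo) / (hi - lo) for x in column]

print(min_max_scale([0, 128, 255]))  # [0.0, ~0.502, 1.0]
```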
# + id="rk8vXewcoH5T"
mnist_pca = PCA(n_components=0.90)  # 90% of the variability is captured by these PCs
mnist_pca.fit(x_train)
components = mnist_pca.transform(x_train)
test = mnist_pca.transform(x_test)
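# `n_components=0.90` asks PCA to keep the smallest number of principal components whose explained-variance ratios sum to at least 0.90. A pure-Python sketch of that ratio for two features, using the closed-form eigenvalues of a 2x2 covariance matrix (illustrative numbers, not MNIST data):

```python
import math

def explained_variance_ratio_2d(xs, ys):
    """Eigenvalues of the 2x2 covariance matrix as fractions of total variance."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) ** 2 for x in xs) / (n - 1)                     # var(x)
    c = sum((y - my) ** 2 for y in ys) / (n - 1)                     # var(y)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)   # cov(x, y)
    mid, half = (a + c) / 2, math.sqrt(((a - c) / 2) ** 2 + b ** 2)
    lam1, lam2 = mid + half, mid - half
    total = lam1 + lam2
    return lam1 / total, lam2 / total

# Strongly correlated features: the first component captures almost all variance.
r1, r2 = explained_variance_ratio_2d([1, 2, 3, 4], [1.1, 2.0, 2.9, 4.2])
print(r1, r2)  # r1 is close to 1.0, so one component would already exceed 0.90
```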
| KNN implmentation on MNIST.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Summary: Compare model results and final model selection
#
# Using the Titanic dataset from [this](https://www.kaggle.com/c/titanic/overview) Kaggle competition.
#
# In this section, we will do the following:
# 1. Evaluate all of our saved models on the validation set
# 2. Select the best model based on performance on the validation set
# 3. Evaluate that model on the holdout test set
# ### Read Data for Valuation and Test
# +
import joblib
import pandas as pd
from sklearn.metrics import accuracy_score, precision_score, recall_score
from time import time
val_features = pd.read_csv('../dataset/val_features.csv')
val_labels = pd.read_csv('../dataset/val_labels.csv', header=None)
te_features = pd.read_csv('../dataset/test_features.csv')
te_labels = pd.read_csv('../dataset/test_labels.csv', header=None)
print("Import Completed")
# -
# ### Read Models
# +
models = {}
#for mdl in ['LR', 'SVM', 'MLP', 'RF', 'GB']:
# models[mdl] = joblib.load('../../../{}_model.pkl'.format(mdl))
for mdl in ['SVM','RF','GB']:
models[mdl] = joblib.load('../models/{}_model.pkl'.format(mdl))
# -
models
# ### Evaluate models on the validation set
# - **Accuracy** = # predicted correctly / total # of examples
# - **Precision** = # predicted as surviving that actually survived / total # predicted to survive
# - **Recall** = # predicted as surviving that actually survived / total # that actually survived
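# These three metrics can be computed directly from confusion-matrix counts; a pure-Python sketch with illustrative counts (not taken from this dataset):

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision and recall from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)   # of those predicted to survive, how many did
    recall = tp / (tp + fn)      # of those who survived, how many were found
    return accuracy, precision, recall

acc, prec, rec = classification_metrics(tp=60, fp=20, fn=15, tn=105)
print(acc, prec, rec)  # 0.825 0.75 0.8
```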
def evaluate_model(name, model, features, labels):
start = time()
pred = model.predict(features)
end = time()
accuracy = round(accuracy_score(labels, pred), 3)
precision = round(precision_score(labels, pred), 3)
recall = round(recall_score(labels, pred), 3)
print('{} -- Accuracy: {} / Precision: {} / Recall: {} / Latency: {}ms'.format(name,
accuracy,
precision,
recall,
round((end - start)*1000, 1)))
for name, mdl in models.items():
evaluate_model(name, mdl, val_features, val_labels)
# ### Evaluate best model on test set
evaluate_model('Random Forest', models['RF'], te_features, te_labels)
| Hands-on/modelling/Final model selection and evaluation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/souravgopal25/Data-Structure-Algorithm-Nanodegree/blob/master/BFS.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="gR1FITuBAUOu" colab_type="text"
# #BFS
# + id="SaIn3CG6-mDx" colab_type="code" colab={}
# this code makes the tree that we'll traverse
class Node(object):
def __init__(self,value = None):
self.value = value
self.left = None
self.right = None
def set_value(self,value):
self.value = value
def get_value(self):
return self.value
def set_left_child(self,left):
self.left = left
def set_right_child(self, right):
self.right = right
def get_left_child(self):
return self.left
def get_right_child(self):
return self.right
def has_left_child(self):
return self.left != None
def has_right_child(self):
return self.right != None
# define __repr_ to decide what a print statement displays for a Node object
def __repr__(self):
return f"Node({self.get_value()})"
def __str__(self):
return f"Node({self.get_value()})"
class Tree():
def __init__(self, value=None):
self.root = Node(value)
def get_root(self):
return self.root
# + id="VpHOrTYc_qC8" colab_type="code" colab={}
tree = Tree("apple")
tree.get_root().set_left_child(Node("banana"))
tree.get_root().set_right_child(Node("cherry"))
tree.get_root().get_left_child().set_left_child(Node("dates"))
# + id="dY6XbsTZ_rR5" colab_type="code" colab={}
from collections import deque
class Queue():
def __init__(self):
self.q = deque()
def enq(self,value):
self.q.appendleft(value)
def deq(self):
if len(self.q) > 0:
return self.q.pop()
else:
return None
def __len__(self):
return len(self.q)
def __repr__(self):
if len(self.q) > 0:
s = "<enqueue here>\n_________________\n"
s += "\n_________________\n".join([str(item) for item in self.q])
s += "\n_________________\n<dequeue here>"
return s
else:
return "<queue is empty>"
# + id="_oODHxTY_y8p" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 102} outputId="2dc54e66-1791-4d0b-983c-7cd7c25920ba"
visit_order = list()
q = Queue()
# start at the root node and add it to the queue
node = tree.get_root()
q.enq(node)
print(q)
# + id="ayQv-juuADZZ" colab_type="code" colab={}
def bfs(tree):
q = Queue()
visit_order = list()
node = tree.get_root()
q.enq(node)
while(len(q) > 0):
node = q.deq()
visit_order.append(node)
if node.has_left_child():
q.enq(node.get_left_child())
if node.has_right_child():
q.enq(node.get_right_child())
return visit_order
# + id="fO6X0Xf3ANHJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="bcd9f140-149d-443f-df7b-23acbde4ad6f"
bfs(tree)
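# The same level-by-level pattern works for any graph, not just binary trees; a minimal self-contained sketch using an adjacency dict and `collections.deque` (illustrative graph mirroring the tree above):

```python
from collections import deque

def bfs_graph(adj, start):
    """Visit nodes level by level, tracking visited nodes to handle cycles."""
    visited, order = {start}, []
    q = deque([start])
    while q:
        node = q.popleft()
        order.append(node)
        for nbr in adj[node]:
            if nbr not in visited:
                visited.add(nbr)
                q.append(nbr)
    return order

adj = {"apple": ["banana", "cherry"], "banana": ["dates"], "cherry": [], "dates": []}
print(bfs_graph(adj, "apple"))  # ['apple', 'banana', 'cherry', 'dates']
```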
# + id="6DBa3pxvAPQX" colab_type="code" colab={}
| BFS.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <!--BOOK_INFORMATION-->
# <img align="left" style="padding-right:10px;" src="figures/PDSH-cover-small.png">
# *This notebook contains an excerpt from the [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by <NAME>; the content is available [on GitHub](https://github.com/jakevdp/PythonDataScienceHandbook).*
#
# *The text is released under the [CC-BY-NC-ND license](https://creativecommons.org/licenses/by-nc-nd/3.0/us/legalcode), and code is released under the [MIT license](https://opensource.org/licenses/MIT). If you find this content useful, please consider supporting the work by [buying the book](http://shop.oreilly.com/product/0636920034919.do)!*
#
# *No changes were made to the contents of this notebook from the original.*
# <!--NAVIGATION-->
# < [Introduction to NumPy](02.00-Introduction-to-NumPy.ipynb) | [Contents](Index.ipynb) | [The Basics of NumPy Arrays](02.02-The-Basics-Of-NumPy-Arrays.ipynb) >
# # Understanding Data Types in Python
# Effective data-driven science and computation requires understanding how data is stored and manipulated.
# This section outlines and contrasts how arrays of data are handled in the Python language itself, and how NumPy improves on this.
# Understanding this difference is fundamental to understanding much of the material throughout the rest of the book.
#
# Users of Python are often drawn-in by its ease of use, one piece of which is dynamic typing.
# While a statically-typed language like C or Java requires each variable to be explicitly declared, a dynamically-typed language like Python skips this specification. For example, in C you might specify a particular operation as follows:
#
# ```C
# /* C code */
# int result = 0;
# for(int i=0; i<100; i++){
# result += i;
# }
# ```
#
# While in Python the equivalent operation could be written this way:
#
# ```python
# # Python code
# result = 0
# for i in range(100):
# result += i
# ```
#
# Notice the main difference: in C, the data types of each variable are explicitly declared, while in Python the types are dynamically inferred. This means, for example, that we can assign any kind of data to any variable:
#
# ```python
# # Python code
# x = 4
# x = "four"
# ```
#
# Here we've switched the contents of ``x`` from an integer to a string. The same thing in C would lead (depending on compiler settings) to a compilation error or other unintended consequences:
#
# ```C
# /* C code */
# int x = 4;
# x = "four"; // FAILS
# ```
#
# This sort of flexibility is one piece that makes Python and other dynamically-typed languages convenient and easy to use.
# Understanding *how* this works is an important piece of learning to analyze data efficiently and effectively with Python.
# But what this type-flexibility also points to is the fact that Python variables are more than just their value; they also contain extra information about the type of the value. We'll explore this more in the sections that follow.
# ## A Python Integer Is More Than Just an Integer
#
# The standard Python implementation is written in C.
# This means that every Python object is simply a cleverly-disguised C structure, which contains not only its value, but other information as well. For example, when we define an integer in Python, such as ``x = 10000``, ``x`` is not just a "raw" integer. It's actually a pointer to a compound C structure, which contains several values.
# Looking through the Python 3.4 source code, we find that the integer (long) type definition effectively looks like this (once the C macros are expanded):
#
# ```C
# struct _longobject {
# long ob_refcnt;
# PyTypeObject *ob_type;
# size_t ob_size;
# long ob_digit[1];
# };
# ```
#
# A single integer in Python 3.4 actually contains four pieces:
#
# - ``ob_refcnt``, a reference count that helps Python silently handle memory allocation and deallocation
# - ``ob_type``, which encodes the type of the variable
# - ``ob_size``, which specifies the size of the following data members
# - ``ob_digit``, which contains the actual integer value that we expect the Python variable to represent.
#
# This means that there is some overhead in storing an integer in Python as compared to an integer in a compiled language like C, as illustrated in the following figure:
# 
# Here ``PyObject_HEAD`` is the part of the structure containing the reference count, type code, and other pieces mentioned before.
#
# Notice the difference here: a C integer is essentially a label for a position in memory whose bytes encode an integer value.
# A Python integer is a pointer to a position in memory containing all the Python object information, including the bytes that contain the integer value.
# This extra information in the Python integer structure is what allows Python to be coded so freely and dynamically.
# All this additional information in Python types comes at a cost, however, which becomes especially apparent in structures that combine many of these objects.
# ## A Python List Is More Than Just a List
#
# Let's consider now what happens when we use a Python data structure that holds many Python objects.
# The standard mutable multi-element container in Python is the list.
# We can create a list of integers as follows:
L = list(range(10))
L
type(L[0])
# Or, similarly, a list of strings:
L2 = [str(c) for c in L]
L2
type(L2[0])
# Because of Python's dynamic typing, we can even create heterogeneous lists:
L3 = [True, "2", 3.0, 4]
[type(item) for item in L3]
# But this flexibility comes at a cost: to allow these flexible types, each item in the list must contain its own type info, reference count, and other information–that is, each item is a complete Python object.
# In the special case that all variables are of the same type, much of this information is redundant: it can be much more efficient to store data in a fixed-type array.
# The difference between a dynamic-type list and a fixed-type (NumPy-style) array is illustrated in the following figure:
# 
# At the implementation level, the array essentially contains a single pointer to one contiguous block of data.
# The Python list, on the other hand, contains a pointer to a block of pointers, each of which in turn points to a full Python object like the Python integer we saw earlier.
# Again, the advantage of the list is flexibility: because each list element is a full structure containing both data and type information, the list can be filled with data of any desired type.
# Fixed-type NumPy-style arrays lack this flexibility, but are much more efficient for storing and manipulating data.
# ## Fixed-Type Arrays in Python
#
# Python offers several different options for storing data in efficient, fixed-type data buffers.
# The built-in ``array`` module (available since Python 3.3) can be used to create dense arrays of a uniform type:
import array
L = list(range(10))
A = array.array('i', L)
A
# Here ``'i'`` is a type code indicating the contents are integers.
#
# Much more useful, however, is the ``ndarray`` object of the NumPy package.
# While Python's ``array`` object provides efficient storage of array-based data, NumPy adds to this efficient *operations* on that data.
# We will explore these operations in later sections; here we'll demonstrate several ways of creating a NumPy array.
#
# We'll start with the standard NumPy import, under the alias ``np``:
import numpy as np
# ## Creating Arrays from Python Lists
#
# First, we can use ``np.array`` to create arrays from Python lists:
# integer array:
np.array([1, 4, 2, 5, 3])
# Remember that unlike Python lists, NumPy is constrained to arrays that all contain the same type.
# If types do not match, NumPy will upcast if possible (here, integers are up-cast to floating point):
np.array([3.14, 4, 2, 3])
# If we want to explicitly set the data type of the resulting array, we can use the ``dtype`` keyword:
np.array([1, 2, 3, 4], dtype='float32')
# Finally, unlike Python lists, NumPy arrays can explicitly be multi-dimensional; here's one way of initializing a multidimensional array using a list of lists:
# nested lists result in multi-dimensional arrays
np.array([range(i, i + 3) for i in [2, 4, 6]])
# The inner lists are treated as rows of the resulting two-dimensional array.
# ## Creating Arrays from Scratch
#
# Especially for larger arrays, it is more efficient to create arrays from scratch using routines built into NumPy.
# Here are several examples:
# Create a length-10 integer array filled with zeros
np.zeros(10, dtype=int)
# Create a 3x5 floating-point array filled with ones
np.ones((3, 5), dtype=float)
# Create a 3x5 array filled with 3.14
np.full((3, 5), 3.14)
# Create an array filled with a linear sequence
# Starting at 0, ending at 20, stepping by 2
# (this is similar to the built-in range() function)
np.arange(0, 20, 2)
# Create an array of five values evenly spaced between 0 and 1
np.linspace(0, 1, 5)
# Create a 3x3 array of uniformly distributed
# random values between 0 and 1
np.random.random((3, 3))
# Create a 3x3 array of normally distributed random values
# with mean 0 and standard deviation 1
np.random.normal(0, 1, (3, 3))
# Create a 3x3 array of random integers in the interval [0, 10)
np.random.randint(0, 10, (3, 3))
# Create a 3x3 identity matrix
np.eye(3)
# Create an uninitialized array of three integers
# The values will be whatever happens to already exist at that memory location
np.empty(3)
# ## NumPy Standard Data Types
#
# NumPy arrays contain values of a single type, so it is important to have detailed knowledge of those types and their limitations.
# Because NumPy is built in C, the types will be familiar to users of C, Fortran, and other related languages.
#
# The standard NumPy data types are listed in the following table.
# Note that when constructing an array, they can be specified using a string:
#
# ```python
# np.zeros(10, dtype='int16')
# ```
#
# Or using the associated NumPy object:
#
# ```python
# np.zeros(10, dtype=np.int16)
# ```
# | Data type | Description |
# |---------------|-------------|
# | ``bool_`` | Boolean (True or False) stored as a byte |
# | ``int_`` | Default integer type (same as C ``long``; normally either ``int64`` or ``int32``)|
# | ``intc`` | Identical to C ``int`` (normally ``int32`` or ``int64``)|
# | ``intp`` | Integer used for indexing (same as C ``ssize_t``; normally either ``int32`` or ``int64``)|
# | ``int8`` | Byte (-128 to 127)|
# | ``int16`` | Integer (-32768 to 32767)|
# | ``int32`` | Integer (-2147483648 to 2147483647)|
# | ``int64`` | Integer (-9223372036854775808 to 9223372036854775807)|
# | ``uint8`` | Unsigned integer (0 to 255)|
# | ``uint16`` | Unsigned integer (0 to 65535)|
# | ``uint32`` | Unsigned integer (0 to 4294967295)|
# | ``uint64`` | Unsigned integer (0 to 18446744073709551615)|
# | ``float_`` | Shorthand for ``float64``.|
# | ``float16`` | Half precision float: sign bit, 5 bits exponent, 10 bits mantissa|
# | ``float32`` | Single precision float: sign bit, 8 bits exponent, 23 bits mantissa|
# | ``float64`` | Double precision float: sign bit, 11 bits exponent, 52 bits mantissa|
# | ``complex_`` | Shorthand for ``complex128``.|
# | ``complex64`` | Complex number, represented by two 32-bit floats|
# | ``complex128``| Complex number, represented by two 64-bit floats|
# More advanced type specification is possible, such as specifying big or little endian numbers; for more information, refer to the [NumPy documentation](http://numpy.org/).
# NumPy also supports compound data types, which will be covered in [Structured Data: NumPy's Structured Arrays](02.09-Structured-Data-NumPy.ipynb).
# <!--NAVIGATION-->
# < [Introduction to NumPy](02.00-Introduction-to-NumPy.ipynb) | [Contents](Index.ipynb) | [The Basics of NumPy Arrays](02.02-The-Basics-Of-NumPy-Arrays.ipynb) >
| numpy/02.01-Understanding-Data-Types.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Correlating Returns
import alpaca_trade_api as tradeapi
import pandas as pd
from newsapi.newsapi_client import NewsApiClient
from datetime import date, datetime, timedelta
from nltk.sentiment.vader import SentimentIntensityAnalyzer
import nltk
nltk.download('vader_lexicon')
import os
from dotenv import load_dotenv ## We need this library to load the .env file
from pathlib import Path
# ## Load API Keys from Environment Variables
# +
abs_path = Path(r'C:/Users/metin/Documents/nufintech/.env')
load_dotenv(abs_path)
# Set News API Key
newsapi = NewsApiClient(api_key=os.environ["NEWS_API_KEY"])
# Set Alpaca API key and secret
alpaca_api_key = os.getenv("ALPACA_API_KEY")
alpaca_secret_key = os.getenv("ALPACA_SECRET_KEY")
api = tradeapi.REST(alpaca_api_key, alpaca_secret_key, api_version='v2')
# -
# ## Get AAPL Returns for Past Month
# +
# Set the ticker
ticker = "AAPL"
# Set timeframe to '1D'
timeframe = '1D'
# Get current date and the date from one month ago
current_date = date.today()
past_date = date.today() - timedelta(weeks=4)
# Get 4 weeks worth of historical data for AAPL
df = api.get_barset(
ticker,
timeframe,
limit=None,
    start=past_date,
    end=current_date,
after=None,
until=None,
).df
df.head()
# +
# Drop Outer Table Level
df = df.droplevel(axis=1, level=0)
# Use the drop function to drop extra columns
df.drop(columns=['open', 'high', 'low', 'volume'], inplace=True)
# Since this is daily data, we can keep only the date (remove the time) component of the data
df.index = df.index.date
df.head()
# -
# Use the `pct_change` function to calculate daily returns of AAPL
aapl_returns = df.pct_change()
aapl_returns.head()
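# `pct_change` computes the return r_t = p_t / p_{t-1} - 1 for each consecutive pair of prices; a pure-Python sketch with illustrative prices:

```python
def pct_change(prices):
    """Daily returns: each price relative to the previous one (first entry has none)."""
    return [None] + [prices[i] / prices[i - 1] - 1 for i in range(1, len(prices))]

print(pct_change([100.0, 110.0, 99.0]))  # [None, ~0.1, ~-0.1]
```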
# +
# Use newsapi client to get most relevant 20 headlines per day in the past month
def get_headlines(keyword):
all_headlines = []
all_dates = []
date = current_date
print(f"Fetching news about '{keyword}'")
print("*" * 30)
while date > past_date:
print(f"retrieving news from: {date}")
articles = newsapi.get_everything(
q=keyword,
from_param=str(date),
to=str(date),
language="en",
sort_by="relevancy",
page=1,
)
headlines = []
for i in range(0, len(articles["articles"])):
headlines.append(articles["articles"][i]["title"])
all_headlines.append(headlines)
all_dates.append(date)
date = date - timedelta(days=1)
return all_headlines, all_dates
# -
# Get first topic
aapl_headlines, dates = get_headlines("apple")
# Get second topic
trade_headlines, _ = get_headlines("trade")
# Get third topic
economy_headlines, _ = get_headlines("economy")
# Get fourth topic
iphone_headlines, _ = get_headlines("iphone")
# Get fifth topic
gold_headlines, _ = get_headlines("gold")
# Instantiate SentimentIntensityAnalyzer
sid = SentimentIntensityAnalyzer()
# Create function that computes average compound sentiment of headlines for each day
def headline_sentiment_summarizer_avg(headlines):
sentiment = []
for day in headlines:
day_score = []
for h in day:
if h == None:
continue
else:
day_score.append(sid.polarity_scores(h)["compound"])
sentiment.append(sum(day_score) / len(day_score))
return sentiment
# Get averages of each topics sentiment
aapl_avg = headline_sentiment_summarizer_avg(aapl_headlines)
trade_avg = headline_sentiment_summarizer_avg(trade_headlines)
economy_avg = headline_sentiment_summarizer_avg(economy_headlines)
iphone_avg = headline_sentiment_summarizer_avg(iphone_headlines)
gold_avg = headline_sentiment_summarizer_avg(gold_headlines)
# Combine Sentiment Averages into DataFrame
topic_sentiments = pd.DataFrame(
{
"aapl_avg": aapl_avg,
"trade_avg": trade_avg,
"economy_avg": economy_avg,
"iphone_avg": iphone_avg,
"gold_avg": gold_avg,
}
)
# Set the index value of the sentiment averages DataFrame to be the series of dates.
topic_sentiments.index = pd.to_datetime(dates)
# +
# Merge with AAPL returns
topic_sentiments = aapl_returns.join(topic_sentiments).dropna(how="any")
display(topic_sentiments)
# -
# Correlate the headlines' sentiment to returns
topic_sentiments.corr().style.background_gradient()
| Resources/resource_notebooks/correlating_returns.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
from sklearn import tree
X = [[0, 0], [1, 1]]
Y = [0, 1]
clf = tree.DecisionTreeClassifier()
clf = clf.fit(X, Y)
clf.predict([[2., 2.]])
# +
import sys
sys.path.append('./projects/tools/')
from email_preprocess import preprocess
import numpy as np
import matplotlib.pyplot as plt
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score
from time import time
# -
features_train, features_test, labels_train, labels_test = preprocess(words_file = './projects/tools/word_data.pkl',
authors_file ='./projects/tools/email_authors.pkl')
# +
features_train = features_train[:len(features_train)]
labels_train = labels_train[:len(labels_train)]
C = [10000.]
for c in C:
clf = SVC(C=c, kernel="rbf")
t0 = time()
y_fit = clf.fit(features_train, labels_train)
print ("training time:", round(time()-t0, 3), "s")
t0 = time()
pred = clf.predict(features_test)
print ("predict time:", round(time()-t0, 3), "s")
acc = accuracy_score(pred, labels_test, normalize = True)
print(acc)
print('\n')
# plt.scatter(features_train[:0], labels_train, c=labels_test)
# plt.show()
# +
# features_train = features_train[:len(features_train)/100]
# labels_train = labels_train[:len(labels_train)/100]
features_train = features_train[:len(features_train)]
labels_train = labels_train[:len(labels_train)]
C = 10000.
clf = SVC(C=C, kernel="rbf")
t0 = time()
y_fit = clf.fit(features_train, labels_train)
print ("training time:", round(time()-t0, 3), "s")
t0 = time()
pred = clf.predict(features_test)
print ("predict time:", round(time()-t0, 3), "s")
# -
answer = [pred[10],pred[26],pred[50]]
print(answer)
np.count_nonzero(pred==1)
acc = accuracy_score(pred, labels_test, normalize = True)
print(acc)
print('\n')
| notebook/03-udacityIntroductionToMachineLearning/.ipynb_checkpoints/03.1-Decision-Trees-Introduction-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# ====================================================
# Library
# ====================================================
import os
import re
import numpy as np
import pandas as pd
from tqdm.auto import tqdm
tqdm.pandas()
import torch
# ====================================================
# Data Loading
# ====================================================
train = pd.read_csv('./input/bms-molecular-translation/train_labels.csv')
print(f'train.shape: {train.shape}')
# ====================================================
# Preprocess functions
# ====================================================
def split_form(form):
string = ''
for i in re.findall(r"[A-Z][^A-Z]*", form):
elem = re.match(r"\D+", i).group()
num = i.replace(elem, "")
if num == "":
string += f"{elem} "
else:
string += f"{elem} {str(num)} "
return string.rstrip(' ')
def split_form2(form):
string = ''
for i in re.findall(r"[a-z][^a-z]*", form):
elem = i[0]
num = i.replace(elem, "").replace('/', "")
num_string = ''
for j in re.findall(r"[0-9]+[^0-9]*", num):
num_list = list(re.findall(r'\d+', j))
assert len(num_list) == 1, f"len(num_list) != 1"
_num = num_list[0]
if j == _num:
num_string += f"{_num} "
else:
extra = j.replace(_num, "")
num_string += f"{_num} {' '.join(list(extra))} "
string += f"/{elem} {num_string}"
return string.rstrip(' ')
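# `split_form` tokenizes the chemical-formula layer of an InChI string by inserting spaces between element symbols and their counts. A self-contained sketch of the same logic, condensed slightly from the function above, traced on an example formula:

```python
import re

def split_form(form):
    """Split a molecular formula like 'C13H20OS' into space-separated tokens."""
    tokens = []
    for part in re.findall(r"[A-Z][^A-Z]*", form):  # e.g. 'C13', 'H20', 'O', 'S'
        elem = re.match(r"\D+", part).group()       # element symbol
        num = part[len(elem):]                      # optional atom count
        tokens.append(f"{elem} {num}" if num else elem)
    return " ".join(tokens)

print(split_form("C13H20OS"))  # 'C 13 H 20 O S'
```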
# ====================================================
# Tokenizer
# ====================================================
class Tokenizer(object):
def __init__(self):
self.stoi = {}
self.itos = {}
def __len__(self):
return len(self.stoi)
def fit_on_texts(self, texts):
vocab = set()
for text in texts:
vocab.update(text.split(' '))
vocab = sorted(vocab)
vocab.append('<sos>')
vocab.append('<eos>')
vocab.append('<pad>')
for i, s in enumerate(vocab):
self.stoi[s] = i
self.itos = {item[1]: item[0] for item in self.stoi.items()}
def text_to_sequence(self, text):
sequence = []
sequence.append(self.stoi['<sos>'])
for s in text.split(' '):
sequence.append(self.stoi[s])
sequence.append(self.stoi['<eos>'])
return sequence
def texts_to_sequences(self, texts):
sequences = []
for text in texts:
sequence = self.text_to_sequence(text)
sequences.append(sequence)
return sequences
def sequence_to_text(self, sequence):
return ''.join(list(map(lambda i: self.itos[i], sequence)))
def sequences_to_texts(self, sequences):
texts = []
for sequence in sequences:
text = self.sequence_to_text(sequence)
texts.append(text)
return texts
def predict_caption(self, sequence):
caption = ''
for i in sequence:
if i == self.stoi['<eos>'] or i == self.stoi['<pad>']:
break
caption += self.itos[i]
return caption
def predict_captions(self, sequences):
captions = []
for sequence in sequences:
caption = self.predict_caption(sequence)
captions.append(caption)
return captions
# -
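# The tokenizer maps each space-separated token to an integer id, wrapping sequences in `<sos>`/`<eos>`. A minimal self-contained sketch of the encode/decode round trip, mirroring `fit_on_texts`, `text_to_sequence`, and `predict_caption` above (toy text, not the real InChI vocabulary):

```python
def build_vocab(texts):
    """Token -> id and id -> token maps, with special tokens appended last."""
    vocab = sorted({tok for text in texts for tok in text.split(" ")})
    vocab += ["<sos>", "<eos>", "<pad>"]
    stoi = {tok: i for i, tok in enumerate(vocab)}
    itos = {i: tok for tok, i in stoi.items()}
    return stoi, itos

def encode(text, stoi):
    return [stoi["<sos>"]] + [stoi[t] for t in text.split(" ")] + [stoi["<eos>"]]

def decode(seq, itos):
    """Concatenate tokens until <eos>/<pad>, as predict_caption does."""
    out = ""
    for i in seq[1:]:  # skip the leading <sos>
        if itos[i] in ("<eos>", "<pad>"):
            break
        out += itos[i]
    return out

stoi, itos = build_vocab(["C 13 H 20 O S"])
seq = encode("C 13 H 20 O S", stoi)
print(decode(seq, itos))  # 'C13H20OS' -- the original formula is recovered
```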
# ====================================================
# main
# ====================================================
def main():
# ====================================================
# preprocess train.csv
# ====================================================
train['InChI_1'] = train['InChI'].progress_apply(lambda x: x.split('/')[1])
train['InChI_text'] = train['InChI_1'].progress_apply(split_form) + ' ' + \
train['InChI'].apply(lambda x: '/'.join(x.split('/')[2:])).progress_apply(split_form2).values
# ====================================================
# create tokenizer
# ====================================================
tokenizer = Tokenizer()
tokenizer.fit_on_texts(train['InChI_text'].values)
torch.save(tokenizer, 'tokenizer2.pth')
print('Saved tokenizer')
# ====================================================
# preprocess train.csv
# ====================================================
lengths = []
tk0 = tqdm(train['InChI_text'].values, total=len(train))
for text in tk0:
seq = tokenizer.text_to_sequence(text)
length = len(seq) - 2
lengths.append(length)
train['InChI_length'] = lengths
train.to_pickle('train2.pkl')
print('Saved preprocessed train.pkl')
# + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5"
if __name__ == '__main__':
main()
# -
train.head()
# +
## Create JSON file for AoA
import json
validation_index = int(len(train) * 0.8)
train_df = train.iloc[:validation_index, :]
validation_df = train.iloc[validation_index:, :]
def getTrainImgPath(imageId):
return "./input/bms-molecular-translation/train/{}/{}/{}/{}.png".format(
imageId[0], imageId[1], imageId[2], imageId)
def getTestImgPath(imageId):
return "./input/bms-molecular-translation/test/{}/{}/{}/{}.png".format(
imageId[0], imageId[1], imageId[2], imageId)
obj = []
for i, row in train.iterrows():
path = getTrainImgPath(row["image_id"])
obj.append({"file_path": path, "captions": [row["InChI_text"]], "id": row["image_id"], "split": "restval"})
'''for i, row in test.iterrows():
path = getTestImgPath(row["image_id"])
obj.append({"file_path": path, "captions": [row["InChI_text"]], "id": row["image_id"], "split": "test"})'''
with open('ichi.json', 'w') as f:
json.dump(obj, f)
# -
| inchi-preprocess-2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] toc=true
# <h1>Table of Contents<span class="tocSkip"></span></h1>
# <div class="toc"><ul class="toc-item"><li><span><a href="#Data-Load" data-toc-modified-id="Data-Load-1"><span class="toc-item-num">1 </span>Data Load</a></span></li><li><span><a href="#Vanilla-Backpropagation" data-toc-modified-id="Vanilla-Backpropagation-2"><span class="toc-item-num">2 </span>Vanilla Backpropagation</a></span></li><li><span><a href="#MNIST" data-toc-modified-id="MNIST-3"><span class="toc-item-num">3 </span>MNIST</a></span></li><li><span><a href="#CIFAR10" data-toc-modified-id="CIFAR10-4"><span class="toc-item-num">4 </span>CIFAR10</a></span></li><li><span><a href="#Save-saliency-maps" data-toc-modified-id="Save-saliency-maps-5"><span class="toc-item-num">5 </span>Save saliency maps</a></span><ul class="toc-item"><li><span><a href="#MNIST" data-toc-modified-id="MNIST-5.1"><span class="toc-item-num">5.1 </span>MNIST</a></span></li><li><span><a href="#CIFAR10" data-toc-modified-id="CIFAR10-5.2"><span class="toc-item-num">5.2 </span>CIFAR10</a></span></li></ul></li></ul></div>
# +
import torch
import numpy as np
import sys
import matplotlib.pyplot as plt
sys.path.append('../code')
from dataload import mnist_load, cifar10_load
from saliency.attribution_methods import VanillaBackprop
from saliency.ensembles import *
from utils import get_samples
from visualization import visualize_saliencys
# -
# # Data Load
original_images_mnist, original_targets_mnist, pre_images_mnist, mnist_classes, mnist_model = get_samples('mnist')
original_images_cifar10, original_targets_cifar10, pre_images_cifar10, cifar10_classes, cifar10_model = get_samples('cifar10')
# # Vanilla Backpropagation
#
# $$M(x) = \frac{\partial}{\partial{x}}f(x), f(x) \ is \ the \ output \ of \ the \ target.$$
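# The gradient above can be illustrated without the `VanillaBackprop` helper used below. A minimal sketch with a purely illustrative linear "model" (all names here are assumptions, not the notebook's actual classes): for $f(x) = Wx$, the saliency map $M(x)$ for a target class is just that row of $W$, which we can sanity-check with a finite difference.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((10, 784))   # illustrative linear "model": f(x) = W @ x
x = rng.standard_normal(784)
target = 3

# For this model f(x)[target] = W[target] @ x, so the saliency map
# M(x) = d f_target(x) / d x is exactly the target row of W.
saliency = W[target]

# sanity check against a finite-difference gradient on one input component
eps = 1e-6
x_pert = x.copy()
x_pert[0] += eps
fd = ((W @ x_pert)[target] - (W @ x)[target]) / eps
assert np.isclose(fd, saliency[0], atol=1e-4)
```

# For a real network, an autodiff framework computes the same quantity by backpropagating the target logit to the input, which is what `VanillaBackprop` does internally.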
mnist_VBP = VanillaBackprop(mnist_model)
cifar10_VBP = VanillaBackprop(cifar10_model)
# # MNIST
# +
# vanilla gradients
outputs, probs, preds = mnist_VBP.generate_image(pre_images_mnist, original_targets_mnist)
# ensemble
n = 50
sigma = 2
# vanilla gradients + smooth grad
outputs_SG, _, _ = generate_smooth_grad(pre_images_mnist, original_targets_mnist, n, sigma, mnist_VBP)
# vanilla gradients + smooth square grad
outputs_SG_SQ, _, _ = generate_smooth_square_grad(pre_images_mnist, original_targets_mnist, n, sigma, mnist_VBP)
# vanilla gradients + smooth var grad
outputs_SG_VAR, _, _ = generate_smooth_var_grad(pre_images_mnist, original_targets_mnist, n, sigma, mnist_VBP)
# +
names = ['Vanilla Backprop',
'Vanilla Backprop\nSmoothGrad','Vanilla Backprop\nSmoothGrad-Square','Vanilla Backprop\nSmoothGrad-VAR'] # names
results = [outputs, outputs_SG, outputs_SG_SQ, outputs_SG_VAR]
target = 'mnist'
visualize_saliencys(original_images_mnist,
results,
probs,
preds,
mnist_classes,
names,
target,
col=5, row=10, size=(20,35), labelsize=20, fontsize=25)
# -
# # CIFAR10
# + hide_input=false
# vanilla gradients
outputs, probs, preds = cifar10_VBP.generate_image(pre_images_cifar10, original_targets_cifar10)
# ensemble
n = 50
sigma = 2
# vanilla gradients + smooth grad
outputs_SG, _, _ = generate_smooth_grad(pre_images_cifar10, original_targets_cifar10, n, sigma, cifar10_VBP)
# vanilla gradients + smooth square grad
outputs_SG_SQ, _, _ = generate_smooth_square_grad(pre_images_cifar10, original_targets_cifar10, n, sigma, cifar10_VBP)
# vanilla gradients + smooth var grad
outputs_SG_VAR, _, _ = generate_smooth_var_grad(pre_images_cifar10, original_targets_cifar10, n, sigma, cifar10_VBP)
# + hide_input=false
names = ['Vanilla Backprop',
'Vanilla Backprop\nSmoothGrad','Vanilla Backprop\nSmoothGrad-Square','Vanilla Backprop\nSmoothGrad-VAR'] # names
results = [outputs, outputs_SG, outputs_SG_SQ, outputs_SG_VAR]
target = 'cifar10'
visualize_saliencys(original_images_cifar10,
results,
probs,
preds,
cifar10_classes,
names,
target,
col=5, row=10, size=(20,35), labelsize=20, fontsize=25)
# -
# # Save saliency maps
# ## MNIST
trainloader, validloader, testloader = mnist_load(shuffle=False)
mnist_VBP.save_saliency_map(trainloader, '../saliency_maps/[mnist]VBP_train.hdf5')
mnist_VBP.save_saliency_map(validloader, '../saliency_maps/[mnist]VBP_valid.hdf5')
mnist_VBP.save_saliency_map(testloader, '../saliency_maps/[mnist]VBP_test.hdf5')
# ## CIFAR10
trainloader, validloader, testloader = cifar10_load(shuffle=False, augmentation=False)
cifar10_VBP.save_saliency_map(trainloader, '../saliency_maps/[cifar10]VBP_train.hdf5')
cifar10_VBP.save_saliency_map(validloader, '../saliency_maps/[cifar10]VBP_valid.hdf5')
cifar10_VBP.save_saliency_map(testloader, '../saliency_maps/[cifar10]VBP_test.hdf5')
| notebook/[Attribution] - Vanilla Backpropagation & Ensemble.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Import all the software necessary for the final project
# +
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' # or any {'0', '1', '2'}
from tensorflow.keras.layers import Input, Dense, GlobalAveragePooling2D, Dropout, Flatten, Conv2D, MaxPool2D, Reshape
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.callbacks import ModelCheckpoint, TensorBoard
from sklearn.metrics import roc_curve, auc
# -
# ### Import Xception from Keras
from tensorflow.keras.applications.xception import Xception, preprocess_input
xception = tf.keras.applications.xception.Xception(weights='imagenet',include_top=True)
xception.summary()
# ### Change the final layer(s) of the model
#
# The final layer(s) of the model need to be changed to adapt Xception to the project. The following layer(s) will be changed/added:
#
# ...
# +
IMAGE_SIZE = 299
input_shape = (IMAGE_SIZE, IMAGE_SIZE, 3)
input = Input(input_shape)
output = xception(input)
output = Dropout(0.5)(output)
output = Dense(1, activation='sigmoid')(output)
model = Model(input, output)
model.compile(SGD(learning_rate=0.001, momentum=0.95), loss='binary_crossentropy', metrics=['accuracy'])
model.summary()
# -
# ### Get the datagenerators
def get_pcam_generators(base_dir, train_batch_size=32, val_batch_size=32):
# dataset parameters
train_path = os.path.join(base_dir, 'train+val', 'train')
valid_path = os.path.join(base_dir, 'train+val', 'valid')
# instantiate data generators
datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
train_gen = datagen.flow_from_directory(train_path,
target_size=(IMAGE_SIZE, IMAGE_SIZE),
batch_size=train_batch_size,
class_mode='binary')
val_gen = datagen.flow_from_directory(valid_path,
target_size=(IMAGE_SIZE, IMAGE_SIZE),
batch_size=val_batch_size,
class_mode='binary')
return train_gen, val_gen
train_gen, val_gen = get_pcam_generators(r'C:\Users\20173884\Documents\8P361')
# ### Fine-tune the new model
# +
# save the model and weights
model_name = 'transfer_Xception_model'
model_filepath = model_name + '.json'
weights_filepath = model_name + '_weights.hdf5'
model_json = model.to_json() # serialize model to JSON
with open(model_filepath, 'w') as json_file:
json_file.write(model_json)
# define the model checkpoint and Tensorboard callbacks
checkpoint = ModelCheckpoint(weights_filepath, monitor='val_loss', verbose=1, save_best_only=True, mode='min')
tensorboard = TensorBoard(os.path.join('logs', model_name))
callbacks_list = [checkpoint, tensorboard]
# train the model; note that we define "mini-epochs" whose steps cover
# only 1/20 of the data each
train_steps = train_gen.n//train_gen.batch_size//20
val_steps = val_gen.n//val_gen.batch_size//20
# since the model is trained for only 10 such "mini-epochs", half of the
# data is never seen during training
history = model.fit_generator(train_gen, steps_per_epoch=train_steps,
validation_data=val_gen,
validation_steps=val_steps,
epochs=10,
callbacks=callbacks_list)
| Final project/Xception_tranfer_learning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ---
#
# _You are currently looking at **version 1.0** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-text-mining/resources/d9pwm) course resource._
#
# ---
# # Assignment 2 - Introduction to NLTK
#
# In part 1 of this assignment you will use nltk to explore the <NAME> novel <NAME>. Then in part 2 you will create a spelling recommender function that uses nltk to find words similar to the misspelling.
# ## Part 1 - Analyzing <NAME>
# +
import nltk
import pandas as pd
import numpy as np
nltk.download('punkt')
# If you would like to work with the raw text you can use 'moby_raw'
with open('moby.txt', 'r') as f:
moby_raw = f.read()
# If you would like to work with the novel in nltk.Text format you can use 'text1'
moby_tokens = nltk.word_tokenize(moby_raw)
text1 = nltk.Text(moby_tokens)
# -
# ### Example 1
#
# How many tokens (words and punctuation symbols) are in text1?
#
# *This function should return an integer.*
# +
def example_one():
return len(nltk.word_tokenize(moby_raw)) # or alternatively len(text1)
example_one()
# -
# ### Example 2
#
# How many unique tokens (unique words and punctuation) does text1 have?
#
# *This function should return an integer.*
# +
def example_two():
return len(set(nltk.word_tokenize(moby_raw))) # or alternatively len(set(text1))
example_two()
# -
# ### Example 3
#
# After lemmatizing the verbs, how many unique tokens does text1 have?
#
# *This function should return an integer.*
# +
from nltk.stem import WordNetLemmatizer
nltk.download('wordnet')
def example_three():
lemmatizer = WordNetLemmatizer()
lemmatized = [lemmatizer.lemmatize(w,'v') for w in text1]
return len(set(lemmatized))
example_three()
# -
# ### Question 1
#
# What is the lexical diversity of the given text input? (i.e. ratio of unique tokens to the total number of tokens)
#
# *This function should return a float.*
# +
def answer_one():
x = example_two()
y = example_one()
return x / y
answer_one()
# -
# ### Question 2
#
# What percentage of tokens is 'whale'or 'Whale'?
#
# *This function should return a float.*
# +
def answer_two():
from nltk.probability import FreqDist
from nltk.tokenize import word_tokenize
words = word_tokenize(moby_raw)
dist = FreqDist(words)
x = dist['whale'] + dist['Whale']
#x = dist['Whale']
return x / example_one()
answer_two()
# -
# ### Question 3
#
# What are the 20 most frequently occurring (unique) tokens in the text? What is their frequency?
#
# *This function should return a list of 20 tuples where each tuple is of the form `(token, frequency)`. The list should be sorted in descending order of frequency.*
# +
def answer_three():
from nltk.probability import FreqDist
from nltk.tokenize import word_tokenize
import pandas as pd
words = word_tokenize(moby_raw)
dist = FreqDist(words)
df = pd.Series(dist)
df = df.to_frame(name='repetition')
df = df.sort_values('repetition', ascending=False).reset_index()
    return list(df.head(20).itertuples(index=False, name=None))
answer_three()
# -
# ### Question 4
#
# What tokens have a length of greater than 5 and frequency of more than 150?
#
# *This function should return an alphabetically sorted list of the tokens that match the above constraints. To sort your list, use `sorted()`*
# +
def answer_four():
from nltk.probability import FreqDist
from nltk.tokenize import word_tokenize
words = word_tokenize(moby_raw)
dist = FreqDist(words)
return [w for w in dist if len(w) > 5 and dist[w] > 150]
answer_four()
# -
# ### Question 5
#
# Find the longest word in text1 and that word's length.
#
# *This function should return a tuple `(longest_word, length)`.*
# +
def answer_five():
from nltk.probability import FreqDist
from nltk.tokenize import word_tokenize
words = word_tokenize(moby_raw)
dist = FreqDist(words)
df = pd.Series(dist)
df = df.to_frame(name='repetition')
df = df.sort_values('repetition', ascending=False).reset_index()
    longest_word = ''
    for item in df['index']:
        if len(item) > len(longest_word):
            longest_word = item
    return longest_word, len(longest_word)
answer_five()
# -
# ### Question 6
#
# What unique words have a frequency of more than 2000? What is their frequency?
#
# "Hint: you may want to use `isalpha()` to check if the token is a word and not punctuation."
#
# *This function should return a list of tuples of the form `(frequency, word)` sorted in descending order of frequency.*
# +
def answer_six():
from nltk.probability import FreqDist
from nltk.tokenize import word_tokenize
words = word_tokenize(moby_raw)
dist = FreqDist(words)
frequent_words = [(dist[w],w) for w in set(words) if w.isalpha() and dist[w] > 2000]
return sorted(frequent_words, key=lambda x:x[0],reverse=True)
answer_six()
# -
# ### Question 7
#
# What is the average number of tokens per sentence?
#
# *This function should return a float.*
# +
def answer_seven():
text2 = nltk.sent_tokenize(moby_raw)
return len(text1)/len(text2)
answer_seven()
# -
# ### Question 8
#
# What are the 5 most frequent parts of speech in this text? What is their frequency?
#
# *This function should return a list of tuples of the form `(part_of_speech, frequency)` sorted in descending order of frequency.*
# +
def answer_eight():
nltk.download('averaged_perceptron_tagger')
import collections
list1 = nltk.pos_tag(text1)
poscount = collections.Counter((subl[1] for subl in list1))
return poscount.most_common(5)
answer_eight()
# -
# ## Part 2 - Spelling Recommender
#
# For this part of the assignment you will create three different spelling recommenders, that each take a list of misspelled words and recommends a correctly spelled word for every word in the list.
#
# For every misspelled word, the recommender should find the word in `correct_spellings` that has the shortest distance* and starts with the same letter as the misspelled word, and return that word as a recommendation.
#
# *Each of the three different recommenders will use a different distance measure (outlined below).
#
# Each of the recommenders should provide recommendations for the three default words provided: `['cormulent', 'incendenece', 'validrate']`.
# +
from nltk.corpus import words
nltk.download('words')
correct_spellings = words.words()
print(correct_spellings)
# -
# ### Question 9
#
# For this recommender, your function should provide recommendations for the three default words provided above using the following distance metric:
#
# **[Jaccard distance](https://en.wikipedia.org/wiki/Jaccard_index) on the trigrams of the two words.**
#
# *This function should return a list of length three:
# `['cormulent_reccomendation', 'incendenece_reccomendation', 'validrate_reccomendation']`.*
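# The Jaccard distance on character trigrams can be illustrated in pure Python (a sketch only — the answers below use `nltk.jaccard_distance` and `nltk.ngrams`, which behave the same way on sets; the word pair here is just an example):

```python
def char_ngrams(word, n=3):
    # set of overlapping character n-grams of a word
    return {word[i:i + n] for i in range(len(word) - n + 1)}

def jaccard_distance(a, b):
    # 1 - |A intersect B| / |A union B| for two sets
    return 1 - len(a & b) / len(a | b)

# 'cormulent' vs. 'corpulent': 7 trigrams each, 4 shared -> 1 - 4/10 = 0.6
d = jaccard_distance(char_ngrams('cormulent'), char_ngrams('corpulent'))
print(d)  # → 0.6
```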
# +
def answer_nine(entries=['cormulent', 'incendenece', 'validrate']):
recommendation = []
for i in entries:
y = [x for x in correct_spellings if x[0] == i[0]]
jaccard_dist = [nltk.jaccard_distance(set(nltk.ngrams(i,n=3)), set(nltk.ngrams(x, n=3))) for x in y]
recommendation.append(y[np.argmin(jaccard_dist)])
return recommendation
answer_nine()
# -
# ### Question 10
#
# For this recommender, your function should provide recommendations for the three default words provided above using the following distance metric:
#
# **[Jaccard distance](https://en.wikipedia.org/wiki/Jaccard_index) on the 4-grams of the two words.**
#
# *This function should return a list of length three:
# `['cormulent_reccomendation', 'incendenece_reccomendation', 'validrate_reccomendation']`.*
# +
def answer_ten(entries=['cormulent', 'incendenece', 'validrate']):
recommendation = []
for i in entries:
y = [x for x in correct_spellings if x[0] == i[0]]
jaccard_dist = [nltk.jaccard_distance(set(nltk.ngrams(i,n=4)), set(nltk.ngrams(x, n=4))) for x in y]
recommendation.append(y[np.argmin(jaccard_dist)])
return recommendation
answer_ten()
# -
# ### Question 11
#
# For this recommender, your function should provide recommendations for the three default words provided above using the following distance metric:
#
# **[Edit distance on the two words with transpositions.](https://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance)**
#
# *This function should return a list of length three:
# `['cormulent_reccomendation', 'incendenece_reccomendation', 'validrate_reccomendation']`.*
# +
def answer_eleven(entries=['cormulent', 'incendenece', 'validrate']):
recommendation = []
for i in entries:
y = [x for x in correct_spellings if x[0] == i[0]]
dl_dist = [nltk.edit_distance(x, i, transpositions=True) for x in y]
recommendation.append(y[np.argmin(dl_dist)])
return recommendation
answer_eleven()
# -
| Applied Text Mining in Python/Assignment 2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # trim_small_clusters
# an example for the use of porespy.filters.trim_small_clusters
import numpy as np
import porespy as ps
import scipy.ndimage as spim
import matplotlib.pyplot as plt
import skimage
ps.visualization.set_mpl_style()
# trim_small_clusters removes clusters from images if they are smaller than the size given as an input.
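# Conceptually, the filter labels the connected clusters and drops those below the size threshold. A rough sketch of that idea with `scipy.ndimage` (this is not porespy's actual implementation, and whether the threshold is strict or inclusive is an assumption here):

```python
import numpy as np
from scipy import ndimage as spim

def trim_small_clusters_sketch(im, size):
    labels, n = spim.label(im)             # label connected clusters
    counts = np.bincount(labels.ravel())   # pixels per label (label 0 = background)
    keep = counts > size                   # keep clusters strictly larger than `size`
    keep[0] = False                        # never keep the background
    return keep[labels]                    # boolean image of surviving clusters

im = np.zeros((10, 10), dtype=bool)
im[0:5, 0:5] = True        # 25-pixel cluster, kept
im[8, 8] = True            # 1-pixel cluster, trimmed
out = trim_small_clusters_sketch(im, size=10)
print(out.sum())           # → 25
```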
# ## Create image and variables
# + tags=[]
im = ps.generators.blobs(shape=[500, 500])
fig, ax = plt.subplots(figsize=[7,7]);
ax.imshow(im);
ax.axis(False);
# + [markdown] tags=[]
# ## Apply filter function
# +
size = 10
x1 = ps.filters.trim_small_clusters(im, size=size)
fig, ax = plt.subplots(figsize=[7,7]);
ax.imshow(x1);
ax.axis(False);
| examples/filters/howtos/trim_small_clusters.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import json
from nltk.tokenize import sent_tokenize
papers = []
for line in open('json_file', 'r'):
papers.append(json.loads(line))
with open('subcorpus.txt', 'w') as filehandle:
for i,paper in enumerate(papers):
text = ""
if len(paper["abstract"]) != 0:
for abst in paper["abstract"]:
text += abst["text"]
for sect in paper["body_text"]:
text += sect["text"]
for ref in paper["bib_entries"].keys():
text += paper["bib_entries"][ref]["title"] + ". "
text = text.lower()
list_of_sentences = sent_tokenize(text)
if len(list_of_sentences) == 0:
continue
for listitem in list_of_sentences:
filehandle.write('%s\n' % listitem)
filehandle.write("\n")
| dataset_codes/S2ORCJson2Corpus.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # <center> In-class Assignment 1 </center>
# **Task 1**. Guessing game (countries)
#
# You got bored and decided to play with your friend in guessing words. You have selected some random country and saved this word in the variable ```country```. Your friend has 10 attempts to guess this country. If his/her answer is incorrect, your code should print "Oh no! Try again! You have X more attempts left", where X is the number of attempts left. If the answer is correct, congratulate him/her by displaying the message "Correct! I should have chosen a country that is more difficult to guess."
# +
country = input('Provide a country name: ')
# YOUR CODE HERE
# -
# **Task 2**.
# 1. You got an internship at The New York Times and your first task is to publish an article which includes the statements made by politicians. You are given a list of quotes from the media, each line contains a quote and the author of this statement. Create a dictionary in which the key is the author's name (a string with first and last name), and the value is the author's quote. You are not allowed to create a new dictionary manually (copy-paste, etc.). Your code should transform a list with given strings into a dictionary mentioned above. You can use additional data structures (for example, to store only the names of politicians in it) if it helps you to get to the final dictionary.
#
# 2. In each quote, replace every obscene word with the string "[censored]". Hint: Each obscene word contains an asterisk.
# +
quotes = [''' 'I don't sit around just talking to experts because this is a college seminar,
we talk to these folks because they potentially have the best answers so I know whose a** to kick',
the president <NAME> said. ''',
''' "This guy has completely trampled on the rule of law, avoided consequence and accountability
under law. For all the sh*t people give me for being a prosecutor, I believe there should be
accountability and consequence", said <NAME>. ''',
''' In a notorious speech in Las Vegas in 2011, <NAME> dropped the f-word repeatedly,
arguing that his message to China would be "Listen you mother****ers, we’re going to tax you 25 percent!" ''']
# YOUR CODE HERE
# -
# **Task 3**.
#
# We have 3 groups, 15 students, and their grades for the final exam. Write a function that returns the group with the greatest number of students who passed the exam with a grade higher than 8. If there are several such groups, provide a list of them in descending order.
#
# Example: <br>
#
# ```Input of your function: exam_result```
exam_results = {
'Student 1' : {'grade': 8, 'group': 101},
'Student 2' : {'grade': 9, 'group': 102},
'Student 3' : {'grade': 6, 'group': 101},
'Student 4' : {'grade': 7, 'group': 103},
'Student 5' : {'grade': 9, 'group': 103},
'Student 6' : {'grade': 4, 'group': 101},
'Student 7' : {'grade': 2, 'group': 103},
'Student 8' : {'grade': 8, 'group': 102},
'Student 9' : {'grade': 10, 'group': 101},
'Student 10' : {'grade': 10, 'group': 103},
'Student 11' : {'grade': 6, 'group': 102},
'Student 12' : {'grade': 9, 'group': 102},
'Student 13' : {'grade': 4, 'group': 101},
'Student 14' : {'grade': 8, 'group': 103},
'Student 15' : {'grade': 5, 'group': 102},
}
# ```Output of your function```: [102, 103]
#
# *Hint*: Note that each value in this dictionary is also a dictionary.
# If you want to get a value for a grade or for a group of a particular student, you request it by key:
#
# ```python
# >>> exam_results['Student 11']['grade']
# Out: 6
# ```
#
# ```python
# >>> exam_results['Student 11']['group']
# Out: 102
# ```
#
# Also, don't forget to run a cell above for creating exam_results.
def most_productive_group(exam_results):
# YOUR CODE HERE
| week6/In_class_A1_192.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Tutorial for using the package `fast-ml`
#
# This package is as good as having a junior Data Scientist working for you. Most of the commonly used EDA steps, missing-data imputation techniques, and feature engineering steps are covered in a ready-to-use format.
# ## Part 5. Feature Engineering for Categorical Variables / Categorical Encodings
#
#
#
# #### 1. Import eda module from the package
# `from fast_ml.feature_engineering import FeatureEngineering_Categorical`
#
# #### 2. Define the imputer object.
# * For Categorical variables use `FeatureEngineering_Categorical`
# * For Numerical variables use `FeatureEngineering_Numerical`
#
# `cat_imputer = FeatureEngineering_Categorical(method = 'label')`
#
# #### 3. Fit the object on your dataframe and provide a list of variables
# `cat_imputer.fit(train, variables = ['BsmtQual'])`
#
# #### 4. Apply the transform method on train / test dataset
# `train = cat_imputer.transform(train)`
# <br>&<br>
# `test = cat_imputer.transform(test)`
#
# #### 5. A parameter dictionary gets created, which stores the values used for encoding. It can be viewed as
# `cat_imputer.param_dict_`
#
# ### Available Methods for Categorical Encoding
#
#
# 1. One-hot Encoding
# 1. Label Encoding / Integer Encoding
# 1. Count Encoding
# 1. Frequency Encoding
# 1. Ordered Label Encoding
#
# <b>Target Based Encoding</b>
# 6. Target Ordered Encoding
# 7. Target Mean Value Encoding
# 8. Target Probability Ratio Encoding (only Classification model)
# 9. Weight of Evidence (WOE) Encoding (only Classification model)
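# As a reference point, frequency encoding (method 4 above) can be sketched in a few lines of plain pandas; `FeatureEngineering_Categorical` wraps this kind of mapping together with its `param_dict_` bookkeeping (the library's exact internals may differ, and the toy column here is illustrative):

```python
import pandas as pd

df_demo = pd.DataFrame({'quality': ['Gd', 'TA', 'Gd', 'Ex', 'Gd', 'TA']})

# frequency encoding: replace each category with its relative frequency
freq_map = df_demo['quality'].value_counts(normalize=True).to_dict()
df_demo['quality_freq'] = df_demo['quality'].map(freq_map)

print(freq_map)   # Gd -> 0.5, TA -> 1/3, Ex -> 1/6
```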
# ## Start Feature Engineering for Categorical Variables
# +
# Import Libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from fast_ml.feature_engineering import FeatureEngineering_Categorical
# %matplotlib inline
import warnings
warnings.filterwarnings('ignore')
# -
df = pd.read_csv('../data/house_prices.csv')
df.shape
df.head(5)
numeric_type = ['float64', 'int64']
category_type = ['object']
# ## Categorical Variables
# ### 1. BsmtQual
#Before Imputation
df['BsmtQual'].value_counts()
cat_imputer1 = FeatureEngineering_Categorical(method = 'label')
cat_imputer1.fit(df, variables = ['BsmtQual'])
cat_imputer1.param_dict_
df = cat_imputer1.transform(df)
#After Imputation
df['BsmtQual'].value_counts()
# ### 2. FireplaceQu
#Before Imputation
df['FireplaceQu'].value_counts()
# +
cat_imputer2 = FeatureEngineering_Categorical(method = 'freq')
cat_imputer2.fit(df, variables = ['FireplaceQu'])
print (cat_imputer2.param_dict_)
df = cat_imputer2.transform(df)
# -
#After Imputation
df['FireplaceQu'].value_counts()
| tutorials/5 Tutorial for fast_ml - Feature Engineering for Categorical Variables.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
# # This code example demonstrates 2D Ambisonic panning functions interactively
# Acoustic Holography and Holophony
#
# <NAME>, 2016
#
#
# ## Ambisonic Panning function in 2D
# The Ambisonic panning function is derived from the equivalence that an infinite Fourier series equals a Dirac delta function:
#
# \begin{equation}
# g_\infty(\varphi)=\sum_{m=-\infty}^\infty e^{\mathrm{i}m\varphi}=\sum_{m=0}^\infty(2-\delta_{m,0})\cos(m\varphi)=\delta(\varphi).
# \end{equation}
#
# After making the order finite, we obtain a sinc function
#
# \begin{equation}
# g_\mathrm{N}(\varphi)=\sum_{m=0}^\mathrm{N}(2-\delta_{m,0})\cos(m\varphi)=\frac{\sin[(\mathrm{N}+\frac{1}{2})\varphi]}{\sin(\frac{1}{2}\varphi)}.
# \end{equation}
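# The closed form can be verified numerically against the truncated cosine series — a quick NumPy check, not part of the original derivation (the 100-point grid deliberately avoids the removable singularity at $\varphi=0$):

```python
import numpy as np

N = 5
phi = np.linspace(-np.pi, np.pi, 100)  # 100 points, so phi = 0 is not sampled

# left side: truncated cosine series with the (2 - delta_{m,0}) factor
series = sum((2 - (m == 0)) * np.cos(m * phi) for m in range(N + 1))

# right side: the periodic-sinc closed form
closed = np.sin((N + 0.5) * phi) / np.sin(0.5 * phi)

assert np.allclose(series, closed)
```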
# +
import numpy as np
import scipy as sp
import math
from bokeh.plotting import figure, output_file, show
from bokeh.io import push_notebook, output_notebook
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
phi=np.linspace(-np.pi,np.pi,100)
output_notebook()
p1 = figure(title="5th order periodic sinc function",plot_width=400,plot_height=250)
N=5
normalization=2*N+1
sincp=np.sin((N+0.5)*phi)/np.sin(0.5*phi)
p1.line(phi*180/np.pi,normalization*sincp,line_width=3)
show(p1)
# -
# This kind of periodic sinc function can be cyclically shifted by any angle $\varphi_\mathrm{s}$ without changing its shape. However, we are not quite satisfied with the height of its sidelobes. It is worthwhile to introduce weights $a_m$ to suppress these sidelobes:
#
# \begin{equation}
# g_\mathrm{N}(\varphi)=\sum_{m=0}^\mathrm{N}a_m\,(2-\delta_{m,0})\cos(m\varphi).
# \end{equation}
#
# The so-called in-phase weighting (see <NAME> et al 1999) is defined as
#
# \begin{equation}
# a_m=\frac{N!^2}{(N-m)!(N+m)!}=\frac{N^\underline{m}}{N^\overline{m}},
# \end{equation}
#
# and the so-called max-$\boldsymbol{r}_\mathrm{E}$ weighting is
#
# \begin{equation}
# a_m=\cos\Bigl(\frac{\pi m}{2(\mathrm{N}+1)}\Bigr),
# \end{equation}
#
# and the neutral, rectangular weighting with $a_m=1$ is called basic.
# +
def inphase_weights(N):
a=np.ones(N+1)
for n in range(1,N+1):
a[n]=(N-n+1)/(1.0*(N+n))*a[n-1]
return a
def maxre_weights(N):
m=np.arange(0,N+1)
a=np.cos(np.pi/(2*(N+1))*m)
return a
def basic_weights(N):
a=np.ones(N+1)
return a
N=7
p2 = figure(title="7th-order weights",plot_width=400,plot_height=250)
m=np.arange(0,N+1)
a=basic_weights(N)
p2.line(m,a,color="green",line_width=3,legend_label="basic")
a=inphase_weights(N)
p2.line(m,a,color="red",line_width=3,legend_label="in-phase")
a=maxre_weights(N)
p2.line(m,a,color="blue",line_width=3,legend_label="max-rE")
p2.legend.background_fill_alpha=0.4
show(p2)
# +
def weighted_cosine_series(phi,a):
N=a.size-1
g=np.zeros(phi.size)
amplitude=0;
for m in range(0,N+1):
g+=np.cos(m*phi)*a[m]*(2-(m==0))
amplitude+=a[m]*(2-(m==0))
return g/amplitude
p3 = figure(title="weighted cosine series",plot_width=400,plot_height=250)
a=basic_weights(N)
g=weighted_cosine_series(phi,a)
p3.line(phi*180/np.pi,g,line_width=3,color='green',legend_label="rectangular")
a=inphase_weights(N)
g=weighted_cosine_series(phi,a)
p3.line(phi*180/np.pi,g,line_width=3,color='red',legend_label="in-phase")
a=maxre_weights(N)
g=weighted_cosine_series(phi,a)
p3.line(phi*180/np.pi,g,line_width=3,color='blue',legend_label="max-rE")
show(p3)
# -
def g_ambipan(N,phis,phi,weight_type):
g=np.zeros(phi.size)
ampl=0
if weight_type == 1:
a=inphase_weights(N)
elif weight_type == 2:
a=maxre_weights(N)
else:
a=basic_weights(N)
g=weighted_cosine_series(phi-phis,a)
return g
output_notebook()
g=g_ambipan(5,0,phi,0)
p = figure(title="2D Ambi Panning Function",plot_width=400, plot_height=270, x_range=(-180,180), y_range=(-.4,1.1))
ll=p.line(phi*180/np.pi, g , line_width=3)
def plot_ambipan(N,phis,weight_type):
phi=np.linspace(-np.pi,np.pi,100)
g=g_ambipan(N,phis*np.pi/180,phi,weight_type)
ll.data_source.data['y']=g
push_notebook()
show(p,notebook_handle=True)
interact(plot_ambipan,N=(0,10,1), phis=(-180.0,180.0,1.0),weight_type={'in-phase':1,'max-rE':2,'basic':3});
#
| 04-AmbisonicPanningFunctionsCircle.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (OpenVINO 2021.2)
# language: python
# name: c003-python_3
# ---
# ### Qarpo
#
# The Qarpo library provides a Jupyter notebook user interface for the following:
# 1. Submitting jobs to a node in a cluster
# 2. Tracking the progress of running jobs
# 3. Displaying jobs' output results
# 4. Plotting the results of completed jobs
#
#
#
# ### Examples:
# This list of examples provides a series of notebooks with step-by-step recipes for building a Jupyter notebook UI using the qarpo library.
#
# #### 0. [No Qarpo](No_qarpo.ipynb)
# This notebook demonstrates how to submit jobs to cluster without using the qarpo library.
# #### 1. [Basic](Basic.ipynb)
# This notebook demonstrates the basic usage of the qarpo library: it provides the recipe to submit jobs to the job scheduler and display an output interface for the jobs' outputs.
# #### 2. [Integrate progress bar](Integrate_Progress_Bar.ipynb)
# This notebook builds on the previous basic step. It provides a recipe to submit jobs to the job scheduler, display progress bars that indicate each job's progression and status (waiting to run, running, done), and finally display an output interface for the jobs' outputs.
# #### 3. [Integrate plot](Integrate_Plot.ipynb)
# In addition to the interface built in the previous step, this notebook provides a recipe to plot the results of the accomplished jobs.
# #### 4. [Integrate text widget](Integrate_Txt_Widget.ipynb)
# This notebook provides a recipe to integrate text widget to input interface, this text input is used as a part of the command used to run the job.
# #### 5. [Integrate select widget](Integrate_Select_Widget.ipynb)
# This notebook provides a recipe to integrate select widget to input interface, this select input is used as a part of the command used to run the job.
# #### 6. [Integrate interactive widget](Integrate_Interactive_widgets.ipynb)
# This notebook provides a recipe to integrate multiple interactive widgets into the input interface; any change in the value of one widget is reflected in the other interacting widgets. The values of these widgets are used as part of the command used to run the job.
# #### 7. [Integrate control buttons](Integrate_control_buttons.ipynb)
# This notebook provides a recipe to integrate control buttons to cancel job or redirect to dashboard.
# #### 8. [Full interface](Full_Interface.ipynb)
# This notebook is the full recipe for using the qarpo library to build an input interface, including interactive widgets, and an output interface, including progress bars, job output display, and plots of the accomplished jobs' results.
| Examples/Start_Here.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# ## Import Required Libraries
import snscrape.modules.twitter as sntwitter
import pandas as pd
import string
import re
import nltk
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk.probability import FreqDist
from gensim.corpora import Dictionary
from gensim.models.ldamodel import LdaModel
from gensim.models import CoherenceModel
from nltk import ngrams
from rake_nltk import Rake
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
import collections
import math
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import seaborn as sns
import gensim
from gsdmm import MovieGroupProcess
# ## Read Tweets Data
# +
tweets_list = []
# Using TwitterSearchScraper to scrape data and append tweets to list
for i,tweet in enumerate(sntwitter.TwitterSearchScraper(' since:2021-11-1 until:2021-11-30 lang:en').get_items()):
tweets_list.append([tweet.content, tweet.user.username, tweet.date, tweet.id])
# Creating a dataframe from the tweets list above
tweets_df = pd.DataFrame(tweets_list, columns=['text', 'user', 'date', 'Tweet Id'])
# -
# ## Preprocess Data
# Drop unnecessary columns (the scraped frame is `tweets_df`; the rest of the notebook works on it under the name `df`)
df = tweets_df.copy()
df.drop(columns = ['user', 'date', 'Tweet Id'], inplace = True)
# Remove URLs from data
# +
def remove_urls(text):
return re.sub(r'http\S+','', text)
df['text'] = df['text'].apply(remove_urls)
# -
# Lowercase all alphabets and remove punctuation
df['clean'] = df['text'].str.lower().str.replace(r'[^\w\s]', ' ', regex=True).str.replace(r' +', ' ', regex=True).str.strip()
df = df.rename(columns={"text": 0, "clean": 1})
# Tokenize data
df[1] = df.apply(lambda row: nltk.word_tokenize(row[1]), axis=1)
# Remove Stop Words
stop_words = stopwords.words('english')
stop_words.extend(['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l','m','n','o','p','q','r','s','t', 'u', 'v', 'w', 'x', 'y', 'z', "about", "across", "after", "all", "also", "an", "and", "another", "added",
"any", "are", "as", "at", "basically", "be", "because", 'become', "been", "before", "being", "between","both", "but", "by","came","can","come","could","did","do","does","each","else","every","either","especially", "for","from","get","given","gets",
'give','gives',"got","goes","had","has","have","he","her","here","him","himself","his","how","if","in","into","is","it","its","just","lands","like","make","making", "made", "many","may","me","might","more","most","much","must","my","never","provide",
"provides", "perhaps","no","now","of","on","only","or","other", "our","out","over","re","said","same","see","should","since","so","some","still","such","seeing", "see", "take","than","that","the","their","them","then","there",
"these","they","this","those","through","to","too","under","up","use","using","used", "underway", "very","want","was","way","we","well","were","what","when","where","which","while","whilst","who","will","with","would","you","your",
'etc', 'via', 'eg'])
stop_words += ['hi','\n','\n\n', '&', ' ', '.', '-', 'got', "it's", 'it’s', "i'm", 'i’m', 'im', 'want', 'like', '$', '@', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'rt', 'feel', 'give', 'giving', 'help', 'said', 'also', 'gave', 'like', 'going', 'even']
df[1] = df[1].apply(lambda x: [item for item in x if item not in stop_words])
# Perform Lemmatization
# +
wordnet_lemmatizer = WordNetLemmatizer()
df[1] = df[1].apply(lambda x: [wordnet_lemmatizer.lemmatize(y) for y in x])
# -
docs = df[1].to_numpy()
# ## Create a Dictionary
# +
# create dictionary of all words in all documents
dictionary = gensim.corpora.Dictionary(docs)
# filter extreme cases out of dictionary
dictionary.filter_extremes(no_below=15, no_above=0.5, keep_n=100000)
# create BOW dictionary
bow_corpus = [dictionary.doc2bow(doc) for doc in docs]
# -
# ## Create LDA model
# create LDA model using preferred hyperparameters
lda_model = gensim.models.LdaMulticore(bow_corpus,
num_topics=5,
id2word=dictionary,
passes=4,
workers=2,
random_state=21)
# View LDA topics
lda_model.show_topics()
# ### Calculate LDA Coherence Score
cm = CoherenceModel(model=lda_model, corpus=bow_corpus, texts=docs, coherence='c_v')
coherence_lda = cm.get_coherence()
print(coherence_lda)
# ## Create GSDMM Model
# +
# create variable containing length of dictionary/vocab
vocab_length = len(dictionary)
# initialize GSDMM
gsdmm = MovieGroupProcess(K=15, alpha=0.1, beta=0.3, n_iters=15)
# fit GSDMM model
y = gsdmm.fit(docs, vocab_length)
# -
import numpy as np
# Display GSDMM topics with top words
# +
# print number of documents per topic
doc_count = np.array(gsdmm.cluster_doc_count)
print('Number of documents per topic :', doc_count)
# Topics sorted by the number of documents they are allocated to
top_index = doc_count.argsort()[-15:][::-1]
print('Most important clusters (by number of docs inside):', top_index)
# define function to get top words per topic
def top_words(cluster_word_distribution, top_cluster, values):
for cluster in top_cluster:
sort_dicts = sorted(cluster_word_distribution[cluster].items(), key=lambda k: k[1], reverse=True)[:values]
print("\nCluster %s : %s"%(cluster, sort_dicts))
# get top words in topics
top_words(gsdmm.cluster_word_distribution, top_index, 20)
# -
# Create Lists from GSDMM topics
# +
def get_topics_lists(model, top_clusters, n_words):
'''
Gets lists of words in topics as a list of lists.
model: gsdmm instance
top_clusters: numpy array containing indices of top_clusters
n_words: top n number of words to include
'''
# create empty list to contain topics
topics = []
# iterate over top n clusters
for cluster in top_clusters:
#create sorted dictionary of word distributions
sorted_dict = sorted(model.cluster_word_distribution[cluster].items(), key=lambda k: k[1], reverse=True)[:n_words]
#create empty list to contain words
topic = []
#iterate over top n words in topic
for k,v in sorted_dict:
#append words to topic list
topic.append(k)
#append topics to topics list
topics.append(topic)
return topics
# get topics to feed to coherence model
topics = get_topics_lists(gsdmm, top_index, 20)
# -
# ### Calculate GSDMM Coherence Score
# +
# evaluate model using Topic Coherence score
cm_gsdmm = CoherenceModel(topics=topics, dictionary=dictionary, corpus=bow_corpus, texts=docs, coherence='c_v')
# get coherence value
coherence_gsdmm = cm_gsdmm.get_coherence()
print(coherence_gsdmm)
# -
top_words(gsdmm.cluster_word_distribution, top_index, 20)
cm_gsdmm = CoherenceModel(topics=topics, dictionary=dictionary, corpus=bow_corpus, texts=docs, coherence='c_v')
coherence_gsdmm = cm_gsdmm.get_coherence()
print(coherence_gsdmm)
| viral_topic_analysis/lda_vs_gsdmm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import statistics
users_df = pd.read_csv("E:/ASU_CourseWork/Fall_2018/SML/Project/sof_user_churn/data/processed/users.csv", delimiter = ',')
posts_df = pd.read_csv("E:/ASU_CourseWork/Fall_2018/SML/Project/sof_user_churn/data/processed/posts.csv", delimiter = ',')
posts_df.head()
# -
# For each user, compute the mean reputation of the users whose answers
# were accepted on that user's questions
accepted_answerer_data = []
userId_list = users_df['Id']
for user in userId_list:
accepted_postid_list = posts_df[(posts_df.OwnerUserId == user) & (posts_df.PostTypeId == 1) &
(posts_df.AcceptedAnswerId.notnull())]['AcceptedAnswerId']
accepted_answerer_userIds = posts_df[posts_df.Id.isin(accepted_postid_list)]['OwnerUserId']
mean_rep = users_df[users_df.Id.isin(accepted_answerer_userIds)].Reputation.mean()
accepted_answerer_data.append({'userid' : user, 'mean_reputation' : mean_rep})
accepted_answerer_rep = pd.DataFrame(accepted_answerer_data)
accepted_answerer_rep[accepted_answerer_rep.mean_reputation.notnull()].head()
# +
meanOfmax_answerer_reputation_data = []
userId_list = users_df['Id']
for user in userId_list:
user_question_postid_list = posts_df[(posts_df.OwnerUserId == user) & (posts_df.PostTypeId == 1)]['Id']
max_rep_list = []
for postid in user_question_postid_list:
answerers_userid = posts_df[posts_df.ParentId == postid]['OwnerUserId']
rept = users_df[users_df.Id.isin(answerers_userid)].Reputation.max()
max_rep_list.append(rept)
if (len(max_rep_list) > 0):
meanOfmax_answerer_reputation_data.append({'userid' : user, 'max_rep_answerer' : np.mean(max_rep_list)})
meanOfMax_reputation_answerer = pd.DataFrame(meanOfmax_answerer_reputation_data)
print(meanOfMax_reputation_answerer)
# -
meanOfMax_reputation_answerer.tail()
# For each user, count how many of their questions received at least one answer
userId_to_noofHis_questions_answered = []
userId_list = users_df['Id']
for user in userId_list:
user_question_post_id_list = posts_df[(posts_df.OwnerUserId == user) & (posts_df.PostTypeId == 1)]['Id']
user_questions_answered = 0
for post_id in user_question_post_id_list:
counter = len(posts_df[posts_df.ParentId == post_id])
if (counter > 0):
user_questions_answered += 1
if (user_questions_answered > 0):
userId_to_noofHis_questions_answered.append({'userid': user, 'number_ofHis_questions_answered': user_questions_answered})
userId_to_his_questions_answered = pd.DataFrame(userId_to_noofHis_questions_answered)
print(userId_to_his_questions_answered)
# +
from datetime import datetime
import time
fmt = '%Y-%m-%d %H:%M:%S'
userId_to_mean_time_for_first_answ = []
userId_list = users_df['Id']
for user in userId_list:
#user_question_post_id_df
df = posts_df[(posts_df.OwnerUserId == user) & (posts_df.PostTypeId == 1)][['Id', 'CreationDate']]
first_answered_time_list = []
for index, row in df.iterrows():
        # Formatting the question's creation date
question_date = row['CreationDate']
question_date = question_date.replace("T", " ")
question_date = question_date[: len(question_date) - 4]
d1 = datetime.strptime(question_date, fmt)
d1_ts = time.mktime(d1.timetuple())
answered_date_list = posts_df[posts_df.ParentId == row['Id']]['CreationDate'].tolist()
answered_time_diff_list = []
        # Formatting the creation date of each answer to the given question and converting to a Unix timestamp
for date in answered_date_list:
date = date.replace("T", " ")
date = date[: len(date) - 4]
d2 = datetime.strptime(date, fmt)
d2_ts = time.mktime(d2.timetuple())
answered_time_diff_list.append(int(d2_ts-d1_ts) / 60)
answered_time_diff_list.sort()
if (len(answered_time_diff_list) > 0):
first_answered_time_list.append(answered_time_diff_list[0])
if (len(first_answered_time_list) > 0):
mean_response_time = sum(first_answered_time_list)/len(first_answered_time_list)
userId_to_mean_time_for_first_answ.append({'userid': user, 'time_for_first_answer': mean_response_time})
userId_to_mean_time_for_first_answ_DF = pd.DataFrame(userId_to_mean_time_for_first_answ)
print(userId_to_mean_time_for_first_answ_DF)
# -
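# The nested Python loops above work but scale poorly. As a hedged sketch (using a toy frame mimicking the `Id`/`ParentId`/`CreationDate` columns, not the real `posts_df`), the same time-to-first-answer computation can be vectorized with a pandas self-merge:

```python
import pandas as pd

# Hypothetical mini-frame standing in for posts_df: one question (Id 1) with two answers
posts = pd.DataFrame({
    "Id": [1, 2, 3],
    "ParentId": [None, 1, 1],
    "CreationDate": ["2018-01-01T10:00:00.000",
                     "2018-01-01T10:30:00.000",
                     "2018-01-01T11:00:00.000"],
})
posts["CreationDate"] = pd.to_datetime(posts["CreationDate"])

# Join each answer to its parent question, then take the time difference in minutes
answers = posts.dropna(subset=["ParentId"]).merge(
    posts[["Id", "CreationDate"]],
    left_on="ParentId", right_on="Id",
    suffixes=("_answer", "_question"))
answers["minutes_to_answer"] = (
    answers["CreationDate_answer"] - answers["CreationDate_question"]
).dt.total_seconds() / 60

# Time to the *first* answer per question is then a simple groupby-min
first_answer = answers.groupby("ParentId")["minutes_to_answer"].min()
```

# This replaces the per-row `strptime`/`mktime` conversions with a single `to_datetime` call and avoids scanning `posts_df` once per question.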
| notebooks/knowledge_level_feature.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.9.0 64-bit (''Data-Analytics-and-Basics-of-Artificial-In-rUWBvdkx'':
# venv)'
# name: python3
# ---
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from IPython.display import display
data = pd.read_csv("data/Titanic_data.csv")
names = pd.read_csv("data/Titanic_names.csv")
data.info()
data.describe()
names.info()
names.describe()
data.hist(bins=4)
df = data.merge(names, how="inner", on="id")
df
len(df)
len(df.query("Gender == 'female'"))
len(df.query("Gender == 'male'"))
round(df['Age'].mean(), 1)
len(df.query("Age == 0"))
m = df.query("Age != 0")["Age"].mean()
df.loc[df["Age"] == 0, "Age"] = m
df
df.query("PClass == '*'")
df["Survived"].value_counts()
# +
[died, survived] = df["Survived"].value_counts()
display(f"{round((survived / len(df)) * 100)}%")
display(f"{round((died / len(df)) * 100)}%")
| homework/w36/e02.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# # Images and pixels with SimpleITK from R
# The R notebooks follow the general structure of the python equivalents.
#
# A few comments to get started:
# * R will display a variable when you type the name. If the variable is a SimpleITK image then, by default, R/SimpleITK will attempt to display it with ImageJ. This isn't much good for notebooks or knitted documents, so the "show" method has been replaced by a function using internal R graphics. This isn't necessary for interactive sessions.
# * The documentation doesn't exist in the R package yet - be prepared to do some detective work.
# * C++ enumerated types are wrapped using strings, rather than integers.
# * Error messages can be very cryptic - it helps to get familiar with the R way of querying methods and objects.
# * The object oriented style used by swig is one of the less common ones used in R packages.
# * C++ vectors are automatically converted to R vectors.
#
# Let's start by creating an image and accessing information about it.
# +
# Load the SimpleITK library
library(SimpleITK)
imA = Image(256, 128, 64, "sitkInt16")
imB = SimpleITK::Image(64, 64, "sitkFloat32")
imC = Image(c(32, 32), "sitkUInt32")
imRGB = Image(c(128,128), "sitkVectorUInt8", 3)
# -
# We've created four different (empty) images using slightly differing syntax to specify the dimensions. We've also specified the pixel type. The range of available pixel types is:
get(".__E___itk__simple__PixelIDValueEnum", pos="package:SimpleITK")
# It isn't usually necessary to access the enumeration classes directly, but it can be handy to know how when debugging.
# Let's start querying the images we've created. The `$` operator is used to access object methods:
imA$GetSize()
imC$GetSpacing()
# We can change the spacing (voxel size):
imC$SetSpacing(c(2, 0.25))
imC$GetSpacing()
# There are shortcuts for accessing a single dimension:
imC$GetWidth()
imA$GetHeight()
imA$GetDepth()
imC$GetDepth()
# The depth refers to the z dimension. A colour or vector image has an additional "dimension":
imRGB$GetNumberOfComponentsPerPixel()
# The list of available methods can be retrieved using the following. There are many functions for accessing pixels in a type dependent way, which don't need to be accessed directly.
#
getMethod('$', class(imA))
# ## Array notation for images
# R, like python and matlab, has a flexible notation for accessing elements of arrays. SimpleITK adopts the same notation for accessing pixels (or sub images, which will be covered in a subsequent notebook). First, lets get set up so that we can display an image in the notebook.
# +
# override show function
# The example below will respect image voxel sizes
## This is a hack to allow global setting of the displayed image width
Dwidth <- grid::unit(10, "cm")
setMethod('show', '_p_itk__simple__Image',
function(object)
{
a <- t(as.array(object))
rg <- range(a)
A <- (a-rg[1])/(rg[2]-rg[1])
dd <- dim(a)
sp <- object$GetSpacing()
sz <- object$GetSize()
worlddim <- sp*sz
worlddim <- worlddim/worlddim[1]
W <- Dwidth
H <- Dwidth * worlddim[2]
grid::grid.raster(A,default.units="mm", width=W, height=H)
}
)
options(repr.plot.height = 5, repr.plot.width=5)
# -
# Typing the name of an image variable will now result in the image being displayed using an R graphics device. Let's load up an image from the package and test:
# Now we source the code to allow us to fetch images used in examples. This code will download the images if they aren't already on the system.
source("downloaddata.R")
cthead <- ReadImage(fetch_data("cthead1.png"))
cthead
# Now we will check the value of a single pixel:
cthead[146, 45]
# ## Pixel access order and conversion to arrays
#
# The rule for index ordering in R/SimpleITK is most rapidly changing index first. For images this is x (horizontal), then y (vertical), then z. For R arrays this is row, then column. The conversion between images and arrays maintains the ordering of speed of index change, so the most rapidly changing index is always the first element:
#
# +
imA$GetSize()
imA.arr <- as.array(imA)
dim(imA.arr)
# -
# The array and image dimensions are the same; however, if they are displayed they are likely to appear different, as R interprets the first index as rows. Also note that traditional graphing tools are likely to interpret the vertical direction differently. For example, let's plot the cthead image - the result is flipped vertically, because the graph origin is bottom left, and rows and columns are swapped. The show method we use in this notebook has a transpose operation to reverse the row/column swap.
image(as.array(cthead))
| R/Image_Basics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.2 ('pyqmc_dev')
# language: python
# name: python3
# ---
# +
import pyscf
import pyscf.lo
from pyscf.tools import cubegen
import matplotlib.pyplot as plt
import numpy as np
eV = 27.2114
# +
mol = pyscf.gto.M(
atom="H 0. 0. 0.; H 0. 0. 2",
basis='ccpvdz',
)
mf = pyscf.scf.RHF(mol)
mf.kernel()
# +
plt.figure(figsize=(10,3))
for i in np.arange(2):
plt.plot(mf.mo_coeff[:,i], marker='o')
plt.axhline(0, color = 'grey', zorder=-1)
plt.ylim([-0.8,0.8])
plt.xticks(np.arange(10), mol.ao_labels())
plt.savefig('mo.pdf',bbox_inches='tight')
# -
rdm1_ao = mf.make_rdm1()
B = mf.get_ovlp() @ mf.mo_coeff
rdm1_mo = np.einsum('ai, ab, bj->ij', B, rdm1_ao, B)
plt.imshow(rdm1_mo)
plt.savefig('rdm1_mo.pdf',bbox_inches='tight')
np.trace(rdm1_mo)
a = pyscf.lo.iao.iao(mol, mf.mo_coeff[:,:2])#, minao='minao')
a = pyscf.lo.vec_lowdin(a, mf.get_ovlp())
# +
plt.figure(figsize=(10,3))
for i in np.arange(2):
plt.plot(a[:,i], marker='o')
plt.axhline(0, color = 'grey', zorder=-1)
plt.ylim([-0.8,0.8])
plt.xticks(np.arange(10), mol.ao_labels())
plt.savefig('iao.pdf',bbox_inches='tight')
# -
def plot_orbital(orbital, output):
    plt.figure()
    vmax = np.max(np.abs(orbital))  # symmetric color scale; avoids shadowing the built-in `range`
    cm = plt.imshow(orbital[orbital.shape[0]//2,:,:], cmap='bwr', vmin=-vmax, vmax=vmax, aspect=0.5)
    plt.xlabel('z grid index')
    plt.ylabel('y grid index')
    plt.colorbar()
    plt.savefig(output, bbox_inches='tight')
# plot mo and iao
for i in np.arange(2):
mo = cubegen.orbital(mol, f'h2_mo{i}.cube', mf.mo_coeff[:,i])
iao = cubegen.orbital(mol, f'h2_iao{i}.cube', a[:,i])
plot_orbital(mo, f'h2_mo{i}.pdf')
plot_orbital(iao, f'h2_iao{i}.pdf')
# +
R = np.einsum("ij,ik,kl->jl" , a, mf.get_ovlp(), mf.mo_coeff[:,:2])
print(R)
# Note that the rotation matrix is unitary.
print(R @ R.conj().T)
# +
# rotate the Hamiltonian
H0 = np.array([[mf.mo_energy[0],0],[0,mf.mo_energy[1]]])
H_tb = np.einsum("ij,jk,kl->il", R, H0, R.conj().T)
print('tb model:\n', H_tb)
# +
eigvals, eigvecs = np.linalg.eigh(H_tb)
print('tb model eigenvalues:', eigvals)
print('original eigenvalues:', mf.mo_energy[:2])
# -
print(eigvecs)
# compute the rotated rdm1
rdm1_iao = np.einsum("ij,jk,kl->il", R, rdm1_mo[:2,:2], R.conj().T)
print(rdm1_iao)
| 3_25_h2_iao/h2_iao.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="UEBilEjLj5wY"
# # Module version used
#
# - torch 1.4.0
# - numpy 1.18.1
# - CPython 3.6.9
# - IPython 7.10.2
# - numpy 1.17.4
# - PIL.Image 6.2.1
# - pandas 0.25.3
# + [markdown] colab_type="text" id="rH4XmErYj5wm"
# # AlexNet CIFAR-10 Classifier
# -
# ### Network Architecture
# References
#
# - [1] Krizhevsky, Alex, <NAME>, and <NAME>. "[Imagenet classification with deep convolutional neural networks.](https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf)" In Advances in Neural Information Processing Systems, pp. 1097-1105. 2012.
#
# + [markdown] colab_type="text" id="MkoGLH_Tj5wn"
# ## Imports
# + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="ORj09gnrj5wp"
import os
import time
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torch.utils.data.dataset import Subset
from torchvision import datasets
from torchvision import transforms
import matplotlib.pyplot as plt
from PIL import Image
if torch.cuda.is_available():
torch.backends.cudnn.deterministic = True
# + [markdown] colab_type="text" id="I6hghKPxj5w0"
# ## Model Settings
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 85} colab_type="code" executionInfo={"elapsed": 23936, "status": "ok", "timestamp": 1524974497505, "user": {"displayName": "<NAME>", "photoUrl": "//lh6.googleusercontent.com/-cxK6yOSQ6uE/AAAAAAAAAAI/AAAAAAAAIfw/P9ar_CHsKOQ/s50-c-k-no/photo.jpg", "userId": "118404394130788869227"}, "user_tz": 240} id="NnT0sZIwj5wu" outputId="55aed925-d17e-4c6a-8c71-0d9b3bde5637"
##########################
### SETTINGS
##########################
# Hyperparameters
RANDOM_SEED = 1
LEARNING_RATE = 0.0001
BATCH_SIZE = 256
NUM_EPOCHS = 10
# Architecture
NUM_CLASSES = 10
# Other
DEVICE = "cuda:0"
# -
# ## Dataset
# +
train_indices = torch.arange(0, 48000)
valid_indices = torch.arange(48000, 50000)
train_transform = transforms.Compose([transforms.Resize((70, 70)),
transforms.RandomCrop((64, 64)),
transforms.ToTensor()])
test_transform = transforms.Compose([transforms.Resize((70, 70)),
transforms.CenterCrop((64, 64)),
transforms.ToTensor()])
serverAvailable = "no"
if serverAvailable == "yes":
datapath = "../Database/"
else:
datapath = '../../../MEGA/DatabaseLocal/'
train_and_valid = datasets.CIFAR10(root=datapath,
train=True,
transform=train_transform,
download=True)
train_dataset = Subset(train_and_valid, train_indices)
valid_dataset = Subset(train_and_valid, valid_indices)
test_dataset = datasets.CIFAR10(root=datapath,
train=False,
transform=test_transform,
download=False)
train_loader = DataLoader(dataset=train_dataset,
batch_size=BATCH_SIZE,
num_workers=4,
shuffle=True)
valid_loader = DataLoader(dataset=valid_dataset,
batch_size=BATCH_SIZE,
num_workers=4,
shuffle=False)
test_loader = DataLoader(dataset=test_dataset,
batch_size=BATCH_SIZE,
num_workers=4,
shuffle=False)
# +
# Checking the dataset
print('Training Set:\n')
for images, labels in train_loader:
print('Image batch dimensions:', images.size())
print('Image label dimensions:', labels.size())
break
# Checking the dataset
print('\nValidation Set:')
for images, labels in valid_loader:
print('Image batch dimensions:', images.size())
print('Image label dimensions:', labels.size())
break
# Checking the dataset
print('\nTesting Set:')
for images, labels in train_loader:
print('Image batch dimensions:', images.size())
print('Image label dimensions:', labels.size())
break
# -
# ## Model
# +
##########################
### MODEL
##########################
class AlexNet(nn.Module):
def __init__(self, num_classes):
super(AlexNet, self).__init__()
self.features = nn.Sequential(
nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2),
nn.Conv2d(64, 192, kernel_size=5, padding=2),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2),
nn.Conv2d(192, 384, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(384, 256, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(256, 256, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2),
)
self.avgpool = nn.AdaptiveAvgPool2d((6, 6))
self.classifier = nn.Sequential(
nn.Dropout(0.5),
nn.Linear(256 * 6 * 6, 4096),
nn.ReLU(inplace=True),
nn.Dropout(0.5),
nn.Linear(4096, 4096),
nn.ReLU(inplace=True),
nn.Linear(4096, num_classes)
)
def forward(self, x):
x = self.features(x)
x = self.avgpool(x)
x = x.view(x.size(0), 256 * 6 * 6)
logits = self.classifier(x)
probas = F.softmax(logits, dim=1)
return logits, probas
# + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="_lza9t_uj5w1"
torch.manual_seed(RANDOM_SEED)
model = AlexNet(NUM_CLASSES)
model.to(DEVICE)
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
# + [markdown] colab_type="text" id="RAodboScj5w6"
# ## Training
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 1547} colab_type="code" executionInfo={"elapsed": 2384585, "status": "ok", "timestamp": 1524976888520, "user": {"displayName": "<NAME>", "photoUrl": "//lh6.googleusercontent.com/-cxK6yOSQ6uE/AAAAAAAAAAI/AAAAAAAAIfw/P9ar_CHsKOQ/s50-c-k-no/photo.jpg", "userId": "118404394130788869227"}, "user_tz": 240} id="Dzh3ROmRj5w7" outputId="5f8fd8c9-b076-403a-b0b7-fd2d498b48d7"
def compute_acc(model, data_loader, device):
correct_pred, num_examples = 0, 0
model.eval()
for i, (features, targets) in enumerate(data_loader):
features = features.to(device)
targets = targets.to(device)
logits, probas = model(features)
_, predicted_labels = torch.max(probas, 1)
num_examples += targets.size(0)
assert predicted_labels.size() == targets.size()
correct_pred += (predicted_labels == targets).sum()
return correct_pred.float()/num_examples * 100
start_time = time.time()
cost_list = []
train_acc_list, valid_acc_list = [], []
for epoch in range(NUM_EPOCHS):
model.train()
for batch_idx, (features, targets) in enumerate(train_loader):
features = features.to(DEVICE)
targets = targets.to(DEVICE)
### FORWARD AND BACK PROP
logits, probas = model(features)
cost = F.cross_entropy(logits, targets)
optimizer.zero_grad()
cost.backward()
### UPDATE MODEL PARAMETERS
optimizer.step()
#################################################
### CODE ONLY FOR LOGGING BEYOND THIS POINT
################################################
cost_list.append(cost.item())
if not batch_idx % 150:
print (f'Epoch: {epoch+1:03d}/{NUM_EPOCHS:03d} | '
f'Batch {batch_idx:03d}/{len(train_loader):03d} |'
f' Cost: {cost:.4f}')
model.eval()
with torch.set_grad_enabled(False): # save memory during inference
train_acc = compute_acc(model, train_loader, device=DEVICE)
valid_acc = compute_acc(model, valid_loader, device=DEVICE)
print(f'Epoch: {epoch+1:03d}/{NUM_EPOCHS:03d}\n'
f'Train ACC: {train_acc:.2f} | Validation ACC: {valid_acc:.2f}')
train_acc_list.append(train_acc)
valid_acc_list.append(valid_acc)
elapsed = (time.time() - start_time)/60
print(f'Time elapsed: {elapsed:.2f} min')
elapsed = (time.time() - start_time)/60
print(f'Total Training Time: {elapsed:.2f} min')
# -
# ## Evaluation
import matplotlib.pyplot as plt
# %matplotlib inline
# +
plt.plot(cost_list, label='Minibatch cost')
plt.plot(np.convolve(cost_list,
np.ones(200,)/200, mode='valid'),
label='Running average')
plt.ylabel('Cross Entropy')
plt.xlabel('Iteration')
plt.legend()
plt.show()
# +
plt.plot(np.arange(1, NUM_EPOCHS+1), train_acc_list, label='Training')
plt.plot(np.arange(1, NUM_EPOCHS+1), valid_acc_list, label='Validation')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
# +
with torch.set_grad_enabled(False):
test_acc = compute_acc(model=model,
data_loader=test_loader,
device=DEVICE)
valid_acc = compute_acc(model=model,
data_loader=valid_loader,
device=DEVICE)
print(f'Validation ACC: {valid_acc:.2f}%')
print(f'Test ACC: {test_acc:.2f}%')
# -
# %watermark -iv
| 03_ConvolutionalNN/03_CNN_08_Alexnet-alexnet-cifar10.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
# +
reward = ''' 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0.1 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.1 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. -0.01'''
reward = np.array([float(m) for m in reward.split()])
# -
totalHits = '''1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0'''
totalHits = np.array([float(m) for m in totalHits.split(', ')])
# +
cumRewards1 = '''3.51873654e-03 4.39842067e-03 5.49802584e-03 6.87253230e-03
8.59066538e-03 1.07383317e-02 1.34229147e-02 1.67786433e-02
2.09733042e-02 2.62166302e-02 3.27707877e-02 4.09634847e-02
5.12043558e-02 6.40054448e-02 8.00068060e-02 1.00008507e-01
1.06325077e-05 1.32906347e-05 1.66132933e-05 2.07666166e-05
2.59582708e-05 3.24478385e-05 4.05597981e-05 5.06997477e-05
6.33746846e-05 7.92183557e-05 9.90229447e-05 1.23778681e-04
1.54723351e-04 1.93404189e-04 2.41755236e-04 3.02194045e-04
3.77742556e-04 4.72178195e-04 5.90222744e-04 7.37778430e-04
9.22223038e-04 1.15277880e-03 1.44097350e-03 1.80121687e-03
2.25152109e-03 2.81440136e-03 3.51800170e-03 4.39750212e-03
5.49687766e-03 6.87109707e-03 8.58887134e-03 1.07360892e-02
1.34201115e-02 1.67751393e-02 2.09689242e-02 2.62111552e-02
3.27639440e-02 4.09549300e-02 5.11936625e-02 6.39920781e-02
7.99900977e-02 9.99876221e-02 -1.54742501e-05 -1.93428127e-05
-2.41785159e-05 -3.02231448e-05 -3.77789310e-05 -4.72236638e-05
-5.90295797e-05 -7.37869746e-05 -9.22337183e-05 -1.15292148e-04
-1.44115185e-04 -1.80143981e-04 -2.25179976e-04 -2.81474970e-04
-3.51843713e-04 -4.39804641e-04 -5.49755802e-04 -6.87194752e-04
-8.58993440e-04 -1.07374180e-03 -1.34217725e-03 -1.67772156e-03
-2.09715195e-03 -2.62143994e-03 -3.27679993e-03 -4.09599991e-03
-5.11999989e-03 -6.39999986e-03 -7.99999982e-03 -9.99999978e-03'''
cumRewards1 = np.array([float(m) for m in cumRewards1.split()])
# +
cumRewards = '''0.19 0.19 0.19 0.19 0.19 0.19 0.19 0.19 0.19 0.19 0.19 0.19
0.19 0.19 0.19 0.19 0.09 0.09 0.09 0.09 0.09 0.09 0.09 0.09
0.09 0.09 0.09 0.09 0.09 0.09 0.09 0.09 0.09 0.09 0.09 0.09
0.09 0.09 0.09 0.09 0.09 0.09 0.09 0.09 0.09 0.09 0.09 0.09
0.09 0.09 0.09 0.09 0.09 0.09 0.09 0.09 0.09 0.09 -0.01 -0.01
-0.01 -0.01 -0.01 -0.01 -0.01 -0.01 -0.01 -0.01 -0.01 -0.01 -0.01 -0.01
-0.01 -0.01 -0.01 -0.01 -0.01 -0.01 -0.01 -0.01 -0.01 -0.01 -0.01 -0.01
-0.01 -0.01 -0.01 -0.01'''
cumRewards = np.array([float(m) for m in cumRewards.split() ])
# +
plt.figure(figsize=(4,3))
plt.axes([0.17, 0.17, 0.76, 0.76])
plt.axhline(0, color='black', lw=2)
plt.axhline(0.03, color='black', lw=2)
plt.axvline(10, ls= '-.', color='black', lw=0.2)
plt.axvline(15, ls= '-.', color='black', lw=0.2)
plt.axvline(52, ls= '-.', color='black', lw=0.2)
plt.axvline(57, ls= '-.', color='black', lw=0.2)
plt.xlim(0, 87)
plt.plot(cumRewards, label='$\gamma=1$')
plt.plot(cumRewards1, label='$\gamma=0.8$')
plt.legend()
plt.xlabel('$t$')
plt.ylabel('cumulative reward')
plt.savefig('cumReward.png', dpi=300)
plt.show()
plt.close('all')
# -
np.where(cumRewards1 > 0.03)
# +
plt.figure(figsize=(4,3))
plt.axes([0.17, 0.17, 0.76, 0.76])
plt.plot(totalHits)
plt.axhline(0.03, color='black', lw=2)
plt.axvline(10, ls= '-.', color='black', lw=0.2)
plt.axvline(15, ls= '-.', color='black', lw=0.2)
plt.axvline(52, ls= '-.', color='black', lw=0.2)
plt.axvline(57, ls= '-.', color='black', lw=0.2)
plt.xlim(0, 87)
plt.xlabel('$t$')
plt.ylabel('number of hits')
plt.savefig('numHits.png', dpi=300)
plt.show()
plt.close('all')
# -
len(reward)
| notebooks/Visualizing various paramers.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
#     language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/mlelarge/dataflowr/blob/master/RNN_practicals_X_colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="yfsCBLrUrWq-" colab_type="text"
# # RNN practicals
#
# This jupyter notebook allows you to reproduce and explore the results presented in the [lecture on RNN](https://mlelarge.github.io/dataflowr-slides/X/lesson5.html#1)
# + id="Q5rLTkQqrWrA" colab_type="code" colab={}
import numpy as np
from collections import OrderedDict
import scipy.special
from scipy.special import binom
import matplotlib.pyplot as plt
import time
# + id="MAWM5y_nrWrD" colab_type="code" colab={}
def running_mean(x, N):
cumsum = np.cumsum(np.insert(x, 0, 0))
return (cumsum[N:] - cumsum[:-N]) / float(N)
def Catalan(k):
return binom(2*k,k)/(k+1)
# + id="lzWUmgXorWrF" colab_type="code" colab={}
import torch
use_gpu = torch.cuda.is_available()
def gpu(tensor, gpu=use_gpu):
if gpu:
return tensor.cuda()
else:
return tensor
# + [markdown] id="ub_Zinp9rWrI" colab_type="text"
# # Generating the datasets
# + id="EtiBpJPjrWrJ" colab_type="code" colab={}
seq_max_len = 20
seq_min_len = 4
# + [markdown] id="J1PZZAEkrWrL" colab_type="text"
# ## generating positive examples
# + id="cjLg6wv4rWrM" colab_type="code" colab={}
# convention: +1 opening parenthesis and -1 closing parenthesis
def all_parent(n, a, k=-1):
global res
if k==n-1 and sum(a) == 0:
res.append(a.copy())
elif k==n-1:
pass
else:
k += 1
if sum(a) > 0:
a[k] = 1
all_parent(n,a,k)
a[k] = -1
all_parent(n,a,k)
a[k] = 0
else:
a[k] = 1
all_parent(n,a,k)
a[k] = 0
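To see what the enumerator produces, here is a standalone restatement of `all_parent` applied to `n = 4`; it should yield the Catalan(2) = 2 balanced sequences:

```python
def all_parent(n, a, k=-1):
    # depth-first enumeration of all balanced +/-1 sequences of length n,
    # appending each completed sequence to the global list `res`
    global res
    if k == n-1 and sum(a) == 0:
        res.append(a.copy())
    elif k == n-1:
        pass
    else:
        k += 1
        if sum(a) > 0:
            a[k] = 1
            all_parent(n, a, k)
            a[k] = -1
            all_parent(n, a, k)
            a[k] = 0
        else:
            # a closing parenthesis is never legal when the prefix sum is 0
            a[k] = 1
            all_parent(n, a, k)
            a[k] = 0

res = []
all_parent(4, [0]*4)
print(res)  # [[1, 1, -1, -1], [1, -1, 1, -1]]
```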
# + [markdown] id="j1BcdKZjrWrQ" colab_type="text"
# ## generating negative examples
# + id="-7HcZhiGrWrR" colab_type="code" colab={}
def all_parent_mistake(n, a, k=-1):
global res
if k==n-1 and sum(a) >= -1 and sum(a) <= 1 and min(np.cumsum(a))<0:
res.append(a.copy())
elif sum(a) > n-k:
pass
elif k==n-1:
pass
else:
k += 1
if sum(a) >= -1 and k != 0:
a[k] = 1
all_parent_mistake(n,a,k)
a[k] = -1
all_parent_mistake(n,a,k)
a[k] = 0
else:
a[k] = 1
all_parent_mistake(n,a,k)
a[k] = 0
# + id="c6l9A9-KrWrT" colab_type="code" colab={}
# numbering the parentheses
# example: seq of len 6
# ( ( ( ) ) )
# 0 1 2 4 5 6
# a closing parenthesis is numbered n minus the number of its matching opening,
# so we always have ( + ) = seq_len for every matched pair
# 'wrong' parentheses are always closing and numbered n+1, n+2, ... :
# ) )
# 7 8
def reading_par(l, n):
res = [0]*len(l)
s = []
n_plus = -1
n_moins = n+1
c = 0
for i in l:
if i == 1:
n_plus += 1
s.append(n_plus)
res[c] = n_plus
c += 1
else:
            try:
                res[c] = n-s.pop()
            except IndexError:  # unmatched closing parenthesis
                res[c] = n_moins
                n_moins += 1
c += 1
return res
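A worked example of this numbering (`reading_par` is restated here so the snippet runs on its own): for the length-6 sequence `( ( ( ) ) )`, the openings are numbered 0, 1, 2 and each closing gets `n` minus the number of the opening it matches.

```python
def reading_par(l, n):
    # number openings 0, 1, 2, ...; a matched closing gets n - opening;
    # unmatched closings get n+1, n+2, ... in order of appearance
    res = [0] * len(l)
    s = []
    n_plus = -1
    n_moins = n + 1
    c = 0
    for i in l:
        if i == 1:
            n_plus += 1
            s.append(n_plus)
            res[c] = n_plus
        else:
            try:
                res[c] = n - s.pop()
            except IndexError:
                res[c] = n_moins
                n_moins += 1
        c += 1
    return res

print(reading_par([1, 1, 1, -1, -1, -1], 6))  # [0, 1, 2, 4, 5, 6]
print(reading_par([-1, 1, -1, -1], 4))        # unmatched closings: [5, 0, 4, 6]
```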
# + id="_upJ2s3ZrWrW" colab_type="code" colab={}
all_par = OrderedDict()
for n in range(seq_min_len,seq_max_len+1,2):
a = [0]*n
res = []
all_parent(n=n,a=a,k=-1)
all_par[n] = [reading_par(k,n) for k in res]
# + id="ieBs4apprWrZ" colab_type="code" colab={}
all_par_mist = OrderedDict()
for n in range(seq_min_len,seq_max_len+1,2):
a = [0]*n
res = []
all_parent_mistake(n=n,a=a,k=-1)
all_par_mist[n] = [reading_par(k,n) for k in res]
# + id="2MFeL_EVrWrd" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 104} outputId="9b58efa0-d075-49e6-cf53-49c8174d2ea7"
all_par[6]
# + id="qwUw2is5rWrf" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 104} outputId="8effb51f-ce34-43f6-c5c1-f3e749876480"
all_par_mist[6]
# + [markdown] id="pz5QVIj8rWri" colab_type="text"
# ## number of negative examples by length
# + id="mdlMTFOvrWrj" colab_type="code" colab={}
long_mist = {i:len(l) for (i,l) in zip(all_par_mist.keys(),all_par_mist.values())}
# + id="KOFGg3barWrl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="65e56e65-29db-46de-de8c-e35601d88ee8"
long_mist
# + [markdown] id="tAHWT3mhrWrn" colab_type="text"
# ## number of positive examples by length
# + id="JT5CH7y8rWro" colab_type="code" colab={}
Catalan_num = {i:len(l) for (i,l) in zip(all_par.keys(),all_par.values())}
# + id="bR71MufbrWrs" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="3760de1c-a466-4d00-9bb9-1fd9b0a5d5ee"
Catalan_num
# + [markdown] id="Let6wWF3rWrv" colab_type="text"
# Sanity check, see [Catalan numbers](https://en.wikipedia.org/wiki/Catalan_number)
# + id="jLA_LsWnrWrw" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 174} outputId="9d4aceae-377b-4ab4-a293-f41404563adf"
[(2*i,Catalan(i)) for i in range(2,int(seq_max_len/2)+1)]
# + id="CNk4ynfirWr0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="96d5a74a-6373-4191-8a1d-8f0caa89a0a8"
# number of correct sequences of length between seq_min_len and seq_max_len, over an alphabet of size nb_symbol
nb_symbol = 10
np.sum([Catalan(i)*int(nb_symbol/2)**i for i in range(2,int(seq_max_len/2)+1)])
# + id="N4dqFkpzrWr4" colab_type="code" colab={}
import random
import torch
class SequenceGenerator():
def __init__(self, nb_symbol = 10, seq_min_len = 4, seq_max_len = 10):
self.nb_symbol = nb_symbol
self.seq_min_len = seq_min_len
self.seq_max_len = seq_max_len
self.population = [i for i in range(int(nb_symbol/2))]
    def generate_pattern(self):
        len_r = random.randint(self.seq_min_len//2, self.seq_max_len//2)
        pattern = random.choices(self.population, k=len_r)
        return pattern + pattern[::-1]
    def generate_pattern_parenthesis(self, len_r = None):
        if len_r == None:
            len_r = 2*random.randint(self.seq_min_len//2, self.seq_max_len//2)
pattern = np.random.choice(self.population,size=int(len_r/2),replace=True)
ind_r = random.randint(0,Catalan_num[len_r]-1)
res = [pattern[i] if i <= len_r/2 else self.nb_symbol-1-pattern[len_r-i] for i in all_par[len_r][ind_r]]
return res
def generate_parenthesis_false(self):
        len_r = 2*random.randint(self.seq_min_len//2, self.seq_max_len//2)
pattern = np.random.choice(self.population,size=int(len_r/2),replace=True)
ind_r = random.randint(0,long_mist[len_r]-1)
res = [pattern[i] if i <= len_r/2
else self.nb_symbol-1-pattern[len_r-i] if i<= len_r
else self.nb_symbol-1-pattern[i-len_r] for i in all_par_mist[len_r][ind_r]]
return res
def generate_hard_parenthesis(self, len_r = None):
if len_r == None:
            len_r = 2*random.randint(self.seq_min_len//2, self.seq_max_len//2)
pattern = np.random.choice(self.population,size=int(len_r/2),replace=True)
ind_r = random.randint(0,Catalan_num[len_r]-1)
res = [pattern[i] if i <= len_r/2 else self.nb_symbol-1-pattern[len_r-i] for i in all_par[len_r][ind_r]]
if len_r == None:
            len_r = 2*random.randint(self.seq_min_len//2, self.seq_max_len//2)
pattern = np.random.choice(self.population,size=int(len_r/2),replace=True)
ind_r = random.randint(0,Catalan_num[len_r]-1)
res2 = [pattern[i] if i <= len_r/2 else self.nb_symbol-1-pattern[len_r-i] for i in all_par[len_r][ind_r]]
return res + res2
def generate_hard_nonparenthesis(self, len_r = None):
if len_r == None:
            len_r = 2*random.randint(self.seq_min_len//2, self.seq_max_len//2)
pattern = np.random.choice(self.population,size=int(len_r/2),replace=True)
ind_r = random.randint(0,long_mist[len_r]-1)
res = [pattern[i] if i <= len_r/2
else self.nb_symbol-1-pattern[len_r-i] if i<= len_r
else self.nb_symbol-1-pattern[i-len_r] for i in all_par_mist[len_r][ind_r]]
if len_r == None:
            len_r = 2*random.randint(self.seq_min_len//2, self.seq_max_len//2)
pattern = np.random.choice(self.population,size=int(len_r/2),replace=True)
ind_r = random.randint(0,Catalan_num[len_r]-1)
res2 = [pattern[i] if i <= len_r/2 else self.nb_symbol-1-pattern[len_r-i] for i in all_par[len_r][ind_r]]
return res +[self.nb_symbol-1-pattern[0]]+ res2
    def generate_false(self):
        popu = [i for i in range(self.nb_symbol)]
        len_r = random.randint(self.seq_min_len//2, self.seq_max_len//2)
        return random.choices(popu, k=len_r) + random.choices(popu, k=len_r)
def generate_label(self, x):
l = int(len(x)/2)
return 1 if x[:l] == x[:l-1:-1] else 0
def generate_label_parenthesis(self, x):
s = []
label = 1
lenx = len(x)
for i in x:
if s == [] and i < self.nb_symbol/2:
s.append(i)
elif s == [] and i >= self.nb_symbol/2:
label = 0
break
elif i == self.nb_symbol-1-s[-1]:
s.pop()
else:
s.append(i)
if s != []:
label = 0
return label
def one_hot(self,seq):
one_hot_seq = []
for s in seq:
one_hot = [0 for _ in range(self.nb_symbol)]
one_hot[s] = 1
one_hot_seq.append(one_hot)
return one_hot_seq
def generate_input(self, len_r = None, true_parent = False, hard_false = True):
if true_parent:
seq = self.generate_pattern_parenthesis(len_r)
elif bool(random.getrandbits(1)):
seq = self.generate_pattern_parenthesis(len_r)
else:
if hard_false:
seq = self.generate_parenthesis_false()
else:
seq = self.generate_false()
return gpu(torch.from_numpy(np.array(self.one_hot(seq))).type(torch.FloatTensor)), gpu(torch.from_numpy(np.array([self.generate_label_parenthesis(seq)])))
def generate_input_hard(self,true_parent = False):
if true_parent:
seq = self.generate_hard_parenthesis(self.seq_max_len)
elif bool(random.getrandbits(1)):
seq = self.generate_hard_parenthesis(self.seq_max_len)
else:
seq = self.generate_hard_nonparenthesis(self.seq_max_len)
return gpu(torch.from_numpy(np.array(self.one_hot(seq))).type(torch.FloatTensor)), gpu(torch.from_numpy(np.array([self.generate_label_parenthesis(seq)])))
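The label logic in `generate_label_parenthesis` is a stack-based matching check in which symbol `s` acts as an opening parenthesis and `nb_symbol-1-s` as its closing one. A standalone sketch of the same logic (the function name `is_balanced` is ours, and the default `nb_symbol=10` matches the notebook's setting):

```python
def is_balanced(seq, nb_symbol=10):
    # symbols below nb_symbol/2 open; nb_symbol-1-s closes an open s
    s = []
    for i in seq:
        if not s and i < nb_symbol / 2:
            s.append(i)
        elif not s:
            return 0                        # closing symbol with nothing to match
        elif i == nb_symbol - 1 - s[-1]:
            s.pop()                         # matches the most recent opening
        else:
            s.append(i)
    return 1 if not s else 0                # balanced iff the stack is empty

print(is_balanced([0, 1, 8, 9]))  # 1: 8 closes 1, then 9 closes 0
print(is_balanced([0, 8]))        # 0: 8 does not close 0
```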
# + id="vXdJgnRyrWr6" colab_type="code" colab={}
nb_symbol = 10
generator = SequenceGenerator(nb_symbol = nb_symbol, seq_min_len = seq_min_len, seq_max_len = seq_max_len)
# + id="o-zawVMnrWr-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="208e52dd-91a2-41ab-a0f7-833f338cba57"
generator.generate_pattern_parenthesis()
# + id="b9OeFRAxrWsB" colab_type="code" colab={}
x = generator.generate_parenthesis_false()
# + id="yFuAuGQVrWsF" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="11b99f9f-55f5-40c0-81fa-56a5b05ea063"
generator.generate_label_parenthesis(x)
# + id="CGrwpM8LrWsH" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 104} outputId="c44db97c-abe2-42dd-e3cf-dcd8a88e5d99"
generator.generate_input()
# + [markdown] id="i9DXlASgrWsJ" colab_type="text"
# # First RNN
# + id="V3HWwzQSrWsJ" colab_type="code" colab={}
import torch
import torch.nn as nn
import torch.nn.functional as F
class RecNet(nn.Module):
def __init__(self, dim_input=10, dim_recurrent=50, dim_output=2):
super(RecNet, self).__init__()
self.fc_x2h = nn.Linear(dim_input, dim_recurrent)
self.fc_h2h = nn.Linear(dim_recurrent, dim_recurrent, bias = False)
self.fc_h2y = nn.Linear(dim_recurrent, dim_output)
def forward(self, x):
h = x.new_zeros(1, self.fc_h2y.weight.size(1))
for t in range(x.size(0)):
h = torch.relu(self.fc_x2h(x[t,:]) + self.fc_h2h(h))
return self.fc_h2y(h)
RNN = gpu(RecNet(dim_input = nb_symbol))
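In equations, `RecNet.forward` implements the elementary recurrence (our notation: $W$, $U$, $V$ are the weights of `fc_x2h`, `fc_h2h`, `fc_h2y`, with biases $b$ and $c$):

```latex
h_0 = 0, \qquad
h_t = \operatorname{ReLU}\left(W x_t + b + U h_{t-1}\right), \qquad
y = V h_T + c
```

Only the hidden state at the final time step $T$ is read out for classification.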
# + id="hapmc6cYrWsN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="169515a4-3524-41a1-b2d4-0862a8c9c7e2"
cross_entropy = nn.CrossEntropyLoss()
learning_rate = 1e-3
optimizer = torch.optim.Adam(RNN.parameters(),lr=learning_rate)
nb_train = 40000
loss_t = []
corrects =[]
labels = []
start = time.time()
for k in range(nb_train):
x,l = generator.generate_input(hard_false = False)
y = RNN(x)
loss = cross_entropy(y,l)
_,preds = torch.max(y.data,1)
corrects.append(preds.item() == l.data.item())
optimizer.zero_grad()
loss.backward()
optimizer.step()
    loss_t.append(loss.item())  # store a float, not a graph-holding tensor
labels.append(l.data)
print(time.time() - start)  # elapsed time in seconds
# + id="2mC3DwUxrWsQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 364} outputId="b23e36a8-76c8-4115-9868-505a61639058"
plt.plot(running_mean(loss_t,int(nb_train/100)))
# + id="lNjWBlNcrWsT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 364} outputId="f2dd3c55-48b2-4eec-b06d-bfff583d4e21"
plt.plot(running_mean(corrects,int(nb_train/100)))
# + id="9jnDmqGJrWsW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 364} outputId="534faed2-76f8-42be-9b57-64cbf0a8efce"
plt.plot(np.cumsum(labels))
# + id="WdyU7s3vrWsX" colab_type="code" colab={}
nb_test = 1000
corrects_test =[]
labels_test = []
for k in range(nb_test):
x,l = generator.generate_input(len_r=seq_max_len,true_parent=True)
y = RNN(x)
_,preds = torch.max(y.data,1)
corrects_test.append(preds.item() == l.data.item())
labels_test.append(l.data)
# + id="S219SZHNrWsa" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="fe3508f4-8752-4fe4-9b00-198d00cf0dfe"
np.sum(corrects_test)/nb_test
# + id="wv0nnF79rWse" colab_type="code" colab={}
nb_test = 1000
corrects_test =[]
labels_test = []
for k in range(nb_test):
x,l = generator.generate_input(len_r=seq_max_len, hard_false = True)
y = RNN(x)
_,preds = torch.max(y.data,1)
corrects_test.append(preds.item() == l.data.item())
labels_test.append(l.data)
# + id="8kCzxggMrWsg" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="9b04093d-91f2-4811-bfd1-84133fb8c372"
np.sum(corrects_test)/nb_test
# + id="A6AtJasprWsi" colab_type="code" colab={}
nb_test = 1000
correctsh_test =[]
labelsh_test = []
for k in range(nb_test):
x,l = generator.generate_input_hard()
y = RNN(x)
_,preds = torch.max(y.data,1)
correctsh_test.append(preds.item() == l.data.item())
labelsh_test.append(l.data)
# + id="GQyeOweGrWsk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="06f26035-44ba-418a-845f-71c64039240a"
np.sum(correctsh_test)/nb_test
# + id="sbsG29EKrWsm" colab_type="code" colab={}
nb_test = 1000
correctsh_test =[]
labelsh_test = []
for k in range(nb_test):
x,l = generator.generate_input_hard(true_parent=True)
y = RNN(x)
_,preds = torch.max(y.data,1)
correctsh_test.append(preds.item() == l.data.item())
labelsh_test.append(l.data)
# + id="5oVty4x9rWso" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="d9554776-e979-44c8-90d8-b4a38efe9abc"
np.sum(correctsh_test)/nb_test
# + [markdown] id="j_BKTqbSrWsp" colab_type="text"
# # RNN with Gating
# + id="SzeIUDuHrWsq" colab_type="code" colab={}
class RecNetGating(nn.Module):
def __init__(self, dim_input=10, dim_recurrent=50, dim_output=2):
super(RecNetGating, self).__init__()
self.fc_x2h = nn.Linear(dim_input, dim_recurrent)
self.fc_h2h = nn.Linear(dim_recurrent, dim_recurrent, bias = False)
self.fc_x2z = nn.Linear(dim_input, dim_recurrent)
self.fc_h2z = nn.Linear(dim_recurrent,dim_recurrent, bias = False)
self.fc_h2y = nn.Linear(dim_recurrent, dim_output)
def forward(self, x):
h = x.new_zeros(1, self.fc_h2y.weight.size(1))
for t in range(x.size(0)):
z = torch.sigmoid(self.fc_x2z(x[t,:])+self.fc_h2z(h))
hb = torch.relu(self.fc_x2h(x[t,:]) + self.fc_h2h(h))
h = z * h + (1-z) * hb
return self.fc_h2y(h)
RNNG = gpu(RecNetGating(dim_input = nb_symbol))
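The gated update in `RecNetGating.forward` is (our notation: $z_t$ is the update gate, $\odot$ element-wise product):

```latex
z_t = \sigma\!\left(W_z x_t + U_z h_{t-1}\right), \qquad
\bar{h}_t = \operatorname{ReLU}\!\left(W_h x_t + U_h h_{t-1}\right), \qquad
h_t = z_t \odot h_{t-1} + (1 - z_t) \odot \bar{h}_t
```

When $z_t$ is close to 1 the previous state is carried through unchanged, which helps gradients flow over long sequences.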
# + id="TJEecKPGrWss" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="eba0ce71-4618-4a8e-aab8-9a845ea86906"
optimizerG = torch.optim.Adam(RNNG.parameters(),lr=1e-3)
loss_tG = []
correctsG =[]
labelsG = []
start = time.time()
for k in range(nb_train):
x,l = generator.generate_input(hard_false = False)
y = RNNG(x)
loss = cross_entropy(y,l)
_,preds = torch.max(y.data,1)
correctsG.append(preds.item() == l.data.item())
optimizerG.zero_grad()
loss.backward()
optimizerG.step()
    loss_tG.append(loss.item())
labelsG.append(l.item())
print(time.time() - start)
# + id="n4ozNgShrWsx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 364} outputId="49ac31a1-44e1-46e9-9904-bc9a491a1a49"
plt.plot(running_mean(loss_tG,int(nb_train/50)))
plt.plot(running_mean(loss_t,int(nb_train/50)))
# + id="6UDOKWvgrWs0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 364} outputId="96279f0b-b239-4b9a-abb7-652810753097"
plt.plot(running_mean(correctsG,int(nb_train/50)))
plt.plot(running_mean(corrects,int(nb_train/50)))
# + id="6P_0xHTCrWs3" colab_type="code" colab={}
nb_test = 1000
correctsG_test =[]
labelsG_test = []
for k in range(nb_test):
x,l = generator.generate_input(len_r=seq_max_len,true_parent=True)
y = RNNG(x)
_,preds = torch.max(y.data,1)
correctsG_test.append(preds.item() == l.data.item())
labelsG_test.append(l.data)
# + id="MtGJ7t62rWs5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="ebc6f9cb-559b-4a34-9e6d-9fdf523d60f3"
np.sum(correctsG_test)/nb_test
# + id="YpFjWEjkrWs7" colab_type="code" colab={}
nb_test = 1000
correctsG_test =[]
labelsG_test = []
for k in range(nb_test):
x,l = generator.generate_input(len_r=seq_max_len, hard_false = True)
y = RNNG(x)
_,preds = torch.max(y.data,1)
correctsG_test.append(preds.item() == l.data.item())
labelsG_test.append(l.data)
# + id="SSFHc7okrWs-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="a8f40ab6-ebcd-4d65-da16-e2d0339e735a"
np.sum(correctsG_test)/nb_test
# + id="Ybzz8j9prWs_" colab_type="code" colab={}
nb_test = 1000
correctshG_test =[]
labelshG_test = []
for k in range(nb_test):
x,l = generator.generate_input_hard()
y = RNNG(x)
_,preds = torch.max(y.data,1)
correctshG_test.append(preds.item() == l.data.item())
labelshG_test.append(l.data)
# + id="aZseJiEHrWtB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="587082d2-a86e-415a-c18f-d1fd91079872"
np.sum(correctshG_test)/nb_test
# + [markdown] id="FQd3ZTE4rWtH" colab_type="text"
# # LSTM
# + id="x8gS-hH9rWtI" colab_type="code" colab={}
class LSTMNet(nn.Module):
def __init__(self, dim_input=10, dim_recurrent=50, num_layers=4, dim_output=2):
super(LSTMNet, self).__init__()
self.lstm = nn.LSTM(input_size = dim_input,
hidden_size = dim_recurrent,
num_layers = num_layers)
self.fc_o2y = nn.Linear(dim_recurrent,dim_output)
def forward(self, x):
x = x.unsqueeze(1)
output, _ = self.lstm(x)
output = output.squeeze(1)
output = output.narrow(0, output.size(0)-1,1)
return self.fc_o2y(F.relu(output))
lstm = gpu(LSTMNet(dim_input = nb_symbol))
# + id="89VPrGqFrWtL" colab_type="code" colab={}
x, l = generator.generate_input()
# + id="QxZq8PfWrWtN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="bda3737a-ea10-4db2-a084-3f9c6f92ec90"
lstm(x)
# + id="Aa4mjh2YrWtP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="88e46e74-7b4f-46d3-c6d6-4f2610157322"
optimizerL = torch.optim.Adam(lstm.parameters(),lr=1e-3)
loss_tL = []
correctsL =[]
labelsL = []
start = time.time()
for k in range(nb_train):
x,l = generator.generate_input(hard_false = False)
y = lstm(x)
loss = cross_entropy(y,l)
_,preds = torch.max(y.data,1)
correctsL.append(preds.item() == l.data.item())
optimizerL.zero_grad()
loss.backward()
optimizerL.step()
    loss_tL.append(loss.item())
labelsL.append(l.item())
print(time.time() - start)
# + id="p8-6GEfIrWtS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 364} outputId="d3b4bd86-d6e9-410b-b2e9-c395e0828081"
plt.plot(running_mean(loss_tL,int(nb_train/50)))
plt.plot(running_mean(loss_tG,int(nb_train/50)))
plt.plot(running_mean(loss_t,int(nb_train/50)))
# + id="f-oaHPcTrWtU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 364} outputId="b0923f0b-5491-4ea7-80ce-60d24730104a"
plt.plot(running_mean(correctsL,int(nb_train/50)))
plt.plot(running_mean(correctsG,int(nb_train/50)))
plt.plot(running_mean(corrects,int(nb_train/50)))
# + id="kyW25KNFrWtX" colab_type="code" colab={}
nb_test = 1000
correctsL_test =[]
labelsL_test = []
for k in range(nb_test):
x,l = generator.generate_input(len_r=seq_max_len,true_parent=True)
y = lstm(x)
_,preds = torch.max(y.data,1)
correctsL_test.append(preds.item() == l.data.item())
labelsL_test.append(l.data)
# + id="jyvSAen_rWtZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="5441c878-c6a9-4847-b014-f14ef28b3677"
np.sum(correctsL_test)/nb_test
# + id="G8ujRM01rWtb" colab_type="code" colab={}
nb_test = 1000
correctsL_test =[]
labelsL_test = []
for k in range(nb_test):
x,l = generator.generate_input(len_r=seq_max_len,true_parent=False,hard_false = True)
y = lstm(x)
_,preds = torch.max(y.data,1)
correctsL_test.append(preds.item() == l.data.item())
labelsL_test.append(l.data)
# + id="J-eibKJarWtd" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="62f369ec-6286-4126-eb2f-2a2e7de4f353"
np.sum(correctsL_test)/nb_test
# + id="KI_uFdpTrWtg" colab_type="code" colab={}
nb_test = 1000
correctshL_test =[]
labelshL_test = []
for k in range(nb_test):
x,l = generator.generate_input_hard()
y = lstm(x)
_,preds = torch.max(y.data,1)
correctshL_test.append(preds.item() == l.data.item())
labelshL_test.append(l.data)
# + id="y1RnxVjBrWti" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="7dad5832-9a89-49c6-ab4c-ffb34af16ecd"
np.sum(correctshL_test)/nb_test
# + [markdown] id="nWsEn63TrWtk" colab_type="text"
# # GRU
#
# Implement your RNN with a [GRU](https://pytorch.org/docs/stable/nn.html#gru)
# + id="SvpioaJ5rWtm" colab_type="code" colab={}
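One possible sketch for this exercise, mirroring `LSTMNet` above but with `nn.GRU` (the class name `GRUNet` and the single-layer default are our choices, not from the lecture):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GRUNet(nn.Module):
    def __init__(self, dim_input=10, dim_recurrent=50, num_layers=1, dim_output=2):
        super(GRUNet, self).__init__()
        self.gru = nn.GRU(input_size=dim_input,
                          hidden_size=dim_recurrent,
                          num_layers=num_layers)
        self.fc_o2y = nn.Linear(dim_recurrent, dim_output)

    def forward(self, x):
        x = x.unsqueeze(1)                                 # (seq_len, 1, dim_input)
        output, _ = self.gru(x)
        output = output.squeeze(1)
        output = output.narrow(0, output.size(0) - 1, 1)   # keep last time step
        return self.fc_o2y(F.relu(output))

gru_net = GRUNet(dim_input=10)
y = gru_net(torch.randn(7, 10))
print(y.shape)  # torch.Size([1, 2])
```

It can be trained with the same loop as `lstm` above by swapping in `gru_net` and its parameters.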
# + [markdown] id="4rURh4FMrWto" colab_type="text"
# # Explore!
#
# What are good negative examples?
#
# How to be sure that your network 'generalizes'?
# + id="SZj5ed7WrWtp" colab_type="code" colab={}
| RNN_practicals_X_colab.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '1'
# +
import sys
SOURCE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__name__)))
sys.path.insert(0, SOURCE_DIR)
# -
import malaya_speech
import malaya_speech.config
from malaya_speech.train.model import revsic_glowtts as glowtts
import tensorflow as tf
import numpy as np
import math
import matplotlib.pyplot as plt
# +
import pickle
with open('dataset-mel.pkl', 'rb') as fopen:
data, d = pickle.load(fopen)
with open('dataset-mel-wav.pkl', 'rb') as fopen:
wav = pickle.load(fopen)
data.keys()
# -
i = tf.placeholder(tf.int32, [None, None])
i_lengths = tf.placeholder(tf.int32, [None])
mel_outputs = tf.placeholder(tf.float32, [None, None, 80])
mel_lengths = tf.placeholder(tf.int32, [None])
config = glowtts.Config(mel = 80, vocabs = 66)
model = glowtts.Model(config)
loss, losses, attn = model.compute_loss(text = i, textlen = i_lengths, mel = mel_outputs, mellen = mel_lengths)
loss, losses, attn
mel, mellen, attn_out = model(inputs = i, lengths = i_lengths)
mel, mellen, attn_out
# +
parameters = {
'optimizer_params': {'beta1': 0.9, 'beta2': 0.98, 'epsilon': 1e-9},
'lr_policy_params': {
'warmup_steps': 40000,
},
}
def noam_schedule(step, learning_rate, channels, warmup_steps=4000):
return learning_rate * channels ** -0.5 * \
tf.minimum(step ** -0.5, step * warmup_steps ** -1.5)
def learning_rate_scheduler(global_step):
    # Noam schedule defined above; the base learning rate of 1e-4
    # matches the Adam optimizer used below
    return noam_schedule(
        tf.cast(global_step, tf.float32),
        1e-4,
        config.channels,
        **parameters['lr_policy_params'],
    )
# -
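The Noam schedule grows the learning rate linearly over the warmup phase and then decays it as the inverse square root of the step; the two branches meet exactly at `step == warmup_steps`, which is the peak. A NumPy-only sketch of the same formula (nothing here beyond the expression above):

```python
import numpy as np

def noam_np(step, learning_rate, channels, warmup_steps=4000):
    # learning_rate * channels^-0.5 * min(step^-0.5, step * warmup_steps^-1.5)
    return learning_rate * channels ** -0.5 * np.minimum(
        step ** -0.5, step * warmup_steps ** -1.5)

steps = np.array([1.0, 2000.0, 4000.0, 16000.0])
lrs = noam_np(steps, learning_rate=1.0, channels=1)
# peak at step == warmup_steps: 4000 ** -0.5 ~ 0.0158
print(lrs)
```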
optimizer = tf.train.AdamOptimizer(learning_rate = 1e-4, beta1 = 0.9,
beta2 = 0.98, epsilon = 1e-9).minimize(loss)
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
# +
# %%time
o = sess.run(mel, feed_dict = {i: [data['text_ids'][0],data['text_ids'][0]],
i_lengths: [data['len_text_ids'][0,0], data['len_text_ids'][0,0]],
mel_outputs: [data['mel'].astype(np.float32)[0],data['mel'].astype(np.float32)[0]],
mel_lengths: [408, 408]})
# -
mel_outputs_ = np.reshape(o[0], [-1, 80])
fig = plt.figure(figsize=(10, 8))
ax1 = fig.add_subplot(311)
ax1.set_title('Predicted Mel-before-Spectrogram')
im = ax1.imshow(np.rot90(mel_outputs_), aspect='auto', interpolation='none')
fig.colorbar(mappable=im, shrink=0.65, orientation='horizontal', ax=ax1)
plt.show()
# +
# %%time
o = sess.run([loss, losses, attn], feed_dict = {i: [data['text_ids'][0],data['text_ids'][0]],
i_lengths: [data['len_text_ids'][0,0], data['len_text_ids'][0,0]],
mel_outputs: [data['mel'].astype(np.float32)[0],data['mel'].astype(np.float32)[0]],
mel_lengths: [408, 408]})
# -
for k in range(200):
o = sess.run([loss, losses, optimizer], feed_dict = {i: [data['text_ids'][0],data['text_ids'][0]],
i_lengths: [data['len_text_ids'][0,0], data['len_text_ids'][0,0]],
mel_outputs: [data['mel'].astype(np.float32)[0],data['mel'].astype(np.float32)[0]],
mel_lengths: [408, 408]})
print(k, o)
# +
# %%time
o = sess.run([loss, losses, attn], feed_dict = {i: [data['text_ids'][0],data['text_ids'][0]],
i_lengths: [data['len_text_ids'][0,0], data['len_text_ids'][0,0]],
mel_outputs: [data['mel'].astype(np.float32)[0],data['mel'].astype(np.float32)[0]],
mel_lengths: [408, 408]})
# -
o = sess.run([mel, mellen, attn_out], feed_dict = {i: [data['text_ids'][0],data['text_ids'][0]],
i_lengths: [data['len_text_ids'][0,0], data['len_text_ids'][0,0]],
mel_outputs: [data['mel'].astype(np.float32)[0],data['mel'].astype(np.float32)[0]],
mel_lengths: [408, 408]})
mel_outputs_ = np.reshape(o[0][0], [-1, 80])
fig = plt.figure(figsize=(10, 8))
ax1 = fig.add_subplot(311)
ax1.set_title('Predicted Mel-before-Spectrogram')
im = ax1.imshow(np.rot90(mel_outputs_), aspect='auto', interpolation='none')
fig.colorbar(mappable=im, shrink=0.65, orientation='horizontal', ax=ax1)
plt.show()
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(111)
ax.set_title('Alignment steps')
im = ax.imshow(
o[-1][0],
aspect='auto',
origin='lower',
interpolation='none')
fig.colorbar(im, ax=ax)
xlabel = 'Decoder timestep'
plt.xlabel(xlabel)
plt.ylabel('Encoder timestep')
plt.tight_layout()
plt.show()
saver = tf.train.Saver()
saver.save(sess, 'test/model.ckpt')
| test/test-revsic-glowtts.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # <span style="color:orange">Anomaly Detection Tutorial (ANO101) - Level Beginner</span>
# **Created using: PyCaret 2.0** <br />
# **Date Updated: August 24, 2020**
#
# # 1.0 Objective of Tutorial
# Welcome to the Anomaly Detection Tutorial **(ANO101)**. This tutorial assumes that you are new to PyCaret and are looking to get started with Anomaly Detection using the `pycaret.anomaly` module.
#
# In this tutorial we will learn:
#
#
# * **Getting Data:** How to import data from PyCaret repository?
# * **Setting up Environment:** How to setup experiment in PyCaret to get started with building anomaly models?
# * **Create Model:** How to create a model and assign anomaly labels to original dataset for analysis?
# * **Plot Model:** How to analyze model performance using various plots?
# * **Predict Model:** How to assign anomaly labels to new and unseen dataset based on trained model?
# * **Save / Load Model:** How to save / load model for future use?
#
# Read Time : Approx. 25 Minutes
#
#
# ## 1.1 Installing PyCaret
# The first step in getting started with PyCaret is to install it. Installation is easy and takes only a few minutes. Follow the instructions below:
#
# #### Installing PyCaret in Local Jupyter Notebook
# `pip install pycaret` <br />
#
# #### Installing PyCaret on Google Colab or Azure Notebooks
# `!pip install pycaret`
#
#
# ## 1.2 Pre-Requisites
# - Python 3.6 or greater
# - PyCaret 2.0 or greater
# - Internet connection to load data from PyCaret's repository
# - Basic Knowledge of Anomaly Detection
#
# ## 1.3 For Google colab users:
# If you are running this notebook on Google Colab, run the following code at the top of your notebook to display interactive visuals.<br/>
# <br/>
# `from pycaret.utils import enable_colab` <br/>
# `enable_colab()`
#
# ## 1.4 See also:
# - __[Anomaly Detectiom Tutorial (ANO102) - Level Intermediate](https://github.com/pycaret/pycaret/blob/master/tutorials/Anomaly%20Detection%20Tutorial%20Level%20Intermediate%20-%20ANO102.ipynb)__
# - __[Anomaly Detection Tutorial (ANO103) - Level Expert](https://github.com/pycaret/pycaret/blob/master/tutorials/Anomaly%20Detection%20Tutorial%20Level%20Expert%20-%20ANO103.ipynb)__
# # 2.0 What is Anomaly Detection?
#
# Anomaly Detection is the task of identifying rare items, events or observations which raise suspicions by differing significantly from the majority of the data. Typically the anomalous items translate to some kind of problem such as bank fraud, a structural defect, medical problems or errors in a text. Three broad categories of anomaly detection techniques exist:
#
# - **Unsupervised anomaly detection:** Unsupervised anomaly detection techniques detect anomalies in an unlabeled test data set under the assumption that the majority of the instances in the dataset are normal by looking for instances that seem to fit least to the remainder of the data set. <br/>
# <br/>
# - **Supervised anomaly detection:** This technique requires a dataset that has been labeled as "normal" and "abnormal" and involves training a classifier. <br/>
# <br/>
# - **Semi-supervised anomaly detection:** This technique constructs a model representing normal behavior from a given normal training dataset, and then tests the likelihood of a test instance to be generated by the learnt model. <br/>
#
# The `pycaret.anomaly` module supports both unsupervised and supervised anomaly detection techniques. In this tutorial we will only cover the unsupervised technique.
#
# __[Learn More about Anomaly Detection](https://en.wikipedia.org/wiki/Anomaly_detection)__
# # 3.0 Overview of Anomaly Detection Module in PyCaret
# PyCaret's anomaly detection module (`pycaret.anomaly`) is an unsupervised machine learning module which performs the task of identifying rare items, events or observations that raise suspicions by differing significantly from the majority of the data.
#
# PyCaret's anomaly detection module provides several pre-processing features that can be configured when initializing the setup through the `setup()` function. It has over 12 algorithms and a few plots to analyze the results of anomaly detection. The module also implements a unique function, `tune_model()`, that allows you to tune the hyperparameters of an anomaly detection model to optimize a supervised learning objective such as `AUC` for classification or `R2` for regression.
# # 4.0 Dataset for the Tutorial
# For this tutorial we will use a dataset from UCI called **Mice Protein Expression**. The dataset consists of the expression levels of 77 proteins/protein modifications that produced detectable signals in the nuclear fraction of cortex. The dataset contains a total of 1080 measurements per protein. Each measurement can be considered as an independent sample/mouse. __[Click Here](https://archive.ics.uci.edu/ml/datasets/Mice+Protein+Expression)__ to read more about the dataset.
#
#
# #### Dataset Acknowledgement:
# <NAME> Department of Software Engineering and Artificial Intelligence, Faculty of Informatics and the Department of Biochemistry and Molecular Biology, Faculty of Chemistry, University Complutense, Madrid, Spain.
# Email: <EMAIL>
#
# <NAME>, creator and owner of the protein expression data, is currently with the Linda Crnic Institute for Down Syndrome, Department of Pediatrics, Department of Biochemistry and Molecular Genetics, Human Medical Genetics and Genomics, and Neuroscience Programs, University of Colorado, School of Medicine, Aurora, Colorado, USA.
# Email: <EMAIL>
#
# <NAME> is currently with the Department of Computer Science, Virginia Commonwealth University, Richmond, Virginia, USA, and IITiS Polish Academy of Sciences, Poland.
# Email: <EMAIL>
#
# The original dataset and data dictionary can be __[found here.](https://archive.ics.uci.edu/ml/datasets/Mice+Protein+Expression)__
# # 5.0 Getting the Data
# You can download the data from the original source __[found here](https://archive.ics.uci.edu/ml/datasets/Mice+Protein+Expression)__ and load it using pandas __[(Learn How)](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html)__, or you can use PyCaret's data repository to load the data using the `get_data()` function (this requires an internet connection).
from pycaret.datasets import get_data
dataset = get_data('mice')
#check the shape of data
dataset.shape
# In order to demonstrate the `predict_model()` function on unseen data, a sample of 5% (54 samples) is taken out of the original dataset to be used for predictions at the end of the experiment. This should not be confused with a train/test split. This particular split is performed to simulate a real-life scenario. Another way to think about this is that these 54 samples were not available at the time the experiment was performed.
# +
data = dataset.sample(frac=0.95, random_state=786)
data_unseen = dataset.drop(data.index)
data.reset_index(drop=True, inplace=True)
data_unseen.reset_index(drop=True, inplace=True)
print('Data for Modeling: ' + str(data.shape))
print('Unseen Data For Predictions: ' + str(data_unseen.shape))
# -
# # 6.0 Setting up Environment in PyCaret
# The `setup()` function initializes the environment in PyCaret and creates the transformation pipeline to prepare the data for modeling and deployment. `setup()` must be called before executing any other function in PyCaret. It takes only one mandatory parameter: a pandas dataframe. All other parameters are optional and are used to customize the pre-processing pipeline (we will see them in later tutorials).
#
# When `setup()` is executed, PyCaret's inference algorithm will automatically infer the data types for all features based on certain properties. Although the data type is inferred correctly most of the time, it's not always the case. Therefore, after `setup()` is executed, PyCaret displays a table containing the features and their inferred data types. At this stage, you can inspect the table and press `enter` to continue if all data types are correctly inferred, or type `quit` to end the experiment. Identifying data types correctly is of fundamental importance in PyCaret, as it automatically performs a few pre-processing tasks that are imperative to any machine learning experiment. These pre-processing tasks are performed differently for each data type. As such, it is very important that data types are correctly configured.
#
# In later tutorials we will learn how to override PyCaret's inferred data types using the `numeric_features` and `categorical_features` parameters in `setup()`.
# +
from pycaret.anomaly import *
exp_ano101 = setup(data, normalize = True,
ignore_features = ['MouseID'],
session_id = 123)
# -
# Once the setup is successfully executed it prints an information grid that contains a few important pieces of information. Much of it is related to the pre-processing pipeline constructed when `setup()` is executed. Most of these details are out of scope for the purpose of this tutorial. However, a few important things to note at this stage are:
#
# - **session_id :** A pseudo-random number distributed as a seed in all functions for later reproducibility. If no `session_id` is passed, a random number is automatically generated and distributed to all functions. In this experiment the `session_id` is set to `123` for later reproducibility.<br/>
# <br/>
# - **Missing Values :** When there are missing values in the original data this shows as `True`. Notice that `Missing Values` in the information grid above is `True`, as the data contains missing values which are automatically imputed using the `mean` for numeric features and a `constant` for categorical features. The method of imputation can be changed using the `numeric_imputation` and `categorical_imputation` parameters in `setup()`. <br/>
# <br/>
# - **Original Data :** Displays the original shape of dataset. In this experiment (1026, 82) means 1026 samples and 82 features. <br/>
# <br/>
# - **Transformed Data :** Displays the shape of transformed dataset. Notice that the shape of original dataset (1026, 82) is transformed into (1026, 91). The number of features has increased due to encoding of categorical features in the dataset. <br/>
# <br/>
# - **Numeric Features :** Number of features inferred as numeric. In this dataset, 77 out of 82 features are inferred as numeric. <br/>
# <br/>
# - **Categorical Features :** Number of features inferred as categorical. In this dataset, 5 out of 82 features are inferred as categorical. Also notice that we have ignored one categorical feature, `MouseID`, using the `ignore_features` parameter. <br/>
#
# Notice how a few tasks that are imperative to modeling, such as missing value imputation and categorical encoding, are automatically handled. Most of the other parameters in `setup()` are optional and used for customizing the pre-processing pipeline. These parameters are out of scope for this tutorial, but as you progress to the intermediate and expert levels, we will cover them in much more detail.
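As a rough illustration of what the default imputation amounts to, here is a plain-pandas sketch (not PyCaret's actual internals; the column values and the categorical placeholder string are made up for illustration):

```python
import pandas as pd

# Toy frame mimicking the mice dataset: one numeric protein column and one
# categorical column, each with a missing value (values are hypothetical).
df = pd.DataFrame({
    "DYRK1A_N": [0.50, None, 0.70],
    "Genotype": ["Control", None, "Ts65Dn"],
})

# Numeric feature: impute with the column mean.
df["DYRK1A_N"] = df["DYRK1A_N"].fillna(df["DYRK1A_N"].mean())

# Categorical feature: impute with a constant placeholder (name assumed).
df["Genotype"] = df["Genotype"].fillna("not_available")

print(df["DYRK1A_N"].tolist())  # [0.5, 0.6, 0.7]
print(df["Genotype"].tolist())  # ['Control', 'not_available', 'Ts65Dn']
```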
# # 7.0 Create a Model
# Creating an anomaly detection model in PyCaret is simple and similar to how you would create a model in the supervised modules of PyCaret. The anomaly detection model is created using the `create_model()` function, which takes one mandatory parameter: the name of the model as a string. This function returns a trained model object. See the example below:
iforest = create_model('iforest')
print(iforest)
# We have created an Isolation Forest model using `create_model()`. Notice that the `contamination` parameter is set to `0.05`, which is the default value used when you do not pass the `fraction` parameter to `create_model()`. The `fraction` parameter determines the proportion of outliers in the dataset. In the example below, we will create a `One Class Support Vector Machine` model with a `fraction` of `0.025`.
svm = create_model('svm', fraction = 0.025)
print(svm)
# Just by replacing `iforest` with `svm` inside `create_model()` we have now created an `OCSVM` anomaly detection model. There are 12 ready-to-use models available in the `pycaret.anomaly` module. To see the complete list, please see the docstring or use the `models` function.
models()
# # 8.0 Assign a Model
# Now that we have created a model, we would like to assign the anomaly labels to our training dataset (1026 samples) to analyze the results. We will achieve this using the `assign_model()` function. See an example below:
iforest_results = assign_model(iforest)
iforest_results.head()
# Notice that two columns, `Label` and `Score`, are added towards the end. 0 stands for inliers and 1 for outliers/anomalies. `Score` is the value computed by the algorithm; outliers are assigned larger anomaly scores. Notice that `iforest_results` also includes the `MouseID` feature that we dropped during `setup()`. It wasn't used for the model and is only appended to the dataset when you use `assign_model()`. In the next section we will see how to analyze the results of anomaly detection using `plot_model()`.
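Because `assign_model()` returns an ordinary DataFrame, the labeled results can be sliced with plain pandas. A minimal sketch on a toy frame with the same `Label`/`Score` columns (the values are made up for illustration):

```python
import pandas as pd

# Hypothetical slice of assign_model() output: Label (1 = outlier) and Score.
results = pd.DataFrame({
    "Label": [0, 0, 1, 0, 1],
    "Score": [-0.02, -0.05, 0.11, -0.01, 0.08],
})

# Keep only the rows flagged as anomalies.
outliers = results[results["Label"] == 1]
print(f"{len(outliers)} of {len(results)} samples flagged as anomalies")

# Most anomalous samples first.
print(results.sort_values("Score", ascending=False).head(2))
```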
# # 9.0 Plot a Model
# The `plot_model()` function can be used to analyze the anomaly detection model from different aspects. This function takes a trained model object and returns a plot. See the examples below:
# ### 9.1 T-distributed Stochastic Neighbor Embedding (t-SNE)
plot_model(iforest)
# ### 9.2 Uniform Manifold Approximation and Projection
plot_model(iforest, plot = 'umap')
# # 10.0 Predict on Unseen Data
# The `predict_model()` function is used to assign anomaly labels to a new unseen dataset. We will now use our `iforest` model to predict the data stored in `data_unseen`. This was created at the beginning of the experiment and contains 54 new samples that were not exposed to PyCaret before.
unseen_predictions = predict_model(iforest, data=data_unseen)
unseen_predictions.head()
# The `Label` column indicates the outlier (1 = outlier, 0 = inlier). `Score` is the value computed by the algorithm; outliers are assigned larger anomaly scores. You can also use the `predict_model()` function to label the training data. See the example below:
data_predictions = predict_model(iforest, data = data)
data_predictions.head()
# # 11.0 Saving the Model
# We have now finished the experiment by using our `iforest` model to predict outlier labels on unseen data. This brings us to the end of our experiment, but one question remains: what happens when you have more new data to predict? Do you have to go through the entire experiment again? The answer is no, you don't need to rerun the entire experiment and reconstruct the pipeline to generate predictions on new data. PyCaret's inbuilt `save_model()` function allows you to save the model along with the entire transformation pipeline for later use.
save_model(iforest,'Final IForest Model 08Feb2020')
# # 12.0 Loading the Saved Model
# To load a saved model at a future date in the same or a different environment, we use PyCaret's `load_model()` function and can then easily apply the saved model to new unseen data for prediction.
saved_iforest = load_model('Final IForest Model 08Feb2020')
# Once the model is loaded in the environment, you can simply use it to predict on any new data using the same `predict_model()` function. Below we have applied the loaded model to predict the same `data_unseen` that we used in section 10 above.
new_prediction = predict_model(saved_iforest, data=data_unseen)
new_prediction.head()
# Notice that the results of `unseen_predictions` and `new_prediction` are identical.
# # 13.0 Wrap-up / Next Steps?
# What we have covered in this tutorial is the entire machine learning pipeline, from data ingestion and pre-processing to training the anomaly detector, predicting on unseen data, and saving the model for later use. We have completed all of this in fewer than 10 commands that are naturally constructed and very intuitive to remember, such as `create_model()`, `assign_model()` and `plot_model()`. Re-creating the entire experiment without PyCaret would have taken well over 100 lines of code in most libraries.
#
# In this tutorial, we have only covered the basics of `pycaret.anomaly`. In the following tutorials, we will go deeper into advanced pre-processing techniques that allow you to fully customize your machine learning pipeline and that are a must-know for any data scientist.
#
# See you at the next tutorial. Follow the link to __[Anomaly Detection Tutorial (ANO102) - Level Intermediate](https://github.com/pycaret/pycaret/blob/master/tutorials/Anomaly%20Detection%20Tutorial%20Level%20Intermediate%20-%20ANO102.ipynb)__
| tutorials/Anomaly Detection Tutorial Level Beginner - ANO101.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Batch image processing using Python
# There are several ways to deal with images in Python; reasonable choices are [numpy](https://numpy.org/), [scikit-image](https://scikit-image.org/) or [Pillow](https://pillow.readthedocs.io/en/stable/).
# In this tutorial we will use __Pillow__.
# __Setup__
from PIL import Image
from pathlib import Path
import matplotlib.pyplot as plt
import numpy as np
# ## Get a set of images from google (optional)
# We use code provided here: https://github.com/hardikvasa/google-images-download
# __Setting__
search_term = "beiersdorf"
arguments = {
"keywords": search_term,
"limit": 15,
"size" : '>800*600',
"print_urls": True,
"output_directory": '../data/images',
'format' : 'png'
}
# +
from google_images_download import google_images_download # importing the library
response = google_images_download.googleimagesdownload() # class instantiation
# uncomment to download images
#paths = response.download(arguments) # passing the arguments to the function
# -
# ## Open an image
# !ls ../data/images/{search_term}/
p = Path(f'../data/images/{search_term}/example.png')
img = Image.open(p)
# ## Image properties
print(img.format)
print()
print(img.size)
print(img.width)
print(img.height)
img.format
img.mode
img.size
print(f'Width: {img.width}, Height: {img.height}')
# ## Plot image
img
plt.imshow(img)
# ## Image manipulation
# ### Cropping
box = (400, 80, img.width, 500) # left, upper, right, and lower pixel coordinate
img_crop = img.crop(box=box)
plt.imshow(img_crop)
# ***
# > __Challenge: Write a function called `image_crop`. The function should crop the image by a given ratio.__
# <img src="./_img/resize_image.png" width="600px">
def image_crop(img, ratio=1):
'''
Function crop a PIL image object by a given ratio from 0 to 1.
: img: PIL image object
: ratio: float between 0 and 1
: returns: a cropped PIL image object
'''
    assert ratio > 0, "Warning: Ratio shall be greater than 0"
    assert ratio <= 1, "Warning: Ratio shall be lower than or equal to 1"
center = (img.width // 2, img.height //2)
upper_left_x = None # your code here
upper_left_y = None # your code here
lower_right_x = None # your code here
lower_right_y = None # your code here
# Checking if all None's have been replaced by actual code
if any(x is None for x in [upper_left_x, upper_left_y, lower_right_x, lower_right_y]):
print("Keep improving your code ...\n")
return img
else:
return img.crop(box=(upper_left_x, upper_left_y, lower_right_x, lower_right_y))
# _If you did not manage to complete the task, feel free to look at a possible solution by uncommenting the next code cell. Do not forget to run the cell twice!_
# +
# # %load ../src/_solutions/image_crop.py
# -
# __Apply the function__
cropped_by_ratio = image_crop(img, ratio=0.5)
print(f'Original: {img.size}, Cropped: {cropped_by_ratio.size}')
plt.imshow(cropped_by_ratio);
# ***
# ### Filters
# +
from PIL import ImageFilter
# Blur the input image using the filter ImageFilter.BLUR
img_crop.filter(filter=ImageFilter.BLUR)
# -
# ### Resize
# +
(width, height) = (img_crop.width // 2, img_crop.height // 2)
#PIL.Image.NEAREST # default
#PIL.Image.BOX, PIL.Image.BILINEAR, PIL.Image.HAMMING, PIL.Image.BICUBIC or PIL.Image.LANCZOS.
img_resized = img_crop.resize((width, height))
print(f'cropped: {img_crop.size}, resized: {img_resized.size}')
img_resized
# -
# ### Rotate
# Rotate the image by 60 degrees counter clockwise
theta = 60
# Angle is in degrees counter clockwise
img_crop.rotate(angle=theta)
img_crop.rotate(angle=theta, expand=True)
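The difference `expand` makes is easiest to see in the output size: without it the canvas stays fixed and the rotated corners are clipped; with it Pillow enlarges the canvas to fit the whole rotated image. A quick sketch on a synthetic image:

```python
from PIL import Image

# Synthetic 200x100 image, so the effect of `expand` is easy to measure.
img = Image.new("RGB", (200, 100), color=(30, 120, 200))

print(img.rotate(45).size)               # (200, 100) - canvas unchanged, corners clipped
print(img.rotate(45, expand=True).size)  # larger canvas, nothing clipped
```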
# ### Saving to disk
fp = Path('../data/images/bs_cropped.png')
img_crop.save(fp)
fp.exists()
# ## Image enhancements
from PIL import ImageEnhance
# ### Sharpness
enhancer = ImageEnhance.Sharpness(img_crop)
enhancer.enhance(0.3)
# ### Color
enhancer = ImageEnhance.Color(img_crop)
enhancer.enhance(0.25)
# ### Contrast
enhancer = ImageEnhance.Contrast(img_crop)
enhancer.enhance(0.75)
# ### Brightness
enhancer = ImageEnhance.Brightness(img_crop)
enhancer.enhance(0.25)
# ***
# > __Challenge: Write a function called `image_process`. Use the function to apply a number of image processing steps to any image provided to the function.__
def image_process(img):
'''
Function to apply a series of image processing and enhancement steps on an image
: img: a PIL image object
: return: a copy of a processed/enhanced PIL image object
'''
# your code here ...
return img.copy() # returns a copy
# _If you did not manage to complete the task, feel free to look at a possible solution by uncommenting the next code cell. Do not forget to run the cell twice!_
# +
# # %load ../src/_solutions/image_process.py
# -
image_process(img)
# ## Expand to batch processing
# +
# google search images
paths = Path(f'../data/images/{search_term}/').glob('*.png')
# images from Ahmed
#paths = Path(f'../data/images/Ahmed/').glob('*.JPG')
# -
out_path = Path('../data/images/processed/')
for path in paths:
print(f'Processing {path} ...')
try:
img = Image.open(path)
processed = image_process(img)
fp = out_path.joinpath(path.name)
processed.save(fp)
except Exception as e:
print(f'\nWarning {e}. Image {path.name} is excluded!\n')
# ***
| 03_automate_the_boring_stuff/notebooks/04_batch_image_processing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
import copy
import glob
import json
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import logging
sns.set()  # apply the seaborn default style
# %matplotlib inline
# +
def make_col_readable(col_data, raw_key):
"""Make single column data readable."""
k,m,g = 1024, 1024*1024, 1024*1024*1024
col_mean = col_data.mean()
    if g <= col_mean:
        new_col = col_data / g
        new_key = "{}({})".format(raw_key, "G")
    elif m <= col_mean < g:  # elif, so the G result is not overwritten below
        new_col = col_data / m
        new_key = "{}({})".format(raw_key, "M")
    elif k <= col_mean < m:
        new_col = col_data / k
        new_key = "{}({})".format(raw_key, "K")
    else:
        new_col, new_key = None, None
    return new_col, new_key
def make_human_readable(data, keys):
"""Make a pandas data readable."""
updated_map = dict()
if not isinstance(keys, list):
keys = [keys]
for k in keys:
col_data = data[k]
updated_val, updated_key = make_col_readable(col_data, k)
        if not updated_key:  # nothing to convert for this column
continue
# add new column
data[updated_key] = updated_val
updated_map.update({k: updated_key})
return data, updated_map
METRIC_MAP = {
"Classification": "accuarcy",
"Super-Resolution": {"psnr", "ssim"}, # and
"Segmentation": "mIOU",
"Detection": {"F1 score", "LAMR"}, # or
"Click-Through Rate Prediction": "AUC",
}
PARAMS_SET = set(["model_size", "metric", "params", "flops", "Inference Time"])
def get_params_set(case_type, default_params, metric_map):
"""Get params set."""
ret_params = copy.deepcopy(default_params)
if "metric" in default_params:
ret_params.remove("metric")
metric_keys = metric_map.get(case_type)
if not metric_keys:
raise KeyError("not found case: {} in map: {}.".format(case_type, metric_map))
elif isinstance(metric_keys, str):
ret_params.add(metric_keys)
elif isinstance(metric_keys, set):
ret_params |= metric_keys
else:
        raise ValueError("unknown value: {}".format(metric_keys))
return ret_params
def test_get_params():
detec_target = {'params', 'F1 score', 'model_size', 'Inference Time', 'flops', 'LAMR'}
assert get_params_set("Detection", PARAMS_SET, METRIC_MAP) == detec_target
classif_target = {'params', 'accuarcy', 'model_size', 'Inference Time', 'flops'}
assert get_params_set("Classification", PARAMS_SET, METRIC_MAP) == classif_target
print("check passed!")
if __name__ == "__main__":
test_get_params()
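To see the unit scaling in isolation: a column of byte counts whose mean falls in the MiB range is divided by 1024² and its key gets an `(M)` suffix. A standalone sketch of that branch (the column name is assumed):

```python
import pandas as pd

K, M = 1024, 1024 * 1024

# Byte counts with a mean in the MiB range (hypothetical model sizes).
col = pd.Series([2 * M, 6 * M])
col_mean = col.mean()

if M <= col_mean:
    new_col, new_key = col / M, "model_size(M)"

print(new_key, new_col.tolist())  # model_size(M) [2.0, 6.0]
```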
# +
def _get_json_val(json_file):
"""Get json value."""
with open(json_file, "r") as json_obj:
single_val = json.load(json_obj)
return single_val
def gather_metrics_from_dir(dir_path, case_type):
"""Collect the results under path."""
json_files = glob.glob(os.path.join(dir_path, "*.json"))
need_keys = get_params_set(case_type, PARAMS_SET, METRIC_MAP)
# print("json_files: ",json_files)
ret_val_total = list()
# try first json with keys check
single_val = _get_json_val(json_files[0])
unused_key = set([_k for _k in need_keys if _k not in single_val])
need_keys -= unused_key
    # convert the set to a list so the column order is fixed
need_keys = list(need_keys)
for _json_file in json_files:
json_value = _get_json_val(_json_file)
single_val = list([float(json_value[k]) for k in need_keys])
        # append the json file name; mixing strings and numbers means the numpy array below becomes string-typed
single_val.append(_json_file)
ret_val_total.append(single_val)
ret_np = np.array(ret_val_total)
ret_pd = pd.DataFrame(ret_np, columns=[*need_keys, "json_name"])
ret_pd[need_keys] = ret_pd[need_keys].apply(pd.to_numeric)
ret_pd.sort_values("model_size", inplace=True)
return ret_pd
# -
def plot_pruning_case(pdf, x_label, y_label, to_png_file="default.png", dpi=300):
sns.set()
sns.set(rc={"figure.figsize": (6.4, 3.6), "figure.dpi": 300})
# sns.set_style("white")
# sns.set_context("paper", font_scale=1.5, rc={"lines.linewidth":0.5})
sns.set_style("dark")
fig,axes = plt.subplots(nrows=2,ncols=2,figsize=(6.4,3.6))
fig.subplots_adjust(hspace=0.1, wspace=0.25)
plt.ticklabel_format(style='plain', axis='y')
for count, (x, y, ax) in enumerate(zip(x_label, y_label, axes.reshape(-1))):
ax= sns.scatterplot(x=x, y=y,data=pdf,ax=ax)
ax.set_xlabel(x,fontsize=6)
ax.set_ylabel(y,fontsize=6)
ax.tick_params(axis='y',labelsize=4, rotation=0)
ax.tick_params(axis='x',labelsize=4, rotation=45)
# ax.ticklabel_format(style='plain', axis='y',useOffset=False)
if count < 2:
ax.set(xlabel=None)
ax.set(xticklabels=[])
fig.savefig(to_png_file, dpi=dpi, bbox_inches='tight')
return to_png_file
# # Parameter description
#
# - model_size (y) for y axis
# - accuracy (y) metric type; supported types are defined by the user
# - flops (y)
# - params (y)
# - latency_batch (y) batch = (1, 32), batch 32 by default
# - latency_batch(1) # use `Inference Time = latency_batch/batch`
# - latency_batch(32)
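The relation `Inference Time = latency_batch / batch` mentioned above can be sketched directly (the latency value is hypothetical):

```python
# Per-sample inference time from a batched latency measurement.
batch = 32
latency_batch_ms = 40.0            # hypothetical latency for the whole batch
inference_time_ms = latency_batch_ms / batch
print(inference_time_ms)  # 1.25
```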
#
# ### Collect the json file under special path
#
# **Input**
# - dir_path: Directory to read json file
# - case_type: case to analysis, choose from the keys of METRIC_MAP
#
# **Output**:
# - pandas.DataFrame
# note: the example json data contains the typo "accuarcy" (not "accuracy"), so downstream keys must match it
ret_data = gather_metrics_from_dir(dir_path="./raw_metric1014",case_type="Classification")
ret_data
type(ret_data["Inference Time"][1])
# ### Check single json file
with open('./raw_metric1014/performance_104.json', "r") as f:
v = json.load(f)
v
# ### Show the Pruning case
#
# Input:
# - data with pandas.DataFrame
# - four labels for the x axis
# - four labels for the y axis
# - the name of the image file to save
plot_pruning_case(ret_data, x_label=["model_size"]*4,
y_label=["accuarcy", "params", "Inference Time", "flops"],
                  to_png_file="./default_pruning.png", dpi=100)  # json key error
# +
def plot_from_csv_summarize(csv_file, to_png_file="./default_summarize.png", dpi=300):
"""Plot pruning png with csv_summarize file."""
pdf = pd.read_csv(csv_file)
print(pdf.keys())
pdf_readable, up_map = make_human_readable(pdf, ["flops", "model_size", "params", "input_num"])
print(up_map)
# print(pdf_readable)
plot_pruning_case(pdf_readable, x_label=["model_size(K)"]*4,
y_label=["accuracy", "params(K)", "latency_sum", "flops(M)"],
to_png_file=to_png_file, dpi=dpi)
plot_from_csv_summarize(csv_file="./resnet20_prune_ea_gpu_latency_batchsize_1.csv", dpi=100)
| zeus/visual/visualizer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Samrath49/AI_ML_DL/blob/main/22.%20NLP.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="mjNIvjXtNy-O"
# # NLP
# + [markdown] id="inb4dgGpN0vc"
# ### Importing Libraries
# + id="1Yo0OWo_OUzI"
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# + [markdown] id="DYDC3Z3nN2-o"
# ### Importing Dataset
# + id="H4DxcQ93OVIM"
dataset = pd.read_csv('Restaurant_Reviews.tsv', delimiter = '\t', quoting = 3)
# + [markdown] id="pfSqCpV6N40T"
# ### Cleaning the texts
# + colab={"base_uri": "https://localhost:8080/"} id="nVNcHs6HOV5M" outputId="facb2a90-f4cf-4e63-c447-d124f4e9204b"
import re
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
corpus = []
for i in range(0, 1000):
review = re.sub('[^a-zA-Z]', ' ', dataset['Review'][i])
review = review.lower()
review = review.split()
ps = PorterStemmer()
all_stopwords = stopwords.words('english')
all_stopwords.remove('not')
review = [ps.stem(word) for word in review if not word in set(all_stopwords)]
review = ' '.join(review)
corpus.append(review)
# + colab={"base_uri": "https://localhost:8080/"} id="vDSMvKImTno6" outputId="becee689-ce6c-43a2-eb5f-89e35c49b08e"
print(corpus)
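To see what the cleaning loop does to a single review, here is a dependency-light sketch of the regex and lowercasing steps alone (stemming and stopword removal omitted; the review text is made up):

```python
import re

review = "Wow... Loved this place!"

# Keep letters only, lowercase, and tokenize - the first steps of the loop above.
cleaned = re.sub('[^a-zA-Z]', ' ', review).lower().split()
print(cleaned)  # ['wow', 'loved', 'this', 'place']
```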
# + [markdown] id="sL2-pUFFN8SE"
# ### Creating the Bag of Words model
# + id="v_Sv5RrpOWQU"
from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer(max_features = 1500)
X = cv.fit_transform(corpus).toarray()
y = dataset.iloc[:,-1].values
# + colab={"base_uri": "https://localhost:8080/"} id="6jXsdTjKekYO" outputId="142d6558-a4d2-46dc-c97a-ff5ecd58d19c"
len(X[0])
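Conceptually, `CountVectorizer` builds a vocabulary over the whole corpus and produces one count vector per document. A pure-Python sketch of the same bag-of-words idea on a toy two-review corpus:

```python
from collections import Counter

toy = ["great food", "not great service"]

# Vocabulary: sorted set of all tokens across the corpus.
vocab = sorted({word for doc in toy for word in doc.split()})

# One count row per document, in vocabulary order.
rows = [[Counter(doc.split())[word] for word in vocab] for doc in toy]

print(vocab)  # ['food', 'great', 'not', 'service']
print(rows)   # [[1, 1, 0, 0], [0, 1, 1, 1]]
```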
# + [markdown] id="TsvP5OecOAxt"
# ### Splitting the dataset into Training & Test set
# + id="qrHjc1smOWqD"
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
# + [markdown] id="HS0bLCJqOGOc"
# ### Training Naive Bayes model on Training Set
# + colab={"base_uri": "https://localhost:8080/"} id="TwlU0wD0OXCg" outputId="b849b1fa-c0f3-4756-ddcf-c37e5b08753a"
from sklearn.naive_bayes import GaussianNB
classifier = GaussianNB()
classifier.fit(X_train, y_train)
# + [markdown] id="E9ML_YFjOMm8"
# ### Predicting the Test set results
# + colab={"base_uri": "https://localhost:8080/"} id="KJrM0aWEOXg8" outputId="29269624-700c-4a10-b943-0180f4bd1097"
y_pred = classifier.predict(X_test)
print(np.concatenate((y_pred.reshape(len(y_pred), 1), y_test.reshape(len(y_test), 1)), 1))
# + [markdown] id="GsID32vbOPxE"
# ### Making the confusion Matrix
# + colab={"base_uri": "https://localhost:8080/"} id="_C30ddCsOYEV" outputId="6f553bf3-64cf-433d-cc0f-1129e5e7081e"
from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
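Accuracy can also be read straight off the confusion matrix: the diagonal entries are the correct predictions. A small sketch with made-up counts for 200 test reviews:

```python
# Hypothetical 2x2 confusion matrix: [[TN, FP], [FN, TP]].
cm = [[55, 42], [12, 91]]

correct = cm[0][0] + cm[1][1]               # diagonal = correct predictions
total = sum(sum(row) for row in cm)
print(correct / total)  # 0.73
```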
# + [markdown] id="SD3hUvNhispc"
# ### Predicting if a single review is positive or negative
# + [markdown] id="hqwSubAPivxG"
# ### Positive review
# + [markdown] id="cVIhEJfnix1u"
# Use our model to predict if the following review:
#
# "I love this restaurant so much"
#
# is positive or negative.
# + [markdown] id="9qQXFbYtiz2C"
# We just repeat the same text preprocessing process we did before, but this time with a single review.
# + colab={"base_uri": "https://localhost:8080/"} id="TVEijdCFi2jt" outputId="24ea0d5d-1479-4af1-da9c-cef6f0576ad4"
new_review = 'I love this restaurant so much'
new_review = re.sub('[^a-zA-Z]', ' ', new_review)
new_review = new_review.lower()
new_review = new_review.split()
ps = PorterStemmer()
all_stopwords = stopwords.words('english')
all_stopwords.remove('not')
new_review = [ps.stem(word) for word in new_review if not word in set(all_stopwords)]
new_review = ' '.join(new_review)
new_corpus = [new_review]
new_X_test = cv.transform(new_corpus).toarray()
new_y_pred = classifier.predict(new_X_test)
print(new_y_pred)
# + [markdown] id="lzBVjpfmi2_3"
# The review was correctly predicted as positive by our model.
# + [markdown] id="a4YodybQi4ft"
# ### Negative review
# + [markdown] id="rj1feGY-i7L2"
# Use our model to predict if the following review:
#
# "I hate this restaurant so much"
#
# is positive or negative.
# + [markdown] id="LiX6L3s7i9lh"
# We just repeat the same text preprocessing process we did before, but this time with a single review.
# + colab={"base_uri": "https://localhost:8080/"} id="7yoWN3Y7i-ws" outputId="006ecdc3-1277-4b58-b8c4-11c2f5937c30"
new_review = 'I hate this restaurant so much'
new_review = re.sub('[^a-zA-Z]', ' ', new_review)
new_review = new_review.lower()
new_review = new_review.split()
ps = PorterStemmer()
all_stopwords = stopwords.words('english')
all_stopwords.remove('not')
new_review = [ps.stem(word) for word in new_review if not word in set(all_stopwords)]
new_review = ' '.join(new_review)
new_corpus = [new_review]
new_X_test = cv.transform(new_corpus).toarray()
new_y_pred = classifier.predict(new_X_test)
print(new_y_pred)
# + [markdown] id="gMOH9dCnjABS"
# The review was correctly predicted as negative by our model.
| 22. NLP.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# # Interactive Plotting with `bokeh`
#
# This notebook shows how to build some simple interactive plots with bokeh. Change parameters and datasets to mix things up and create your own data visualisations! 📈📊📉
#
# There are also excellent interactive `bokeh` tutorials [here](https://mybinder.org/v2/gh/bokeh/bokeh-notebooks/master?filepath=tutorial%2F00%20-%20Introduction%20and%20Setup.ipynb).
# ## Preamble
# %load_ext autoreload
# %autoreload 2
# install im_tutorial package
# !pip install git+https://github.com/nestauk/im_tutorials.git
# +
# useful Python tools
import itertools
import collections
# matplotlib for static plots
import matplotlib.pyplot as plt
# networkx for networks
import networkx as nx
# numpy for mathematical functions
import numpy as np
# pandas for handling tabular data
import pandas as pd
# seaborn for pretty statistical plots
import seaborn as sns
pd.set_option('max_columns', 99)
# basic bokeh imports for an interactive scatter plot or line chart
from bokeh.io import show, output_notebook
from bokeh.plotting import figure
from bokeh.models import ColumnDataSource
# NB: If using Google Colab, this function must be run at
# the end of any cell in which you want to display a bokeh plot.
# If using Jupyter, then this line need only appear once at
# the start of the notebook.
output_notebook()
from im_tutorials.data import gis
# -
# ## Scatter Plot
from bokeh.palettes import Category20_20
country_df = gis.country_basic_info()
country_df.head(2)
country_cds = ColumnDataSource(
country_df[['lat', 'lng', 'population', 'area', 'alpha3Code', 'continent']],
)
from bokeh.transform import factor_cmap
# +
p = figure(width=800, height=400)
p.scatter(source=country_cds, x='lng', y='lat',
color=factor_cmap('continent', palette=Category20_20, factors=country_df['continent'].unique()))
show(p)
# output_notebook()
# -
| notebooks/04_basic_interactive_plotting.ipynb |