repo_name (string) · path (string) · license (string, 15 classes) · cells (list) · types (list)
EmuKit/emukit
notebooks/Emukit-tutorial-multi-fidelity-MUMBO-Example.ipynb
apache-2.0
[ "Demo of MUMBO for multi-fidelity Bayesian Optimisation\nThis notebook provides a demo of the MUlti-task Max-value Bayesian Optimisation (MUMBO) acquisition function of Moss et al [2020].\nhttps://arxiv.org/abs/2006.12093\nMUMBO provides the high perfoming optimization of other entropy-based acquisitions. However, unlike the standard entropy-search for multi-fidelity optimization, MUMBO requires a fraction of the computational cost. MUMBO is a multi-fidelity (or multi-task) extension of max-value entropy search also availible in Emukit.\nOur implementation of MUMBO is controlled by two parameters: \"num_samples\" and \"grid_size\". \"num_samples\" controls how many mote-carlo samples we use to calculate entropy reductions. As we only approximate a 1-d integral, \"num_samples\" does not need to be large or be increased for problems with large d (unlike standard entropy-search). We recomend values between 5-15. \"grid_size\" controls the coarseness of the grid used to approximate the distribution of our max value and so must increase with d. We recommend 10,000*d. Note that as the grid must only be calculated once per BO step, the choice of \"grid_size\" does not have a large impact on computation time.", "### General imports\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib import colors as mcolors\nimport GPy\nimport time\nnp.random.seed(12345)\n\n### Emukit imports\nfrom emukit.test_functions.forrester import multi_fidelity_forrester_function\nfrom emukit.core.loop.user_function import UserFunctionWrapper\nfrom emukit.multi_fidelity.convert_lists_to_array import convert_x_list_to_array\nfrom emukit.bayesian_optimization.acquisitions.entropy_search import MultiInformationSourceEntropySearch\nfrom emukit.bayesian_optimization.acquisitions.max_value_entropy_search import MUMBO\nfrom emukit.core.acquisition import Acquisition\nfrom emukit.multi_fidelity.models.linear_model import GPyLinearMultiFidelityModel\nfrom emukit.multi_fidelity.kernels.linear_multi_fidelity_kernel import LinearMultiFidelityKernel\nfrom emukit.multi_fidelity.convert_lists_to_array import convert_xy_lists_to_arrays\nfrom emukit.core import ParameterSpace, ContinuousParameter, InformationSourceParameter\nfrom emukit.model_wrappers import GPyMultiOutputWrapper\nfrom GPy.models.gp_regression import GPRegression\n\n\n### --- Figure config\nLEGEND_SIZE = 15", "Set up our toy problem (1D optimisation of the forrester function with two fidelity levels) and collect 6 initial points at low fidelity and 3 at high fidelitly.", "# Load function\n# The multi-fidelity Forrester function is already wrapped as an Emukit UserFunction object in \n# the test_functions package\nforrester_fcn, _ = multi_fidelity_forrester_function()\nforrester_fcn_low = forrester_fcn.f[0]\nforrester_fcn_high = forrester_fcn.f[1]\n\n# Assign costs\nlow_fidelity_cost = 1\nhigh_fidelity_cost = 10\n\n# Plot the function s\nx_plot = np.linspace(0, 1, 200)[:, None]\ny_plot_low = forrester_fcn_low(x_plot)\ny_plot_high = forrester_fcn_high(x_plot)\nplt.plot(x_plot, y_plot_low, 'b')\nplt.plot(x_plot, y_plot_high, 'r')\nplt.legend(['Low fidelity', 'High fidelity'])\nplt.xlim(0, 1)\nplt.title('High and low fidelity Forrester functions')\nplt.xlabel('x')\nplt.ylabel('y');\n\n\n# Collect and plot initial samples\nnp.random.seed(123)\nx_low = np.random.rand(6)[:, None]\nx_high = x_low[:3]\ny_low = forrester_fcn_low(x_low)\ny_high = forrester_fcn_high(x_high)\nplt.scatter(x_low,y_low)\nplt.scatter(x_high,y_high)", "Fit our linear 
multi-fidelity GP model to the observed data.", "x_array, y_array = convert_xy_lists_to_arrays([x_low, x_high], [y_low, y_high])\n\nkern_low = GPy.kern.RBF(1)\nkern_low.lengthscale.constrain_bounded(0.01, 0.5)\n\nkern_err = GPy.kern.RBF(1)\nkern_err.lengthscale.constrain_bounded(0.01, 0.5)\n\nmulti_fidelity_kernel = LinearMultiFidelityKernel([kern_low, kern_err])\ngpy_model = GPyLinearMultiFidelityModel(x_array, y_array, multi_fidelity_kernel, 2)\n\ngpy_model.likelihood.Gaussian_noise.fix(0.1)\ngpy_model.likelihood.Gaussian_noise_1.fix(0.1)\n\nmodel = GPyMultiOutputWrapper(gpy_model, 2, 5, verbose_optimization=False)\nmodel.optimize()", "Define acquisition functions for multi-fidelity problems", "# Define cost of different fidelities as acquisition function\nclass Cost(Acquisition):\n def __init__(self, costs):\n self.costs = costs\n\n def evaluate(self, x):\n fidelity_index = x[:, -1].astype(int)\n x_cost = np.array([self.costs[i] for i in fidelity_index])\n return x_cost[:, None]\n \n @property\n def has_gradients(self):\n return True\n \n def evaluate_with_gradients(self, x):\n return self.evaluate(x), np.zeros(x.shape)\n\nparameter_space = ParameterSpace([ContinuousParameter('x', 0, 1), InformationSourceParameter(2)])\ncost_acquisition = Cost([low_fidelity_cost, high_fidelity_cost])\nes_acquisition = MultiInformationSourceEntropySearch(model, parameter_space) / cost_acquisition\nmumbo_acquisition = MUMBO(model, parameter_space, num_samples=5, grid_size=500) / cost_acquisition", "Let's plot the resulting acquisition functions (MUMBO and standard entropy search for multi-fidelity BO) for the chosen model on the collected data. Note that MUMBO takes a fraction of the time of ES to compute (timings plotted on a log scale). This difference becomes even more apparent as you increase the dimension of the search space.", "x_plot_low = np.concatenate([np.atleast_2d(x_plot), np.zeros((x_plot.shape[0], 1))], axis=1)\nx_plot_high = np.concatenate([np.atleast_2d(x_plot), np.ones((x_plot.shape[0], 1))], axis=1)\nt_0=time.time()\nes_plot_low = es_acquisition.evaluate(x_plot_low)\nes_plot_high = es_acquisition.evaluate(x_plot_high)\nt_es=time.time()-t_0\nmumbo_plot_low = mumbo_acquisition.evaluate(x_plot_low)\nmumbo_plot_high = mumbo_acquisition.evaluate(x_plot_high)\nt_mumbo=time.time()-t_es-t_0\n\nfig, (ax1, ax2) = plt.subplots(1, 2)\nax1.plot(x_plot, es_plot_low , \"blue\")\nax1.plot(x_plot, es_plot_high, \"red\")\nax1.set_title(\"Multi-fidelity Entropy Search\")\nax1.set_xlabel(r\"$x$\")\nax1.set_ylabel(r\"$\\alpha(x)$\")\nax1.set_xlim(0, 1)\n\nax2.plot(x_plot, mumbo_plot_low , \"blue\", label=\"Low fidelity evaluations\")\nax2.plot(x_plot, mumbo_plot_high , \"red\",label=\"High fidelity evaluations\")\nax2.legend(loc=\"upper right\")\nax2.set_title(\"MUMBO\")\nax2.set_xlabel(r\"$x$\")\nax2.set_ylabel(r\"$\\alpha(x)$\")\nax2.set_xlim(0, 1)\nplt.tight_layout()\nplt.figure()\nplt.bar([\"ES\",\"MUMBO\"],[t_es,t_mumbo])\nplt.xlabel(\"Acquisition Choice\")\nplt.yscale('log')\nplt.ylabel(\"Calculation Time (secs)\")\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
andrzejkrawczyk/python-course
workshops/Gr4-31-07-2018/Tresci zadan.ipynb
apache-2.0
[ "Napisz funkcje, ktora znajdzie duplikaty w kolekcji\nNapisz za pomoca jednego polecenia wyswietlenie 300-krotne liczby \"1.44e+4+50\" oddzielonego srednikiem", "assert duplicates((1, 1, 2, 3, 4, 5, 6, 8, 2, 4, -7, 12, -7)) == (1, 2, 4, -7)\nassert duplicates([1, 1, 2, 3, 4, 5, \"asd\", 8, \"asd\", 4, -7, 12, -7]) == (1, 2, 4, \"asd\", -7)", "Napisz generator liczb pseudolosowych z czestotliwosciami 0,25 dla zakresu 1-50 i 0,75 dla zakresu 51-100.\nSprawdz czestotliwosc dla 250 tysiecy wygenerowanych liczb\nNapisz list comprehension, ktore zwroci kwadraty liczb z kolekcji", "assert square_collection([1, 2, 3, 4, 5, 6]) == [1, 4, 9, 16, 25, 36]", "Zaimplementuj linked liste w pythonie wykorzystujac namedtuples\nZaimplementuj funkcje, ktora znajdzie czesc wspolna dwoch kolekcji", "a = [12, 1, 2, 3, 4, 7, 8, 10]\nb = [1, 12, 33, 4, 7, 9, 10]\nassert intersection(a, b) == [12, 1, 4, 7, 10]", "Stworz generator losowych hasel z zakresu ASCII o zadanej dlugosci" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mediagit2016/workcamp-maschinelles-lernen-grundlagen
01-grundlagen/pandas.ipynb
gpl-3.0
[ "Einführung in Pandas\nmit Aktiendaten und Beispielen zur Korrelation\nAuthor list: Alexander Fred-Ojala & Ikhlaq Sidh & Ramon Rank\nReferences / Sources: \nIncludes examples from Wes McKinney and the 10min intro to Pandas\nLicense Agreement: Feel free to do whatever you want with this code\n\nWhat Does Pandas Do?\n<img src=\"https://github.com/mediagit2016/workcamp-maschinelles-lernen-grundlagen/raw/master/01-grundlagen/pandas-10.JPG\">\nWhat is a Pandas Table Object?\n<img src=\"https://github.com/mediagit2016/workcamp-maschinelles-lernen-grundlagen/raw/master/01-grundlagen/pandas-20.JPG\">\nImport Bibliotheken", "# Import der Bibliotheken\n\nimport pandas as pd\n\n# Extra packages\nimport numpy as np\nimport matplotlib.pyplot as plt # für die grafische Darstellung\nimport seaborn as sns # für grafische Darstellung. Muss vorher evtl. installiert werden\n\n# jupyter notebook magic Befehl\n%matplotlib inline\n\nplt.rcParams['figure.figsize'] = (10,6) # größere Darstellung\n\n# im /data Ordner sollten apple.csv boeing.csv googl.csv microsoft.csv nike.csv liegen\n!ls ./data", "Teil 1\nEinfache Erzeugung und Veränderung von Panda Objekten\nKey Points: Pandas has two / three main data types:\n* Series (similar to numpy arrays, but with index)\n* DataFrames (table or spreadsheet with Series in the columns)\n* Panels (3D version of DataFrame, not as common)\nEs ist einfach einen DataFrame zu erzeugen\nWir verwenden pd.DataFrame(**inputs**) und können jeden Datentyp als Argument verwenden\nFunction: pd.DataFrame(data=None, index=None, columns=None, dtype=None, copy=False)\nInput data can be a numpy ndarray (structured or homogeneous), dict, or DataFrame. \nDict can contain Series, arrays, constants, or list-like objects as the values.", "# Wir probieren es mit einem array aus\nnp.random.seed(0) # setze seed für Nachvollziebarkeit\n\na1 = np.array(np.random.randn(3))\na2 = np.array(np.random.randn(3))\na3 = np.array(np.random.randn(3))\n\nprint (a1)\nprint (a2)\nprint (a3)\n\n# Wir erzeugen einen ersten Dataframe mit einem np.array - der Dataframe hat nur eine Spalte\ndf0 = pd.DataFrame(a1)\nprint(type(df0))\ndf0\n\n# DataFrame aus einer Liste der np.arrays\n\ndf0 = pd.DataFrame([a1, a2, a3])\ndf0\n\n# beachte, dass keine Spalten Labels vorhanden sind, nur ganzzahlige Werte,\n# und der Index wird automatisch gesetzt\n\n# DataFrame von einem 2D np.array\n# 9 Normalverteilte Zufallszahlen np.random.randn()\n# mit reshape als 3x3 Matrix\nax = np.random.randn(9).reshape(3,3)\nax\n\n# DataFrame von einem 2D np.array\n# 24 Normalverteilte Zufallszahlen np.random.randn()\n# mit reshape als 3x8 Matrix\nat = np.random.randn(24).reshape(3,8)\nat\n\n# Setzen der Spalten Labels in ax\ndf0 = pd.DataFrame(ax,columns=['rand_normal_1','Random Again','Third'],\n index=[100,200,99]) # wir können Spalten Bezeichnungen und einen Index zuweisen, dies muss aber in der Größe übereinstimmen\n#Ausgabe von df0\ndf0\n\n# Setzen der Spalten Labels in ax\ndf_0 = pd.DataFrame(at,columns=['rand_1','rand_2','rand_3', 'rand_4', 'rand_5','rand_6', 'rand_7', 'rand_8'],\n index=[0,1,2]) # wir können Spalten Bezeichnungen und einen Index zuweisen, dies muss aber in der Größe übereinstimmen\n# Ausgabe von df_0\ndf_0\n\n# DataFrame aus einem Dictionary\n# arrays a1 und a2\ndict1 = {'A':a1, 'B':a2}\ndf1 = pd.DataFrame(dict1) \n#Ausgabe df1\ndf1\n\n\n# Man kann leicht eine weitere Spalte hinzufügen (so wie man Werte in ein dictionary hinzufügt)\ndf1['C']=a3\n# Ausgabe df1\ndf1\n\n# Wir können eine zusätzliche Spalte mit Text und Zahlen 
numbers \ndf1['L'] = [\"Something\", 3.2, \"Words\"]\ndf1", "The Pandas Series object\nLike an np.array, but data types can be mixed and a Series has its own index\nNote: every column in a DataFrame is a Series", "print(df1[['L','A']])\n\nprint(type(df1['L']))\n\ndf1\n\n# Rename columns\ndf1 = df1.rename(columns = {'L':'Umbenannt'})\ndf1\n\n# Delete columns\ndel df1['C']\ndf1\n\n# or drop columns\ndf1.drop('A',axis=1,inplace=True) # does not change df1 if we don't set inplace=True\n\ndf1\n\ndf1\n\n# or drop rows\ndf1.drop(0)\n\n# Example: display a single column\ndf1['B']\n\n# Display several columns\ndf1[['B','Umbenannt']]", "Slicing\n<a href=\"https://pandas.pydata.org/pandas-docs/stable/getting_started/10min.html\">[10 Minutes]</a>\nIn the 10 min Pandas Guide, you will see many ways to view, slice a dataframe\n\n\nview/slice by rows, eg df[1:3], etc.\n\n\nview by index location, see df.iloc (iloc)\n\n\nview by ranges of labels, ie index label 2 to 5, or dates feb 3 to feb 25, see df.loc (loc)\n\n\nview a single row by the index df.xs (xs) or df.ix (ix)\n\n\nfiltering rows that have certain conditions\n\nadd column\n\nadd row\n\n\nHow to change the index\n\n\nand more...", "print (df1[0:2]) # ok\n\ndf1\n\ndf1.iloc[1,1]\n\ndf1", "Part 2\nFinance example: large DataFrames\nWe load data in CSV format.\nSee https://www.quantshare.com/sa-43-10-ways-to-download-historical-stock-quotes-data-for-free", "!ls data/\n\n# We can download data from the web. For this we use pd.read_csv\n# A CSV file is a comma-separated file\n\nbase_url = 'https://google.com/finance?output=csv&q='\n\ndfg = pd.read_csv('data/googl.csv').drop('Unnamed: 0',axis=1) # Google stock\ndfa = pd.read_csv('data/apple.csv').drop('Unnamed: 0',axis=1) # Apple stock\n\ndfg\n\ndfg.head() # show the first five rows\n\ndfg.tail(3) # show the last 3 rows\n\ndfg.columns # returns columns, can be used to loop over\n\ndfg.index # returns the index", "Converting the index to a pandas datetime object", "dfg['Date'][0]\n\ntype(dfg['Date'][0])\n\n# Date is stored as a string, so it should be converted to datetime\ndfg.index = pd.to_datetime(dfg['Date']) # set the new index\n\ndfg.drop(['Date'],axis=1,inplace=True)\n\ndfg.head()\n\nprint(type(dfg.index[0]))\ndfg.index[0]\n\ndfg.index\n\ndfg['2017-08':'2017-06']", "Attributes & general statistics of a Pandas DataFrame", "dfg.shape # 251 business days last year\n\ndfg.columns\n\ndfg.size\n\n# General statistics with describe()\n\ndfg.describe()\n\n# Boolean indexing\ndfg['Open'][dfg['Open']>1130] # check on which dates the opening price was above 1130\n\n# Check where Open, High, Low and Close were greater than 1000\ndfg[dfg>1000].drop('Volume',axis=1)\n\n# If you want the values in an np array\ndfg.values", ".loc()", "# Getting a cross section with .loc - BY VALUES of the index and columns\n# df.loc[a:b, x:y], by rows and column location\n\n# Note: You have to know indices and columns\n\ndfg.loc['2017-08-31':'2017-08-21','Open':'Low']", ".iloc()", "# .iloc slicing at specific location - BY POSITION in the table\n# Recall:\n# dfg[a:b] by rows\n# dfg[[col]] or df[[col1, col2]] by columns\n# df.loc[a:b, x:y], by index and column values + location\n# df.iloc[3:5,0:2], numeric position in table\n\ndfg.iloc[1:4,3:5] # 2nd to 4th row, 4th to 5th column", "More Basic Statistics", "# We can change the index sorting\ndfg.sort_index(axis=0, ascending=True).head() # starts a year ago\n\n# sort by 
value\ndfg.sort_values(by='Open')[0:10]", "Boolean", "dfg[dfg>1115].head(10)\n\n# we can also drop all NaN values\ndfg[dfg>1115].head(10).dropna()\n\ndfg2 = dfg # this creates a reference (alias), not a copy - use .copy() for an independent copy\ndfg2 is dfg", "Setting Values", "# Recall\ndfg.head(4)\n\n# All the ways to view\n# can also be used to set values\n# good for data normalization\n\ndfg['Volume'] = dfg['Volume']/100000.0\ndfg.head(4)", "More Statistics and Operations", "# mean by column, also try var() for variance\ndfg.mean() \n\n# Variance for each column\ndfg.var()\n\ndfg[0:5].mean(axis = 1) # row means for the first 5 rows", "Plot Correlation\nLoad several stocks", "# Reload\ndfg = pd.read_csv('data/googl.csv').drop('Unnamed: 0',axis=1) # Google stock data\ndfa = pd.read_csv('data/apple.csv').drop('Unnamed: 0',axis=1) # Apple stock data\ndfm = pd.read_csv('data/microsoft.csv').drop('Unnamed: 0',axis=1) # Microsoft stock data\ndfn = pd.read_csv('data/nike.csv').drop('Unnamed: 0',axis=1) # Nike stock data\ndfb = pd.read_csv('data/boeing.csv').drop('Unnamed: 0',axis=1) # Boeing stock data\n\ndfb.head()\n\n# Rename columns\ndfg = dfg.rename(columns = {'Close':'GOOG'})\n#print (dfg.head())\n\ndfa = dfa.rename(columns = {'Close':'AAPL'})\n#print (dfa.head())\n\ndfm = dfm.rename(columns = {'Close':'MSFT'})\n#print (dfm.head())\n\ndfn = dfn.rename(columns = {'Close':'NKE'})\n#print (dfn.head())\n\ndfb = dfb.rename(columns = {'Close':'BA'})\n\ndfb.head(2)\n\n# Let's merge some tables\n# They will all merge on the common column Date\n\ndf = dfg[['Date','GOOG']].merge(dfa[['Date','AAPL']])\ndf = df.merge(dfm[['Date','MSFT']])\ndf = df.merge(dfn[['Date','NKE']])\ndf = df.merge(dfb[['Date','BA']])\n\ndf.head()\n\ndf['Date'] = pd.to_datetime(df['Date'])\ndf = df.set_index('Date')\ndf.head()\n\ndf.plot()\n\ndf['2017'][['NKE','BA']].plot()\n\n# show a correlation matrix (pearson)\ncrl = df.corr()\ncrl\n\ncrl.sort_values(by='GOOG',ascending=False)\n\ns = crl.unstack()\nso = s.sort_values(ascending=False)\nso[so<1]\n\ndf.mean()\n\nsim=df-df.mean()\nsim.tail()\n\nsim[['MSFT','BA']].plot()", "https://github.com/guipsamora/pandas_exercises/blob/master/09_Time_Series/Getting_Financial_Data/Exercises.ipynb\nhttps://pandas.pydata.org/pandas-docs/stable/getting_started/10min.html\nhttps://www.youtube.com/channel/UC0rqucBdTuFTjJiefW5t-IQ" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jc091/deep-learning
first-neural-network/Your_first_neural_network.ipynb
mit
[ "Your first neural network\nIn this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.", "%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt", "Load and prepare the data\nA critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!", "data_path = 'Bike-Sharing-Dataset/hour.csv'\n\nrides = pd.read_csv(data_path)\n\nrides.head()", "Checking out the data\nThis dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.\nBelow is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.", "rides[:24*10].plot(x='dteday', y='cnt')", "Dummy variables\nHere we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().", "dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']\nfor dummy_field in dummy_fields:\n dummies = pd.get_dummies(rides[dummy_field], prefix=dummy_field, drop_first=False)\n rides = pd.concat([rides, dummies], axis=1)\n\nfields_to_drop = ['instant', 'dteday', 'season', 'weathersit', \n 'weekday', 'atemp', 'mnth', 'workingday', 'hr']\ndata = rides.drop(fields_to_drop, axis=1)\ndata.head()\n\ndata.size", "Scaling target variables\nTo make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.\nThe scaling factors are saved so we can go backwards when we use the network for predictions.", "quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']\n# Store scalings in a dictionary so we can convert back later\nscaled_features = {}\nfor quant_feature in quant_features:\n mean, std = data[quant_feature].mean(), data[quant_feature].std()\n scaled_features[quant_feature] = [mean, std]\n data.loc[:, quant_feature] = (data[quant_feature] - mean)/std", "Splitting the data into training, testing, and validation sets\nWe'll save the data for the last approximately 21 days to use as a test set after we've trained the network. 
We'll use this set to make predictions and compare them with the actual number of riders.", "# Save data for approximately the last 21 days \ntest_data = data[-21*24:]\n\n# Now remove the test data from the data set \ndata = data[:-21*24]\n\n# Separate the data into features and targets\ntarget_fields = ['cnt', 'casual', 'registered']\nfeatures, targets = data.drop(target_fields, axis=1), data[target_fields]\ntest_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]", "We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).", "# Hold out the last 60 days or so of the remaining data as a validation set\ntrain_features, train_targets = features[:-60*24], targets[:-60*24]\nval_features, val_targets = features[-60*24:], targets[-60*24:]", "Time to build the network\nBelow you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.\n<img src=\"assets/neural_network.png\" width=300px>\nThe network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.\nWe use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.\n\nHint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.\n\nBelow, you have these tasks:\n1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.\n2. Implement the forward pass in the train method.\n3. Implement the backpropagation algorithm in the train method, including calculating the output error.\n4. 
Implement the forward pass in the run method.", "class NeuralNetwork(object):\n def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):\n # Set number of nodes in input, hidden and output layers.\n self.input_nodes = input_nodes\n self.hidden_nodes = hidden_nodes\n self.output_nodes = output_nodes\n\n # Initialize weights\n self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5, \n (self.input_nodes, self.hidden_nodes))\n\n self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5, \n (self.hidden_nodes, self.output_nodes))\n self.lr = learning_rate\n \n #### TODO: Set self.activation_function to your implemented sigmoid function ####\n #\n # Note: in Python, you can define a function with a lambda expression,\n # as shown below.\n self.activation_function = lambda x : 1 / (1 + np.exp(-x)) \n \n def train(self, features, targets):\n ''' Train the network on batch of features and targets. \n \n Arguments\n ---------\n \n features: 2D array, each row is one data record, each column is a feature\n targets: 1D array of target values\n \n '''\n n_records = features.shape[0]\n delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)\n delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)\n for X, y in zip(features, targets):\n #### Implement the forward pass here ####\n ### Forward pass ###\n # TODO: Hidden layer - Replace these values with your calculations.\n hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer\n hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer\n \n # TODO: Output layer - Replace these values with your calculations.\n final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer\n final_outputs = final_inputs # signals from final output layer\n\n #### Implement the backward pass here ####\n ### Backward pass ###\n\n # TODO: Output error - Replace this value with your calculations.\n error = y - final_outputs # Output layer error is the difference between desired target and actual output.\n\n # TODO: Calculate the hidden layer's contribution to the error\n hidden_error = np.dot(self.weights_hidden_to_output, error)\n \n # TODO: Backpropagated error terms - Replace these values with your calculations.\n output_error_term = error * 1\n hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)\n \n # Weight step (hidden to output)\n delta_weights_h_o += output_error_term * hidden_outputs[:, None]\n # Weight step (input to hidden)\n delta_weights_i_h += hidden_error_term * X[:, None]\n\n # TODO: Update the weights - Replace these values with your calculations.\n self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step\n self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step\n \n def run(self, features):\n ''' Run a forward pass through the network with input features \n \n Arguments\n ---------\n features: 1D array of feature values\n '''\n \n #### Implement the forward pass here ####\n # TODO: Hidden layer - replace these values with the appropriate calculations.\n hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer\n hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer\n \n # TODO: Output layer - Replace these values with the appropriate calculations.\n 
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer\n final_outputs = final_inputs # signals from final output layer \n \n return final_outputs\n\ndef MSE(y, Y):\n return np.mean((y-Y)**2)", "Unit tests\nRun these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.", "import unittest\n\ninputs = np.array([[0.5, -0.2, 0.1]])\ntargets = np.array([[0.4]])\ntest_w_i_h = np.array([[0.1, -0.2],\n [0.4, 0.5],\n [-0.3, 0.2]])\ntest_w_h_o = np.array([[0.3],\n [-0.1]])\n\nclass TestMethods(unittest.TestCase):\n \n ##########\n # Unit tests for data loading\n ##########\n \n def test_data_path(self):\n # Test that file path to dataset has been unaltered\n self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')\n \n def test_data_loaded(self):\n # Test that data frame loaded\n self.assertTrue(isinstance(rides, pd.DataFrame))\n \n ##########\n # Unit tests for network functionality\n ##########\n\n def test_activation(self):\n network = NeuralNetwork(3, 2, 1, 0.5)\n # Test that the activation function is a sigmoid\n self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))\n\n def test_train(self):\n # Test that weights are updated correctly on training\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n \n network.train(inputs, targets)\n self.assertTrue(np.allclose(network.weights_hidden_to_output, \n np.array([[ 0.37275328], \n [-0.03172939]])))\n self.assertTrue(np.allclose(network.weights_input_to_hidden,\n np.array([[ 0.10562014, -0.20185996], \n [0.39775194, 0.50074398], \n [-0.29887597, 0.19962801]])))\n\n def test_run(self):\n # Test correctness of run method\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n\n self.assertTrue(np.allclose(network.run(inputs), 0.09998924))\n\nsuite = unittest.TestLoader().loadTestsFromModule(TestMethods())\nunittest.TextTestRunner().run(suite)", "Training the network\nHere you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.\nYou'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.\nChoose the number of iterations\nThis is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, the model will not generalize well to other data; this is called overfitting. 
You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.\nChoose the learning rate\nThis scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.\nChoose the number of hidden nodes\nThe more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.", "import sys\n\n### Set the hyperparameters here ###\niterations = 5000\nlearning_rate = 0.4\nhidden_nodes = 30\noutput_nodes = 1\n\nN_i = train_features.shape[1]\nnetwork = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)\n\nlosses = {'train':[], 'validation':[]}\nfor ii in range(iterations):\n # Go through a random batch of 128 records from the training data set\n batch = np.random.choice(train_features.index, size=128)\n X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']\n \n network.train(X, y)\n \n # Printing out the training progress\n train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)\n val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)\n sys.stdout.write(\"\\rProgress: {:2.1f}\".format(100 * ii/float(iterations)) \\\n + \"% ... Training loss: \" + str(train_loss)[:5] \\\n + \" ... Validation loss: \" + str(val_loss)[:5])\n sys.stdout.flush()\n \n losses['train'].append(train_loss)\n losses['validation'].append(val_loss)\n\nplt.plot(losses['train'], label='Training loss')\nplt.plot(losses['validation'], label='Validation loss')\nplt.legend()\n_ = plt.ylim()", "Check out your predictions\nHere, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.", "fig, ax = plt.subplots(figsize=(8,4))\n\nmean, std = scaled_features['cnt']\npredictions = network.run(test_features).T*std + mean\nax.plot(predictions[0], label='Prediction')\nax.plot((test_targets['cnt']*std + mean).values, label='Data')\nax.set_xlim(right=len(predictions))\nax.legend()\n\ndates = pd.to_datetime(rides.ix[test_data.index]['dteday'])\ndates = dates.apply(lambda d: d.strftime('%b %d'))\nax.set_xticks(np.arange(len(dates))[12::24])\n_ = ax.set_xticklabels(dates[12::24], rotation=45)", "OPTIONAL: Thinking about your results(this question will not be evaluated in the rubric).\nAnswer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?\n\nNote: You can edit the text in this cell by double clicking on it. 
When you want to render the text, press control + enter\n\nYour answer below\nThe model did a great job of predicting the data before Dec 22.\nIt started to fail after Dec 21.\nI guess this is due to the holidays.\nWe split off approximately the last 21 days of the data set for testing instead of splitting the data randomly into training and testing sets." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
dennys-bd/Coursera-Machine-Learning-Specialization
Course 2 - ML, Regression/numpy-tutorial.ipynb
mit
[ "Numpy Tutorial\nNumpy is a computational library for Python that is optimized for operations on multi-dimensional arrays. In this notebook we will use numpy to work with 1-d arrays (often called vectors) and 2-d arrays (often called matrices).\nFor a the full user guide and reference for numpy see: http://docs.scipy.org/doc/numpy/", "import numpy as np # importing this way allows us to refer to numpy as np", "Creating Numpy Arrays\nNew arrays can be made in several ways. We can take an existing list and convert it to a numpy array:", "mylist = [1., 2., 3., 4.]\nmynparray = np.array(mylist)\nmynparray", "You can initialize an array (of any dimension) of all ones or all zeroes with the ones() and zeros() functions:", "one_vector = np.ones(4)\nprint one_vector # using print removes the array() portion\n\none2Darray = np.ones((2, 4)) # an 2D array with 2 \"rows\" and 4 \"columns\"\nprint one2Darray\n\nzero_vector = np.zeros(4)\nprint zero_vector", "You can also initialize an empty array which will be filled with values. This is the fastest way to initialize a fixed-size numpy array however you must ensure that you replace all of the values.", "empty_vector = np.empty(5)\nprint empty_vector", "Accessing array elements\nAccessing an array is straight forward. For vectors you access the index by referring to it inside square brackets. Recall that indices in Python start with 0.", "mynparray[2]", "2D arrays are accessed similarly by referring to the row and column index separated by a comma:", "my_matrix = np.array([[1, 2, 3], [4, 5, 6]])\nprint my_matrix\n\nprint my_matrix[1, 2]", "Sequences of indices can be accessed using ':' for example", "print my_matrix[0:2, 2] # recall 0:2 = [0, 1]\n\nprint my_matrix[0, 0:3]", "You can also pass a list of indices.", "fib_indices = np.array([1, 1, 2, 3])\nrandom_vector = np.random.random(10) # 10 random numbers between 0 and 1\nprint random_vector\n\nprint random_vector[fib_indices]", "You can also use true/false values to select values", "my_vector = np.array([1, 2, 3, 4])\nselect_index = np.array([True, False, True, False])\nprint my_vector[select_index]", "For 2D arrays you can select specific columns and specific rows. Passing ':' selects all rows/columns", "select_cols = np.array([True, False, True]) # 1st and 3rd column\nselect_rows = np.array([False, True]) # 2nd row\n\nprint my_matrix[select_rows, :] # just 2nd row but all columns\n\nprint my_matrix[:, select_cols] # all rows and just the 1st and 3rd column", "Operations on Arrays\nYou can use the operations '*', '**', '\\', '+' and '-' on numpy arrays and they operate elementwise.", "my_array = np.array([1., 2., 3., 4.])\nprint my_array*my_array\n\nprint my_array**2\n\nprint my_array - np.ones(4)\n\nprint my_array + np.ones(4)\n\nprint my_array / 3\n\nprint my_array / np.array([2., 3., 4., 5.]) # = [1.0/2.0, 2.0/3.0, 3.0/4.0, 4.0/5.0]", "You can compute the sum with np.sum() and the average with np.average()", "print np.sum(my_array)\n\nprint np.average(my_array)\n\nprint np.sum(my_array)/len(my_array)", "The dot product\nAn important mathematical operation in linear algebra is the dot product. \nWhen we compute the dot product between two vectors we are simply multiplying them elementwise and adding them up. 
In numpy you can do this with np.dot()", "array1 = np.array([1., 2., 3., 4.])\narray2 = np.array([2., 3., 4., 5.])\nprint np.dot(array1, array2)\n\nprint np.sum(array1*array2)", "Recall that the Euclidean length (or magnitude) of a vector is the square root of the sum of the squares of the components. This is just the square root of the dot product of the vector with itself:", "array1_mag = np.sqrt(np.dot(array1, array1))\nprint array1_mag\n\nprint np.sqrt(np.sum(array1*array1))", "We can also use the dot product when we have a 2D array (or matrix). When you have a vector with the same number of elements as the matrix (2D array) has columns you can right-multiply the matrix by the vector to get another vector with the same number of elements as the matrix has rows. For example, this is how you compute the predicted values given a matrix of features and an array of weights.", "my_features = np.array([[1., 2.], [3., 4.], [5., 6.], [7., 8.]])\nprint my_features\n\nmy_weights = np.array([0.4, 0.5])\nprint my_weights\n\nmy_predictions = np.dot(my_features, my_weights) # note that the weights are on the right\nprint my_predictions # which has 4 elements since my_features has 4 rows", "Similarly, if you have a vector with the same number of elements as the matrix has rows you can left-multiply them.", "my_matrix = my_features\nmy_array = np.array([0.3, 0.4, 0.5, 0.6])\n\nprint np.dot(my_array, my_matrix) # which has 2 elements because my_matrix has 2 columns", "Multiplying Matrices\nIf we have two 2D arrays (matrices) matrix_1 and matrix_2 where the number of columns of matrix_1 is the same as the number of rows of matrix_2 then we can use np.dot() to perform matrix multiplication.", "matrix_1 = np.array([[1., 2., 3.],[4., 5., 6.]])\nprint matrix_1\n\nmatrix_2 = np.array([[1., 2.], [3., 4.], [5., 6.]])\nprint matrix_2\n\nprint np.dot(matrix_1, matrix_2)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/tf-estimator-tutorials
07_Image_Analysis/02.0 - CNN with CIFAR-10 tfrecords dataset.ipynb
apache-2.0
[ "This notebook describes how to implement distributed tensorflow code.\nContent of this notebook is shown below.\n\nPrepare CIFAR-10 Dataset (TFRecords Format)\nDefine parameters\nDefine data input pipeline\nDefine features\nDefine a model\nDefine serving function\nTrain, evaluate and export a model\nEvaluate with Estimator\nPrediction with Exported Model\nDistributed Training with Cloud ML Engine\n\n1. Prepare CIFAR-10 Dataset (TFRecords Format)", "import cPickle\nimport os\nimport re\nimport shutil\nimport tarfile\nimport tensorflow as tf\n\nprint(tf.__version__)\n\nCIFAR_FILENAME = 'cifar-10-python.tar.gz'\nCIFAR_DOWNLOAD_URL = 'http://www.cs.toronto.edu/~kriz/' + CIFAR_FILENAME\nCIFAR_LOCAL_FOLDER = 'cifar-10-batches-py'\n\ndef _download_and_extract(data_dir):\n tf.contrib.learn.datasets.base.maybe_download(CIFAR_FILENAME, data_dir, CIFAR_DOWNLOAD_URL)\n tarfile.open(os.path.join(data_dir, CIFAR_FILENAME), 'r:gz').extractall(data_dir)\n\ndef _get_file_names():\n \"\"\"Returns the file names expected to exist in the input_dir.\"\"\"\n file_names = {}\n file_names['train'] = ['data_batch_%d' % i for i in xrange(1, 5)]\n file_names['validation'] = ['data_batch_5']\n file_names['eval'] = ['test_batch']\n return file_names\n\ndef _read_pickle_from_file(filename):\n with tf.gfile.Open(filename, 'r') as f:\n data_dict = cPickle.load(f)\n return data_dict\n\ndef _convert_to_tfrecord(input_files, output_file):\n \"\"\"Converts a file to TFRecords.\"\"\"\n print('Generating %s' % output_file)\n with tf.python_io.TFRecordWriter(output_file) as record_writer:\n for input_file in input_files:\n data_dict = _read_pickle_from_file(input_file)\n data = data_dict['data']\n labels = data_dict['labels']\n num_entries_in_batch = len(labels)\n for i in range(num_entries_in_batch):\n example = tf.train.Example(features=tf.train.Features(\n feature={\n 'image': _bytes_feature(data[i].tobytes()),\n 'label': _int64_feature(labels[i])\n }))\n record_writer.write(example.SerializeToString())\n\ndef _int64_feature(value):\n return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))\n\ndef _bytes_feature(value):\n return tf.train.Feature(bytes_list=tf.train.BytesList(value=[str(value)]))\n\ndef create_tfrecords_files(data_dir='cifar-10'):\n _download_and_extract(data_dir)\n file_names = _get_file_names()\n input_dir = os.path.join(data_dir, CIFAR_LOCAL_FOLDER)\n\n for mode, files in file_names.items():\n input_files = [os.path.join(input_dir, f) for f in files]\n output_file = os.path.join(data_dir, mode+'.tfrecords')\n try:\n os.remove(output_file)\n except OSError:\n pass\n # Convert to tf.train.Example and write to TFRecords.\n _convert_to_tfrecord(input_files, output_file)\n\ncreate_tfrecords_files()", "2. Define parameters", "class FLAGS():\n pass\n\nFLAGS.batch_size = 200\nFLAGS.max_steps = 1000\nFLAGS.eval_steps = 100\nFLAGS.save_checkpoints_steps = 100\nFLAGS.tf_random_seed = 19851211\nFLAGS.model_name = 'cnn-model-02'\nFLAGS.use_checkpoint = False\n\nIMAGE_HEIGHT = 32\nIMAGE_WIDTH = 32\nIMAGE_DEPTH = 3\nNUM_CLASSES = 10", "3. 
Define data input pipeline", "def parse_record(serialized_example):\n features = tf.parse_single_example(\n serialized_example,\n features={\n 'image': tf.FixedLenFeature([], tf.string),\n 'label': tf.FixedLenFeature([], tf.int64),\n })\n \n image = tf.decode_raw(features['image'], tf.uint8)\n image.set_shape([IMAGE_DEPTH * IMAGE_HEIGHT * IMAGE_WIDTH])\n image = tf.reshape(image, [IMAGE_DEPTH, IMAGE_HEIGHT, IMAGE_WIDTH])\n image = tf.cast(tf.transpose(image, [1, 2, 0]), tf.float32)\n \n label = tf.cast(features['label'], tf.int32)\n label = tf.one_hot(label, NUM_CLASSES)\n\n return image, label\n\ndef preprocess_image(image, is_training=False):\n \"\"\"Preprocess a single image of layout [height, width, depth].\"\"\"\n if is_training:\n # Resize the image to add four extra pixels on each side.\n image = tf.image.resize_image_with_crop_or_pad(\n image, IMAGE_HEIGHT + 8, IMAGE_WIDTH + 8)\n\n # Randomly crop a [_HEIGHT, _WIDTH] section of the image.\n image = tf.random_crop(image, [IMAGE_HEIGHT, IMAGE_WIDTH, IMAGE_DEPTH])\n\n # Randomly flip the image horizontally.\n image = tf.image.random_flip_left_right(image)\n\n # Subtract off the mean and divide by the variance of the pixels.\n image = tf.image.per_image_standardization(image)\n return image\n\ndef generate_input_fn(file_names, mode=tf.estimator.ModeKeys.EVAL, batch_size=1):\n def _input_fn():\n dataset = tf.data.TFRecordDataset(filenames=file_names)\n\n is_training = (mode == tf.estimator.ModeKeys.TRAIN)\n if is_training:\n buffer_size = batch_size * 2 + 1\n dataset = dataset.shuffle(buffer_size=buffer_size)\n\n # Transformation\n dataset = dataset.map(parse_record)\n dataset = dataset.map(\n lambda image, label: (preprocess_image(image, is_training), label))\n\n dataset = dataset.repeat()\n dataset = dataset.batch(batch_size)\n dataset = dataset.prefetch(2 * batch_size)\n\n images, labels = dataset.make_one_shot_iterator().get_next()\n\n features = {'images': images}\n return features, labels\n \n return _input_fn", "4. Define features", "def get_feature_columns():\n feature_columns = {\n 'images': tf.feature_column.numeric_column('images', (IMAGE_HEIGHT, IMAGE_WIDTH, IMAGE_DEPTH)),\n }\n return feature_columns\n\nfeature_columns = get_feature_columns()\nprint(\"Feature Columns: {}\".format(feature_columns))", "5. 
Define a model", "def inference(images):\n # 1st Convolutional Layer \n conv1 = tf.layers.conv2d(\n inputs=images, filters=64, kernel_size=[5, 5], padding='same',\n activation=tf.nn.relu, name='conv1')\n pool1 = tf.layers.max_pooling2d(\n inputs=conv1, pool_size=[3, 3], strides=2, name='pool1')\n norm1 = tf.nn.lrn(\n pool1, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75, name='norm1')\n\n # 2nd Convolutional Layer \n conv2 = tf.layers.conv2d(\n inputs=norm1, filters=64, kernel_size=[5, 5], padding='same',\n activation=tf.nn.relu, name='conv2')\n norm2 = tf.nn.lrn(\n conv2, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75, name='norm2')\n pool2 = tf.layers.max_pooling2d(\n inputs=norm2, pool_size=[3, 3], strides=2, name='pool2')\n\n # Flatten Layer \n shape = pool2.get_shape()\n pool2_ = tf.reshape(pool2, [-1, shape[1]*shape[2]*shape[3]])\n\n # 1st Fully Connected Layer \n dense1 = tf.layers.dense(\n inputs=pool2_, units=384, activation=tf.nn.relu, name='dense1')\n\n # 2nd Fully Connected Layer \n dense2 = tf.layers.dense(\n inputs=dense1, units=192, activation=tf.nn.relu, name='dense2')\n\n # 3rd Fully Connected Layer (Logits) \n logits = tf.layers.dense(\n inputs=dense2, units=NUM_CLASSES, activation=tf.nn.relu, name='logits')\n\n return logits\n\ndef model_fn(features, labels, mode, params):\n # Create the input layers from the features \n feature_columns = list(get_feature_columns().values())\n\n images = tf.feature_column.input_layer(\n features=features, feature_columns=feature_columns)\n\n images = tf.reshape(\n images, shape=(-1, IMAGE_HEIGHT, IMAGE_WIDTH, IMAGE_DEPTH))\n\n # Calculate logits through CNN \n logits = inference(images)\n\n if mode in (tf.estimator.ModeKeys.PREDICT, tf.estimator.ModeKeys.EVAL):\n predicted_indices = tf.argmax(input=logits, axis=1)\n probabilities = tf.nn.softmax(logits, name='softmax_tensor')\n\n if mode in (tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL):\n global_step = tf.train.get_or_create_global_step()\n label_indices = tf.argmax(input=labels, axis=1)\n loss = tf.losses.softmax_cross_entropy(\n onehot_labels=labels, logits=logits)\n tf.summary.scalar('cross_entropy', loss)\n\n if mode == tf.estimator.ModeKeys.PREDICT:\n predictions = {\n 'classes': predicted_indices,\n 'probabilities': probabilities\n }\n export_outputs = {\n 'predictions': tf.estimator.export.PredictOutput(predictions)\n }\n return tf.estimator.EstimatorSpec(\n mode, predictions=predictions, export_outputs=export_outputs)\n\n if mode == tf.estimator.ModeKeys.TRAIN:\n optimizer = tf.train.AdamOptimizer(learning_rate=0.001)\n train_op = optimizer.minimize(loss, global_step=global_step)\n return tf.estimator.EstimatorSpec(\n mode, loss=loss, train_op=train_op)\n\n if mode == tf.estimator.ModeKeys.EVAL:\n eval_metric_ops = {\n 'accuracy': tf.metrics.accuracy(label_indices, predicted_indices)\n }\n return tf.estimator.EstimatorSpec(\n mode, loss=loss, eval_metric_ops=eval_metric_ops)", "6. Define a serving function", "def serving_input_fn():\n receiver_tensor = {'images': tf.placeholder(\n shape=[None, IMAGE_HEIGHT, IMAGE_WIDTH, IMAGE_DEPTH], dtype=tf.float32)}\n features = {'images': tf.map_fn(preprocess_image, receiver_tensor['images'])}\n return tf.estimator.export.ServingInputReceiver(features, receiver_tensor)", "7. 
Train, evaluate and export a model", "model_dir = 'trained_models/{}'.format(FLAGS.model_name)\ntrain_data_files = ['cifar-10/train.tfrecords']\nvalid_data_files = ['cifar-10/validation.tfrecords']\ntest_data_files = ['cifar-10/eval.tfrecords']\n\nrun_config = tf.estimator.RunConfig(\n save_checkpoints_steps=FLAGS.save_checkpoints_steps,\n tf_random_seed=FLAGS.tf_random_seed,\n model_dir=model_dir\n)\n\nestimator = tf.estimator.Estimator(model_fn=model_fn, config=run_config)\n\n# There is another Exporter named FinalExporter\nexporter = tf.estimator.LatestExporter(\n name='Servo',\n serving_input_receiver_fn=serving_input_fn,\n assets_extra=None,\n as_text=False,\n exports_to_keep=5)\n\ntrain_spec = tf.estimator.TrainSpec(\n input_fn=generate_input_fn(file_names=train_data_files,\n mode=tf.estimator.ModeKeys.TRAIN,\n batch_size=FLAGS.batch_size),\n max_steps=FLAGS.max_steps)\n\neval_spec = tf.estimator.EvalSpec(\n input_fn=generate_input_fn(file_names=valid_data_files,\n mode=tf.estimator.ModeKeys.EVAL,\n batch_size=FLAGS.batch_size),\n steps=FLAGS.eval_steps, exporters=exporter)\n\nif not FLAGS.use_checkpoint:\n print(\"Removing previous artifacts...\")\n shutil.rmtree(model_dir, ignore_errors=True)\n\ntf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)", "8. Evaluation with Estimator", "test_input_fn = generate_input_fn(file_names=test_data_files,\n mode=tf.estimator.ModeKeys.EVAL,\n batch_size=1000)\nestimator = tf.estimator.Estimator(model_fn=model_fn, config=run_config)\nprint(estimator.evaluate(input_fn=test_input_fn, steps=1))", "9. Prediction with Exported Model", "export_dir = model_dir + '/export/Servo/'\nsaved_model_dir = os.path.join(export_dir, os.listdir(export_dir)[-1]) \n\npredictor_fn = tf.contrib.predictor.from_saved_model(\n export_dir = saved_model_dir,\n signature_def_key='predictions')\n\nimport numpy\n\ndata_dict = _read_pickle_from_file('cifar-10/cifar-10-batches-py/test_batch')\n\nN = 1000\nimages = data_dict['data'][:N].reshape([N, 3, 32, 32]).transpose([0, 2, 3, 1])\nlabels = data_dict['labels'][:N]\n\noutput = predictor_fn({'images': images})\naccuracy = numpy.sum(\n [ans==ret for ans, ret in zip(labels, output['classes'])]) / float(N)\n\nprint(accuracy)", "10. Distributed Training with Cloud ML Engine\na. Set environments", "import os\n\nPROJECT = 'YOUR-PROJECT-ID' # REPLACE WITH YOUR PROJECT ID\nBUCKET = 'YOUR-BUCKET-NAME' # REPLACE WITH YOUR BUCKET NAME\nREGION = 'BUCKET-REGION' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1\n\nos.environ['PROJECT'] = PROJECT\nos.environ['BUCKET'] = BUCKET\nos.environ['REGION'] = REGION\n\n%%bash\ngcloud config set project $PROJECT\ngcloud config set compute/region $REGION", "b. Set permission to BUCKET (NOTE: Create bucket beforehand)", "%%bash\n\nPROJECT_ID=$PROJECT\nAUTH_TOKEN=$(gcloud auth print-access-token)\n\nSVC_ACCOUNT=$(curl -X GET -H \"Content-Type: application/json\" \\\n -H \"Authorization: Bearer $AUTH_TOKEN\" \\\n https://ml.googleapis.com/v1/projects/${PROJECT_ID}:getConfig \\\n | python -c \"import json; import sys; response = json.load(sys.stdin); \\\n print response['serviceAccount']\")\n\necho \"Authorizing the Cloud ML Service account $SVC_ACCOUNT to access files in $BUCKET\"\ngsutil -m defacl ch -u $SVC_ACCOUNT:R gs://$BUCKET\ngsutil -m acl ch -u $SVC_ACCOUNT:R -r gs://$BUCKET # error message (if bucket is empty) can be ignored\ngsutil -m acl ch -u $SVC_ACCOUNT:W gs://$BUCKET", "c. 
Copy TFRecords files to GCS BUCKET", "%%bash\n\necho ${BUCKET}\ngsutil -m rm -rf gs://${BUCKET}/cifar-10\ngsutil -m cp cifar-10/*.tfrecords gs://${BUCKET}/cifar-10", "d. Run distributed training with Cloud MLE", "%%bash\nOUTDIR=gs://$BUCKET/trained_models_3cpu\nJOBNAME=sm_$(date -u +%y%m%d_%H%M%S)\necho $OUTDIR $REGION $JOBNAME\n\ngsutil -m rm -rf $OUTDIR\ngcloud ml-engine jobs submit training $JOBNAME \\\n --region=$REGION \\\n --module-name=cnn-model-02.task \\\n --package-path=\"$(pwd)/trainer/cnn-model-02\" \\\n --job-dir=$OUTDIR \\\n --staging-bucket=gs://$BUCKET \\\n --config=config_3cpu.yaml \\\n --runtime-version=1.4 \\\n -- \\\n --bucket_name=$BUCKET \\\n --train_data_pattern=cifar-10/train*.tfrecords \\\n --eval_data_pattern=cifar-10/eval*.tfrecords \\\n --output_dir=$OUTDIR \\\n --max_steps=10000\n\n%%bash\nOUTDIR=gs://$BUCKET/trained_models_3gpu\nJOBNAME=sm_$(date -u +%y%m%d_%H%M%S)\necho $OUTDIR $REGION $JOBNAME\n\ngsutil -m rm -rf $OUTDIR\ngcloud ml-engine jobs submit training $JOBNAME \\\n --region=$REGION \\\n --module-name=cnn-model-02.task \\\n --package-path=\"$(pwd)/trainer/cnn-model-02\" \\\n --job-dir=$OUTDIR \\\n --staging-bucket=gs://$BUCKET \\\n --config=config_3gpu.yaml \\\n --runtime-version=1.4 \\\n -- \\\n --bucket_name=$BUCKET \\\n --train_data_pattern=cifar-10/train*.tfrecords \\\n --eval_data_pattern=cifar-10/eval*.tfrecords \\\n --output_dir=$OUTDIR \\\n --max_steps=10000" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
LSSTC-DSFP/LSSTC-DSFP-Sessions
Sessions/Session05/Day1/python_imred.ipynb
mit
[ "Image Reduction in Python\nErik Tollerud (STScI)\nIn this notebook we will walk through several of the basic steps required to do data reduction using Python and Astropy. This notebook is focused on \"practical\" (you decide if that is a code word for \"lazy\") application of the existing ecosystem of Python packages. That is, it is not a thorough guide to the nitty-gritty of how all these stages are implemented. For that, see other lectures in this session.\nInstallation/Requirements\nThis notebook requires the following Python packages that do not come by default with anaconda:\n\nCCDProc (>= v1.3)\nPhotutils (>= v0.4)\n\nBoth if these should be available on the astropy (or conda-forge) channels. So you can get them with conda by doing conda -c astropy ccdproc photutils.\nIf for some reason this doesn't work, pip install &lt;packagename&gt; should also work for either package.\nRegardless of how you install them, you may need to restart your notebook kernel to recognize the packages are present. Run the cells below to ensure you have the right versions.", "import ccdproc\nccdproc.__version__\n\nimport photutils\nphotutils.__version__", "We also do a few standard imports here python packages we know we will need.", "from glob import glob\n\nimport numpy as np\n\nfrom astropy import units as u\n\n\n%matplotlib inline\nfrom matplotlib import pyplot as plt", "Getting the data\nWe need to first get the image data we are looking at for this notebook. This is actual data from an observing run by the author using the Palomar 200\" Hale Telescope, using the Large Format Camera (LFC) instrument. \nThe cell below will do just this from the web (or you can try downloading from here: https://northwestern.box.com/s/4mact3c5xu9wcveofek55if8j1mxptqd), and un-tar it locally to create a directory with the image files. This is an ~200MB download, so might take a bit. If the wifi has gotten bad, try asking a neighbor or your instructor if they have it on a key drive or similar.", "!wget http://www.stsci.edu/\\~etollerud/python_imred_data.tar\n!tar xf python_imred_data.tar", "The above creates a directory called \"python_imred_data\" - lets examine it:", "ls -lh python_imred_data", "Exercise\nLook at the observing_log.csv file - it's an excerpt from the log. Now look at the file sizes above. What patterns do you see? Can you tell why? Discuss with your neighbor. (Hint: the \".gz\" at the end is significant here.)\nYou might find it useful to take a quick look at some of the images with a fits viewer like ds9 to do this. Feel free to come back to this after looking over some of the files if you don't have an external prgram.\nLoading the data into Python\nTo have the \"lowest-level\" view of a fits file, you can use the astropy.io.fits package. It is a direct view into a fits file, which means you have a lot of control of how you look at the file, but because FITS files can store more than just an individual image dataset, it requires some understanding of FITS files. Here we take a quick look at one of the \"science\" images using this interface.\nQuick look with astropy.io.fits", "from astropy.io import fits\n\ndata_g = fits.open('python_imred_data/ccd.037.0.fits.gz')\ndata_g", "This shows that this file is a relatively simple image - a single \"HDU\" (Header + Data Unit). Lets take a look at what that HDU contains:", "data_g[0].header\n\ndata_g[0].data", "The header contains all the metadata about the image, while the data stores \"counts\" from the CCD (in the form of 16-bit integers). 
Seems sensible enough. Now let's try plotting up those counts.", "plt.imshow(data_g[0].data)", "Hmm, not very useful, as it turns out. \nIn fact, astronomical data tend to have dynamic ranges that require a bit more care when visualizing. To assist in this, the astropy.visualization package has some helper utilities for re-scaling an image to look more interpretable (to learn more about these see the astropy.visualization docs).", "from astropy import visualization as aviz\n\nimage = data_g[0].data\nnorm = aviz.ImageNormalize(image, \n                           interval=aviz.PercentileInterval(95), \n                           stretch=aviz.LogStretch())\n\nfig, ax = plt.subplots(1,1, figsize=(6,10))\naim = ax.imshow(image, norm=norm, origin='lower')\nplt.colorbar(aim)", "Well that looks better. It's now clear this is an astronomical image. However, it is not a very good looking one. It is full of various kinds of artifacts that need removing before any science can be done. We will address how to correct these in the rest of this notebook.\nExercise\nTry playing with the parameters in the ImageNormalize class. See if you can get the image to look clearer. The image stretching part of the docs is your friend here!\nTo simplify this plotting task in the future, we will make a helper function that takes in an image and scales it to some settings that seem to look promising. Feel free to adjust this function to your preferences based on your results in the exercise, though!", "def show_image(image, percl=99, percu=None, figsize=(6, 10)):\n    \"\"\"\n    Show an image in matplotlib with some basic astronomically-appropriate stretching.\n\n    Parameters\n    ----------\n    image\n        The image to show\n    percl : number\n        The percentile for the lower edge of the stretch (or both edges if ``percu`` is None)\n    percu : number or None\n        The percentile for the upper edge of the stretch (or None to use ``percl`` for both)\n    figsize : 2-tuple\n        The size of the matplotlib figure in inches\n    \"\"\"\n    if percu is None:\n        percu = percl\n        percl = 100-percl\n\n    norm = aviz.ImageNormalize(image, interval=aviz.AsymmetricPercentileInterval(percl, percu), \n                               stretch=aviz.LogStretch())\n\n    fig, ax = plt.subplots(1,1, figsize=figsize)\n    plt.colorbar(ax.imshow(image, norm=norm, origin='lower'))", "CCDData / astropy.nddata\nFor the rest of this notebook, instead of using astropy.io.fits, we will use a somewhat higher-level view that is intended specifically for CCD (or CCD-like) images. Appropriately enough, it is named CCDData. It can be found in astropy.nddata, a sub-package storing data structures for multidimensional astronomical data sets. Let's open the same file we were just looking at with CCDData.", "from astropy.nddata import CCDData\n\nccddata_g = CCDData.read('python_imred_data/ccd.037.0.fits.gz', unit=u.count)\nccddata_g", "Looks to be the same file... But note a few differences: it's immediately an image, with no need to do [0]. Also note that you had to specify a unit. Some fits files come with their units specified (allowing fits files full of e.g. calibrated images to know what their units are), but this one is raw data, so we had to specify the unit.\nNote also that the CCDData object knows all the header information, and has a copy of the data just as in the astropy.io.fits interface:", "ccddata_g.meta  # ccddata_g.header is the exact same thing\n\nccddata_g.data", "CCDData has several other features like the ability to store (and propagate) uncertainties, or to mark certain pixels as defective, but we will not use that here, as we are focused on getting something out quickly.
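For the curious, wiring up those extra CCDData features takes only a couple of lines; a hedged sketch (the square-root noise model and the 60000-count saturation threshold are illustrative assumptions, not LFC specifications):

```python
from astropy.nddata import StdDevUncertainty

ccd = ccddata_g.copy()  # work on a copy so the tutorial flow is unchanged
# A rough Poisson-ish per-pixel uncertainty from the raw counts
# (it ignores gain and read noise; illustration only)...
ccd.uncertainty = StdDevUncertainty(np.sqrt(np.abs(ccd.data.astype(float))))
# ...and a mask flagging pixels above a made-up saturation level.
ccd.mask = ccd.data > 60000
```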
So let's just try visualizing it using our function from above:", "show_image(ccddata_g, 95)", "Looks just the same, so it looks like it accomplished the same aim but with less dependence on the FITS format. So we will continue on with this data structure.\nOverscan and Bias\nOur first set of corrections centers around removing the \"bias\" of the image - usually due to the voltage offset necessary to make a CCD work - along with associated pixel-by-pixel offsets due to the electronics.\nUnderstanding Bias and Overscan areas\nExamine the observing_log.csv file. It shows that the first several images are called \"bias\". That seems like a good place to start! Let's try examining one - both the image itself and its header.", "im1 = CCDData.read('python_imred_data/ccd.001.0.fits.gz', unit=u.count)\nshow_image(im1)\nim1.header", "Several things to note here. First, it says it has an exposure time of zero, as expected for a bias. There is also a big glowing strip on the left that is a few hundred counts. This is probably an example of \"amplifier glow\", where the readout electronics emit light (or electronic noise) that is picked up by the CCD as it reads out. Fortunately, this pattern, along with the other patterns evident in this image, is mostly static, so it can be removed by simply subtracting the bias.\nAnother thing that might not be obvious to the untrained eye is that the image dimensions are slightly larger than a power-of-two. This is not the size of the actual detector. Another hint is in the header in the section called BIASSEC: this reveals the section of the chip that is used for \"overscan\" - the readout process is repeated several times past the end of the chip to characterize the average bias for that row. Let's take a closer look at the overscan area. Note that we do this in one of the science exposures because it's not visible in the bias (which is in some sense entirely overscan):", "show_image(ccddata_g)\nplt.xlim(2000,2080)\nplt.ylim(0,150);", "Now it is clear that the overscan region is quite different from the rest of the image. \nSubtracting overscan\nFortunately, subtracting the overscan region is fairly trivial with the ccdproc package. It has a function to do just this task. Take a look at the docs for the function by executing the cell below.", "ccdproc.subtract_overscan?", "Of particular note is the fits_section keyword. This seems like it might be useful because the header already includes a FITS-style \"BIASSEC\" keyword. Let's try using that:", "ccdproc.subtract_overscan(im1, fits_section=im1.header['BIASSEC'])", "D'Oh! That didn't work. Why not? The error message, while a bit cryptic, provides a critical clue: the overscan region found is one pixel shorter than the actual image. This is because the \"BIASSEC\" keyword in this image doesn't follow quite the convention expected for the second (vertical) dimension. So we will just manually fill in a fits_section with the correct overscan region.", "im1.header['BIASSEC']\n\nsubed = ccdproc.subtract_overscan(im1, fits_section='[2049:2080,:]', overscan_axis=1)\nshow_image(subed)\nsubed.shape", "While the image at first glance looks the same, looking at the color bar now reveals that it is centered around 0 rather than ~1100. So we have successfully subtracted the row-by-row bias using the overscan region.\nHowever, the image still includes the overscan region. To remove that, we need to trim the image.
The FITS header keyword DATASEC is typically used for this purpose, and we can use that here to trim off the overscan section:", "trimmed = ccdproc.trim_image(subed, fits_section=subed.meta['DATASEC'])\nshow_image(trimmed)\ntrimmed.shape", "Looking closely, though, we see another oddity: even with the trimming done above, it is still not the dimensions the LFC specs say the CCD should be (2048 × 4096 pixels). The vertical direction still has excess pixels. Let's look at these in the science image:", "show_image(ccddata_g)\nplt.xlim(1000,1050)\nplt.ylim(4000,4130);", "It appears as though there is a second overscan region along the columns. We could conceivably choose to use that overscan region as well, but we should be suspicious of this given that it is not part of the BIASSEC mentioned in the header, and is included in DATASEC. So it seems like it might be safer to just trim this region off and trust in the bias to remove the column-wise variations.\nWe know we need to apply this correction to every image, so to make the operations above easily repeatable, we move them into a single function to perform the overscan correction, which we will call later in the notebook:", "def overscan_correct(image):\n    \"\"\"\n    Subtract the row-wise overscan and trim the non-data regions.\n\n    Parameters\n    ----------\n    image : CCDData object\n        The image to apply the corrections to\n\n    Returns\n    -------\n    CCDData object\n        the overscan-corrected image\n    \"\"\"\n    subed = ccdproc.subtract_overscan(image, fits_section='[2049:2080,:]', overscan_axis=1)\n    trimmed = ccdproc.trim_image(subed, fits_section='[1:2048,1:4096]')\n    return trimmed", "Exercise\nBoth bias images and overscan regions contain read noise (since they are read with the same electronics as everything else). See if you can determine the read noise (in counts) of this chip using the images we've examined thus far. Compare to the LFC spec page and see if you get the same answer as they do (see the table in section 3 - this image is from CCD #0).\nCombining and applying a \"master bias\"\nThe overscan is good at removing the bias along its scan direction, but will do nothing about pixel-specific bias variations or readout phenomena that are constant per-readout but spatially varying (like the \"glow\" on the left edge of the image that we saw above). To correct that we will use a set of bias frames and subtract them from the images. Before we can do this, however, we need to combine the biases together into a \"master bias\". Otherwise we may simply be adding more noise than we remove, as a single bias exposure has the same noise as a single science exposure.\nLook at the observing_log.csv file. It shows you which files are biases. The code below conveniently grabs them all into a single python list:", "biasfns = glob('python_imred_data/ccd.00?.0.fits.gz')\nbiasfns", "Now we both load these images into memory and apply the overscan correction in a single step.", "biases = [overscan_correct(CCDData.read(fn, unit=u.count)) for fn in biasfns]\n\n# The above cell uses Python's \"list comprehensions\", which are faster and more \n# compact than a regular for-loop. But if you have not seen these before, \n# it's useful to know they are exactly equivalent to this:\n\n# biases = []\n# for fn in biasfns:\n#     im = overscan_correct(CCDData.read(fn, unit=u.count))\n#     biases.append(im)", "Now that we have all the bias images loaded and overscan subtracted, we can combine them together into a single combined (or \"master\") bias.
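To see why combining helps before we do it, here is a quick simulated sanity check of the noise scaling (pure numpy, separate from the real data): averaging N frames with per-pixel noise sigma should yield a frame with noise near sigma/sqrt(N).

```python
np.random.seed(42)
sigma, nframes = 10.0, 9
fake_biases = np.random.normal(1100.0, sigma, size=(nframes, 100, 100))
print(fake_biases[0].std())            # ~10: one simulated bias frame
print(fake_biases.mean(axis=0).std())  # ~10/3: average of 9 frames
```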
We use the ccdproc.Combiner class to do median combining (although it supports several other combining algorithms):", "bias_combiner = ccdproc.Combiner(biases)\nmaster_bias = bias_combiner.median_combine()\n\nshow_image(master_bias)", "Not too much exciting there to the eye... but it should have better noise properties than any of the biases by themselves. Now let's do the final step in the bias process and try subtracting the bias from the science image. Let's look at the docstring for ccdproc.subtract_bias", "ccdproc.subtract_bias?", "Looks pretty straightforward to use! Just remember that we have to overscan-correct the science image before we apply the bias:", "ccddata_g_corr = overscan_correct(ccddata_g)\nccd_data_g_unbiased = ccdproc.subtract_bias(ccddata_g_corr, master_bias)\nshow_image(ccd_data_g_unbiased, 10, 99.8)", "Hark! The bias level is gone, as is the glowing strip on the left. Two artifacts down. It is now also quite evident that there is some kind of interesting astronomical object to the center-left...\nExercise\nOpinions vary on the best way to combine biases. Try comparing the statistics of a bias made using the above procedure and one using a simple average (which you should be able to do with bias_combiner). Which one is better? Discuss with your neighbor when you might like one over the other.\nFlats\nThe final image correction we will apply is a flat. Dividing by a flat removes pixel-by-pixel sensitivity variations, as well as some imperfections in instrument optics, dust on filters, etc. Different flats are taken with each filter, so let's look in the observing_log.csv to see which images are flats for the g-band filter:", "flat_g_fns = glob('python_imred_data/ccd.01[4-6].0.fits.gz')\nflat_g_fns", "Now we load these files like we did the biases, applying both the overscan and bias corrections:", "# These steps could all be one single list comprehension. However,\n# breaking them into several lines makes it much clearer which steps\n# are being applied. In practice the performance difference is\n# ~microseconds, far less than the actual execution time for even a\n# single image\n\nflats_g = [CCDData.read(fn, unit=u.count) for fn in flat_g_fns]\nflats_g = [overscan_correct(flat) for flat in flats_g]\nflats_g = [ccdproc.subtract_bias(flat, master_bias) for flat in flats_g]\nshow_image(flats_g[0], 90)", "Inspecting this flat shows that it has far more counts than the science image, which is good because that means we have a higher S/N than the science. It clearly shows the cross-shaped imprint in the middle that is also present in the science image, as well as vignetting near the bottom.\nNow we combine the flats like we did with the biases to get a single flat to apply to science images.", "flat_g_combiner = ccdproc.Combiner(flats_g)\n# feel free to choose a different combine algorithm if you developed a preference in the last exercise\ncombined_flat_g = flat_g_combiner.median_combine()\n\nshow_image(combined_flat_g, 90)", "Now we can do the final correction to our science image: dividing by the flat. ccdproc provides a function to do that, too:", "ccdproc.flat_correct?", "Note that this includes, by default, automatically normalizing the flat so that the resulting image is close to the true count rate, so there's no need to manually re-normalize the flat (unless you want to apply it many times and don't want to re-compute the normalization).
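If you ever want that normalization explicitly (for example, to reuse one normalized flat many times), the core arithmetic is just a couple of array operations. A hedged sketch that skips the unit and uncertainty bookkeeping ccdproc does for you, and that assumes the default normalization is the flat's mean:

```python
# Manual flat-fielding: normalize the flat to ~1, then divide it out.
norm_flat = combined_flat_g.data / np.mean(combined_flat_g.data)
manual_flattened = ccd_data_g_unbiased.data / norm_flat
```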
Let's see what happens when we apply the flat to our science image:", "ccd_data_g_flattened = ccdproc.flat_correct(ccd_data_g_unbiased, combined_flat_g)\nshow_image(ccd_data_g_flattened, 10, 99.5)", "Now we're in business! This now looks like an astronomical image of a galaxy and various objects around it, aside from some residual amp glow (discussed below in the \"Exercise\").\nExercise\nWe still haven't quite gotten rid of the glow. It seems there is a slight residual glow in the strip on the left, and a quite prominent glow to the lower-left.\nThe lower-left glow is apparently time-dependent - that is, it is absent in the biases (0 sec exposure), weak in the flats (70 sec exposure), but much stronger in the science images (300 sec). This means it is probably due to amplifier electronics that emit a continuous glow even when not reading out. That's an annoying thing about semiconductors: they are great light absorbers but also great emitters! Oh well, at least it means our electricity bills are getting cheaper...\nIn any event, this means the only way to correct for this glow is to use a \"dark\" in place of a bias. A dark is an exposure of the same time as the target exposure, but with the camera shutter closed. This exposure should then capture the full amplifier glow for a given exposure time. You may have noticed the data files included a darks directory. If you look there you'll see several images, including dark exposures of times appropriate for our images. They have overlapping exposure numbers and no log, because they were taken on a different night than the science data, but around the same time. So you will have to do some sleuthing to figure out which ones to use.\nOnce you've figured this out, try applying the darks to the images in the same way as the biases and see if you can get rid of the remaining glow. If you get this working, you can use those images instead of the ones derived above for the \"Photometry\" section.\nAdvanced Exercise\nDue to time constraints, the above discussion has said little about uncertainties. But there is enough information in the images above to compute running per-pixel uncertainties. See if you can do this, and attach them to the final file as the ccd_data_g_flattened.uncertainty attribute (see the nddata and CCDData docs for the details of how to store the type of uncertainty).\nPhotometry\nAfter the above reductions, opinions begin to diverge wildly on the best way to reduce data. Many, many papers have been written on the right way to do photometry for various conditions or classes of objects. It is an area both of active research and active code development. It is also the subject of many of the lectures this week.\nHence, this final section is not meant to be in any way complete, but rather meant to demonstrate a few ways you might do certain basic photometric measurements in Python. For this purpose, we will rely heavily on photutils, the main Astropy package for doing general photometry.\nBackground Estimation\nBefore any photometric measurements can be made of any object, the background flux must be subtracted. In some images the background is variable enough that fairly complex models are required. In other cases, this is done locally as part of the photometering, although that can be problematic in crowded fields. But for many purposes estimating a single background for the whole image is sufficient, and it is that case we will consider here.\nPhotutils has several background-estimation algorithms available.
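As a quick cross-check on whichever estimator we pick, a crude background level can also be had from a sigma-clipped median of all the pixels; a minimal sketch:

```python
from astropy.stats import sigma_clipped_stats

# Mean/median/std of the pixel distribution after 3-sigma clipping;
# for a sparse field the clipped median is a serviceable background level.
mean, median, std = sigma_clipped_stats(ccd_data_g_flattened.data, sigma=3.0)
print(median, std)
```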
Here we will use an algorithm meant to estimate the mode of a distribution relatively quickly in the presence of outliers (i.e., the background in a typical astronomical image that's not too crowded with sources):", "from astropy.stats import SigmaClip\n\nbkg_estimator = photutils.ModeEstimatorBackground(sigma_clip=SigmaClip(sigma=3.))\n# Note: for some versions of numpy you may need to do ``ccd_data_g_flattened.data`` in the line below\nbkg_val = bkg_estimator.calc_background(ccd_data_g_flattened)\nbkg_val", "Now we can subtract that from the reduced science image. Note that the units have to match:", "ccd_data_g_bkgsub = ccd_data_g_flattened.subtract(bkg_val*u.count)\nshow_image(ccd_data_g_bkgsub)", "Doesn't look too different from the last one by eye... which is expected because it's just a shift in the 0-level. But now seems like a good time to zoom in on that galaxy in the center-left:\nFinding Notable Objects", "show_image(ccd_data_g_bkgsub, 12, 99.9, figsize=(12, 10))\nplt.xlim(0, 1000)\nplt.ylim(2200, 3300)", "Aha! It's a Local Group galaxy, because otherwise you wouldn't be able to see individual stars like that. Let's see if we can identify it. Let's see where the telescope was pointed:", "ccddata_g.header['RA'], ccddata_g.header['DEC']", "Knowing that this location in the image is pretty close to the center, we can try using NED (the NASA/IPAC Extragalactic Database) to identify the object. Go to NED's web site and click the \"near position\" search. Enter in the coordinates, and hit search. You'll see plenty of objects, but only one that's actually a Local Group galaxy.\nFor comparison purposes, let's also look at this field in the Sloan Digital Sky Survey. Go to the SDSS Sky Navigate Tool, and enter in the same coordinates as above. You'll land right on a field that should look like our target galaxy. This means we can use the SDSS as a comparison to see if our photometric measurements make any sense. Keep that window open for reference later on.\nFor now, we note two objects that are more compact than the Local Group galaxy and therefore more amenable to a comparison with SDSS: the two background galaxies that are directly to the right of the Local Group object.\nAperture Photometry\nThe simplest form of photometry is simply drawing apertures (often circular) around an object and counting the flux inside that aperture. Since this process is so straightforward, we will use it as a sanity check for comparing our image to the SDSS.\nFirst, we need to pick an aperture. SDSS provides several aperture photometry measurements, but the easiest to find turns out to be 3\" diameter apertures (available on SDSS as \"FIBERMAG\"). We need to compute how many pixels of our image are in a 3\" aperture. Let's choose to trust the FITS headers, which give a plate scale.", "ccddata_g.header", "Note the \"SECPIX1\"/\"SECPIX2\" keywords, which give the number of arcseconds per pixel. While this is relatively straightforward, we can use astropy.units machinery in a way that makes it foolproof (i.e., keeps you from getting pixel/arcsec or arcsec/pixel confused):", "scale_eq = u.pixel_scale(ccddata_g.header['SECPIX1']*u.arcsec/u.pixel)\nfibermag_ap_diam = (3*u.arcsec).to(u.pixel, scale_eq)\nfibermag_ap_diam", "Now we can use this to define a photutils CircularAperture object.
These objects also require positions, so we'll pick the positions of the two objects we identified above:", "positions = [(736., 2601.5), (743., 2872.)]\napertures = photutils.CircularAperture(positions, r=fibermag_ap_diam.value/2)", "Conveniently, these apertures can even plot themselves:", "show_image(ccd_data_g_bkgsub, 12, 99.9, figsize=(6, 10))\napertures.plot(color='red')\nplt.xlim(600, 800)\nplt.ylim(2530, 2920)", "And indeed we see they are centered on the right objects. \nNow we simply perform photometry on these objects using photutils.aperture_photometry", "photutils.aperture_photometry?", "While it has many options, the defaults are probably sufficient for our needs right now:", "apphot = photutils.aperture_photometry(ccd_data_g_bkgsub, apertures)\napphot", "The results given are fluxes in units of counts, which we can convert to \"instrumental\" magnitudes and put into the table:", "apphot['aperture_mags'] = u.Magnitude(apphot['aperture_sum'])\napphot", "While the value of these magnitudes is instrument-specific, the difference between these magnitudes should match the difference measured by any other instrument, including the SDSS. Find the two objects in the navigate view. When you click on one of them in the window, you can choose \"explore\" on the right, and then the \"PhotoObj\" link on the left sidebar in the explore window that comes up. Record the \"fiberMag_g\" row in that table. Repeat for the other galaxy, and compare with the difference that you compute below.", "apphot['aperture_mags'][1] - apphot['aperture_mags'][0]", "You should find that the SDSS difference and your computed difference agree to within ~0.001 mags if all went well in the previous steps. Guess we're on the right track!\nExercise\nThe astroquery package contains a sub-package astroquery.sdss that can be used to programmatically search the SDSS for objects. See if you can use that to automate the process we followed above to compare SDSS and our image.\nNote: you may need to install astroquery the same way as you did ccdproc or photutils (see the top of this notebook).\nTo go one step further, try using the SDSS to calibrate (at least, roughly) our measurements. This will require identifying matching objects in the field (ideally fairly bright stars) and using them to compute the instrumental-to-$g$-band offset. See if you get magnitudes to match the SDSS on other objects.\nSource Detection using Thresholding\nAs with photometry, source-finding is a complex subject. Here we overview a straightforward algorithm that photutils provides to find heterogeneous objects in an image. Note that this is not optimal if you are only looking for stars. photutils provides several star-finders that are better-suited for that problem (but are not covered further here).\nHave a look at the options to the relevant photutils function:", "photutils.detect_sources?", "The thresholding algorithm here is straightforward and fairly efficient, but it requires a threshold above which \"isolated\" pixels are considered separate objects. photutils provides automated ways to do this, but for now we will do it manually:", "plt.hist(ccd_data_g_bkgsub.data.flat, histtype='step',\n         bins=100, range=(-100, 200))\nplt.xlim(-100, 200)\nplt.tight_layout()", "Simply eye-balling this histogram reveals that the background fluctuations are at the level of ~20 counts. So if we want a 3-sigma threshold, we use 60.
We also, somewhat arbitrarily, require at least 5 pixels for a source to be included:", "ccd_data_g_bkgsub.shape\n\n# as above, for some numpy versions you might need a `.data` to get this to work\nsrcs = photutils.detect_sources(ccd_data_g_bkgsub, 60, 5)", "The resulting object also has some utilities to help plot itself:", "plt.figure(figsize=(8, 16))\nplt.imshow(srcs, cmap=srcs.cmap('#222222'), origin='lower')", "We can clearly see our two objects from before standing out as distinct sources. Let's have a closer look:", "plt.figure(figsize=(6, 10))\nplt.imshow(srcs, cmap=srcs.cmap('#222222'), origin='lower')\napertures.plot(color='red')\nplt.xlim(600, 800)\nplt.ylim(2530, 2920)", "Now let's try to figure out which of the many objects in the source table are ours, and try making measurements of these objects directly from these threshold maps:", "# if you had to do `.data` above, you'll need to add it here too to get all the cells below to work\nsrc_props = photutils.source_properties(ccd_data_g_bkgsub, srcs)\n\nsrcid = np.array(src_props.id)\nx = src_props.xcentroid.value\ny = src_props.ycentroid.value\n\nsrc0_id = srcid[np.argmin(np.hypot(x-positions[0][0] , y-positions[0][1]))]\nsrc1_id = srcid[np.argmin(np.hypot(x-positions[1][0] , y-positions[1][1]))]\nmsk = np.in1d(srcid, [src0_id, src1_id])\n\nx[msk], y[msk]", "Now sanity-check that they are actually the correct objects:", "plt.figure(figsize=(8, 16))\nplt.imshow(srcs, cmap=srcs.cmap('#222222'), origin='lower')\n\nplt.scatter(x[msk], y[msk], c='r', s=150, marker='*')\n\nplt.xlim(600, 800)\nplt.ylim(330+2200, 720+2200)", "Now that we've identified them, we can see all the information the photutils.source_properties computes. The one thing it does not do is instrumental magnitudes, so we add that manually.", "src_tab = src_props.to_table()\nif src_tab['source_sum'].unit == u.count:\n    src_tab['source_mags'] = u.Magnitude(src_tab['source_sum'])\nelse:\n    # this is the case if you had to do the ``.data`` work-around in the ``source_properties`` call\n    src_tab['source_mags'] = u.Magnitude(src_tab['source_sum']*u.count)\n\nsrc_tab[msk]", "We see quite a number of properties, but most notably the magnitudes are quite different. This is to be expected since the effective \"aperture\" is quite different, but it may or may not be desired. What is \"desired\", however, is left as an exercise to the reader!\nExercise\nTry combining the two sections above. Given the centers the source-detection stage finds, try doing aperture photometry (a starting-point sketch is given below).\nIf you like, compare to the SDSS and see if you get similar answers (astroquery will be an even bigger help in this stage). At this point you might be angry that in the last exercise you had to do it manually. Fair enough... but no pain no gain!\nExercise\nExperiment with photutils.deblend_sources. Can you deblend some of the individual stars that are visible in the Local Group Galaxy? Compare to what the SDSS provides.\nWrap-up\nThat's it for this whirlwind overview of basic image reductions in Python and Astropy. A few more exercises are below if you're looking for more, but there's plenty more later in the week!\nExercise\nIf you look at the data directory, you'll see that, in addition to the $g$-band science image we've been looking at, there's an $i$-band image. Try following the above reduction steps to extract that as well. Use this to get aperture photometry-based colors for some objects. (hint: you'll need to do the calibration against SDSS to make this work...)" ]
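For the first exercise above, a hedged starting-point sketch (it simply feeds the source_properties centroids from earlier into the aperture machinery; refine as you see fit):

```python
# Apertures at every detected centroid, then photometry in one call.
det_positions = list(zip(x, y))  # centroids from the source_properties cell
det_aps = photutils.CircularAperture(det_positions, r=fibermag_ap_diam.value / 2)
det_phot = photutils.aperture_photometry(ccd_data_g_bkgsub, det_aps)
det_phot['mags'] = u.Magnitude(det_phot['aperture_sum'])
```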
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ericmjl/bokeh
examples/howto/notebook_comms/Numba Image Example.ipynb
bsd-3-clause
[ "Interactive Image Processing with Numba and Bokeh\nThis demo shows off how interactive image processing can be done in the notebook, using Numba for numerics, Bokeh for plotting, and Ipython interactors for widgets. The demo runs entirely inside the Ipython notebook, with no Bokeh server required.\nNumba must be installed in order to run this demo. To run, click on, Cell-&gt;Run All in the top menu, then scroll down to individual examples and play around with their controls.", "from timeit import default_timer as timer\n\nfrom bokeh.plotting import figure, show, output_notebook\nfrom bokeh.models import GlyphRenderer, LinearColorMapper\nfrom bokeh.io import push_notebook\nfrom numba import jit, njit\n\nfrom ipywidgets import interact\nimport numpy as np\nimport scipy.misc\n\noutput_notebook()", "Gaussian Blur\nThis first section demonstrates performing a simple Gaussian blur on an image. It presents the image, as well as a slider that controls how much blur is applied. Numba is used to compile the python blur kernel, which is invoked when the user modifies the slider. \nNote: This simple example does not handle the edge case, so the edge of the image will remain unblurred as the slider is increased.", "# smaller image\nimg_blur = (scipy.misc.ascent()[::-1,:]/255.0)[:250, :250].copy(order='C')\n\npalette = ['#%02x%02x%02x' %(i,i,i) for i in range(256)]\nwidth, height = img_blur.shape\np_blur = figure(x_range=(0, width), y_range=(0, height))\nr_blur = p_blur.image(image=[img_blur], x=[0], y=[0], dw=[width], dh=[height], palette=palette, name='blur')\n\n@njit\ndef blur(outimg, img, amt):\n iw, ih = img.shape\n for i in range(amt, iw-amt):\n for j in range(amt, ih-amt):\n px = 0.\n for w in range(-amt//2, amt//2):\n for h in range(-amt//2, amt//2):\n px += img[i+w, j+h]\n outimg[i, j]= px/(amt*amt)\n\ndef update(i=0):\n level = 2*i + 1\n \n out = img_blur.copy()\n \n ts = timer()\n blur(out, img_blur, level)\n te = timer()\n print('blur takes:', te - ts)\n \n renderer = p_blur.select(dict(name=\"blur\", type=GlyphRenderer))\n r_blur.data_source.data['image'] = [out]\n push_notebook(handle=t_blur)\n\nt_blur = show(p_blur, notebook_handle=True)\n\ninteract(update, i=(0, 10))", "3x3 Image Kernels\nMany image processing filters can be expressed as 3x3 matrices. This more sophisticated example demonstrates how numba can be used to compile kernels for arbitrary 3x3 kernels, and then provides serveral predefined kernels for the user to experiment with. \nThe UI presents the image to process (along with a dropdown to select a different image) as well as a dropdown that lets the user select which kernel to apply. Additioanlly there are sliders the permit adjustment to the bias and scale of the final greyscale image. \nNote: Right now, adjusting the scale and bias are not as efficient as possible, because the update function always also applies the kernel (even if it has not changed). 
A better implementation might have a class that keeps track of the current kernel and output image so that bias and scale can be applied by themselves (a rough sketch of this idea follows the code below).", "@jit\ndef getitem(img, x, y):\n    w, h = img.shape\n    if x >= w:\n        x = w - 1 - (x - w)\n    if y >= h:\n        y = h - 1 - (y - h)\n    return img[x, y]\n\ndef filter_factory(kernel):\n    ksum = np.sum(kernel)\n    if ksum == 0:\n        ksum = 1\n    k9 = kernel / ksum\n\n    @jit\n    def kernel_apply(img, out, x, y):\n        tmp = 0\n        for i in range(3):\n            for j in range(3):\n                tmp += img[x+i-1, y+j-1] * k9[i, j]\n        out[x, y] = tmp\n\n    @jit\n    def kernel_apply_edge(img, out, x, y):\n        tmp = 0\n        for i in range(3):\n            for j in range(3):\n                tmp += getitem(img, x+i-1, y+j-1) * k9[i, j]\n        out[x, y] = tmp\n\n    @jit\n    def kernel_k9(img, out):\n        # Loop through all internals\n        for x in range(1, img.shape[0] -1):\n            for y in range(1, img.shape[1] -1):\n                kernel_apply(img, out, x, y)\n\n        # Loop through all the edges\n        for x in range(img.shape[0]):\n            kernel_apply_edge(img, out, x, 0)\n            kernel_apply_edge(img, out, x, img.shape[1] - 1)\n\n        for y in range(img.shape[1]):\n            kernel_apply_edge(img, out, 0, y)\n            kernel_apply_edge(img, out, img.shape[0] - 1, y)\n\n    return kernel_k9\n\naverage = np.array([\n    [1, 1, 1],\n    [1, 1, 1],\n    [1, 1, 1],\n], dtype=np.float32)\n\nsharpen = np.array([\n    [-1, -1, -1],\n    [-1, 12, -1],\n    [-1, -1, -1],\n], dtype=np.float32)\n\nedge = np.array([\n    [ 0, -1, 0],\n    [-1, 4, -1],\n    [ 0, -1, 0],\n], dtype=np.float32)\n\nedge_h = np.array([\n    [ 0, 0, 0],\n    [-1, 2, -1],\n    [ 0, 0, 0],\n], dtype=np.float32)\n\nedge_v = np.array([\n    [0, -1, 0],\n    [0, 2, 0],\n    [0, -1, 0],\n], dtype=np.float32)\n\ngradient_h = np.array([\n    [-1, -1, -1],\n    [ 0, 0, 0],\n    [ 1, 1, 1],\n], dtype=np.float32)\n\ngradient_v = np.array([\n    [-1, 0, 1],\n    [-1, 0, 1],\n    [-1, 0, 1],\n], dtype=np.float32)\n\nsobol_h = np.array([\n    [ 1, 2, 1],\n    [ 0, 0, 0],\n    [-1, -2, -1],\n], dtype=np.float32)\n\nsobol_v = np.array([\n    [-1, 0, 1],\n    [-2, 0, 2],\n    [-1, 0, 1],\n], dtype=np.float32)\n\nemboss = np.array([\n    [-2, -1, 0],\n    [-1, 1, 1],\n    [ 0, 1, 2],\n], dtype=np.float32)\n\nkernels = {\n    \"average\" : filter_factory(average),\n    \"sharpen\" : filter_factory(sharpen),\n    \"edge (both)\" : filter_factory(edge),\n    \"edge (horizontal)\" : filter_factory(edge_h),\n    \"edge (vertical)\" : filter_factory(edge_v),\n    \"gradient (horizontal)\" : filter_factory(gradient_h),\n    \"gradient (vertical)\" : filter_factory(gradient_v),\n    \"sobol (horizontal)\" : filter_factory(sobol_h),\n    \"sobol (vertical)\" : filter_factory(sobol_v),\n    \"emboss\" : filter_factory(emboss),\n}\n\nimages = {\n    \"ascent\" : np.copy(scipy.misc.ascent().astype(np.float32)[::-1, :]),\n    \"face\" : np.copy(scipy.misc.face(gray=True).astype(np.float32)[::-1, :]),\n}\n\npalette = ['#%02x%02x%02x' %(i,i,i) for i in range(256)]\ncm = LinearColorMapper(palette=palette, low=0, high=256)\nwidth, height = images['ascent'].shape\np_kernel = figure(x_range=(0, width), y_range=(0, height))\nr_kernel = p_kernel.image(image=[images['ascent']], x=[0], y=[0], dw=[width], dh=[height], color_mapper=cm, name=\"kernel\")\n\ndef update(image=\"ascent\", kernel_name=\"none\", scale=100, bias=0):\n    global _last_kname\n    global _last_out\n\n    img_kernel = images.get(image)\n\n    kernel = kernels.get(kernel_name, None)\n    if kernel is None:\n        out = np.copy(img_kernel)\n\n    else:\n        out = np.zeros_like(img_kernel)\n\n        ts = timer()\n        kernel(img_kernel, out)\n        te = timer()\n        print('kernel takes:', te - ts)\n\n    out *= scale / 100.0\n    out += bias\n\n    r_kernel.data_source.data['image'] = [out]\n    push_notebook(handle=t_kernel)\n\nt_kernel = show(p_kernel, notebook_handle=True)\n\nknames = [\"none\"] + sorted(kernels.keys())\ninteract(update, image=[\"ascent\" ,\"face\"], kernel_name=knames, scale=(10, 100, 10), bias=(0, 255))", "Wavelet Decomposition\nThis last example demonstrates a Haar wavelet decomposition using a Numba-compiled function. Play around with the slider to see different levels of decomposition of the image.", "@njit\ndef wavelet_decomposition(img, tmp):\n    \"\"\"\n    Perform inplace wavelet decomposition on `img` with `tmp` as\n    a temporary buffer.\n\n    This is a very simple wavelet for demonstration\n    \"\"\"\n    w, h = img.shape\n    halfwidth, halfheight = w//2, h//2\n\n    lefthalf, righthalf = tmp[:halfwidth, :], tmp[halfwidth:, :]\n\n    # Along first dimension\n    for x in range(halfwidth):\n        for y in range(h):\n            lefthalf[x, y] = (img[2 * x, y] + img[2 * x + 1, y]) / 2\n            righthalf[x, y] = img[2 * x, y] - img[2 * x + 1, y]\n\n    # Swap buffer\n    img, tmp = tmp, img\n    tophalf, bottomhalf = tmp[:, :halfheight], tmp[:, halfheight:]\n\n    # Along second dimension\n    for y in range(halfheight):\n        for x in range(w):\n            tophalf[x, y] = (img[x, 2 * y] + img[x, 2 * y + 1]) / 2\n            bottomhalf[x, y] = img[x, 2 * y] - img[x, 2 * y + 1]\n\n    return halfwidth, halfheight\n\nimg_wavelet = np.copy(scipy.misc.face(gray=True)[::-1, :])\n\npalette = ['#%02x%02x%02x' %(i,i,i) for i in range(256)]\nwidth, height = img_wavelet.shape\np_wavelet = figure(x_range=(0, width), y_range=(0, height))\nr_wavelet = p_wavelet.image(image=[img_wavelet], x=[0], y=[0], dw=[width], dh=[height], palette=palette, name=\"wavelet\")\n\ndef update(level=0):\n\n    out = np.copy(img_wavelet)\n    tmp = np.zeros_like(img_wavelet)\n\n    ts = timer()\n    hw, hh = img_wavelet.shape\n    while level > 0 and hw > 1 and hh > 1:\n        hw, hh = wavelet_decomposition(out[:hw, :hh], tmp[:hw, :hh])\n        level -= 1\n    te = timer()\n    print('wavelet takes:', te - ts)\n\n    r_wavelet.data_source.data['image'] = [out]\n    push_notebook(handle=t_wavelet)\n\nt_wavelet = show(p_wavelet, notebook_handle=True)\n\ninteract(update, level=(0, 7))" ]
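The "better implementation" the 3x3-kernel note alludes to could look roughly like this hypothetical sketch: cache the filtered image and redo only the cheap scale/bias step when the kernel has not changed.

```python
class CachedKernelFilter:
    """Recompute the convolution only when the image or kernel changes."""

    def __init__(self, images, kernels):
        self.images = images      # dict of name -> 2D array
        self.kernels = kernels    # dict of name -> compiled kernel function
        self._key = None          # (image name, kernel name) of cached result
        self._filtered = None

    def render(self, image_name, kernel_name, scale=100, bias=0):
        key = (image_name, kernel_name)
        if key != self._key:  # cache miss: rerun the expensive kernel
            img = self.images[image_name]
            kernel = self.kernels.get(kernel_name)
            if kernel is None:
                self._filtered = np.copy(img)
            else:
                self._filtered = np.zeros_like(img)
                kernel(img, self._filtered)
            self._key = key
        # cheap per-slider work on the cached result
        return self._filtered * (scale / 100.0) + bias
```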
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/docs-l10n
site/ko/hub/tutorials/text_to_video_retrieval_with_s3d_milnce.ipynb
apache-2.0
[ "# Copyright 2019 The TensorFlow Hub Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================", "Text-to-Video retrieval with S3D MIL-NCE\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td><a target=\"_blank\" href=\"https://www.tensorflow.org/hub/tutorials/text_to_video_retrieval_with_s3d_milnce\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\">TensorFlow.org에서 보기</a></td>\n <td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/hub/tutorials/text_to_video_retrieval_with_s3d_milnce.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\">Google Colab에서 실행하기</a></td>\n <td><a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ko/hub/tutorials/text_to_video_retrieval_with_s3d_milnce.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\">GitHub에서소스 보기</a></td>\n <td><a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/hub/tutorials/text_to_video_retrieval_with_s3d_milnce.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\">노트북 다운로드하기</a></td>\n</table>", "!pip install -q opencv-python\n\nimport os\n\nimport tensorflow.compat.v2 as tf\nimport tensorflow_hub as hub\n\nimport numpy as np\nimport cv2\nfrom IPython import display\nimport math", "TF-Hub 모델 가져오기\n이 튜토리얼에서는 TensorFlow Hub의 S3D MIL-NCE 모델을 사용하여 텍스트-비디오 검색을 수행하여 주어진 텍스트 쿼리에 가장 유사한 비디오를 찾는 방법을 보여줍니다.\n이 모델에는 비디오 임베딩을 생성하기 위한 서명과 텍스트 임베딩을 생성하기 위한 서명 등 두 개의 서명이 있습니다. 
이러한 임베딩을 사용하여 임베딩 공간에서 nearest neighbor(NN)를 찾습니다.", "# Load the model once from TF-Hub.\nhub_handle = 'https://tfhub.dev/deepmind/mil-nce/s3d/1'\nhub_model = hub.load(hub_handle)\n\ndef generate_embeddings(model, input_frames, input_words):\n \"\"\"Generate embeddings from the model from video frames and input words.\"\"\"\n # Input_frames must be normalized in [0, 1] and of the shape Batch x T x H x W x 3\n vision_output = model.signatures['video'](tf.constant(tf.cast(input_frames, dtype=tf.float32)))\n text_output = model.signatures['text'](tf.constant(input_words))\n return vision_output['video_embedding'], text_output['text_embedding']\n\n# @title Define video loading and visualization functions { display-mode: \"form\" }\n\n# Utilities to open video files using CV2\ndef crop_center_square(frame):\n y, x = frame.shape[0:2]\n min_dim = min(y, x)\n start_x = (x // 2) - (min_dim // 2)\n start_y = (y // 2) - (min_dim // 2)\n return frame[start_y:start_y+min_dim,start_x:start_x+min_dim]\n\n\ndef load_video(video_url, max_frames=32, resize=(224, 224)):\n path = tf.keras.utils.get_file(os.path.basename(video_url)[-128:], video_url)\n cap = cv2.VideoCapture(path)\n frames = []\n try:\n while True:\n ret, frame = cap.read()\n if not ret:\n break\n frame = crop_center_square(frame)\n frame = cv2.resize(frame, resize)\n frame = frame[:, :, [2, 1, 0]]\n frames.append(frame)\n\n if len(frames) == max_frames:\n break\n finally:\n cap.release()\n frames = np.array(frames)\n if len(frames) < max_frames:\n n_repeat = int(math.ceil(max_frames / float(len(frames))))\n frames = frames.repeat(n_repeat, axis=0)\n frames = frames[:max_frames]\n return frames / 255.0\n\ndef display_video(urls):\n html = '<table>'\n html += '<tr><th>Video 1</th><th>Video 2</th><th>Video 3</th></tr><tr>'\n for url in urls:\n html += '<td>'\n html += '<img src=\"{}\" height=\"224\">'.format(url)\n html += '</td>'\n html += '</tr></table>'\n return display.HTML(html)\n\ndef display_query_and_results_video(query, urls, scores):\n \"\"\"Display a text query and the top result videos and scores.\"\"\"\n sorted_ix = np.argsort(-scores)\n html = ''\n html += '<h2>Input query: <i>{}</i> </h2><div>'.format(query)\n html += 'Results: <div>'\n html += '<table>'\n html += '<tr><th>Rank #1, Score:{:.2f}</th>'.format(scores[sorted_ix[0]])\n html += '<th>Rank #2, Score:{:.2f}</th>'.format(scores[sorted_ix[1]])\n html += '<th>Rank #3, Score:{:.2f}</th></tr><tr>'.format(scores[sorted_ix[2]])\n for i, idx in enumerate(sorted_ix):\n url = urls[sorted_ix[i]];\n html += '<td>'\n html += '<img src=\"{}\" height=\"224\">'.format(url)\n html += '</td>'\n html += '</tr></table>'\n return html\n\n\n# @title Load example videos and define text queries { display-mode: \"form\" }\n\nvideo_1_url = 'https://upload.wikimedia.org/wikipedia/commons/b/b0/YosriAirTerjun.gif' # @param {type:\"string\"}\nvideo_2_url = 'https://upload.wikimedia.org/wikipedia/commons/e/e6/Guitar_solo_gif.gif' # @param {type:\"string\"}\nvideo_3_url = 'https://upload.wikimedia.org/wikipedia/commons/3/30/2009-08-16-autodrift-by-RalfR-gif-by-wau.gif' # @param {type:\"string\"}\n\nvideo_1 = load_video(video_1_url)\nvideo_2 = load_video(video_2_url)\nvideo_3 = load_video(video_3_url)\nall_videos = [video_1, video_2, video_3]\n\nquery_1_video = 'waterfall' # @param {type:\"string\"}\nquery_2_video = 'playing guitar' # @param {type:\"string\"}\nquery_3_video = 'car drifting' # @param {type:\"string\"}\nall_queries_video = [query_1_video, query_2_video, query_3_video]\nall_videos_urls = 
[video_1_url, video_2_url, video_3_url]\ndisplay_video(all_videos_urls)", "텍스트-비디오 검색 시연하기", "# Prepare video inputs.\nvideos_np = np.stack(all_videos, axis=0)\n\n# Prepare text input.\nwords_np = np.array(all_queries_video)\n\n# Generate the video and text embeddings.\nvideo_embd, text_embd = generate_embeddings(hub_model, videos_np, words_np)\n\n# Scores between video and text is computed by dot products.\nall_scores = np.dot(text_embd, tf.transpose(video_embd))\n\n# Display results.\nhtml = ''\nfor i, words in enumerate(words_np):\n html += display_query_and_results_video(words, all_videos_urls, all_scores[i, :])\n html += '<br>'\ndisplay.HTML(html)" ]
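Since the scores are plain dot products, ranking the cached video embeddings against a brand-new query is just one more text-signature call and a sort; a minimal sketch reusing the objects above (the query string is made up):

```python
# Embed a new query with the text signature alone, then score the
# already-computed video embeddings by dot product.
new_query = tf.constant(['a person surfing'])
new_text_embd = hub_model.signatures['text'](new_query)['text_embedding']
scores = np.dot(new_text_embd, tf.transpose(video_embd))[0]
for i in np.argsort(-scores):
    print(float(scores[i]), all_videos_urls[i])
```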
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ContinuumIO/pydata-apps
Section_0_Introduction.ipynb
mit
[ "<img src=\"images/continuum_analytics_logo.png\" \n alt=\"Continuum Logo\",\n align=\"right\",\n width=\"30%\">\nBuilding Python Data Applications <br> with Blaze and Bokeh\nPyData Dallas 2015\nby Andy Terrel and Christine Doig\n<hr>\n\nTutorial sections\n\nSection 0: Introduction\nSection 1: Blaze\nSection 2: Bokeh\nSection 3: Apps\n\n<hr>\n\nFollow along\nhttp://git.io/pydata-apps\n<br>\n\nOption A: Download repository\n\nhttps://github.com/ContinuumIO/pydata-apps\n\nOption B: View notebooks on nbviewer\n\nhttp://nbviewer.ipython.org/github/ContinuumIO/pydata-apps\n<hr>\n\nBlaze\nhttp://blaze.pydata.org/en/latest/\nBlaze allows Python users a familiar interface to query data living in other data storage systems.", "from IPython.display import IFrame\nIFrame('http://blaze.pydata.org/en/latest/', width='100%', height=350)", "<hr>\n\nBokeh\nhttp://bokeh.pydata.org/en/latest/\nBokeh is a Python interactive visualization library that targets modern web browsers for presentation.", "from IPython.display import IFrame\nIFrame('http://bokeh.pydata.org/en/latest/', width='100%', height=350)", "<hr>\n\nTutorial goals\n\nQuery backends with Blaze expressions \nGenerate Bokeh visualizations\nCreate data applications with Blaze and Bokeh\n\n<hr>\n\nInstallation\n\nDownload and install the Anaconda Python Distribution\nDownload archive of this repository or checkout with git \n\ngit clone https://github.com/ContinuumIO/pydata-apps.git\n\nEach tutorial has a slightly different set of requirements. To download all the requirements try:\n\nconda update conda\n conda env create\n\nActivate the environment\n\nsource activate pydata_apps\nStatic notebooks\nFor those want to just follow a static notebook (not all interactive elements will work), see the following links:\n\nSection 0: Introduction\nSection 1: Blaze Tutorial\nSection 2: Bokeh Tutorial" ]
[ "markdown", "code", "markdown", "code", "markdown" ]
Cyberface/nrutils_dev
review/notebooks/bam_inclination_to_thetajn.ipynb
mit
[ "Load a BAM hdf5 file, and given an inclination value, determine the angle $\\theta_{\\mathrm{JN}}$\n\nThe goal of this script is to take in INCLINATION and then output $\\theta_{\\mathrm{JN}}$", "# Import useful standard things\nfrom os import system, remove, makedirs, path\nfrom os.path import dirname, basename, isdir, realpath\nfrom numpy import array,ones,pi,dot,cos,sin,savetxt\nfrom numpy import arccos as acos\nfrom numpy.linalg import norm\nfrom matplotlib.pyplot import *\nfrom os.path import expanduser\n# Import nrutils (See: https://github.com/llondon6/nrutils_dev)\nfrom nrutils import alert,nr2h5,scsearch,gwylm,alert\nfrom nrutils.tools.unit.conversion import *", "Use nrutils to locate the simulation of interest.", "Y = scsearch(keyword='q1.2_dc2dcp2',verbose=True,unique=True)[0]", "Define function for $\\theta_{\\mathrm{JN}}$ calculation", "def iota2thetajn(A, # scnetry object from nrutils\n IOTA, # The angle between L and the line of sight \n verbose=False): # be verbose toggle\n \n # Import useful things\n from numpy import array,ones,pi,dot,cos,sin,vstack,hstack\n from numpy import arccos as acos\n from numpy.linalg import norm\n \n # Extract L, S, and J (NOTE: these are after-junk quantities)\n L = A.L1 + A.L2\n S = A.S1 + A.S2 \n J = L + S\n\n # Calculate the direction of J and L using the loaded information\n info = A.raw_metadata\n L_bbh = array( [ info.initial_angular_momentumx, info.initial_angular_momentumy, info.initial_angular_momentumz ] )\n S1 = array( [ info.after_junkradiation_spin1x, info.after_junkradiation_spin1y, info.after_junkradiation_spin1z ] )\n S2 = array( [ info.after_junkradiation_spin2x, info.after_junkradiation_spin2y, info.after_junkradiation_spin2z ] )\n S = S1+S2\n J_bbh = L_bbh+S\n\n # The magnitude of For now, apply norm(L_bbh)\n\n if False: # verbose:\n print 'L from the bbh files is %s' % L_bbh\n print 'L after junk from nrutils is %s' % L\n print 40*'--'\n\n # Find unit vectors\n L_hat = L / norm(L)\n J_hat = J / norm(J)\n\n # Extract oribital separation (NOTE: this is also referenced after-junk)\n n_hat = ( A.R1 - A.R2 ) / norm( A.R1 - A.R2 )\n\n #\n theta = IOTA\n\n # per LAL convention\n u_hat = [1,0,0]\n\n #\n phi = acos( dot( n_hat,u_hat ) )\n\n #\n N_hat = array( [ cos(phi)*sin(theta), sin(phi)*sin(theta), cos(theta) ] )\n\n #\n theta_JN = acos( dot(N_hat,J_hat) )\n theta_LN = acos( dot(N_hat,L_hat) )\n theta_JL = acos( dot(J_hat,L_hat) )\n\n #\n if verbose:\n print '(iota,theta_JN) = (%1.4f,%1.4f)' % (IOTA*180/pi,theta_JN*180/pi)\n if False: # verbose:\n print 'phi = %1.4f' % phi\n print 'n_hat = %s' % n_hat\n print 'N_hat = %s' % N_hat\n print 'L_hat = %s' % L_hat\n print 'J_hat = %s' % J_hat\n print 'theta_JN = %1.4f degrees' % (theta_JN*180/pi)\n print 'theta_LN = %1.4f degrees' % (theta_LN*180/pi)\n print 'theta_JL = %1.4f degrees' % (theta_JL*180/pi)\n print 'theta_JZ = %1.4f degrees' % ( acos(dot(J_hat,[0,0,1])) )\n \n #\n return theta_JN\n", "Define inclinations of interest.", "NUM = 9 # Number of points in iota to use \niota = array( [ (k*pi/(NUM-1.0)) for k in range(0,NUM) ] )\nprint iota", "Evaluate above function for the given list of runs (scentry objects).", "theta_JN = array( [ iota2thetajn(Y,k,verbose=True) for k in iota ] )", "Make array of iota and $\\theta_{\\mathrm{JN}}$ values", "output_data = array( [iota,theta_JN] ).T\nprint output_data", "Save the array to a txt file.", "output_file_path = '/Users/book/GARREG/REPOS/cbc_svn_nr_systematics/nr_systematics/scripts/thetaJN_'+Y.setname+'.txt'\nsavetxt( 
output_file_path, output_data, fmt='%1.8f', delimiter='\\t' )" ]
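As a quick self-contained check of the geometry (independent of nrutils): when the total spin vanishes, J is parallel to L, so theta_JN must reduce to the inclination itself. A hedged sketch with a made-up configuration placing L along z:

```python
from numpy import array, cos, sin, dot, pi
from numpy import arccos as acos

L_hat = array([0.0, 0.0, 1.0])  # put L along z (illustrative choice)
J_hat = L_hat                   # zero total spin => J parallel to L
phi = 0.3                       # arbitrary azimuth of the line of sight
for theta in [0.0, pi/4, pi/2]:
    N_hat = array([cos(phi)*sin(theta), sin(phi)*sin(theta), cos(theta)])
    print(theta, acos(dot(N_hat, J_hat)))  # should echo theta back
```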
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.23/_downloads/c4c1adf6983ad491e45e3941a0c10d6e/time_frequency_mixed_norm_inverse.ipynb
bsd-3-clause
[ "%matplotlib inline", "Compute MxNE with time-frequency sparse prior\nThe TF-MxNE solver is a distributed inverse method (like dSPM or sLORETA)\nthat promotes focal (sparse) sources (such as dipole fitting techniques)\n:footcite:GramfortEtAl2013b,GramfortEtAl2011.\nThe benefit of this approach is that:\n\nit is spatio-temporal without assuming stationarity (sources properties\n can vary over time)\nactivations are localized in space, time and frequency in one step.\nwith a built-in filtering process based on a short time Fourier\n transform (STFT), data does not need to be low passed (just high pass\n to make the signals zero mean).\nthe solver solves a convex optimization problem, hence cannot be\n trapped in local minima.", "# Author: Alexandre Gramfort <alexandre.gramfort@inria.fr>\n# Daniel Strohmeier <daniel.strohmeier@tu-ilmenau.de>\n#\n# License: BSD (3-clause)\n\nimport numpy as np\n\nimport mne\nfrom mne.datasets import sample\nfrom mne.minimum_norm import make_inverse_operator, apply_inverse\nfrom mne.inverse_sparse import tf_mixed_norm, make_stc_from_dipoles\nfrom mne.viz import (plot_sparse_source_estimates,\n plot_dipole_locations, plot_dipole_amplitudes)\n\nprint(__doc__)\n\ndata_path = sample.data_path()\nsubjects_dir = data_path + '/subjects'\nfwd_fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'\nave_fname = data_path + '/MEG/sample/sample_audvis-no-filter-ave.fif'\ncov_fname = data_path + '/MEG/sample/sample_audvis-shrunk-cov.fif'\n\n# Read noise covariance matrix\ncov = mne.read_cov(cov_fname)\n\n# Handling average file\ncondition = 'Left visual'\nevoked = mne.read_evokeds(ave_fname, condition=condition, baseline=(None, 0))\nevoked = mne.pick_channels_evoked(evoked)\n# We make the window slightly larger than what you'll eventually be interested\n# in ([-0.05, 0.3]) to avoid edge effects.\nevoked.crop(tmin=-0.1, tmax=0.4)\n\n# Handling forward solution\nforward = mne.read_forward_solution(fwd_fname)", "Run solver", "# alpha parameter is between 0 and 100 (100 gives 0 active source)\nalpha = 40. # general regularization parameter\n# l1_ratio parameter between 0 and 1 promotes temporal smoothness\n# (0 means no temporal regularization)\nl1_ratio = 0.03 # temporal regularization parameter\n\nloose, depth = 0.2, 0.9 # loose orientation & depth weighting\n\n# Compute dSPM solution to be used as weights in MxNE\ninverse_operator = make_inverse_operator(evoked.info, forward, cov,\n loose=loose, depth=depth)\nstc_dspm = apply_inverse(evoked, inverse_operator, lambda2=1. 
/ 9.,\n method='dSPM')\n\n# Compute TF-MxNE inverse solution with dipole output\ndipoles, residual = tf_mixed_norm(\n evoked, forward, cov, alpha=alpha, l1_ratio=l1_ratio, loose=loose,\n depth=depth, maxit=200, tol=1e-6, weights=stc_dspm, weights_min=8.,\n debias=True, wsize=16, tstep=4, window=0.05, return_as_dipoles=True,\n return_residual=True)\n\n# Crop to remove edges\nfor dip in dipoles:\n dip.crop(tmin=-0.05, tmax=0.3)\nevoked.crop(tmin=-0.05, tmax=0.3)\nresidual.crop(tmin=-0.05, tmax=0.3)", "Plot dipole activations", "plot_dipole_amplitudes(dipoles)", "Plot location of the strongest dipole with MRI slices", "idx = np.argmax([np.max(np.abs(dip.amplitude)) for dip in dipoles])\nplot_dipole_locations(dipoles[idx], forward['mri_head_t'], 'sample',\n subjects_dir=subjects_dir, mode='orthoview',\n idx='amplitude')\n\n# # Plot dipole locations of all dipoles with MRI slices:\n# for dip in dipoles:\n# plot_dipole_locations(dip, forward['mri_head_t'], 'sample',\n# subjects_dir=subjects_dir, mode='orthoview',\n# idx='amplitude')", "Show the evoked response and the residual for gradiometers", "ylim = dict(grad=[-120, 120])\nevoked.pick_types(meg='grad', exclude='bads')\nevoked.plot(titles=dict(grad='Evoked Response: Gradiometers'), ylim=ylim,\n proj=True, time_unit='s')\n\nresidual.pick_types(meg='grad', exclude='bads')\nresidual.plot(titles=dict(grad='Residuals: Gradiometers'), ylim=ylim,\n proj=True, time_unit='s')", "Generate stc from dipoles", "stc = make_stc_from_dipoles(dipoles, forward['src'])", "View in 2D and 3D (\"glass\" brain like 3D plot)", "plot_sparse_source_estimates(forward['src'], stc, bgcolor=(1, 1, 1),\n opacity=0.1, fig_name=\"TF-MxNE (cond %s)\"\n % condition, modes=['sphere'], scale_factors=[1.])\n\ntime_label = 'TF-MxNE time=%0.2f ms'\nclim = dict(kind='value', lims=[10e-9, 15e-9, 20e-9])\nbrain = stc.plot('sample', 'inflated', 'rh', views='medial',\n clim=clim, time_label=time_label, smoothing_steps=5,\n subjects_dir=subjects_dir, initial_time=150, time_unit='ms')\nbrain.add_label(\"V1\", color=\"yellow\", scalar_thresh=.5, borders=True)\nbrain.add_label(\"V2\", color=\"red\", scalar_thresh=.5, borders=True)", "References\n.. footbibliography::" ]
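To get a feel for the STFT dictionary behind the solver (the wsize=16, tstep=4 passed to tf_mixed_norm above), here is a hedged sketch of a plain windowed FFT of the strongest dipole's time course. It is an illustration only, not the tight-frame transform MNE uses internally, and it applies no window taper:

```python
import matplotlib.pyplot as plt

amp = dipoles[idx].amplitude           # time course of the strongest dipole
wsize, tstep = 16, 4                   # same sizes passed to tf_mixed_norm
starts = range(0, len(amp) - wsize + 1, tstep)
frames = np.array([amp[s:s + wsize] for s in starts])
stft_mag = np.abs(np.fft.rfft(frames, axis=1))  # frames x frequency bins
plt.imshow(stft_mag.T, origin='lower', aspect='auto')
plt.xlabel('time frame')
plt.ylabel('frequency bin')
```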
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/inpe/cmip6/models/sandbox-2/ocean.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Ocean\nMIP Era: CMIP6\nInstitute: INPE\nSource ID: SANDBOX-2\nTopic: Ocean\nSub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing. \nProperties: 133 (101 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:07\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'inpe', 'sandbox-2', 'ocean')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Seawater Properties\n3. Key Properties --&gt; Bathymetry\n4. Key Properties --&gt; Nonoceanic Waters\n5. Key Properties --&gt; Software Properties\n6. Key Properties --&gt; Resolution\n7. Key Properties --&gt; Tuning Applied\n8. Key Properties --&gt; Conservation\n9. Grid\n10. Grid --&gt; Discretisation --&gt; Vertical\n11. Grid --&gt; Discretisation --&gt; Horizontal\n12. Timestepping Framework\n13. Timestepping Framework --&gt; Tracers\n14. Timestepping Framework --&gt; Baroclinic Dynamics\n15. Timestepping Framework --&gt; Barotropic\n16. Timestepping Framework --&gt; Vertical Physics\n17. Advection\n18. Advection --&gt; Momentum\n19. Advection --&gt; Lateral Tracers\n20. Advection --&gt; Vertical Tracers\n21. Lateral Physics\n22. Lateral Physics --&gt; Momentum --&gt; Operator\n23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff\n24. Lateral Physics --&gt; Tracers\n25. Lateral Physics --&gt; Tracers --&gt; Operator\n26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff\n27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity\n28. Vertical Physics\n29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details\n30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers\n31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum\n32. Vertical Physics --&gt; Interior Mixing --&gt; Details\n33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers\n34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum\n35. Uplow Boundaries --&gt; Free Surface\n36. Uplow Boundaries --&gt; Bottom Boundary Layer\n37. Boundary Forcing\n38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction\n39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction\n40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration\n41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing \n1. Key Properties\nOcean key properties\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of ocean model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. 
Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of ocean model code (NEMO 3.6, MOM 5.0,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Model Family\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of ocean model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OGCM\" \n# \"slab ocean\" \n# \"mixed layer ocean\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nBasic approximations made in the ocean.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Primitive equations\" \n# \"Non-hydrostatic\" \n# \"Boussinesq\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.5. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of prognostic variables in the ocean component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Potential temperature\" \n# \"Conservative temperature\" \n# \"Salinity\" \n# \"U-velocity\" \n# \"V-velocity\" \n# \"W-velocity\" \n# \"SSH\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Seawater Properties\nPhysical properties of seawater in ocean\n2.1. Eos Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of EOS for sea water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear\" \n# \"Wright, 1997\" \n# \"Mc Dougall et al.\" \n# \"Jackett et al. 2006\" \n# \"TEOS 2010\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2.2. Eos Functional Temp\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTemperature used in EOS for sea water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Potential temperature\" \n# \"Conservative temperature\" \n# TODO - please enter value(s)\n", "2.3. Eos Functional Salt\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSalinity used in EOS for sea water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Practical salinity Sp\" \n# \"Absolute salinity Sa\" \n# TODO - please enter value(s)\n", "2.4. Eos Functional Depth\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDepth or pressure used in EOS for sea water ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pressure (dbars)\" \n# \"Depth (meters)\" \n# TODO - please enter value(s)\n", "2.5. Ocean Freezing Point\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS 2010\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2.6. Ocean Specific Heat\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecific heat in ocean (cpocean) in J/(kg K)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "2.7. Ocean Reference Density\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBoussinesq reference density (rhozero) in kg / m3", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Bathymetry\nProperties of bathymetry in ocean\n3.1. Reference Dates\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nReference date of bathymetry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Present day\" \n# \"21000 years BP\" \n# \"6000 years BP\" \n# \"LGM\" \n# \"Pliocene\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the bathymetry fixed in time in the ocean ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.3. Ocean Smoothing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe any smoothing or hand editing of bathymetry in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.4. Source\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe source of bathymetry in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.source') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Nonoceanic Waters\nNon oceanic waters treatment in ocean\n4.1. Isolated Seas\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how treatment of isolated seas is performed", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. River Mouth\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how river mouth mixing or estuaries specific treatment is performed", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Software Properties\nSoftware properties of ocean code\n5.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Resolution\nResolution in the ocean grid\n6.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Canonical Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.3. Range Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRange of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.4. Number Of Horizontal Gridpoints\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "6.5. Number Of Vertical Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of vertical levels resolved on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "6.6. Is Adaptive Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDefault is False. Set true if grid resolution changes during execution.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.7. Thickness Level 1\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThickness of first surface ocean level (in meters)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "7. Key Properties --&gt; Tuning Applied\nTuning methodology for ocean component\n7.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics of mean state (e.g. THC, AABW, regional means etc) used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.4. Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. 
Key Properties --&gt; Conservation\nConservation in the ocean component\n8.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBrief description of conservation methodology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProperties conserved in the ocean by the numerical schemes", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.scheme') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Enstrophy\" \n# \"Salt\" \n# \"Volume of ocean\" \n# \"Momentum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.3. Consistency Properties\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAny additional consistency properties (energy conversion, pressure gradient discretisation, ...)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.4. Corrected Conserved Prognostic Variables\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSet of variables which are conserved by more than the numerical scheme alone.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.5. Was Flux Correction Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDoes conservation involve flux correction ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "9. Grid\nOcean grid\n9.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of grid in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Grid --&gt; Discretisation --&gt; Vertical\nProperties of vertical discretisation in ocean\n10.1. Coordinates\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of vertical coordinates in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Z-coordinate\" \n# \"Z*-coordinate\" \n# \"S-coordinate\" \n# \"Isopycnic - sigma 0\" \n# \"Isopycnic - sigma 2\" \n# \"Isopycnic - sigma 4\" \n# \"Isopycnic - other\" \n# \"Hybrid / Z+S\" \n# \"Hybrid / Z+isopycnic\" \n# \"Hybrid / other\" \n# \"Pressure referenced (P)\" \n# \"P*\" \n# \"Z**\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.2. 
Partial Steps\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nUsing partial steps with Z or Z* vertical coordinate in ocean ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "11. Grid --&gt; Discretisation --&gt; Horizontal\nType of horizontal discretisation scheme in ocean\n11.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal grid type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Lat-lon\" \n# \"Rotated north pole\" \n# \"Two north poles (ORCA-style)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.2. Staggering\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nHorizontal grid staggering type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa E-grid\" \n# \"N/a\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.3. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite difference\" \n# \"Finite volumes\" \n# \"Finite elements\" \n# \"Unstructured grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Timestepping Framework\nOcean Timestepping Framework\n12.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of time stepping in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.2. Diurnal Cycle\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDiurnal cycle type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Via coupling\" \n# \"Specific treatment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13. Timestepping Framework --&gt; Tracers\nProperties of tracers time stepping in ocean\n13.1. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTracers time stepping scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Leap-frog + Asselin filter\" \n# \"Leap-frog + Periodic Euler\" \n# \"Predictor-corrector\" \n# \"Runge-Kutta 2\" \n# \"AM3-LF\" \n# \"Forward-backward\" \n# \"Forward operator\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. 
Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTracers time step (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14. Timestepping Framework --&gt; Baroclinic Dynamics\nBaroclinic dynamics in ocean\n14.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBaroclinic dynamics type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Preconditioned conjugate gradient\" \n# \"Sub cyling\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBaroclinic dynamics scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Leap-frog + Asselin filter\" \n# \"Leap-frog + Periodic Euler\" \n# \"Predictor-corrector\" \n# \"Runge-Kutta 2\" \n# \"AM3-LF\" \n# \"Forward-backward\" \n# \"Forward operator\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.3. Time Step\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nBaroclinic time step (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15. Timestepping Framework --&gt; Barotropic\nBarotropic time stepping in ocean\n15.1. Splitting\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime splitting method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"split explicit\" \n# \"implicit\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.2. Time Step\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nBarotropic time step (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "16. Timestepping Framework --&gt; Vertical Physics\nVertical physics time stepping in ocean\n16.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDetails of vertical time stepping in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17. Advection\nOcean advection\n17.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of advection in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. 
Advection --&gt; Momentum\nProperties of lateral momentum advection scheme in ocean\n18.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of lateral momentum advection scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flux form\" \n# \"Vector form\" \n# TODO - please enter value(s)\n", "18.2. Scheme Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of ocean momentum advection scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.3. ALE\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nUsing ALE for vertical advection ? (if vertical coordinates are sigma)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.ALE') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "19. Advection --&gt; Lateral Tracers\nProperties of lateral tracer advection scheme in ocean\n19.1. Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrder of lateral tracer advection scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "19.2. Flux Limiter\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMonotonic flux limiter for lateral tracer advection scheme in ocean ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "19.3. Effective Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEffective order of limited lateral tracer advection scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "19.4. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.5. Passive Tracers\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nPassive tracers advected", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ideal age\" \n# \"CFC 11\" \n# \"CFC 12\" \n# \"SF6\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19.6. Passive Tracers Advection\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIs advection of passive tracers different than active ? 
if so, describe.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20. Advection --&gt; Vertical Tracers\nProperties of vertical tracer advection scheme in ocean\n20.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.vertical_tracers.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20.2. Flux Limiter\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMonotonic flux limiter for vertical tracer advection scheme in ocean ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "21. Lateral Physics\nOcean lateral physics\n21.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of lateral physics in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of transient eddy representation in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Eddy active\" \n# \"Eddy admitting\" \n# TODO - please enter value(s)\n", "22. Lateral Physics --&gt; Momentum --&gt; Operator\nProperties of lateral physics operator for momentum in ocean\n22.1. Direction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDirection of lateral physics momentum scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Horizontal\" \n# \"Isopycnal\" \n# \"Isoneutral\" \n# \"Geopotential\" \n# \"Iso-level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.2. Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrder of lateral physics momentum scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Harmonic\" \n# \"Bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.3. Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDiscretisation of lateral physics momentum scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Second order\" \n# \"Higher order\" \n# \"Flux limiter\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff\nProperties of eddy viscosity coeff in lateral physics momentum scheme in the ocean\n23.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLateral physics momentum eddy viscosity coeff type in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Space varying\" \n# \"Time + space varying (Smagorinsky)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. Constant Coefficient\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "23.3. Variable Coefficient\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23.4. Coeff Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23.5. Coeff Backscatter\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "24. Lateral Physics --&gt; Tracers\nProperties of lateral physics for tracers in ocean\n24.1. Mesoscale Closure\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there a mesoscale closure in the lateral physics tracers scheme ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "24.2. Submesoscale Mixing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there a submesoscale mixing parameterisation (i.e. Fox-Kemper) in the lateral physics tracers scheme ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "25. Lateral Physics --&gt; Tracers --&gt; Operator\nProperties of lateral physics operator for tracers in ocean\n25.1. Direction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDirection of lateral physics tracers scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Horizontal\" \n# \"Isopycnal\" \n# \"Isoneutral\" \n# \"Geopotential\" \n# \"Iso-level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.2. Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrder of lateral physics tracers scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Harmonic\" \n# \"Bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.3. Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDiscretisation of lateral physics tracers scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Second order\" \n# \"Higher order\" \n# \"Flux limiter\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff\nProperties of eddy diffusity coeff in lateral physics tracers scheme in the ocean\n26.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLateral physics tracers eddy diffusity coeff type in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Space varying\" \n# \"Time + space varying (Smagorinsky)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.2. Constant Coefficient\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "26.3. Variable Coefficient\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.4. 
Coeff Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "26.5. Coeff Backscatter\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity\nProperties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean\n27.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of EIV in lateral physics tracers in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"GM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.2. Constant Val\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf EIV scheme for tracers is constant, specify coefficient value (M2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "27.3. Flux Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of EIV flux (advective or skew)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.4. Added Diffusivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of EIV added diffusivity (constant, flow dependent or none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28. Vertical Physics\nOcean Vertical Physics\n28.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of vertical physics in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details\nProperties of vertical physics in ocean\n29.1. Langmuir Cells Mixing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there Langmuir cells mixing in upper ocean ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers\n*Properties of boundary layer (BL) mixing on tracers in the ocean *\n30.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of boundary layer mixing for tracers in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure - TKE\" \n# \"Turbulent closure - KPP\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Turbulent closure - Bulk Mixed Layer\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.2. Closure Order\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.3. Constant\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant BL mixing of tracers, specific coefficient (m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.4. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground BL mixing of tracers coefficient, (schema and value in m2/s - may be none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum\n*Properties of boundary layer (BL) mixing on momentum in the ocean *\n31.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of boundary layer mixing for momentum in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure - TKE\" \n# \"Turbulent closure - KPP\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Turbulent closure - Bulk Mixed Layer\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.2. Closure Order\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "31.3. Constant\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant BL mixing of momentum, specific coefficient (m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "31.4. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground BL mixing of momentum coefficient, (schema and value in m2/s - may be none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32. Vertical Physics --&gt; Interior Mixing --&gt; Details\n*Properties of interior mixing in the ocean *\n32.1. Convection Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of vertical convection in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Non-penetrative convective adjustment\" \n# \"Enhanced vertical diffusion\" \n# \"Included in turbulence closure\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.2. Tide Induced Mixing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how tide induced mixing is modelled (barotropic, baroclinic, none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.3. Double Diffusion\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there double diffusion ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "32.4. Shear Mixing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there interior shear mixing ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers\n*Properties of interior mixing on tracers in the ocean *\n33.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of interior mixing for tracers in ocean", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure / TKE\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.2. Constant\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant interior mixing of tracers, specific coefficient (m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "33.3. Profile\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the background interior mixing using a vertical profile for tracers (i.e. is NOT constant) ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "33.4. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground interior mixing of tracers coefficient, (schema and value in m2/s - may be none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum\n*Properties of interior mixing on momentum in the ocean *\n34.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of interior mixing for momentum in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure / TKE\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "34.2. Constant\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant interior mixing of momentum, specific coefficient (m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "34.3. Profile\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the background interior mixing using a vertical profile for momentum (i.e. is NOT constant) ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34.4. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground interior mixing of momentum coefficient, (schema and value in m2/s - may be none)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "35. Uplow Boundaries --&gt; Free Surface\nProperties of free surface in ocean\n35.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of free surface in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "35.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nFree surface scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear implicit\" \n# \"Linear filtered\" \n# \"Linear semi-explicit\" \n# \"Non-linear implicit\" \n# \"Non-linear filtered\" \n# \"Non-linear semi-explicit\" \n# \"Fully explicit\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "35.3. Embeded Seaice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the sea-ice embeded in the ocean model (instead of levitating) ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36. Uplow Boundaries --&gt; Bottom Boundary Layer\nProperties of bottom boundary layer in ocean\n36.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of bottom boundary layer in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "36.2. Type Of Bbl\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of bottom boundary layer in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diffusive\" \n# \"Acvective\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "36.3. Lateral Mixing Coef\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "36.4. Sill Overflow\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe any specific treatment of sill overflows", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37. Boundary Forcing\nOcean boundary forcing\n37.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of boundary forcing in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.2. Surface Pressure\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.3. Momentum Flux Correction\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.4. Tracers Flux Correction\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.5. Wave Effects\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how wave effects are modelled at ocean surface.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.wave_effects') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.6. River Runoff Budget\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how river runoff from land surface is routed to ocean and any global adjustment done.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.7. Geothermal Heating\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how geothermal heating is present at ocean bottom.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction\nProperties of momentum bottom friction in ocean\n38.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of momentum bottom friction in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear\" \n# \"Non-linear\" \n# \"Non-linear (drag function of speed of tides)\" \n# \"Constant drag coefficient\" \n# \"None\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "39. 
Boundary Forcing --&gt; Momentum --&gt; Lateral Friction\nProperties of momentum lateral friction in ocean\n39.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of momentum lateral friction in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Free-slip\" \n# \"No-slip\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration\nProperties of sunlight penetration scheme in ocean\n40.1. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of sunlight penetration scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"1 extinction depth\" \n# \"2 extinction depth\" \n# \"3 extinction depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "40.2. Ocean Colour\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the ocean sunlight penetration scheme ocean colour dependent?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "40.3. Extinction Depth\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe and list extinction depths for sunlight penetration scheme (if applicable).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing\nProperties of surface fresh water forcing in ocean\n41.1. From Atmosphere\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of surface fresh water forcing from the atmosphere in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Freshwater flux\" \n# \"Virtual salt flux\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "41.2. From Sea Ice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of surface fresh water forcing from sea-ice in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Freshwater flux\" \n# \"Virtual salt flux\" \n# \"Real salt flux\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "41.3. Forced Mode Restoring\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of surface salinity restoring in forced mode (OMIP)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
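 "Worked Example (Illustrative)\nEvery property cell above follows the same two-call pattern: DOC.set_id(...) selects the property and DOC.set_value(...) records its value. The cell below is a minimal sketch, assuming a hypothetical model with a linear implicit free surface; it is not part of the official template, and an ENUM value must be copied verbatim from that property's Valid Choices list.", "# ILLUSTRATIVE EXAMPLE ONLY - hypothetical values, not an official entry \n# (assumes a model whose free surface scheme is \"Linear implicit\") \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme') \nDOC.set_value(\"Linear implicit\") # must match one of the Valid Choices exactly\n", "©2017 ES-DOC" ]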
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
wakkadojo/OperationPeanut
oldModels/AlmondNut_PreMomentum.ipynb
gpl-3.0
[ "import pandas as pd\nimport numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nfrom geopy.distance import great_circle as geodist\nimport sklearn.linear_model as sklm\nimport sklearn.preprocessing as skpp\nimport sklearn.metrics as skm\nimport sklearn.model_selection as skms\n\n# load data\n\nreg_season = pd.read_csv('data/RegularSeasonDetailedResults.csv')\ntourney = pd.read_csv('data/TourneyDetailedResults.csv')\nratings = pd.read_csv('data/addl/massey_ordinals_2003-2016.csv')\nteam_geog = pd.read_csv('data/addl/TeamGeog.csv')\ntourney_geog = pd.read_csv('data/addl/TourneyGeog_Thru2016.csv')\ntourney_slots = pd.read_csv('data/TourneySlots.csv')\ntourney_seeds = pd.read_csv('data/TourneySeeds.csv')\nkenpom = pd.read_csv('data/kenPomTeamData.csv')\nteams = pd.read_csv('data/Teams.csv')", "Almond Nut Learner\nUse published rankings together with distance traveled to play to classify winners + losers\nTrain to regular season and test on post season\nconsiderations: \n\nRefine\nVegas odds in first round\nPREDICTING UPSETS??\nteam upset rating\nteam score variance\nupset predictors based on past seasons\n\n\nRatings closer to date played\nModel tuning / hyperparameter tuning\n\n\nImplemented\nindividual ratings vs aggregate\nLook at aggregate and derive statistics\n\n\ndiff vs absolute ratings\nUse diffs for feature generation\n\n\nonly use final rankings instead of those at time of play?\nFor now: time of play\n\n\nDistance from home? Distance from last game?\nFor now: distance from home\n\n\nHow do regular season and playoffs differ in features?\nIs using distance in playoffs trained on regular season right?\n\n\n\n\nAugment (not yet executed)\nDefensive / offense ratings from kenpom\nElo, Elo differences, and assoc probabilities\nEnsemble?\nConstruct micro-classifier from elo\n\n\nCoaches\nLook at momentum + OT effects when training\nBeginning of season vs end of season for training", "def attach_ratings_diff_stats(df, ratings_eos, season):\n out_cols = list(df.columns) + ['mean_rtg_1', 'std_rtg_1', 'num_rtg_1', 'mean_rtg_2', 'std_rtg_2', 'num_rtg_2']\n rtg_1 = ratings_eos.rename(columns = {'mean_rtg' : 'mean_rtg_1', 'std_rtg' : 'std_rtg_1', 'num_rtg' : 'num_rtg_1'})\n rtg_2 = ratings_eos.rename(columns = {'mean_rtg' : 'mean_rtg_2', 'std_rtg' : 'std_rtg_2', 'num_rtg' : 'num_rtg_2'})\n return df\\\n .merge(rtg_1, left_on = ['Season', 'Team1'], right_on = ['season', 'team'])\\\n .merge(rtg_2, left_on = ['Season', 'Team2'], right_on = ['season', 'team'])\\\n [out_cols]\n\ndef get_eos_ratings(ratings):\n ratings_last_day = ratings.groupby('season').aggregate(max)[['rating_day_num']].reset_index()\n ratings_eos_all = ratings_last_day\\\n .merge(ratings, left_on = ['season', 'rating_day_num'], right_on = ['season', 'rating_day_num'])\n ratings_eos = ratings_eos_all.groupby(['season', 'team']).aggregate([np.mean, np.std, len])['orank']\n return ratings_eos.reset_index().rename(columns = {'mean' : 'mean_rtg', 'std' : 'std_rtg', 'len' : 'num_rtg'})\n\ndef get_score_fluctuation(reg_season, season):\n # note: quick and dirty; not best practice for home / away etc b/c these would only improve est for\n # std on second order\n # scale the score spreads by # posessions\n # note: units don't really matter because this is used in a ratio and is normalized later\n \n rsc = reg_season[reg_season['Season'] == season].copy()\n \n # avg home vs away\n hscores = rsc[rsc['Wloc'] == 'H']['Wscore'].tolist() + rsc[rsc['Wloc'] == 'A']['Lscore'].tolist()\n ascores = rsc[rsc['Wloc'] == 
'A']['Wscore'].tolist() + rsc[rsc['Wloc'] == 'H']['Lscore'].tolist()\n home_correction = np.mean(hscores) - np.mean(ascores)\n \n # get posessions per game\n posessions = 0.5 * (\n rsc['Lfga'] - rsc['Lor'] + rsc['Lto'] + 0.475*rsc['Lfta'] +\\\n rsc['Wfga'] - rsc['Wor'] + rsc['Wto'] + 0.475*rsc['Wfta']\n )\n \n # get victory margins and correct for home / away -- scale for posessions\n rsc['win_mgn'] = rsc['Wscore'] - rsc['Lscore']\n rsc['win_mgn'] += np.where(rsc['Wloc'] == 'H', -home_correction, 0)\n rsc['win_mgn'] += np.where(rsc['Wloc'] == 'A', home_correction, 0)\n rsc['win_mgn_scaled'] = rsc['win_mgn'] * 100 / posessions # score per 100 posessions\n \n # get mgn of victory stats per team\n win_mgns_wins = rsc[['Wteam', 'win_mgn_scaled']].rename(columns = {'Wteam' : 'team', 'win_mgn_scaled' : 'mgn'})\n win_mgns_losses = rsc[['Lteam', 'win_mgn_scaled']].rename(columns = {'Lteam' : 'team', 'win_mgn_scaled' : 'mgn'})\n win_mgns_losses['mgn'] *= -1\n win_mgns = pd.concat([win_mgns_wins, win_mgns_losses])\n \n return win_mgns.groupby('team').aggregate(np.std).rename(columns = {'mgn' : 'std_mgn'}).reset_index()\n\ndef attach_score_fluctuations(df, reg_season, season):\n cols_to_keep = list(df.columns) + ['std_mgn_1', 'std_mgn_2']\n \n fluct = get_score_fluctuation(reg_season, season)\n fluct1 = fluct.rename(columns = {'std_mgn' : 'std_mgn_1'})\n fluct2 = fluct.rename(columns = {'std_mgn' : 'std_mgn_2'})\n return df\\\n .merge(fluct1, left_on = 'Team1', right_on = 'team')\\\n .merge(fluct2, left_on = 'Team2', right_on = 'team')[cols_to_keep]\n\ndef attach_kenpom_stats(df, kenpom, season):\n cols_to_keep = list(df.columns) + ['adjem_1', 'adjem_2', 'adjt_1', 'adjt_2']\n \n kp1 = kenpom[kenpom['Season'] == season][['Team_Id', 'AdjEM', 'AdjTempo']]\\\n .rename(columns = {'AdjEM' : 'adjem_1', 'AdjTempo' : 'adjt_1'})\n kp2 = kenpom[kenpom['Season'] == season][['Team_Id', 'AdjEM', 'AdjTempo']]\\\n .rename(columns = {'AdjEM' : 'adjem_2', 'AdjTempo' : 'adjt_2'})\n return df\\\n .merge(kp1, left_on = 'Team1', right_on = 'Team_Id')\\\n .merge(kp2, left_on = 'Team2', right_on = 'Team_Id')[cols_to_keep]\n\ndef get_root_and_leaves(hierarchy):\n all_children = set(hierarchy[['Strongseed', 'Weakseed']].values.flatten())\n all_parents = set(hierarchy[['Slot']].values.flatten())\n root = [ p for p in all_parents if p not in all_children ][0]\n leaves = [ c for c in all_children if c not in all_parents ]\n return root, leaves\n\ndef get_tourney_tree_one_season(tourney_slots, season):\n \n def calculate_depths(tree, child, root):\n if child == root:\n return 0\n elif tree[child]['depth'] < 0:\n tree[child]['depth'] = 1 + calculate_depths(tree, tree[child]['parent'], root)\n return tree[child]['depth']\n \n hierarchy = tourney_slots[tourney_slots['Season'] == season][['Slot', 'Strongseed', 'Weakseed']]\n root, leaves = get_root_and_leaves(hierarchy) # should be R6CH...\n tree_raw = {**dict(zip(hierarchy['Strongseed'],hierarchy['Slot'])), \n **dict(zip(hierarchy['Weakseed'],hierarchy['Slot']))}\n tree = { c : {'parent' : tree_raw[c], 'depth' : -1} for c in tree_raw}\n \n for c in leaves:\n calculate_depths(tree, c, root)\n \n return tree\n\ndef get_tourney_trees(tourney_slots):\n return { season : get_tourney_tree_one_season(tourney_slots, season)\\\n for season in tourney_slots['Season'].unique() }\n\ndef slot_matchup_from_seed(tree, seed1, seed2):\n # return which slot the two teams would face off in\n if seed1 == seed2:\n return seed1\n next_seed1 = seed1 if tree[seed1]['depth'] < tree[seed2]['depth'] else 
tree[seed1]['parent']\n next_seed2 = seed2 if tree[seed2]['depth'] < tree[seed1]['depth'] else tree[seed2]['parent']\n return slot_matchup_from_seed(tree, next_seed1, next_seed2)\n\ndef get_team_seed(tourney_seeds, season, team):\n seed = tourney_seeds[\n (tourney_seeds['Team'] == team) & \n (tourney_seeds['Season'] == season)\n ]['Seed'].values\n if len(seed) == 1:\n return seed[0]\n else:\n return None \n\ndef dist(play_lat, play_lng, lat, lng):\n return geodist((play_lat, play_lng), (lat, lng)).miles\n\ndef reg_distance_to_game(games_in, team_geog):\n \n games = games_in.copy()\n out_cols = list(games.columns) + ['w_dist', 'l_dist']\n \n w_geog = team_geog.rename(columns = {'lat' : 'w_lat', 'lng' : 'w_lng'})\n l_geog = team_geog.rename(columns = {'lat' : 'l_lat', 'lng' : 'l_lng'})\n games = games\\\n .merge(w_geog, left_on = 'Wteam', right_on = 'team_id')\\\n .merge(l_geog, left_on = 'Lteam', right_on = 'team_id')\n # handle neutral locations later by averaging distance from home for 2 teams if neutral location\n games['play_lat'] = np.where(games['Wloc'] == 'H', games['w_lat'], games['l_lat'])\n games['play_lng'] = np.where(games['Wloc'] == 'H', games['w_lng'], games['l_lng'])\n games['w_dist'] = games.apply(lambda x: dist(x['play_lat'], x['play_lng'], x['w_lat'], x['w_lng']), axis = 1)\n games['l_dist'] = games.apply(lambda x: dist(x['play_lat'], x['play_lng'], x['l_lat'], x['l_lng']), axis = 1)\n # correct for neutral\n games['w_dist'], games['l_dist'] =\\\n np.where(games['Wloc'] == 'N', (games['w_dist'] + games['l_dist'])/2, games['w_dist']),\\\n np.where(games['Wloc'] == 'N', (games['w_dist'] + games['l_dist'])/2, games['l_dist'])\n return games[out_cols]\n\ndef tourney_distance_to_game(tourney_raw_in, tourney_geog, team_geog, season):\n \n out_cols = list(tourney_raw_in.columns) + ['dist_1', 'dist_2']\n\n tourney_raw = tourney_raw_in.copy()\n \n geog_1 = team_geog.rename(columns = {'lat' : 'lat_1', 'lng' : 'lng_1'})\n geog_2 = team_geog.rename(columns = {'lat' : 'lat_2', 'lng' : 'lng_2'})\n geog_play = tourney_geog[tourney_geog['season'] == season][['slot', 'lat', 'lng']]\\\n .rename(columns = {'lat' : 'lat_p', 'lng' : 'lng_p'})\n \n tourney_raw = tourney_raw\\\n .merge(geog_1, left_on = 'Team1', right_on = 'team_id')\\\n .merge(geog_2, left_on = 'Team2', right_on = 'team_id')\\\n .merge(geog_play, left_on = 'SlotMatchup', right_on = 'slot')\n \n tourney_raw['dist_1'] = tourney_raw.apply(lambda x: dist(x['lat_p'], x['lng_p'], x['lat_1'], x['lng_1']), axis = 1)\n tourney_raw['dist_2'] = tourney_raw.apply(lambda x: dist(x['lat_p'], x['lng_p'], x['lat_2'], x['lng_2']), axis = 1)\n \n return tourney_raw[out_cols]\n\ndef get_raw_reg_season_data(reg_season, team_geog, season):\n \n cols_to_keep = ['Season', 'Daynum', 'Team1', 'Team2', 'score_1', 'score_2', 'dist_1', 'dist_2']\n \n rsr = reg_season[reg_season['Season'] == season] # reg season raw\n rsr = reg_distance_to_game(rsr, team_geog)\n \n rsr['Team1'] = np.where(rsr['Wteam'] < rsr['Lteam'], rsr['Wteam'], rsr['Lteam'])\n rsr['Team2'] = np.where(rsr['Wteam'] > rsr['Lteam'], rsr['Wteam'], rsr['Lteam'])\n rsr['score_1'] = np.where(rsr['Wteam'] < rsr['Lteam'], rsr['Wscore'], rsr['Lscore'])\n rsr['score_2'] = np.where(rsr['Wteam'] > rsr['Lteam'], rsr['Wscore'], rsr['Lscore'])\n rsr['dist_1'] = np.where(rsr['Wteam'] < rsr['Lteam'], rsr['w_dist'], rsr['l_dist'])\n rsr['dist_2'] = np.where(rsr['Wteam'] > rsr['Lteam'], rsr['w_dist'], rsr['l_dist'])\n \n return rsr[cols_to_keep]\n\ndef get_raw_tourney_data(tourney_seeds, tourney_trees, 
tourney_geog, team_geog, season):\n \n # tree to find play location\n tree = tourney_trees[season]\n \n # get all teams in tourney\n seed_map = tourney_seeds[tourney_seeds['Season'] == season].set_index('Team').to_dict()['Seed']\n teams = sorted(seed_map.keys())\n \n team_pairs = sorted([ (team1, team2) for team1 in teams for team2 in teams if team1 < team2 ])\n tourney_raw = pd.DataFrame(team_pairs).rename(columns = { 0 : 'Team1', 1 : 'Team2' })\n tourney_raw['Season'] = season\n \n # find out where they would play each other\n tourney_raw['SlotMatchup'] = tourney_raw.apply(\n lambda x: slot_matchup_from_seed(tree, seed_map[x['Team1']], seed_map[x['Team2']]), axis = 1\n )\n \n # get features\n tourney_raw = tourney_distance_to_game(tourney_raw, tourney_geog, team_geog, season)\n \n return tourney_raw\n\ndef attach_supplements(data, reg_season, kenpom, ratings_eos, season):\n \n dc = data.copy()\n dc = attach_ratings_diff_stats(dc, ratings_eos, season) # get ratings diff stats\n dc = attach_kenpom_stats(dc, kenpom, season)\n dc = attach_score_fluctuations(dc, reg_season, season)\n \n return dc", "Feature engineering\n\nLog of distance\nCapture rating diffs\nCapture rating diffs acct for variance (t score)\nDiff in expected scores via EM diffs\n\nTag winners in training set + viz. Also, normalize data.", "def generate_features(df):\n \n has_score = 'score_1' in df.columns and 'score_2' in df.columns\n \n cols_to_keep = ['Team1', 'Team2', 'Season', 'ln_dist_diff', 'rtg_diff', 't_rtg', 'pt_diff', 't_score'] +\\\n (['Team1_win'] if has_score else [])\n \n features = df.copy()\n features['ln_dist_diff'] = np.log((1 + df['dist_1'])/(1 + df['dist_2']))\n # use negative for t_rtg so that better team has higher statistic than worse team\n features['rtg_diff'] = -(df['mean_rtg_1'] - df['mean_rtg_2']) \n features['t_rtg'] = -(df['mean_rtg_1'] - df['mean_rtg_2']) / np.sqrt(df['std_rtg_1']**2 + df['std_rtg_2']**2)\n features['pt_diff'] = df['adjem_1'] - df['adjem_2']\n features['t_score'] = (df['adjem_1'] - df['adjem_2']) / np.sqrt(df['std_mgn_1']**2 + df['std_mgn_2']**2)\n \n # truth feature: did team 1 win?\n if has_score:\n features['Team1_win'] = features['score_1'] > features['score_2']\n \n return features[cols_to_keep]\n\ndef normalize_features(train, test, features):\n all_data_raw = pd.concat([train[features], test[features]])\n all_data_norm = skpp.scale(all_data_raw) # with_mean = False ?\n train_norm = train.copy()\n test_norm = test.copy()\n train_norm[features] = all_data_norm[:len(train)]\n test_norm[features] = all_data_norm[len(train):]\n return train_norm, test_norm\n\ndef get_key(df):\n return df['Season'].map(str) + '_' + df['Team1'].map(str) + '_' + df['Team2'].map(str)", "Running the model", "features_to_use = ['ln_dist_diff', 'rtg_diff', 't_rtg', 'pt_diff', 't_score']\npredict_field = 'Team1_win'\n\ndef get_features(season, tourney_slots, ratings, reg_season, team_geog, kenpom, tourney_seeds, tourney_geog):\n \n # support data\n tourney_trees = get_tourney_trees(tourney_slots)\n ratings_eos = get_eos_ratings(ratings)\n \n # regular season cleaned data\n regular_raw = get_raw_reg_season_data(reg_season, team_geog, season)\n regular_raw = attach_supplements(regular_raw, reg_season, kenpom, ratings_eos, season)\n \n # post season cleaned data\n tourney_raw = get_raw_tourney_data(tourney_seeds, tourney_trees, tourney_geog, team_geog, season)\n tourney_raw = attach_supplements(tourney_raw, reg_season, kenpom, ratings_eos, season)\n \n # get and normalize features\n feat_train = 
generate_features(regular_raw)\n    feat_test = generate_features(tourney_raw)\n    train_norm, test_norm = normalize_features(feat_train, feat_test, features_to_use)\n    \n    return regular_raw, tourney_raw, feat_train, feat_test, train_norm, test_norm\n\ndef make_predictions(season, train_norm, test_norm, tourney, C = 1):\n    \n    # fit\n    lr = sklm.LogisticRegression(C = C) # fit_intercept = False???\n    lr.fit(train_norm[features_to_use].values, train_norm[predict_field].values)\n\n    # predictions\n    probs = lr.predict_proba(test_norm[features_to_use].values)\n    keys = get_key(test_norm)\n    predictions = pd.DataFrame({'Id' : keys.values, 'Pred' : probs[:,1]})\n    \n    # Evaluate outcomes\n    res_base = tourney[(tourney['Season'] == season) & (tourney['Daynum'] > 135)].copy().reset_index()\n    res_base['Team1'] = np.where(res_base['Wteam'] < res_base['Lteam'], res_base['Wteam'], res_base['Lteam'])\n    res_base['Team2'] = np.where(res_base['Wteam'] > res_base['Lteam'], res_base['Wteam'], res_base['Lteam'])\n    res_base['Result'] = (res_base['Wteam'] == res_base['Team1']).map(lambda x: 1 if x else 0)\n    res_base['Id'] = get_key(res_base) \n    # attach results to predictions\n    res = pd.merge(res_base[['Id', 'Result']], predictions, on = 'Id', how = 'left')\n    # logloss\n    ll = skm.log_loss(res['Result'], res['Pred'])\n    \n#    print(lr.intercept_)\n#    print(lr.coef_)\n    \n    return predictions, res, ll\n\n\nall_predictions = []\nfor season in [2013, 2014, 2015, 2016]:\n    regular_raw, tourney_raw, feat_train, feat_test, train_norm, test_norm = \\\n        get_features(season, tourney_slots, ratings, reg_season, team_geog, kenpom, tourney_seeds, tourney_geog)\n    # see below for choice of C\n    predictions, res, ll = make_predictions(season, train_norm, test_norm, tourney, C = 5e-3)\n    print(ll)\n    all_predictions += [predictions]\n\n# 0.559078513104 -- 2013\n# 0.541984791608 -- 2014\n# 0.480356337664 -- 2015\n# 0.511671826092 -- 2016\n\npd.concat(all_predictions).to_csv('./submissions/simpleLogisticModel2013to2016_tuned.csv', index = False)\n\nsns.pairplot(train_norm, hue = predict_field, vars = ['ln_dist_diff', 'rtg_diff', 't_rtg', 'pt_diff', 't_score'])\nplt.show()", "Sandbox explorations", "teams[teams['Team_Id'].isin([1163, 1196])]\n\ntourney_raw[(tourney_raw['Team1'] == 1163) & (tourney_raw['Team2'] == 1196)]\n\nfeat_test[(feat_test['Team1'] == 1195) & (feat_test['Team2'] == 1196)]\n\nres.iloc[np.argsort(-(res['Pred'] - res['Result']).abs().values)].reset_index(drop = True)\n\n# accuracy?\nnp.sum(np.where(res['Pred'] > 0.5, res['Result'] == 1, res['Result'] == 0)) / len(res)", "Effect of C on different years", "cs_to_check = np.power(10, np.arange(-4, 2, 0.1))\nyears_to_check = range(2011, 2017)\nc_effect_df_dict = { 'C' : cs_to_check }\nfor yr in years_to_check:\n    regular_raw, tourney_raw, feat_train, feat_test, train_norm, test_norm = \\\n        get_features(yr, tourney_slots, ratings, reg_season, team_geog, kenpom, tourney_seeds, tourney_geog)\n    log_losses = [ make_predictions(yr, train_norm, test_norm, tourney, C = C)[2] for C in cs_to_check ]\n    c_effect_df_dict[str(yr)] = log_losses\nc_effect = pd.DataFrame(c_effect_df_dict)\n\nplt.semilogx()\nfor col in [ col for col in c_effect if col != 'C' ]:\n    plt.plot(c_effect['C'], c_effect[col])\nplt.legend(loc = 3)\nplt.xlabel('C')\nplt.ylabel('logloss')\nplt.ylim(0.45, 0.65)\nplt.show()", "Look at who is contributing to logloss", "# contribution to logloss\nrc = res.copy()\nftc = feat_test.copy()\nftc['Id'] = get_key(ftc)\nrc['logloss_contrib'] = -np.log(np.where(rc['Result'] == 1, rc['Pred'], 1 - rc['Pred'])) / len(rc)\nftc = pd.merge(rc, ftc, how = 'left', on = 'Id')\n\nfig, axes = plt.subplots(nrows=1, ncols=2, figsize = (10, 4))\nim = axes[0].scatter(ftc['t_score'], ftc['t_rtg'], c = ftc['logloss_contrib'], vmin = 0, vmax = 0.025, cmap = plt.cm.get_cmap('coolwarm'))\naxes[0].set_xlabel('t_score')\naxes[0].set_ylabel('t_rtg')\n#plt.colorbar(im)\naxes[1].scatter(-ftc['ln_dist_diff'], ftc['t_rtg'], c = ftc['logloss_contrib'], vmin = 0, vmax = 0.025, cmap = plt.cm.get_cmap('coolwarm'))\naxes[1].set_xlabel('ln_dist_diff')\ncb = fig.colorbar(im, ax=axes.ravel().tolist(), label = 'logloss_contrib')\nplt.show()", "Logloss contribution by round", "tourney_rounds = tourney_raw[['Team1', 'Team2', 'Season', 'SlotMatchup']].copy()\ntourney_rounds['Id'] = get_key(tourney_rounds)\ntourney_rounds['round'] = tourney_rounds['SlotMatchup'].map(lambda s: int(s[1]))\ntourney_rounds = tourney_rounds[['Id', 'round']]\nftc_with_rounds = pd.merge(ftc, tourney_rounds, how = 'left', on = 'Id')\n\nfig, axs = plt.subplots(ncols=2, figsize = (10, 4))\nsns.barplot(data = ftc_with_rounds, x = 'round', y = 'logloss_contrib', errwidth = 0, ax = axs[0])\nsns.barplot(data = ftc_with_rounds, x = 'round', y = 'logloss_contrib', errwidth = 0, estimator=max, ax = axs[1])\naxs[0].set_ylim(0, 0.035)\naxs[1].set_ylim(0, 0.035)\nplt.show()", "Overtime counts", "sns.barplot(data = reg_season[reg_season['Season'] > 2000], x = 'Season', y = 'Numot', errwidth = 0)\nplt.show()", "A look at dynamics of ratings data", "ratings_eos = get_eos_ratings(ratings) # end-of-season rating stats (otherwise only computed inside get_features)\nsns.lmplot('mean_rtg', 'std_rtg', data = ratings_eos, fit_reg = False)\nplt.show()\n\nratings_eos_test = ratings_eos.copy()\nratings_eos_test['parabola_mean_model'] = (ratings_eos_test['mean_rtg'].max()/2)**2 - (ratings_eos_test['mean_rtg'] - ratings_eos_test['mean_rtg'].max()/2)**2\nsns.lmplot('parabola_mean_model', 'std_rtg', data = ratings_eos_test, fit_reg = False)\nplt.show()\n\ntest_data_test = tourney_raw.copy() # tourney_raw (from the last get_features call) carries the *_rtg columns used below\ntest_data_test['rtg_diff'] = test_data_test['mean_rtg_1'] - test_data_test['mean_rtg_2']\ntest_data_test['t_model'] = test_data_test['rtg_diff']/(test_data_test['std_rtg_1']**2 + test_data_test['std_rtg_2']**2)**0.5\n#sns.lmplot('rtg_diff', 't_model', data = test_data_test, fit_reg = False)\nsns.pairplot(test_data_test[['rtg_diff', 't_model']])\nplt.show()", "Quick investigation: looks like avg score decreases with log of distance traveled", "dist_test = reg_distance_to_game(reg_season[reg_season['Season'] == 2016], team_geog) # attaches the w_dist / l_dist columns used below\nw_dist_test = dist_test[['w_dist', 'Wscore']].rename(columns = {'w_dist' : 'dist', 'Wscore' : 'score'})\nl_dist_test = dist_test[['l_dist', 'Lscore']].rename(columns = {'l_dist' : 'dist', 'Lscore' : 'score'})\ndist_test = pd.concat([w_dist_test, l_dist_test]).reset_index()[['dist', 'score']]\n\nplt.hist(dist_test['dist'])\nplt.xlim(0, 3000)\nplt.semilogy()\nplt.show()\n\nbucket_size = 1\ndist_test['bucket'] = bucket_size * (np.log(dist_test['dist'] + 1) // bucket_size)\ndist_grp = dist_test.groupby('bucket').aggregate([np.mean, np.std, len])['score']\ndist_grp['err'] = dist_grp['std'] / np.sqrt(dist_grp['len'])\n\nplt.plot(dist_grp['mean'])\nplt.fill_between(dist_grp.index, \n                 (dist_grp['mean'] - 2*dist_grp['err']).values, \n                 (dist_grp['mean'] + 2*dist_grp['err']).values,\n                 alpha = 0.3)\nplt.xlabel('log of distance traveled')\nplt.ylabel('avg score')\nplt.show()",
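 "Worked example: the t-style rating feature\nA minimal, self-contained sketch of the t_rtg feature built in generate_features above, using toy numbers rather than real Massey ratings: the gap between two teams' mean rankings is scaled by the combined spread of their published rankings, so a gap between consistently-ranked teams counts for more than the same gap between erratically-ranked ones.", "import numpy as np\n\n# Hypothetical end-of-season rating summaries for two teams (illustration only)\nmean_rtg_1, std_rtg_1 = 12.0, 4.0   # team 1: ranked ~12th, the rankings agree closely\nmean_rtg_2, std_rtg_2 = 40.0, 20.0  # team 2: ranked ~40th, the rankings disagree a lot\n\n# same formula as generate_features: negated so the better (lower) rank scores higher\nt_rtg = -(mean_rtg_1 - mean_rtg_2) / np.sqrt(std_rtg_1**2 + std_rtg_2**2)\nprint(t_rtg)  # ~1.37, i.e. team 1 looks clearly stronger", "Worked example: per-game logloss\nA quick sanity check on the metric used to evaluate make_predictions, again with toy numbers rather than tournament data: a confident correct prediction costs little, a 50/50 prediction always costs ln(2)/n, and a confident wrong prediction dominates the total.", "import numpy as np\n\n# Hypothetical predictions P(team 1 wins) for three games, and the actual results\npreds = np.array([0.9, 0.5, 0.9])\nresults = np.array([1, 1, 0])\n\n# per-game contribution, matching the logloss_contrib column computed earlier\ncontrib = -np.log(np.where(results == 1, preds, 1 - preds)) / len(preds)\nprint(contrib)        # ~[0.035, 0.231, 0.768]\nprint(contrib.sum())  # ~1.034, equal to skm.log_loss(results, preds)" ]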
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
sky111111111/study
test2.ipynb
gpl-3.0
[ "Language Translation\nIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.\nGet the Data\nSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.", "# sequence_to_sequence_implementation course assignment was used a lot to finish this hw\n\n# A live help person highly suggested I worked through it again. --- 10000% correct. this was vital\n\n### AKA the UDACITY seq2seq assignment, /deep-learning/seq2seq/sequence_to_sequence_implementation.ipynb\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport problem_unittests as tests\n\nsource_path = 'data/small_vocab_en'\ntarget_path = 'data/small_vocab_fr'\nsource_text = helper.load_data(source_path)\ntarget_text = helper.load_data(target_path)", "Explore the Data\nPlay around with view_sentence_range to view different parts of the data.", "view_sentence_range = (10, 110)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))\n\nsentences = source_text.split('\\n')\nword_counts = [len(sentence.split()) for sentence in sentences]\nprint('Number of sentences: {}'.format(len(sentences)))\nprint('Average number of words in a sentence: {}'.format(np.average(word_counts)))\n\nprint()\nprint('English sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(source_text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))\nprint()\nprint('French sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(target_text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))", "Implement Preprocessing Function\nText to Word Ids\nAs you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of target_text. This will help the neural network predict when the sentence should end.\nYou can get the &lt;EOS&gt; word id by doing:\npython\ntarget_vocab_to_int['&lt;EOS&gt;']\nYou can get other word ids using source_vocab_to_int and target_vocab_to_int.", "def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):\n \"\"\"\n Convert source and target text to proper word ids\n :param source_text: String that contains all the source text.\n :param target_text: String that contains all the target text.\n :param source_vocab_to_int: Dictionary to go from the source words to an id\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :return: A tuple of lists (source_id_text, target_id_text)\n \"\"\"\n # TODO: Implement Function\n # I couldn't remember what eos stood for (too many acronyms to remember) so I googled it\n #https://www.tensorflow.org/tutorials/seq2seq\n # end-of-senence (eos)\n # asked a live support about this. He / she directed me to https://github.com/nicolas-ivanov/tf_seq2seq_chatbot/issues/15\n #\n \n \n # Ok, setup the stuff that is known to be needed first\n source_id_text = []\n target_id_text = []\n end_of_seq = target_vocab_to_int['<EOS>'] # had \"eos\" at first and it gave an error. Changing to EOS. 
## Update: doesn't fix, / issue is something else.\n \n #look at data strcuture\n #print(\"================\")\n #print(source_text)\n #print(\"================\")\n #source_id_text = enumerate(source_text.split('\\n'))\n #source_id_text = for tacos in (source_text.split('\\n'))\n \n \n #source_id_text = source_text.split('\\n')\n #print(source_id_text)\n #print(np.)\n print(\"================\")\n \n source_id_textsen = source_text.split('\\n')\n target_id_textsen = target_text.split('\\n')\n \n #for sentence in (source_id_textsen):\n # for word in sentence.split():\n # I think this is OK. default *should be spaces*\n #print(\"test:\"+word)\n #source_id_text = word\n #source_id_text = source_vocab_to_int[word]\n # source_id_text.append([source_vocab_to_int[word]])\n #print(len(source_id_text))\n #for sentence in (target_id_textsen):\n # for word in sentence.split():\n # #pass\n # #target_id_text = target_vocab_to_int[word]\n # target_id_text.append(target_vocab_to_int[word])\n # target_id_text.append(end_of_seq)\n \n\n #### WHY AM I STILL GETTING 60 something and an error saying it should just be four values in\n # source_id_text\n #How did I just break this.... It jus t worked\n\n # for sentence in (source_id_textsen):\n # source_id_text = [[source_vocab_to_int[word] for word in sentence.split()]]\n \n # for sentence in (target_id_textsen):\n # target_id_text = [[target_vocab_to_int[word] for word in sentence.split()] + [end_of_seq]]\n \n\n\n\n\n # Live help said the following is the same. Added here for future reference if a similar problem is encountered after the course.\n source_id_text = [[source_vocab_to_int[word] for word in seq.split()] for seq in source_text.split('\\n')]\n target_id_text = [[target_vocab_to_int[word] for word in seq.split()] + [end_of_seq] for seq in target_text.split('\\n')]\n\n return source_id_text, target_id_text\n\n \n # do an enummeration for\n print(\"================\")\n \n \n \n return (source_id_text, target_id_text) #None, None\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_text_to_ids(text_to_ids)", "Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nhelper.preprocess_and_save_data(source_path, target_path, text_to_ids)", "Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\nimport helper\n\n(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()", "Check the Version of TensorFlow and Access to GPU\nThis will check to make sure you have the correct version of TensorFlow and access to a GPU", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\nfrom tensorflow.python.layers.core import Dense\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. 
Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))", "Build the Neural Network\nYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:\n- model_inputs\n- process_decoder_input\n- encoding_layer\n- decoding_layer_train\n- decoding_layer_infer\n- decoding_layer\n- seq2seq_model\nInput\nImplement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:\n\nInput text placeholder named \"input\" using the TF Placeholder name parameter with rank 2.\nTargets placeholder with rank 2.\nLearning rate placeholder with rank 0.\nKeep probability placeholder named \"keep_prob\" using the TF Placeholder name parameter with rank 0.\nTarget sequence length placeholder named \"target_sequence_length\" with rank 1\nMax target sequence length tensor named \"max_target_len\" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.\nSource sequence length placeholder named \"source_sequence_length\" with rank 1\n\nReturn the placeholders in the following the tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)", "def model_inputs():\n \"\"\"\n Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.\n :return: Tuple (input, targets, learning rate, keep probability, target sequence length,\n max target sequence length, source sequence length)\n \"\"\"\n #https://www.tensorflow.org/api_docs/python/tf/placeholder\n \n \n #float32 issue at end of project, chaning things to int32 where possible???\n \n \n Input = tf.placeholder(dtype=tf.int32,shape=[None,None],name=\"input\")\n Target = tf.placeholder(dtype=tf.int32,shape=[None,None],name=\"target\")\n lr = tf.placeholder(dtype=tf.float32,name=\"lr\")\n taretlength = tf.placeholder(dtype=tf.int32,name=\"target_sequence_length\")\n kp = tf.placeholder(dtype=tf.float32,name=\"keep_prob\")\n \n \n #maxseq = tf.placeholder(dtype.float32,name='max_target_len')\n maxseq = tf.reduce_max(taretlength,name='max_target_len')\n \n sourceseqlen = tf.placeholder(dtype=tf.int32,shape=[None],name='source_sequence_length')\n \n \n# TODO: Implement Function\n \n \n \n return Input, Target, lr, kp, taretlength, maxseq, sourceseqlen\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_model_inputs(model_inputs)", "Process Decoder Input\nImplement process_decoder_input by removing the last word id from each batch in target_data and concat the GO ID to the begining of each batch.", "### From the UDACITYclass assignment:\n##########################################\n# Process the input we'll feed to the decoder\n#def process_decoder_input(target_data, vocab_to_int, batch_size):\n # '''Remove the last word id from each batch and concat the <GO> to the begining of each batch'''\n # ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])\n # dec_input = tf.concat([tf.fill([batch_size, 1], vocab_to_int['<GO>']), ending], 1)##\n #return dec_input#\n###udacity/hw/deep-learning/seq2seq/sequence_to_sequence_implementation.ipynb\n#####################################\n\ndef process_decoder_input(target_data, target_vocab_to_int, batch_size):\n \"\"\"\n Preprocess target data for encoding\n :param target_data: Target Placehoder\n :param target_vocab_to_int: Dictionary to go from 
the target words to an id\n :param batch_size: Batch Size\n :return: Preprocessed target data\n \"\"\"\n # done: Implement Function\n # this is to be sliced just like one would do with numpy\n # to do that, https://www.tensorflow.org/api_docs/python/tf/strided_slice is used.\n # ref to verify this is the rigth func: https://stackoverflow.com/questions/41380126/what-does-tf-strided-slice-do \n \n #strided_slice(\n #input_,\n # begin,\n #end,\n #strides=None,\n #begin_mask=0,\n #end_mask=0,\n #ellipsis_mask=0,\n #new_axis_mask=0,\n #shrink_axis_mask=0,\n #var=None,\n #name=None\n #\n #)\n #ret = tf.strided_slice(input_=target_data,begin=[0],end=[batch_size],)\n \n\n # FROM UDACITY seq2seq assignment\n #ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])\n #dec_input = tf.concat([tf.fill([batch_size, 1], vocab_to_int['<GO>']), ending], 1)\n #return dec_input\n \n \n ret = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])\n #ret =tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), target_data], 1)\n ret =tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ret], 1)\n \n return ret #None\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_process_encoding_input(process_decoder_input)", "Encoding\nImplement encoding_layer() to create a Encoder RNN layer:\n * Embed the encoder input using tf.contrib.layers.embed_sequence\n * Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper\n * Pass cell and embedded input to tf.nn.dynamic_rnn()", "from imp import reload\nreload(tests)\n\ndef encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, \n source_sequence_length, source_vocab_size, \n encoding_embedding_size):\n \"\"\"\n Create encoding layer\n :param rnn_inputs: Inputs for the RNN\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param keep_prob: Dropout keep probability\n :param source_sequence_length: a list of the lengths of each sequence in the batch\n :param source_vocab_size: vocabulary size of source data\n :param encoding_embedding_size: embedding size of source data\n :return: tuple (RNN output, RNN state)\n \"\"\"\n # done: Implement Function\n \n \n ##################\n ##\n ## This is simlar to 2.1 Encoder of the UDACITY seq2seq hw\n \"\"\"\n #def encoding_layer(input_data, rnn_size, num_layers,\n source_sequence_length, source_vocab_size, \n encoding_embedding_size):\n\n\n # Encoder embedding\n enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, encoding_embedding_size)\n\n # RNN cell\n def make_cell(rnn_size):\n enc_cell = tf.contrib.rnn.LSTMCell(rnn_size,\n initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))\n return enc_cell\n\n enc_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])\n \n enc_output, enc_state = tf.nn.dynamic_rnn(enc_cell, enc_embed_input, sequence_length=source_sequence_length, dtype=tf.float32)\n \n return enc_output, enc_state\"\"\"\n #\n ##\n \n \n ##########\n \n # the respective documents for this cell are:\n #https://www.tensorflow.org/api_docs/python/tf/contrib/layers/embed_sequence\n #https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/LSTMCell\n #https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/DropoutWrapper\n #https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn\n \n #rrnoutput=\n #rrnstate=\n \n #embed_input = tf.contrib.layers.embed_sequence(rnn_inputs, rnn_size, encoding_embedding_size)\n embed_input = 
tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)\n \n #tf.contrib.layers.embed_sequence()\n def make_cell(rnn_size):\n #https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/DropoutWrapper\n \n enc_cell = tf.contrib.rnn.LSTMCell(rnn_size,initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) # I had AN INSANE AMMOUNT OF ERRORS BECAUSE I ACCIDENTALLY EDINTED THIS LINE TO HAVE PROB INSTEAD OF THE DROPOUT. >.> no good error codes\n enc_cell = tf.contrib.rnn.DropoutWrapper(enc_cell,output_keep_prob=keep_prob) \n \n # Not sure which one. Probably not input. EIther output or state..\n #input_keep_prob: unit Tensor or float between 0 and 1, input keep probability; if it is constant and 1, no input dropout will be added.\n #output_keep_prob: unit Tensor or float between 0 and 1, output keep probability; if it is constant and 1, no output dropout will be added.\n #state_keep_prob: unit Tensor or float between 0 and 1, output keep probability; if it is constant and 1, no output dropout will be added. State dropout is performed on the output states of the cell.\n \n return enc_cell\n\n enc_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])\n \n enc_output, enc_state = tf.nn.dynamic_rnn(enc_cell, embed_input, sequence_length=source_sequence_length, dtype=tf.float32)\n return enc_output, enc_state\n #return None, None\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_encoding_layer(encoding_layer)", "Decoding - Training\nCreate a training decoding layer:\n* Create a tf.contrib.seq2seq.TrainingHelper \n* Create a tf.contrib.seq2seq.BasicDecoder\n* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode", "######\n# SUPEEEER TRICKY UDACITY!\n# I spent half a day trying to figure out why I had cryptic errors - turns out only\n# Tensorflow 1.1 can run this.\n# not 1.0 . Not 1.2. \n# wasting my time near the submission deadline even though my code is OK.\n\n# Used the UDACITY sequence_to_sequence_implementation as reference for this\n# did find operation (ctrl+f) fro \"rainingHelper\"\n# Found decoding_layer(...) function which seems to address this cell's requirements\ndef decoding_layer_train(encoder_state, dec_cell, dec_embed_input, \n target_sequence_length, max_summary_length, \n output_layer, keep_prob):\n \"\"\"\n Create a decoding layer for training\n :param encoder_state: Encoder State\n :param dec_cell: Decoder RNN Cell\n :param dec_embed_input: Decoder embedded input\n :param target_sequence_length: The lengths of each sequence in the target batch\n :param max_summary_length: The length of the longest sequence in the batch\n :param output_layer: Function to apply the output layer\n :param keep_prob: Dropout keep probability\n :return: BasicDecoderOutput containing training logits and sample_id\n \"\"\"\n # done: Implement Function\n \n \"\"\"\n #from seq 2 seq: \n # Helper for the training process. Used by BasicDecoder to read inputs.\n training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input,\n sequence_length=target_sequence_length,\n time_major=False)\n \n \n # Basic decoder\n training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,\n training_helper,\n enc_state,\n output_layer) \n \n # Perform dynamic decoding using the decoder\n training_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(training_decoder,\n \n \n \"\"\"\n # Helper for the training process. 
Used by BasicDecoder to read inputs.\n training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input,\n sequence_length=target_sequence_length,\n time_major=False)\n\n #encoder_state ... ameError: name 'enc_state' is not defined\n # Basic decoder\n training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,\n training_helper,\n encoder_state,\n output_layer) \n\n # Perform dynamic decoding using the decoder\n #NameError: name 'max_target_sequence_length' is not defined ... same deal\n training_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(training_decoder, impute_finished=True, maximum_iterations=max_summary_length)\n #ValueError: too many values to unpack (expected 2)\n #training_decoder_output = tf.contrib.seq2seq.dynamic_decode(training_decoder, impute_finished=True, maximum_iterations=max_summary_length)\n \n \n return training_decoder_output\n \n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer_train(decoding_layer_train)", "Decoding - Inference\nCreate inference decoder:\n* Create a tf.contrib.seq2seq.GreedyEmbeddingHelper\n* Create a tf.contrib.seq2seq.BasicDecoder\n* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode", "#########################\n#\n# Searched course tutorial Seq2seq again, same functtion as last code cell\n#\n# See below:\n\"\"\"\nwith tf.variable_scope(\"decode\", reuse=True):\n start_tokens = tf.tile(tf.constant([target_letter_to_int['<GO>']], dtype=tf.int32), [batch_size], name='start_tokens')\n\n # Helper for the inference process.\n inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings,\n start_tokens,\n target_letter_to_int['<EOS>'])\n\n # Basic decoder\n inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,\n inference_helper,\n enc_state,\n output_layer)\n \n # Perform dynamic decoding using the decoder\n inference_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder,\n impute_finished=True,\n maximum_iterations=max_target_sequence_length)\n \n\n \n return training_decoder_output, inference_decoder_output\n\"\"\"\n\n\n\ndef decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,\n end_of_sequence_id, max_target_sequence_length,\n vocab_size, output_layer, batch_size, keep_prob):\n \"\"\"\n Create a decoding layer for inference\n :param encoder_state: Encoder state\n :param dec_cell: Decoder RNN Cell\n :param dec_embeddings: Decoder embeddings\n :param start_of_sequence_id: GO ID\n :param end_of_sequence_id: EOS Id\n :param max_target_sequence_length: Maximum length of target sequences\n :param vocab_size: Size of decoder/target vocabulary\n :param decoding_scope: TenorFlow Variable Scope for decoding\n :param output_layer: Function to apply the output layer\n :param batch_size: Batch size\n :param keep_prob: Dropout keep probability\n :return: BasicDecoderOutput containing inference logits and sample_id\n \"\"\"\n # done: Implement Function\n \n #### BASED STRONGLY ON CLASS COURSEWORK, THE SEQ2SEQ material\n \n #https://www.tensorflow.org/api_docs/python/tf/tile\n #NameError: name 'target_letter_to_int' is not defined\n #start_tokens = tf.tile(tf.constant([target_letter_to_int['<GO>']], dtype=tf.int32), [batch_size], name='start_tokens')\n #start_tokens = tf.tile(tf.constant(['<GO>'], dtype=tf.int32), [batch_size], name='start_tokens')\n start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size], name='start_tokens')\n\n \n \n # Helper for the inference 
process.\n #https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/GreedyEmbeddingHelper\n \n #inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings,start_tokens,target_letter_to_int['<EOS>'])\n #NameError: name 'target_letter_to_int' is not defined\n inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings,start_tokens,end_of_sequence_id)\n \n # Basic decoder\n #enc_state # encoder_state changed naes\n #https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder\n inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,inference_helper,encoder_state,output_layer)\n\n # Perform dynamic decoding using the decoder\n #https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode\n inference_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder,impute_finished=True,maximum_iterations=max_target_sequence_length)\n\n\n \n return inference_decoder_output#None\n\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer_infer(decoding_layer_infer)", "Build the Decoding Layer\nImplement decoding_layer() to create a Decoder RNN layer.\n\nEmbed the target sequences\nConstruct the decoder LSTM cell (just like you constructed the encoder cell above)\nCreate an output layer to map the outputs of the decoder to the elements of our vocabulary\nUse the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.\nUse your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.\n\nNote: You'll need to use tf.variable_scope to share variables between training and inference.", "#\n## \n# Again, as suggested by a Uedacity TA (live support), SEQ 2 SEQ \n# Largely based on the decoding_layer in the udadcity seq2seq tutorial/example material. \n# See here:\n\"\"\"\ndef decoding_layer(target_letter_to_int, decoding_embedding_size, num_layers, rnn_size,\n target_sequence_length, max_target_sequence_length, enc_state, dec_input):\n # 1. Decoder Embedding\n target_vocab_size = len(target_letter_to_int)\n dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))\n dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)\n\n # 2. Construct the decoder cell\n def make_cell(rnn_size):\n dec_cell = tf.contrib.rnn.LSTMCell(rnn_size,\n initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))\n return dec_cell\n\n dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])\n \n # 3. Dense layer to translate the decoder's output at each time \n # step into a choice from the target vocabulary\n output_layer = Dense(target_vocab_size,\n kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1))\n\n\n # 4. Set up a training decoder and an inference decoder\n # Training Decoder\n with tf.variable_scope(\"decode\"):\n\n # Helper for the training process. 
Used by BasicDecoder to read inputs.\n training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input,\n sequence_length=target_sequence_length,\n time_major=False)\n \n \n # Basic decoder\n training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,\n training_helper,\n enc_state,\n output_layer) \n \n # Perform dynamic decoding using the decoder\n training_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(training_decoder,\n impute_finished=True,\n maximum_iterations=max_target_sequence_length)\n # 5. Inference Decoder\n # Reuses the same parameters trained by the training process\n with tf.variable_scope(\"decode\", reuse=True):\n start_tokens = tf.tile(tf.constant([target_letter_to_int['<GO>']], dtype=tf.int32), [batch_size], name='start_tokens')\n\n # Helper for the inference process.\n inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings,\n start_tokens,\n target_letter_to_int['<EOS>'])\n\n # Basic decoder\n inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,\n inference_helper,\n enc_state,\n output_layer)\n \n # Perform dynamic decoding using the decoder\n inference_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder,\n impute_finished=True,\n maximum_iterations=max_target_sequence_length)\n \n\n \n return training_decoder_output, inference_decoder_output\n\"\"\"\n\n\ndef decoding_layer(dec_input, encoder_state,\n target_sequence_length, max_target_sequence_length,\n rnn_size,\n num_layers, target_vocab_to_int, target_vocab_size,\n batch_size, keep_prob, decoding_embedding_size):\n \"\"\"\n Create decoding layer\n :param dec_input: Decoder input\n :param encoder_state: Encoder state\n :param target_sequence_length: The lengths of each sequence in the target batch\n :param max_target_sequence_length: Maximum length of target sequences\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :param target_vocab_size: Size of target vocabulary\n :param batch_size: The size of the batch\n :param keep_prob: Dropout keep probability\n :param decoding_embedding_size: Decoding embedding size\n :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)\n \"\"\"\n # TODO: Implement Function\n \n \n # 1. Decoder Embedding\n #NameError: name 'target_letter_to_int' is not defined\n #target_vocab_size = len(target_letter_to_int) # already param\n \n dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))\n dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)\n\n # 2. Construct the decoder cell\n def make_cell(rnn_size):\n dec_cell = tf.contrib.rnn.LSTMCell(rnn_size,\n initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))\n return dec_cell\n\n dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])\n \n # 3. Dense layer to translate the decoder's output at each time \n # step into a choice from the target vocabulary\n output_layer = Dense(target_vocab_size,\n kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1))\n\n\n # 4. Set up a training decoder and an inference decoder\n # Training Decoder\n with tf.variable_scope(\"decode\"):\n\n # Helper for the training process. 
Used by BasicDecoder to read inputs.\n training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input,\n sequence_length=target_sequence_length,\n time_major=False)\n \n \n # Basic decoder\n #NameError: name 'enc_state' is not defined\n training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,\n training_helper,\n encoder_state,\n output_layer) \n \n # Perform dynamic decoding using the decoder\n training_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(training_decoder,\n impute_finished=True,\n maximum_iterations=max_target_sequence_length)\n # 5. Inference Decoder\n # Reuses the same parameters trained by the training process\n with tf.variable_scope(\"decode\", reuse=True):\n #NameError: name 'target_letter_to_int' is not defined\n #target_vocab_to_int is the closest equivalent\n start_tokens = tf.tile(tf.constant([target_vocab_to_int['<GO>']], dtype=tf.int32), [batch_size], name='start_tokens')\n\n # Helper for the inference process.\n inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings,\n start_tokens,\n target_vocab_to_int['<EOS>'])\n\n # Basic decoder\n #NameError: name 'enc_state' is not defined\n inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,\n inference_helper,\n encoder_state,\n output_layer)\n \n # Perform dynamic decoding using the decoder\n inference_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder,\n impute_finished=True,\n maximum_iterations=max_target_sequence_length)\n \n\n \n return training_decoder_output, inference_decoder_output\n \n #return None, None\n\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer(decoding_layer)", "Build the Neural Network\nApply the functions you implemented above to:\n\nEncode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).\nProcess target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.\nDecode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.", "def seq2seq_model(input_data, target_data, keep_prob, batch_size,\n source_sequence_length, target_sequence_length,\n max_target_sentence_length,\n source_vocab_size, target_vocab_size,\n enc_embedding_size, dec_embedding_size,\n rnn_size, num_layers, target_vocab_to_int):\n \"\"\"\n Build the Sequence-to-Sequence part of the neural network\n :param input_data: Input placeholder\n :param target_data: Target placeholder\n :param keep_prob: Dropout keep probability placeholder\n :param batch_size: Batch Size\n :param source_sequence_length: Sequence Lengths of source sequences in the batch\n :param target_sequence_length: Sequence Lengths of target sequences in the batch\n :param source_vocab_size: Source vocabulary size\n :param target_vocab_size: Target vocabulary size\n :param enc_embedding_size: Decoder embedding size\n :param dec_embedding_size: Encoder embedding size\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)\n \"\"\"\n # done: Implement Function\n \n #ENcode\n \n \n RNN_output, RNN_state= encoding_layer(input_data, rnn_size, num_layers, keep_prob, 
source_sequence_length, source_vocab_size, enc_embedding_size)\n \n # Process the target data so it is ready for the decoder\n dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)\n \n # Decode the encoded input\n training_decoder_output, inference_decoder_output = decoding_layer(dec_input, RNN_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) \n \n return training_decoder_output, inference_decoder_output\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_seq2seq_model(seq2seq_model)", "Neural Network Training\nHyperparameters\nTune the following parameters:\n\nSet epochs to the number of epochs.\nSet batch_size to the batch size.\nSet rnn_size to the size of the RNNs.\nSet num_layers to the number of layers.\nSet encoding_embedding_size to the size of the embedding for the encoder.\nSet decoding_embedding_size to the size of the embedding for the decoder.\nSet learning_rate to the learning rate.\nSet keep_probability to the Dropout keep probability.\nSet display_step to state how many steps between each debug output statement", "# Hyperparameters are expected to be in a similar range to those of the seq2seq lesson\n\n\n# Number of Epochs\nepochs = 16\n# Batch Size\nbatch_size = 256\n# RNN Size\nrnn_size = 50\n# Number of Layers\nnum_layers = 2\n# Embedding Size\nencoding_embedding_size = 256\ndecoding_embedding_size = 256\n# Learning Rate\nlearning_rate = 0.01\n# Dropout Keep Probability\nkeep_probability = 0.75 # reasoning: keep more than half the activations, but still drop enough that the network is regularized\ndisplay_step = 32", "Build the Graph\nBuild the graph using the neural network you implemented.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nsave_path = 'checkpoints/dev'\n(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()\nmax_target_sentence_length = max([len(sentence) for sentence in source_int_text])\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()\n\n #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')\n input_shape = tf.shape(input_data)\n\n train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),\n targets,\n keep_prob,\n batch_size,\n source_sequence_length,\n target_sequence_length,\n max_target_sequence_length,\n len(source_vocab_to_int),\n len(target_vocab_to_int),\n encoding_embedding_size,\n decoding_embedding_size,\n rnn_size,\n num_layers,\n target_vocab_to_int)\n\n\n training_logits = tf.identity(train_logits.rnn_output, name='logits')\n inference_logits = tf.identity(inference_logits.sample_id, name='predictions')\n\n masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')\n\n with tf.name_scope(\"optimization\"):\n # Loss function\n cost = tf.contrib.seq2seq.sequence_loss(\n training_logits,\n targets,\n masks)\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)\n", "Batch and pad the source and target sequences", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\ndef 
pad_sentence_batch(sentence_batch, pad_int):\n \"\"\"Pad sentences with <PAD> so that each sentence of a batch has the same length\"\"\"\n max_sentence = max([len(sentence) for sentence in sentence_batch])\n return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]\n\n\ndef get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):\n \"\"\"Batch targets, sources, and the lengths of their sentences together\"\"\"\n for batch_i in range(0, len(sources)//batch_size):\n start_i = batch_i * batch_size\n\n # Slice the right amount for the batch\n sources_batch = sources[start_i:start_i + batch_size]\n targets_batch = targets[start_i:start_i + batch_size]\n\n # Pad\n pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))\n pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))\n\n # Need the lengths for the _lengths parameters\n pad_targets_lengths = []\n for target in pad_targets_batch:\n pad_targets_lengths.append(len(target))\n\n pad_source_lengths = []\n for source in pad_sources_batch:\n pad_source_lengths.append(len(source))\n\n yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths\n", "Train\nTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\ndef get_accuracy(target, logits):\n \"\"\"\n Calculate accuracy\n \"\"\"\n max_seq = max(target.shape[1], logits.shape[1])\n if max_seq - target.shape[1]:\n target = np.pad(\n target,\n [(0,0),(0,max_seq - target.shape[1])],\n 'constant')\n if max_seq - logits.shape[1]:\n logits = np.pad(\n logits,\n [(0,0),(0,max_seq - logits.shape[1])],\n 'constant')\n\n return np.mean(np.equal(target, logits))\n\n# Split data to training and validation sets\ntrain_source = source_int_text[batch_size:]\ntrain_target = target_int_text[batch_size:]\nvalid_source = source_int_text[:batch_size]\nvalid_target = target_int_text[:batch_size]\n(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,\n valid_target,\n batch_size,\n source_vocab_to_int['<PAD>'],\n target_vocab_to_int['<PAD>'])) \nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(epochs):\n for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(\n get_batches(train_source, train_target, batch_size,\n source_vocab_to_int['<PAD>'],\n target_vocab_to_int['<PAD>'])):\n\n _, loss = sess.run(\n [train_op, cost],\n {input_data: source_batch,\n targets: target_batch,\n lr: learning_rate,\n target_sequence_length: targets_lengths,\n source_sequence_length: sources_lengths,\n keep_prob: keep_probability})\n\n\n if batch_i % display_step == 0 and batch_i > 0:\n\n\n batch_train_logits = sess.run(\n inference_logits,\n {input_data: source_batch,\n source_sequence_length: sources_lengths,\n target_sequence_length: targets_lengths,\n keep_prob: 1.0})\n\n\n batch_valid_logits = sess.run(\n inference_logits,\n {input_data: valid_sources_batch,\n source_sequence_length: valid_sources_lengths,\n target_sequence_length: valid_targets_lengths,\n keep_prob: 1.0})\n\n train_acc = get_accuracy(target_batch, batch_train_logits)\n\n valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)\n\n print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: 
{:>6.4f}, Loss: {:>6.4f}'\n .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_path)\n print('Model Trained and Saved')", "Save Parameters\nSave the batch_size and save_path parameters for inference.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params(save_path)", "Checkpoint", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()\nload_path = helper.load_params()", "Sentence to Sequence\nTo feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.\n\nConvert the sentence to lowercase\nConvert words into ids using vocab_to_int\nConvert words not in the vocabulary to the &lt;UNK&gt; word id.", "def sentence_to_seq(sentence, vocab_to_int):\n \"\"\"\n Convert a sentence to a sequence of ids\n :param sentence: String\n :param vocab_to_int: Dictionary to go from the words to an id\n :return: List of word ids\n \"\"\"\n # Lowercase the sentence and map each word to its id, falling back to <UNK> for out-of-vocabulary words\n return [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.lower().split()]\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_sentence_to_seq(sentence_to_seq)", "Translate\nThis will translate translate_sentence from English to French.", "#translate_sentence = 'he saw a old yellow truck .'\n#Why does this have a typo in it? It should be \"He saw AN old, yellow truck.\"\n\ntranslate_sentence = \"There once was a man from Nantucket.\"\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\ntranslate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)\n\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(load_path + '.meta')\n loader.restore(sess, load_path)\n\n input_data = loaded_graph.get_tensor_by_name('input:0')\n logits = loaded_graph.get_tensor_by_name('predictions:0')\n target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')\n source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')\n keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')\n\n translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,\n target_sequence_length: [len(translate_sentence)*2]*batch_size,\n source_sequence_length: [len(translate_sentence)]*batch_size,\n keep_prob: 1.0})[0]\n\nprint('Input')\nprint(' Word Ids: {}'.format([i for i in translate_sentence]))\nprint(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))\n\nprint('\\nPrediction')\nprint(' Word Ids: {}'.format([i for i in translate_logits]))\nprint(' French Words: {}'.format(\" \".join([target_int_to_vocab[i] for i in translate_logits])))\n", "Imperfect Translation\nYou might notice that some sentences translate better than others. Since the dataset you're using only has a vocabulary of 227 English words of the thousands that you use, you're only going to see good results using these words. For this project, you don't need a perfect translation. However, if you want to create a better translation model, you'll need better data.\nYou can train on the WMT10 French-English corpus. This dataset has a larger vocabulary and is richer in the topics discussed. 
However, this will take you days to train, so make sure you have a GPU and that the neural network is performing well on the dataset we provided. Just make sure you play with the WMT10 corpus after you've submitted this project.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_language_translation.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jobovy/simple-m2m
py/HOM2M_SN.ipynb
mit
[ "import os, os.path\nimport numpy\nfrom astropy.io import ascii\n%pylab inline\nimport matplotlib as mpl\nmpl.style.use('classic')\nfrom matplotlib.ticker import NullFormatter\nfrom galpy.util import bovy_plot\nimport seaborn as sns\nnumpy.random.seed(4)", "The data\nThe gas data\n$\\mathbf{H_2}$\nMcKee et al. (2015) take their spatial distribution from Dame et al. (1987). Dame et al. estimate the FWHM of the distribution of molecular clouds and then assume the distribution is Gaussian, this gives them a Gaussian with a dispersion of 74 pc, or as McKee et al. state it: $\\rho(z) \\propto \\exp(-[z/105\\,\\mathrm{pc}]^2)$.\nWe can check this using the recent molecular cloud catalog of Miville-Deschênes et al. (2017). Let's download this catalog and plot the vertical distribution of all clouds at $8.25\\,\\mathrm{kpc} < R < 8.75\\,\\mathrm{kpc}$ (they seem to assume $R_0 = 8.5\\,\\mathrm{kpc}$:", "cloud_name= 'apjaa4dfdt1_mrt.txt'\nif not os.path.exists(cloud_name):\n !wget http://iopscience.iop.org/0004-637X/834/1/57/suppdata/apjaa4dfdt1_mrt.txt\ncloud_data= ascii.read(cloud_name,format='cds')\n# Compute distsance and height z based on whether near of far kinematic distance is more likely\nd_cloud= cloud_data['Dn']\nd_cloud[cloud_data['INF'] == 1]= cloud_data['Df'][cloud_data['INF'] == 1]\nz_cloud= cloud_data['zn']\nz_cloud[cloud_data['INF'] == 1]= cloud_data['zf'][cloud_data['INF'] == 1]\n\nfigsize(7,6)\nbovy_plot.bovy_print(axes_labelsize=17.,text_fontsize=12.,xtick_labelsize=15.,ytick_labelsize=15.)\nzbins_h2= numpy.arange(-0.2125,0.225,0.025)\nzbinsp_h2= 0.5*(numpy.roll(zbins_h2,-1)+zbins_h2)[:-1]\nd,e,_= hist(z_cloud[(cloud_data['Rgal'] > 8.25)*(cloud_data['Rgal'] < 8.75)],\n bins=zbins_h2,range=[-.5,.5],normed=True,histtype='step',lw=2.,color='k')\ngca().set_yscale('log')\nxs= numpy.linspace(-0.5,0.5,1001)\nplot(xs,1./numpy.sqrt(2.*numpy.pi)/.074*numpy.exp(-0.5*xs**2./.074**2.),\n label=r'$\\rho(z) \\propto \\exp(-[z/105\\,\\mathrm{pc}]^2)$')\nplot(xs,1./0.1*numpy.exp(-numpy.fabs(xs)/.05),\n label=r'$\\rho(z) \\propto \\exp(-|z|/50\\,\\mathrm{pc})$')\nplot(xs,1./0.2*1./numpy.cosh(-xs/.1)**2.,\n label=r'$\\rho(z) \\propto \\sech^2(-|z|/100\\,\\mathrm{pc})$')\nlegend(loc='upper left',frameon=False,fontsize=15.)\nylim(0.1,120.)\nxlabel(r'$z\\,(\\mathrm{kpc})$')", "All three analytic forms capture parts of the distribution well and it's not entirely clear which would be the best fit (taking into account completeness etc.). We will estimate the uncertainty in the profile as the standard deviation between the observed number of clouds in each bin and the predictions from all three density profiles. 
We further scale the counts such that the integrated surface density is $1\,M_\odot\,\mathrm{pc}^{-2}$ as determined by McKee et al.", "counts_h2= d\necounts_h2= numpy.std([d,\n                       1./numpy.sqrt(2.*numpy.pi)/.074*numpy.exp(-0.5*zbinsp_h2**2./.074**2.),\n                       1./0.1*numpy.exp(-numpy.fabs(zbinsp_h2)/.05),\n                       1./0.2*1./numpy.cosh(-zbinsp_h2/.1)**2.],axis=0)\necounts_h2/= numpy.sum(counts_h2)*(zbins_h2[1]-zbins_h2[0])*1000./1.0\ncounts_h2/= numpy.sum(counts_h2)*(zbins_h2[1]-zbins_h2[0])*1000./1.0\ncounts_h2[numpy.fabs(zbinsp_h2) > 0.21]= numpy.nan\necounts_h2[numpy.fabs(zbinsp_h2) > 0.21]= numpy.nan\n\nfigsize(6,4.5)\nbovy_plot.bovy_plot(1000.*zbinsp_h2,\n                    counts_h2,'ko',semilogy=True,\n                    lw=2.,zorder=2,\n                    xlabel=r'$Z\\,(\\mathrm{pc})$',\n                    ylabel=r'$\\rho_{H_2}(z)\\,(M_\\odot\\,\\mathrm{pc}^{-3})$',\n                    xrange=[-420,420],yrange=[0.1/1000.,50./1000.])\nerrorbar(1000.*zbinsp_h2,counts_h2,yerr=ecounts_h2,color='k',marker='o',\n         ls='None')\nbovy_plot.bovy_text(r'$\\Sigma_{H_2} = %.1f \\pm %.1f\\,M_\\odot\\,\\mathrm{pc}^{-2}$' \\\n                    % (numpy.nansum(counts_h2)*(zbinsp_h2[1]-zbinsp_h2[0])*1000.,\n                       numpy.sqrt(numpy.nansum(ecounts_h2**2.))*(zbinsp_h2[1]-zbinsp_h2[0])*1000.),\n                    top_left=True,size=16.)", "The uncertainty in the surface density is 10%. There is an additional, larger uncertainty of about 30% due to the uncertainty in the CO-to-$H_2$ conversion, which we will apply separately (because it is a systematic).\nHI\nMcKee et al. refer to Dickey & Lockman (1990) for the vertical distribution of HI. This model has three components: two Gaussian components, with mid-plane normalizations of 0.395 and 0.107 and FWHMs of 212 pc and 530 pc; the third component is an exponential with mid-plane normalization 0.064 and a scale height of 403 pc.\nMcKee et al. also refer to Kalberla & Dedes (2008) for an alternative model made up of two exponentials. The fit here actually seems to go back to Kalberla & Kelp (1998).\nWe also consider a model based on the data from Schmidt (1957), taking their main Gaussian component and adding a small, wider Gaussian to model the tails in their data.\nWe define functions for all three of these HI profiles and compare them to each other:", "def dickeylockman(z):\n    return (0.395*numpy.exp(-z**2.*4.*numpy.log(2.)/0.212**2.)\\\n            +0.107*numpy.exp(-z**2.*4.*numpy.log(2.)/0.530**2.)\\\n            +0.064*numpy.exp(-numpy.fabs(z)/0.403))/0.566\ndef kalberla(z):\n    return (0.5*numpy.exp(-numpy.fabs(z)/0.15)+0.19*numpy.exp(-numpy.fabs(z)/0.5))/0.69\ndef schmidt(z):\n    return (numpy.exp(-z**2.*4.*numpy.log(2.)/0.220**2.)\\\n            +0.2*numpy.exp(-z**2.*4.*numpy.log(2.)/0.520**2.))/1.2\n\nfigsize(6,4)\nxs= numpy.linspace(-1.,1.,1001)\nbovy_plot.bovy_plot(xs,dickeylockman(xs),label=r'$\\mathrm{Dickey\\ \\&\\ Lockman\\ (1990)}$',\n                    xlabel=r'$z\\,(\\mathrm{kpc})$',\n                    xrange=[-1.,1.],\n                    yrange=[0.,1.3])\nplot(xs,kalberla(xs)*0.69/0.566/1.2,label=r'$\\mathrm{Kalberla}$')\nplot(xs,schmidt(xs),label=r'$\\mathrm{Schmidt}$')\nlegend(loc='upper left',fontsize=16.)", "Now we will take the Dickey & Lockman model as the fiducial model and quantify the uncertainty as the spread between the three models. We further add an overall 10% uncertainty in the counts and an uncertainty that decreases quadratically with height up to 600 pc, because observations are most confused and difficult close to the mid-plane. 
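Concretely, writing $\rho_{\mathrm{DL}}$, $\rho_{\mathrm{S}}$, and $\rho_{\mathrm{K}}$ for the Dickey & Lockman, Schmidt, and Kalberla profiles, the error model implemented in the next code cell is (with $z$ in kpc)\n$$\sigma_{\mathrm{HI}}(z) = \mathrm{std}\left(\{\rho_{\mathrm{DL}},\rho_{\mathrm{S}},\rho_{\mathrm{K}}\}\right)(z) + 0.1\,\rho_{\mathrm{DL}}(z) + 0.2\,\left(0.6 - |z|\right)^2\,.$$ 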
There's quite a bit of data (e.g., Schmidt 1957), so we'll bin the data into our normal bins:", "zbins_hi= numpy.arange(-0.8125,0.825,0.025)\nzbinsp_hi= 0.5*(numpy.roll(zbins_hi,-1)+zbins_hi)[:-1]\ncounts_hi= dickeylockman(zbinsp_hi)\necounts_hi= numpy.std([dickeylockman(zbinsp_hi),schmidt(zbinsp_hi),kalberla(zbinsp_hi)],axis=0)\\\n    +0.1*counts_hi+(0.6-numpy.fabs(zbinsp_hi))**2.*0.2\ncounts_hi+= numpy.random.normal(size=len(counts_hi))*ecounts_hi\n# Normalize\nnorm_xs= numpy.linspace(0.,1.1,1001)\nsfmass= numpy.sum(dickeylockman(norm_xs))*2.*(norm_xs[1]-norm_xs[0])*1000.\ncounts_hi/= sfmass/10.7\necounts_hi/= sfmass/10.7\n\nfigsize(6,4.5)\nbovy_plot.bovy_plot(1000.*zbinsp_hi,\n                    counts_hi,'ko',semilogy=True,\n                    lw=2.,zorder=2,\n                    xlabel=r'$Z\\,(\\mathrm{pc})$',\n                    ylabel=r'$\\rho_{\\mathrm{HI}}(z)\\,(M_\\odot\\,\\mathrm{pc}^{-3})$',\n                    xrange=[-620,620],yrange=[1./1000.,80./1000.])\nerrorbar(1000.*zbinsp_hi,counts_hi,yerr=ecounts_hi,color='k',marker='o',\n         ls='None')\nbovy_plot.bovy_text(r'$\\Sigma_{\\mathrm{HI}\\ \\lesssim\\ 600\\,\\mathrm{pc}} = %.1f \\pm %.1f\\,M_\\odot\\,\\mathrm{pc}^{-2}$' \\\n                    % (numpy.nansum(counts_hi)*(zbinsp_hi[1]-zbinsp_hi[0])*1000.,\n                       numpy.sqrt(numpy.nansum(ecounts_hi**2.))*(zbinsp_hi[1]-zbinsp_hi[0])*1000.),\n                    top_left=True,size=16.)", "This gives a not unreasonable looking uncertainty, although the uncertainty on the total surface density remains small. We will allow for an additional, systematic offset later.\nHII\nWe follow McKee et al. and consider the models fit to pulsar dispersion measures discussed by Schnitzeler (2012). In particular, we consider their simple single-disk models: (a) the exponential disk of Berkhuijsen & Müller with a scale height of 930 pc, (b) the exponential disk of Gaensler et al. (2008) which has a scale height of 1.83 kpc, and (c) the exponential disk fitted by Schnitzeler, which has a scale height of 1.59 kpc. These three models look like this:", "def berkhuijsen(z):\n    return 21.7/0.93*numpy.exp(-numpy.fabs(z)/0.93)/25.6\ndef gaensler(z):\n    return 25.6/1.83*numpy.exp(-numpy.fabs(z)/1.83)/25.6\ndef schnitzeler(z):\n    return 24.4/1.59*numpy.exp(-numpy.fabs(z)/1.59)/25.6\n\nfigsize(6,4)\nxs= numpy.linspace(-3.,3.,1001)\nbovy_plot.bovy_plot(xs,berkhuijsen(xs),\n                    label=r'$\\mathrm{Berkhuijsen\\ \\&\\ Mueller}$',\n                    xlabel=r'$z\\,(\\mathrm{kpc})$',\n                    xrange=[-3.,3.],\n                    yrange=[0.,1.1])\nplot(xs,gaensler(xs)*0.69/0.566/1.2,label=r'$\\mathrm{Gaensler}$')\nplot(xs,schnitzeler(xs),label=r'$\\mathrm{Schnitzeler}$')\nlegend(loc='upper left',fontsize=16.)", "The Berkhuijsen & Müller model is quite different from the others, especially at low $z$. The data on pulsar dispersion measures appear to be quite sparse near the plane, so the range spanned by these three models does not seem unreasonable, and we will conservatively take the spread as an estimate of the error. 
We bin the profile in 100 pc bins (based on the available data in Schnitzeler) and scale the uncertainties up somewhat further to create a total column density with a similar uncertainty as in McKee et al.", "zbins_hii= numpy.arange(-2.05,2.15,0.1)\nzbinsp_hii= 0.5*(numpy.roll(zbins_hii,-1)+zbins_hii)[:-1]\ncounts_hii= schnitzeler(zbinsp_hii)\necounts_hii= numpy.std([schnitzeler(zbinsp_hii),gaensler(zbinsp_hii),berkhuijsen(zbinsp_hii)],axis=0)*1.5\necounts_hii[numpy.fabs(zbinsp_hii) > 0.5]*= 2.5\ncounts_hii+= numpy.random.normal(size=len(counts_hii))*ecounts_hii\n# Normalize\nnorm_xs= numpy.linspace(0.,5.1,1001)\nsfmass= numpy.sum(schnitzeler(norm_xs))*2.*(norm_xs[1]-norm_xs[0])*1000.\ncounts_hii/= sfmass/1.8\necounts_hii/= sfmass/1.8\n\nfigsize(6,4.5)\nbovy_plot.bovy_plot(1000.*zbinsp_hii,\n counts_hii,'ko',semilogy=True,\n lw=2.,zorder=2,\n xlabel=r'$Z\\,(\\mathrm{pc})$',\n ylabel=r'$\\rho_{\\mathrm{HII}}(z)\\,(M_\\odot\\,\\mathrm{pc}^{-3})$',\n xrange=[-2120,2120],yrange=[0.05/1000.,2./1000.])\nerrorbar(1000.*zbinsp_hii,counts_hii,yerr=ecounts_hii,color='k',marker='o',\n ls='None')\nbovy_plot.bovy_text(r'$\\Sigma_{\\mathrm{HII}\\ \\lesssim\\ 2\\,\\mathrm{kpc}} = %.1f \\pm %.1f\\,M_\\odot\\,\\mathrm{pc}^{-2}$' \\\n % (numpy.nansum(counts_hii)*(zbinsp_hii[1]-zbinsp_hii[0])*1000.,\n numpy.sqrt(numpy.nansum(ecounts_hii**2.))*(zbinsp_hii[1]-zbinsp_hii[0])*1000.),\n top_left=True,size=16.)", "This profile correctly conveys that the overall profile is quite flat and leads to a reasonable uncertainty on the total column. \nSummary of gas data\nThe following plot summarizes the gas data:", "figsize(6,8.5)\nmarker='o'\nms= 6.\nfor ii,(xrange,yrange,xlabel) in enumerate(zip([[-620.,620.],[-2120,2120]],\n [[10.**-4.,10.**-1.],[10.**-4.,10.**-1.]],\n [None,r'$Z\\,(\\mathrm{pc})$'])):\n subplot(2,1,ii+1)\n # H2\n bovy_plot.bovy_plot(1000.*zbinsp_h2,\n counts_h2,marker,semilogy=True,gcf=True,ms=ms,\n lw=2.,zorder=2,color=sns.color_palette()[0],\n xlabel=xlabel,\n ylabel=r'$\\rho(z)\\,(M_\\odot\\,\\mathrm{pc}^{-3})$',\n xrange=xrange,yrange=yrange)\n errorbar(1000.*zbinsp_h2,counts_h2,yerr=ecounts_h2,marker=marker,color=sns.color_palette()[0],\n ls='None',ms=ms,label=r'$H_2$')\n #HI\n bovy_plot.bovy_plot(1000.*zbinsp_hi,\n counts_hi,marker,semilogy=True,ms=ms,\n color=sns.color_palette()[1],\n lw=2.,zorder=2,overplot=True)\n errorbar(1000.*zbinsp_hi,counts_hi,yerr=ecounts_hi,marker=marker,color=sns.color_palette()[1],\n ls='None',ms=ms,label=r'$\\mathrm{HI}$')\n #HII\n bovy_plot.bovy_plot(1000.*zbinsp_hii,\n counts_hii,marker,semilogy=True,ms=ms,\n color=sns.color_palette()[2],\n lw=2.,zorder=2,overplot=True)\n errorbar(1000.*zbinsp_hii,counts_hii,yerr=ecounts_hii,marker=marker,color=sns.color_palette()[2],\n ls='None',ms=ms,label=r'$\\mathrm{HII}$')\n if ii == 0:\n bovy_plot.bovy_text(r'$\\rho_{\\mathrm{ISM}}(z=0) = %.3f\\pm%.3f\\,M_\\odot\\,\\mathrm{pc}^{-3}$' \\\n % (counts_h2[len(zbinsp_h2)//2]\n +counts_hi[len(zbinsp_hi)//2]\n +counts_hii[len(zbinsp_hii)//2]\\\n ,numpy.sqrt(ecounts_h2[len(zbinsp_h2)//2]**2.\\\n +ecounts_hi[len(zbinsp_hi)//2]**2.+\n ecounts_hii[len(zbinsp_hii)//2]**2.)),\n top_left=True,size=16.)\n elif ii == 1:\n bovy_plot.bovy_text(r'$\\Sigma_{\\mathrm{ISM}}(|z| \\leq 1.1\\,\\mathrm{kpc}) = %.1f\\pm%.1f\\,M_\\odot\\,\\mathrm{pc}^{-2}$' \\\n % (1000.*(numpy.nansum(counts_h2)*(zbins_h2[1]-zbins_h2[0])\n +numpy.nansum(counts_hi)*(zbins_hi[1]-zbins_hi[0])\n +numpy.nansum(counts_hii)*(zbins_hii[1]-zbins_hii[0])),\n 
1000.*numpy.sqrt(numpy.nansum(ecounts_h2**2.)*(zbins_h2[1]-zbins_h2[0])**2.\n +numpy.nansum(ecounts_hi**2.)*(zbins_hi[1]-zbins_hi[0])**2.\n +numpy.nansum(ecounts_hii**2.)*(zbins_hii[1]-zbins_hii[0])**2.)),\n top_left=True,size=16.)\nlegend(loc='center left',fontsize=16.)\ntight_layout()", "The total uncertainties are small because they do not include the systematic 30% uncertainty in the $H_2$ and the systematic 15% uncertainty in the HI." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
timcera/tsgettoolbox
notebooks/tsgettoolbox-nwis-api.ipynb
bsd-3-clause
[ "tsgettoolbox and tstoolbox - Python Programming Interface\n'tsgettoolbox nwis ...': Download data from the National Water Information System (NWIS)\nThis notebook is to illustrate the Python API usage for 'tsgettoolbox' to download and work with data from the National Water Information System (NWIS). There is a different notebook to do the same things from the command line called tsgettoolbox-nwis-cli. The CLI version of tsgettoolbox can be used from other languages that have the ability to make a system call.", "%matplotlib inline\nfrom tsgettoolbox import tsgettoolbox", "Let's say that I want flow (parameterCd=00060) for site '02325000'. All of the tsgettoolbox functions create a pandas DataFrame.", "df = tsgettoolbox.nwis_dv(sites=\"02325000\", startDT=\"2000-01-01\", parameterCd=\"00060\")\n\ndf.head() # The .head() function gives the first 5 values of the time-series", "'tstoolbox ...': Process data using 'tstoolbox'\nNow lets use \"tstoolbox\" to plot the time-series. The 'input_ts' option is used to read in the time-series from the DataFrame.", "from tstoolbox import tstoolbox\n\ntstoolbox.plot(input_ts=df, ofilename=\"plot_api.png\")", "'tstoolbox plot' has many options that can be used to modify the plot.", "tstoolbox.plot(\n input_ts=df,\n ofilename=\"flow_api.png\",\n ytitle=\"Flow (cfs)\",\n title=\"02325000: FENHOLLOWAY RIVER NEAR PERRY, FLA\",\n legend=False,\n)", "", "mdf = tstoolbox.aggregate(input_ts=df, agg_interval=\"M\", statistic=\"mean\")\n\ntstoolbox.plot(input_ts=mdf, drawstyle=\"steps-pre\", ofilename=\"flow_api_monthly.png\")", "" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
phoebe-project/phoebe2-docs
2.3/tutorials/datasets.ipynb
gpl-3.0
[ "Datasets\nDatasets tell PHOEBE how and at what times to compute the model. In some cases these will include the actual observational data, and in other cases may only include the times at which you want to compute a synthetic model.\nAdding a dataset - even if it doesn't contain any observational data - is required in order to compute a synthetic model (which will be described in the Compute Tutorial).\nSetup\nLet's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).", "#!pip install -I \"phoebe>=2.3,<2.4\"\n\nimport phoebe\nfrom phoebe import u # units\n\nlogger = phoebe.logger()\n\nb = phoebe.default_binary()", "Adding a Dataset from Arrays\nTo add a dataset, you need to provide the function in\nphoebe.parameters.dataset for the particular type of data you're dealing with, as well\nas any of your \"observed\" arrays.\nThe current available methods include:\n\nlc light curves (tutorial)\nrv radial velocity curves (tutorial)\nlp spectral line profiles (tutorial)\norb orbit/positional data (tutorial)\nmesh discretized mesh of stars (tutorial)\n\nwhich can always be listed via phoebe.list_available_datasets", "phoebe.list_available_datasets()", "Without Observations\nThe simplest case of adding a dataset is when you do not have observational \"data\" and only want to compute a synthetic model. Here all you need to provide is an array of times and information about the type of data and how to compute it.\nHere we'll do just that - we'll add an orbit dataset which will track the positions and velocities of both our 'primary' and 'secondary' stars (by their component tags) at each of the provided times.\nUnlike other datasets, the mesh and orb dataset cannot accept actual observations, so there is no times parameter, only the compute_times and compute_phases parameters. For more details on these, see the Advanced: Compute Times & Phases tutorial.", "b.add_dataset(phoebe.dataset.orb, \n compute_times=phoebe.linspace(0,10,20), \n dataset='orb01', \n component=['primary', 'secondary'])", "Here we used phoebe.linspace. This is essentially just a shortcut to np.linspace, but using nparray to allow these generated arrays to be serialized and stored easier within the Bundle. Other nparray constructor functions available at the top-level of PHOEBE include:\n\nphoebe.arange\nphoebe.invspace\nphoebe.linspace\nphoebe.logspace\nphoebe.geomspace\n\nAny nparray object, list, or numpy array is acceptable as input to FloatArrayParameters.\nb.add_dataset can either take a function or the name of a function in phoebe.parameters.dataset as its first argument. The following line would do the same thing (and we'll pass overwrite=True to avoid the error of overwriting dataset='orb01').", "b.add_dataset('orb', \n compute_times=phoebe.linspace(0,10,20), \n component=['primary', 'secondary'], \n dataset='orb01', \n overwrite=True)", "You may notice that add_dataset does take some time to complete. In the background, the passband is being loaded (when applicable) and many parameters are created and attached to the Bundle.\nIf you do not provide a list of component(s), they will be assumed for you based on the dataset method. 
LCs (light curves) and meshes can only attach at the system level (component=None), for instance, whereas RVs and ORBs can attach for each star.", "b.add_dataset('rv', times=phoebe.linspace(0,10,20), dataset='rv01')\n\nprint(b.filter(qualifier='times', dataset='rv01').components)", "Here we added an RV dataset and can see that it was automatically created for both stars in our system. Under the hood, another entry is created for component='_default'. The default parameters hold the values that will be replicated if a new component is added to the system in the future. In order to see these hidden parameters, you need to pass check_default=False to any filter-type call (and note that '_default' is no longer exposed when calling .components). Also note that for set_value_all, this is automatically set to False.\nSince we did not explicitly state that we only wanted the primary and secondary components, the time array on '_default' is filled as well. If we were then to add a tertiary component, its RVs would automatically be computed because of this replicated time array.", "print(b.filter(qualifier='times', dataset='rv01', check_default=False).components)\n\nprint(b.get('times@_default@rv01', check_default=False))", "With Observations\nLoading datasets with observations is (nearly) as simple. \nPassing arrays to any of the dataset columns will apply it to all of the same components in which the time will be applied (see the 'Without Observations' section above for more details). This makes perfect sense for fluxes in light curves where the time and flux arrays are both at the system level:", "b.add_dataset('lc', times=[0,1], fluxes=[1,0.5], dataset='lc01')\n\nprint(b.get_parameter(qualifier='fluxes', dataset='lc01', context='dataset'))", "For datasets which attach to individual components, however, this isn't always the desired behavior.\nFor a single-lined RV where we only attach to one component, everything is as expected.", "b.add_dataset('rv', \n              times=[0,1], \n              rvs=[-3,3], \n              component='primary', \n              dataset='rv01', \n              overwrite=True)\n\nprint(b.get_parameter(qualifier='rvs', dataset='rv01', context='dataset'))", "However, for a double-lined RV we probably don't want to do the following:", "b.add_dataset('rv', \n              times=[0,0.5,1], \n              rvs=[-3,3], \n              dataset='rv02')\n\nprint(b.filter(qualifier='rvs', dataset='rv02', context='dataset'))", "Instead we want to pass different arrays to the 'rvs@primary' and 'rvs@secondary'. This can be done by explicitly stating the components in a dictionary sent to that argument:", "b.add_dataset('rv', \n              times=[0,0.5,1], \n              rvs={'primary': [-3,3], 'secondary': [4,-4]}, \n              dataset='rv02',\n              overwrite=True)\n\nprint(b.filter(qualifier='rvs', dataset='rv02', context='dataset'))", "Alternatively, you could of course not pass the values while calling add_dataset and instead call the set_value method after and explicitly state the components at that time. 
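For example (a minimal sketch rather than official documentation; 'rv03' is a hypothetical dataset tag, but the set_value call uses the same filter-style keywords shown throughout this tutorial):\n```python\nb.add_dataset('rv', times=[0, 0.5, 1], dataset='rv03')\nb.set_value(qualifier='rvs', dataset='rv03', component='primary', value=[-3, 0, 3])\nb.set_value(qualifier='rvs', dataset='rv03', component='secondary', value=[4, 0, -4])\n``` 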
For more details see the add_dataset API docs.\nPHOEBE doesn't come with any built-in file parsing, but you can use common file parsers such as np.loadtxt or np.genfromtxt to extract arrays from an external data file.\nDataset Types\nFor a full explanation of all related options and Parameter see the respective dataset tutorials:\n\nLight Curves/Fluxes (lc)\nRadial Velocities (rv)\nLine Profiles (lp)\nOrbits (orb)\nMeshes (mesh)\n\nNext\nNext up: let's learn how to compute observables and create our first synthetic model.\nOr see some of these advanced topics:\n\nAdvanced: Datasets (passband options, dealing with phases, removing datasets)\nAdvanced: Compute Times & Phases" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
stable/_downloads/141ddce18e923e8220337b357ba3dc45/ssd_spatial_filters.ipynb
bsd-3-clause
[ "%matplotlib inline", "Compute Spectro-Spatial Decomposition (SSD) spatial filters\nIn this example, we will compute spatial filters for retaining\noscillatory brain activity and down-weighting 1/f background signals\nas proposed by :footcite:NikulinEtAl2011.\nThe idea is to learn spatial filters that separate oscillatory dynamics\nfrom surrounding non-oscillatory noise based on the covariance in the\nfrequency band of interest and the noise covariance based on surrounding\nfrequencies.", "# Author: Denis A. Engemann <denis.engemann@gmail.com>\n# Victoria Peterson <victoriapeterson09@gmail.com>\n# License: BSD-3-Clause\n\nimport matplotlib.pyplot as plt\nimport mne\nfrom mne import Epochs\nfrom mne.datasets.fieldtrip_cmc import data_path\nfrom mne.decoding import SSD", "Define parameters", "fname = data_path() / 'SubjectCMC.ds'\n\n# Prepare data\nraw = mne.io.read_raw_ctf(fname)\nraw.crop(50., 110.).load_data() # crop for memory purposes\nraw.resample(sfreq=250)\n\nraw.pick_types(meg=True, eeg=False, ref_meg=False)\n\nfreqs_sig = 9, 12\nfreqs_noise = 8, 13\n\n\nssd = SSD(info=raw.info,\n reg='oas',\n sort_by_spectral_ratio=False, # False for purpose of example.\n filt_params_signal=dict(l_freq=freqs_sig[0], h_freq=freqs_sig[1],\n l_trans_bandwidth=1, h_trans_bandwidth=1),\n filt_params_noise=dict(l_freq=freqs_noise[0], h_freq=freqs_noise[1],\n l_trans_bandwidth=1, h_trans_bandwidth=1))\nssd.fit(X=raw.get_data())", "Let's investigate spatial filter with max power ratio.\nWe will first inspect the topographies.\nAccording to Nikulin et al. 2011 this is done by either inverting the filters\n(W^{-1}) or by multiplying the noise cov with the filters Eq. (22) (C_n W)^t.\nWe rely on the inversion approach here.", "pattern = mne.EvokedArray(data=ssd.patterns_[:4].T,\n info=ssd.info)\npattern.plot_topomap(units=dict(mag='A.U.'), time_format='')\n\n# The topographies suggest that we picked up a parietal alpha generator.\n\n# Transform\nssd_sources = ssd.transform(X=raw.get_data())\n\n# Get psd of SSD-filtered signals.\npsd, freqs = mne.time_frequency.psd_array_welch(\n ssd_sources, sfreq=raw.info['sfreq'], n_fft=4096)\n\n# Get spec_ratio information (already sorted).\n# Note that this is not necessary if sort_by_spectral_ratio=True (default).\nspec_ratio, sorter = ssd.get_spectral_ratio(ssd_sources)\n\n# Plot spectral ratio (see Eq. 24 in Nikulin 2011).\nfig, ax = plt.subplots(1)\nax.plot(spec_ratio, color='black')\nax.plot(spec_ratio[sorter], color='orange', label='sorted eigenvalues')\nax.set_xlabel(\"Eigenvalue Index\")\nax.set_ylabel(r\"Spectral Ratio $\\frac{P_f}{P_{sf}}$\")\nax.legend()\nax.axhline(1, linestyle='--')\n\n# We can see that the initial sorting based on the eigenvalues\n# was already quite good. However, when using few components only\n# the sorting might make a difference.", "Let's also look at the power spectrum of that source and compare it to\nto the power spectrum of the source with lowest SNR.", "below50 = freqs < 50\n# for highlighting the freq. 
band of interest\nbandfilt = (freqs_sig[0] <= freqs) & (freqs <= freqs_sig[1])\nfig, ax = plt.subplots(1)\nax.loglog(freqs[below50], psd[0, below50], label='max SNR')\nax.loglog(freqs[below50], psd[-1, below50], label='min SNR')\nax.loglog(freqs[below50], psd[:, below50].mean(axis=0), label='mean')\nax.fill_between(freqs[bandfilt], 0, 10000, color='green', alpha=0.15)\nax.set_xlabel('log(frequency)')\nax.set_ylabel('log(power)')\nax.legend()\n\n# We can clearly see that the selected component enjoys an SNR that is\n# way above the average power spectrum.", "Epoched data\nAlthough we suggest to use this method before epoching, there might be some\nsituations in which data can only be treated by chunks.", "# Build epochs as sliding windows over the continuous raw file.\nevents = mne.make_fixed_length_events(raw, id=1, duration=5.0, overlap=0.0)\n\n# Epoch length is 5 seconds.\nepochs = Epochs(raw, events, tmin=0., tmax=5,\n baseline=None, preload=True)\n\nssd_epochs = SSD(info=epochs.info,\n reg='oas',\n filt_params_signal=dict(l_freq=freqs_sig[0],\n h_freq=freqs_sig[1],\n l_trans_bandwidth=1,\n h_trans_bandwidth=1),\n filt_params_noise=dict(l_freq=freqs_noise[0],\n h_freq=freqs_noise[1],\n l_trans_bandwidth=1,\n h_trans_bandwidth=1))\nssd_epochs.fit(X=epochs.get_data())\n\n# Plot topographies.\npattern_epochs = mne.EvokedArray(data=ssd_epochs.patterns_[:4].T,\n info=ssd_epochs.info)\npattern_epochs.plot_topomap(units=dict(mag='A.U.'), time_format='')", "References\n.. footbibliography::" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
yuhao0531/dmc
notebooks/week-6/02-using a pre-trained model with Keras.ipynb
apache-2.0
[ "Lab 6.2 - Using a pre-trained model with Keras\nIn this section of the lab, we will load the model we trained in the previous section, along with the training data and mapping dictionaries, and use it to generate longer sequences of text.\nLet's start by importing the libraries we will be using:", "import numpy as np\nfrom keras.models import Sequential\nfrom keras.layers import Dense\nfrom keras.layers import Dropout\nfrom keras.layers import LSTM\nfrom keras.callbacks import ModelCheckpoint\nfrom keras.utils import np_utils\n\nimport sys\nimport re\nimport pickle", "Next, we will import the data we saved previously using the pickle library.", "pickle_file = '-basic_data.pickle'\n\nwith open(pickle_file, 'rb') as f:\n save = pickle.load(f)\n X = save['X']\n y = save['y']\n char_to_int = save['char_to_int'] \n int_to_char = save['int_to_char'] \n del save # hint to help gc free up memory\n print('Training set', X.shape, y.shape)", "Now we need to define the Keras model. Since we will be loading parameters from a pre-trained model, this needs to match exactly the definition from the previous lab section. The only difference is that we will comment out the dropout layer so that the model uses all the hidden neurons when doing the predictions.", "# define the LSTM model\nmodel = Sequential()\nmodel.add(LSTM(128, return_sequences=False, input_shape=(X.shape[1], X.shape[2])))\n# model.add(Dropout(0.50))\nmodel.add(Dense(y.shape[1], activation='softmax'))", "Next we will load the parameters from the model we trained previously, and compile it with the same loss and optimizer function.", "# load the parameters from the pretrained model\nfilename = \"-basic_LSTM.hdf5\"\nmodel.load_weights(filename)\nmodel.compile(loss='categorical_crossentropy', optimizer='adam')", "We also need to rewrite the sample() and generate() helper functions so that we can use them in our code:", "def sample(preds, temperature=1.0):\n preds = np.asarray(preds).astype('float64')\n preds = np.log(preds) / temperature\n exp_preds = np.exp(preds)\n preds = exp_preds / np.sum(exp_preds)\n probas = np.random.multinomial(1, preds, 1)\n return np.argmax(probas)\n\ndef generate(sentence, sample_length=50, diversity=0.35):\n generated = sentence\n sys.stdout.write(generated)\n\n for i in range(sample_length):\n x = np.zeros((1, X.shape[1], X.shape[2]))\n for t, char in enumerate(sentence):\n x[0, t, char_to_int[char]] = 1.\n\n preds = model.predict(x, verbose=0)[0]\n next_index = sample(preds, diversity)\n next_char = int_to_char[next_index]\n\n generated += next_char\n sentence = sentence[1:] + next_char\n\n sys.stdout.write(next_char)\n sys.stdout.flush()\n print", "Now we can use the generate() function to generate text of any length based on our imported pre-trained model and a seed text of our choice. For best result, the length of the seed text should be the same as the length of training sequences (100 in the previous lab section). \nIn this case, we will test the overfitting of the model by supplying it two seeds:\n\none which comes verbatim from the training text, and\none which comes from another earlier speech by Obama\n\nIf the model has not overfit our training data, we should expect it to produce reasonable results for both seeds. If it has overfit, it might produce pretty good results for something coming directly from the training set, but perform poorly on a new seed. This means that it has learned to replicate our training text, but cannot generalize to produce text based on other inputs. 
Since the original article was very short, however, the entire vocabulary of the model might be very limited, which is why as input we use a part of another speech given by Obama, instead of completely random text.\nSince we have not trained the model for that long, we will also use a lower temperature to get the model to generate more accurate if less diverse results. Try running the code a few times with different temperature settings to generate different results.", "prediction_length = 500\nseed_from_text = \"america has shown that progress is possible. last year, income gains were larger for households at t\"\nseed_original = \"and as people around the world began to hear the tale of the lowly colonists who overthrew an empire\"\n\nfor seed in [seed_from_text, seed_original]:\n generate(seed, prediction_length, .50)\n print \"-\" * 20" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
YeoLab/single-cell-bioinformatics
notebooks/2016-06-09_darmanis2015_concatenate_data.ipynb
bsd-3-clause
[ "cd ~/Downloads/GSE67835_RAW/\n\nls", "So now we have the unzipped version of the file, \"GSE41265_allGenesTPM.txt\". I wonder how much space they saved by zipping it?\nLet's use the flags \"-l\" for \"long listing\" which will show us the sizes", "ls -1", "oof, this is in pure bytes and I can't convert to multiples of 1024 easily in my head (1024 bytes = 1 kilobyte, 1024 kilobytes = 1 megabtye, etc - the 1000/byte is a lie that the hard drive companies use!). So let's use the -h flag, which tells the computer to do th conversion for us. We can combine multiple flags with the same dash, so\nls -l -h\n\nCan be shortened to:\nls -lh", "! ls -1 | wc -l\n\n! gunzip --help", "See, \"GSE41265_allGenesTPM.txt.gz\" is there!\nSince the file ends in \".gz\", this tells us its a \"gnu-zipped\" or \"gzipped\" (\"gee-zipped\") file, which is a specific flavor of \"zipping\" or compressing a file. We need to use a gnu-zipping-aware program to decompress the file, which is \"gunzip\" (\"gnu-unzip\").\nRun the next cell to unzip the file", "! gunzip -f *gz\n\n3+3\n\nasdf = 'beyonce'\nasdf\n\nasdf + ' runs the world'", "Let's \"ls\" again to see what files have changed", "ls\n\n! head GSM1657872_1772078217.C04.csv \n\nimport glob\n\nimport pandas as pd\n\npd.read_table('GSM1657872_1772078217.C04.csv')\n\npd.read_table('GSM1657872_1772078217.C04.csv', index_col=0)\n\ndataframe = pd.read_table('GSM1657872_1772078217.C04.csv', index_col=0, header=None)\ndataframe\n\nseries = pd.read_table('GSM1657872_1772078217.C04.csv', index_col=0, header=None, squeeze=True)\nseries\n\ndataframe.shape\n\nseries.shape\n\nseries.name\n\nfilename = 'GSM1657872_1772078217.C04.csv'\nfilename\n\nfilename.split('.')\n\nfilename.split('.csv')\n\nfilename.split('.csv')[0]\n\ncells = []\n\nfor filename in glob.iglob('*.csv'):\n cell = pd.read_table(filename, index_col=0, squeeze=True, header=None)\n name = filename.split('_')[0]\n cell.name = name\n cells.append(cell)\nexpression = pd.concat(cells, axis=1)\nexpression.index = expression.index.map(lambda x: x.strip(' '))\nprint(expression.shape)\nexpression.head()", "Read metadata", "! gunzip /Users/kirkreardon/Downloads/*_series_matrix.txt.gz\n\n! head /Users/kirkreardon/Downloads/*_series_matrix.txt\n\n! 
head -n 20 /Users/kirkreardon/Downloads/*_series_matrix.txt\n\n\"Whooo!!!!!!!!!\".strip(\"!\")\n\n\"Whooo!!!!!!!!!\".strip(\"o\")\n\nmetadata1 = pd.read_table('/Users/kirkreardon/Downloads/GSE67835-GPL15520_series_matrix.txt', \n skiprows=37, header=None, index_col=0)\nmetadata1.index = metadata1.index.map(lambda x: x.strip('!'))\n# Transpose so each row is a cell\nmetadata1 = metadata1.T\nmetadata1.head()\n\nmetadata2 = pd.read_table('/Users/kirkreardon/Downloads/GSE67835-GPL18573_series_matrix.txt', \n skiprows=37, header=None, index_col=0)\nmetadata2.index = metadata2.index.map(lambda x: x.strip('!'))\n# transpose\nmetadata2 = metadata2.T\nmetadata2.head()\n\ndataframes = [metadata1, metadata2]\nmetadata = pd.concat(dataframes)\nprint(metadata.shape)\nmetadata.head()\n\nmetadata = metadata.set_index('Sample_geo_accession')\nmetadata.head()\n\nmkdir -p ~/projects/darmanis2015/processed_data\n\nexpression.to_csv('~/projects/darmanis2015/processed_data/expression.csv')\n\nmetadata.to_csv('~/projects/darmanis2015/processed_data/metadata.csv')\n\nexpression.GSM1657884\n\nbad_rows = ['no_feature', 'ambiguous', 'alignment_not_unique']\ngood_genes = expression.index[~expression.index.isin(bad_rows)]\ngood_genes\n\nexpression.shape\n\nexpression_actually_genes = expression.loc[good_genes]\nexpression_actually_genes.shape\n\nexpression_actually_genes.tail()\n\nexpression_actually_genes.to_csv(\"/Users/kirkreardon/projects/darmanis2015/processed_data/expression_actually_genes.csv\")\n\nexpression_actually_genes.dtypes\n\nexpression_actually_genes.tail().index" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
robertoalotufo/ia898
master/tutorial_numpy_1_8.ipynb
mit
[ "Table of Contents\n<p><div class=\"lev1 toc-item\"><a href=\"#Tile\" data-toc-modified-id=\"Tile-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Tile</a></div><div class=\"lev2 toc-item\"><a href=\"#Exemplo-unidimensional---replicando-as-colunas\" data-toc-modified-id=\"Exemplo-unidimensional---replicando-as-colunas-11\"><span class=\"toc-item-num\">1.1&nbsp;&nbsp;</span>Exemplo unidimensional - replicando as colunas</a></div><div class=\"lev2 toc-item\"><a href=\"#Exemplo-unidimensional---replicando-as-linhas\" data-toc-modified-id=\"Exemplo-unidimensional---replicando-as-linhas-12\"><span class=\"toc-item-num\">1.2&nbsp;&nbsp;</span>Exemplo unidimensional - replicando as linhas</a></div><div class=\"lev2 toc-item\"><a href=\"#Exemplo-bidimensional---replicando-as-colunas\" data-toc-modified-id=\"Exemplo-bidimensional---replicando-as-colunas-13\"><span class=\"toc-item-num\">1.3&nbsp;&nbsp;</span>Exemplo bidimensional - replicando as colunas</a></div><div class=\"lev2 toc-item\"><a href=\"#Exemplo-bidimensional---replicando-as-linhas\" data-toc-modified-id=\"Exemplo-bidimensional---replicando-as-linhas-14\"><span class=\"toc-item-num\">1.4&nbsp;&nbsp;</span>Exemplo bidimensional - replicando as linhas</a></div><div class=\"lev2 toc-item\"><a href=\"#Exemplo-bidimensional---replicando-as-linhas-e-colunas-simultaneamente\" data-toc-modified-id=\"Exemplo-bidimensional---replicando-as-linhas-e-colunas-simultaneamente-15\"><span class=\"toc-item-num\">1.5&nbsp;&nbsp;</span>Exemplo bidimensional - replicando as linhas e colunas simultaneamente</a></div><div class=\"lev1 toc-item\"><a href=\"#Documentação-Oficial-Numpy\" data-toc-modified-id=\"Documentação-Oficial-Numpy-2\"><span class=\"toc-item-num\">2&nbsp;&nbsp;</span>Documentação Oficial Numpy</a></div>\n\n# Tile\n\nUma função importante da biblioteca numpy é a tile, que gera repetições do array passado com parâmetro. A quantidade de repetições é dada pelo parâmetro reps\n\n## Exemplo unidimensional - replicando as colunas", "import numpy as np\n\na = np.array([0, 1, 2])\nprint('a = \\n', a)\n\nprint()\nprint('Resultado da operação np.tile(a,2): \\n',np.tile(a,2))", "Exemplo unidimensional - replicando as linhas\nPara modificar as dimensões na quais a replicação será realizada modifica-se o parâmetro reps, passando ao invés de um int, uma tupla com as dimensões que se deseja alterar", "print('a = \\n', a)\nprint()\nprint(\"Resultado da operação np.tile(a,(2,1)):\\n\" , np.tile(a,(2,1)))", "Exemplo bidimensional - replicando as colunas", "a = np.array([[0, 1], [2, 3]])\nprint('a = \\n', a)\nprint()\n\nprint(\"Resultado da operação np.tile(a,2):\\n\", np.tile(a,2))", "Exemplo bidimensional - replicando as linhas", "a = np.array([[0, 1], [2, 3]])\nprint('a = \\n', a)\n\nprint()\nprint(\"Resultado da operação np.tile(a,(3,1)):\\n\", np.tile(a,(3,1)))", "Exemplo bidimensional - replicando as linhas e colunas simultaneamente", "a = np.array([[0, 1], [2, 3]])\nprint('a = \\n', a)\n\nprint()\nprint(\"Resultado da operação np.tile(a,(2,2)):\\n\", np.tile(a,(2,2)))", "Documentação Oficial Numpy\n\ntile\nrepeat" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
AllenDowney/ModSim
soln/chap18.ipynb
gpl-2.0
[ "Chapter 18\nModeling and Simulation in Python\nCopyright 2021 Allen Downey\nLicense: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International", "# install Pint if necessary\n\ntry:\n import pint\nexcept ImportError:\n !pip install pint\n\n# download modsim.py if necessary\n\nfrom os.path import exists\n\nfilename = 'modsim.py'\nif not exists(filename):\n from urllib.request import urlretrieve\n url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'\n local, _ = urlretrieve(url+filename, filename)\n print('Downloaded ' + local)\n\n# import functions from modsim\n\nfrom modsim import *", "Code from the previous chapter\nRead the data.", "import os\n\nfilename = 'glucose_insulin.csv'\n\nif not os.path.exists(filename):\n !wget https://raw.githubusercontent.com/AllenDowney/ModSimPy/master/data/glucose_insulin.csv\n\nfrom pandas import read_csv\n\ndata = read_csv(filename, index_col='time');", "Interpolate the insulin data.", "from modsim import interpolate\n\nI = interpolate(data.insulin)", "In this chapter, we implement the glucose minimal model described in the previous chapter. We'll start with run_simulation, which solves\ndifferential equations using discrete time steps. This method works well enough for many applications, but it is not very accurate. In this chapter we explore a better option: using an ODE solver.\nImplementation\nTo get started, let's assume that the parameters of the model are known.\nWe'll implement the model and use it to generate time series for G and X. Then we'll see how to find the parameters that generate the series that best fits the data.\nWe can pass params and data to make_system:", "from modsim import State, System\n\ndef make_system(params, data):\n G0, k1, k2, k3 = params\n \n Gb = data.glucose[0]\n Ib = data.insulin[0]\n I = interpolate(data.insulin)\n \n t_0 = data.index[0]\n t_end = data.index[-1]\n \n init = State(G=G0, X=0)\n \n return System(params=params, init=init, \n Gb=Gb, Ib=Ib, I=I,\n t_0=t_0, t_end=t_end, dt=2)", "make_system uses the measurements at t=0 as the basal levels, Gb\nand Ib. It gets t_0 and t_end from the data. And it uses the\nparameter G0 as the initial value for G. Then it packs everything\ninto a System object.\nTaking advantage of estimates from prior work, we'll start with these\nvalues:", "# G0, k1, k2, k3\nparams = 290, 0.03, 0.02, 1e-05\nsystem = make_system(params, data)", "Here's the update function:", "def update_func(state, t, system):\n G, X = state\n G0, k1, k2, k3 = system.params \n I, Ib, Gb = system.I, system.Ib, system.Gb\n dt = system.dt\n \n dGdt = -k1 * (G - Gb) - X*G\n dXdt = k3 * (I(t) - Ib) - k2 * X\n \n G += dGdt * dt\n X += dXdt * dt\n\n return State(G=G, X=X)", "As usual, the update function takes a State object, a time, and a\nSystem object as parameters. The first line uses multiple assignment\nto extract the current values of G and X.\nThe following lines unpack the parameters we need from the System\nobject.\nComputing the derivatives dGdt and dXdt is straightforward; we just\ntranslate the equations from math notation to Python.\nThen, to perform the update, we multiply each derivative by the discrete\ntime step dt, which is 2 min in this example. 
The return value is a\nState object with the new values of G and X.\nBefore running the simulation, it is a good idea to run the update\nfunction with the initial conditions:", "update_func(system.init, system.t_0, system)", "If it runs without errors and there is nothing obviously wrong with the\nresults, we are ready to run the simulation. We'll use this version of\nrun_simulation, which is very similar to previous versions:", "from modsim import linrange, TimeFrame\n\ndef run_simulation(system, update_func):\n    init = system.init\n    t_0, t_end, dt = system.t_0, system.t_end, system.dt\n    \n    t_array = linrange(system.t_0, system.t_end, system.dt)\n    n = len(t_array)\n    \n    frame = TimeFrame(index=t_array, columns=init.index)\n    frame.iloc[0] = system.init\n    \n    for i in range(n-1):\n        t = t_array[i]\n        state = frame.iloc[i]\n        frame.iloc[i+1] = update_func(state, t, system)\n    \n    return frame", "We can run it like this:", "results = run_simulation(system, update_func)\nresults.head()\n\nfrom modsim import decorate\n\nresults.G.plot(style='-', label='simulation')\ndata.glucose.plot(style='o', color='C0', label='glucose data')\ndecorate(ylabel='Concentration (mg/dL)')\n\nresults.X.plot(color='C1', label='remote insulin')\n\ndecorate(xlabel='Time (min)', \n         ylabel='Concentration (arbitrary units)')", "The plots above show the results. The top plot shows\nsimulated glucose levels from the model along with the measured data.\nThe bottom plot shows simulated insulin levels in tissue fluid, which is in unspecified units, and not to be confused with measured insulin\nlevels in the blood.\nWith the parameters I chose, the model fits the data well, except for\nthe first few data points, where we don't expect the model to be\naccurate.\nSolving differential equations\nSo far we have solved differential equations by rewriting them as\ndifference equations. In the current example, the differential equations are: \n$$\\frac{dG}{dt} = -k_1 \\left[ G(t) - G_b \\right] - X(t) G(t)$$\n$$\\frac{dX}{dt} = k_3 \\left[I(t) - I_b \\right] - k_2 X(t)$$ \nIf we multiply both sides by $dt$, we have:\n$$dG = \\left[ -k_1 \\left[ G(t) - G_b \\right] - X(t) G(t) \\right] dt$$\n$$dX = \\left[ k_3 \\left[I(t) - I_b \\right] - k_2 X(t) \\right] dt$$ \nWhen $dt$ is very small, or more precisely infinitesimal, these equations are exact. But in our simulations, $dt$ is 2 min, which is not very small. In effect, the simulations assume that the derivatives $dG/dt$ and $dX/dt$ are constant during each 2 min time step.\nThis method, evaluating derivatives at discrete time steps and assuming that they are constant in between, is called Euler's method (see http://modsimpy.com/euler).\nEuler's method is good enough for some simple problems, but it is not\nvery accurate. Other methods are more accurate, but many of them are\nsubstantially more complicated.\nThe ModSim library provides a function called run_solve_ivp that gives us access to one of these better methods: it is a wrapper around SciPy's solve_ivp, which solves an \"initial value problem\" for ordinary differential equations (ODEs), by default with the Runge-Kutta method RK45.\nThe equations we are solving are \"ordinary\" because all the derivatives are with respect to the same variable; in other words, there are no partial derivatives. 
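To make the difference concrete, here is a minimal standalone sketch (with made-up numbers, unrelated to the glucose model) comparing a single Euler step, which is what run_simulation effectively takes, against SciPy's adaptive solver on the toy equation $dy/dt = -y$:\n```python\nimport numpy as np\nfrom scipy.integrate import solve_ivp\n\ny0, dt = 1.0, 2.0           # made-up initial value and (large) step size\ny_euler = y0 + (-y0) * dt   # one Euler step: the slope is held constant over dt\n\nsol = solve_ivp(lambda t, y: -y, t_span=(0, dt), y0=[y0])  # RK45 by default\nprint(y_euler, sol.y[0, -1], np.exp(-dt))  # Euler, adaptive solver, exact answer\n```\nWith a step this large, the Euler estimate (-1.0) is nowhere near the exact value (about 0.135), while the adaptive solver comes close. 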
To use run_solve_ivp, we have to provide a \"slope function\", like\nthis:", "def slope_func(t, state, system):\n    G, X = state\n    G0, k1, k2, k3 = system.params \n    I, Ib, Gb = system.I, system.Ib, system.Gb\n    \n    dGdt = -k1 * (G - Gb) - X*G\n    dXdt = k3 * (I(t) - Ib) - k2 * X\n    \n    return dGdt, dXdt", "slope_func is similar to update_func; in fact, it takes the same\nparameters in the same order. But slope_func is simpler, because all\nwe have to do is compute the derivatives, that is, the slopes. We don't\nhave to do the updates; run_solve_ivp does them for us.\nNow we can call run_solve_ivp like this:", "from modsim import run_solve_ivp\n\nresults2, details = run_solve_ivp(system, slope_func)\ndetails", "run_solve_ivp is similar to run_simulation: it takes a System\nobject and a slope function as parameters. It returns two values: a\nTimeFrame with the solution and a ModSimSeries with additional\ninformation.\nA ModSimSeries is like a System or State object; it contains a set\nof variables and their values. The ModSimSeries from run_solve_ivp,\nwhich we assign to details, contains information about how the solver\nran, including a success code and diagnostic message.\nThe TimeFrame, which we assign to results2, has one row for each time\nstep and one column for each state variable. In this example, the rows\nare time from 0 to 182 minutes; the columns are the state variables, G\nand X.", "from modsim import decorate\n\nresults2.G.plot(style='-', label='simulation')\ndata.glucose.plot(style='o', color='C0', label='glucose data')\ndecorate(ylabel='Concentration (mg/dL)')\n\nresults2.X.plot(color='C1', label='remote insulin')\n\ndecorate(xlabel='Time (min)', \n         ylabel='Concentration (arbitrary units)')", "The figure above shows the results from run_simulation and\nrun_solve_ivp. The difference between them is barely visible.\nWe can compute the percentage differences like this:", "diff = results.G - results2.G\npercent_diff = diff / results2.G * 100\npercent_diff.abs().max()", "The largest percentage difference is less than 2%, which is small enough that it probably doesn't matter in practice. \nSummary\nYou might be interested in this article about people making a DIY artificial pancreas.\nExercises\nExercise: Our solution to the differential equations is only approximate because we used a finite step size, dt=2 minutes.\nIf we make the step size smaller, we expect the solution to be more accurate. Run the simulation with dt=1 and compare the results. What is the largest relative error between the two solutions?", "# Solution\n\nsystem.dt = 1\nresults3 = run_simulation(system, update_func)\n\n# Solution\n\nresults2.G.plot(style='C2--', label='run_solve_ivp')\nresults3.G.plot(style='C3:', label='run_simulation (dt=1)')\n\ndecorate(xlabel='Time (m)', ylabel='mg/dL')\n\n# Solution\n\ndiff = (results2.G - results3.G).dropna()\npercent_diff = diff / results2.G * 100\n\n# Solution\n\nmax(abs(percent_diff))", "Under the hood\nHere's the source code for run_solve_ivp if you'd like to know how it works; by default it relies on SciPy's solve_ivp with the RK45 method." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mrklees/kaggle-solutions
Titanic/Kaggle - A Deep Learning Approach.ipynb
apache-2.0
[ "A Deep Learning Approach to the Titanic Data Set\nDec 2017 Update I notice that a few folks have forked this kernal, and so wanted to provide an update on how I'm developing models these days. I will utilize this kernal to build neural networks using both the scikit-learn library as well as Keras to highlight strategies like ensambling many networks together. Feel free to leave a comment if you have any questions! I'll try to answer as best I can. Also, if you find this kernal helpful please upvote so that others can find this resource.", "# Data Structures\nimport pandas as pd\nimport numpy as np\n\n# Data Visualization\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nfrom random import randint\nfrom sklearn.neighbors import KNeighborsRegressor\nfrom sklearn.neural_network import MLPClassifier\nfrom sklearn.ensemble import VotingClassifier, BaggingClassifier\nfrom sklearn.model_selection import train_test_split, GridSearchCV\nfrom sklearn.preprocessing import LabelBinarizer, StandardScaler", "What follows are a couple classes that I've defined to encapsulate my data pipeline and scikit-learn model. I've added some documentation throughout", "class TitanicData(object):\n \"\"\"Titanic Data\n \n This class will contain the entire data pipeline from raw data to prepared \n numpy arrays. It's eventually inherited by the model class, but is left \n distinct for readbility and logical organization.\n \"\"\"\n filepath = \"../input/\"\n train_fn = 'train.csv'\n test_fn = 'test.csv'\n \n def __init__(self):\n self.X_train, self.y_train, self.X_valid, self.y_valid = self.preproc()\n \n def import_and_split_data(self):\n \"\"\"Import that data and then split it into train/test sets.\n \n Make sure to stratify. This is often not even enough, but will get you closer \n to having your validation score match kaggles score.\"\"\"\n X, y = self.select_features(pd.read_csv(self.filepath + self.train_fn))\n X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size = 0.25, random_state = 606, stratify = y)\n return X_train, y_train, X_valid, y_valid\n \n def select_features(self, data):\n \"\"\"Selects the features that we'll use in the model. 
 Drops unused features.\"\"\"\n        target = ['Survived']\n        features = ['Pclass', 'Name', 'Sex', 'Age', 'SibSp',\n                    'Parch', 'Ticket', 'Fare', 'Cabin', 'Embarked']\n        dropped_features = ['Cabin', 'Ticket']\n        X = data[features].drop(dropped_features, axis=1)\n        y = data[target]\n        return X, y\n    \n    def fix_na(self, data):\n        \"\"\"Fill NAs with the median (for Fare and Age) and with 'C' (for Embarked).\"\"\"\n        na_vars = {\"Fare\" : data.Fare.median(), \"Embarked\" : \"C\", \"Age\" : data.Age.median()}\n        return data.fillna(na_vars)\n    \n    def create_dummies(self, data, cat_vars, cat_types):\n        \"\"\"Processes categorical data into dummy vars\"\"\"\n        cat_data = data[cat_vars].values\n        for i in range(len(cat_vars)): \n            # Binarize the leading column, drop it, and append the resulting\n            # dummy columns to the back of the array.\n            bins = LabelBinarizer().fit_transform(cat_data[:, 0].astype(cat_types[i]))\n            cat_data = np.delete(cat_data, 0, axis=1)\n            cat_data = np.column_stack((cat_data, bins))\n        return cat_data\n    \n    def standardize(self, data, real_vars):\n        \"\"\"Processes numeric data\"\"\"\n        real_data = data[real_vars]\n        scale = StandardScaler()\n        return scale.fit_transform(real_data)\n    \n    def extract_titles(self, data):\n        \"\"\"Extract titles from the Name field and create appropriate One Hot Encoded Columns\"\"\"\n        title_array = data.Name\n        first_names = title_array.str.rsplit(', ', expand=True, n=1)\n        titles = first_names[1].str.rsplit('.', expand=True, n=1)\n        known_titles = ['Mr', 'Mrs', 'Miss', 'Master', 'Don', 'Rev', 'Dr', 'Mme', 'Ms',\n                        'Major', 'Lady', 'Sir', 'Mlle', 'Col', 'Capt', 'the Countess',\n                        'Jonkheer']\n        for title in known_titles:\n            try:\n                titles[title] = titles[0].str.contains(title).astype('int')\n            except Exception:\n                titles[title] = 0\n        return titles.drop([0,1], axis=1).values\n    \n    def engineer_features(self, dataset):\n        dataset['FamilySize'] = dataset['SibSp'] + dataset['Parch'] + 1\n        dataset['IsAlone'] = 1  # initialize to yes/1 is alone\n        dataset['IsAlone'].loc[dataset['FamilySize'] > 1] = 0  # the rest are 0\n        return dataset\n    \n    def preproc(self):\n        \"\"\"Executes the full preprocessing pipeline.\"\"\"\n        # Import Data & Split\n        X_train_, y_train, X_valid_, y_valid = self.import_and_split_data()\n        # Fill NAs\n        X_train, X_valid = self.fix_na(X_train_), self.fix_na(X_valid_)\n        # Feature Engineering\n        X_train, X_valid = self.engineer_features(X_train), self.engineer_features(X_valid)\n        \n        # Preproc Categorical Vars\n        cat_vars = ['Pclass', 'Sex', 'Embarked']\n        cat_types = ['int', 'str', 'str']\n        X_train_cat, X_valid_cat = self.create_dummies(X_train, cat_vars, cat_types), self.create_dummies(X_valid, cat_vars, cat_types)\n        \n        # Extract Titles\n        X_train_titles, X_valid_titles = self.extract_titles(X_train), self.extract_titles(X_valid)\n        \n        # Preprocess Numeric Vars\n        real_vars = ['Fare', 'SibSp', 'Parch', \"FamilySize\", \"IsAlone\"]\n        X_train_real, X_valid_real = self.standardize(X_train, real_vars), self.standardize(X_valid, real_vars)\n        \n        # Recombine\n        X_train, X_valid = np.column_stack((X_train_cat, X_train_real, X_train_titles)), np.column_stack((X_valid_cat, X_valid_real, X_valid_titles))\n        \n        return X_train.astype('float32'), y_train.values, X_valid.astype('float32'), y_valid.values\n\n    def preproc_test(self):\n        test = pd.read_csv(self.filepath + self.test_fn)\n        labels = test.PassengerId.values\n        test = self.fix_na(test)\n        test = self.engineer_features(test)\n        # Preproc Categorical Vars\n        cat_vars = ['Pclass', 'Sex', 'Embarked']\n        cat_types = ['int', 'str', 'str']\n        test_cat = self.create_dummies(test, cat_vars, cat_types)\n        \n        # Extract Titles\n        test_titles = self.extract_titles(test)\n        \n        # Preprocess Numeric Vars\n        real_vars = ['Fare', 'SibSp', 'Parch', \"FamilySize\", \"IsAlone\"]\n        test_real = self.standardize(test, real_vars)\n        \n        # Recombine\n        test = np.column_stack((test_cat, test_real, test_titles))\n        return labels, test\n    \n
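# Note (added): TitanicModel below inherits from TitanicData, so it reuses\n# the preprocessing pipeline defined above for both the training and test data.\n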
class TitanicModel(TitanicData):\n    \n    def __init__(self):\n        self.X_train, self.y_train, self.X_valid, self.y_valid = self.preproc()\n    \n    def build_single_model(self, random_state, num_layers, verbose=False):\n        \"\"\"Create a single neural network with a variable number of layers\n        \n        This function will both assign the model to the self.model attribute, as well\n        as return the model. I'm pretty afraid of side effects resulting from \n        changing the state within the object, but then it hasn't ruined my day yet...\n        \"\"\"\n        model = MLPClassifier(\n            hidden_layer_sizes=(1024, ) * num_layers,\n            activation='relu',\n            solver='adam',\n            alpha=0.0001,\n            batch_size=100,\n            max_iter=64,\n            learning_rate_init=0.001,\n            random_state=random_state,\n            early_stopping=True,\n            verbose=verbose\n        )\n        self.model = model\n        return model\n    \n    def fit(self):\n        \"\"\"Fit the model to the training data\"\"\"\n        self.model.fit(self.X_train, self.y_train)\n    \n    def evaluate_model(self):\n        \"\"\"Score the model against the validation data\"\"\"\n        return self.model.score(self.X_valid, self.y_valid)\n    \n    def build_voting_model(self, model_size=10, n_jobs=1):\n        \"\"\"Build a basic voting ensemble of neural networks with various seeds and numbers of layers\n        \n        The idea is that we'll generate a large number of neural networks with various depths \n        and then aggregate across their beliefs.\n        \"\"\"\n        models = [(str(seed), self.build_single_model(seed, randint(2, 15))) for seed in np.random.randint(1, 1e6, size=model_size)]\n        ensemble = VotingClassifier(models, voting='soft', n_jobs=n_jobs)\n        self.model = ensemble\n        return ensemble\n    \n    def prepare_submission(self, name):\n        labels, test_data = self.preproc_test()\n        predictions = self.model.predict(test_data)\n        subm = pd.DataFrame(np.column_stack([labels, predictions]), columns=['PassengerId', 'Survived'])\n        subm.to_csv(\"{}.csv\".format(name), index=False)", "The rest of this workbook is what my typical Jupyter notebooks look like. Note that I'm not going to spend any time on exploratory data analysis. There are lots of great kernels with exploratory visualization of this dataset, much of which I have referenced to do the feature engineering above.", "%matplotlib inline\nmodel = TitanicModel()", "Let's create a basic neural network using our class and fit it.", "model.build_single_model(num_layers=4, random_state=606, verbose=True)\nmodel.fit()", "We can then score our model against our reserved dataset. Note that the validation score it refers to above is actually calculated on 10% of the training set, not the validation set.", "model.evaluate_model()\n\n#model.prepare_submission('simplenn')", "Thanks to the code we've already written, creating an ensemble out of these single models isn't too challenging. Let's start with the voting ensemble classifier provided by scikit-learn. This will let us create a nice ensemble with various numbers of layers and seeds to try to find something better.", "voting = model.build_voting_model(model_size=10, n_jobs=4)\n#model.fit()\n\n#model.evaluate_model()\n\n#model.prepare_submission('ensembled_nn')", "Utilizing the ensemble only gave me ~1% improved accuracy on the validation set, but this improvement carried over into the submission.
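\nAs a rough sketch (not from the original notebook) of what the soft-voting ensemble computes: it averages the class probabilities of its member networks and takes the most likely class. Here models is the (name, model) list built inside build_voting_model, and X is a feature matrix:\npython\nprobas = np.mean([m.predict_proba(X) for name, m in models], axis=0)\npredictions = np.argmax(probas, axis=1)\n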
So how could we do better? While scikit-learn is super convenient for quickly building neural networks, there are some clear limitations. For example, scikit-learn still doesn't have a production implementation of dropout, which is currently one of the preferred methods of neural network regularization. With dropout we might be able to train deeper networks without worrying about overfitting as much. So let's do it!", "from keras.utils import to_categorical\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Dropout\n\nclass TitanicKeras(TitanicData):\n    \n    def __init__(self):\n        self.X_train, self.y_train, self.X_valid, self.y_valid = self.preproc()\n        self.y_train, self.y_valid = to_categorical(self.y_train), to_categorical(self.y_valid)\n        self.history = []\n    \n    def build_model(self):\n        model = Sequential()\n        model.add(Dense(2056, input_shape=(29,), activation='relu'))\n        model.add(Dropout(0.1))\n        model.add(Dense(1028, activation='relu'))\n        model.add(Dropout(0.2))\n        model.add(Dense(1028, activation='relu'))\n        model.add(Dropout(0.3))\n        model.add(Dense(512, activation='relu'))\n        model.add(Dropout(0.4))\n        model.add(Dense(2, activation='sigmoid'))\n        model.compile(optimizer='adam',\n                      loss='binary_crossentropy',\n                      metrics=['accuracy'])\n        self.model = model\n    \n    def fit(self, lr=0.001, epochs=1):\n        self.model.optimizer.lr = lr\n        hist = self.model.fit(self.X_train, self.y_train,\n                              batch_size=32, epochs=epochs,\n                              verbose=1, validation_data=(self.X_valid, self.y_valid),\n                              )\n        self.history.append(hist)\n    \n    def prepare_submission(self, name):\n        labels, test_data = self.preproc_test()\n        predictions = self.model.predict(test_data)\n        subm = pd.DataFrame(np.column_stack([labels, np.around(predictions[:, 1])]).astype('int32'), columns=['PassengerId', 'Survived'])\n        subm.to_csv(\"{}.csv\".format(name), index=False)\n        return subm", "model = TitanicKeras()\n\nmodel.build_model()\n\nmodel.fit(lr=0.01, epochs=5)\n\nmodel.fit(lr=0.001, epochs=10)\n\nmodel.prepare_submission('keras')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
dtamayo/rebound
ipython_examples/UniquelyIdentifyingParticlesWithHashes.ipynb
gpl-3.0
[ "Uniquely Identifying Particles With Hashes\nIn many cases, one can just identify particles by their position in the particle array, e.g. using sim.particles[5]. However, in cases where particles might get reordered in the particle array finding a particle might be difficult. This is why we added a hash attribute to particles.\nIn REBOUND particles might get rearranged when a tree code is used for the gravity or collision routine, when particles merge, when a particle leaves the simulation box, or when you manually remove or add particles. In general, therefore, the user should not assume that particles stay at the same index or in the same location in memory. The reliable way to access particles is to assign them hashes and to access particles through them. \nNote: When you don't assign particles a hash, they automatically get set to 0. The user is responsible for making sure hashes are unique, so if you set up particles without a hash and later set a particle's hash to 0, you don't know which one you'll get back when you access hash 0. See Possible Pitfalls below.\nIn this example, we show the basic usage of the hash attribute, which is an unsigned integer.", "import rebound\nsim = rebound.Simulation()\nsim.add(m=1., hash=999)\nsim.add(a=0.4, hash=\"mercury\")\nsim.add(a=1., hash=\"earth\")\nsim.add(a=5., hash=\"jupiter\")", "We can now not only access the Earth particle with:", "sim.particles[2]", "but also with", "sim.particles[\"earth\"]", "We can access particles with negative indices like a list. We can get the last particle with", "sim.particles[-1]", "Details\nWe can also access particles through their hash directly. However, to differentiate from passing an integer index, we have to first cast the hash to the underlying C datatype. We can do this through the rebound.hash function:", "from rebound import hash as h\nsim.particles[h(999)]", "which corresponds to particles[0] as it should. sim.particles[999] would try to access index 999, which doesn't exist in the simulation, and REBOUND would raise an AttributeError.\nWhen we above set the hash to a string, REBOUND converted this to an unsigned integer using the same rebound.hash function:", "h(\"earth\")\n\nsim.particles[2].hash", "The hash attribute always returns the appropriate unsigned integer ctypes type. (Depending on your computer architecture, ctypes.c_uint32 can be an alias for another ctypes type).\nSo we could also access the earth with:", "sim.particles[h(1424801690)]", "The numeric hashes could be useful in cases where you have a lot of particles you don't want to assign individual names, but you still need to keep track of them individually as they get rearranged:", "for i in range(1,100):\n sim.add(m=0., a=i, hash=i)\n\nsim.particles[99].a\n\nsim.particles[h(99)].a", "Possible Pitfalls\nThe user is responsible for making sure the hashes are unique. If two particles share the same hash, you could get either one when you access them using their hash (in most cases the first hit in the particles array). Two random strings used for hashes have a $\\sim 10^{-9}$ chance of clashing. The most common case is setting a hash to 0:", "sim = rebound.Simulation()\nsim.add(m=1., hash=0)\nsim.add(a=1., hash=\"earth\")\nsim.add(a=5.)\nsim.particles[h(0)]", "Here we expected to get back the first particle, but instead got the last one. This is because we didn't assign a hash to the last particle and it got automatically set to 0. 
If we give hashes to all the particles in the simulation, then there's no clash:", "sim = rebound.Simulation()\nsim.add(m=1., hash=0)\nsim.add(a=1., hash=\"earth\")\nsim.add(a=5., hash=\"jupiter\")\nsim.particles[h(0)]", "Due to details of the ctypes library, comparing two ctypes.c_uint32 instances for equality fails:", "h(32) == h(32)", "You have to compare the underlying values instead:", "h(32).value == h(32).value", "See the docs for further information: https://docs.python.org/2/library/ctypes.html" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tensorflow/docs-l10n
site/en-snapshot/agents/tutorials/8_networks_tutorial.ipynb
apache-2.0
[ "Copyright 2021 The TF-Agents Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Networks\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/agents/tutorials/8_networks_tutorial\">\n <img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />\n View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/agents/blob/master/docs/tutorials/8_networks_tutorial.ipynb\">\n <img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />\n Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/agents/blob/master/docs/tutorials/8_networks_tutorial.ipynb\">\n <img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />\n View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/agents/docs/tutorials/8_networks_tutorial.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nIntroduction\nIn this colab we will cover how to define custom networks for your agents. The networks help us define the model that is trained by agents. In TF-Agents you will find several different types of networks which are useful across agents:\nMain Networks\n\nQNetwork: Used in Qlearning for environments with discrete actions, this network maps an observation to value estimates for each possible action.\nCriticNetworks: Also referred to as ValueNetworks in literature, learns to estimate some version of a Value function mapping some state into an estimate for the expected return of a policy. These networks estimate how good the state the agent is currently in is.\nActorNetworks: Learn a mapping from observations to actions. These networks are usually used by our policies to generate actions.\nActorDistributionNetworks: Similar to ActorNetworks but these generate a distribution which a policy can then sample to generate actions.\n\nHelper Networks\n* EncodingNetwork: Allows users to easily define a mapping of pre-processing layers to apply to a network's input.\n* DynamicUnrollLayer: Automatically resets the network's state on episode boundaries as it is applied over a time sequence.\n* ProjectionNetwork: Networks like CategoricalProjectionNetwork or NormalProjectionNetwork take inputs and generate the required parameters to generate Categorical, or Normal distributions.\nAll examples in TF-Agents come with pre-configured networks. 
However, these networks are not set up to handle complex observations.\nIf you have an environment which exposes more than one observation/action and you need to customize your networks, then this tutorial is for you!\nSetup\nIf you haven't installed tf-agents yet, run:", "!pip install tf-agents\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport abc\nimport tensorflow as tf\nimport numpy as np\n\nfrom tf_agents.environments import random_py_environment\nfrom tf_agents.environments import tf_py_environment\nfrom tf_agents.networks import encoding_network\nfrom tf_agents.networks import network\nfrom tf_agents.networks import utils\nfrom tf_agents.specs import array_spec\nfrom tf_agents.utils import common as common_utils\nfrom tf_agents.utils import nest_utils", "Defining Networks\nNetwork API\nIn TF-Agents we subclass from Keras Networks. With it we can:\n\nSimplify copy operations required when creating target networks.\nPerform automatic variable creation when calling network.variables().\nValidate inputs based on network input_specs.\n\nEncodingNetwork\nAs mentioned above, the EncodingNetwork allows us to easily define a mapping of pre-processing layers to apply to a network's input to generate some encoding.\nThe EncodingNetwork is composed of the following mostly optional layers:\n\nPreprocessing layers\nPreprocessing combiner\nConv2D \nFlatten\nDense \n\nThe special thing about encoding networks is that input preprocessing is applied. Input preprocessing is possible via preprocessing_layers and preprocessing_combiner layers. Each of these can be specified as a nested structure. If the preprocessing_layers nest is shallower than input_tensor_spec, then the layers will get the subnests. For example, if:\ninput_tensor_spec = ([TensorSpec(3)] * 2, [TensorSpec(3)] * 5)\npreprocessing_layers = (Layer1(), Layer2())\nthen preprocessing will call:\npreprocessed = [preprocessing_layers[0](observations[0]),\n                preprocessing_layers[1](observations[1])]\nHowever if\npreprocessing_layers = ([Layer1() for _ in range(2)],\n                        [Layer2() for _ in range(5)])\nthen preprocessing will call:\npython\npreprocessed = [\n    layer(obs) for layer, obs in zip(flatten(preprocessing_layers),\n                                     flatten(observations))\n]\nCustom Networks\nTo create your own networks you will only have to override the __init__ and call methods. Let's create a custom network using what we learned about EncodingNetworks to create an ActorNetwork that takes observations which contain an image and a vector.", "class ActorNetwork(network.Network):\n\n  def __init__(self,\n               observation_spec,\n               action_spec,\n               preprocessing_layers=None,\n               preprocessing_combiner=None,\n               conv_layer_params=None,\n               fc_layer_params=(75, 40),\n               dropout_layer_params=None,\n               activation_fn=tf.keras.activations.relu,\n               enable_last_layer_zero_initializer=False,\n               name='ActorNetwork'):\n    super(ActorNetwork, self).__init__(\n        input_tensor_spec=observation_spec, state_spec=(), name=name)\n\n    # For simplicity we will only support a single action float output.\n    self._action_spec = action_spec\n    flat_action_spec = tf.nest.flatten(action_spec)\n    if len(flat_action_spec) > 1:\n      raise ValueError('Only a single action is supported by this network')\n    self._single_action_spec = flat_action_spec[0]\n    if self._single_action_spec.dtype not in [tf.float32, tf.float64]:\n      raise ValueError('Only float actions are supported by this network.')\n\n    kernel_initializer = tf.keras.initializers.VarianceScaling(\n        scale=1. / 3., mode='fan_in', distribution='uniform')
\n    self._encoder = encoding_network.EncodingNetwork(\n        observation_spec,\n        preprocessing_layers=preprocessing_layers,\n        preprocessing_combiner=preprocessing_combiner,\n        conv_layer_params=conv_layer_params,\n        fc_layer_params=fc_layer_params,\n        dropout_layer_params=dropout_layer_params,\n        activation_fn=activation_fn,\n        kernel_initializer=kernel_initializer,\n        batch_squash=False)\n\n    initializer = tf.keras.initializers.RandomUniform(\n        minval=-0.003, maxval=0.003)\n\n    self._action_projection_layer = tf.keras.layers.Dense(\n        flat_action_spec[0].shape.num_elements(),\n        activation=tf.keras.activations.tanh,\n        kernel_initializer=initializer,\n        name='action')\n\n  def call(self, observations, step_type=(), network_state=()):\n    outer_rank = nest_utils.get_outer_rank(observations, self.input_tensor_spec)\n    # We use batch_squash here in case the observations have a time sequence\n    # component.\n    batch_squash = utils.BatchSquash(outer_rank)\n    observations = tf.nest.map_structure(batch_squash.flatten, observations)\n\n    state, network_state = self._encoder(\n        observations, step_type=step_type, network_state=network_state)\n    actions = self._action_projection_layer(state)\n    actions = common_utils.scale_to_spec(actions, self._single_action_spec)\n    actions = batch_squash.unflatten(actions)\n    return tf.nest.pack_sequence_as(self._action_spec, [actions]), network_state", "Let's create a RandomPyEnvironment to generate structured observations and validate our implementation.", "action_spec = array_spec.BoundedArraySpec((3,), np.float32, minimum=0, maximum=10)\nobservation_spec = {\n    'image': array_spec.BoundedArraySpec((16, 16, 3), np.float32, minimum=0,\n                                         maximum=255),\n    'vector': array_spec.BoundedArraySpec((5,), np.float32, minimum=-100,\n                                          maximum=100)}\n\nrandom_env = random_py_environment.RandomPyEnvironment(observation_spec, action_spec=action_spec)\n\n# Convert the environment to a TFEnv to generate tensors.\ntf_env = tf_py_environment.TFPyEnvironment(random_env)", "Since we've defined the observations to be a dict, we need to create preprocessing layers to handle them.", "preprocessing_layers = {\n    'image': tf.keras.models.Sequential([tf.keras.layers.Conv2D(8, 4),\n                                         tf.keras.layers.Flatten()]),\n    'vector': tf.keras.layers.Dense(5)\n    }\npreprocessing_combiner = tf.keras.layers.Concatenate(axis=-1)\nactor = ActorNetwork(tf_env.observation_spec(), \n                     tf_env.action_spec(),\n                     preprocessing_layers=preprocessing_layers,\n                     preprocessing_combiner=preprocessing_combiner)", "Now that we have the actor network, we can process observations from the environment.", "time_step = tf_env.reset()\nactor(time_step.observation, time_step.step_type)", "This same strategy can be used to customize any of the main networks used by the agents. You can define whatever preprocessing you need and connect it to the rest of the network. As you define your own custom networks, make sure the output layer definitions of the network match." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
molgor/spystats
notebooks/Sandboxes/spatial_autocorrelation_from_fitted_model_POISSON.ipynb
bsd-2-clause
[ "# Load Biospytial modules and etc.\n%matplotlib inline\nimport sys\nsys.path.append('/apps')\nimport django\ndjango.setup()\nimport pandas as pd\nimport matplotlib.pyplot as plt\n## Use the ggplot style\nplt.style.use('ggplot')\n\n# My mac\n#data = pd.read_csv(\"/RawDataCSV/plotsClimateData_11092017.csv\")\n# My Linux desktop\ndata = pd.read_csv(\"/RawDataCSV/idiv_share/plotsClimateData_11092017.csv\")\n\n\ndata.SppN.mean()\n\nimport geopandas as gpd\n\nfrom django.contrib.gis import geos\nfrom shapely.geometry import Point\n\ndata['geometry'] = data.apply(lambda z: Point(z.LON, z.LAT), axis=1)\nnew_data = gpd.GeoDataFrame(data)", "Let´s reproject to Alberts or something with distance", "new_data.crs = {'init':'epsg:4326'}", "Uncomment to reproject\nproj string taken from: http://spatialreference.org/", "#new_data = new_data.to_crs(\"+proj=aea +lat_1=29.5 +lat_2=45.5 +lat_0=37.5 +lon_0=-96 +x_0=0 +y_0=0 +ellps=GRS80 +datum=NAD83 +units=m +no_defs \")", "Model Fitting Using a GLM\nThe general model will have the form:\n$$ Biomass(x,y) = \\beta_1 AET + \\beta_2 Age + Z(x,y) + \\epsilon $$\nWhere:\n$\\beta_1$ and $\\beta_2$ are model parameters, $Z(x,y)$ is the Spatial Autocorrelation process and $\\epsilon \\sim N(0,\\sigma^2)$", "##### OLD #######\nlen(data.lon)\n#X = data[['AET','StandAge','lon','lat']]\n#X = data[['SppN','lon','lat']]\nX = data[['lon','lat']]\n#Y = data['plotBiomass']\nY = data[['SppN']]\n## First step in spatial autocorrelation\n#Y = pd.DataFrame(np.zeros(len(Y)))\n## Let´s take a small sample only for the spatial autocorrelation\n#import numpy as np\n#sample_size = 2000\n#randindx = np.random.randint(0,X.shape[0],sample_size)\n#nX = X.loc[randindx]\n#nY = Y.loc[randindx]\nnX = X\nnY = Y\n\n\n\n## Small function for systematically selecting the k-th element of the data.\n#### Sughgestion use for now a small k i.e. 
", "##### OLD #######\nlen(data.lon)\n#X = data[['AET','StandAge','lon','lat']]\n#X = data[['SppN','lon','lat']]\nX = data[['lon','lat']]\n#Y = data['plotBiomass']\nY = data[['SppN']]\n## First step in spatial autocorrelation\n#Y = pd.DataFrame(np.zeros(len(Y)))\n## Let's take a small sample only for the spatial autocorrelation\n#import numpy as np\n#sample_size = 2000\n#randindx = np.random.randint(0,X.shape[0],sample_size)\n#nX = X.loc[randindx]\n#nY = Y.loc[randindx]\nnX = X\nnY = Y\n\n\n\n## Small function for systematically selecting every k-th element of the data.\n#### Suggestion: use a small k for now, i.e. 10\nsystematic_selection = lambda k : filter(lambda i : not(i % k) ,range(len(data))) \n\nidx = systematic_selection(50)\nprint(len(idx))\nnX = X.loc[idx]\nnY = Y.loc[idx]\nnew_data = data.loc[idx]\n\nlen(new_data)\n\n# Import GPflow\nimport GPflow as gf\nk = gf.kernels.Matern12(2,ARD=False,active_dims = [0,1]) + gf.kernels.Bias(1)\n#k = gf.kernels.Matern12(2, lengthscales=1, active_dims = [0,1] ) + gf.kernels.Bias(1)\nl = gf.likelihoods.Poisson()\nmodel = gf.gpmc.GPMC(nX.as_matrix(), nY.as_matrix().reshape(len(nY),1).astype(float), k, l)\n\n\n#model = gf.gpr.GPR(nX.as_matrix(),nY.as_matrix().reshape(len(nY),1).astype(float),k)\n## If priors\n#model.kern.matern12.lengthscales.prior = gf.priors.Gaussian(25.0,3.0)\n#model.kern.matern32.variance.prior = GPflow.priors.Gamma(1.,1.)\n#model.kern.bias.variance.prior = gf.priors.Gamma(1.,1.)\n## Optimize\n\n%time model.optimize(maxiter=100) # start near MAP\n\n\nmodel.kern\n\nsamples = model.sample(50, verbose=True, epsilon=1, Lmax=100)", "Fitted parameters (From HEC)", "model.kern.lengthscales = 25.4846122373\nmodel.kern.variance = 10.9742076021\nmodel.likelihood.variance = 4.33463026664\n\n%time mm = k.compute_K_symm(X.as_matrix())\n\nimport numpy as np\nNn = 500\ndsc = data\npredicted_x = np.linspace(min(dsc.lon),max(dsc.lon),Nn)\npredicted_y = np.linspace(min(dsc.lat),max(dsc.lat),Nn)\nXx, Yy = np.meshgrid(predicted_x,predicted_y)\n## Fake richness\nfake_sp_rich = np.ones(len(Xx.ravel()))\npredicted_coordinates = np.vstack([ Xx.ravel(), Yy.ravel()]).transpose()\n#predicted_coordinates = np.vstack([section.SppN, section.newLon,section.newLat]).transpose()\n\nlen(predicted_coordinates)\n\n#We will calculate everything with the new model and parameters\n#model = gf.gpr.GPR(X.as_matrix(),Y.as_matrix().reshape(len(Y),1).astype(float),k)\n\n%time means,variances = model.predict_y(predicted_coordinates)", "Predictions with +2std.Dev", "#Using k-partition = 7\nimport cartopy\nplt.figure(figsize=(17,11))\n\nproj = cartopy.crs.PlateCarree()\nax = plt.subplot(111, projection=proj)\n\n\nax = plt.axes(projection=proj)\n#algo = new_data.plot(column='SppN',ax=ax,cmap=colormap,edgecolors='')\n\n\n#ax.set_extent([-93, -70, 30, 50])\nax.set_extent([-125, -60, 20, 50])\n#ax.set_extent([-95, -70, 25, 45])\n\n#ax.add_feature(cartopy.feature.LAND)\nax.add_feature(cartopy.feature.OCEAN)\nax.add_feature(cartopy.feature.COASTLINE)\nax.add_feature(cartopy.feature.BORDERS, linestyle=':')\nax.add_feature(cartopy.feature.LAKES, alpha=0.9)\nax.stock_img()\n#ax.add_geometries(new_data.geometry,crs=cartopy.crs.PlateCarree())\n#ax.add_feature(cartopy.feature.RIVERS)\nmm = ax.pcolormesh(Xx,Yy,means.reshape(Nn,Nn) + (2* np.sqrt(variances).reshape(Nn,Nn)),transform=proj )\n#cs = plt.contour(Xx,Yy,np.sqrt(variances).reshape(Nn,Nn),linewidths=2,cmap=plt.cm.Greys_r,linestyles='dotted')\ncs = plt.contour(Xx,Yy,means.reshape(Nn,Nn) + (2 * np.sqrt(variances).reshape(Nn,Nn)),linewidths=2,colors='k',linestyles='dotted',levels=range(1,20))\nplt.clabel(cs, fontsize=16,inline=True,fmt='%1.1f')\n#ax.scatter(new_data.lon,new_data.lat,edgecolors='',color='white',alpha=0.6)\nplt.colorbar(mm)\nplt.title(\"Predicted Species Richness + 2stdev\")", "Predicted means", "#Using k-partition = 7\nimport cartopy\nplt.figure(figsize=(17,11))\n\nproj = cartopy.crs.PlateCarree()\nax = plt.subplot(111, projection=proj)\n\n\nax = plt.axes(projection=proj)\n#algo = new_data.plot(column='SppN',ax=ax,cmap=colormap,edgecolors='')\n\n\n#ax.set_extent([-93, -70, 30, 50])\nax.set_extent([-125, -60, 20, 50])
\n#ax.set_extent([-95, -70, 25, 45])\n\n#ax.add_feature(cartopy.feature.LAND)\nax.add_feature(cartopy.feature.OCEAN)\nax.add_feature(cartopy.feature.COASTLINE)\nax.add_feature(cartopy.feature.BORDERS, linestyle=':')\nax.add_feature(cartopy.feature.LAKES, alpha=0.9)\nax.stock_img()\n#ax.add_geometries(new_data.geometry,crs=cartopy.crs.PlateCarree())\n#ax.add_feature(cartopy.feature.RIVERS)\nmm = ax.pcolormesh(Xx,Yy,means.reshape(Nn,Nn),transform=proj )\n#cs = plt.contour(Xx,Yy,np.sqrt(variances).reshape(Nn,Nn),linewidths=2,cmap=plt.cm.Greys_r,linestyles='dotted')\ncs = plt.contour(Xx,Yy,means.reshape(Nn,Nn),linewidths=2,colors='k',linestyles='dotted',levels=range(1,20))\nplt.clabel(cs, fontsize=16,inline=True,fmt='%1.1f')\n#ax.scatter(new_data.lon,new_data.lat,edgecolors='',color='white',alpha=0.6)\nplt.colorbar(mm)\nplt.title(\"Predicted Species Richness\")\n\n\n#Using k-partition = 7\nimport cartopy\nplt.figure(figsize=(17,11))\n\nproj = cartopy.crs.PlateCarree()\nax = plt.subplot(111, projection=proj)\n\n\nax = plt.axes(projection=proj)\n#algo = new_data.plot(column='SppN',ax=ax,cmap=colormap,edgecolors='')\n\n\n#ax.set_extent([-93, -70, 30, 50])\nax.set_extent([-125, -60, 20, 50])\n#ax.set_extent([-95, -70, 25, 45])\n\n#ax.add_feature(cartopy.feature.LAND)\nax.add_feature(cartopy.feature.OCEAN)\nax.add_feature(cartopy.feature.COASTLINE)\nax.add_feature(cartopy.feature.BORDERS, linestyle=':')\nax.add_feature(cartopy.feature.LAKES, alpha=0.9)\nax.stock_img()\n#ax.add_geometries(new_data.geometry,crs=cartopy.crs.PlateCarree())\n#ax.add_feature(cartopy.feature.RIVERS)\nmm = ax.pcolormesh(Xx,Yy,means.reshape(Nn,Nn) - (2* np.sqrt(variances).reshape(Nn,Nn)),transform=proj )\n#cs = plt.contour(Xx,Yy,np.sqrt(variances).reshape(Nn,Nn),linewidths=2,cmap=plt.cm.Greys_r,linestyles='dotted')\ncs = plt.contour(Xx,Yy,means.reshape(Nn,Nn) - (2 * np.sqrt(variances).reshape(Nn,Nn)),linewidths=2,colors='k',linestyles='dotted',levels=[4.0,5.0,6.0,7.0,8.0])\nplt.clabel(cs, fontsize=16,inline=True,fmt='%1.1f')\n#ax.scatter(new_data.lon,new_data.lat,edgecolors='',color='white',alpha=0.6)\nplt.colorbar(mm)\nplt.title(\"Predicted Species Richness - 2stdev\")", "Model Analysis", "model.get_parameter_dict()", "Let's calculate the residuals", "X_ = data[['LON','LAT']]\n%time Y_hat = model.predict_y(X_)\n\npred_y = pd.DataFrame(Y_hat[0])\nvar_y = pd.DataFrame(Y_hat[1])\n\nnew_data['pred_y'] = pred_y\nnew_data['var_y'] = var_y\n\nnew_data = new_data.assign(error=lambda y : (y.SppN - y.pred_y)**2 )\n\nnew_data.error.hist(bins=50)\n\nprint(new_data.error.mean())\nprint(new_data.error.std())", "Experiment\nIn this section we will bring in raster data from the US, using the Biospytial Raster API.\n1. First select a polygon, then get a raster from there, say Mean Temperature.", "import raster_api.tools as rt\nfrom raster_api.models import MeanTemperature,ETOPO1,Precipitation,SolarRadiation\nfrom sketches.models import Country\n\n\n## Select US\nus_border = Country.objects.filter(name__contains='United States')[1]\n\nfrom django.db import close_old_connections\n\nclose_old_connections()\n\n# Get Raster API (note: despite the variable name, this example uses the Precipitation raster)\nus_meantemp = rt.RasterData(Precipitation,us_border.geom)\nus_meantemp.getRaster()\n\nus_meantemp.display_field()\n\n%time coords = us_meantemp.getCoordinates()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/asl-ml-immersion
notebooks/text_models/labs/word2vec.ipynb
apache-2.0
[ "Word2Vec\nLearning Objectives\n\nLearn how to build a Word2Vec model \nPrepare training data for Word2Vec\nTrain a Word2Vec model. In this lab we will build a Skip Gram Model\nLearn how to visualize embeddings and analyze them using the Embedding Projector\n\nIntroduction\nWord2Vec is not a singular algorithm, rather, it is a family of model architectures and optimizations that can be used to learn word embeddings from large datasets. Embeddings learned through Word2Vec have proven to be successful on a variety of downstream natural language processing tasks.\nNote: This notebook is based on Efficient Estimation of Word Representations in Vector Space and\nDistributed\nRepresentations of Words and Phrases and their Compositionality. It is not an exact implementation of the papers. Rather, it is intended to illustrate the key ideas.\nThese papers proposed two methods for learning representations of words: \n\nContinuous Bag-of-Words Model which predicts the middle word based on surrounding context words. The context consists of a few words before and after the current (middle) word. This architecture is called a bag-of-words model as the order of words in the context is not important.\nContinuous Skip-gram Model which predict words within a certain range before and after the current word in the same sentence.\n\nYou'll use the skip-gram approach in this notebook. First, you'll explore skip-grams and other concepts using a single sentence for illustration. Next, you'll train your own Word2Vec model on a small dataset. This notebook also contains code to export the trained embeddings and visualize them in the TensorFlow Embedding Projector.\nSkip-gram and Negative Sampling\nWhile a bag-of-words model predicts a word given the neighboring context, a skip-gram model predicts the context (or neighbors) of a word, given the word itself. The model is trained on skip-grams, which are n-grams that allow tokens to be skipped (see the diagram below for an example). The context of a word can be represented through a set of skip-gram pairs of (target_word, context_word) where context_word appears in the neighboring context of target_word. \nConsider the following sentence of 8 words.\n\nThe wide road shimmered in the hot sun. \n\nThe context words for each of the 8 words of this sentence are defined by a window size. The window size determines the span of words on either side of a target_word that can be considered context word. Take a look at this table of skip-grams for target words based on different window sizes.\nNote: For this lab, a window size of n implies n words on each side with a total window span of 2*n+1 words across a word.\n\nThe training objective of the skip-gram model is to maximize the probability of predicting context words given the target word. For a sequence of words w<sub>1</sub>, w<sub>2</sub>, ... w<sub>T</sub>, the objective can be written as the average log probability\n\nwhere c is the size of the training context. The basic skip-gram formulation defines this probability using the softmax function.\n\nwhere v and v<sup>'<sup> are target and context vector representations of words and W is vocabulary size. Here v<sub>0 and v<sub>1 are model parameters which are updated by gradient descent.\nComputing the denominator of this formulation involves performing a full softmax over the entire vocabulary words which is often large (10<sup>5</sup>-10<sup>7</sup>) terms. \nThe Noise Contrastive Estimation loss function is an efficient approximation for a full softmax. 
With an objective to learn word embeddings instead of modelling the word distribution, the NCE loss can be simplified to use negative sampling. \nThe simplified negative sampling objective for a target word is to distinguish the context word from num_ns negative samples drawn from a noise distribution P<sub>n</sub>(w) of words. More precisely, an efficient approximation of the full softmax over the vocabulary is, for a skip-gram pair, to pose the loss for a target word as a classification problem between the context word and num_ns negative samples. \nA negative sample is defined as a (target_word, context_word) pair such that the context_word does not appear in the window_size neighborhood of the target_word. For the example sentence, these are a few potential negative samples (when window_size is 2).\n(hot, shimmered)\n(wide, hot)\n(wide, sun)\nIn the next section, you'll generate skip-grams and negative samples for a single sentence. You'll also learn about subsampling techniques and train a classification model for positive and negative training examples later in the tutorial.\nSetup", "import io\nimport itertools\nimport os\nimport re\nimport string\n\nimport numpy as np\nimport tensorflow as tf\nimport tqdm\nfrom tensorflow.keras import Model, Sequential\nfrom tensorflow.keras.layers import (\n    Activation,\n    Dense,\n    Dot,\n    Embedding,\n    Flatten,\n    GlobalAveragePooling1D,\n    Reshape,\n)\nfrom tensorflow.keras.layers.experimental.preprocessing import TextVectorization", "Please check your TensorFlow version using the cell below.", "# Show the currently installed version of TensorFlow; you should be using TF 2.6\nprint(\"TensorFlow version: \", tf.version.VERSION)\n\n# Change below if necessary\nPROJECT = !gcloud config get-value project # noqa: E999\nPROJECT = PROJECT[0]\nBUCKET = PROJECT\nREGION = \"us-central1\"\n\nOUTDIR = f\"gs://{BUCKET}/text_models\"\n\n%env PROJECT=$PROJECT\n%env BUCKET=$BUCKET\n%env REGION=$REGION\n%env OUTDIR=$OUTDIR\n\nSEED = 42\nAUTOTUNE = tf.data.experimental.AUTOTUNE", "Vectorize an example sentence\nConsider the following sentence: \nThe wide road shimmered in the hot sun.\nTokenize the sentence:", "sentence = \"The wide road shimmered in the hot sun\"\ntokens = list(sentence.lower().split())\nprint(len(tokens))", "Create a vocabulary to save mappings from tokens to integer indices.", "vocab, index = {}, 1 # start indexing from 1\nvocab[\"<pad>\"] = 0 # add a padding token\nfor token in tokens:\n    if token not in vocab:\n        vocab[token] = index\n        index += 1\nvocab_size = len(vocab)\nprint(vocab)", "Create an inverse vocabulary to save mappings from integer indices to tokens.", "inverse_vocab = {index: token for token, index in vocab.items()}\nprint(inverse_vocab)", "Vectorize your sentence.", "example_sequence = [vocab[word] for word in tokens]\nprint(example_sequence)", "Generate skip-grams from one sentence\nThe tf.keras.preprocessing.sequence module provides useful functions that simplify data preparation for Word2Vec. You can use the tf.keras.preprocessing.sequence.skipgrams function to generate skip-gram pairs from the example_sequence with a given window_size from tokens in the range [0, vocab_size).\nNote: negative_samples is set to 0 here as batching negative samples generated by this function requires a bit of code.
 You will use another function to perform negative sampling in the next section.", "window_size = 2\npositive_skip_grams, _ = tf.keras.preprocessing.sequence.skipgrams(\n    example_sequence,\n    vocabulary_size=vocab_size,\n    window_size=window_size,\n    negative_samples=0,\n)\nprint(len(positive_skip_grams))", "Take a look at a few positive skip-grams.", "for target, context in positive_skip_grams[:5]:\n    print(\n        f\"({target}, {context}): ({inverse_vocab[target]}, {inverse_vocab[context]})\"\n    )", "Negative sampling for one skip-gram\nThe skipgrams function returns all positive skip-gram pairs by sliding over a given window span. To produce additional skip-gram pairs that would serve as negative samples for training, you need to sample random words from the vocabulary. Use the tf.random.log_uniform_candidate_sampler function to sample num_ns negative samples for a given target word in a window. You can call the function on one skip-gram's target word and pass the context word as the true class to exclude it from being sampled.\nKey point: num_ns (number of negative samples per positive context word) between [5, 20] is shown to work best for smaller datasets, while num_ns between [2, 5] suffices for larger datasets.", "# Get target and context words for one positive skip-gram.\ntarget_word, context_word = positive_skip_grams[0]\n\n# Set the number of negative samples per positive context.\nnum_ns = 4\n\ncontext_class = tf.reshape(tf.constant(context_word, dtype=\"int64\"), (1, 1))\nnegative_sampling_candidates, _, _ = tf.random.log_uniform_candidate_sampler(\n    true_classes=context_class, # class that should be sampled as 'positive'\n    num_true=1, # each positive skip-gram has 1 positive context class\n    num_sampled=num_ns, # number of negative context words to sample\n    unique=True, # all the negative samples should be unique\n    range_max=vocab_size, # pick index of the samples from [0, vocab_size]\n    seed=SEED, # seed for reproducibility\n    name=\"negative_sampling\", # name of this operation\n)\nprint(negative_sampling_candidates)\nprint([inverse_vocab[index.numpy()] for index in negative_sampling_candidates])", "Construct one training example\nFor a given positive (target_word, context_word) skip-gram, you now also have num_ns negative sampled context words that do not appear in the window_size neighborhood of target_word. Batch the 1 positive context_word and num_ns negative context words into one tensor.
 This produces a set of positive skip-grams (labelled as 1) and negative samples (labelled as 0) for each target word.", "# Add a dimension so you can use concatenation (on the next step).\nnegative_sampling_candidates = tf.expand_dims(negative_sampling_candidates, 1)\n\n# Concat positive context word with negative sampled words.\ncontext = tf.concat([context_class, negative_sampling_candidates], 0)\n\n# Label the first context word as 1 (positive) followed by num_ns 0s (negative).\nlabel = tf.constant([1] + [0] * num_ns, dtype=\"int64\")\n\n# Reshape target to shape (1,) and context and label to (num_ns+1,).\ntarget = tf.squeeze(target_word)\ncontext = tf.squeeze(context)\nlabel = tf.squeeze(label)", "Take a look at the context and the corresponding labels for the target word from the skip-gram example above.", "print(f\"target_index : {target}\")\nprint(f\"target_word : {inverse_vocab[target_word]}\")\nprint(f\"context_indices : {context}\")\nprint(f\"context_words : {[inverse_vocab[c.numpy()] for c in context]}\")\nprint(f\"label : {label}\")", "A tuple of (target, context, label) tensors constitutes one training example for training your skip-gram negative sampling Word2Vec model. Notice that the target is of shape (1,) while the context and label are of shape (1+num_ns,)", "print(\"target :\", target)\nprint(\"context :\", context)\nprint(\"label :\", label)", "Summary\nThis picture summarizes the procedure of generating a training example from a sentence. \n\nLab Task 1\nSkip-gram Sampling table\nA large dataset means a larger vocabulary with a higher number of frequent words such as stopwords. Training examples obtained from sampling commonly occurring words (such as the, is, on) don't add much useful information for the model to learn from. Mikolov et al. suggest subsampling of frequent words as a helpful practice to improve embedding quality. \nThe tf.keras.preprocessing.sequence.skipgrams function accepts a sampling table argument to encode probabilities of sampling any token. You can use the tf.keras.preprocessing.sequence.make_sampling_table to generate a word-frequency rank based probabilistic sampling table and pass it to the skipgrams function. Take a look at the sampling probabilities for a vocab_size of 10.", "sampling_table = tf.keras.preprocessing.sequence.make_sampling_table(size=10)\nprint(sampling_table)", "sampling_table[i] denotes the probability of sampling the i-th most common word in a dataset. The function assumes a Zipf's distribution of the word frequencies for sampling.
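\nZipf's law says, roughly, that a word's frequency is inversely proportional to its frequency rank:\n$$ f(r) \\propto \\frac{1}{r} $$\n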
Key point: The tf.random.log_uniform_candidate_sampler already assumes that the vocabulary frequency follows a log-uniform (Zipf's) distribution. Using this distribution-weighted sampling also helps approximate the Noise Contrastive Estimation (NCE) loss with simpler loss functions for training a negative sampling objective.\nGenerate training data\nCompile all the steps described above into a function that can be called on a list of vectorized sentences obtained from any text dataset. Notice that the sampling table is built before sampling skip-gram word pairs. You will use this function in the later sections.\nLab Task 1\nIn the code below, complete the following objectives:\n1a: Iterate over all the sentences in your dataset and generate positive skip-grams to be used for training later\n1b: Iterate over all the sequences in your dataset to generate training examples with a positive context word and negative samples", "\"\"\"\nGenerates skip-gram pairs with negative sampling for a list of sequences\n(int-encoded sentences) based on window size, number of negative samples\nand vocabulary size.\n\"\"\"\n\n\ndef generate_training_data(sequences, window_size, num_ns, vocab_size, seed):\n    # Elements of each training example are appended to these lists.\n    targets, contexts, labels = [], [], []\n\n    # Build the sampling table for vocab_size tokens.\n    # TODO 1a\n    sampling_table = tf.keras.preprocessing.sequence.make_sampling_table(\n        vocab_size\n    )\n\n    # Iterate over all sequences (sentences) in dataset.\n    #TODO 1a: Your code here\n\n    # Iterate over each positive skip-gram pair to produce training examples\n    # with positive context word and negative samples.\n    # TODO 1b\n    for target_word, context_word in positive_skip_grams:\n        context_class = #TODO 1b: your code here\n\n        # Build context and label vectors (for one target word)\n        negative_sampling_candidates = tf.expand_dims(\n            negative_sampling_candidates, 1\n        )\n\n        context = tf.concat(\n            [context_class, negative_sampling_candidates], 0\n        )\n        label = tf.constant([1] + [0] * num_ns, dtype=\"int64\")\n\n        # Append each element from the training example to global lists.\n        targets.append(target_word)\n        contexts.append(context)\n        labels.append(label)\n\n    return targets, contexts, labels", "Lab Task 2: Prepare training data for Word2Vec\nWith an understanding of how to work with one sentence for a skip-gram negative sampling based Word2Vec model, you can proceed to generate training examples from a larger list of sentences!\nDownload text corpus\nYou will use a text file of Shakespeare's writing for this tutorial. Change the following line to run this code on your own data.", "path_to_file = tf.keras.utils.get_file(\n    \"shakespeare.txt\",\n    \"https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt\",\n)", "Read text from the file and take a look at the first few lines.", "with open(path_to_file) as f:\n    lines = f.read().splitlines()\nfor line in lines[:20]:\n    print(line)", "Lab Task 2\nUse the non-empty lines to construct a tf.data.TextLineDataset object for the next steps.", "# TODO 2a\ntext_ds = #your code here", "Vectorize sentences from the corpus\nYou can use the TextVectorization layer to vectorize sentences from the corpus. Learn more about using this layer in this Text Classification tutorial. Notice from the first few sentences above that the text needs to be in one case and punctuation needs to be removed. To do this, define a custom_standardization function that can be used in the TextVectorization layer.", "\"\"\"\nWe create a custom standardization function to lowercase the text and\nremove punctuation.\n\"\"\"\n\n\ndef custom_standardization(input_data):\n    lowercase = tf.strings.lower(input_data)\n    return tf.strings.regex_replace(\n        lowercase, \"[%s]\" % re.escape(string.punctuation), \"\"\n    )\n\n\n\"\"\"\nDefine the vocabulary size and number of words in a sequence.\n\"\"\"\nvocab_size = 4096\nsequence_length = 10\n\n\"\"\"\nUse the text vectorization layer to normalize, split, and map strings to\nintegers.
 Set output_sequence_length to pad all samples to the same length.\n\"\"\"\nvectorize_layer = TextVectorization(\n    standardize=custom_standardization,\n    max_tokens=vocab_size,\n    output_mode=\"int\",\n    output_sequence_length=sequence_length,\n)", "Call adapt on the text dataset to create the vocabulary.", "vectorize_layer.adapt(text_ds.batch(1024))", "Once the state of the layer has been adapted to represent the text corpus, the vocabulary can be accessed with get_vocabulary(). This function returns a list of all vocabulary tokens sorted (descending) by their frequency.", "# Save the created vocabulary for reference.\ninverse_vocab = vectorize_layer.get_vocabulary()\nprint(inverse_vocab[:20])", "The vectorize_layer can now be used to generate vectors for each element in the text_ds.", "def vectorize_text(text):\n    text = tf.expand_dims(text, -1)\n    return tf.squeeze(vectorize_layer(text))\n\n\n# Vectorize the data in text_ds.\ntext_vector_ds = (\n    text_ds.batch(1024).prefetch(AUTOTUNE).map(vectorize_layer).unbatch()\n)", "Obtain sequences from the dataset\nYou now have a tf.data.Dataset of integer encoded sentences. To prepare the dataset for training a Word2Vec model, flatten the dataset into a list of sentence vector sequences. This step is required as you would iterate over each sentence in the dataset to produce positive and negative examples. \nNote: Since the generate_training_data() defined earlier uses non-TF python/numpy functions, you could also use a tf.py_function or tf.numpy_function with tf.data.Dataset.map().", "sequences = list(text_vector_ds.as_numpy_iterator())\nprint(len(sequences))", "Take a look at a few examples from sequences.", "for seq in sequences[:5]:\n    print(f\"{seq} => {[inverse_vocab[i] for i in seq]}\")", "Generate training examples from sequences\nsequences is now a list of int encoded sentences. Just call the generate_training_data() function defined earlier to generate training examples for the Word2Vec model. To recap, the function iterates over each word from each sequence to collect positive and negative context words. The lengths of targets, contexts, and labels should be the same, representing the total number of training examples.", "targets, contexts, labels = generate_training_data(\n    sequences=sequences,\n    window_size=2,\n    num_ns=4,\n    vocab_size=vocab_size,\n    seed=SEED,\n)\nprint(len(targets), len(contexts), len(labels))", "Configure the dataset for performance\nTo perform efficient batching for the potentially large number of training examples, use the tf.data.Dataset API. After this step, you would have a tf.data.Dataset object of (target_word, context_word), (label) elements to train your Word2Vec model!", "BATCH_SIZE = 1024\nBUFFER_SIZE = 10000\ndataset = tf.data.Dataset.from_tensor_slices(((targets, contexts), labels))\ndataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)\nprint(dataset)", "Add cache() and prefetch() to improve performance.", "dataset = dataset.cache().prefetch(buffer_size=AUTOTUNE)\nprint(dataset)", "Lab Task 3: Model and Training\nThe Word2Vec model can be implemented as a classifier to distinguish between true context words from skip-grams and false context words obtained through negative sampling. You can perform a dot product between the embeddings of target and context words to obtain predictions for labels and compute the loss against the true labels in the dataset.
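\nIn other words (a sketch of the idea, consistent with the Dot layer used in the model below), the score for a (target, context) pair is just the inner product of the two embedding vectors:\n$$ \\text{logit}(w_t, w_c) = \\mathbf{e}_{w_t} \\cdot \\mathbf{c}_{w_c} $$\n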
Subclassed Word2Vec Model\nUse the Keras Subclassing API to define your Word2Vec model with the following layers:\n\ntarget_embedding: A tf.keras.layers.Embedding layer which looks up the embedding of a word when it appears as a target word. The number of parameters in this layer is (vocab_size * embedding_dim).\ncontext_embedding: Another tf.keras.layers.Embedding layer which looks up the embedding of a word when it appears as a context word. The number of parameters in this layer is the same as in target_embedding, i.e. (vocab_size * embedding_dim).\ndots: A tf.keras.layers.Dot layer that computes the dot product of target and context embeddings from a training pair.\nflatten: A tf.keras.layers.Flatten layer to flatten the results of the dots layer into logits.\n\nWith the subclassed model, you can define the call() function that accepts (target, context) pairs which can then be passed into their corresponding embedding layer. Reshape the context_embedding to perform a dot product with target_embedding and return the flattened result.\nKey point: The target_embedding and context_embedding layers can be shared as well. You could also use a concatenation of both embeddings as the final Word2Vec embedding.", "class Word2Vec(Model):\n    def __init__(self, vocab_size, embedding_dim):\n        super().__init__()\n        self.target_embedding = Embedding(\n            vocab_size,\n            embedding_dim,\n            input_length=1,\n            name=\"w2v_embedding\",\n        )\n        self.context_embedding = Embedding(\n            vocab_size, embedding_dim, input_length=num_ns + 1\n        )\n        self.dots = Dot(axes=(3, 2))\n        self.flatten = Flatten()\n\n    def call(self, pair):\n        target, context = pair\n        we = self.target_embedding(target)\n        ce = self.context_embedding(context)\n        dots = self.dots([ce, we])\n        return self.flatten(dots)", "Define loss function and compile model\nFor simplicity, you can use tf.keras.losses.CategoricalCrossentropy as an alternative to the negative sampling loss. If you would like to write your own custom loss function, you can also do so as follows:\npython\ndef custom_loss(x_logit, y_true):\n      return tf.nn.sigmoid_cross_entropy_with_logits(logits=x_logit, labels=y_true)\nLab Task 3 It's time to build your model! Instantiate your Word2Vec class with an embedding dimension of 128 (you could experiment with different values). Compile the model with the tf.keras.optimizers.Adam optimizer.", "# TODO 3a\nembedding_dim = 128\nword2vec = Word2Vec(vocab_size, embedding_dim)\n# your code here", "Also define a callback to log training statistics for TensorBoard.", "tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=\"logs\")", "Train the model with the dataset prepared above for some number of epochs.", "dataset\n\nword2vec.fit(dataset, epochs=20, callbacks=[tensorboard_callback])", "Visualize training on TensorBoard\nIn order to visualize how the model has trained, we can use TensorBoard to show the Word2Vec model's accuracy and loss. 
To do that, we first have to copy the logs from local to a GCS (Cloud Storage) folder.", "def copy_tensorboard_logs(local_path: str, gcs_path: str):\n \"\"\"Copies Tensorboard logs from a local dir to a GCS location.\n After training, batch copy Tensorboard logs locally to a GCS location.\n Args:\n local_path: local filesystem directory uri.\n gcs_path: cloud filesystem directory uri.\n Returns:\n None.\n \"\"\"\n pattern = f\"{local_path}/*/events.out.tfevents.*\"\n local_files = tf.io.gfile.glob(pattern)\n gcs_log_files = [\n local_file.replace(local_path, gcs_path) for local_file in local_files\n ]\n for local_file, gcs_file in zip(local_files, gcs_log_files):\n tf.io.gfile.copy(local_file, gcs_file)\n\n\ncopy_tensorboard_logs(\"./logs\", OUTDIR + \"/word2vec_logs\")", "To visualize the embeddings, open Cloud Shell and use the following command:\ntensorboard --port=8081 --logdir OUTDIR/word2vec_logs\nIn Cloud Shell, click Web Preview > Change Port and insert port number 8081. Click Change and Preview to open the TensorBoard.\n\nLab Task 4: Embedding lookup and analysis\nLab Task 4 Obtain the weights from the model using get_layer() and get_weights(). The get_vocabulary() function provides the vocabulary to build a metadata file with one token per line.", "# TODO 4a\nweights = #your code here\nvocab = #your code here", "Create and save the vectors and metadata file.", "out_v = open(\"text_models/vectors.tsv\", \"w\", encoding=\"utf-8\")\nout_m = open(\"text_models/metadata.tsv\", \"w\", encoding=\"utf-8\")\n\nfor index, word in enumerate(vocab):\n if index == 0:\n continue # skip 0, it's padding.\n vec = weights[index]\n out_v.write(\"\\t\".join([str(x) for x in vec]) + \"\\n\")\n out_m.write(word + \"\\n\")\nout_v.close()\nout_m.close()", "Download the vectors.tsv and metadata.tsv to your local machine and then open Embedding Projector. Here you will have the option to upload the two files you have downloaded and visualize the embeddings." ]
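For readers completing the lab on their own, here is a sketch of one possible way to fill in the two TODO cells above. It is not the official solution: the loss and optimizer follow the ones named in the text, and the layer name "w2v_embedding" comes from the model definition earlier.

```python
# A possible completion of the TODO cells (a sketch, not the official answer).
# Assumes word2vec and vectorize_layer are defined as in the cells above.
import tensorflow as tf

# TODO 3a: compile with Adam and categorical cross-entropy, as the text
# suggests. from_logits=True because the model returns raw dot-product logits.
word2vec.compile(
    optimizer=tf.keras.optimizers.Adam(),
    loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)

# TODO 4a: pull the trained target embeddings and the vocabulary.
# "w2v_embedding" is the name given to target_embedding in the Word2Vec class.
weights = word2vec.get_layer("w2v_embedding").get_weights()[0]
vocab = vectorize_layer.get_vocabulary()
```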
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
SubhankarGhosh/NetworkX
6. Network Statistical Inference (Student).ipynb
mit
[ "# Load the data\nimport pandas as pd\nimport networkx as nx\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport numpy.random as npr\nfrom scipy.stats import norm, ks_2samp # no scipy - comment out\nfrom custom import custom_funcs as cf\n\n%matplotlib inline\n%load_ext autoreload\n%autoreload 2", "Introduction\nIn this notebook, we will walk through a hacker's approach to statistical thinking, as applied to network analysis.\nStatistics in a Nutshell\nAll of statistics can be broken down into two activities:\n\nDescriptively summarizing data. (a.k.a. \"descriptive statistics\")\nFiguring out whether something happened by random chance. (a.k.a. \"inferential statistics\")\n\nDescriptive Statistics\n\nCentrality measures: mean, median, mode\nVariance measures: inter-quartile range (IQR), variance and standard deviation\n\nInferential Statistics\n\nModels of Randomness (see below)\nHypothesis Testing\nFitting Statistical Models\n\nLoad Data\nLet's load a protein-protein interaction network dataset.\n\nThis undirected network contains protein interactions contained in yeast. Research showed that proteins with a high degree were more important for the survival of the yeast than others. A node represents a protein and an edge represents a metabolic interaction between two proteins. The network contains loops.", "# Read in the data.\n# Note from above that we have to skip the first two rows, and that there's no header column, and that the edges are\n# delimited by spaces in between the nodes. Hence the syntax below:\nG = cf.load_propro_network()", "Exercise\nCompute some basic descriptive statistics about the graph, namely:\n\nthe number of nodes,\nthe number of edges,\nthe graph density,\nthe distribution of degree centralities in the graph.", "# Number of nodes:\n\n\n# Number of edges:\n\n\n# Graph density:\n\n\n# Degree centrality distribution:\n", "How are protein-protein networks formed? Are they formed by an Erdos-Renyi process, or something else?\n\nIn the G(n, p) model, a graph is constructed by connecting nodes randomly. Each edge is included in the graph with probability p independent from every other edge.\n\nIf protein-protein networks are formed by an E-R process, then we would expect that properties of the protein-protein graph would look statistically similar to those of an actual E-R graph.\nExercise\nMake a histogram of the degree centralities for the protein-protein interaction graph, and the E-R graph.\nThe construction of an E-R graph requires a value for n and p. \nA reasonable number for n is the number of nodes in our protein-protein graph.\nA reasonable value for p might be the density of the protein-protein graph.", "erG = nx.erdos_renyi_graph(n=__________, p=________________)\n\n", "From visualizing these two distributions, it is clear that they look very different. How do we quantify this difference, and statistically test whether the protein-protein graph could have arisen under an Erdos-Renyi model?\nOne thing we might observe is that the variance, that is the \"spread\" around the mean, differs between the E-R model and our data. Therefore, we can compare the variance of the data to the distribution of variances under an E-R model.\nThis is essentially following the logic of statistical inference by 'hacking' (not to be confused with the statistical bad practice of p-hacking).\nExercise\nFill in the skeleton code below to simulate 100 E-R graphs.", "# 1. 
Generate 100 E-R graph degree centrality variance measurements and store them.\n# Takes ~50 seconds or so.\nn_sims = ______\ner_vars = np.zeros(________) # variances for n simulated E-R graphs.\nfor i in range(n_sims):\n erG = nx.erdos_renyi_graph(n=____________, p=____________)\n erG_deg_centralities = __________\n er_vars[i] = np.var(__________)\n\n# 2. Compute the test statistic that is going to be used for the hypothesis test.\nppG_var = np.var(______________)\n\n# Do a quick visual check\nn, bins, patches = plt.hist(er_vars)\nplt.vlines(ppG_var, ymin=0, ymax=max(n), color='red', lw=2)", "Visually, it should be quite evident that the protein-protein graph did not come from an E-R distribution. Statistically, we can also use the hypothesis test procedure to quantitatively test this, using our simulated E-R data.", "# Conduct the hypothesis test.\nppG_var > np.percentile(er_vars, 99) # we can only use the 99th percentile, because there are only 100 data points.", "Another way to do this is to use the 2-sample Kolmogorov-Smirnov test implemented in the scipy.stats module. From the docs:\n\nThis tests whether 2 samples are drawn from the same distribution. Note\nthat, like in the case of the one-sample K-S test, the distribution is\nassumed to be continuous.\nThis is the two-sided test, one-sided tests are not implemented.\nThe test uses the two-sided asymptotic Kolmogorov-Smirnov distribution.\nIf the K-S statistic is small or the p-value is high, then we cannot\nreject the hypothesis that the distributions of the two samples\nare the same.\n\nAs an example to convince yourself that this test works, run the synthetic examples below.", "# Scenario 1: Data come from the same distribution.\n# Notice the size of the p-value.\ndist1 = npr.random(size=(100))\ndist2 = npr.random(size=(100))\n\nks_2samp(dist1, dist2)\n# Note how the p-value, which ranges between 0 and 1, is likely to be greater than a commonly-accepted\n# threshold of 0.05\n\n# Scenario 2: Data come from different distributions. \n# Note the size of the KS statistic, and the p-value.\n\ndist1 = norm(3, 1).rvs(100)\ndist2 = norm(5, 1).rvs(100)\n\nks_2samp(dist1, dist2)\n# Note how the p-value is likely to be less than 0.05, and even more stringent cut-offs of 0.01 or 0.001.", "Exercise\nNow, conduct the K-S test for one synthetic graph and the data.", "# Now try it on the data distribution\nks_2samp(___________________, ___________________)", "Networks may be high-dimensional objects, but inference on network data essentially follows the same logic as for 'regular' data:\n\nIdentify a model of 'randomness' that may describe how your data were generated.\nCompute a \"test statistic\" for your data and the model.\nCompute the probability of observing the data's test statistic under the model.\n\nFurther Reading\nJake Vanderplas' \"Statistics for Hackers\" slides: https://speakerdeck.com/jakevdp/statistics-for-hackers\nAllen Downey's \"There is Only One Test\": http://allendowney.blogspot.com/2011/05/there-is-only-one-test.html" ]
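If you get stuck on the skeleton above, here is a sketch of one way it might be filled in. It is one possible answer, not the only one, and it assumes G is the protein-protein graph loaded earlier.

```python
# One possible way to fill in the simulation skeleton (a sketch, not the
# only valid answer). Assumes G is the protein-protein graph from above.
import networkx as nx
import numpy as np

n_sims = 100
er_vars = np.zeros(n_sims)  # variances for n simulated E-R graphs
n_nodes = len(G.nodes())
p = nx.density(G)  # use the data's density as the E-R edge probability
for i in range(n_sims):
    erG = nx.erdos_renyi_graph(n=n_nodes, p=p)
    erG_deg_centralities = list(nx.degree_centrality(erG).values())
    er_vars[i] = np.var(erG_deg_centralities)

# Test statistic: the variance of the data's degree centralities.
ppG_var = np.var(list(nx.degree_centrality(G).values()))
```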
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
d00d/quantNotebooks
Notebooks/quantopian_research_public/notebooks/lectures/Violations_of_Regression_Models/notebook.ipynb
unlicense
[ "The regression model\nBy Evgenia \"Jenny\" Nitishinskaya and Delaney Granizo-Mackenzie\nPart of the Quantopian Lecture Series:\n\nwww.quantopian.com/lectures\ngithub.com/quantopian/research_public\n\nNotebook released under the Creative Commons Attribution 4.0 License.\n\nWhen using a regression to fit a model to our data, the assumptions of regression analysis must be satisfied in order to ensure good parameter estimates and accurate fit statistics. We would like parameters to be:\n* unbiased (expected value over different samples is the true value)\n* consistent (converging to the true value with many samples), and\n* efficient (minimized variance)\nBelow we investigate the ways in which these assumptions can be violated and the effect on the parameters and fit statistics. We'll be using single-variable linear equations for the examples, but the same considerations apply to other models. We'll also assume that our model is correctly specified; that is, that the functional form we chose is valid. We discuss model specification errors along with the assumption violations and other problems that they cause in another notebook.\nFocus on the Residuals\nRather than focusing on your model construction, it is possible to gain a huge amount of information from your residuals (errors). Your model may be incredibly complex and impossible to analyze, but as long as you have predictions and observed values, you can compute residuals. Once you have your residuals you can perform many statistical tests.\nIf your residuals do not follow a given distribution (usually normal, but depends on your model), then you know that something is wrong and you should be concerned with the accuracy of your predictions.\nResiduals not normally distributed\nIf the error term is not normally distributed, then our tests of statistical significance will be off. Fortunately, the central limit theorem tells us that, for large enough data samples, the coefficient distributions will be close to normal even if the errors are not. Therefore our analysis will still be valid for large datasets.\nTesting for normality\nA good test for normality is the Jarque-Bera test. It has a Python implementation at statsmodels.stats.stattools.jarque_bera, which we will use frequently in this notebook.\nAlways test for normality!\nIt's incredibly easy and can save you a ton of time.", "# Import all the libraries we'll be using\nimport numpy as np\nimport statsmodels.api as sm\nfrom statsmodels import regression, stats\nimport statsmodels\nimport matplotlib.pyplot as plt\n\nresiduals = np.random.normal(0, 1, 100)\n\n_, pvalue, _, _ = statsmodels.stats.stattools.jarque_bera(residuals)\nprint pvalue\n\nresiduals = np.random.poisson(size = 100)\n\n_, pvalue, _, _ = statsmodels.stats.stattools.jarque_bera(residuals)\nprint pvalue", "Heteroskedasticity\nHeteroskedasticity means that the variance of the error terms is not constant across observations. Intuitively, this means that the observations are not uniformly distributed along the regression line. 
It often occurs in cross-sectional data where the differences in the samples we are measuring lead to differences in the variance.", "# Artificially create dataset with constant variance around a line\nxs = np.arange(100)\ny1 = xs + 3*np.random.randn(100)\n\n# Get results of linear regression\nslr1 = regression.linear_model.OLS(y1, sm.add_constant(xs)).fit()\n\n# Construct the fit line\nfit1 = slr1.params[0] + slr1.params[1]*xs\n\n# Plot data and regression line\nplt.scatter(xs, y1)\nplt.plot(xs, fit1)\nplt.title('Homoskedastic errors');\nplt.legend(['Predicted', 'Observed'])\nplt.xlabel('X')\nplt.ylabel('Y');\n\n# Artificially create dataset with changing variance around a line\ny2 = xs*(1 + .5*np.random.randn(100))\n\n# Perform linear regression\nslr2 = regression.linear_model.OLS(y2, sm.add_constant(xs)).fit()\nfit2 = slr2.params[0] + slr2.params[1]*xs\n\n# Plot data and regression line\nplt.scatter(xs, y2)\nplt.plot(xs, fit2)\nplt.title('Heteroskedastic errors')\nplt.legend(['Predicted', 'Observed'])\nplt.xlabel('X')\nplt.ylabel('Y')\n\n# Print summary of regression results\nslr2.summary()", "Testing for Heteroskedasticity\nYou can test for heteroskedasticity using a few tests; we'll use the Breusch-Pagan test from the statsmodels library. We'll also test for normality, which here also picks up the weirdness in the second case. HOWEVER, it is possible to have normally distributed residuals which are also heteroskedastic, so both tests must be performed to be sure.", "residuals1 = y1-fit1\nresiduals2 = y2-fit2\n\nxs_with_constant = sm.add_constant(xs)\n\n_, jb_pvalue1, _, _ = statsmodels.stats.stattools.jarque_bera(residuals1)\n_, jb_pvalue2, _, _ = statsmodels.stats.stattools.jarque_bera(residuals2)\nprint \"p-value for residuals1 being normal\", jb_pvalue1\nprint \"p-value for residuals2 being normal\", jb_pvalue2\n\n_, pvalue1, _, _ = stats.diagnostic.het_breushpagan(residuals1, xs_with_constant)\n_, pvalue2, _, _ = stats.diagnostic.het_breushpagan(residuals2, xs_with_constant)\nprint \"p-value for residuals1 being heteroskedastic\", pvalue1\nprint \"p-value for residuals2 being heteroskedastic\", pvalue2", "Correcting for Heteroskedasticity\nHow does heteroskedasticity affect our analysis? The problematic situation, known as conditional heteroskedasticity, is when the error variance is correlated with the independent variables as it is above. This makes the F-test for regression significance and t-tests for the significances of individual coefficients unreliable. Most often this results in overestimation of the significance of the fit.\nThe Breusch-Pagan test and the White test can be used to detect conditional heteroskedasticity. If we suspect that this effect is present, we can alter our model to try and correct for it. One method is generalized least squares, which requires a manual alteration of the original equation. Another is computing robust standard errors, which corrects the fit statistics to account for the heteroskedasticity. statsmodels can compute robust standard errors; note the difference in the statistics below.", "print slr2.summary()\nprint slr2.get_robustcov_results().summary()", "Serial correlation of errors\nA common and serious problem is when errors are correlated across observations (known as serial correlation or autocorrelation). This can occur, for instance, when some of the data points are related, or when we use time-series data with periodic fluctuations. 
If one of the independent variables depends on previous values of the dependent variable - such as when it is equal to the value of the dependent variable in the previous period - or if incorrect model specification leads to autocorrelation, then the coefficient estimates will be inconsistent and therefore invalid. Otherwise, the parameter estimates will be valid, but the fit statistics will be off. For instance, if the correlation is positive, we will have inflated F- and t-statistics, leading us to overestimate the significance of the model.\nIf the errors are homoskedastic, we can test for autocorrelation using the Durbin-Watson test, which is conveniently reported in the regression summary in statsmodels.", "# Load pricing data for an asset\nstart = '2014-01-01'\nend = '2015-01-01'\ny = get_pricing('DAL', fields='price', start_date=start, end_date=end)\nx = np.arange(len(y))\n\n# Regress pricing data against time\nmodel = regression.linear_model.OLS(y, sm.add_constant(x)).fit()\n\n# Construct the fit line\nprediction = model.params[0] + model.params[1]*x\n\n# Plot pricing data and regression line\nplt.plot(x,y)\nplt.plot(x, prediction, color='r')\nplt.legend(['DAL Price', 'Regression Line'])\nplt.xlabel('Time')\nplt.ylabel('Price')\n\n# Print summary of regression results\nmodel.summary()", "Testing for Autocorrelation\nWe can test for autocorrelation in both our prices and residuals. We'll use the built-in method to do this, which is based on the Ljung-Box test. This test computes the probability that the n-th lagged datapoint is predictive of the current. If no max lag is given, then the function computes a max lag and returns the p-values for all lags up to that one. We can see here that for the 5 most recent datapoints, a significant correlation exists with the current. Therefore we conclude that both the prices and the residuals are autocorrelated.\nWe also test for normality for fun.", "_, prices_qstats, prices_qstat_pvalues = statsmodels.tsa.stattools.acf(y, qstat=True)\n_, resid_qstats, resid_qstat_pvalues = statsmodels.tsa.stattools.acf(y-prediction, qstat=True)\n\nprint 'Prices autocorrelation p-values', prices_qstat_pvalues\nprint 'Residuals autocorrelation p-values', resid_qstat_pvalues\n\n_, jb_pvalue, _, _ = statsmodels.stats.stattools.jarque_bera(y-prediction)\n\nprint 'Jarque-Bera p-value that residuals are normally distributed', jb_pvalue", "Newey-West\nNewey-West is a method of computing variance which accounts for autocorrelation. A naive variance computation will actually produce inaccurate standard errors in the presence of autocorrelation.\nWe can attempt to change the regression equation to eliminate serial correlation. A simpler fix is adjusting the standard errors using an appropriate method and using the adjusted values to check for significance. Below we use the Newey-West method from statsmodels to compute adjusted standard errors for the coefficients. 
They are higher than those originally reported by the regression, which is what we expected for positively correlated errors.", "from math import sqrt\n\n# Find the covariance matrix of the coefficients\ncov_mat = stats.sandwich_covariance.cov_hac(model)\n\n# Print the standard errors of each coefficient from the original model and from the adjustment\nprint 'Old standard errors:', model.bse[0], model.bse[1]\nprint 'Adjusted standard errors:', sqrt(cov_mat[0,0]), sqrt(cov_mat[1,1])", "# Multicollinearity\nWhen using multiple independent variables, it is important to check for multicollinearity; that is, an approximate linear relation between the independent variables, such as\n$$ X_2 \\approx 5 X_1 - X_3 + 4.5 $$\nWith multicollinearity, it is difficult to identify the independent effect of each variable, since we can change around the coefficients according to the linear relation without changing the model. As with truly unnecessary variables, this will usually not hurt the accuracy of the model, but will cloud our analysis. In particular, the estimated coefficients will have large standard errors. The coefficients will also no longer represent the partial effect of each variable, since with multicollinearity we cannot change one variable while holding the others constant.\nHigh correlation between independent variables is indicative of multicollinearity. However, it is not enough on its own, since we would also want to detect correlation between one of the variables and a linear combination of the other variables. If we have high R-squared but low t-statistics on the coefficients (the fit is good but the coefficients are not estimated precisely) we may suspect multicollinearity. To resolve the problem, we can drop one of the independent variables involved in the linear relation.\nFor instance, using two stock indices as our independent variables is likely to lead to multicollinearity. Below we can see that removing one of them improves the t-statistics without hurting R-squared.\nAnother important thing to determine here is which variable may be the causal one. 
If we hypothesize that the market influences both MDY and HPQ, then the market is the variable that we should use in our predictive model.", "# Load pricing data for asset and two market indices\nstart = '2014-01-01'\nend = '2015-01-01'\nb1 = get_pricing('SPY', fields='price', start_date=start, end_date=end)\nb2 = get_pricing('MDY', fields='price', start_date=start, end_date=end)\na = get_pricing('HPQ', fields='price', start_date=start, end_date=end)\n\n# Run multiple linear regression\nmlr = regression.linear_model.OLS(a, sm.add_constant(np.column_stack((b1,b2)))).fit()\n\n# Construct fit curve using dependent variables and estimated coefficients\nmlr_prediction = mlr.params[0] + mlr.params[1]*b1 + mlr.params[2]*b2\n\n# Print regression statistics \nprint 'R-squared:', mlr.rsquared_adj\nprint 't-statistics of coefficients:\\n', mlr.tvalues\n\n# Plot asset and model\na.plot()\nmlr_prediction.plot()\nplt.legend(['Asset', 'Model']);\nplt.ylabel('Price')\n\n# Perform linear regression\nslr = regression.linear_model.OLS(a, sm.add_constant(b1)).fit()\nslr_prediction = slr.params[0] + slr.params[1]*b1\n\n# Print fit statistics\nprint 'R-squared:', slr.rsquared_adj\nprint 't-statistics of coefficients:\\n', slr.tvalues\n\n# Plot asset and model\na.plot()\nslr_prediction.plot()\nplt.ylabel('Price')\nplt.legend(['Asset', 'Model']);", "Example: Anscombe's quartet\nAnscombe constructed 4 datasets which not only have the same mean and variance in each variable, but also the same correlation coefficient, regression line, and R-squared regression value. Below, we test this result as well as plot the datasets. A quick glance at the graphs shows that only the first dataset satisfies the regression model assumptions. Consequently, the high R-squared values of the other three are not meaningful, which agrees with our intuition that the other three are not modeled well by the lines of best fit.", "from scipy.stats import pearsonr\n\n# Construct Anscombe's arrays\nx1 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]\ny1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]\nx2 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]\ny2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]\nx3 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]\ny3 = [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]\nx4 = [8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8]\ny4 = [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]\n\n# Perform linear regressions on the datasets\nslr1 = regression.linear_model.OLS(y1, sm.add_constant(x1)).fit()\nslr2 = regression.linear_model.OLS(y2, sm.add_constant(x2)).fit()\nslr3 = regression.linear_model.OLS(y3, sm.add_constant(x3)).fit()\nslr4 = regression.linear_model.OLS(y4, sm.add_constant(x4)).fit()\n\n# Print regression coefficients, Pearson r, and R-squared for the 4 datasets\nprint 'Coefficients:', slr1.params, slr2.params, slr3.params, slr4.params\nprint 'Pearson r:', pearsonr(x1, y1)[0], pearsonr(x2, y2)[0], pearsonr(x3, y3)[0], pearsonr(x4, y4)[0]\nprint 'R-squared:', slr1.rsquared, slr2.rsquared, slr3.rsquared, slr4.rsquared\n\n# Plot the 4 datasets with their regression lines\nf, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2,2)\nxs = np.arange(20)\nax1.plot(slr1.params[0] + slr1.params[1]*xs, 'r')\nax1.scatter(x1, y1)\nax1.set_xlabel('x1')\nax1.set_ylabel('y1')\nax2.plot(slr2.params[0] + slr2.params[1]*xs, 'r')\nax2.scatter(x2, y2)\nax2.set_xlabel('x2')\nax2.set_ylabel('y2')\nax3.plot(slr3.params[0] + slr3.params[1]*xs, 'r')\nax3.scatter(x3, 
y3)\nax3.set_xlabel('x3')\nax3.set_ylabel('y3')\nax4.plot(slr4.params[0] + slr4.params[1]*xs, 'r')\nax4.scatter(x4,y4)\nax4.set_xlabel('x4')\nax4.set_ylabel('y4');", "This presentation is for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation for any security; nor does it constitute an offer to provide investment advisory or other services by Quantopian, Inc. (\"Quantopian\"). Nothing contained herein constitutes investment advice or offers any opinion with respect to the suitability of any security, and any views expressed herein should not be taken as advice to buy, sell, or hold any security or as an endorsement of any security or company. In preparing the information contained herein, Quantopian, Inc. has not taken into account the investment needs, objectives, and financial circumstances of any particular investor. Any views expressed and data illustrated herein were prepared based upon information, believed to be reliable, available to Quantopian, Inc. at the time of publication. Quantopian makes no guarantees as to their accuracy or completeness. All information is subject to change and may quickly become unreliable for various reasons, including changes in market conditions or economic circumstances." ]
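A useful supplement to the multicollinearity discussion above (not part of the original notebook) is the variance inflation factor, which quantifies how much each coefficient's variance is inflated by collinearity; values well above 10 are a common rule-of-thumb warning sign. The sketch below assumes the b1 and b2 price series from the multicollinearity example are still defined.

```python
# Supplementary sketch: variance inflation factors for the two-index
# regression above. Assumes b1 and b2 are defined as in that example.
from statsmodels.stats.outliers_influence import variance_inflation_factor

X = sm.add_constant(np.column_stack((b1, b2)))
# Skip column 0 (the constant) and report a VIF for each regressor.
for i in range(1, X.shape[1]):
    print 'VIF for regressor', i, ':', variance_inflation_factor(X, i)
```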
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
yhat/ggplot
docs/how-to/Customizing Colors.ipynb
bsd-2-clause
[ "%matplotlib inline\nfrom ggplot import *", "Colors\nggplot comes with a variety of \"scales\" that allow you to theme your plots and make them easier to interpret. In addition to the default color schemes that ggplot provides, there are also several color scales which allow you to specify more targeted \"palettes\" of colors to use in your plots.\nscale_color_brewer\nscale_color_brewer provides sets of colors that are optimized for displaying data on maps. It comes from Cynthia Brewer's aptly named Color Brewer. Lucky for us, these palettes also look great on plots that aren't maps.", "ggplot(aes(x='carat', y='price', color='clarity'), data=diamonds) +\\\n geom_point() +\\\n scale_color_brewer(type='qual')\n\nggplot(aes(x='carat', y='price', color='clarity'), data=diamonds) + \\\n geom_point() + \\\n scale_color_brewer(type='seq')\n\nggplot(aes(x='carat', y='price', color='clarity'), data=diamonds) + \\\n geom_point() + \\\n scale_color_brewer(type='seq', palette=4)\n\nggplot(aes(x='carat', y='price', color='clarity'), data=diamonds) + \\\n geom_point() + \\\n scale_color_brewer(type='div', palette=5)", "scale_color_gradient\nscale_color_gradient allows you to create gradients of colors that can represent a spectrum of values. For instance, if you're displaying temperature data, you might want to have lower values be blue, hotter values be red, and middle values be somewhere in between. scale_color_gradient will calculate the color each point should be, even the in-between ones.", "import pandas as pd\ntemperature = pd.DataFrame({\"celsius\": range(-88, 58)})\ntemperature['farenheit'] = temperature.celsius*1.8 + 32\ntemperature['kelvin'] = temperature.celsius + 273.15\n\nggplot(temperature, aes(x='celsius', y='farenheit', color='kelvin')) + \\\n geom_point() + \\\n scale_color_gradient(low='blue', high='red')\n\nggplot(aes(x='x', y='y', color='z'), data=diamonds.head(1000)) +\\\n geom_point() +\\\n scale_color_gradient(low='red', high='white')\n\nggplot(aes(x='x', y='y', color='z'), data=diamonds.head(1000)) +\\\n geom_point() +\\\n scale_color_gradient(low='#05D9F6', high='#5011D1')\n\nggplot(aes(x='x', y='y', color='z'), data=diamonds.head(1000)) +\\\n geom_point() +\\\n scale_color_gradient(low='#E1FA72', high='#F46FEE')", "scale_color_manual\nWant to just specify the colors yourself? No problem, just use scale_color_manual. Add it to your plot as a layer and specify the colors you'd like using a list.", "my_colors = [\n \"#ff7f50\",\n \"#ff8b61\",\n \"#ff9872\",\n \"#ffa584\",\n \"#ffb296\",\n \"#ffbfa7\",\n \"#ffcbb9\",\n \"#ffd8ca\",\n \"#ffe5dc\",\n \"#fff2ed\"\n]\nggplot(aes(x='carat', y='price', color='clarity'), data=diamonds) + \\\n geom_point() + \\\n scale_color_manual(values=my_colors)\n\n# https://coolors.co/app/69a2b0-659157-a1c084-edb999-e05263\nggplot(aes(x='carat', y='price', color='cut'), data=diamonds) + \\\n geom_point() + \\\n scale_color_manual(values=['#69A2B0', '#659157', '#A1C084', '#EDB999', '#E05263'])" ]
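One trick worth knowing when scale_color_manual needs more colors than you care to type by hand: generate evenly spaced hex values from a matplotlib colormap. The sketch below is a supplementary example on the diamonds data already loaded; the 'Blues' colormap is an arbitrary illustrative choice.

```python
# Supplementary sketch: build a scale_color_manual palette from a matplotlib
# colormap instead of hand-typing hex strings. 'Blues' is an arbitrary choice.
import matplotlib.cm as cm
import matplotlib.colors as mcolors

n = len(diamonds.cut.unique())
# Offset by 0.3 so the lightest color is not nearly white.
generated = [mcolors.rgb2hex(cm.Blues(0.3 + 0.7 * i / float(n - 1))) for i in range(n)]

ggplot(aes(x='carat', y='price', color='cut'), data=diamonds) + \
    geom_point() + \
    scale_color_manual(values=generated)
```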
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
aliasm2k/sms-spam
sms-spam.ipynb
mit
[ "SMS Spam Dataset Exploration\nIntroduction\nThis Jupyter Notebook explores the SMS Spam Collection dataset from the UCI Machine Learning Repository and compares the performance of various machine learning algorithms in text processing.\nData Wrangling\nTo begin with, let's load the dataset into a Pandas Dataframe.", "import csv\nimport pandas as pd\n\nsms_spam_df = pd.read_csv('sms-spam.tsv', quoting=csv.QUOTE_NONE, sep='\\t', names=['label', 'message'])\nsms_spam_df.head()", "Missing values skew the dataset, and should be avoided. Let's see if the dataset has any missing values.", "sms_spam_df.isnull().values.any()", "Now that we are sure there are no missing values, let's have some fun by checking stats about spam and ham (non-spam) messages in the dataset.", "sms_spam = sms_spam_df.groupby('label')['message']\nsms_spam.describe()", "Data Preprocessing\nFor messages to be understood by machine learning algorithms, they have to be converted into vectors. To do that, we have to first split our messages into tokens (list of words). This technique is called the Bag of Words model, as in the end we are left with a collection (bag) of word vectors. The following methods can be used to vectorize messages:\n 1. Tokenization: splitting messages into individual words.\n 2. Lemmatization: splitting messages into individual words and converting them into their base form (lemma).\nTokenization\nTokenization simply splits the message into individual tokens.", "from textblob import TextBlob\n\ndef tokenize(message):\n message = unicode(message, 'utf8')\n return TextBlob(message).words", "Let's try applying this to some of our messages. Here are the original messages we are going to tokenize.", "sms_spam_df['message'].head()", "Now, here are those messages tokenized.", "sms_spam_df['message'].head().apply(tokenize)", "As you can see, tokenization simply splits the message into tokens. \nLemmatization\nThe textblob library provides tools that can convert each word in a message to its base form (lemma).", "from textblob import TextBlob\n\ndef lemmatize(message):\n message = unicode(message, 'utf8').lower()\n return [word.lemma for word in TextBlob(message).words]", "Alright, here are the first few of our original messages.", "sms_spam_df['message'].head()", "And, here are our messages lemmatized.", "sms_spam_df['message'].head().apply(lemmatize)", "As you can see, lemmatization converts messages into their base form; for example, goes becomes go as you may notice from the last message.\nVectorization\nAs already mentioned, machine learning algorithms can only understand vectors and not text. Converting lists of words (obtained after tokenization or lemmatization) into vectors involves the following steps:\n\nTerm Frequency (TF): Determine the frequency of each word in the message.\nInverse Document Frequency (IDF): Weigh the frequency of each word in the message such that more frequent words get lower weights.\nNormalization: Normalize message vectors to unit length.\n\nCount Vectorization\nCount Vectorization obtains the frequency of unique words in each tokenized message.", "from sklearn.feature_extraction.text import CountVectorizer\n\n\"\"\"Bag of Words Transformer using lemmatization\"\"\"\n\nbow_transformer = CountVectorizer(analyzer=lemmatize)\nbow_transformer.fit(sms_spam_df['message'])", "Now, let's try out the Bag of Words transformer on a dummy message.", "dummy_vectorized = bow_transformer.transform(['Hey you... you of the you... This message is to you.'])\nprint dummy_vectorized", "So, the message Hey you... you of the you... 
This message is to you. contains 8 unique words, of which you is repeated 4 times. Hope you can guess what the vector representation of you is. Hint: you is repeated 4 times.", "bow_transformer.get_feature_names()[8737]", "Now, let's transform the entire set of messages in our dataset.", "msgs_vectorized = bow_transformer.transform(sms_spam_df['message'])\nmsgs_vectorized.shape", "TF-IDF Transformation\nNow that we have obtained a vectorized representation of the messages in our dataset, we can use it to weigh words such that words with high frequency have a lower weight (Inverse Document Frequency). This process also performs normalization of messages.", "from sklearn.feature_extraction.text import TfidfTransformer\n\n\"\"\"TFIDF Transformer using vectorized messages\"\"\"\n\ntfidf_transformer = TfidfTransformer().fit(msgs_vectorized)", "Let's use this transformer to weigh the previous message; Hey you... you of the you... This message is to you.", "dummy_transformed = tfidf_transformer.transform(dummy_vectorized)\nprint dummy_transformed", "Now, let's check the IDF for you, the most frequently repeated word in the message, against hey, a rarely repeated word.", "print '{}: {}'.format('you', tfidf_transformer.idf_[bow_transformer.vocabulary_['you']])\nprint '{}: {}'.format('hey', tfidf_transformer.idf_[bow_transformer.vocabulary_['hey']])", "As you can see, words with lower frequency are weighed higher than words with higher frequency in the dataset.\nNow, to weigh and normalize all messages in our dataset.", "msgs_tfidf = tfidf_transformer.transform(msgs_vectorized)\nmsgs_tfidf.shape", "Naive Bayes Classifier\nHaving converted the text messages into vectors, we can feed them to machine learning algorithms. Naive Bayes is a classification algorithm commonly used in text processing.", "from sklearn.naive_bayes import MultinomialNB\n\n\"\"\"Naive Bayes classifier trained with vectorized messages and its corresponding labels\"\"\"\n\nnb_clf = MultinomialNB(alpha=0.25)\nnb_clf.fit(msgs_tfidf, sms_spam_df['label'])", "Predictions\nNow that we have a trained classifier, it can be used for prediction.", "msgs_pred = nb_clf.predict(msgs_tfidf)", "Accuracy Score\nLet's check the accuracy of our classifier.", "from sklearn.metrics import accuracy_score\n\nprint 'Accuracy Score: {}'.format(accuracy_score(sms_spam_df['label'], msgs_pred))", "Conclusion?\nWoah! 99% accuracy! Do you really believe that is right? Think again...\nTake Two\nNow, let's improve our procedure. This time, we will do machine learning the way it's meant to be done.\nSplitting Dataset\nFor our demonstration, we trained a Naive Bayes classifier on the entire dataset. Then we tested our classifier on the same complete dataset. In doing so, we are actually overfitting our classifier.\nA better approach would be to split our dataset into two partitions; one for training the classifier and another for testing the classifier. The sklearn library provides just what we need.", "from sklearn.model_selection import train_test_split\n\nmsgs_train, msgs_test, lbls_train, lbls_test = \\\n train_test_split(sms_spam_df['message'], sms_spam_df['label'], test_size=0.2)", "Pipeline\nAs mentioned in the demonstration, we cannot directly feed our text messages to the machine learning algorithm. They have to be vectorized. If you remember, vectorization involved two processes:\n 1. Counting words in each message and converting the dataset into one large matrix (Count Vectorization).\n 2. 
Weighing words based on their frequency (TF-IDF Transformation) and normalization.\nOnce the preprocessing is complete, we can construct the classifier.\nThese operations can be pipelined using the Pipeline class from the sklearn library.", "from sklearn.pipeline import Pipeline\n\n\"\"\"Pipeline CountVectorizer, TfidfTransformer and Naive Bayes Classifier\"\"\"\n\npipeline = Pipeline([\n ('bow', CountVectorizer(analyzer=lemmatize)),\n ('tfidf', TfidfTransformer()),\n ('clf', MultinomialNB(alpha=0.25))\n])", "Cross Validation\nCross validation (K-Folds cross validation) involves splitting the training set again into k partitions such that 1 partition is used for testing and the remaining k-1 partitions are used for training. The process is repeated k times, and the average score obtained is considered the score of the machine learning model.\nThe cross_val_score function of the sklearn library can be used to determine the cross validation score of a model.", "from sklearn.model_selection import cross_val_score\n\nscores = cross_val_score(\n pipeline, \n msgs_train, \n lbls_train,\n cv=10,\n scoring='accuracy',\n n_jobs=-1\n)\n\nprint scores", "Tuning the Model\nUsing a pipeline, we were able to construct a model that parses text messages and classifies them. This model is limited, and not tuned for optimal performance. Each of the model components (namely, CountVectorizer, TfidfTransformer and MultinomialNB) has its own set of hyperparameters which can be set for optimal performance.\nOne method to tune a model is Grid Search, which allows us to define a set of hyperparameters for each component of the model, and then exhaustively searches for the combination of these parameters that provides the best cross validation score.", "from sklearn.model_selection import GridSearchCV\nfrom sklearn.model_selection import StratifiedKFold\n\nparams = {\n 'bow__analyzer': (lemmatize, tokenize),\n 'tfidf__use_idf': (True, False),\n}\n\n\nmodel = GridSearchCV(\n pipeline, \n params,\n refit=True,\n n_jobs=-1,\n scoring='accuracy',\n cv=StratifiedKFold(n_splits=5)\n)", "Our model is almost ready. All we have to do is train it. Also, we will be timing the training operation.", "%time model = model.fit(msgs_train, lbls_train)", "Now that our model is trained, let's try it out.", "print model.predict(['Hi! How are you?'])[0]\nprint model.predict(['Congratulations! You won free credits!'])[0]", "For more fun, here is how our model scores on the held-out test set.", "msgs_pred = model.predict(msgs_test)\nprint 'Accuracy Score: {}'.format(accuracy_score(lbls_test, msgs_pred))", "The scores are a bit lower than the previous results we obtained when we tested our model on the unsplit data, but they are more reliable." ]
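Since accuracy alone can be misleading on an imbalanced dataset like this one (most messages are ham), a per-class breakdown is worth printing. The sketch below adds an actual classification report and confusion matrix, using the held-out labels and predictions from the cells above.

```python
# Supplementary sketch: precision/recall/F1 per class and a confusion matrix,
# computed from lbls_test and msgs_pred defined in the cells above.
from sklearn.metrics import classification_report, confusion_matrix

print classification_report(lbls_test, msgs_pred)
print confusion_matrix(lbls_test, msgs_pred)
```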
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
adamsteer/nci-notebooks
pgpointcloud/PGpointlcloud tests.ipynb
apache-2.0
[ "Some tests with pgpointcloud\n\nLAS files are intractable for subsetting and visualisation without LASTools or Terrasolid or Bentley Map (or similar) which are awesome but costly.\nFurther, LAS files don't really let us store all manner of things along with the data, for example we already have issues storing digitised waveforms with the ACT dataset\nso what can we do? Here are some experiments with PostGIS-pointcloud, one approach to point cloud data management\n\nSome things about this exercise:\n- I am a postGIS/pgpointcloud n00b. An SQL ninja could probably do a lot better than this and a lot faster!\n- The data set used here has ~406 000 000 points in it\n- made from nine ACT 8pt tiles (no waveform data, because stuff)\n- ingested using a PDAL pipeline (http://pdal.io)\n- LAS storage is ~20 GB\n- PG-pointcloud table is ~5 GB, so reasonably similar to .LAZ\n- there are 82 634 'patches' containing 5 000 points each\n- patches are the primary query tool, so we index over 82 634 rows, which are arranged in a quasi-space-filling-curve\n- tradeoff between number of points in patch, and number of rows in DB\n- but generally speaking, scalable (nearly... research shows that we will hit a limit in a few hundred million points time)\nGathering modules", "import os\n\nimport psycopg2 as ppg\nimport numpy as np\nimport ast\nfrom osgeo import ogr\n\nimport shapely as sp\nfrom shapely.geometry import Point,Polygon,asShape\nfrom shapely.wkt import loads as wkt_loads\nfrom shapely import speedups\n\n\nimport cartopy as cp\nimport cartopy.crs as ccrs\n\nimport pandas as pd\nimport pandas.io.sql as pdsql\n\nimport geopandas as gp\n\nfrom matplotlib.collections import PatchCollection\nfrom mpl_toolkits.basemap import Basemap\n\nimport fiona\n\nfrom descartes import PolygonPatch\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nspeedups.available\n\nspeedups.enable()", "Making a postgres connection with psycopg2", "# PGPASSWORD=pointy_cloudy psql -h localhost -d pcbm_pc -U pcbm\npg_connection = \"dbname=pcbm_pc user=pcbm password=pointy_cloudy host=130.56.244.246\"\nconn = ppg.connect(pg_connection)\ncursor = conn.cursor()", "First query for some blocks - this gets them all!", "#blocks_query = \"SELECT pa::geometry(Polygon, 28355) AS geom, PC_PatchAvg(pa, 'Z') AS elevation, id FROM act_patches;\"\n\nblocks_query = \"SELECT st_astext(PC_envelope(pa)::geometry(Polygon, 28355)) AS geom, PC_PatchAvg(pa, 'Z') AS elevation, id FROM act_patches;\"\n\n%time blks = pdsql.read_sql(blocks_query, conn)", "here I was trying to convert a Pandas dataframe to a GeoPandas dataframe with a geometry column", "gblocks = gp.GeoDataFrame(blks)\n\ngblocks.head()\n\npolys = gp.GeoSeries(blks.geom)\npatches = gp.GeoDataFrame({'geometry': polys, 'elevation': blks.elevation})\n\npatches.head()", "...but I gave up and ingested data straight into a GeoPandas frame instead", "blocks_query = \"SELECT pa::geometry(Polygon, 28355) AS geom, PC_PatchAvg(pa, 'Z') AS elevation, id FROM act_patches where PC_PatchAvg(pa, 'Z') > 700;\"\n\n%time thepatches = gp.read_postgis(blocks_query, conn)\n\nthepatches.head()\n\n%time highpatches = thepatches.query('elevation > 820')\nhighpatches.head()", "Let's map the patches of data we collected - Black Mountain, above 820m high", "highpatches.plot(column='elevation',colormap='BrBG')", "Now collect the points from the same region", "points_query = \"with pts as(SELECT PC_Explode(pa) as pt FROM act_patches where PC_PatchAvg(pa, 'Z') > 820 ) select st_astext(pt::geometry) from pts;\"\n\n#get 
raw point data, not as a geometry\n#points_query = \"SELECT PC_astext(PC_Explode(pa)) as pt FROM act_patches where PC_PatchAvg(pa, 'Z') > 700 ;\"\n\n%time pts = pdsql.read_sql(points_query, conn)\n\n# point storage schema:\n# 1 = intens, 2 = ReturnNo, 3 = Numreturns, 4 = scandirectionflag, 5 = edgeofflightline\n# 6 = classification (ASPRS), 7 = scananglerank, 8 = user data, 9 = pointsourceID\n# 10 = R, 11 = G, 12 = B, 13 = GPSTime, 14 = X, 15 = Y, 16 = Z\n\n#how many points did we get?\npts.size\n\n#had to check the schema to find point order...\nschema_query = \"SELECT * FROM pointcloud_formats where pcid = 4;\"\n\nschm = pdsql.read_sql(schema_query, conn)\nprint(schm.schema)\n\npts.head()\n\nthepoints = []\n\nfor point in pts.st_astext:\n this = wkt_loads(point)\n thepoints.append([this.x,this.y,this.z])\n\nthepoints = np.squeeze(thepoints)", "Plot the points", "plt.scatter(thepoints[:,0], thepoints[:,1], c = thepoints[:,2], lw=0, s=5, cmap='BrBG')", "Now make a pretty plot - points, patches in the subset, and all the patches in the region", "fig = plt.figure()\nfig.set_size_inches(25/2.51, 25/2.51)\n\nBLUE = '#6699cc'\nRED = '#cc6699'\n\nax = fig.gca() \nax.scatter(thepoints[:,0], thepoints[:,1], c = thepoints[:,2], lw=0, s=3, cmap='BrBG')\nfor patch in thepatches.geom:\n ax.add_patch(PolygonPatch(patch, fc=BLUE, ec=BLUE, alpha=0.2, zorder=2 ))\n \nfor patch in highpatches.geom:\n ax.add_patch(PolygonPatch(patch, fc=BLUE, ec=RED, alpha=0.2, zorder=2 ))\n\n\n", "Selecting points by classification", "#ASPRS class 6 - buildings\nbldng_query = \"WITH filtered_patch AS (SELECT PC_FilterEquals(pa, 'Classification', 6) as f_patch FROM act_patches where PC_PatchAvg(pa, 'Z') > 820) SELECT st_astext(point::geometry) FROM filtered_patch, pc_explode(f_patch) AS point;\" \n%time bld_pts = pdsql.read_sql(bldng_query, conn)\n\nbld_pts.head()\n\nbldpoints = []\n\nfor point in bld_pts.st_astext:\n this = wkt_loads(point)\n bldpoints.append([this.x,this.y,this.z])\nbldpoints = np.squeeze(bldpoints)\n\n#ASPRS class 2 - ground\ngrnd_query = \"WITH filtered_patch AS (SELECT PC_FilterEquals(pa, 'Classification', 2) as f_patch FROM act_patches where PC_PatchAvg(pa, 'Z') > 820) SELECT st_astext(point::geometry) FROM filtered_patch, pc_explode(f_patch) AS point;\" \n%time grnd_pts = pdsql.read_sql(grnd_query, conn)\n\ngrnd_pts.head()\n\ngrndpoints = []\n\nfor point in grnd_pts.st_astext:\n this = wkt_loads(point)\n grndpoints.append([this.x,this.y,this.z])\ngrndpoints = np.squeeze(grndpoints)\n\n#ASPRS class 5 - high vegetation\nhv_query = \"WITH filtered_patch AS (SELECT PC_FilterEquals(pa, 'Classification', 5) as f_patch FROM act_patches where PC_PatchAvg(pa, 'Z') > 820) SELECT st_astext(point::geometry) FROM filtered_patch, pc_explode(f_patch) AS point;\" \n%time hv_pts = pdsql.read_sql(hv_query, conn)\n\nhv_pts.head()\n\nhvpoints = []\n\nfor point in hv_pts.st_astext:\n this = wkt_loads(point)\n hvpoints.append([this.x,this.y,this.z])\nhvpoints = np.squeeze(hvpoints)\n\nfig = plt.figure()\nfig.set_size_inches(25/2.51, 25/2.51)\n\nBLUE = '#6699cc'\nRED = '#cc6699'\n\nax = fig.gca() \nax.scatter(grndpoints[:,0], grndpoints[:,1], c = grndpoints[:,2], lw=0, s=3, cmap='plasma')\nax.scatter(bldpoints[:,0], bldpoints[:,1], c = bldpoints[:,2], lw=0, s=3, cmap='viridis')\nax.scatter(hvpoints[:,0], hvpoints[:,1], c = hvpoints[:,2], lw=0, s=3, cmap='BrBG')\n\nfor patch in thepatches.geom:\n ax.add_patch(PolygonPatch(patch, fc=BLUE, ec=BLUE, alpha=0.2, zorder=2 ))\n \nfor patch in highpatches.geom:\n 
ax.add_patch(PolygonPatch(patch, fc=BLUE, ec=RED, alpha=0.2, zorder=2 ))", "Add a 3D plot", "#set up for 3d plots\nfrom mpl_toolkits.mplot3d import Axes3D\nimport matplotlib.pylab as pylab\n\nplt_az=300\nplt_elev = 50.\nplt_s = 2\n\ncb_fmt = '%.1f'\n\nfig = plt.figure()\nfig.set_size_inches(30/2.51, 25/2.51)\n\nax0 = fig.add_subplot(111, projection='3d')\nax0.scatter(grndpoints[:,0], grndpoints[:,1],grndpoints[:,2], c=np.ndarray.tolist(grndpoints[:,2]),\\\n lw=0, s=plt_s, cmap='plasma')\nax0.scatter(bldpoints[:,0], bldpoints[:,1],bldpoints[:,2], c=np.ndarray.tolist(bldpoints[:,2]),\\\n lw=0, s=plt_s, cmap='viridis')\nax0.scatter(hvpoints[:,0], hvpoints[:,1],hvpoints[:,2], c=np.ndarray.tolist(hvpoints[:,2]),\\\n lw=0, s=plt_s-1, cmap='BrBG')", "Export to three.js?", "import vtk\n\nnp.savetxt('ground_points.txt', grndpoints, delimiter=',')\nnp.savetxt('bld_points.txt', bldpoints, delimiter=',')\nnp.savetxt('hv_points.txt', hvpoints, delimiter=',')", "To do:\n\ntoo many things!\nselect from database by:\nclass (demonstrated here)\nheight above ground (need to integrate PDAL and PCL)\ntree cover\nintersection with objects\n\n\nthings on the list:\ncomparing LandSAT bare ground and LIDAR bare ground\ntree heights and geophysical properties\n...\n\n\n\nAll very cool but why?\nNational elevation maps - storing and managing many billions of points as a coherent dataset for precise elevation estimation. Also aiming to store provenance - if there's a data issue, we need more than just points. We need to figure out why the issue occurred, and fix it. We can also store things like point accuracy, some QC metrics, whatever point attributes we like! Or points from manifold sources:\n- airborne LiDAR\n- terrestrial scanners\n- 3D photogrammetry\n- geophysical datasets (already as points in netCDF)\n- output of discrete element models (eg. Kool et al, or new sea ice models in development)" ]
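One of the to-do items above, intersection with objects, can be sketched with pgpointcloud's PC_Intersects. The polygon coordinates below are made-up illustrative values in the same SRID (28355) as the patches; substitute a real geometry of interest.

```python
# Sketch of patch selection by spatial intersection (one of the to-do items).
# The polygon is an arbitrary illustrative rectangle in EPSG:28355.
poly_wkt = ('POLYGON((692000 6092000, 693000 6092000, '
            '693000 6093000, 692000 6093000, 692000 6092000))')
intersect_query = (
    "SELECT pa::geometry(Polygon, 28355) AS geom, id "
    "FROM act_patches "
    "WHERE PC_Intersects(pa, ST_GeomFromText('{0}', 28355));".format(poly_wkt)
)
intersecting_patches = gp.read_postgis(intersect_query, conn)
```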
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
AllenDowney/ModSim
soln/chap09.ipynb
gpl-2.0
[ "Chapter 9\nModeling and Simulation in Python\nCopyright 2021 Allen Downey\nLicense: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International", "# install Pint if necessary\n\ntry:\n import pint\nexcept ImportError:\n !pip install pint\n\n# download modsim.py if necessary\n\nfrom os.path import exists\n\nfilename = 'modsim.py'\nif not exists(filename):\n from urllib.request import urlretrieve\n url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'\n local, _ = urlretrieve(url+filename, filename)\n print('Downloaded ' + local)\n\n# import functions from modsim\n\nfrom modsim import *", "In this chapter we express the models from previous chapters as\ndifference equations and differential equations, solve the equations,\nand derive the functional forms of the solutions. We also discuss the\ncomplementary roles of mathematical analysis and simulation.\nRecurrence relations\nThe population models in the previous chapter and this one are simple\nenough that we didn't really need to run simulations. We could have\nsolved them mathematically. For example, we wrote the constant growth\nmodel like this:\nresults[t+1] = results[t] + annual_growth\nIn mathematical notation, we would write the same model like this:\n$$x_{n+1} = x_n + c$$ \nwhere $x_n$ is the population during year $n$,\n$x_0$ is a given initial population, and $c$ is constant annual growth.\nThis way of representing the model is a recurrence relation; see\nhttp://modsimpy.com/recur.\nSometimes it is possible to solve a recurrence relation by writing an\nequation that computes $x_n$, for a given value of $n$, directly; that\nis, without computing the intervening values from $x_1$ through\n$x_{n-1}$.\nIn the case of constant growth we can see that $x_1 = x_0 + c$, and\n$x_2 = x_1 + c$. Combining these, we get $x_2 = x_0 + 2c$, then\n$x_3 = x_0 + 3c$, and it is not hard to conclude that in general\n$$x_n = x_0 + nc$$ \nSo if we want to know $x_{100}$ and we don't care\nabout the other values, we can compute it with one multiplication and\none addition.\nWe can also write the proportional model as a recurrence relation:\n$$x_{n+1} = x_n + \\alpha x_n$$ \nOr more conventionally as:\n$$x_{n+1} = x_n (1 + \\alpha)$$ \nNow we can see that\n$x_1 = x_0 (1 + \\alpha)$, and $x_2 = x_0 (1 + \\alpha)^2$, and in general\n$$x_n = x_0 (1 + \\alpha)^n$$ \nThis result is a geometric progression;\nsee http://modsimpy.com/geom. 
When $\\alpha$ is positive, the factor\n$1+\\alpha$ is greater than 1, so the elements of the sequence grow\nwithout bound.\nFinally, we can write the quadratic model like this:\n$$x_{n+1} = x_n + \\alpha x_n + \\beta x_n^2$$ \nor with the more\nconventional parameterization like this:\n$$x_{n+1} = x_n + r x_n (1 - x_n / K)$$ \nThere is no analytic solution to\nthis equation, but we can approximate it with a differential equation\nand solve that, which is what we'll do in the next section.\nDifferential equations\nStarting again with the constant growth model \n$$x_{n+1} = x_n + c$$ \nIf we define $\\Delta x$ to be the change in $x$ from one time step to the next, we can write: \n$$\\Delta x = x_{n+1} - x_n = c$$ \nIf we define\n$\\Delta t$ to be the time step, which is one year in the example, we can\nwrite the rate of change per unit of time like this:\n$$\\frac{\\Delta x}{\\Delta t} = c$$ \nThis model is discrete, which\nmeans it is only defined at integer values of $n$ and not in between.\nBut in reality, people are born and die all the time, not once a year,\nso a continuous model might be more realistic.\nWe can make this model continuous by writing the rate of change in the\nform of a derivative: \n$$\\frac{dx}{dt} = c$$ \nThis way of representing the model is a differential equation; see http://modsimpy.com/diffeq.\nWe can solve this differential equation if we multiply both sides by\n$dt$: \n$$dx = c dt$$ \nAnd then integrate both sides: \n$$x(t) = c t + x_0$$\nSimilarly, we can write the proportional growth model like this:\n$$\\frac{\\Delta x}{\\Delta t} = \\alpha x$$ \nAnd as a differential equation\nlike this: \n$$\\frac{dx}{dt} = \\alpha x$$ \nIf we multiply both sides by\n$dt$ and divide by $x$, we get \n$$\\frac{1}{x}~dx = \\alpha~dt$$ \nNow we\nintegrate both sides, yielding: \n$$\\ln x = \\alpha t + K$$ \nwhere $\\ln$ is the natural logarithm and $K$ is the constant of integration.\nExponentiating both sides, we have\n$$\\exp(\\ln(x)) = \\exp(\\alpha t + K)$$ \nThe exponential function can be written $\\exp(x)$ or $e^x$. In this book I use the first form because it resembles the Python code.\nWe can rewrite the previous equation as\n$$x = \\exp(\\alpha t) \\exp(K)$$ \nSince $K$ is an arbitrary constant,\n$\\exp(K)$ is also an arbitrary constant, so we can write\n$$x = C \\exp(\\alpha t)$$ \nwhere $C = \\exp(K)$. There are many solutions\nto this differential equation, with different values of $C$. The\nparticular solution we want is the one that has the value $x_0$ when\n$t=0$.\nWhen $t=0$, $x(t) = C$, so $C = x_0$ and the solution we want is\n$$x(t) = x_0 \\exp(\\alpha t)$$ If you would like to see this derivation\ndone more carefully, you might like this video:\nhttp://modsimpy.com/khan1.\nAnalysis and simulation\nOnce you have designed a model, there are generally two ways to proceed: simulation and analysis. Simulation often comes in the form of a computer program that models changes in a system over time, like births and deaths, or bikes moving from place to place. 
Analysis often comes in the form of algebra; that is, symbolic manipulation using mathematical notation.\nAnalysis and simulation have different capabilities and limitations.\nSimulation is generally more versatile; it is easy to add and remove\nparts of a program and test many versions of a model, as we have done in the previous examples.\nBut there are several things we can do with analysis that are harder or impossible with simulations:\n\n\nWith analysis we can sometimes compute, exactly and efficiently, a\n value that we could only approximate, less efficiently, with\n simulation. For example, in the quadratic model we plotted growth rate versus population and saw net crossed through zero when the population is\n near 14 billion. We could estimate the crossing point using a\n numerical search algorithm (more about that later). But with the\n analysis in Section xxx, we get the general result that\n $K=-\\alpha/\\beta$.\n\n\nAnalysis often provides \"computational shortcuts\", that is, the\n ability to jump forward in time to compute the state of a system\n many time steps in the future without computing the intervening\n states.\n\n\nWe can use analysis to state and prove generalizations about models;\n for example, we might prove that certain results will always or\n never occur. With simulations, we can show examples and sometimes\n find counterexamples, but it is hard to write proofs.\n\n\nAnalysis can provide insight into models and the systems they\n describe; for example, sometimes we can identify regimes of\n qualitatively different behavior and key parameters that control\n those behaviors.\n\n\nWhen people see what analysis can do, they sometimes get drunk with\npower, and imagine that it gives them a special ability to see past the veil of the material world and discern the laws of mathematics that govern the universe. When they analyze a model of a physical system, they talk about \"the math behind it\" as if our world is the mere shadow of a world of ideal mathematical entities (I am not making this up; see http://modsimpy.com/plato.).\nThis is, of course, nonsense. Mathematical notation is a language\ndesigned by humans for a purpose, specifically to facilitate symbolic\nmanipulations like algebra. Similarly, programming languages are\ndesigned for a purpose, specifically to represent computational ideas\nand run programs.\nEach of these languages is good for the purposes it was designed for and less good for other purposes. But they are often complementary, and one of the goals of this book is to show how they can be used together.\nAnalysis with WolframAlpha\nUntil recently, most analysis was done by rubbing graphite on wood\npulp, a process that is laborious and error-prone. A useful\nalternative is symbolic computation. 
Analysis with WolframAlpha\nUntil recently, most analysis was done by rubbing graphite on wood\npulp, a process that is laborious and error-prone. A useful\nalternative is symbolic computation. If you have used services like\nWolframAlpha, you have used symbolic computation.\nFor example, if you go to https://www.wolframalpha.com/ and type\ndf(t) / dt = alpha f(t)\nWolframAlpha infers that f(t) is a function of t and alpha is a\nparameter; it classifies the query as a "first-order linear ordinary\ndifferential equation", and reports the general solution:\n$$f(t) = c_1 \exp(\alpha t)$$ \nIf you add a second equation to specify\nthe initial condition:\ndf(t) / dt = alpha f(t), f(0) = p_0\nWolframAlpha reports the particular solution:\n$$f(t) = p_0 \exp(\alpha t)$$\nWolframAlpha is based on Mathematica, a powerful programming language\ndesigned specifically for symbolic computation.\nAnalysis with SymPy\nPython has a library called SymPy that provides symbolic computation\ntools similar to Mathematica. They are not as easy to use as\nWolframAlpha, but they have some other advantages.\nBefore we can use SymPy, we have to import it:\nSymPy defines a Symbol object that represents symbolic variable names,\nfunctions, and other mathematical entities.\nThe symbols function takes a string and returns Symbol objects. So\nif we run this assignment:", "from sympy import symbols\n\nt = symbols('t')", "Python understands that t is a symbol, not a numerical value. If we\nnow run", "expr = t + 1\nexpr", "Python doesn't try to perform numerical addition; rather, it creates a\nnew Symbol that represents the sum of t and 1. We can evaluate the\nsum using subs, which substitutes a value for a symbol. This example\nsubstitutes 2 for t:", "expr.subs(t, 2)", "Functions in SymPy are represented by a special kind of Symbol:", "from sympy import Function\n\nf = Function('f')\nf", "Now if we write f(t), we get an object that represents the evaluation of a function, $f$, at a value, $t$.", "f(t)", "But again SymPy doesn't actually\ntry to evaluate it.\nDifferential equations in SymPy\nSymPy provides a function, diff, that can differentiate a function. We can apply it to f(t) like this:", "from sympy import diff\n\ndfdt = diff(f(t), t)\ndfdt", "The result is a Symbol that represents the derivative of f with\nrespect to t. But again, SymPy doesn't try to compute the derivative\nyet.\nTo represent a differential equation, we use Eq:", "from sympy import Eq\n\nalpha = symbols('alpha')\neq1 = Eq(dfdt, alpha*f(t))\neq1", "The result is an object that represents an equation. Now\nwe can use dsolve to solve this differential equation:", "from sympy import dsolve\n\nsolution_eq = dsolve(eq1)\nsolution_eq", "The result is the general\nsolution, which still contains an unspecified constant, $C_1$. To get the particular solution where $f(0) = p_0$, we substitute p_0 for C1. 
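Before doing the substitution by hand, a quick optional check, as a sketch: SymPy provides checkodesol, which substitutes a candidate solution back into an ODE; a result of (True, 0) means the residual simplifies to zero.", "# Sketch: confirm that dsolve's general solution really satisfies eq1\nfrom sympy import checkodesol\n\ncheckodesol(eq1, solution_eq)   # expect (True, 0)", "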
First, we have to create two more symbols:", "C1, p_0 = symbols('C1 p_0')", "Now we can perform the substitution:", "particular = solution_eq.subs(C1, p_0)\nparticular", "The result is called the exponential growth curve; see\nhttp://modsimpy.com/expo.\nSolving the quadratic growth model\nWe'll use the (r, K) parameterization, so we'll need two more symbols:", "r, K = symbols('r K')", "Now we can write the differential equation.", "eq2 = Eq(diff(f(t), t), r * f(t) * (1 - f(t)/K))\neq2", "And solve it.", "solution_eq = dsolve(eq2)\nsolution_eq", "The result, solution_eq, has an attribute rhs, which is the right-hand side of the solution.", "general = solution_eq.rhs\ngeneral", "We can evaluate the right-hand side at $t=0$", "at_0 = general.subs(t, 0)\nat_0", "Now we want to find the value of C1 that makes f(0) = p_0.\nSo we'll create the equation at_0 = p_0 and solve for C1. Because this is just an algebraic identity, not a differential equation, we use solve, not dsolve.", "from sympy import solve\n\nsolutions = solve(Eq(at_0, p_0), C1)", "The result from solve is a list of solutions. In this case, we have reason to expect only one solution, but we still get a list, so we have to use the bracket operator, [0], to select the first one.", "type(solutions), len(solutions)\n\nvalue_of_C1 = solutions[0]\nvalue_of_C1", "Now in the general solution, we want to replace C1 with the value of C1 we just figured out.", "particular = general.subs(C1, value_of_C1)\nparticular", "The result is complicated, but SymPy provides a function that tries to simplify it.", "particular.simplify()", "This function is called the logistic growth curve; see\nhttp://modsimpy.com/logistic. In the context of growth models, the\nlogistic function is often written like this:\n$$f(t) = \frac{K}{1 + A \exp(-rt)}$$ \nwhere $A = (K - p_0) / p_0$.\nWe can use SymPy to confirm that these two forms are equivalent. First we represent the alternative version of the logistic function:", "A = (K - p_0) / p_0\nA\n\nfrom sympy import exp\n\nlogistic = K / (1 + A * exp(-r*t))\nlogistic", "To see whether two expressions are equivalent, we can check whether their difference simplifies to 0.", "(particular - logistic).simplify()", "This test only works one way: if SymPy says the difference reduces to 0, the expressions are definitely equivalent (and not just numerically close).\nBut if SymPy can't find a way to simplify the result to 0, that doesn't necessarily mean there isn't one. Testing whether two expressions are equivalent is a surprisingly hard problem; in fact, there is no algorithm that can solve it in general.\nIf you would like to see this differential equation solved by hand, you might like this video: http://modsimpy.com/khan2
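\nAs a last sanity check, a minimal numerical sketch (the evaluation point and parameter values below are arbitrary, chosen only for illustration): lambdify turns both symbolic forms into plain Python functions, which should then agree wherever we evaluate them.", "# Sketch: spot-check the two logistic forms numerically at one arbitrary point\nfrom sympy import lambdify\n\nf1 = lambdify((t, r, K, p_0), particular)\nf2 = lambdify((t, r, K, p_0), logistic)\nprint(f1(3, 0.5, 100, 5), f2(3, 0.5, 100, 5))   # the two values should match", "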
Summary\nIn this chapter we wrote the growth models from the previous chapters in terms of difference and differential equations.\nWhat I called the constant growth model is more commonly called linear growth because the solution is a line. If we model time as continuous, the solution is\n$$f(t) = p_0 + \alpha t$$\nwhere $\alpha$ is the growth rate.\nSimilarly, the proportional growth model is usually called exponential growth because the solution is exponential:\n$$f(t) = p_0 \exp(\alpha t)$$\nFinally, the quadratic growth model is called logistic growth because the solution is the logistic function:\n$$f(t) = \frac{K}{1 + A \exp(-rt)}$$ \nwhere $A = (K - p_0) / p_0$.\nI avoided these terms until now because they are based on results we had not derived yet.\nExercises\nExercise: Solve the quadratic growth equation using the alternative parameterization\n$\frac{df(t)}{dt} = \alpha f(t) + \beta f^2(t) $", "# Solution\n\nalpha, beta = symbols('alpha beta')\n\n# Solution\n\neq3 = Eq(diff(f(t), t), alpha*f(t) + beta*f(t)**2)\neq3\n\n# Solution\n\nsolution_eq = dsolve(eq3)\nsolution_eq\n\n# Solution\n\ngeneral = solution_eq.rhs\ngeneral\n\n# Solution\n\nat_0 = general.subs(t, 0)\n\n# Solution\n\nsolutions = solve(Eq(at_0, p_0), C1)\nvalue_of_C1 = solutions[0]\nvalue_of_C1\n\n# Solution\n\nparticular = general.subs(C1, value_of_C1)\nparticular.simplify()", "Exercise: Use WolframAlpha to solve the quadratic growth model, using either or both forms of parameterization:\ndf(t) / dt = alpha f(t) + beta f(t)^2\n\nor\ndf(t) / dt = r f(t) (1 - f(t)/K)\n\nFind the general solution and also the particular solution where f(0) = p_0." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mohanprasath/Course-Work
machine_learning/learning_python_3.ipynb
gpl-3.0
[ "Learning Python3 from URL\nhttps://learnpythonthehardway.org/python3/\nExercise 1: A Good First Program\nhttps://learnpythonthehardway.org/python3/ex1.html", "print(\"Hello World!\")\nprint(\"Hello Again\")\nprint(\"I like typing this.\")\nprint(\"This is fun.\")\nprint('Yay! Printing.')\nprint(\"I'd much rather you 'not'.\")\nprint('I \"said\" do not touch this.')\n\n'''\nNotes:\n octothorpe, mesh, or pund #\n'''", "Exercise 2: Comments and Pound Characters\nhttps://learnpythonthehardway.org/python3/ex2.html", "# A comment, this is so you can read your program later.\n# Anything after the # is ignored by python.\n\nprint(\"I could have code like this.\") # and the comment after is ignored\n\n# You can also use a comment to \"disable\" or comment out code:\n# print(\"This won't run.\")\n\nprint(\"This will run.\")", "Exercise 3: Numbers and Math\nhttps://learnpythonthehardway.org/python3/ex3.html", "# BODMAS\nprint(\"I will now count my chickens:\")\n\nprint(\"Hens\", 25 + 30 / 6)\nprint(\"Roosters\", 100 - 25 * 3 % 4)\n\nprint(\"Now I will count the eggs:\")\n\nprint(3 + 2 + 1 - 5 + 4 % 2 - 1 / 4 + 6)\n\nprint(\"Is it true that 3 + 2 < 5 - 7?\")\n\nprint(3 + 2 < 5 - 7)\n\nprint(\"What is 3 + 2?\", 3 + 2)\nprint(\"What is 5 - 7?\", 5 - 7)\n\nprint(\"Oh, that's why it's False.\")\n\nprint(\"How about some more.\")\n\nprint(\"Is it greater?\", 5 > -2)\nprint(\"Is it greater or equal?\", 5 >= -2)\nprint(\"Is it less or equal?\", 5 <= -2)", "Exercise 4: Variables and Names\nhttps://learnpythonthehardway.org/python3/ex4.html", "cars = 100\nspace_in_a_car = 4.0\ndrivers = 30\npassengers = 90\ncars_not_driven = cars - drivers\ncars_driven = drivers\ncarpool_capacity = cars_driven * space_in_a_car\naverage_passengers_per_car = passengers / cars_driven\n\n\nprint(\"There are\", cars, \"cars available.\")\nprint(\"There are only\", drivers, \"drivers available.\")\nprint(\"There will be\", cars_not_driven, \"empty cars today.\")\nprint(\"We can transport\", carpool_capacity, \"people today.\")\nprint(\"We have\", passengers, \"to carpool today.\")\nprint(\"We need to put about\", average_passengers_per_car,\n \"in each car.\")\n\n# assigning variables in a single line\na = b = c = 0\n# this seems easier but when using basic objects like arrays or dictionaries it gets wierder\nl1 = l2 = []\nl1.append(1)\nprint(l1, l2)\nl2.append(2)\nprint(l1, l2)\n# Here list objects l1 and l2 are names assigned to the same memory location. It works different from following\n# code\nl1 = []\nl2 = []\nl1.append(1)\nprint(l1, l2)\nl2.append(2)\nprint(l1, l2)", "Exercise 5: More Variables and Printing\nhttps://learnpythonthehardway.org/python3/ex5.html", "my_name = 'Zed A. 
Shaw'\nmy_age = 35 # not a lie\nmy_height = 74 # inches\nmy_weight = 180 # lbs\nmy_eyes = 'Blue'\nmy_teeth = 'White'\nmy_hair = 'Brown'\n\nprint(f\"Let's talk about {my_name}.\")\nprint(f\"He's {my_height} inches tall.\")\nprint(f\"He's {my_weight} pounds heavy.\")\nprint(\"Actually that's not too heavy.\")\nprint(f\"He's got {my_eyes} eyes and {my_hair} hair.\")\nprint(f\"His teeth are usually {my_teeth} depending on the coffee.\")\n\n# this line is tricky, try to get it exactly right\ntotal = my_age + my_height + my_weight\nprint(f\"If I add {my_age}, {my_height}, and {my_weight} I get {total}.\")\n\n# f'' format string\n\n# converters: inches to centimeters, and pounds to kilograms\ndef inches_to_centi_meters(inches):\n    centi_meters = inches * 2.54\n    return centi_meters\n\ndef pounds_to_kilo_grams(pounds):\n    kilo_grams = pounds * 0.453592 \n    return kilo_grams\n\ninches = 1.0\npounds = 1.0\nprint(inches, inches_to_centi_meters(inches))\nprint(pounds, pounds_to_kilo_grams(pounds))", "Exercise 6: Strings and Text\nhttps://learnpythonthehardway.org/python3/ex6.html", "types_of_people = 10\nx = f\"There are {types_of_people} types of people.\"\n\nbinary = \"binary\"\ndo_not = \"don't\"\ny = f\"Those who know {binary} and those who {do_not}.\"\n\nprint(x)\nprint(y)\n\nprint(f\"I said: {x}\")\nprint(f\"I also said: '{y}'\")\n\nhilarious = False\njoke_evaluation = \"Isn't that joke so funny?! {}\"\n\nprint(joke_evaluation.format(hilarious))\n\nw = \"This is the left side of...\"\ne = \"a string with a right side.\"\n\nprint(w + e)", "Exercise 7: More Printing\nhttps://learnpythonthehardway.org/python3/ex7.html", "print(\"Mary had a little lamb.\")\nprint(\"Its fleece was white as {}.\".format('snow'))\nprint(\"And everywhere that Mary went.\")\nprint(\".\" * 10) # what'd that do?\n\nend1 = \"C\"\nend2 = \"h\"\nend3 = \"e\"\nend4 = \"e\"\nend5 = \"s\"\nend6 = \"e\"\nend7 = \"B\"\nend8 = \"u\"\nend9 = \"r\"\nend10 = \"g\"\nend11 = \"e\"\nend12 = \"r\"\n\n# watch end = ' ' at the end. try removing it to see what happens\nprint(end1 + end2 + end3 + end4 + end5 + end6, end=' ')\nprint(end7 + end8 + end9 + end10 + end11 + end12)", "Exercise 8: Printing, Printing\nhttps://learnpythonthehardway.org/python3/ex8.html", "formatter = \"{} {} {} {}\"\n\nprint(formatter.format(1, 2, 3, 4))\nprint(formatter.format(\"one\", \"two\", \"three\", \"four\"))\nprint(formatter.format(True, False, False, True))\nprint(formatter.format(formatter, formatter, formatter, formatter))\nprint(formatter.format(\n    \"Try your\",\n    \"Own text here\",\n    \"Maybe a poem\",\n    \"Or a song about fear\"\n))", "Exercise 9: Printing, Printing, Printing\nhttps://learnpythonthehardway.org/python3/ex9.html", ":( You have to pay 30" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
open-hluttaw/notebooks
Open Hluttaw API Examples.ipynb
gpl-3.0
[ "#Getting Info on All Representatives in Amyotha\n\nimport requests #for REST requests to API\nimport pandas #for working with data\n\n#Get JSON of all posts and memberships in Amyotha:\namyotha_req = requests.get('http://api.openhluttaw.org/en/organizations/897739b2831e41109713ac9d8a96c845')\n\n#Get details of all members of parliament\nMPs = amyotha_req.json()['result']['memberships']\n\n \n#List all MP Posts\nconstituencies = amyotha_req.json()['result']['posts']\n\nfor i in constituencies:\n print i['label']\n\n#MPs memberships also hold details of person and post of MP of a constituent\nfor mp in MPs:\n print mp['person']['name'] +\" MP for \" + mp['post']['label']\n\n\n#Persons and Posts also have other details such as birth_dates, gender etc.\n#see popolo spec \nfor mp in MPs:\n print mp['person']['name'] + \" , \" + mp['person']['gender'] + \" , \" + mp['person']['birth_date']\n\n#To get contact details, we need to make additional requests for the person based on their id\nfor mp in MPs:\n person = requests.get('http://api.openhluttaw.org/en/persons/' + mp['person']['id']).json()['result']\n if person['contact_details']: #not everyone may contact details\n for contact in person['contact_details']: #can be multiple contact types\n if contact['type'] == 'email':\n print contact['value']\n\n\n#Example of CSV contact list of Women MPs\n\nwomen = [ mp for mp in MPs if mp['person']['gender'] == 'female' ]\n\nwoman_mps = []\n\nfor mp in women:\n \n person = requests.get('http://api.openhluttaw.org/en/persons/' + mp['person']['id']).json()['result']\n \n woman_mp = {} \n woman_mp ['name'] = mp['person']['name'] \n woman_mp ['constituency'] = mp['post']['label']\n \n if person['contact_details']: #not everyone may contact details\n for contact in person['contact_details']: #can be multiple contact types\n if contact['type'] == 'cell':\n woman_mp['phone'] = contact['value']\n elif contact['type'] == 'email':\n woman_mp['email'] = contact['value']\n woman_mps.append(woman_mp)\n\n\n\nwomen_df = pandas.DataFrame(woman_mps,columns=['name','constituency','phone','email'])\n\nwomen_df\n\nwomen_df.to_csv() # eg. 
to save to file: print >>open('women_mp_contacts','a') , women_df.to_csv()", "Committees", "#List all committees\n\nquery = 'classification:Committee'\nr = requests.get('http://api.openhluttaw.org/en/search/organizations?q='+query)\npages = r.json()['num_pages']\n\n\ncommittees = []\nfor page in range(1,pages+1):\n r = requests.get('http://api.openhluttaw.org/en/search/organizations?q='+query+'&page='+str(page))\n orgs = r.json()['results']\n for org in orgs:\n committees.append(org)\n\nfor committee in committees:\n print committee['name']\n\nimport json\n\njson_export = []\nfor committee in committees:\n json_export.append({'name':committee['name'],\n 'id':committee['id']})\n\nprint json.dumps(json_export,sort_keys=True, indent=4)\n \n\n#Looking up a specific committee will list down all members and persons details, a committee is just\n#organization class same as party, same as upper house, lower house\n\n#\"name\": \"Amyotha Hluttaw Local and Overseas Employment Committee\"\nr = requests.get('http://api.openhluttaw.org/en/organizations/9f3448056d2b48e1805475a45a4ae1ed')\ncommittee = r.json()['result']\n\n#List committee members\n\n# missing on behalf_of expanded for organizations https://github.com/Sinar/popit_ng/issues/200\nfor member in committee['memberships']:\n print member['person']['id']\n print member['person']['name']\n print member['person']['image']", "Party", "#List all committees\n\nquery = 'classification:Party'\nr = requests.get('http://api.openhluttaw.org/en/search/organizations?q='+query)\npages = r.json()['num_pages']\n\n\nparties = []\nfor page in range(1,pages+1):\n r = requests.get('http://api.openhluttaw.org/en/search/organizations?q='+query+'&page='+str(page))\n orgs = r.json()['results']\n for org in orgs:\n parties.append(org)\n\n# BUG in https://github.com/Sinar/popit_ng/issues/197\n# use JSON party lookup below to lookup values directly on client side\n\nfor party in parties:\n print party['name']", "[\n {\n \"id\": \"fd24165b8e814a758cd1098dc7a9038a\", \n \"name\": \"National League for Democracy\"\n }, \n {\n \"id\": \"9462adf5cffa41c386e621fee28c59eb\", \n \"name\": \"Union Solidarity and Development Party\"\n }, \n {\n \"id\": \"7997379fe27c4e448af522c85e306bfb\", \n \"name\": \"\\\"Wa\\\" Democratic Party\"\n }, \n {\n \"id\": \"90e4903937bf4b8ba9185157dde06345\", \n \"name\": \"Kokang Democracy and Unity Party\"\n }, \n {\n \"id\": \"2d2c795149c74b6f91cdea8caf28e968\", \n \"name\": \"Zomi Congress for Democracy\"\n }, \n {\n \"id\": \"b366273152a84d579c4e19b14d36c0b5\", \n \"name\": \"Ta'Arng Palaung National Party\"\n }, \n {\n \"id\": \"f2189158953e4d9e9296efeeffe7cf35\", \n \"name\": \"National Unity Party\"\n }, \n {\n \"id\": \"d53d27fef3ac4b2bb4b7bf346215f626\", \n \"name\": \"Pao National Organization\"\n }, \n {\n \"id\": \"2f0c09d5eb05432d8fcf247b5cb1885f\", \n \"name\": \"Mon National Party\"\n }, \n {\n \"id\": \"dc69205c7eb54a7aaf68b3d2e3d9c23e\", \n \"name\": \"Rakhine National Party\"\n }, \n {\n \"id\": \"e67bf2cdb4ff4ce89167cba3a514a6df\", \n \"name\": \"Shan Nationalities League for Democracy\"\n }, \n {\n \"id\": \"a7a1ac9d2f20470d87e556af41dfaa19\", \n \"name\": \"Lisu National Development Party\"\n }, \n {\n \"id\": \"8cc2d69bed8743bbaa229b164afecf9a\", \n \"name\": \"Independent\"\n }, \n {\n \"id\": \"016a8ad7b40343ba96e0c03f47019680\", \n \"name\": \"Arakan National Party\"\n }, \n {\n \"id\": \"6e76561e385946e0a3761d4f25293912\", \n \"name\": \"The Taaung (Palaung) National Party\"\n }, \n {\n \"id\": 
\"63ec5681df974c67b7a217873fa9cdf5\", \n \"name\": \"Kachin Democratic Party\"\n }\n]", "#Listing people by party in org Amyotha 897739b2831e41109713ac9d8a96c845\n#Pyithu org id would be 7f162ebef80e4a4aba12361ea1151fce\n#We list by membership and specific organization_id and on_behalf_of_id of parties above\n\n#Amyotha Members represented by Arakan National Party\nquery = 'organization_id:897739b2831e41109713ac9d8a96c845 AND on_behalf_of_id:016a8ad7b40343ba96e0c03f47019680'\nr = requests.get('http://api.openhluttaw.org/en/search/memberships?q='+query)\npages = r.json()['num_pages']\n\nmemberships = []\nfor page in range(1,pages+1):\n r = requests.get('http://api.openhluttaw.org/en/search/memberships?q='+query+'&page='+str(page))\n members = r.json()['results']\n for member in members:\n memberships.append(member)\n\nfor member in memberships:\n print member['post']['label']\n print member['person']['id']\n print member['person']['name']\n print member['person']['image']" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
lirenzhucn/dicom-contour-parser
HeuristicLVSegmentation.ipynb
gpl-3.0
[ "Heuristic LV Segmentation\nTwo heuristic approaches are investigated in this notebook: simple thresholding and active contour.\nSimple thresholding only uses intensity information to determine if a pixel is in foreground or background. Active contour, on the other hand, leverages both intensity information and smoothness of the edge to segment foreground from background. Data suggest that both methods fail to achieve an ideal performance. Both methods produces a 83%-84% True Positive Rate (TPR) while having a 28% False Positive Rate (FPR). The TPR is mediocre and the FPR is too high.", "import numpy as np\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\n%matplotlib inline\nmpl.rcParams['font.size'] = 18", "Data Preparation\nTo prepare the data, I use the parser I wrote to ingest the data folder and filter out all the records that have all DICOM, i-contour, and o-contour images. This is easily done as follows.", "from dicom_contour_parser import DicomContourParser\nparser = DicomContourParser('final_data/')\nrecords = [r for r in parser.record_list if r.has_dicom() and r.has_icontour()\n and r.has_ocontour()]\nprint('{:d} records in total were found.'.format(len(records)))", "Simple Thresholding\nTo be able to segment a region reliably using simple thresholding, the pixel values of the foreground and background should have a significant statistical difference. To check that, I plot the histogram of pixel values within the blood pool and within the muscle.", "# Walk through all filtered records and collect pixel values of both regions\nblood_pixels = []\nmuscle_pixels = []\nfor r in records:\n img, ic, oc = r.data.dicom, r.data.ic_mask, r.data.oc_mask\n blood_mask = ic\n # muscle is within oc but not in ic\n muscle_mask = np.logical_and(oc, np.logical_not(ic))\n blood_pixels.extend(list(img[blood_mask]))\n muscle_pixels.extend(list(img[muscle_mask]))\nfig, ax = plt.subplots(1, 1, figsize=(12, 9))\nax.hist(np.array(blood_pixels), bins=256, alpha=0.5, fc='g')\nax.hist(np.array(muscle_pixels), bins=256, alpha=0.5, fc='r')\nax.set_xlabel('Pixel Value')\nax.set_ylabel('Counts')", "Given the significant overlap of the two pixel populations, I would speculate that a simple thresholding scheme would not work. To further demonstrate that, we can plot the ROC curve of the simple thresholding scheme. Note that I treat blood pool as positiv and heart muscle as negative.", "TPR = [1.0]\nFPR = [1.0]\nblood_pixels.sort()\nmuscle_pixels.sort()\ncond_pos = len(blood_pixels)\ncond_neg = len(muscle_pixels)\nfalse_pos = cond_neg\nfalse_neg = 0\nfor v in muscle_pixels:\n false_pos -= 1\n while false_neg < cond_pos and blood_pixels[false_neg] <= v:\n false_neg += 1\n TPR.append(1.0 - false_neg/cond_pos)\n FPR.append(false_pos/cond_neg)\nfig, (ax1, ax2) = plt.subplots(2, 1, figsize=(12, 9))\nax1.set_xlabel('False Positive Rate')\nax1.set_ylabel('True Positive Rate')\nax1.plot(FPR, TPR)\nax2.set_xlabel('False Positive Rate')\nax2.set_ylabel('TPR - FPR')\nax2.plot(FPR, [t-f for f, t in zip(FPR, TPR)])", "The ROC curve (top panel) is not as ideal as one may hope. And if we try to build a naive Baysian binary classifier based on just pixel values (see bottom panel), the max. TPR - FPR (in other words, min. Type I error + Type II error) can only reach 0.56, where we still have ~0.28 FPR with a mediocre ~0.84 TPR.\nActive Contour\nOne of the other popular heuristic segmentation methods are the active contour method. 
\nActive Contour\nOne of the other popular heuristic segmentation methods is the active contour method. As skimage's active contour demo page puts it: \"The active contour model is a method to fit open or closed splines to lines or edges in an image. It works by minimising an energy that is in part defined by the image and part by the spline’s shape: length and smoothness.\"\nThis method has a seemingly better opportunity to succeed because it leverages not just intensity information, but also the intrinsic features of the contour shape, which may be important to segmentation.\nFirst, I'd like to try a concrete example. I randomly pick an image from all records, and try to use the active contour model to fit the i-contour from the o-contour.", "from skimage.filters import gaussian\nfrom skimage.segmentation import active_contour\ndef ac_snake(img, ocm, ocp):\n    return active_contour(img*ocm, ocp, alpha=0.75, beta=0.1, gamma=0.1)\n# pick the 10-th record as an example\nimg, _, ocm, icp, ocp = records[10].data\nicp = np.array(icp)\nocp = np.array(ocp)\nsnake = ac_snake(img, ocm, ocp)\nfig, ax = plt.subplots(1, 1, figsize=(10, 10))\nax.imshow(img, cmap='gray')\nax.plot(ocp[:, 0], ocp[:, 1], '--r', lw=2)\nax.plot(snake[:, 0], snake[:, 1], '--b', lw=2)\nax.plot(icp[:, 0], icp[:, 1], '-r', lw=2)\nax.set_xticks([])\nax.set_yticks([])", "Based on this one particular sample, the result looks promising. Notice that the red, dashed curve is the outer contour; the red, solid curve is the inner contour; and the blue, dashed curve is the predicted segmentation obtained by the active contour model.\nThe next step is to validate that this method also makes sense on a statistical level.", "from dicom_contour_parser.parsing import poly_to_mask\ntotal_cond_pos = 0\ntotal_cond_neg = 0\ntotal_false_pos = 0\ntotal_false_neg = 0\nfor i, r in enumerate(records):\n    img, icm, ocm, _, ocp = r.data\n    ocp = np.array(ocp)\n    acp = ac_snake(img, ocm, ocp)\n    acp = [tuple(row) for row in list(acp)]\n    acm = poly_to_mask(acp, img.shape[1], img.shape[0])\n    cond_pos = np.sum(icm)\n    cond_neg = np.sum(np.logical_and(ocm, np.logical_not(icm)))\n    false_pos = np.sum(np.logical_and(acm, np.logical_not(icm)))\n    false_neg = np.sum(np.logical_and(icm, np.logical_not(acm)))\n    total_cond_pos += cond_pos\n    total_cond_neg += cond_neg\n    total_false_pos += false_pos\n    total_false_neg += false_neg\n    # print('{:02d}, {:s}-{:04d}: False positive rate = {:.2f}, False negative rate = {:.2f}'\n    #       .format(i, r.patient_id, r.serial_id, false_pos/cond_neg, false_neg/cond_pos))\nprint('Overall: False positive rate = {:.2f}, False negative rate = {:.2f}'.format(\ntotal_false_pos/total_cond_neg, total_false_neg/total_cond_pos))", "Although it is conceptually more appealing, the data suggest that the active contour model does not provide statistically better results than simple thresholding, as a similar FPR and FNR/TPR relation is observed." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
chbrandt/pynotes
moc/MOC_LaMassa.ipynb
gpl-2.0
[ "from IPython.display import HTML\n\nHTML('''<script>\ncode_show=true; \nfunction code_toggle() {\n if (code_show){\n $('div.input').hide();\n } else {\n $('div.input').show();\n }\n code_show = !code_show\n} \n$( document ).ready(code_toggle);\n</script>\n<form action=\"javascript:code_toggle()\"><input type=\"submit\" value=\"Click here to toggle on/off the raw code.\"></form>''')", "Building a MOC from a CDS/Vizier catalog\nAfter a couple of conversations I had during the week of the 2nd ASTERICS VO school I've got two small TODO's hanging on my list, this is an answer to both. First one (to François) is an observation about CDS/Vizier LaMassa 2016 catalog's metadata; the second one (to Markus) is about MOC catalogs format. I will not get into the details -- since the respective discussion had already being done --, it is sufficient to say that:\n\nsome of Lamassa's null value are not properly read when using Astropy;\nwhat I understand about a MOC catalog.\n\nThe data to be used is provided by Vizier, the LaMassa et al, 2016, ApJ, 817, 172, in particular the ReadMe and chandra.dat files.\nSoftware to handle the catalog will be Astropy and Thomas Boch's mocpy and Healpy.\nTOC:\n * Dealing with null values from Vizier metadata\n * Generating a MOC catalog", "baseurl = 'ftp://cdsarc.u-strasbg.fr/pub/cats/J/ApJ/817/172/'\nreadme_file = 'ReadMe'\nchandra_file = 'chandra.dat'\n\nimport astropy\nprint \"astropy version:\",astropy.__version__\n\nimport mocpy\nprint \"mocpy version:\",mocpy.__version__\n\nimport healpy\nprint \"healpy version:\",healpy.__version__", "Download ReadMe and chandra.dat files and save them inside ./data/ dir", "def download(path,filename,outdir):\n from urllib2 import urlopen\n url = path+filename\n f = urlopen(url)\n data = f.read()\n with open(outdir+filename, \"wb\") as fp:\n fp.write(data)\n\nimport os\nif not os.path.isdir('data'):\n os.mkdir('data')\ndownload(baseurl,readme_file,outdir='data/')\ndownload(baseurl,chandra_file,outdir='data/')\n!ls 'data/'", "Dealing with null values from Vizier metadata\nThe goal here, as said before, is to show the \"bug\" in Vizier ReadMe (description) file when dealing with null values not properly formatted.\nWe start by opening the chandra table and noticing the values -999 not being properly handled by Astropy as null values.", "from astropy.table import Table\nchandra = Table.read('data/chandra.dat',readme='data/ReadMe',format='ascii.cds')\n\nchandra # Notice the '-999' values", "We can see records in columns logLSoft, logLHard, logLFull, for example, showing the values -999.0.\nAlthough we already suspect, we go to ReadMe and double-check it to see what is there about Null values.\nFor those three example columns we see ?=-999, which is the right --although truncated/integer-- value.\nFor some reason, Astropy is not handling it as it should.\nTo fix this, we have sync the number of significat digits of the null values with the (declared) format.\nFor instance, those columns have a Format=F7.2 and so should the null values be declared as ?=-999.00.\nChanging such signatures for those columns (logLSoft, logLHard and logLFull) and saving them to file ReadMe_fix give us the following:", "from astropy.table import Table\nchandra = Table.read('data/chandra.dat',readme='data/ReadMe_fix',format='ascii.cds')\nchandra", "That's it. After declaring the null values with all the significant numbers following the Format, such values are properly handled.\nGenerating a MOC catalog\nNow we go through a MOC catalog creation. 
\nGenerating a MOC catalog\nNow we go through the creation of a MOC catalog. There is no problem here; this just answers Markus's question about what I understand a MOC to be: a list of (unique) element numbers. Section 2.3.1, NUNIQ packing, of the IVOA MOC document version-1 explains how to convert between the two kinds of element representation.\nThe lines below will use the catalog we just have in hands, chandra, to build the MOC.\nFirst, I will compute Healpix level/nside values based on the median positional error of the catalog and then the MOC elements are computed from (RA,Dec) using HealPy.\nMOCPy is used at the end to plot the MOC elements; Aladin is also used to have a better view of the elements.\nFiles can be downloaded from the data/ directory.", "# A function to find out which healpix level corresponds to a given (typical) size of coverage\ndef size2level(size):\n    \"\"\"\n    Returns nearest Healpix level corresponding to a given diamond size\n    \n    The 'nearest' Healpix level is taken to be the nearest greater level,\n    i.e., the one right before the first level smaller than 'size'.\n    \"\"\"\n    # units\n    from astropy import units as u\n\n    # Structure to map healpix' levels to their angular sizes\n    #\n    healpix_levels = {\n        0 : 58.63 * u.deg,\n        1 : 29.32 * u.deg,\n        2 : 14.66 * u.deg,\n        3 : 7.329 * u.deg,\n        4 : 3.665 * u.deg,\n        5 : 1.832 * u.deg,\n        6 : 54.97 * u.arcmin,\n        7 : 27.48 * u.arcmin,\n        8 : 13.74 * u.arcmin,\n        9 : 6.871 * u.arcmin,\n        10 : 3.435 * u.arcmin,\n        11 : 1.718 * u.arcmin,\n        12 : 51.53 * u.arcsec,\n        13 : 25.77 * u.arcsec,\n        14 : 12.88 * u.arcsec,\n        15 : 6.442 * u.arcsec,\n        16 : 3.221 * u.arcsec,\n        17 : 1.61 * u.arcsec,\n        18 : 805.2 * u.milliarcsecond,\n        19 : 402.6 * u.milliarcsecond,\n        20 : 201.3 * u.milliarcsecond,\n        21 : 100.6 * u.milliarcsecond,\n        22 : 50.32 * u.milliarcsecond,\n        23 : 25.16 * u.milliarcsecond,\n        24 : 12.58 * u.milliarcsecond,\n        25 : 6.291 * u.milliarcsecond,\n        26 : 3.145 * u.milliarcsecond,\n        27 : 1.573 * u.milliarcsecond,\n        28 : 786.3 * u.microarcsecond,\n        29 : 393.2 * u.microarcsecond\n    }\n    \n    assert size.unit\n    ko = None\n    for k,v in sorted(healpix_levels.items()): # iterate levels in order; plain dicts are unordered in Python 2\n        if v < 2 * size: # require the pixel size to exceed twice the error\n            break\n        ko = k\n    return ko\n\nimport numpy as np\nfrom astropy import units as u\n\nmedian_positional_error = np.median(chandra['e_Pos']) * u.arcsec\nlevel = size2level(median_positional_error)\nnside = 2**level\n\nprint \"Typical (median) position error: \\n{}\".format(median_positional_error)\nprint \"\\nCorresponding healpix level: {} \\n\\t and nside value: {}\".format(level,nside)", "def healpix_radec2pix(nside, ra, dec, nest=True):\n    \"\"\"\n    convert ra,dec to healpix elements\n    \"\"\"\n    def radec2thetaphi(ra,dec):\n        \"\"\"\n        convert equatorial ra, dec in degrees\n        to polar theta, phi in radians\n        \"\"\"\n        def ra2phi(ra):\n            import math\n            return math.radians(ra)\n\n        def dec2theta(dec):\n            import math\n            return math.pi/2 - math.radians(dec)\n\n        _phi = ra2phi(ra)\n        _theta = dec2theta(dec)\n        return _theta,_phi\n    \n    import healpy\n\n    _theta,_phi = radec2thetaphi(ra, dec)\n    return healpy.ang2pix(nside, _theta, _phi, nest=nest)\n\nradec = zip(chandra['RAdeg'],chandra['DEdeg'])\nhpix = [ healpix_radec2pix(nside,ra,dec) for ra,dec in radec ]", "Here it is, the MOC catalog (the list of elements to be more precise):", "hpix", "The plot made by MOCPy:", "moc = mocpy.MOC()\nmoc.add_pix_list(level,hpix)\nmoc.plot()\nmoc.write('data/MOC_chandra.fits')", "And here after importing it to Aladin:", "from IPython.display import HTML\nHTML('''\n<figure>\n    <img src=\"data/MOC_on_Aladin.png\" alt=\"MOC printed on Aladin\">\n 
<figcaption>Figure 1: MOC printed on Aladin</figcaption>\n</figure>\n''')" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mathewzilla/redcard
.ipynb_checkpoints/Crowdstorming_visualisation-checkpoint.ipynb
mit
[ "Crowdsourcing data analysis: Do soccer referees give more red cards to dark skin toned players?\nCrowdstorming analytics, see: https://osf.io/gvm2z/\nInitial data visualisation from Team 23: (Tom Stafford, Mat Evans, Colin Bannard & Tim Heaton)\nHere we present the code, results and commentary of our initial exploration of the dataset. Our final analysis (lead: Tom Stafford) was conducted in R, using the packages lme4 (lead: Tim Heaton) and MCMCglmm (lead: Colin Bannard). All code for this is available on the OSF. This analysis was dependent on the exploration presented below (lead: Mat Evans).\nEnjoy our final team report here\nThe figures below were generated using matplotlib, but made pretty through the use of seaborn and mpld3\n1 Loading the data and analysis libraries\nTo get started, let's import some libraries", "# IMPORTING LIBRARIES\n\nprint \"importing libraries\"\nimport pandas as pd # for dealing with csv import\nimport numpy as np # arrays and other matlab like manipulation\nimport os # for joining paths and filenames sensibly\nimport matplotlib.pyplot as plt # Matplotlib's pyplot: MATLAB-like syntax\nimport scipy.stats.mstats as ssm # for bootstrap\nfrom scipy.stats import gaussian_kde as kde\nimport random\n\n%matplotlib inline\nimport seaborn as sns # For pretty plots\n\n# from mpld3 import display_d3\n# mpld3.enable_notebook() # Uncomment these lines to use interactive plots as a default. This can lead to slow loading of the notebook\n# mpld3.disable_notebook()", "Now we need some data.", "# Import original data, then do a bit of data-munging to get it in displayable form.\nprint \"loading data from file\"\nfilename=os.path.join('data','CrowdstormingDataJuly1st.csv') \ndf = pd.read_csv(filename)\n\n", "By default the data is in a rather counter-intuitive format: referee-player dyads. What is a dyad..?", "from IPython.display import HTML\nHTML('<iframe src=http://en.wikipedia.org/wiki/Dyad_(sociology) width=1000 height=350></iframe>')", "A referee-player dyad describes the interactions between a particular ref and one player. This means that each row in the dataset is of a unique player-ref combination, listing all of the games by a given player with a particular referee at any point in his career. Let's look at the first few rows of the dataset as an example:", "# Display the first 10 rows of the dataset. Only 13 columns for space reasons\ndf.ix[:10,:13]\n\n# Display the other columns too\ndf.ix[:10,13:28]", "We already see some strange things in the data. Some refs officiated very few games. The two raters disagree about player skintone quite often. For some players there isn't a photo, so their skin tone couldn't be rated. Most dyads don't feature cards at all. In general, it's difficult to get an intuition for what the population looks like from inspecting small samples of the data. This is particularly difficult in the dyad format, as each player or ref's cards are spread un-evenly across the dataset. \n2 Disaggregating the data into single player-ref interactions\nTeam Sheffield felt a more natural format for the data was to disaggregate it. In other words, to unpack each dyad into singular ref-player interaction. In other words, each time a player and a referee met they would contribute one row to the dataset (and thus that interaction has a maximum of one red card). 
\nIf you're interested in how the disaggregation was done, the code is in [our project folder on the OSF](https://osf.io/akqt4/) - the file is disaggregate_v3.py", "# Load disaggregated data\nprint \"loading disaggregated data\"\nfilename=os.path.join('data','crowdstorm_disaggregated.csv') \ndfd = pd.read_csv(filename)", "We now have a much more 'normal' dataset, where each data point accounts for a single interaction (i.e. a game). \nWhat is the distribution of individual ref-player dyad game numbers?", "# Pull out the number of games in each dyad and plot \ngames = dfd.games\n\nfig, axes = plt.subplots(nrows=1, ncols=2, figsize=(12, 4))\n\naxes[0].hist(games,bins=max(games),histtype = 'stepfilled')\naxes[0].set_xlabel('Number of interactions')\naxes[0].set_ylabel('Frequency')\n\naxes[1].hist(games,bins=max(games),histtype = 'stepfilled')\naxes[1].set_yscale('symlog') # symmetric log scale NOTE this breaks the nice plotting tools\naxes[1].set_title(\"Log scaled\")\naxes[1].set_xlabel('Number of interactions')\naxes[1].set_ylabel('Log Frequency')\n\n\nfig.tight_layout()\n# display_d3()\n\nprint \"highest number of games in a single dyad = \" + str(max(games))", "i.e. how likely is it that a single ref officiates the same player many times?\nAnswer: not very\nHalf of the dyads are unique, with the other half following an exponential decay all the way out to a high of 47 games, which is actually a pretty huge number. Who are the players featuring in these high (>=40, say) game dyads (i.e. the guys who meet the same ref 40+ times)?", "stalwarts = dfd[dfd.games>=40]\nprint stalwarts.player.unique()", "Now, that at least makes sense. We see that many England Internationals, and two World Cup winning German Internationals (each with long careers in top-flight football) have played with the same referees many times.\nBy contrast, how rare are red cards?", "print \"Total interactions =\", len(dfd)\nclean_interactions = dfd[(dfd['allreds'] == 0)]\nprint \"Number of red cards in the dataset =\", len(dfd) - len(clean_interactions)\nprint \"Number of interactions without a red card =\", len(clean_interactions)\n\nprint \"Proportion of interactions that are 'clean' =\", len(clean_interactions) / float(len(dfd))", "In sum, red cards are very rare. We also showed that the distributions of player and ref occurrence are highly skewed. Therefore, any analysis method applied to this population needs to be able to handle these properties of the data set.\n3 Discovering a major source of data impurity\nHow we worked out that players' entire histories are included in the data set\nHow many refs are there in the dataset?", "allRefs = dfd.refNum.value_counts()\nprint \"Number of refs =\", len(allRefs)\nprint \"Number of dyads in the dataset =\", sum(allRefs)", "Let's now look at the countries referees come from. There are a lot of them", "import mpld3\nmpld3.enable_notebook()\n# Histogram of country frequency. 
\nfig, ax = plt.subplots(1,1,figsize=(12, 4))\nx = dfd.Alpha_3.value_counts()\nlines = ax.plot(x,marker='.',ms=20)\n\ny = x.index.tolist() \n\ntooltips = mpld3.plugins.PointLabelTooltip(lines[0], labels=y)\nmpld3.plugins.connect(plt.gcf(), tooltips)\n\nax.set_title('Referee nationality by number of games')\nax.set_xlabel('Country number (ordered by frequency)')\nax.set_ylabel('Frequency of games')\nax.set_xlim([-3,160]) # a hack so we can see the first point most clearly\n\nmpld3.disable_notebook()", "Please enjoy roll-over functionality for this plot!\nMost games are ref'ed by someone from one of a small number of countries, as we would expect - the four premier leagues which defined selection for our data set: England, Germany, France and Spain.\nHowever, it also seems like 160 different nationalities are represented by our referees. This seemed unlikely: are there really refs from almost every country on Earth in the four premier leagues in the season 2012-13?\nAnd what does the distribution of ref occurrence look like?", "numRefs = len(dfd.refNum.value_counts())\nprint \"Total number of referees =\", numRefs\nprint \"Median number of dyads per referee =\", np.median(dfd.refNum.value_counts())\n\nfig, axes = plt.subplots(nrows=1, ncols=1, figsize=(12, 4))\naxes.hist(dfd.refNum.value_counts().tolist(),bins=numRefs)\naxes.set_xscale('symlog') # symmetric log scale \naxes.set_yscale('symlog') \naxes.set_title(\"Referee occurrence, log scaled\")\naxes.set_xlabel('log (number of occurrences)')\naxes.set_ylabel('log (frequency)')", "We see that though most refs are only involved in a small number of dyads, many are involved in thousands. A median of 11 dyads indicates that more than half of the refs feature in fewer dyads than even a single full game would produce!\nSomething funny is going on - if a ref officiated a full game in one of our selected premier leagues they would be in at least 22 dyads (2 teams of 11 players each, more if substitutions occur).\nFurther analysis (not shown), including a bit of eyeballing, revealed that players' entire career histories are included in the dataset. This means that if a player gets sent off in a Uruguayan league game in 2002, but then transfers to the English premier league by 2012 then this booking ends up in our dataset - as does the referee. 
This explains the high number of ref nationalities present in the data set\nWe clean the data by excluding interactions by refs who feature in fewer than 22 dyads\nIf you aren't in at least 22 dyads you didn't ref a game in one of our four defining leagues", "goodRefs = allRefs[allRefs>21]\n\n\nfig, axes = plt.subplots(nrows=1, ncols=1, figsize=(12, 4))\naxes.hist(goodRefs.tolist(),numRefs-11)\naxes.set_xscale('symlog') # symmetric log scale \nplt.xlim([1,10000])\naxes.set_yscale('symlog') \nplt.ylim([0,1000])\naxes.set_title(\"Referee occurrence following our cull, log scaled\")\naxes.set_xlabel('log (number of occurrences)')\naxes.set_ylabel('log (frequency)')\n\nprint \"Number of refs featuring in at least 22 dyads =\",len(goodRefs)\nprint \"Number of dyads, excluding refs who feature in fewer than 22 dyads =\", sum(goodRefs)", "We lose approx 2/3rds of refs, but keep 97.4% of dyads", "#Copying from \n#http://stackoverflow.com/questions/12065885/how-to-filter-the-dataframe-rows-of-pandas-by-within-in\n#\n#This line defines a new dataframe based on our >21 games filter\ndfd_good=dfd[dfd['refNum'].isin(goodRefs.index.values)]\n\n#pandas is like being in an alien spaceship \n#- you know you potentially control unimaginable power, but don't know what any of the buttons actually do\n#\n\n\nimport mpld3\nmpld3.enable_notebook()\n# Histogram of country frequency. \nfig, ax = plt.subplots(1,1,figsize=(12, 4))\nx = dfd_good.Alpha_3.value_counts()\nlines = ax.plot(x,marker='.',ms=20)\n\ny = x.index.tolist() \n\ntooltips = mpld3.plugins.PointLabelTooltip(lines[0], labels=y)\nmpld3.plugins.connect(plt.gcf(), tooltips)\n\nax.set_title('Referee nationality by number of games')\nax.set_xlabel('Country number (ordered by frequency)')\nax.set_ylabel('Frequency of games')\nax.set_xlim([-3,160]) # a hack so we can see the first point most clearly\n\nmpld3.disable_notebook()\n", "Still 100+ nationalities represented\n4 Skintone ratings could be better\nWe noted that ratings of skintone could be more reliable. The ratings are fairly different at the light end of the spectrum. The two raters disagree on 28742 dyads, or 19% of the time, and looking at the histograms of their responses most of these are between the first two categories. These first two categories account for ~ 70% of both raters' classifications, so biases/inconsistencies/uncertainty in this part of the dataset could have a large effect on the rest of the analysis. There could be many reasons for this, but one obvious way of dealing with it would be to use N>2 raters.", "# Plot of skin tone rating distributions, showing skewed nature of the data w/ histograms \n# and degree of disagreement between raters with a scatterplot\nrated = ((dfd['rater1']+dfd['rater2'])/2).dropna()\n\nfig, ax = plt.subplots(1,4,figsize=(12, 4))\nc = sns.color_palette()\nax[0].hist(rated,bins = 5, range = (0,1),color = c[0])\nax[0].set_title(\"Mean rating\")\n\nax[1].hist(dfd['rater1'].dropna().tolist(),bins = 5, range = (0,1), color = c[1])\nax[1].set_title('Rater 1')\n\nax[2].hist(dfd['rater2'].dropna().tolist(),bins = 5, range = (0,1),color = c[2])\nax[2].set_title('Rater 2')\n\nax[3].hist((dfd['rater1'] - dfd['rater2']).dropna(), bins = 5,range = (-0.5,0.5),color = c[3])\nax[3].set_title('Difference')\n\n\nfig.tight_layout()\n\nprint 'Mean skintone across the population =', np.mean(rated)\n\n", "
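Before the scatter plot, one short aside: chance-corrected agreement puts a single number on this. Below is a minimal sketch using Cohen's kappa; the availability of a scikit-learn recent enough to provide cohen_kappa_score is an assumption, the five rating values are treated as discrete categories, and for simplicity it is computed over the disaggregated rows.", "# Sketch: inter-rater agreement on the skintone categories\nfrom sklearn.metrics import cohen_kappa_score\n\nboth_rated = dfd[['rater1', 'rater2']].dropna()\nprint 'Cohen kappa =', cohen_kappa_score(both_rated['rater1'], both_rated['rater2'])", "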
Scatter plot of skintone ratings. Jitter is added for visualisation.\nNote: This cell takes quite a long time (minutes) to run", "c = sns.color_palette()\njitter_x = np.random.normal(0, 0.04, size=len(dfd.rater1))\njitter_y = np.random.normal(0, 0.04, size=len(dfd.rater2))\nsns.jointplot(dfd.rater1 + jitter_x, dfd.rater2 + jitter_y, kind='kde')", "5 Let's look at the implicit and explicit racism scores, and their sample size distributions\nMost of the refs come from 4 countries (where the 4 represented leagues are). It's a shame there is no data from Italy but never mind. It's hard to know how reliable the IAT and Exp scores are with respect to the other countries, especially if there is some variation between ref biases within a country. The country-level IAT and Exp scores might not have as much variation as the within-country variation.\nThere doesn't seem to be much 'signal' here. Also the sample sizes, though large, are then undermined by the small sample sizes for refs in some countries (data not shown, but see above)", "# Linked subplots showing these distributions, or a scatter plot with variable dot sizes showing how much overlap there is.\nfig, ax = plt.subplots(2,2,figsize=(12, 8))\nax[0,0].hist(dfd.meanIAT.dropna().unique())\nax[0,0].set_title(\"Mean IAT - all\")\nax[0,1].hist(dfd['meanExp'].dropna().unique())\nax[0,1].set_title(\"Mean Exp - all\")\n\nax[1,0].hist(dfd_good.meanIAT.dropna().unique())\nax[1,0].set_title(\"Mean IAT - culled\")\nax[1,1].hist(dfd_good['meanExp'].dropna().unique())\nax[1,1].set_title(\"Mean Exp - culled\")\n\nsns.jointplot(dfd_good['meanIAT'].dropna().unique(),dfd_good['meanExp'].dropna().unique())\n    ", "Implicit (IAT) and explicit (Exp) attitudes correlate highly", "x = dfd_good.nIAT.dropna().unique()\nplt.plot(np.sort(x),'.')\n\n\nplt.xlabel('Country number (ordered by sample size)')\nplt.ylabel('Sample size for IAT score')\n\nplt.ylim([0,3000])\nplt.title('Most IAT scores are based on samples of <1000')", "So perhaps not too surprising that the country attitude scores (both explicit and implicit) don't predict carding by individual referees\n6 Concluding thoughts\nAt least half of the analysis effort was the initial data exploration and visualisation. Normally this is not shown directly in the reporting of a scientific project. We felt, particularly for this project, that it was worth bringing to light some of this work.\nThe ipython notebook allows code, commentary and results to be combined (and, if you run it in interactive mode, to be actively developed between several people)\nThere is a trend in behavioural science towards very large datasets. When these are incidentally collected (e.g. data provided by companies or bureaucracies), or when they are collected via highly complex/opaque techniques (e.g. many kinds of imaging data), we, as scientists, are often ignorant of the exact form the data take, and the attendant peculiarities and nuances. This means that every analysis project is more and more a visualisation project as well.\nMat Evans @mathewe\nTom Stafford @tomstafford\nNov 2014" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
0x4a50/udacity-0x4a50-deep-learning-nanodegree
image-classification/dlnd_image_classification.ipynb
mit
[ "Image Classification\nIn this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.\nGet the Data\nRun the following cell to download the CIFAR-10 dataset for python.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nfrom urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\nimport problem_unittests as tests\nimport tarfile\n\ncifar10_dataset_folder_path = 'cifar-10-batches-py'\n\n# Use Floyd's cifar-10 dataset if present\nfloyd_cifar10_location = '/cifar/cifar-10-python.tar.gz'\nif isfile(floyd_cifar10_location):\n tar_gz_path = floyd_cifar10_location\nelse:\n tar_gz_path = 'cifar-10-python.tar.gz'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile(tar_gz_path):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:\n urlretrieve(\n 'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',\n tar_gz_path,\n pbar.hook)\n\nif not isdir(cifar10_dataset_folder_path):\n with tarfile.open(tar_gz_path) as tar:\n tar.extractall()\n tar.close()\n\n\ntests.test_folder_path(cifar10_dataset_folder_path)", "Explore the Data\nThe dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:\n* airplane\n* automobile\n* bird\n* cat\n* deer\n* dog\n* frog\n* horse\n* ship\n* truck\nUnderstanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch.\nAsk yourself \"What are all possible labels?\", \"What is the range of values for the image data?\", \"Are the labels in order or random?\". Answers to questions like these will help you preprocess the data and end up with better predictions.", "%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport helper\nimport numpy as np\n\n# Explore the dataset\nbatch_id = 2\nsample_id = 15\nhelper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)", "Implement Preprocess Functions\nNormalize\nIn the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.", "import numpy\n\ndef normalize(x):\n \"\"\"\n Normalize a list of sample image data in the range of 0 to 1\n : x: List of image data. 
The image shape is (32, 32, 3)\n : return: Numpy array of normalize data\n \"\"\"\n # TODO: Implement Function\n x = numpy.array(x)\n x_normed = (x - x.min(0)) / x.ptp(0)\n return x_normed\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_normalize(normalize)", "One-hot encode\nJust like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.\nHint: Don't reinvent the wheel.", "def one_hot_encode(x):\n \"\"\"\n One hot encode a list of sample labels. Return a one-hot encoded vector for each label.\n : x: List of sample Labels\n : return: Numpy array of one-hot encoded labels\n \"\"\"\n nb_classes = 10\n targets = numpy.array(x).reshape(-1)\n one_hot_targets = numpy.eye(nb_classes)[targets]\n return one_hot_targets\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_one_hot_encode(one_hot_encode)", "Randomize Data\nAs you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.\nPreprocess all the data and save it\nRunning the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)", "Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport pickle\nimport problem_unittests as tests\nimport helper\n\n# Load the Preprocessed Validation data\nvalid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))", "Build the network\nFor the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.\n\nNote: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the \"Convolutional and Max Pooling Layer\" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.\nHowever, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! 
For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d. \n\nLet's begin!\nInput\nThe neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions\n* Implement neural_net_image_input\n * Return a TF Placeholder\n * Set the shape using image_shape with batch size set to None.\n * Name the TensorFlow placeholder \"x\" using the TensorFlow name parameter in the TF Placeholder.\n* Implement neural_net_label_input\n * Return a TF Placeholder\n * Set the shape using n_classes with batch size set to None.\n * Name the TensorFlow placeholder \"y\" using the TensorFlow name parameter in the TF Placeholder.\n* Implement neural_net_keep_prob_input\n * Return a TF Placeholder for dropout keep probability.\n * Name the TensorFlow placeholder \"keep_prob\" using the TensorFlow name parameter in the TF Placeholder.\nThese names will be used at the end of the project to load your saved model.\nNote: None for shapes in TensorFlow allow for a dynamic size.", "import tensorflow as tf\n\ndef neural_net_image_input(image_shape):\n \"\"\"\n Return a Tensor for a batch of image input\n : image_shape: Shape of the images\n : return: Tensor for image input.\n \"\"\"\n return tf.placeholder(tf.float32, shape=(None, image_shape[0], \n image_shape[1], image_shape[2]), name=\"x\")\n\n\n\ndef neural_net_label_input(n_classes):\n \"\"\"\n Return a Tensor for a batch of label input\n : n_classes: Number of classes\n : return: Tensor for label input.\n \"\"\"\n return tf.placeholder(tf.float32, shape=(None, n_classes), name=\"y\")\n\n\ndef neural_net_keep_prob_input():\n \"\"\"\n Return a Tensor for keep probability\n : return: Tensor for keep probability.\n \"\"\"\n # TODO: Implement Function\n return tf.placeholder(tf.float32, name=\"keep_prob\")\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntf.reset_default_graph()\ntests.test_nn_image_inputs(neural_net_image_input)\ntests.test_nn_label_inputs(neural_net_label_input)\ntests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)", "Convolution and Max Pooling Layer\nConvolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:\n* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.\n* Apply a convolution to x_tensor using weight and conv_strides.\n * We recommend you use same padding, but you're welcome to use any padding.\n* Add bias\n* Add a nonlinear activation to the convolution.\n* Apply Max Pooling using pool_ksize and pool_strides.\n * We recommend you use same padding, but you're welcome to use any padding.\nNote: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. 
You may still use the shortcut option for all the other layers.", "def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):\n \"\"\"\n Apply convolution then max pooling to x_tensor\n :param x_tensor: TensorFlow Tensor\n :param conv_num_outputs: Number of outputs for the convolutional layer\n :param conv_ksize: kernal size 2-D Tuple for the convolutional layer\n :param conv_strides: Stride 2-D Tuple for convolution\n :param pool_ksize: kernal size 2-D Tuple for pool\n :param pool_strides: Stride 2-D Tuple for pool\n : return: A tensor that represents convolution and max pooling of x_tensor\n \"\"\"\n batch_size, in_width, in_height, in_depth = x_tensor.get_shape().as_list()\n weights = tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1], \n in_depth, conv_num_outputs]))\n biases = tf.Variable(tf.zeros(conv_num_outputs))\n \n conv = tf.nn.conv2d(x_tensor, weights, strides=[1, conv_strides[0], conv_strides[1], 1],\n padding='SAME')\n conv = tf.nn.bias_add(conv, biases)\n conv = tf.nn.relu(conv)\n \n filter_shape = [1, pool_ksize[0], pool_ksize[1], 1]\n strides = [1, pool_strides[0], pool_strides[1], 1]\n return tf.nn.max_pool(conv, filter_shape, strides, 'SAME')\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_con_pool(conv2d_maxpool)", "Flatten Layer\nImplement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.", "def flatten(x_tensor):\n \"\"\"\n Flatten x_tensor to (Batch Size, Flattened Image Size)\n : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.\n : return: A tensor of size (Batch Size, Flattened Image Size).\n \"\"\"\n # TODO: Implement Function\n return tf.contrib.layers.flatten(x_tensor)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_flatten(flatten)", "Fully-Connected Layer\nImplement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.", "def fully_conn(x_tensor, num_outputs):\n \"\"\"\n Apply a fully connected layer to x_tensor using weight and bias\n : x_tensor: A 2-D tensor where the first dimension is batch size.\n : num_outputs: The number of output that the new tensor should be.\n : return: A 2-D tensor where the second dimension is num_outputs.\n \"\"\"\n # TODO: Implement Function\n return tf.contrib.layers.fully_connected(x_tensor, num_outputs)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_fully_conn(fully_conn)", "Output Layer\nImplement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. 
For more of a challenge, only use other TensorFlow packages.\nNote: Activation, softmax, or cross entropy should not be applied to this.", "def output(x_tensor, num_outputs):\n \"\"\"\n Apply an output layer to x_tensor using weight and bias\n : x_tensor: A 2-D tensor where the first dimension is batch size.\n : num_outputs: The number of outputs that the new tensor should be.\n : return: A 2-D tensor where the second dimension is num_outputs.\n \"\"\"\n # Per the note above, no activation is applied to the output layer,\n # so override the ReLU default used by fully_connected.\n return tf.contrib.layers.fully_connected(x_tensor, num_outputs, activation_fn=None)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_output(output)", "Create Convolutional Model\nImplement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:\n\nApply 1, 2, or 3 Convolution and Max Pool layers\nApply a Flatten Layer\nApply 1, 2, or 3 Fully Connected Layers\nApply an Output Layer\nReturn the output\nApply TensorFlow's Dropout to one or more layers in the model using keep_prob.", "def conv_net(x, keep_prob):\n \"\"\"\n Create a convolutional neural network model\n : x: Placeholder tensor that holds image data.\n : keep_prob: Placeholder tensor that holds dropout keep probability.\n : return: Tensor that represents logits\n \"\"\"\n # Apply 1, 2, or 3 Convolution and Max Pool layers\n # Play around with different number of outputs, kernel size and stride\n # Function Definition from Above:\n # conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)\n conv_ksize = [2,2]\n conv_strides = [1,1]\n pool_ksize = [2,2]\n pool_strides = [1,1]\n\n conv_num_outputs = 16\n x_tensor = conv2d_maxpool(x, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)\n x_tensor = tf.nn.dropout(x_tensor, keep_prob)\n \n conv_ksize = [3,3]\n conv_strides = [2,2]\n pool_ksize = [2,2]\n pool_strides = [2,2]\n \n # Feed the previous layer's output (x_tensor), not the raw input x,\n # so the convolutional layers are actually stacked.\n conv_num_outputs = 40\n x_tensor = conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)\n x_tensor = tf.nn.dropout(x_tensor, keep_prob)\n \n conv_num_outputs = 10\n x_tensor = conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)\n x_tensor = tf.nn.dropout(x_tensor, keep_prob)\n\n # Apply a Flatten Layer\n # Function Definition from Above:\n # flatten(x_tensor)\n x_tensor = flatten(x_tensor)\n\n # Apply 1, 2, or 3 Fully Connected Layers\n # Play around with different number of outputs\n num_outputs = 60\n x_tensor = fully_conn(x_tensor, num_outputs)\n num_outputs = 40\n x_tensor = fully_conn(x_tensor, num_outputs)\n num_outputs = 20\n x_tensor = fully_conn(x_tensor, num_outputs)\n\n num_classes = 10\n return output(x_tensor, num_classes)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\n\n##############################\n## Build the Neural Network ##\n##############################\n\n# Remove previous weights, bias, inputs, etc..\ntf.reset_default_graph()\n\n# Inputs\nx = neural_net_image_input((32, 32, 3))\ny = neural_net_label_input(10)\nkeep_prob = neural_net_keep_prob_input()\n\n# Model\nlogits = conv_net(x, keep_prob)\n\n# Name logits Tensor, so that it can be loaded from disk after training\nlogits = tf.identity(logits, name='logits')\n\n# Loss and Optimizer\ncost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))\noptimizer = tf.train.AdamOptimizer().minimize(cost)\n\n# 
Accuracy\ncorrect_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')\n\ntests.test_conv_net(conv_net)", "Train the Neural Network\nSingle Optimization\nImplement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:\n* x for image input\n* y for labels\n* keep_prob for keep probability for dropout\nThis function will be called for each batch, so tf.global_variables_initializer() has already been called.\nNote: Nothing needs to be returned. This function is only optimizing the neural network.", "def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):\n \"\"\"\n Optimize the session on a batch of images and labels\n : session: Current TensorFlow session\n : optimizer: TensorFlow optimizer function\n : keep_probability: keep probability\n : feature_batch: Batch of Numpy image data\n : label_batch: Batch of Numpy label data\n \"\"\"\n session.run(optimizer, feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability})\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_train_nn(train_neural_network)", "Show Stats\nImplement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.", "def print_stats(session, feature_batch, label_batch, cost, accuracy):\n \"\"\"\n Print information about loss and validation accuracy\n : session: Current TensorFlow session\n : feature_batch: Batch of Numpy image data\n : label_batch: Batch of Numpy label data\n : cost: TensorFlow cost function\n : accuracy: TensorFlow accuracy function\n \"\"\"\n # Use a keep probability of 1.0 for all evaluations, per the instructions\n # above, so dropout is disabled while measuring loss and accuracy.\n loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})\n acc = session.run(accuracy, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})\n validation_accuracy = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})\n\n print('cost: {}'.format(loss))\n print('accuracy: {}'.format(acc))\n print('validation accuracy: {}'.format(validation_accuracy))", "Hyperparameters\nTune the following parameters:\n* Set epochs to the number of iterations until the network stops learning or starts overfitting\n* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:\n * 64\n * 128\n * 256\n * ...\n* Set keep_probability to the probability of keeping a node using dropout", "# TODO: Tune Parameters\nepochs = 70\nbatch_size = 256\nkeep_probability = 0.9", "Train on a Single CIFAR-10 Batch\nInstead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. 
Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nprint('Checking the Training on a Single Batch...')\nwith tf.Session() as sess:\n # Initializing the variables\n sess.run(tf.global_variables_initializer())\n \n # Training cycle\n for epoch in range(epochs):\n batch_i = 1\n for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):\n train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)\n print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')\n print_stats(sess, batch_features, batch_labels, cost, accuracy)", "Fully Train the Model\nNow that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nsave_model_path = './image_classification'\n\nprint('Training...')\nwith tf.Session() as sess:\n # Initializing the variables\n sess.run(tf.global_variables_initializer())\n \n # Training cycle\n for epoch in range(epochs):\n # Loop over all batches\n n_batches = 5\n for batch_i in range(1, n_batches + 1):\n for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):\n train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)\n print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')\n print_stats(sess, batch_features, batch_labels, cost, accuracy)\n \n # Save Model\n saver = tf.train.Saver()\n save_path = saver.save(sess, save_model_path)", "Checkpoint\nThe model has been saved to disk.\nTest Model\nTest your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. 
If you don't, keep tweaking the model architecture and parameters.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport tensorflow as tf\nimport pickle\nimport helper\nimport random\n\n# Set batch size if not already set\ntry:\n if batch_size:\n pass\nexcept NameError:\n batch_size = 64\n\nsave_model_path = './image_classification'\nn_samples = 4\ntop_n_predictions = 3\n\ndef test_model():\n \"\"\"\n Test the saved model against the test dataset\n \"\"\"\n\n test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))\n loaded_graph = tf.Graph()\n\n with tf.Session(graph=loaded_graph) as sess:\n # Load model\n loader = tf.train.import_meta_graph(save_model_path + '.meta')\n loader.restore(sess, save_model_path)\n\n # Get Tensors from loaded model\n loaded_x = loaded_graph.get_tensor_by_name('x:0')\n loaded_y = loaded_graph.get_tensor_by_name('y:0')\n loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')\n loaded_logits = loaded_graph.get_tensor_by_name('logits:0')\n loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')\n \n # Get accuracy in batches for memory limitations\n test_batch_acc_total = 0\n test_batch_count = 0\n \n for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):\n test_batch_acc_total += sess.run(\n loaded_acc,\n feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})\n test_batch_count += 1\n\n print('Testing Accuracy: {}\\n'.format(test_batch_acc_total/test_batch_count))\n\n # Print Random Samples\n random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))\n random_test_predictions = sess.run(\n tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),\n feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})\n helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)\n\n\ntest_model()", "Why 50-80% Accuracy?\nYou might be wondering why you can't get an accuracy any higher. First things first, 50% isn't bad for a simple CNN. Pure guessing would get you 10% accuracy. However, you might notice people are getting scores well above 80%. That's because we haven't taught you all there is to know about neural networks. We still need to cover a few more techniques.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_image_classification.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission." ]
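A hedged aside, not part of the original project submission: the closing note above says a few more techniques are needed to climb past this accuracy range. One of the simplest is data augmentation. The sketch below is my own (the function name and parameters are assumptions, and it assumes batches arrive as Numpy arrays shaped (batch, 32, 32, 3)); it shows how random horizontal flips could be applied to each training batch before it is passed to train_neural_network.

```python
import numpy as np

def augment_batch(features, labels, flip_prob=0.5, seed=None):
    """Randomly mirror a fraction of the images left-to-right.

    features: Numpy array shaped (batch, height, width, channels)
    labels: one-hot labels, returned unchanged
    """
    rng = np.random.RandomState(seed)
    flipped = features.copy()
    # Pick which images in the batch to flip.
    mask = rng.rand(features.shape[0]) < flip_prob
    # Axis 2 is the width axis, so ::-1 mirrors each selected image.
    flipped[mask] = flipped[mask][:, :, ::-1, :]
    return flipped, labels

# Hypothetical usage inside the training loop shown earlier:
# batch_features, batch_labels = augment_batch(batch_features, batch_labels)
# train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
```

Horizontal flipping is usually the first augmentation people reach for on CIFAR-10 because a mirrored image keeps its class, so no label work is needed.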
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
albahnsen/ML_RiskManagement
exercises/08-fraud_ensemble_bagging.ipynb
mit
[ "Exercise 08\n\nFraud Detection Dataset from Microsoft Azure: data\n\nFraud detection is one of the earliest industrial applications of data mining and machine learning. Fraud detection is typically handled as a binary classification problem, but the class population is unbalanced because instances of fraud are usually very rare compared to the overall volume of transactions. Moreover, when fraudulent transactions are discovered, the business typically takes measures to block the accounts from transacting to prevent further losses.", "import pandas as pd\nimport zipfile\nwith zipfile.ZipFile('../datasets/fraud_detection.csv.zip', 'r') as z:\n f = z.open('15_fraud_detection.csv')\n data = pd.io.parsers.read_table(f, index_col=0, sep=',')\ndata.head()\n\nX = data.drop(['Label'], axis=1)\ny = data['Label']\ny.value_counts(normalize=True)", "Exercise 08.1\nEstimate Logistic Regression, GaussianNB, K-nearest neighbors, and Decision Tree classifiers\nEvaluate using the following metrics:\n* Accuracy\n* F1-Score\n* F_Beta-Score (Beta=10)\nComment on the results\nCombine the classifiers and comment", "from sklearn.linear_model import LogisticRegression\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.neighbors import KNeighborsClassifier\n\nmodels = {'lr': LogisticRegression(),\n 'dt': DecisionTreeClassifier(),\n 'nb': GaussianNB(),\n 'nn': KNeighborsClassifier()}\n\nfrom sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2)\n# Train all the models\nfor model in models.keys():\n models[model].fit(X_train, y_train)\n\n# predict test for each model\ny_pred = pd.DataFrame(index=X_test.index, columns=models.keys())\nfor model in models.keys():\n y_pred[model] = models[model].predict(X_test)\ny_pred.sample(10)\n\ny_pred_ensemble1 = (y_pred.mean(axis=1) > 0.5).astype(int)\n\ny_pred_ensemble1.mean()\n\nfrom sklearn.metrics import accuracy_score, f1_score, recall_score, precision_score\n\nstats = {'acc': accuracy_score,\n 'f1': f1_score,\n 'rec': recall_score,\n 'pre': precision_score}\nres = pd.DataFrame(index=models.keys(), columns=stats.keys())\n\nfor model in models.keys():\n for stat in stats.keys():\n res.loc[model, stat] = stats[stat](y_test, y_pred[model])\n\nres\n\nres.loc['ensemble1'] = 0\nfor stat in stats.keys():\n res.loc['ensemble1', stat] = stats[stat](y_test, y_pred_ensemble1)\n\nres", "Exercise 08.2\nApply random undersampling with a target percentage of 0.5\nHow do the results change? (One possible approach is sketched after the cell list below.)\nExercise 08.3\nFor each model, estimate a BaggingClassifier of 100 models using the under-sampled datasets\nExercise 08.4\nUsing the under-sampled dataset\nEvaluate a RandomForestClassifier and compare the results\nChange n_estimators=100; what happens?" ]
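Exercise 08.2 is left for the reader above. As a hedged sketch of one possible approach (the helper function and variable names are my own, not part of the exercise materials), random undersampling to a 0.5 target percentage could look like this:

```python
import numpy as np
import pandas as pd

def undersample(X, y, target_percentage=0.5, seed=42):
    """Randomly drop majority-class rows until frauds make up target_percentage."""
    rng = np.random.RandomState(seed)
    pos_idx = y[y == 1].index
    neg_idx = y[y == 0].index
    # Number of negatives to keep so positives reach the target share.
    n_neg = int(len(pos_idx) * (1 - target_percentage) / target_percentage)
    keep_neg = rng.choice(neg_idx, size=n_neg, replace=False)
    keep = pos_idx.union(pd.Index(keep_neg))
    return X.loc[keep], y.loc[keep]

X_train_us, y_train_us = undersample(X_train, y_train)
# Refit each model on (X_train_us, y_train_us) but keep evaluating on the
# untouched X_test/y_test, then compare against the res table: recall should
# rise sharply while precision and accuracy drop.
```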
[ "markdown", "code", "markdown", "code", "markdown" ]
arogozhnikov/einops
docs/1-einops-basics.ipynb
mit
[ "Einops tutorial, part 1: basics\n<!-- <img src='http://arogozhnikov.github.io/images/einops/einops_logo_350x350.png' height=\"80\" /> -->\n\nWelcome to einops-land!\nWe don't write \npython\ny = x.transpose(0, 2, 3, 1)\nWe write comprehensible code\npython\ny = rearrange(x, 'b c h w -&gt; b h w c')\neinops supports widely used tensor packages (such as numpy, pytorch, chainer, gluon, tensorflow), and extends them.\nWhat's in this tutorial?\n\nfundamentals: reordering, composition and decomposition of axes\noperations: rearrange, reduce, repeat\nhow much you can do with a single operation!\n\nPreparations", "# Examples are given for numpy. This code also setups ipython/jupyter\n# so that numpy arrays in the output are displayed as images\nimport numpy\nfrom utils import display_np_arrays_as_images\ndisplay_np_arrays_as_images()", "Load a batch of images to play with", "ims = numpy.load('./resources/test_images.npy', allow_pickle=False)\n# There are 6 images of shape 96x96 with 3 color channels packed into tensor\nprint(ims.shape, ims.dtype)\n\n# display the first image (whole 4d tensor can't be rendered)\nims[0]\n\n# second image in a batch\nims[1]\n\n# we'll use three operations\nfrom einops import rearrange, reduce, repeat\n\n# rearrange, as its name suggests, rearranges elements\n# below we swapped height and width.\n# In other words, transposed first two axes (dimensions)\nrearrange(ims[0], 'h w c -> w h c')", "Composition of axes\ntransposition is very common and useful, but let's move to other capabilities provided by einops", "# einops allows seamlessly composing batch and height to a new height dimension\n# We just rendered all images by collapsing to 3d tensor!\nrearrange(ims, 'b h w c -> (b h) w c')\n\n# or compose a new dimension of batch and width\nrearrange(ims, 'b h w c -> h (b w) c')\n\n# resulting dimensions are computed very simply\n# length of newly composed axis is a product of components\n# [6, 96, 96, 3] -> [96, (6 * 96), 3]\nrearrange(ims, 'b h w c -> h (b w) c').shape\n\n# we can compose more than two axes. \n# let's flatten 4d array into 1d, resulting array has as many elements as the original\nrearrange(ims, 'b h w c -> (b h w c)').shape", "Decomposition of axis", "# decomposition is the inverse process - represent an axis as a combination of new axes\n# several decompositions possible, so b1=2 is to decompose 6 to b1=2 and b2=3\nrearrange(ims, '(b1 b2) h w c -> b1 b2 h w c ', b1=2).shape\n\n# finally, combine composition and decomposition:\nrearrange(ims, '(b1 b2) h w c -> (b1 h) (b2 w) c ', b1=2)\n\n# slightly different composition: b1 is merged with width, b2 with height\n# ... so letters are ordered by w then by h\nrearrange(ims, '(b1 b2) h w c -> (b2 h) (b1 w) c ', b1=2)\n\n# move part of width dimension to height. \n# we should call this width-to-height as image width shrunk by 2 and height doubled. 
\n# but all pixels are the same!\n# Can you write reverse operation (height-to-width)?\nrearrange(ims, 'b h (w w2) c -> (h w2) (b w) c', w2=2)", "Order of axes matters", "# compare with the next example\nrearrange(ims, 'b h w c -> h (b w) c')\n\n# order of axes in composition is different\n# rule is just as for digits in the number: leftmost digit is the most significant, \n# while neighboring numbers differ in the rightmost axis.\n\n# you can also think of this as lexicographic sort\nrearrange(ims, 'b h w c -> h (w b) c')\n\n# what if b1 and b2 are reordered before composing to width?\nrearrange(ims, '(b1 b2) h w c -> h (b1 b2 w) c ', b1=2) # produces 'einops'\nrearrange(ims, '(b1 b2) h w c -> h (b2 b1 w) c ', b1=2) # produces 'eoipns'", "Meet einops.reduce\nIn einops-land you don't need to guess what happened\npython\nx.mean(-1)\nBecause you write what the operation does\npython\nreduce(x, 'b h w c -&gt; b h w', 'mean')\nif axis is not present in the output — you guessed it — axis was reduced.", "# average over batch\nreduce(ims, 'b h w c -> h w c', 'mean')\n\n# the previous is identical to familiar:\nims.mean(axis=0)\n# but is so much more readable\n\n# Example of reducing of several axes \n# besides mean, there are also min, max, sum, prod\nreduce(ims, 'b h w c -> h w', 'min')\n\n# this is mean-pooling with 2x2 kernel\n# image is split into 2x2 patches, each patch is averaged\nreduce(ims, 'b (h h2) (w w2) c -> h (b w) c', 'mean', h2=2, w2=2)\n\n# max-pooling is similar\n# result is not as smooth as for mean-pooling\nreduce(ims, 'b (h h2) (w w2) c -> h (b w) c', 'max', h2=2, w2=2)\n\n# yet another example. Can you compute result shape?\nreduce(ims, '(b1 b2) h w c -> (b2 h) (b1 w)', 'mean', b1=2)", "Stack and concatenate", "# rearrange can also take care of lists of arrays with the same shape\nx = list(ims)\nprint(type(x), 'with', len(x), 'tensors of shape', x[0].shape)\n# that's how we can stack inputs\n# \"list axis\" becomes first (\"b\" in this case), and we left it there\nrearrange(x, 'b h w c -> b h w c').shape\n\n# but new axis can appear in the other place:\nrearrange(x, 'b h w c -> h w c b').shape\n\n# that's equivalent to numpy stacking, but written more explicitly\nnumpy.array_equal(rearrange(x, 'b h w c -> h w c b'), numpy.stack(x, axis=3))\n\n# ... or we can concatenate along axes\nrearrange(x, 'b h w c -> h (b w) c').shape\n\n# which is equivalent to concatenation\nnumpy.array_equal(rearrange(x, 'b h w c -> h (b w) c'), numpy.concatenate(x, axis=1))", "Addition or removal of axes\nYou can write 1 to create a new axis of length 1. Similarly you can remove such axis.\nThere is also a synonym () that you can use. That's a composition of zero axes and it also has a unit length.", "x = rearrange(ims, 'b h w c -> b 1 h w 1 c') # functionality of numpy.expand_dims\nprint(x.shape)\nprint(rearrange(x, 'b 1 h w 1 c -> b h w c').shape) # functionality of numpy.squeeze\n\n# compute max in each image individually, then show a difference \nx = reduce(ims, 'b h w c -> b () () c', 'max') - ims\nrearrange(x, 'b h w c -> h (b w) c')", "Repeating elements\nThird operation we introduce is repeat", "# repeat along a new axis. 
New axis can be placed anywhere\nrepeat(ims[0], 'h w c -> h new_axis w c', new_axis=5).shape\n\n# shortcut\nrepeat(ims[0], 'h w c -> h 5 w c').shape\n\n# repeat along w (existing axis)\nrepeat(ims[0], 'h w c -> h (repeat w) c', repeat=3)\n\n# repeat along two existing axes\nrepeat(ims[0], 'h w c -> (2 h) (2 w) c')\n\n# order of axes matters as usual - you can repeat each element (pixel) 3 times \n# by changing order in parenthesis\nrepeat(ims[0], 'h w c -> h (w repeat) c', repeat=3)", "Note: repeat operation covers functionality identical to numpy.repeat, numpy.tile and actually more than that.\nReduce ⇆ repeat\nreduce and repeat are like opposite of each other: first one reduces amount of elements, second one increases.\nIn the following example each image is repeated first, then we reduce over new axis to get back original tensor. Notice that operation patterns are \"reverse\" of each other", "repeated = repeat(ims, 'b h w c -> b h new_axis w c', new_axis=2)\nreduced = reduce(repeated, 'b h new_axis w c -> b h w c', 'min')\nassert numpy.array_equal(ims, reduced)", "Fancy examples in random order\n(a.k.a. mad designer gallery)", "# interweaving pixels of different pictures\n# all letters are observable\nrearrange(ims, '(b1 b2) h w c -> (h b1) (w b2) c ', b1=2)\n\n# interweaving along vertical for couples of images\nrearrange(ims, '(b1 b2) h w c -> (h b1) (b2 w) c', b1=2)\n\n# interweaving lines for couples of images\n# exercise: achieve the same result without einops in your favourite framework\nreduce(ims, '(b1 b2) h w c -> h (b2 w) c', 'max', b1=2)\n\n# color can be also composed into dimension\n# ... while image is downsampled\nreduce(ims, 'b (h 2) (w 2) c -> (c h) (b w)', 'mean')\n\n# disproportionate resize\nreduce(ims, 'b (h 4) (w 3) c -> (h) (b w)', 'mean')\n\n# spilt each image in two halves, compute mean of the two\nreduce(ims, 'b (h1 h2) w c -> h2 (b w)', 'mean', h1=2)\n\n# split in small patches and transpose each patch\nrearrange(ims, 'b (h1 h2) (w1 w2) c -> (h1 w2) (b w1 h2) c', h2=8, w2=8)\n\n# stop me someone!\nrearrange(ims, 'b (h1 h2 h3) (w1 w2 w3) c -> (h1 w2 h3) (b w1 h2 w3) c', h2=2, w2=2, w3=2, h3=2)\n\nrearrange(ims, '(b1 b2) (h1 h2) (w1 w2) c -> (h1 b1 h2) (w1 b2 w2) c', h1=3, w1=3, b2=3)\n\n# patterns can be arbitrarily complicated\nreduce(ims, '(b1 b2) (h1 h2 h3) (w1 w2 w3) c -> (h1 w1 h3) (b1 w2 h2 w3 b2) c', 'mean', \n h2=2, w1=2, w3=2, h3=2, b2=2)\n\n# subtract background in each image individually and normalize\n# pay attention to () - this is composition of 0 axis, a dummy axis with 1 element.\nim2 = reduce(ims, 'b h w c -> b () () c', 'max') - ims\nim2 /= reduce(im2, 'b h w c -> b () () c', 'max')\nrearrange(im2, 'b h w c -> h (b w) c')\n\n# pixelate: first downscale by averaging, then upscale back using the same pattern\naveraged = reduce(ims, 'b (h h2) (w w2) c -> b h w c', 'mean', h2=6, w2=8)\nrepeat(averaged, 'b h w c -> (h h2) (b w w2) c', h2=6, w2=8)\n\nrearrange(ims, 'b h w c -> w (b h) c')\n\n# let's bring color dimension as part of horizontal axis\n# at the same time horizontal axis is downsampled by 2x\nreduce(ims, 'b (h h2) (w w2) c -> (h w2) (b w c)', 'mean', h2=3, w2=3)", "Ok, numpy is fun, but how do I use einops with some other framework?\nIf that's what you've done with ims being numpy array:\npython\nrearrange(ims, 'b h w c -&gt; w (b h) c')\nThat's how you adapt the code for other frameworks:\n```python\npytorch:\nrearrange(ims, 'b h w c -> w (b h) c')\ntensorflow:\nrearrange(ims, 'b h w c -> w (b h) c')\nchainer:\nrearrange(ims, 'b h w c 
-> w (b h) c')\ngluon:\nrearrange(ims, 'b h w c -> w (b h) c')\ncupy:\nrearrange(ims, 'b h w c -> w (b h) c')\njax:\nrearrange(ims, 'b h w c -> w (b h) c')\n...well, you got the idea.\n```\nEinops allows backpropagation as if all operations were native to framework.\nOperations do not change when moving to another framework - einops notation is universal\nSummary\n\nrearrange doesn't change number of elements and covers different numpy functions (like transpose, reshape, stack, concatenate, squeeze and expand_dims)\nreduce combines same reordering syntax with reductions (mean, min, max, sum, prod, and any others)\nrepeat additionally covers repeating and tiling\n\ncomposition and decomposition of axes are a corner stone, they can and should be used together\n\n\nSecond part of tutorial shows how einops works with other frameworks\n\nThird part of tutorial shows how to improve your DL code with einops" ]
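A hedged footnote to the backpropagation claim above (this snippet is mine, not part of the original tutorial, and assumes PyTorch is installed): because einops dispatches to native framework ops, gradients flow through rearrange and reduce exactly as through the framework's own reshapes and reductions.

```python
import torch
from einops import rearrange, reduce

x = torch.randn(2, 3, 4, 4, requires_grad=True)
# Flatten each sample, then sum it: both steps stay on torch's autograd tape.
y = reduce(rearrange(x, 'b c h w -> b (c h w)'), 'b d -> b', 'sum')
y.sum().backward()
print(x.grad.shape)  # torch.Size([2, 3, 4, 4]); every gradient entry is 1 here
```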
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
joeandrewkey/deep-learning
first-neural-network/Your_first_neural_network.ipynb
mit
[ "Your first neural network\nIn this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.", "%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt", "Load and prepare the data\nA critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!", "data_path = 'Bike-Sharing-Dataset/hour.csv'\n\nrides = pd.read_csv(data_path)\n\nrides.head()", "Checking out the data\nThis dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.\nBelow is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.", "rides[:24*10].plot(x='dteday', y='cnt')", "Dummy variables\nHere we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().", "dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']\nfor each in dummy_fields:\n dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)\n rides = pd.concat([rides, dummies], axis=1)\n\nfields_to_drop = ['instant', 'dteday', 'season', 'weathersit', \n 'weekday', 'atemp', 'mnth', 'workingday', 'hr']\ndata = rides.drop(fields_to_drop, axis=1)\ndata.head()", "Scaling target variables\nTo make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.\nThe scaling factors are saved so we can go backwards when we use the network for predictions.", "quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']\n# Store scalings in a dictionary so we can convert back later\nscaled_features = {}\nfor each in quant_features:\n mean, std = data[each].mean(), data[each].std()\n scaled_features[each] = [mean, std]\n data.loc[:, each] = (data[each] - mean)/std", "Splitting the data into training, testing, and validation sets\nWe'll save the data for the last approximately 21 days to use as a test set after we've trained the network. 
We'll use this set to make predictions and compare them with the actual number of riders.", "# Save data for approximately the last 21 days \ntest_data = data[-21*24:]\n\n# Now remove the test data from the data set \ndata = data[:-21*24]\n\n# Separate the data into features and targets\ntarget_fields = ['cnt', 'casual', 'registered']\nfeatures, targets = data.drop(target_fields, axis=1), data[target_fields]\ntest_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]", "We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).", "# Hold out the last 60 days or so of the remaining data as a validation set\ntrain_features, train_targets = features[:-60*24], targets[:-60*24]\nval_features, val_targets = features[-60*24:], targets[-60*24:]", "Time to build the network\nBelow you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.\n<img src=\"assets/neural_network.png\" width=300px>\nThe network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.\nWe use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.\n\nHint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.\n\nBelow, you have these tasks:\n1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.\n2. Implement the forward pass in the train method.\n3. Implement the backpropagation algorithm in the train method, including calculating the output error.\n4. 
Implement the forward pass in the run method.", "class NeuralNetwork(object):\n def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):\n # Set number of nodes in input, hidden and output layers.\n self.input_nodes = input_nodes\n self.hidden_nodes = hidden_nodes\n self.output_nodes = output_nodes\n\n # Initialize weights\n self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5, \n (self.input_nodes, self.hidden_nodes))\n\n self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5, \n (self.hidden_nodes, self.output_nodes))\n\n self.lr = learning_rate\n \n #### TODO: Set self.activation_function to your implemented sigmoid function ####\n # Replace 0 with your sigmoid calculation.\n self.activation_function = lambda x : 1 / (1 + np.exp(-x)) \n \n def train(self, features, targets):\n ''' Train the network on batch of features and targets. \n \n Arguments\n ---------\n \n features: 2D array, each row is one data record, each column is a feature\n targets: 1D array of target values\n \n '''\n n_records = features.shape[0]\n delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)\n delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)\n for X, y in zip(features, targets):\n #### Implement the forward pass here ####\n\n ### Forward pass ###\n # hidden_inputs: signals into hidden layer\n # hidden_outputs: signals from hidden layer\n # final_inputs: signals into final output layer\n # final_outputs: signals from final output layer\n\n hidden_inputs, hidden_outputs, final_inputs, final_outputs = self.forward_pass(X)\n \n #### Implement the backward pass here ####\n ### Backward pass ###\n error, output_error_term, hidden_error, hidden_error_term = self.backward_pass_errors(y, final_outputs, hidden_outputs)\n # Weight step (input to hidden)\n # the hidden_error_term by the transpose(inputs)\n delta_weights_i_h += hidden_error_term * X[:, None]\n # Weight step (hidden to output)\n # the output_error_term by the transpose(hidden_outputs)\n delta_weights_h_o += error * hidden_outputs[:, None]\n\n # TODO: Update the weights - Replace these values with your calculations.\n # update hidden-to-output weights with gradient descent step\n self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records \n # update input-to-hidden weights with gradient descent step\n self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records \n \n def backward_pass_errors(self, targets, final_outputs, hidden_outputs):\n \"\"\"Calculate the errors and error terms for the network, output layer, and hidden layer\n Notes about the shapes\n Numbers\n # i: number of input units, 3 in our tests\n # j: number of hidden units, 2 in our tests\n # k: number of output units, 1 in our tests\n Matrices\n # y: a 1 x k matrix of target outputs\n # final_outputs: a 1 x k matrix of network outputs\n # hidden_outputs: sigmoid of hidden_inputs, a 1 x j matrix\n # error: target - final_outputs, a 1 x k matrix for each of the k output units\n # output_error_term = error, a 1 x k matix\n # hidden_error = error DOT transpose(w_h_o) or 1 x k DOT k x j, yields a 1 x j matrix\n for each of the j hidden units\n # hidden_error_term = hidden_error * activation derivative of hidden_outputs,\n 1 x j * 1 x j, yields a 1 x j matrix for each of the j hidden units\n \"\"\"\n # Output layer error is the difference between desired target and actual output.\n error = targets - final_outputs \n # Get the output error, which is just the error\n output_error_term 
= error\n # Take the output error term and scale by the weights from the hidden layer to that output\n hidden_error = np.dot(output_error_term, self.weights_hidden_to_output.T)\n # Use derivative of activation for the hidden outputs\n hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)\n\n return error, output_error_term, hidden_error, hidden_error_term\n\n def forward_pass(self, features):\n \"\"\"Calculate the values for the inputs and outputs of the hidden and final layers\n Notes about the shapes\n Numbers\n # i: number of input units, 3 in our tests\n # j: number of hidden units, 2 in our tests\n # k: number of output units, 1 in our tests\n Matrices\n # features: a 1 x i row vector\n # w_i_h: i x j matrix of weights from input units to hidden units\n # w_h_o: j x k matrix of weights from hidden units to output units\n # hidden_inputs: features DOT w_i_h, yields a 1 x j matrix, for each of the j hidden units\n # hidden_outputs: sigmoid of hidden_inputs, also 1 x j matrix\n # final_inputs: hidden_outputs DOT w_h_o, so 1 x j DOT j x k, yields 1 x k matrix\n # final_outputs: same as the final inputs, so 1 x k matrix\n \"\"\"\n hidden_inputs = np.dot(features, self.weights_input_to_hidden)\n hidden_outputs = self.activation_function(hidden_inputs)\n final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output)\n final_outputs = final_inputs\n return hidden_inputs, hidden_outputs, final_inputs, final_outputs\n\n def run(self, features):\n ''' Run a forward pass through the network with input features \n \n Arguments\n ---------\n features: 1D array of feature values\n '''\n \n #### Implement the forward pass here ####\n hidden_inputs, hidden_outputs, final_inputs, final_outputs = self.forward_pass(features)\n \n return final_outputs\n\ndef MSE(y, Y):\n return np.mean((y-Y)**2)", "Unit tests\nRun these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. 
These tests must all be successful to pass the project.", "import unittest\n\ninputs = np.array([[0.5, -0.2, 0.1]])\ntargets = np.array([[0.4]])\ntest_w_i_h = np.array([[0.1, -0.2],\n [0.4, 0.5],\n [-0.3, 0.2]])\ntest_w_h_o = np.array([[0.3],\n [-0.1]])\n\nclass TestMethods(unittest.TestCase):\n \n ##########\n # Unit tests for data loading\n ##########\n \n def test_data_path(self):\n # Test that file path to dataset has been unaltered\n self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')\n \n def test_data_loaded(self):\n # Test that data frame loaded\n self.assertTrue(isinstance(rides, pd.DataFrame))\n \n ##########\n # Unit tests for network functionality\n ##########\n\n def test_activation(self):\n network = NeuralNetwork(3, 2, 1, 0.5)\n # Test that the activation function is a sigmoid\n self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))\n\n def test_train(self):\n # Test that weights are updated correctly on training\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n\n network.train(inputs, targets)\n self.assertTrue(np.allclose(network.weights_hidden_to_output, \n np.array([[ 0.37275328], \n [-0.03172939]])))\n self.assertTrue(np.allclose(network.weights_input_to_hidden,\n np.array([[ 0.10562014, -0.20185996], \n [0.39775194, 0.50074398], \n [-0.29887597, 0.19962801]])))\n\n def test_run(self):\n # Test correctness of run method\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n\n self.assertTrue(np.allclose(network.run(inputs), 0.09998924))\n\nsuite = unittest.TestLoader().loadTestsFromModule(TestMethods())\nunittest.TextTestRunner(verbosity=1).run(suite)", "Training the network\nHere you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.\nYou'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.\nChoose the number of iterations\nThis is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.\nChoose the learning rate\nThis scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. 
Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.\nChoose the number of hidden nodes\nUp to a point, the more hidden nodes you have, the more accurate the model's predictions will be. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.", "import sys\n\n### Set the hyperparameters here ###\n# Results from earlier hyperparameter settings\n# (iterations, learning_rate, hidden_nodes) -> (Training Loss, Validation Loss)\n# (5000, 0.8, 8) -> (0.059, 0.148)\n\niterations = 5000 #100\nlearning_rate = 0.9 #0.1\nhidden_nodes = 8 #2\noutput_nodes = 1\n\nN_i = train_features.shape[1]\nnetwork = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)\n\nlosses = {'train':[], 'validation':[]}\nfor ii in range(iterations):\n # Go through a random batch of 128 records from the training data set\n # (.loc replaces the deprecated .ix indexer)\n batch = np.random.choice(train_features.index, size=128)\n X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt']\n \n network.train(X, y)\n \n # Printing out the training progress\n train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)\n val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)\n sys.stdout.write(\"\\rProgress: {:2.1f}\".format(100 * ii/float(iterations)) \\\n + \"% ... Training loss: \" + str(train_loss)[:5] \\\n + \" ... Validation loss: \" + str(val_loss)[:5])\n sys.stdout.flush()\n \n losses['train'].append(train_loss)\n losses['validation'].append(val_loss)\n\n\nplt.plot(losses['train'], label='Training loss')\nplt.plot(losses['validation'], label='Validation loss')\nplt.legend()\n_ = plt.ylim()", "Check out your predictions\nHere, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.", "fig, ax = plt.subplots(figsize=(8,4))\n\nmean, std = scaled_features['cnt']\npredictions = network.run(test_features).T*std + mean\nax.plot(predictions[0], label='Prediction')\nax.plot((test_targets['cnt']*std + mean).values, label='Data')\nax.set_xlim(right=len(predictions))\nax.legend()\n\ndates = pd.to_datetime(rides.loc[test_data.index]['dteday'])\ndates = dates.apply(lambda d: d.strftime('%b %d'))\nax.set_xticks(np.arange(len(dates))[12::24])\n_ = ax.set_xticklabels(dates[12::24], rotation=45)", "OPTIONAL: Thinking about your results (this question will not be evaluated in the rubric).\nAnswer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?\n\nNote: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter\n\nYour answer below\nI have lingering concerns with respect to selecting the best learning rate.\nIn my first submission, after setting the following hyperparameters:\npython\nlearning_rate = 0.4 #0.1\nhidden_nodes = 8 #2\nAfter the first review I was advised to lower the learning rate:\n\nYou notice your loss plot can be a bit spiky? That indicates a high learning rate usually (but more prominent in more complex models). 
At such high learning rates, the network doesn't usually converge because the weight update steps are too large and the weights don't end up converging. So there's overshooting of the minimum of the cost function. You can experiment with different values here (lower it a bit and try).\n\nIn my second submission, I followed the recommendation and lowered my learning rate to:\npython\nlearning_rate = 0.18 #0.1\nAfter the second review I was instead advised to raise the learning rate\n\nPlease note that the learning rate is the learn rate divided by the n_records, here you're aiming at a very very low learn rate. I would suggest since you anyway have to change other things, to change this one as well. Set it to ~0.8 or 0.9.\n\nIt seems if I set the learning_rate to 0.9, things work out fine, but I don't know if in general I should see a spiky loss plot as a clue to lower the learning rate or use the recommended guide to set the learning rate higher. Any additional recommendations are welcome." ]
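A hedged sketch prompted by the learning-rate question above (the sweep function is my own, not part of the submitted project): since each weight update divides by the number of records in the batch, candidate rates can be compared directly with a short sweep against the validation loss, reusing the NeuralNetwork class and data splits defined earlier.

```python
import numpy as np

def quick_lr_sweep(rates, iterations=500, hidden_nodes=8, seed=21):
    """Train briefly at each candidate rate and report validation MSE."""
    results = {}
    for lr in rates:
        np.random.seed(seed)  # identical weight init and batches for a fair comparison
        net = NeuralNetwork(train_features.shape[1], hidden_nodes, 1, lr)
        for _ in range(iterations):
            batch = np.random.choice(train_features.index, size=128)
            net.train(train_features.loc[batch].values,
                      train_targets.loc[batch]['cnt'])
        results[lr] = MSE(net.run(val_features).T, val_targets['cnt'].values)
    return results

# Spiky-but-fast versus smooth-but-slow shows up directly in these numbers.
print(quick_lr_sweep([0.1, 0.4, 0.9]))
```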
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
google/fuzzbench
analysis/notebooks/example.ipynb
apache-2.0
[ "This Colab demonstrates how to use the FuzzBench analysis library to show experiment results that might not be included in the default report. \nGet the data\nEach report contains a link to the raw data, e.g., see the bottom of our sample report: https://www.fuzzbench.com/reports/sample/index.html.\nFind all our published reports at https://www.fuzzbench.com/reports.", "!wget https://www.fuzzbench.com/reports/sample/data.csv.gz", "Get the code", "# Install requirements.\n!pip install Orange3 pandas scikit-posthocs scipy seaborn\n# Get FuzzBench\n!git clone https://github.com/google/fuzzbench.git\n# Add fuzzbench to PYTHONPATH.\nimport sys; sys.path.append('fuzzbench')", "Experiment results", "import pandas\nfrom fuzzbench.analysis import experiment_results, plotting\nfrom IPython.display import SVG, Image\n\n# Load the data and initialize ExperimentResults.\nexperiment_data = pandas.read_csv('data.csv.gz')\nfuzzer_names = experiment_data.fuzzer.unique()\nplotter = plotting.Plotter(fuzzer_names)\nresults = experiment_results.ExperimentResults(experiment_data, '.', plotter)", "Top level results", "results.summary_table", "Rank by median on benchmarks, then by average rank", "# The critical difference plot visualizes this ranking\nSVG(results.critical_difference_plot)\n\nresults.rank_by_median_and_average_rank.to_frame()", "Rank by pair-wise statistical test wins on benchmarks, then by average rank", "results.rank_by_stat_test_wins_and_average_rank.to_frame()", "Rank by median on benchmarks, then by average normalized score", "results.rank_by_median_and_average_normalized_score.to_frame()", "Rank by average rank on benchmarks, then by average rank", "results.rank_by_average_rank_and_average_rank.to_frame()", "Benchmark level results", "# List benchmarks\nbenchmarks = {b.name:b for b in results.benchmarks}\nfor benchmark_name in benchmarks.keys(): print(benchmark_name)\n\nsqlite = benchmarks['sqlite3_ossfuzz']\nSVG(sqlite.violin_plot)\n\nSVG(sqlite.coverage_growth_plot)\n\nSVG(sqlite.mann_whitney_plot)\n\n# Show p values\nsqlite.mann_whitney_p_values" ]
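As a hedged extra, not part of the original Colab: the same raw frame can be interrogated directly with pandas. Column names other than fuzzer are assumptions about the CSV layout here, so check experiment_data.columns before relying on this sketch.

```python
# Reproduce a simple per-benchmark median ranking straight from the raw data.
# Assumed columns: 'fuzzer', 'benchmark', 'time', 'edges_covered'.
snapshot = experiment_data[experiment_data.time == experiment_data.time.max()]
medians = (snapshot
           .groupby(['benchmark', 'fuzzer'])['edges_covered']
           .median()
           .unstack('fuzzer'))
# Rank fuzzers within each benchmark (1 = most coverage), then average ranks.
avg_rank = medians.rank(axis=1, ascending=False).mean().sort_values()
print(avg_rank)
```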
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
scoyote/RHealthDataImport
ImportAppleHealthXML.ipynb
mit
[ "Download, Parse and Interrogate Apple Health Export Data\nThe first part of this program is all about getting the Apple Health export and putting it into an analyzable format. At that point it can be analysed anywhere. The second part of this program is concerned with using the SAS Scripting Wrapper for Analytics Transfer (SWAT) Python library to transfer the data to SAS Viya, and analyze it there. The SWAT package provides native python language access to the SAS Viya codebase.\n\nhttps://github.com/sassoftware/python-swat\n\nThis file was created from a desire to get my hands on data collected by Apple Health, notably heart rate information collected by Apple Watch. For this to work, this file needs to be in a location accessible to Python code. A little bit of searching told me that iCloud file access is problematic and that there were already a number of ways of doing this with the Google API if the file was saved to Google Drive. I chose PyDrive. So for the end to end program to work with little user intervention, you will need to sign up for Google Drive, set up an application in the Google API and install the Google Drive app to your iPhone. \nThis may sound involved, and it is not necessary if you simply email the export file to yourself and copy it to a filesystem that Python can see. If you choose to do that, all of the Google Drive portion can be removed. I like the Google Drive process though as it enables a minimal manual work scenario.\nThis version requires the user to grant Google access, requiring some additional clicks, but it is not too much. I think it is possible to automate this to run without user intervention as well using security files.\nThe first step to enabling this process is exporting the data from Apple Health. As of this writing, open Apple Health and click on your user icon or photo. Near the bottom of the next page in the app will be a button or link called Export Health Data. Clicking on this will generate an XML file, zipped up. The next dialog will ask you where you want to save it. Options are to email, save to iCloud, message etc... Select Google Drive. Google Drive allows multiple files with the same name and this is accounted for by this program.", "import xml.etree.ElementTree as et\nimport pandas as pd\nimport numpy as np\nfrom datetime import *\n\nimport matplotlib.pyplot as plt\nimport re \nimport os.path\nimport zipfile\nimport pytz\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = 16, 8", "Authenticate with Google\nThis will open a browser to let you begin the process of authentication with an existing Google Drive account. This process will be separate from Python. For this to work, you will need to set up an Other Authentication OAuth credential at https://console.developers.google.com/apis/credentials, save the secret file in your root directory and a few other things that are detailed at https://pythonhosted.org/PyDrive/. The PyDrive instructions also show you how to set up your Google application. There are other methods for accessing the Google API from python, but this one seems pretty nice. 
\nThe first time through the process, regular sign in and two factor authentication is required (if you require two factor auth) but after that it is just a process of telling Google that it is ok for your Google application to access Drive.", "# Authenticate into Google Drive\nfrom pydrive.auth import GoogleAuth\n\ngauth = GoogleAuth()\ngauth.LocalWebserverAuth() ", "Download the most recent Apple Health export file\nNow that we are authenticated into Google Drive, use PyDrive to access the API and get to files stored.\nGoogle Drive allows multiple files with the same name, but it indexes them with the ID to keep them separate.\nIn this block, we make one pass of the file list where the file name is called export.zip, and save the row that corresponds with the most recent date. We will use that file id later to download the correct file that corresponds with the most recent date. Apple Health export names the file export.zip, and at the time this was written, there is no other option.", "from pydrive.drive import GoogleDrive\ndrive = GoogleDrive(gauth)\n\nfile_list = drive.ListFile({'q': \"'root' in parents and trashed=false\"}).GetList()\n\n# Step through the file list and find the most current export.zip file id, then use \n# that later to download the file to the local machine.\n# This may look a little old school, but these file lists will never be massive and \n# it is readable and easy one pass way to get the most current file using the \n# least (or low) amount of resouces\nselection_dt = datetime.strptime(\"2000-01-01T01:01:01.001Z\",\"%Y-%m-%dT%H:%M:%S.%fZ\")\nprint(\"Matching Files\")\nfor file1 in file_list: \n if re.search(\"^export-*\\d*.zip\",file1['title']):\n dt = datetime.strptime(file1['createdDate'],\"%Y-%m-%dT%H:%M:%S.%fZ\")\n if dt > selection_dt:\n selection_id = file1['id']\n selection_dt = dt\n print(' title: %s, id: %s createDate: %s' % (file1['title'], file1['id'], file1['createdDate']))\n\nif not os.path.exists('healthextract'):\n os.mkdir('healthextract')", "Download the file from Google Drive\nEnsure that the file downloaded is the latest file generated", "for file1 in file_list:\n if file1['id'] == selection_id:\n print('Downloading this file: %s, id: %s createDate: %s' % (file1['title'], file1['id'], file1['createdDate']))\n file1.GetContentFile(\"healthextract/export.zip\")", "Unzip the most current file to a holding directory", "zip_ref = zipfile.ZipFile('healthextract/export.zip', 'r')\nzip_ref.extractall('healthextract')\nzip_ref.close()", "Parse Apple Health Export document", "path = \"healthextract/apple_health_export/export.xml\"\ne = et.parse(path)\n#this was from an older iPhone, to demonstrate how to join files\nlegacy = et.parse(\"healthextract/apple_health_legacy/export.xml\")\n\n#<<TODO: Automate this process\n\n#legacyFilePath = \"healthextract/apple_health_legacy/export.xml\"\n#if os.path.exists(legacyFilePath):\n# legacy = et.parse(\"healthextract/apple_health_legacy/export.xml\")\n#else:\n# os.mkdir('healthextract/apple_health_legacy')\n ", "List XML headers by element count", "pd.Series([el.tag for el in e.iter()]).value_counts()", "List types for \"Record\" Header", "pd.Series([atype.get('type') for atype in e.findall('Record')]).value_counts()", "Extract Values to Data Frame\nTODO: Abstraction of the next code block", "import pytz\n\n#Extract the heartrate values, and get a timestamp from the xml\n# there is likely a more efficient way, though this is very fast\ndef txloc(xdate,fmt):\n eastern = pytz.timezone('US/Eastern')\n dte = 
xdate.astimezone(eastern)\n return datetime.strftime(dte,fmt)\n\ndef xmltodf(eltree, element,outvaluename):\n dt = []\n v = []\n for atype in eltree.findall('Record'):\n if atype.get('type') == element:\n dt.append(datetime.strptime(atype.get(\"startDate\"),\"%Y-%m-%d %H:%M:%S %z\"))\n v.append(atype.get(\"value\"))\n\n myd = pd.DataFrame({\"Create\":dt,outvaluename:v})\n colDict = {\"Year\":\"%Y\",\"Month\":\"%Y-%m\", \"Week\":\"%Y-%U\",\"Day\":\"%d\",\"Hour\":\"%H\",\"Days\":\"%Y-%m-%d\",\"Month-Day\":\"%m-%d\"}\n for col, fmt in colDict.items():\n myd[col] = myd['Create'].dt.tz_convert('US/Eastern').dt.strftime(fmt)\n\n myd[outvaluename] = myd[outvaluename].astype(float).astype(int)\n print('Extracting ' + outvaluename + ', type: ' + element)\n \n return(myd)\n\nHR_df = xmltodf(e,\"HKQuantityTypeIdentifierHeartRate\",\"HeartRate\")\n\nEX_df = xmltodf(e,\"HKQuantityTypeIdentifierAppleExerciseTime\",\"Extime\")\nEX_df.head()\n\n#comment this cell out if no legacy exports.\n# extract legacy data, create series for heartrate to join with newer data\n#HR_df_leg = xmltodf(legacy,\"HKQuantityTypeIdentifierHeartRate\",\"HeartRate\")\n#HR_df = pd.concat([HR_df_leg,HR_df])\n\n#import pytz\n#eastern = pytz.timezone('US/Eastern')\n#st = datetime.strptime('2017-08-12 23:45:00 -0400', \"%Y-%m-%d %H:%M:%S %z\")\n#ed = datetime.strptime('2017-08-13 00:15:00 -0400', \"%Y-%m-%d %H:%M:%S %z\")\n#HR_df['c2'] = HR_df['Create'].dt.tz_convert('US/Eastern').dt.strftime(\"%Y-%m-%d\")\n\n\n#HR_df[(HR_df['Create'] >= st) & (HR_df['Create'] <= ed) ].head(10)\n\n#reset plot - just for tinkering \nplt.rcParams['figure.figsize'] = 30, 8\n\nHR_df.boxplot(by='Month',column=\"HeartRate\", return_type='axes')\nplt.grid(axis='x')\nplt.title('All Months')\nplt.ylabel('Heart Rate')\nplt.ylim(40,140)\n\ndx = HR_df[HR_df['Year']=='2019'].boxplot(by='Week',column=\"HeartRate\", return_type='axes')\nplt.title('All Weeks')\nplt.ylabel('Heart Rate')\nplt.xticks(rotation=90)\nplt.grid(axis='x')\n[plt.axvline(_x, linewidth=1, color='blue') for _x in [10,12]]\nplt.ylim(40,140)\n\nmonthval = '2019-03'\n#monthval1 = '2017-09'\n#monthval2 = '2017-10'\n#HR_df[(HR_df['Month']==monthval1) | (HR_df['Month']== monthval2)].boxplot(by='Month-Day',column=\"HeartRate\", return_type='axes')\nHR_df[HR_df['Month']==monthval].boxplot(by='Month-Day',column=\"HeartRate\", return_type='axes')\nplt.grid(axis='x') \nplt.rcParams['figure.figsize'] = 16, 8\nplt.title('Daily for Month: '+ monthval)\nplt.ylabel('Heart Rate')\nplt.xticks(rotation=90)\nplt.ylim(40,140)\n\nHR_df[HR_df['Month']==monthval].boxplot(by='Hour',column=\"HeartRate\")\nplt.title('Hourly for Month: '+ monthval)\nplt.ylabel('Heart Rate')\nplt.grid(axis='x')\nplt.ylim(40,140)", "import calmap\nts = pd.Series(HR_df['HeartRate'].values, index=HR_df['Days'])\nts.index = pd.to_datetime(ts.index)\ntstot = ts.groupby(ts.index).median()\nplt.rcParams['figure.figsize'] = 16, 8\nimport warnings\nwarnings.simplefilter(action='ignore', category=FutureWarning)\ncalmap.yearplot(data=tstot,year=2017)\nFlag Chemotherapy Days for specific analysis\nThe next two cells provide the ability to introduce cycles that start on specific days and include this data in the datasets so that they can be overlaid in graphics. In the example below, there are three cycles of 21 days. The getDelta function returns the cycle number when tpp == 0 and the days since day 0 when tpp == 2. This allows the overlaying of the cycles, with the days since day 0 being overlaid.", "# This isnt efficient yet, just a first swipe. 
It functions as intended.\ndef getDelta(res,ttp,cyclelength):\n mz = [x if (x >= 0) & (x < cyclelength) else 999 for x in res]\n if ttp == 0:\n return(mz.index(min(mz))+1)\n else:\n return(mz[mz.index(min(mz))])\n\n#chemodays = np.array([date(2017,4,24),date(2017,5,16),date(2017,6,6),date(2017,8,14)])\nchemodays = np.array([date(2018,1,26),date(2018,2,2),date(2018,2,9),date(2018,2,16),date(2018,2,26),date(2018,3,2),date(2018,3,19),date(2018,4,9),date(2018,5,1),date(2018,5,14),date(2018,6,18),date(2018,7,10),date(2018,8,6)])\n\nHR_df = xmltodf(e,\"HKQuantityTypeIdentifierHeartRate\",\"HeartRate\")\n#I dont think this is efficient yet...\na = HR_df['Create'].apply(lambda x: [x.days for x in x.date()-chemodays])\nHR_df['ChemoCycle'] = a.apply(lambda x: getDelta(x,0,21))\nHR_df['ChemoDays'] = a.apply(lambda x: getDelta(x,1,21))\n\nimport seaborn as sns\nplotx = HR_df[HR_df['ChemoDays']<=21]\nplt.rcParams['figure.figsize'] = 24, 8\nax = sns.boxplot(x=\"ChemoDays\", y=\"HeartRate\", hue=\"ChemoCycle\", data=plotx, palette=\"Set2\",notch=1,whis=0,width=0.75,showfliers=False)\nplt.ylim(65,130)\n#the next statement puts the chemodays variable as a rowname, we need to fix that\nplotx_med = plotx.groupby('ChemoDays').median()\n#this puts chemodays back as a column in the frame. I need to see if there is a way to prevent the effect\nplotx_med.index.name = 'ChemoDays'\nplotx_med.reset_index(inplace=True)\n\nsnsplot = sns.pointplot(x='ChemoDays', y=\"HeartRate\", data=plotx_med,color='Gray')", "Boxplots Using Seaborn", "import seaborn as sns\nsns.set(style=\"ticks\", palette=\"muted\", color_codes=True)\n\nsns.boxplot(x=\"Month\", y=\"HeartRate\", data=HR_df,whis=np.inf, color=\"c\")\n# Add in points to show each observation\nsnsplot = sns.stripplot(x=\"Month\", y=\"HeartRate\", data=HR_df,jitter=True, size=1, alpha=.15, color=\".3\", linewidth=0)\n\nhr_only = HR_df[['Create','HeartRate']]\nhr_only.tail()\n\nhr_only.to_csv('~/Downloads/stc_hr.csv')" ]
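"Aside (added sketch, not part of the original notebook): the date-comparison loop above can also be written with Python's built-in max() and a key function, using the same PyDrive file objects with their 'title' and 'createdDate' fields.", "# Hedged sketch: pick the newest export file with max() instead of a manual loop\nimport re\nfrom datetime import datetime\n\ndef created(f):\n    # parse the timestamp string PyDrive returns\n    return datetime.strptime(f['createdDate'], \"%Y-%m-%dT%H:%M:%S.%fZ\")\n\nexports = [f for f in file_list if re.search(r\"^export-*\\d*\\.zip\", f['title'])]\nif exports:\n    selection_id = max(exports, key=created)['id']"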
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
dtamayo/rebound
ipython_examples/TransitTimingVariations.ipynb
gpl-3.0
[ "Calculating Transit Timing Variations (TTV) with REBOUND\nThe following code finds the transit times in a two planet system. The transit times of the inner planet are not exactly periodic, due to planet-planet interactions.\nFirst, let's import the REBOUND and numpy packages.", "import rebound\nimport numpy as np", "Let's set up a coplanar two planet system.", "sim = rebound.Simulation()\nsim.add(m=1)\nsim.add(m=1e-5, a=1,e=0.1,omega=0.25)\nsim.add(m=1e-5, a=1.757)\nsim.move_to_com()", "We're now going to integrate the system forward in time. We assume the observer of the system is in the direction of the positive x-axis. We want to meassure the time when the inner planet transits. In this geometry, this happens when the y coordinate of the planet changes sign. Whenever we detect a change in sign between two steps, we try to find the transit time, which must lie somewhere within the last step, by bisection.", "N=174\ntransittimes = np.zeros(N)\np = sim.particles\ni = 0\nwhile i<N:\n y_old = p[1].y - p[0].y # (Thanks to David Martin for pointing out a bug in this line!)\n t_old = sim.t\n sim.integrate(sim.t+0.5) # check for transits every 0.5 time units. Note that 0.5 is shorter than one orbit\n t_new = sim.t\n if y_old*(p[1].y-p[0].y)<0. and p[1].x-p[0].x>0.: # sign changed (y_old*y<0), planet in front of star (x>0)\n while t_new-t_old>1e-7: # bisect until prec of 1e-5 reached\n if y_old*(p[1].y-p[0].y)<0.:\n t_new = sim.t\n else:\n t_old = sim.t\n sim.integrate( (t_new+t_old)/2.)\n transittimes[i] = sim.t\n i += 1\n sim.integrate(sim.t+0.05) # integrate 0.05 to be past the transit ", "Next, we do a linear least square fit to remove the linear trend from the transit times, thus leaving us with the transit time variations.", "A = np.vstack([np.ones(N), range(N)]).T\nc, m = np.linalg.lstsq(A, transittimes)[0]", "Finally, let us plot the TTVs.", "%matplotlib inline\nimport matplotlib.pyplot as plt\nfig = plt.figure(figsize=(10,5))\nax = plt.subplot(111)\nax.set_xlim([0,N])\nax.set_xlabel(\"Transit number\")\nax.set_ylabel(\"TTV [hours]\")\nplt.scatter(range(N), (transittimes-m*np.array(range(N))-c)*(24.*365./2./np.pi));" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
bjshaw/phys202-2015-work
project/NeuralNetworks.ipynb
mit
[ "Neural Networks\nThis project was created by Brian Granger. All content is licensed under the MIT License.\n\nIntroduction\nNeural networks are a class of algorithms that can learn how to compute the value of a function given previous examples of the functions output. Because neural networks are capable of learning how to compute the output of a function based on existing data, they generally fall under the field of Machine Learning.\nLet's say that we don't know how to compute some function $f$:\n$$ f(x) \\rightarrow y $$\nBut we do have some data about the output that $f$ produces for particular input $x$:\n$$ f(x_1) \\rightarrow y_1 $$\n$$ f(x_2) \\rightarrow y_2 $$\n$$ \\ldots $$\n$$ f(x_n) \\rightarrow y_n $$\nA neural network learns how to use that existing data to compute the value of the function $f$ on yet unseen data. Neural networks get their name from the similarity of their design to how neurons in the brain work.\nWork on neural networks began in the 1940s, but significant advancements were made in the 1970s (backpropagation) and more recently, since the late 2000s, with the advent of deep neural networks. These days neural networks are starting to be used extensively in products that you use. A great example of the application of neural networks is the recently released Flickr automated image tagging. With these algorithms, Flickr is able to determine what tags (\"kitten\", \"puppy\") should be applied to each photo, without human involvement.\nIn this case the function takes an image as input and outputs a set of tags for that image:\n$$ f(image) \\rightarrow {tag_1, \\ldots} $$\nFor the purpose of this project, good introductions to neural networks can be found at:\n\nThe Nature of Code, Daniel Shiffman.\nNeural Networks and Deep Learning, Michael Nielsen.\nData Science from Scratch, Joel Grus\n\nThe Project\nYour general goal is to write Python code to predict the number associated with handwritten digits. The dataset for these digits can be found in sklearn:", "%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom IPython.html.widgets import interact\n\nfrom sklearn.datasets import load_digits\ndigits = load_digits()\nprint(digits.data.shape)\n\ndef show_digit(i):\n plt.matshow(digits.images[i]);\n\ninteract(show_digit, i=(0,100));", "The actual, known values (0,1,2,3,4,5,6,7,8,9) associated with each image can be found in the target array:", "digits.target", "Here are some of the things you will need to do as part of this project:\n\nSplit the original data set into two parts: 1) a training set that you will use to train your neural network and 2) a test set you will use to see if your trained neural network can accurately predict previously unseen data.\nWrite Python code to implement the basic building blocks of neural networks. This code should be modular and fully tested. While you can look at the code examples in the above resources, your code should be your own creation and be substantially different. 
One way of ensuring your code is different is to make it more general.\nCreate appropriate data structures for the neural network.\nFigure out how to initialize the weights of the neural network.\nWrite code to implement forward and back propagation.\nWrite code to train the network with the training set.\n\nYour base question should be to get a basic version of your code working that can predict handwritten digits with an accuracy that is significantly better than that of random guessing.\nHere are some ideas of questions you could explore as your two additional questions:\n\nHow to specify, train and use networks with more hidden layers.\nThe best way to determine the initial weights.\nMaking it all fast to handle more layers and neurons per layer (%timeit and %%timeit).\nExplore different ways of optimizing the weights/output of the neural network.\nTackle the full MNIST benchmark of $10,000$ digits.\nHow different sigmoid function affect the results.\n\nImplementation hints\nThere are optimization routines in scipy.optimize that may be helpful.\nYou should use NumPy arrays and fast NumPy operations (dot) everywhere that is possible." ]
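"Aside (added sketch, not from the original project text): one way to do the first step, splitting the digits into a training and a test set. train_test_split is a standard scikit-learn helper; using it here is a suggestion, not part of the assignment.", "from sklearn.model_selection import train_test_split\nfrom sklearn.datasets import load_digits\n\ndigits = load_digits()\nX_train, X_test, y_train, y_test = train_test_split(\n    digits.data, digits.target, test_size=0.25, random_state=0)\nprint(X_train.shape, X_test.shape)"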
[ "markdown", "code", "markdown", "code", "markdown" ]
angelmtenor/data-science-keras
student_admissions.ipynb
mit
[ "Student Admissions\nPredicting student admissions to graduate school at UCLA based on GRE Scores, GPA Scores, and class rank \nSupervised Learning. Classification\nDataset from http://www.ats.ucla.edu/\nBased on the Predicting Student Admissions mini project of the Udacity's Artificial Intelligence Nanodegree", "%matplotlib inline\n\nimport os\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport keras\nimport helper\n\nhelper.reproducible(seed=0)\nsns.set()", "Load and prepare the data", "data_path = 'data/student_admissions.csv'\ndf = pd.read_csv(data_path)\ndf.head()\n\ndf.describe()\n\ntargets = ['admit']\nfeatures = ['gre', 'gpa', 'rank']\n\ncategorical = ['admit', 'rank']\nnumerical = ['gre', 'gpa']\n\n# NaN values\ndf.fillna(df[numerical].median(), inplace=True) # NaN from numerical feature replaced by median\ndf.dropna(axis='index', how='any', inplace = True) # NaN from categorical feature: delete row\n\ndf_visualize = df # copy for model visualization\ndf.shape", "Visualize data", "def plot_data(dataf, hue=\"admit\"):\n \"\"\" Custom plot for this project \"\"\"\n g = sns.FacetGrid(dataf, col=\"rank\", hue=hue)\n g = (g.map(plt.scatter, \"gre\", \"gpa\", edgecolor=\"w\").add_legend())\n return g\n \nplot_data(df)", "Create dummy variables", "dummies = pd.get_dummies(df['rank'], prefix='rank', drop_first=False)\ndf = pd.concat([df, dummies], axis=1)\ndf = df.drop('rank', axis='columns')\ndf.head()", "Scale numerical features", "# Store scalings in a dictionary so we can convert back later\nscaled_features = {}\nfor f in numerical:\n mean, std = df[f].mean(), df[f].std()\n scaled_features[f] = [mean, std]\n df.loc[:, f] = (df[f] - mean)/std\ndf.head() ", "Split the data into training and test sets", "from sklearn.model_selection import train_test_split\n\ntrain, test = train_test_split(df, test_size=0.2, random_state=9)\n\n# Separate the data into features and targets (x=features, y=targets)\nx_train, y_train = train.drop(targets, axis=1).values, train[targets].values\nx_test, y_test = test.drop(targets, axis=1).values, test[targets].values", "One-hot encoding the target", "num_classes = 2\ny_train = keras.utils.to_categorical(y_train, num_classes)\ny_test = keras.utils.to_categorical(y_test, num_classes)\n\nprint(\"Training set: \\t x-shape = {} \\t y-shape = {}\".format(x_train.shape ,y_train.shape))\nprint(\"Test set: \\t x-shape = {} \\t y-shape = {}\".format(x_test.shape ,y_test.shape))", "Deep Neural Network", "from keras.models import Sequential\nfrom keras.layers.core import Dense, Dropout\n\ninput_nodes = x_train.shape[1]*8\nweights = keras.initializers.RandomNormal(stddev=0.1)\n\nmodel = Sequential()\nmodel.add(Dense(input_nodes, input_dim=x_train.shape[1], activation='relu'))\nmodel.add(Dropout(.2))\nmodel.add(Dense(2,activation='softmax'))\nmodel.summary()\n\nmodel.compile(loss = 'binary_crossentropy', optimizer='adam', metrics=['accuracy'])\n\nprint('\\nTraining ....')\ncallbacks = [keras.callbacks.EarlyStopping(monitor='val_loss', patience=2, verbose=0)]\n%time history = model.fit(x_train, y_train, epochs=1000, batch_size=64, verbose=0, \\\n validation_split=0.25, callbacks=callbacks)\nhelper.show_training(history)\n\nmodel_path = os.path.join(\"models\", \"student_admissions.h5\")\nmodel.save(model_path)\nprint(\"\\nModel saved at\",model_path)", "Evaluate the model", "model = keras.models.load_model(model_path)\nprint(\"Model loaded:\", model_path)\n\nscore = model.evaluate(x_test, y_test, 
verbose=0)\nprint(\"\\nTest Accuracy: {:.2f}\".format(score[1]))", "Visualize the model", "predictions = model.predict(df.drop(targets, axis=1).values)\npredictions = np.argmax(predictions, axis=1)\n\ndf_visualize['predicted'] = predictions \n\nplot_data(df_visualize).fig.suptitle('Actual')\nplot_data(df_visualize, hue='predicted').fig.suptitle('Model')", "A bit of overfitting can sometimes be seen for rank-2 students. More information can be extracted by looking at the predicted probabilities instead of the binary accepted-rejected result shown here; a sketch of that view follows below.\nMake predictions", "def predict_admission(student):\n    # student_data: {id: [gre, gpa, rank1, rank2, rank3, rank4]}\n\n    print('Admission Probabilities: \\n')\n\n    for key, value in student.items():\n        p_name = key\n        single_data = value\n\n        # normalize data\n        for idx, f in enumerate(numerical):\n            single_data[idx] = (single_data[idx] - scaled_features[f][0]) / scaled_features[f][1]\n\n        # make prediction\n        single_pred = model.predict(np.array([single_data]))\n        print('{}: \\t {:.0f}%\\n'.format(p_name, single_pred[0, 1] * 100))\n\n\ndf_visualize.describe()\n\n# student_data: {id: [gre, gpa, rank1, rank2, rank3, rank4]}\nnew_students = {'High scores rank-1': [730, 3.83, 1, 0, 0, 0],\n                'High scores rank-2': [730, 3.83, 0, 1, 0, 0],\n                'High scores rank-3': [730, 3.83, 0, 0, 1, 0],\n                'High scores rank-4': [730, 3.83, 0, 0, 0, 1], \n                'Avg scores rank-1': [588, 3.4, 1, 0, 0, 0],\n                'Avg scores rank-2': [588, 3.4, 0, 1, 0, 0],\n                'Avg scores rank-3': [588, 3.4, 0, 0, 1, 0],\n                'Avg scores rank-4': [588, 3.4, 0, 0, 0, 1],\n                }\npredict_admission(new_students)", "The predictions confirm that rank is the most influential feature in determining admission, which seems reasonable. The absolute grades of the students are more relevant for rank-1 students (Q1)." ]
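"Aside (added sketch, not part of the original notebook): the probability view mentioned above, reusing the trained model and the prepared frame; the column name p_admit is illustrative.", "# keep the raw admission probability instead of only the argmax class\nproba = model.predict(df.drop(targets, axis=1).values)[:, 1]\ndf_visualize['p_admit'] = proba\ndf_visualize[['admit', 'predicted', 'p_admit']].head()"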
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
crystalzhaizhai/cs207_yi_zhai
lectures/L18/L18.ipynb
mit
[ "Lecture 18\nWednesday, November 8th, 2017\nDatabases with SQlite\nSQLite Exercises\nToday you will work with the candidates and contributors datasets to create a database in Python using SQLite.\nThe exercises will consist of a sequence of steps to help illustrate basic commands.\n<a id='deliverables'></a>\nExercise Deliverables\n\nCreate a Jupyter notebook called Exercises-Final.ipynb inside the L18 directory. This is the one we will grade.\nFor each step in this lecture, there were instructions labeled \"Do the following:\". Put all the code from those instructions in a single Jupyter notebook cell. It should look like a Python script. You must comment where appropriate to demonstrate that you understand what you are doing.\nSave and close your database. Be sure to upload your database with the lecture exercises. You must name your database L18DB.sqlite.\n\nTable of Contents\nSetting the Stage\nStep 1\nInterlude: Not required but highly recommended.\nStep 2\nStep 3\nStep 4\nStep 5\nStep 6\nStep 7\nStep 8\n\n<a id='setting_the_stage'></a>\nSetting the Stage\nYou should import sqlite3 again like last time.", "import sqlite3", "We will also use a basic a pandas feature to display tables in the database. Although this lecture isn't on pandas, I will still have you use it a little bit.", "import pandas as pd\npd.set_option('display.width', 500)\npd.set_option('display.max_columns', 100)\npd.set_option('display.notebook_repr_html', True)", "Now we create the tables in the database (just like last time).", "db = sqlite3.connect('L18DB_demo.sqlite')\ncursor = db.cursor()\ncursor.execute(\"DROP TABLE IF EXISTS candidates\")\ncursor.execute(\"DROP TABLE IF EXISTS contributors\")\ncursor.execute(\"PRAGMA foreign_keys=1\")\n\ncursor.execute('''CREATE TABLE candidates (\n id INTEGER PRIMARY KEY NOT NULL, \n first_name TEXT, \n last_name TEXT, \n middle_init TEXT, \n party TEXT NOT NULL)''')\n\ndb.commit() # Commit changes to the database\n\ncursor.execute('''CREATE TABLE contributors (\n id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, \n last_name TEXT, \n first_name TEXT, \n middle_name TEXT, \n street_1 TEXT, \n street_2 TEXT, \n city TEXT, \n state TEXT, \n zip TEXT, \n amount REAL, \n date DATETIME, \n candidate_id INTEGER NOT NULL, \n FOREIGN KEY(candidate_id) REFERENCES candidates(id))''')\n\ndb.commit()", "<a id='step_1'></a>\nStep 1\nRead candidates.txt and contributors.txt and insert their values into the respective tables.", "with open (\"candidates.txt\") as candidates:\n next(candidates) # jump over the header\n for line in candidates.readlines():\n cid, first_name, last_name, middle_name, party = line.strip().split('|')\n vals_to_insert = (int(cid), first_name, last_name, middle_name, party)\n cursor.execute('''INSERT INTO candidates \n (id, first_name, last_name, middle_init, party)\n VALUES (?, ?, ?, ?, ?)''', vals_to_insert)\n\nwith open (\"contributors.txt\") as contributors:\n next(contributors)\n for line in contributors.readlines():\n cid, last_name, first_name, middle_name, street_1, street_2, \\\n city, state, zip_code, amount, date, candidate_id = line.strip().split('|')\n vals_to_insert = (last_name, first_name, middle_name, street_1, street_2, \n city, state, int(zip_code), amount, date, candidate_id)\n cursor.execute('''INSERT INTO contributors (last_name, first_name, middle_name, \n street_1, street_2, city, state, zip, amount, date, candidate_id) \n VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)''', vals_to_insert)", "<a id='interlude'></a>\nInterlude\nNow that you have values 
in the tables of the database, it would be convenient to be able to visualize those tables in some way. We'll write a little helper function to accomplish this.", "def viz_tables(cols, query):\n    q = cursor.execute(query).fetchall()\n    framelist = []\n    for i, col_name in enumerate(cols):\n        framelist.append((col_name, [col[i] for col in q]))\n    return pd.DataFrame.from_items(framelist)", "Here's how we can use our helper function. It gives a pretty nice visualization of our table. You should do the same thing with the contributors table.", "candidate_cols = [col[1] for col in cursor.execute(\"PRAGMA table_info(candidates)\")]\nquery = '''SELECT * FROM candidates'''\nviz_tables(candidate_cols, query)", "<a id='step_2'></a>\nStep 2: Various Queries\nWe can query our database for entries with certain characteristics. For example, we can query the candidates table for entries whose middle initial field is not empty.", "query = '''SELECT * FROM candidates WHERE middle_init <> \"\"'''\nviz_tables(candidate_cols, query)", "We can also see how many entries satisfy the query:", "print(\"{} candidates have a middle initial.\".format(viz_tables(candidate_cols, query).shape[0]))", "Do the following queries:\n\nDisplay the contributors where the state is \"PA\"\nDisplay the contributors where the amount contributed is greater than $\\$1000.00$.\nDisplay the contributors from \"UT\" where the amount contributed is greater than $\\$1000.00$.\nDisplay the contributors who didn't list their state\nHint: Match state to the empty string\nDisplay the contributors from \"WA\" and \"PA\"\nHint: You will need to use IN (\"WA\", \"PA\") in your SELECT statement.\nDisplay the contributors who contributed between $\\$100.00$ and $\\$200.00$.\nHint: You can use the BETWEEN 100.00 and 200.00 clause.\n\n<a id='step_3'></a>\nStep 3: Sorting\nIt could be beneficial to sort by one of the attributes in the database. The following cell contains a basic sorting demo.", "query = '''SELECT * FROM candidates ORDER BY id DESC'''\nviz_tables(candidate_cols, query)", "Do the following sorts on the contributors table:\n\nSort the contributors table by last_name.\nSort by the amount in descending order where amount is restricted to be between $\\$1000.00$ and $\\$5000.00$.\nSort the contributors who donated between $\\$1000.00$ and $\\$5000.00$ by candidate_id and then by amount in descending order.\nHint: Multiple orderings can be accomplished by separating requests after ORDER BY with commas.\ne.g. ORDER BY amount ASC, last_name DESC\n\n<a id='step_4'></a>\nStep 4: Selecting Columns\nSo far, we've been selecting all columns from a table (i.e. SELECT * FROM). Often, we just want to select specific columns (e.g. SELECT amount FROM).", "query = '''SELECT last_name, party FROM candidates'''\nviz_tables(['last_name', 'party'], query)", "Using the DISTINCT clause, you remove duplicate rows.", "query = '''SELECT DISTINCT party FROM candidates'''\nviz_tables(['party'], query)", "Do the following:\n\nGet the first and last name of contributors. Make sure each row has distinct values.\n\n<a id='step_5'></a>\nStep 5: Altering Tables\nThe ALTER clause allows us to modify tables in our database. Here, we add a new column to our candidates table called full_name.", "cursor.execute('''ALTER TABLE candidates ADD COLUMN full_name TEXT''')\ncandidate_cols = [col[1] for col in cursor.execute(\"PRAGMA table_info(candidates)\")]\nviz_tables(candidate_cols, '''SELECT * FROM candidates''')", "What if we want to rename or delete a column? 
It can't be done with SQLite with a single command. We need to follow some roundabout steps (see SQLite ALTER TABLE). We won't consider this case at the moment.\nFor now, let's put a few commands together to populate the full_name column.", "candidate_cols = [col[1] for col in cursor.execute(\"PRAGMA table_info(candidates)\")] # regenerate columns with full_name\nquery = '''SELECT id, last_name, first_name FROM candidates''' # Select a few columns\nfull_name_and_id = [(attr[1] + \", \" + attr[2], attr[0]) for attr in cursor.execute(query).fetchall()] # List of tuples: (full_name, id)\n\nupdate = '''UPDATE candidates SET full_name = ? WHERE id = ?''' # Update the table\nfor rows in full_name_and_id:\n    cursor.execute(update, rows)\n\nquery = '''SELECT * FROM candidates'''\nviz_tables(candidate_cols, query)", "Here's another update, this time on an existing column.", "update = '''UPDATE candidates SET full_name = \"Eventual Winner\" WHERE last_name = \"Obama\"'''\ncursor.execute(update)\nupdate = '''UPDATE candidates SET full_name = \"Eventual Loser\" WHERE last_name = \"McCain\"'''\ncursor.execute(update)\nviz_tables(candidate_cols, query)", "Do the following:\n\nAdd a new column to the contributors table called full_name. The value in that column should be in the form last_name, first_name.\nChange the value in the full_name column to the string \"Too Much\" if someone donated more than $\\$1000.00$.\n\n<a id='step_6'></a>\nStep 6: Aggregation\nYou can perform some nice operations on the values in the database. For example, you can compute the maximum, minimum, and sum of a set. You can also count the number of items in a given set. Here's a little example. You can do the rest.", "contributor_cols = [col[1] for col in cursor.execute(\"PRAGMA table_info(contributors)\")] # You've already done this part. I just need to do it here b/c I haven't yet.\nfunction = '''SELECT *, MAX(amount) AS max_amount FROM contributors'''\ncursor.execute(function)", "Do the following:\n\nCount how many donations there were above $\\$1000.00$.\nCalculate the average donation.\nCalculate the average contribution from each state and display in a table.\nHint: Use code that looks like: \n\npython\n \"SELECT state,SUM(amount) FROM contributors GROUP BY state\"\n<a id='step_7'></a>\nStep 7: DELETE\nWe have already noted that SQLite can't drop columns in a straightforward manner. However, it can delete rows quite simply. Here's the syntax:\npython\ndeletion = '''DELETE FROM table_name WHERE condition'''\nDo the following:\n\nDelete rows in the contributors table with last name \"Ahrens\".\n\n<a id='step_8'></a>\nStep 8: LIMIT\nThe LIMIT clause offers convenient functionality. It allows you to constrain the number of rows returned by your query. It shows up in many guises.", "query = '''SELECT * FROM candidates LIMIT 3'''\nviz_tables(candidate_cols, query)\n\nquery = '''SELECT * FROM candidates LIMIT 4 OFFSET 5'''\nviz_tables(candidate_cols, query)\n\nquery = '''SELECT * FROM candidates ORDER BY last_name LIMIT 4 OFFSET 5'''\nviz_tables(candidate_cols, query)", "Do the following:\n\nQuery and display the ten most generous donors.\nQuery and display the ten least generous donors who donated a positive amount of money (since the data we have has some negative numbers in it...).\n\nSave\nDon't forget to save all of these changes to your database using db.commit(). Before closing shop, be sure to close the database connection with db.close()." ]
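"Aside (added sketch, one possible answer to the per-state aggregation task above; it reuses the viz_tables helper defined earlier).", "query = '''SELECT state, AVG(amount) AS avg_amount FROM contributors GROUP BY state'''\nviz_tables(['state', 'avg_amount'], query)"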
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
CalPolyPat/Python-Workshop
Python Workshop/IPython Widgets.ipynb
mit
[ "IPython Widgets\nIPython widgets are tools that give us interactivity within our analysis. This is most useful when looking at a complication plot and trying to figure out how it depends on a single parameter. You could make 20 different plots and vary the parameter a bit each time, or you could use an IPython slider widget. Let's first import the widgets.", "import IPython.html.widgets as widg\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.integrate import odeint\n%matplotlib inline\n", "The object we will learn about today is called interact. Let's find out how to use it.", "widg.interact?", "We see that we need a function with parameters that we want to vary, let's make one. We will examine the lorenz equations. They exhibit chaotic behaviour and are quite beautiful.", "def lorentz_derivs(yvec, t, sigma, rho, beta):\n \"\"\"Compute the the derivatives for the Lorentz system at yvec(t).\"\"\"\n dx = sigma*(yvec[1]-yvec[0])\n dy = yvec[0]*(rho-yvec[2])-yvec[1]\n dz = yvec[0]*yvec[1]-beta*yvec[2]\n return [dx,dy,dz]\ndef solve_lorentz(ic, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):\n \"\"\"Solve the Lorenz system for a single initial condition.\n \n Parameters\n ----------\n ic : array, list, tuple\n Initial conditions [x,y,z].\n max_time: float\n The max time to use. Integrate with 250 points per time unit.\n sigma, rho, beta: float\n Parameters of the differential equation.\n \n Returns\n -------\n soln : np.ndarray\n The array of the solution. Each row will be the solution vector at that time.\n t : np.ndarray\n The array of time points used.\n \n \"\"\"\n t = np.linspace(0,max_time, max_time*250)\n return odeint(lorentz_derivs, ic, t, args = (sigma, rho, beta)), t\n\ndef plot_lorentz(N=1, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):\n \"\"\"Plot [x(t),z(t)] for the Lorenz system.\n \n Parameters\n ----------\n N : int\n Number of initial conditions and trajectories to plot.\n max_time: float\n Maximum time to use.\n sigma, rho, beta: float\n Parameters of the differential equation.\n \"\"\"\n f = plt.figure(figsize=(15, N*8))\n np.random.seed(1)\n colors = plt.cm.hot(np.linspace(0,1,N))\n for n in range(N):\n plt.subplot(N,1,n)\n x0 = np.random.uniform(-15, 15)\n y0 = np.random.uniform(-15, 15)\n z0 = np.random.uniform(-15, 15)\n soln, t = solve_lorentz([x0,y0,z0], max_time, sigma, rho, beta)\n plt.plot(soln[:,0], soln[:, 2], color=colors[n])\n\nplot_lorentz()\n\nwidg.interact(plot_lorentz, N=1, max_time=(0,10,.1), sigma=(0,10,.1), rho=(0,100, .1), beta=(0,10,.1))", "Okay! So now you are ready to analyze the world! Just kidding. Let's make a simpler example. Consider the best fitting straight line through a set of points. When a curve fitter fits a straight line, it tries to minimize the sum of the \"errors\" from all the data points and the fit line. Mathematically this is represented as\n$$\\sum_{i=0}^{n}(f(x_i)-y_i)^2$$\nNow, $f(x_i)=mx_i+b$. Your task is to write a function that plots a line and prints out the error, make an interact that allows you to vary the m and b parameters, then vary those parameters until you find the smallest error.", "#Make a function that takes two parameters m and b and prints the total error and plots the the line and the data.\n#Use this x and y into your function to use as the data\nx=np.linspace(0,1,10)\ny=(np.random.rand(10)+4)*x+5\n\n#Make an interact as above that allows you to vary m and b.\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ajdawson/python_for_climate_scientists
course_content/solutions/numpy_exercise_2.ipynb
gpl-3.0
[ "Exercise: trapezoidal integration\nIn this exercise, you are tasked with implementing the simple trapezoid rule\nformula for numerical integration. If we want to compute the definite integral\n$$\n \\int_{a}^{b}f(x)dx\n$$\nwe can partition the integration interval $[a,b]$ into smaller subintervals. We then approximate the area under the curve for each subinterval by calculating the area of the trapezoid created by linearly interpolating between the two function values at each end of the subinterval:\n\nFor a pre-computed $y$ array (where $y = f(x)$ at discrete samples) the trapezoidal rule equation is:\n$$\n \\int_{a}^{b}f(x)dx\\approx\\frac{1}{2}\\sum_{i=1}^{n}\\left(x_{i}-x_{i-1}\\right)\\left(y_{i}+y_{i-1}\\right).\n$$\nIn pure python, this can be written as:\ndef trapz_slow(x, y):\n area = 0.\n for i in range(1, len(x)):\n area += (x[i] - x[i-1]) * (y[i] + y[i-1])\n return area / 2\n\nExercise 2\nPart 1\nCreate two arrays $x$ and $y$, where $x$ is a linearly spaced array in the interval $[0, 3]$ of length 10, and $y$ represents the function $f(x) = x^2$ sampled at $x$.", "import numpy as np\n\nx = np.linspace(0, 3, 10)\ny = x ** 2\n\nprint(x)\nprint(y)", "Part 2\nUse indexing (not a for loop) to find the 9 values representing $y_{i}+y_{i-1}$ for $i$ between 1 and 10.\nHint: What indexing would be needed to get all but the last element of the 1d array y. Similarly what indexing would be needed to get all but the first element of a 1d array.", "y_roll_sum = y[:-1] + y[1:]\nprint(y_roll_sum)", "Part 3\nWrite a function trapz(x, y), that applies the trapezoid formula to pre-computed values, where x and y are 1-d arrays. The function should not use a for loop.", "def trapz(x, y):\n return 0.5 * np.sum((x[1:] - x[:-1]) * (y[:-1] + y[1:]))", "Part 4\nVerify that your function is correct by using the arrays created in #1 as input to trapz. Your answer should be a close approximation of $\\int_0^3 x^2$ which is $9$.", "trapz(x, y)", "Part 5 (extension)\nnumpy and scipy.integrate provides many common integration schemes. Find the documentation for NumPy's own version of the trapezoidal integration scheme and check its result with your own:", "print(np.trapz(y, x))", "Part 6 (extension)\nWrite a function trapzf(f, a, b, npts=100) that accepts a function f, the endpoints a and b and the number of samples to take npts. Sample the function uniformly at these\npoints and return the value of the integral.\nUse the trapzf function to identify the minimum number of sampling points needed to approximate the integral $\\int_0^3 x^2$ with an absolute error of $<=0.0001$. (A loop is necessary here)", "def trapzf(f, a, b, npts=100):\n x = np.linspace(a, b, npts)\n y = f(x)\n return trapz(x, y)\n\ndef x_squared(x):\n return x ** 2\n\nabs_err = 1.0\nn_samples = 0\nexpected = 9\nwhile abs_err > 0.0001:\n n_samples += 1\n integral = trapzf(x_squared, 0, 3, npts=n_samples)\n abs_err = np.abs(integral - 9)\n\nprint('Minimum samples for absolute error less than or equal to 0.0001:', n_samples)\n " ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.23/_downloads/00e78bba5d10188fcf003ef05e32a6f7/decoding_time_generalization_conditions.ipynb
bsd-3-clause
[ "%matplotlib inline", "Decoding sensor space data with generalization across time and conditions\nThis example runs the analysis described in :footcite:KingDehaene2014. It\nillustrates how one can\nfit a linear classifier to identify a discriminatory topography at a given time\ninstant and subsequently assess whether this linear model can accurately\npredict all of the time samples of a second set of conditions.", "# Authors: Jean-Remi King <jeanremi.king@gmail.com>\n# Alexandre Gramfort <alexandre.gramfort@inria.fr>\n# Denis Engemann <denis.engemann@gmail.com>\n#\n# License: BSD (3-clause)\n\nimport matplotlib.pyplot as plt\n\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.linear_model import LogisticRegression\n\nimport mne\nfrom mne.datasets import sample\nfrom mne.decoding import GeneralizingEstimator\n\nprint(__doc__)\n\n# Preprocess data\ndata_path = sample.data_path()\n# Load and filter data, set up epochs\nraw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevents_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\nraw = mne.io.read_raw_fif(raw_fname, preload=True)\npicks = mne.pick_types(raw.info, meg=True, exclude='bads') # Pick MEG channels\nraw.filter(1., 30., fir_design='firwin') # Band pass filtering signals\nevents = mne.read_events(events_fname)\nevent_id = {'Auditory/Left': 1, 'Auditory/Right': 2,\n 'Visual/Left': 3, 'Visual/Right': 4}\ntmin = -0.050\ntmax = 0.400\n# decimate to make the example faster to run, but then use verbose='error' in\n# the Epochs constructor to suppress warning about decimation causing aliasing\ndecim = 2\nepochs = mne.Epochs(raw, events, event_id=event_id, tmin=tmin, tmax=tmax,\n proj=True, picks=picks, baseline=None, preload=True,\n reject=dict(mag=5e-12), decim=decim, verbose='error')", "We will train the classifier on all left visual vs auditory trials\nand test on all right visual vs auditory trials.", "clf = make_pipeline(StandardScaler(), LogisticRegression(solver='lbfgs'))\ntime_gen = GeneralizingEstimator(clf, scoring='roc_auc', n_jobs=1,\n verbose=True)\n\n# Fit classifiers on the epochs where the stimulus was presented to the left.\n# Note that the experimental condition y indicates auditory or visual\ntime_gen.fit(X=epochs['Left'].get_data(),\n y=epochs['Left'].events[:, 2] > 2)", "Score on the epochs where the stimulus was presented to the right.", "scores = time_gen.score(X=epochs['Right'].get_data(),\n y=epochs['Right'].events[:, 2] > 2)", "Plot", "fig, ax = plt.subplots(1)\nim = ax.matshow(scores, vmin=0, vmax=1., cmap='RdBu_r', origin='lower',\n extent=epochs.times[[0, -1, 0, -1]])\nax.axhline(0., color='k')\nax.axvline(0., color='k')\nax.xaxis.set_ticks_position('bottom')\nax.set_xlabel('Testing Time (s)')\nax.set_ylabel('Training Time (s)')\nax.set_title('Generalization across time and condition')\nplt.colorbar(im, ax=ax)\nplt.show()", "References\n.. footbibliography::" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
zoofIO/flexx-notebooks
flexx_tutorial_app.ipynb
bsd-3-clause
[ "Tutorial for flexx.app - connecting to the browser", "from flexx import flx", "In normal operation, one uses flx.launch() to fire up a browser (or desktop app) to run the JavaScript in. This is followed by flx.run() (or flx.start() for servers) to enter Flexx' main loop.\nIn the notebook, however, there already is a browser. To tell Flexx that we're in the notebook, use flx.init_notebook() at the start of your notebook. Since Flexx's event system is based on asyncio, we need to \"activate\" asyncio as well.", "%gui asyncio\nflx.init_notebook()\n\nclass MyComponent(flx.JsComponent):\n\n foo = flx.StringProp('', settable=True)\n \n @flx.reaction('foo')\n def on_foo(self, *events):\n if self.foo:\n window.alert('foo is ' + self.foo, + len(events))\n\nm = MyComponent()\n\nm.set_foo('helo')", "Let's use an example model:", "from flexxamples.testers.find_prime import PrimeFinder\np = PrimeFinder()\n\np.find_prime_py(2000)\n\np.find_prime_js(2000) # Result is written to JS console, open F12 to see it" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
jealdana/Open-Notebooks
Python Hacks/PythonHacks.ipynb
gpl-3.0
[ "Web scraping\n\n\nPython Wars\n\n\nDownload of courses' files", "import os\nimport requests\n\npath = \".\"\n\nfor i in range(1,15):\n print('Downloading week %s...' %i )\n URL = \"https://engineering.purdue.edu/~ee562/wk\"+str(i)+\".pdf\"\n res = requests.get(URL)\n webFile = open(os.path.join(path, os.path.basename(URL)), 'w+')\n for chunk in res.iter_content(2^25): # 2^11 = 2 KB, 2^21 = 2 MB, 2^25 = 32 MB \n webFile.write(chunk)\n webFile.close()", "Jupyter\nJupyter - Automatic creation of html and py files\n\nSet up Jupyter to create html and py files when saving Reference", "% jupyter notebook --generate-config\nifile = open(\"~/.jupyter/jupyter_notebook_config.py\")\nc = get_config()\n### If you want to auto-save .html and .py versions of your notebook:\n# modified from: https://github.com/ipython/ipython/issues/8009\nimport os\nfrom subprocess import check_call\n\ndef post_save(model, os_path, contents_manager):\n \"\"\"post-save hook for converting notebooks to .py scripts\"\"\"\n if model['type'] != 'notebook':\n return # only do this for notebooks\n d, fname = os.path.split(os_path)\n check_call(['jupyter', 'nbconvert', '--to', 'script', fname], cwd=d)\n check_call(['jupyter', 'nbconvert', '--to', 'html', fname], cwd=d)\nc.FileContentsManager.post_save_hook = post_save", "Misc\nGetting list of files and subdirectories Reference link", "\"\"\"Solution top 1\"\"\"\nimport os\npath = \"/Users/jesusenriquealdanasigona/Documents/Documents - Jesus’s Mac mini/Github/nanoHUB_roles/nanohub/Literature support/1 Escience\"\narr_txt = [filename for filename in os.listdir(path) if (\".pdf\") in filename] \n\narr_txt[:5]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
JeffAbrahamson/MLWeek
practicum/03_sklearn/sklearn-intro.ipynb
gpl-3.0
[ "Scikit Learn et données\nScikit-learn propose quelques ensembles de données, notamment iris et digits (classification) et le boston house prices dataset (regression).\nExercice : en trouvez d'autres...", "import numpy as np\nimport scipy as sp\nfrom sklearn import datasets\n\niris = datasets.load_iris()\ndigits = datasets.load_digits()\nboston = datasets.load_boston()", "Même avant scikit-learn\nLes libraries numpy et scipy ont plein de bonnes choses dedans. Explorez-les un peu.\nsklearn.datasets\nUn dataset ressemble à un dict. Explorez les membres suivants (e.g., iris.DESCR) :\n* data\n* target\n* feature_names\n* DESCR\nPuis utilisez ce que vous avez appris dans la module pandas pour explorer les données elles-même.\n<img src=\"petal_sepal_label.png\">\nEn anglais (pour correspondre aux noms des fonctions) : \"We fit an estimator to the data to predict the classes to which unseen samples belong\". Donc, un estimator implemente les méthode fit(X, y) et predit(T).\nLe constructeur d'un estimateur accepte les paramètes du modèle.\nIl est également possible de changer les paramètes après création.", "from sklearn import svm\n\nmodel = svm.SVC(gamma=0.002, C=100.)\nprint(model.gamma)\nmodel.set_params(gamma=.001)\nprint(model.gamma)\nmodel.fit(digits.data[:-1], digits.target[:-1])\nmodel.predict([digits.data[-1]])", "Nous pouvons regarder l'image.\n\nQu'est-ce qui est l'effet de cmap?", "import pylab as pl\n%matplotlib inline\n\npl.imshow(digits.images[-1], cmap=pl.cm.gray_r)", "À savoir (mais pour un autre jour) :\n* pickle marche\n* sklearn.externals.joblib est parfois plus efficace\nUn estimator prend un ensemble de données, typiquement un array de dimension 2 (np.ndarray, cf. .shape).\nRegardons les iris :\n* Il y a combien de classes d'iris?\n* Il y a combien de vecteurs dans le training data?\n* Il y a combien de dimensions?", "iris = datasets.load_iris()\niris_X = iris.data\niris_y = iris.target", "Le classifieur le plus simple imagineable s'appelle kNN. Avec scikit-learn, c'est facile. (Visualisaton à suivre.)\nLe nombre de dimensions peut monter très vite, ce qui pose des problèmes pour kNN.\n* Il y a combien de point sur une lattice espacés de $1/n$ en dimension 1, 2, 3, ..., n ?\n* Qu'est-ce qui est la distance entre 0 et 1 (les vecteurs des coins opposés) dans $[0,1]^d$?", "# Split iris data in train and test data\n# A random permutation, to split the data randomly\nnp.random.seed(0)\nindices = np.random.permutation(len(iris_X))\niris_X_train = iris_X[indices[:-10]]\niris_y_train = iris_y[indices[:-10]]\niris_X_test = iris_X[indices[-10:]]\niris_y_test = iris_y[indices[-10:]]\n# Create and fit a nearest-neighbor classifier\nfrom sklearn.neighbors import KNeighborsClassifier\nknn = KNeighborsClassifier()\nknn.fit(iris_X_train, iris_y_train) \nprint(knn.predict(iris_X_test))\nprint(iris_y_test)\nknn.score(iris_X_test, iris_y_test)", "La régression logistique est un algorithm important de classification dans l'apprentissage. 
Here it is on the same data:", "from sklearn import linear_model\n\nlogistic = linear_model.LogisticRegression(C=1e5)\nlogistic.fit(iris_X_train, iris_y_train)\nprint(logistic.predict(iris_X_test))\nprint(iris_y_test)\nlogistic.score(iris_X_test, iris_y_test)", "Exercise:\n* Why are the scores the same in the two previous examples?\n* What is the score for?", "scores = []\nfor k in range(10):\n    indices = np.random.permutation(len(iris_X))\n    iris_X_train = iris_X[indices[:-10]]\n    iris_y_train = iris_y[indices[:-10]]\n    iris_X_test = iris_X[indices[-10:]]\n    iris_y_test = iris_y[indices[-10:]]\n    \n    knn = KNeighborsClassifier()\n    knn.fit(iris_X_train, iris_y_train) \n    scores.append(knn.score(iris_X_test, iris_y_test))\nprint(scores)\n\nX_digits = digits.data\ny_digits = digits.target\nsvc = svm.SVC(C=1, kernel='linear')\n\nN = 10\nX_folds = np.array_split(X_digits, N)\ny_folds = np.array_split(y_digits, N)\nscores = list()\nfor k in range(N):\n    # We use 'list' to copy, in order to 'pop' later on\n    X_train = list(X_folds)\n    X_test = X_train.pop(k)\n    X_train = np.concatenate(X_train)\n    y_train = list(y_folds)\n    y_test = y_train.pop(k)\n    y_train = np.concatenate(y_train)\n    scores.append(svc.fit(X_train, y_train).score(X_test, y_test))\nscores", "What we just did is called \"cross validation\". It can be done more easily:", "from sklearn import cross_validation\n\nk_fold = cross_validation.KFold(n=6, n_folds=3)\nfor train_indices, test_indices in k_fold:\n    print('Train: %s | test: %s' % (train_indices, test_indices))\n\nkfold = cross_validation.KFold(len(X_digits), n_folds=N)\n[svc.fit(X_digits[train], y_digits[train]).score(\n    X_digits[test], y_digits[test])\n for train, test in kfold]\n\ncross_validation.cross_val_score(\n    svc, X_digits, y_digits, cv=kfold, n_jobs=-1)", "In cross-validation, bigger is better.\nAlso worth a look:\n* KFold\n* StratifiedKFold\n* LeaveOneOut\n* LeaveOneLabelOut\nEstimating a parameter\nWe would like to find which value of the parameter $C$ gives good performance from an SVM with a linear kernel. For now, we are not discussing SVMs or kernels: they are simply classifiers. What matters here is that there is a parameter $C$ that affects the quality of our results.\nIt is $C$ that controls the separator: hard margin (large $C$) or soft margin (small $C>0$).", "import numpy as np\nfrom sklearn import cross_validation, datasets, svm\n\ndigits = datasets.load_digits()\nX = digits.data\ny = digits.target\n\nsvc = svm.SVC(kernel='linear')\nC_s = np.logspace(-10, 0, 10)\n\nscores = list()\nscores_std = list()\nfor C in C_s:\n    svc.C = C\n    this_scores = cross_validation.cross_val_score(svc, X, y, n_jobs=1)\n    scores.append(np.mean(this_scores))\n    scores_std.append(np.std(this_scores))\n\n# Do the plotting\nimport matplotlib.pyplot as plt\nplt.figure(1, figsize=(4, 3))\nplt.clf()\nplt.semilogx(C_s, scores)\nplt.semilogx(C_s, np.array(scores) + np.array(scores_std), 'b--')\nplt.semilogx(C_s, np.array(scores) - np.array(scores_std), 'b--')\nlocs, labels = plt.yticks()\nplt.yticks(locs, list(map(lambda x: \"%g\" % x, locs)))\nplt.ylabel('CV score')\nplt.xlabel('Parameter C')\nplt.ylim(0, 1.1)\nplt.show()", "Grid search\nNote for later: see the cv argument. 
GridSearchCV does 3-fold cross-validation for a regression and stratified 3-fold for a classifier.", "from sklearn.model_selection import GridSearchCV, cross_val_score\n\nCs = np.logspace(-6, -1, 10)\nclf = GridSearchCV(estimator=svc, param_grid=dict(C=Cs), n_jobs=-1)\nclf.fit(X_digits[:1000], y_digits[:1000])\nprint(clf.best_score_)\nprint(clf.best_estimator_.C)\n\n# Prediction performance on test set is not as good as on train set\nprint(clf.score(X_digits[1000:], y_digits[1000:]))", "Pipelining\nThanks to the uniform interface of the estimator classes, we can create pipelines: compositions of several estimators.\nDigits\nA first example of a pipeline.", "from sklearn import linear_model, decomposition, datasets\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.model_selection import GridSearchCV\n\nlogistic = linear_model.LogisticRegression()\n\npca = decomposition.PCA()\npipe = Pipeline(steps=[('pca', pca), ('logistic', logistic)])\n\ndigits = datasets.load_digits()\nX_digits = digits.data\ny_digits = digits.target\n\n###############################################################################\n# Plot the PCA spectrum\npca.fit(X_digits)\n\nplt.figure(1, figsize=(4, 3))\nplt.clf()\nplt.axes([.2, .2, .7, .7])\nplt.plot(pca.explained_variance_, linewidth=2)\nplt.axis('tight')\nplt.xlabel('n_components')\nplt.ylabel('explained_variance_')\n\n###############################################################################\n# Prediction\n\nn_components = [20, 40, 64]\nCs = np.logspace(-4, 4, 3)\n\n#Parameters of pipelines can be set using ‘__’ separated parameter names:\n\nestimator = GridSearchCV(pipe,\n                         dict(pca__n_components=n_components,\n                              logistic__C=Cs))\nestimator.fit(X_digits, y_digits)\n\nplt.axvline(estimator.best_estimator_.named_steps['pca'].n_components,\n            linestyle=':', label='n_components chosen')\nplt.legend(prop=dict(size=12))\n", "Eigenfaces\nA second example of a pipeline.", "\"\"\"\n===================================================\nFaces recognition example using eigenfaces and SVMs\n===================================================\n\nThe dataset used in this example is a preprocessed excerpt of the\n\"Labeled Faces in the Wild\", aka LFW_:\n\n http://vis-www.cs.umass.edu/lfw/lfw-funneled.tgz (233MB)\n\n.. 
_LFW: http://vis-www.cs.umass.edu/lfw/\n\nExpected results for the top 5 most represented people in the dataset:\n\n================== ============ ======= ========== =======\n precision recall f1-score support\n================== ============ ======= ========== =======\n Ariel Sharon 0.67 0.92 0.77 13\n Colin Powell 0.75 0.78 0.76 60\n Donald Rumsfeld 0.78 0.67 0.72 27\n George W Bush 0.86 0.86 0.86 146\nGerhard Schroeder 0.76 0.76 0.76 25\n Hugo Chavez 0.67 0.67 0.67 15\n Tony Blair 0.81 0.69 0.75 36\n\n avg / total 0.80 0.80 0.80 322\n================== ============ ======= ========== =======\n\n\"\"\"\nfrom __future__ import print_function\n\nfrom time import time\nimport logging\nimport matplotlib.pyplot as plt\n\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.model_selection import GridSearchCV\nfrom sklearn.datasets import fetch_lfw_people\nfrom sklearn.metrics import classification_report\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.decomposition import PCA\nfrom sklearn.svm import SVC\n\n\nprint(__doc__)\n\n# Display progress logs on stdout\nlogging.basicConfig(level=logging.INFO, format='%(asctime)s %(message)s')\n\n\n###############################################################################\n# Download the data, if not already on disk and load it as numpy arrays\n\nlfw_people = fetch_lfw_people(min_faces_per_person=70, resize=0.4)\n\n# introspect the images arrays to find the shapes (for plotting)\nn_samples, h, w = lfw_people.images.shape\n\n# for machine learning we use the 2 data directly (as relative pixel\n# positions info is ignored by this model)\nX = lfw_people.data\nn_features = X.shape[1]\n\n# the label to predict is the id of the person\ny = lfw_people.target\ntarget_names = lfw_people.target_names\nn_classes = target_names.shape[0]\n\nprint(\"Total dataset size:\")\nprint(\"n_samples: %d\" % n_samples)\nprint(\"n_features: %d\" % n_features)\nprint(\"n_classes: %d\" % n_classes)\n\n\n###############################################################################\n# Split into a training set and a test set using a stratified k fold\n\n# split into a training and testing set\nX_train, X_test, y_train, y_test = train_test_split(\n X, y, test_size=0.25, random_state=42)\n\n\n###############################################################################\n# Compute a PCA (eigenfaces) on the face dataset (treated as unlabeled\n# dataset): unsupervised feature extraction / dimensionality reduction\nn_components = 150\n\nprint(\"Extracting the top %d eigenfaces from %d faces\"\n % (n_components, X_train.shape[0]))\nt0 = time()\npca = PCA(n_components=n_components, svd_solver='randomized',\n whiten=True).fit(X_train)\nprint(\"done in %0.3fs\" % (time() - t0))\n\neigenfaces = pca.components_.reshape((n_components, h, w))\n\nprint(\"Projecting the input data on the eigenfaces orthonormal basis\")\nt0 = time()\nX_train_pca = pca.transform(X_train)\nX_test_pca = pca.transform(X_test)\nprint(\"done in %0.3fs\" % (time() - t0))\n\n\n###############################################################################\n# Train a SVM classification model\n\nprint(\"Fitting the classifier to the training set\")\nt0 = time()\nparam_grid = {'C': [1e3, 5e3, 1e4, 5e4, 1e5],\n 'gamma': [0.0001, 0.0005, 0.001, 0.005, 0.01, 0.1], }\nclf = GridSearchCV(SVC(kernel='rbf', class_weight='balanced'), param_grid)\nclf = clf.fit(X_train_pca, y_train)\nprint(\"done in %0.3fs\" % (time() - t0))\nprint(\"Best estimator found by grid 
search:\")\nprint(clf.best_estimator_)\n\n\n###############################################################################\n# Quantitative evaluation of the model quality on the test set\n\nprint(\"Predicting people's names on the test set\")\nt0 = time()\ny_pred = clf.predict(X_test_pca)\nprint(\"done in %0.3fs\" % (time() - t0))\n\nprint(classification_report(y_test, y_pred, target_names=target_names))\nprint(confusion_matrix(y_test, y_pred, labels=range(n_classes)))\n\n\n###############################################################################\n# Qualitative evaluation of the predictions using matplotlib\n\ndef plot_gallery(images, titles, h, w, n_row=3, n_col=4):\n \"\"\"Helper function to plot a gallery of portraits\"\"\"\n plt.figure(figsize=(1.8 * n_col, 2.4 * n_row))\n plt.subplots_adjust(bottom=0, left=.01, right=.99, top=.90, hspace=.35)\n for i in range(n_row * n_col):\n plt.subplot(n_row, n_col, i + 1)\n plt.imshow(images[i].reshape((h, w)), cmap=plt.cm.gray)\n plt.title(titles[i], size=12)\n plt.xticks(())\n plt.yticks(())\n\n\n# plot the result of the prediction on a portion of the test set\n\ndef title(y_pred, y_test, target_names, i):\n pred_name = target_names[y_pred[i]].rsplit(' ', 1)[-1]\n true_name = target_names[y_test[i]].rsplit(' ', 1)[-1]\n return 'predicted: %s\\ntrue: %s' % (pred_name, true_name)\n\nprediction_titles = [title(y_pred, y_test, target_names, i)\n for i in range(y_pred.shape[0])]\n\nplot_gallery(X_test, prediction_titles, h, w)\n\n# plot the gallery of the most significative eigenfaces\n\neigenface_titles = [\"eigenface %d\" % i for i in range(eigenfaces.shape[0])]\nplot_gallery(eigenfaces, eigenface_titles, h, w)\n\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tpin3694/tpin3694.github.io
python/mine_a_twitter_hashtags_and_words.ipynb
mit
[ "Title: Mine Twitter's Stream For Hashtags Or Words\nSlug: mine_a_twitter_hashtags_and_words \nSummary: Mine Twitter's Stream For Hashtags Or Words\nDate: 2016-11-02 12:00\nCategory: Python\nTags: Other\nAuthors: Chris Albon \nThis is a script which monitor's Twitter for tweets containing certain hashtags, words, or phrases. When one of those appears, it saves that tweet, and the user's information to a csv file. A similar version of this script is available on GitHub here. The main difference between the code presented here and the repo is that here I am added extensive comments in the code explaining what is happening. Also, the code below runs as a Jupyter notebook.\nTo get the code below to run, you need to added your own Twitter API credentials.\nPreliminaries", "#Import libraries\nfrom tweepy.streaming import StreamListener\nfrom tweepy import OAuthHandler\nfrom tweepy import Stream\nimport time\nimport csv\nimport sys", "Create A Twitter Stream Miner", "# Create a streamer object\nclass StdOutListener(StreamListener):\n \n # Define a function that is initialized when the miner is called\n def __init__(self, api = None):\n # That sets the api\n self.api = api\n # Create a file with 'data_' and the current time\n self.filename = 'data'+'_'+time.strftime('%Y%m%d-%H%M%S')+'.csv'\n # Create a new file with that filename\n csvFile = open(self.filename, 'w')\n \n # Create a csv writer\n csvWriter = csv.writer(csvFile)\n \n # Write a single row with the headers of the columns\n csvWriter.writerow(['text',\n 'created_at',\n 'geo',\n 'lang',\n 'place',\n 'coordinates',\n 'user.favourites_count',\n 'user.statuses_count',\n 'user.description',\n 'user.location',\n 'user.id',\n 'user.created_at',\n 'user.verified',\n 'user.following',\n 'user.url',\n 'user.listed_count',\n 'user.followers_count',\n 'user.default_profile_image',\n 'user.utc_offset',\n 'user.friends_count',\n 'user.default_profile',\n 'user.name',\n 'user.lang',\n 'user.screen_name',\n 'user.geo_enabled',\n 'user.profile_background_color',\n 'user.profile_image_url',\n 'user.time_zone',\n 'id',\n 'favorite_count',\n 'retweeted',\n 'source',\n 'favorited',\n 'retweet_count'])\n\n # When a tweet appears\n def on_status(self, status):\n \n # Open the csv file created previously\n csvFile = open(self.filename, 'a')\n \n # Create a csv writer\n csvWriter = csv.writer(csvFile)\n \n # If the tweet is not a retweet\n if not 'RT @' in status.text:\n # Try to \n try:\n # Write the tweet's information to the csv file\n csvWriter.writerow([status.text,\n status.created_at,\n status.geo,\n status.lang,\n status.place,\n status.coordinates,\n status.user.favourites_count,\n status.user.statuses_count,\n status.user.description,\n status.user.location,\n status.user.id,\n status.user.created_at,\n status.user.verified,\n status.user.following,\n status.user.url,\n status.user.listed_count,\n status.user.followers_count,\n status.user.default_profile_image,\n status.user.utc_offset,\n status.user.friends_count,\n status.user.default_profile,\n status.user.name,\n status.user.lang,\n status.user.screen_name,\n status.user.geo_enabled,\n status.user.profile_background_color,\n status.user.profile_image_url,\n status.user.time_zone,\n status.id,\n status.favorite_count,\n status.retweeted,\n status.source,\n status.favorited,\n status.retweet_count])\n # If some error occurs\n except Exception as e:\n # Print the error\n print(e)\n # and continue\n pass\n \n # Close the csv file\n csvFile.close()\n\n # Return nothing\n return\n\n # When an error 
occurs\n def on_error(self, status_code):\n # Print the error code\n print('Encountered error with status code:', status_code)\n \n # If the error code is 401, which is the error for bad credentials\n if status_code == 401:\n # End the stream\n return False\n\n # When a deleted tweet appears\n def on_delete(self, status_id, user_id):\n \n # Print message\n print(\"Delete notice\")\n \n # Return nothing\n return\n\n # When the rate limit is reached\n def on_limit(self, track):\n \n # Print rate limiting error\n print(\"Rate limited, continuing\")\n \n # Continue mining tweets\n return True\n\n # When timed out\n def on_timeout(self):\n \n # Print timeout message to stderr\n print('Timeout...', file=sys.stderr)\n \n # Wait 10 seconds\n time.sleep(10)\n \n # Return nothing\n return", "Create A Wrapper For The Miner", "# Create a mining function\ndef start_mining(queries):\n '''\n Takes a list of strings. Returns tweets containing those strings.\n '''\n \n # Variables that contain the user credentials to access the Twitter API\n consumer_key = \"YOUR_CREDENTIALS\"\n consumer_secret = \"YOUR_CREDENTIALS\"\n access_token = \"YOUR_CREDENTIALS\"\n access_token_secret = \"YOUR_CREDENTIALS\"\n\n # Create a listener\n l = StdOutListener()\n \n # Create authorization info\n auth = OAuthHandler(consumer_key, consumer_secret)\n auth.set_access_token(access_token, access_token_secret)\n \n # Create a stream object with listener and authorization\n stream = Stream(auth, l)\n\n # Run the stream object using the user defined queries\n stream.filter(track=queries)", "Run The Stream Miner", "# Start the miner\nstart_mining(['python', '#Python'])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
feststelltaste/software-analytics
notebooks/Storing Git commit information into Pandas' DataFrame.ipynb
gpl-3.0
[ "Software version control systems contain a huge amount of evolutionary data. It's very common to mine these repositories to gain some insight about how the development of a software product works. But there is the need for some preprocessing of that data to avoid false analysis.\nThat’s why I show you how to read the commit information of a Git repository into Pandas’ DataFrame!\nIdea\nThe main idea is to use an existing Git library for Python that provides the necessary (and hopefully efficient) access to a Git repository. In this notebook, we'll use GitPython, because at first glance it seems easy to use and to do the things we need.\nOur implementation strategy is straightforward: We try to avoid any functions as much as possible but try to use all the processing power Pandas delivers. The So let's get started.\nCreate an initial DataFrame\nFirst, we import our two main libraries for analysis: Pandas and GitPython.", "import pandas as pd\nimport git", "With GitPython, you can access a Git repository via a <tt>Repo</tt> object. That's your entry point to the world of Git.\nFor this notebook, we analyze the Sprint PetClinic repository that can be easily cloned to your local computer with a\n<pre>git clone https://github.com/spring-projects/spring-petclinic.git</pre>\n\n<tt>Repo</tt> needs at least the directory to your Git repository. I've added an additional argument <tt>odbt</tt> with the <tt>git.GitCmdObjectDB</tt>. With this, GitPython will be using a more performant approach for retrieving all the data (see doc for more details).", "repo = git.Repo(r'C:\\dev\\repos\\spring-petclinic', odbt=git.GitCmdObjectDB)\nrepo", "To transform the complete repository into Pandas' <tt>DataFrame</tt>, we simply iterate over all commits of the <tt>master</tt> branch.", "commits = pd.DataFrame(repo.iter_commits('master'), columns=['raw'])\ncommits.head()", "Our <tt>raw</tt> column now contains all the commits as PythonGit's <tt>Commit</tt> Objects (to be more accurate: references to these objects). The string representation is coincidental the SHA key of the commit.\nInvestigate commit data\nLet's have a look at the last commit.", "last_commit = commits.ix[0, 'raw']\nlast_commit", "Such a <tt>Commit</tt> object is our entry point for retrieving further data.", "print(last_commit.__doc__)", "It provides all data we need:", "last_commit.__slots__", "E. g. basic data like the commit message.", "last_commit.message", "Or the date of the commit", "last_commit.committed_datetime", "Some information about the author.", "last_commit.author.name\n\nlast_commit.author.email", "Or file statistics about the commit,", "last_commit.stats.files", "Fill the DataFrame with data\nLet's check how fast we can retrieve all the authors from the commit's data.", "%%time\ncommits['author'] = commits['raw'].apply(lambda x: x.author.name)\ncommits.head()", "Let's go further and retrieve some more data (<tt>DataFrame</tt> is transposed / rotated via a <tt>T</tt> for displaying reasons).", "%%time\ncommits['email'] = commits['raw'].apply(lambda x: x.author.email)\ncommits['committed_date'] = commits['raw'].apply(lambda x: pd.to_datetime(x.committed_datetime))\ncommits['message'] = commits['raw'].apply(lambda x: x.message)\ncommits['sha'] = commits['raw'].apply(lambda x: str(x))\ncommits.head(2).T", "Dead easy and reasonable fast, but what about the modified files? Let's challenge our computer a little bit more by extracting the statistics data about every commit. 
The <tt>Stats</tt> object contains all the touched files per commit including the information about the number of lines that were either inserted or deleted.\nAdditionally, we need some tricks to get the data we need. For this, I guide you step by step through this approach. The main idea is to retrieve the real statistics data (not only the object's references) and temporarily store this statistics information as a Pandas <tt>Series</tt>. Then we take another round to transform this data to use it in a <tt>DataFrame</tt>.\nCracking the <tt>stats</tt> files statistic object\nThis step is a little bit tricky and was found only by a good amount of trial and error. But it works in the end as we will see. The goal is to unpack the information in the <tt>stats</tt> object into nice columns of our <tt>DataFrame</tt> via the <tt>Series#apply</tt> method. I'll show you step by step how this works in principle (albeit it will work a little bit differently when using the <tt>apply</tt> approach).\nAs seen above, we have access to every file modification of each commit. In the end, it's a dictionary with the filename as the key and a dictionary of the change attributes as values.", "some_commit = commits.loc[56, 'raw']\nsome_commit.stats.files", "We extract the dictionary of dictionaries in two steps. We have to keep in mind that all tricky data transformation is highly dependent on the right <tt>index</tt>. But first things first.\nFirst, to the outer dictionary: We create a <tt>Series</tt> of the dictionary.", "dict_as_series = pd.Series(some_commit.stats.files)\ndict_as_series", "Second, we wrap that series into a <tt>DataFrame</tt> (for <tt>index</tt> reasons):", "dict_as_series_wrapped_in_dataframe = pd.DataFrame(dict_as_series)\ndict_as_series_wrapped_in_dataframe", "After that, some magic occurs. We stack the <tt>DataFrame</tt>, meaning that we put our columns into our <tt>index</tt> which becomes a <tt>MultiIndex</tt>.", "stacked_dataframe = dict_as_series_wrapped_in_dataframe.stack()\nstacked_dataframe\n\nstacked_dataframe.index", "With some manipulation of the <tt>index</tt>, we achieve what we need: an expansion of the rows for each file in a commit.", "stacked_dataframe.reset_index().set_index('level_1')", "With this (dirty?) trick, all the files from the <tt>stats</tt> object can be assigned to the original <tt>index</tt> of our <tt>DataFrame</tt>.\nIn the context of a call with the <tt>apply</tt> method, the command looks a little bit different, but in the end, the result is the same (I took a commit with multiple modified files from the <tt>DataFrame</tt> just to show the transformation a little bit better):", "pd.DataFrame(commits[64:65]['raw'].apply(\n lambda x: pd.Series(x.stats.files)).stack()).reset_index(level=1)\n\n%%time\nstats = pd.DataFrame(commits['raw'].apply(\n lambda x: pd.Series(x.stats.files)).stack()).reset_index(level=1)\nstats = stats.rename(columns={ 'level_1' : 'filename', 0 : 'stats_modifications'})\nstats.head()", "Unfortunately, this takes almost 30 seconds on my machine :-( (Help needed! Maybe there is a better way for doing this).\nNext, we extract the data from the <tt>stats_modifications</tt> column. 
We do this by simply wrapping the dictionary in a <tt>Series</tt>, which will return the data needed.", "pd.Series(stats['stats_modifications'].iloc[0])", "With an <tt>apply</tt>, it looks a little bit different because we are applying the <tt>lambda</tt> function along the <tt>DataFrame</tt>'s <tt>index</tt>.\nWe get a warning because there seems to be a problem with the ordering of the index. But I haven't found any errors so far with this approach.", "stats_modifications = stats['stats_modifications'].apply(lambda x: pd.Series(x))\nstats_modifications.head(7)", "We join the newly created data with the existing one with a <tt>join</tt> method.", "stats = stats.join(stats_modifications)\nstats.head()", "After we get rid of the now obsolete <tt>stats_modifications</tt> column...", "del(stats['stats_modifications'])\nstats.head()", "...we join the existing <tt>DataFrame</tt> with the <tt>stats</tt> information (transposed for displaying reasons)...", "commits = commits.join(stats)\ncommits.head(2).T", "...and come to an end by deleting the <tt>raw</tt> data column, too (and also transposed for displaying reasons).", "del(commits['raw'])\ncommits.head(2).T", "So we're finished! A <tt>DataFrame</tt> that contains all the repository information needed for further analysis!", "commits.info()", "At the end, we still have our commits from the beginning, but with all information that we can work on in another notebook.", "len(commits.index.unique())", "Store for later usage\nFor now, we just store the <tt>DataFrame</tt> in h5 format with compression for later usage (we get a warning because of the string objects we're using, but that's no problem AFAIK).", "commits.to_hdf(\"data/commits.h5\", 'commits', mode='w', complevel=9, complib='zlib')", "All in one code block\nThis notebook is really long because it includes a lot of explanations. But if you just need the code to extract a Git repository, here it is:", "import pandas as pd\nimport git\n\nrepo = git.Repo(r'C:\\dev\\repos\\spring-petclinic', odbt=git.GitCmdObjectDB)\n\ncommits = pd.DataFrame(repo.iter_commits('master'), columns=['raw'])\ncommits['author'] = commits['raw'].apply(lambda x: x.author.name)\ncommits['email'] = commits['raw'].apply(lambda x: x.author.email)\ncommits['committed_date'] = commits['raw'].apply(lambda x: pd.to_datetime(x.committed_datetime))\ncommits['message'] = commits['raw'].apply(lambda x: x.message)\ncommits['sha'] = commits['raw'].apply(lambda x: str(x))\n\nstats = pd.DataFrame(commits['raw'].apply(lambda x: pd.Series(x.stats.files)).stack()).reset_index(level=1)\nstats = stats.rename(columns={ 'level_1' : 'filename', 0 : 'stats_modifications'})\nstats_modifications = stats['stats_modifications'].apply(lambda x: pd.Series(x))\nstats = stats.join(stats_modifications)\ndel(stats['stats_modifications'])\n\ncommits = commits.join(stats)\ndel(commits['raw'])\n\ncommits.to_hdf(\"data/commits.h5\", 'commits', mode='w', complevel=9, complib='zlib')", "Summary\nI hope you aren't demotivated now by my Pandas approach for extracting data from Git repositories. Agreed, the <tt>stats</tt> object is a little unconventional to work with (and there may be better ways for doing it), but I think in the end, the result is pretty useful." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
luofan18/deep-learning
transfer-learning/Transfer_Learning.ipynb
mit
[ "Transfer Learning\nMost of the time you won't want to train a whole convolutional network yourself. Modern ConvNets training on huge datasets like ImageNet take weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture.\n<img src=\"assets/cnnarchitecture.jpg\" width=700px>\nVGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.\nYou can read more about transfer learning from the CS231n course notes.\nPretrained VGGNet\nWe'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg. This code is already included in 'tensorflow_vgg' directory, sdo you don't have to clone it.\nThis is a really nice implementation of VGGNet, quite easy to work with. The network has already been trained and the parameters are available from this link. You'll need to clone the repo into the folder containing this notebook. Then download the parameter file using the next cell.", "from urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\n\nvgg_dir = 'tensorflow_vgg/'\n# Make sure vgg exists\nif not isdir(vgg_dir):\n raise Exception(\"VGG directory doesn't exist!\")\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile(vgg_dir + \"vgg16.npy\"):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:\n urlretrieve(\n 'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',\n vgg_dir + 'vgg16.npy',\n pbar.hook)\nelse:\n print(\"Parameter file already exists!\")", "Flower power\nHere we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial.", "import tarfile\n\ndataset_folder_path = 'flower_photos'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile('flower_photos.tar.gz'):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:\n urlretrieve(\n 'http://download.tensorflow.org/example_images/flower_photos.tgz',\n 'flower_photos.tar.gz',\n pbar.hook)\n\nif not isdir(dataset_folder_path):\n with tarfile.open('flower_photos.tar.gz') as tar:\n tar.extractall()\n tar.close()", "ConvNet Codes\nBelow, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. 
We can then write these to a file for later when we build our own classifier.\nHere we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \\times 224 \\times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code):\n```\nself.conv1_1 = self.conv_layer(bgr, \"conv1_1\")\nself.conv1_2 = self.conv_layer(self.conv1_1, \"conv1_2\")\nself.pool1 = self.max_pool(self.conv1_2, 'pool1')\nself.conv2_1 = self.conv_layer(self.pool1, \"conv2_1\")\nself.conv2_2 = self.conv_layer(self.conv2_1, \"conv2_2\")\nself.pool2 = self.max_pool(self.conv2_2, 'pool2')\nself.conv3_1 = self.conv_layer(self.pool2, \"conv3_1\")\nself.conv3_2 = self.conv_layer(self.conv3_1, \"conv3_2\")\nself.conv3_3 = self.conv_layer(self.conv3_2, \"conv3_3\")\nself.pool3 = self.max_pool(self.conv3_3, 'pool3')\nself.conv4_1 = self.conv_layer(self.pool3, \"conv4_1\")\nself.conv4_2 = self.conv_layer(self.conv4_1, \"conv4_2\")\nself.conv4_3 = self.conv_layer(self.conv4_2, \"conv4_3\")\nself.pool4 = self.max_pool(self.conv4_3, 'pool4')\nself.conv5_1 = self.conv_layer(self.pool4, \"conv5_1\")\nself.conv5_2 = self.conv_layer(self.conv5_1, \"conv5_2\")\nself.conv5_3 = self.conv_layer(self.conv5_2, \"conv5_3\")\nself.pool5 = self.max_pool(self.conv5_3, 'pool5')\nself.fc6 = self.fc_layer(self.pool5, \"fc6\")\nself.relu6 = tf.nn.relu(self.fc6)\n```\nSo what we want are the values of the first fully connected layer, after being ReLUd (self.relu6). To build the network, we use\nwith tf.Session() as sess:\n vgg = vgg16.Vgg16()\n input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])\n with tf.name_scope(\"content_vgg\"):\n vgg.build(input_)\nThis creates the vgg object, then builds the graph with vgg.build(input_). Then to get the values from the layer,\nfeed_dict = {input_: images}\ncodes = sess.run(vgg.relu6, feed_dict=feed_dict)", "import os\n\nimport numpy as np\nimport tensorflow as tf\n\nfrom tensorflow_vgg import vgg16\nfrom tensorflow_vgg import utils\n\ndata_dir = 'flower_photos/'\ncontents = os.listdir(data_dir)\nclasses = [each for each in contents if os.path.isdir(data_dir + each)]", "Below I'm running images through the VGG network in batches.\n\nExercise: Below, build the VGG network. 
Also get the codes from the first fully connected layer (make sure you get the ReLUd values).", "# Set the batch size higher if you can fit it in your GPU memory\nbatch_size = 16\ncodes_list = []\nlabels = []\nbatch = []\n\ncodes = None\n\nwith tf.Session() as sess:\n \n # TODO: Build the vgg network here\n vgg = vgg16.Vgg16()\n input_ = tf.placeholder(tf.float32, (None, 224, 224, 3))\n with tf.name_scope('content_vgg'):\n vgg.build(input_)\n\n for each in classes:\n print(\"Starting {} images\".format(each))\n class_path = data_dir + each\n files = os.listdir(class_path)\n for ii, file in enumerate(files, 1):\n # Add images to the current batch\n # utils.load_image crops the input images for us, from the center\n img = utils.load_image(os.path.join(class_path, file))\n batch.append(img.reshape((1, 224, 224, 3)))\n labels.append(each)\n \n # Running the batch through the network to get the codes\n if ii % batch_size == 0 or ii == len(files):\n \n # Image batch to pass to VGG network\n images = np.concatenate(batch)\n \n # TODO: Get the values from the relu6 layer of the VGG network\n codes_batch = sess.run(vgg.relu6, feed_dict={input_: images})\n \n # Here I'm building an array of the codes\n if codes is None:\n codes = codes_batch\n else:\n codes = np.concatenate((codes, codes_batch))\n \n # Reset to start building the next batch\n batch = []\n print('{} images processed'.format(ii))\n\n# write codes to file\nwith open('codes', 'w') as f:\n codes.tofile(f)\n \n# write labels to file\nimport csv\nwith open('labels', 'w') as f:\n writer = csv.writer(f, delimiter='\\n')\n writer.writerow(labels)", "Building the Classifier\nNow that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.", "# read codes and labels from file\nimport csv\n\nwith open('labels') as f:\n reader = csv.reader(f, delimiter='\\n')\n labels = np.array([each for each in reader if len(each) > 0]).squeeze()\nwith open('codes') as f:\n codes = np.fromfile(f, dtype=np.float32)\n codes = codes.reshape((len(labels), -1))", "Data prep\nAs usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!\n\nExercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels.", "from sklearn.preprocessing import LabelBinarizer\nlb = LabelBinarizer()\nlabels_vecs = lb.fit_transform(labels) # Your one-hot encoded labels array here", "Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same distribution of classes as the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.\nYou can create the splitter like so:\nss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)\nThen split the data with \nsplitter = ss.split(x, y)\nss.split returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use next(splitter) to get the indices. 
Be sure to read the documentation and the user guide.\n\nExercise: Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets.", "from sklearn.model_selection import StratifiedShuffleSplit\n\nss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)\nsplitter = ss.split(codes, labels_vecs)\n\ntrain_idx, val_idx = next(splitter)\n\nhalf_val = int(len(val_idx) / 2)\ntest_idx = val_idx[:half_val]\nval_idx = val_idx[half_val:]\n\ntrain_x, train_y = codes[train_idx], labels_vecs[train_idx]\nval_x, val_y = codes[val_idx], labels_vecs[val_idx]\ntest_x, test_y = codes[test_idx], labels_vecs[test_idx]\n\nprint(\"Train shapes (x, y):\", train_x.shape, train_y.shape)\nprint(\"Validation shapes (x, y):\", val_x.shape, val_y.shape)\nprint(\"Test shapes (x, y):\", test_x.shape, test_y.shape)", "If you did it right, you should see these sizes for the training sets:\nTrain shapes (x, y): (2936, 4096) (2936, 5)\nValidation shapes (x, y): (367, 4096) (367, 5)\nTest shapes (x, y): (367, 4096) (367, 5)\nClassifier layers\nOnce you have the convolutional codes, you just need to build a classifier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network.\n\nExercise: With the codes and labels loaded, build the classifier. Consider the codes as your inputs; each of them is a 4096D vector. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. Use the cross entropy to calculate the cost.", "inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])\nlabels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])\n\n# TODO: Classifier layers and operations\nout = tf.layers.dense(inputs_, 256)\nout = tf.maximum(0., out) # ReLU activation\n\nlogits = tf.layers.dense(out, labels_vecs.shape[1]) # output layer logits\ncost = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels_) # cross entropy loss\ncost = tf.reduce_mean(cost)\n\noptimizer = tf.train.AdamOptimizer().minimize(cost) # training optimizer\n\n# Operations for validation/test accuracy\npredicted = tf.nn.softmax(logits)\ncorrect_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))", "Batches!\nHere is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.", "def get_batches(x, y, n_batches=10):\n \"\"\" Return a generator that yields batches from arrays x and y. \"\"\"\n batch_size = len(x)//n_batches\n \n for ii in range(0, n_batches*batch_size, batch_size):\n # If we're not on the last batch, grab data with size batch_size\n if ii != (n_batches-1)*batch_size:\n X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size] \n # On the last batch, grab the rest of the data\n else:\n X, Y = x[ii:], y[ii:]\n # I love generators\n yield X, Y", "Training\nHere, we'll train the network.\n\nExercise: So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help. Use the get_batches function I wrote before to get your batches like for x, y in get_batches(train_x, train_y). 
Or write your own!", "epochs = 10\niteration = 0\n\nsaver = tf.train.Saver()\nwith tf.Session() as sess:\n \n # TODO: Your training code here\n sess.run(tf.global_variables_initializer())\n \n for e in range(epochs):\n for x, y in get_batches(train_x, train_y):\n loss, _ = sess.run([cost, optimizer], feed_dict={inputs_: x, labels_: y})\n print ('Epoch: {}/{}, iteration: {}, training loss: {:.5f}'.format(e, epochs, iteration, loss)) \n iteration += 1\n \n if iteration % 5 == 0:\n val_acc = sess.run(accuracy, feed_dict={inputs_: val_x, labels_: val_y})\n print ('Epoch:{}/{}, iteration: {}, validation accuracy: {:.5f}'.\n format(e, epochs, iteration, val_acc))\n \n saver.save(sess, \"checkpoints/flowers.ckpt\")", "Testing\nBelow you see the test accuracy. You can also see the predictions returned for images.", "with tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n \n feed = {inputs_: test_x,\n labels_: test_y}\n test_acc = sess.run(accuracy, feed_dict=feed)\n print(\"Test accuracy: {:.4f}\".format(test_acc))\n\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\nfrom scipy.ndimage import imread", "Below, feel free to choose images and see how the trained classifier predicts the flowers in them.", "test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'\ntest_img = imread(test_img_path)\nplt.imshow(test_img)\n\n# Run this cell if you don't have a vgg graph built\nif 'vgg' in globals():\n print('\"vgg\" object already exists. Will not create again.')\nelse:\n #create vgg\n with tf.Session() as sess:\n input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])\n vgg = vgg16.Vgg16()\n vgg.build(input_)\n\nwith tf.Session() as sess:\n img = utils.load_image(test_img_path)\n img = img.reshape((1, 224, 224, 3))\n\n feed_dict = {input_: img}\n code = sess.run(vgg.relu6, feed_dict=feed_dict)\n \nsaver = tf.train.Saver()\nwith tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n \n feed = {inputs_: code}\n prediction = sess.run(predicted, feed_dict=feed).squeeze()\n\nplt.imshow(test_img)\n\nplt.barh(np.arange(5), prediction)\n_ = plt.yticks(np.arange(5), lb.classes_)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
MatteusDeloge/opengrid
notebooks/Water Leak Detection.ipynb
apache-2.0
[ "This notebook shows step by step how water leaks of different severity can be detected", "import os\nimport sys\nimport pytz\nimport inspect\nimport numpy as np\nimport pandas as pd\nimport datetime as dt\nimport matplotlib.pyplot as plt\nimport tmpo\n\nfrom opengrid import config\nfrom opengrid.library import plotting\nfrom opengrid.library import houseprint\n\nc=config.Config()\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = 16,8\n\n# path to data\npath_to_data = c.get('data', 'folder')\nif not os.path.exists(path_to_data):\n raise IOError(\"Provide your path to the data in your config.ini file. This is a folder containing a 'zip' and 'csv' subfolder.\")\n\nhp = houseprint.Houseprint()\n\n#set start and end date\nend = pd.Timestamp('2015/1/4')\nstart = end - dt.timedelta(days=60)\n\nhp.init_tmpo()\n\ndef dif_interp(ts, freq='min', start=None, end=None):\n \"\"\"\n Return a fixed frequency discrete difference time series from an unevenly spaced cumulative time series\n \"\"\"\n if ts.empty and (start is None or end is None):\n return ts\n start = start or ts.index[0]\n start = start.replace(tzinfo=pytz.utc)\n end = end or max(start, ts.index[-1])\n end = end.replace(tzinfo=pytz.utc)\n start = min(start, end)\n newindex = pd.DataFrame([0, 0], index=[start, end]).resample(freq).index\n if ts.dropna().empty:\n tsmin = ts.reindex(newindex)\n else:\n tsmin = ts.reindex(ts.index + newindex)\n tsmin = tsmin.interpolate(method='time')\n tsmin = tsmin.reindex(newindex)\n return tsmin.diff()*3600/60\n\ndf = hp.get_data(sensortype='water', head=start, tail=end).diff()\n\nwater_sensors = [sensor for sensor in hp.get_sensors('water') if sensor.key in df.columns]\nprint \"{} water sensors\".format(len(water_sensors))", "The purpose is to automatically detect leaks, undesired high consumption, etc.. so we can warn the user\nLet's first have a look at the carpet plots in order to see whether we have such leaks, etc... in our database.", "for sensor in water_sensors:\n ts = df[sensor.key]\n if not ts.dropna().empty:\n plotting.carpet(ts, title=sensor.device.key, zlabel=r'Flow [l/hour]')\n", "Yes, we do! The most obvious is FL03001579 with a more or less constant leak in the first month and later on some very large leaks during several hours. FL03001556 has a moderate leak once and seems to have similar, but less severe leaks later again. Also in FL03001561 there was once a strange (but rather short) issue and later on small, stubborn and irregularly deteriorating leaks of a different kind.\nSo, out of 6 water consumption profiles, there are 3 with possible leaks of different types and severities! This looks a very promising case to detect real issues and show the value of opengrid.\nSo, we would like to detect the following issues:\n* FL03001579: constant leak in first month and big water leak during several hours on some days (toilet leaking?)\n* FL03001556: moderate leak once and small water leak during several hours on some days (toilet leaking?)\n* FL03001561: rather short leak and small, irregularly deteriorating water leak?\nHow could we detect these? Let's look first at the daily load curves of each sensor", "for sensor in water_sensors:\n ts = df[sensor.key]\n if not ts.dropna().empty:\n tsday = ts.resample('D', how='sum')\n tsday.plot(label=sensor.device.key)\n (tsday*0.+1000.).plot(style='--', lw=3, label='_nolegend_')\n plt.legend()", "So, the big water leaks of FL03001579 is relatively easy to detect, e.g. by raising an alarm as soon as the daily consumption exceeds 1500 l. 
However, by that time a lot of water has been wasted already. One could lower the threshold a bit, but below 1000l a false alarm would be raised for FL03001525 on one day. Moreover, the other issues are not detected by such an alarm.\nLet's try it in a different way. First have a look at the load duration curve, maybe there we could find something useful for the alarm.", "for sensor in water_sensors:\n ts = df[sensor.key]\n if not ts.dropna().empty:\n plt.figure()\n for day in pd.date_range(start, end):\n try:\n tsday = ts[day.strftime('%Y/%m/%d')].order(ascending=False) * 60.\n plt.plot(tsday.values/60.)\n x = np.arange(len(tsday.values)) + 10.\n plt.plot(x + 100., 500./x**1.5, 'k--')\n plt.gca().set_yscale('log')\n plt.ylim(ymin=1/60.)\n plt.title(sensor.device.key)\n except Exception:\n pass", "This way, most of the issues could be detected, but some marginally. For small leaks it may take a full day before the alarm is raised.\nMaybe we can improve this. A more reliable way may be to look for consecutive minutes with high load. So, let's have a look at the 60-minute rolling minimum of the load.", "for sensor in water_sensors:\n ts = df[sensor.key] * 60.\n if not ts.dropna().empty:\n tsday = pd.rolling_min(ts, 60)\n ax = tsday.plot(label=sensor.device.key)\n(tsday*0.+20.).plot(style='--', lw=3, label='_nolegend_')\nplt.gca().set_yscale('log')\nax.set_ylim(ymin=1)\nplt.legend()", "The large leaks are very pronounced and easily detected (remark that this is a logarithmic scale!) one hour after the leak started. But the smaller leaks are still not visible.\nRemark that a typical characteristic of a leak is that it is more or less constant, and thus the mean is probably pretty close to the minimum. An expected high load typically varies a lot more over one hour and thus its mean is probably a lot higher than its minimum. So, we could exploit this characteristic of leaks by subtracting some fraction of the rolling mean from the rolling minimum. Leaks should then stand out compared to normal loads.", "for sensor in water_sensors:\n ts = df[sensor.key] * 60.\n if not ts.dropna().empty:\n tsday = pd.rolling_min(ts, 60) - 0.4*pd.rolling_mean(ts, 60)\n ax = tsday.plot(label=sensor.device.key)\n(tsday*0.+1.).plot(style='--', lw=3, label='_nolegend_')\nax.set_yscale('log')\nax.set_ylim(ymin=0.1)\nplt.legend()", "Now this works! The large leaks of FL03001579 stand out by two orders of magnitude, but also the small leaks of FL03001556 and FL03001561 are detected one hour after they started." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
TESScience/FPE_Test_Procedures
John Doty's Global Calibration of the Housekeeping Data Collection.ipynb
mit
[ "John Doty's Global FPE Calibration\nby Matthew Wampler-Doty and Ed Bokhour\nRemember that whenever you power-cycle the Observatory Simulator, you should set preload=True below.\nWhen you are running this notbook and it has not been power cycled, you should set preload=False.", "from tessfpe.dhu.fpe import FPE\nfrom tessfpe.dhu.unit_tests import check_house_keeping_voltages\nfpe1 = FPE(1, debug=False, preload=False, FPE_Wrapper_version='6.1.1')\nprint fpe1.version\nif check_house_keeping_voltages(fpe1):\n print \"Wrapper load complete. Interface voltages OK.\"", "We assume that there is a slight global error in the housekeeping that may be corrected by a linear transformation:\n$$ f(x) := m \\cdot x + c $$\nTo calculus $c$, we can average the biases in the housekeeping, but we must first convert them to ADUs from uA by using the unscale_value function.\nNote that because of statistical variation, we collect 50 samples:", "def estimate_c_param(fpe,samples=50):\n from tessfpe.dhu.house_keeping import unscale_value\n from tessfpe.data.housekeeping_channels import housekeeping_channels\n sample_data = []\n for _ in range(samples):\n analogue_house_keeping = fpe.house_keeping[\"analogue\"]\n biases = [unscale_value(analogue_house_keeping[k],\n 16,\n housekeeping_channels[k][\"low\"],\n housekeeping_channels[k][\"high\"])\n for k in fpe1.house_keeping[\"analogue\"]\n if 'bias' in k]\n for b in biases:\n sample_data.append(b)\n \n return sum(sample_data) / len(sample_data)", "Below run the above estimation; this should be approximately $-4.83$ based on previous calculations", "estimate_c_param(fpe1)", "Next we can use the known voltages from the power supply to recover the slope $m$, which is done as follows. Once again, 50 samples are collected because we are scientists at MIT:", "def unscale_1_8_f(x):\n from tessfpe.data.housekeeping_channels import housekeeping_channels\n from tessfpe.dhu.house_keeping import unscale_value\n low = housekeeping_channels['+1.8f']['low']\n high = housekeeping_channels['+1.8f']['high']\n return unscale_value(x, 16, low, high)\n\ndef unscale_1_f(x):\n from tessfpe.data.housekeeping_channels import housekeeping_channels\n from tessfpe.dhu.house_keeping import unscale_value\n low = housekeeping_channels['+1f']['low']\n high = housekeeping_channels['+1f']['high']\n return unscale_value(x, 16, low, high)\n\ndef estimate_m_param(fpe,samples=50):\n slope_samples = []\n for _ in range(samples):\n a_ = unscale_1_8_f(1.807)\n b_ = unscale_1_f(0.998)\n a = unscale_1_8_f(fpe1.house_keeping['analogue']['+1.8f'])\n b = unscale_1_f(fpe1.house_keeping['analogue']['+1f'])\n slope_samples.append((a_ - b_) / (a - b))\n return sum(slope_samples) / len(slope_samples)", "Once again, we run the calculation; this should be $\\approx 1$", "estimate_m_param(fpe1)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
justacec/bokeh
examples/howto/charts/donut.ipynb
bsd-3-clause
[ "from bokeh.charts import Donut, show, output_notebook, vplot\nfrom bokeh.charts.utils import df_from_json\nfrom bokeh.sampledata.olympics2014 import data\nfrom bokeh.sampledata.autompg import autompg\n\noutput_notebook()\n\nimport pandas as pd", "Generic Examples\nValues with implied index", "d = Donut([2, 4, 5, 2, 8])\nshow(d)", "Values with Explicit Index", "d = Donut(pd.Series([2, 4, 5, 2, 8], index=['a', 'b', 'c', 'd', 'e']))\nshow(d)", "Autompg Data\nTake a look at the data", "autompg.head()", "Simple example implies count when object or categorical", "d = Donut(autompg.cyl.astype(str))\nshow(d)", "Equivalent with columns specified", "d = Donut(autompg, label='cyl', agg='count')\nshow(d)", "Given an indexed series of data pre-aggregated", "d = Donut(autompg.groupby('cyl').displ.mean())\nshow(d)", "Equivalent with columns specified", "d = Donut(autompg, label='cyl',\n values='displ', agg='mean')\nshow(d)", "Given a multi-indexed series fo data pre-aggregated\nSince the aggregation type isn't specified, we must provide it to the chart for use in the tooltip, otherwise it will just say \"value\".", "d = Donut(autompg.groupby(['cyl', 'origin']).displ.mean(), hover_text='mean')\nshow(d)", "Column Labels Produces Slightly Different Result\nIn previous series input example we do not have the original values so we cannot size the wedges based on the mean of displacement for Cyl, then size the wedges proportionally inside of the Cyl wedge. This column labeled example can perform the right sizing, so would be preferred for any aggregated values.", "d = Donut(autompg, label=['cyl', 'origin'],\n values='displ', agg='mean')\nshow(d)", "The spacing between each donut level can be altered\nBy default, this is applied to only the levels other than the first.", "d = Donut(autompg, label=['cyl', 'origin'],\n values='displ', agg='mean', level_spacing=0.15)\nshow(d)", "Can specify the spacing for each level\nThis is applied to each level individually, including the first.", "d = Donut(autompg, label=['cyl', 'origin'],\n values='displ', agg='mean', level_spacing=[0.8, 0.3])\nshow(d)", "Olympics Example\nTake a look at source data", "print(data.keys())\ndata['data'][0]", "Look at table formatted data", "# utilize utility to make it easy to get json/dict data converted to a dataframe\ndf = df_from_json(data)\ndf.head()", "Prepare the data\nThis data is in a \"pivoted\" format, and since the charts interface is built around referencing columns, it is more convenient to de-pivot the data.\n\nWe will sort the data by total medals and select the top rows by the total medals.\nUse pandas.melt to de-pivot the data.", "# filter by countries with at least one medal and sort by total medals\ndf = df[df['total'] > 8]\ndf = df.sort(\"total\", ascending=False)\nolympics = pd.melt(df, id_vars=['abbr'],\n value_vars=['bronze', 'silver', 'gold'],\n value_name='medal_count', var_name='medal')\nolympics.head()\n\n# original example\nd0 = Donut(olympics, label=['abbr', 'medal'], values='medal_count',\n text_font_size='8pt', hover_text='medal_count')\nshow(d0)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
bwgref/nustar_pysolar
notebooks/20200912/EstimateDrift.ipynb
mit
[ "from nustar_pysolar import planning, io\nimport astropy.units as u\nfrom astropy.coordinates import SkyCoord\nimport warnings\nwarnings.filterwarnings('ignore')", "Download the list of occultation periods from the MOC at Berkeley.\nNote that the occultation periods typically only are stored at Berkeley for the future and not for the past. So this is only really useful for observation planning.", "#fname = io.download_occultation_times(outdir='../data/')\nfname = '../data/NUSTAR.2020_253.SHADOW_ANALYSIS.txt'\nprint(fname)", "Download the NuSTAR TLE archive.\nThis contains every two-line element (TLE) that we've received for the whole mission. We'll expand on how to use this later.\nThe times, line1, and line2 elements are now the TLE elements for each epoch.", "tlefile = io.download_tle(outdir='../data')\nprint(tlefile)\ntimes, line1, line2 = io.read_tle_file(tlefile)", "Here is where we define the observing window that we want to use.\nNote that tstart and tend must be in the future otherwise you won't find any occultation times and sunlight_periods will return an error.", "tstart = '2020-09-12T08:30:00'\ntend = '2020-09-13T01:00:00'\norbits = planning.sunlight_periods(fname, tstart, tend)\n\norbits\n\n# Get the solar parameter\nfrom sunpy.coordinates import sun\n\nangular_size = sun.angular_radius(t='now')\ndx = angular_size.arcsec\nprint(dx)\n\noffset = [-dx, 0]*u.arcsec\nfor ind, orbit in enumerate(orbits):\n# midTime = (0.5*(orbit[1] - orbit[0]) + orbit[0])\n# sky_pos = planning.get_skyfield_position(midTime, offset, load_path='./data', parallax_correction=True)\n\n \n midTime = (0.5*(orbit[1] - orbit[0]) + orbit[0])\n sky_pos_beginning = planning.get_skyfield_position(orbit[0], offset, load_path='./data', parallax_correction=True)\n sky_pos_end = planning.get_skyfield_position(orbit[1], offset, load_path='./data', parallax_correction=True)\n\n pos1 = SkyCoord(sky_pos_beginning[0], sky_pos_beginning[1], unit='deg')\n pos2 = SkyCoord(sky_pos_end[0], sky_pos_end[1], unit='deg')\n \n print(\"Orbit: {}\".format(ind))\n# print(\"Orbit start: {} Orbit end: {}\".format(orbit[0].iso, orbit[1].iso))\n# print(f'Aim time: {midTime.iso} RA (deg): {sky_pos[0]:8.3f} Dec (deg): {sky_pos[1]:8.3f}')\n \n\n print(pos1.separation(pos2).arcmin)\n \n \n print(\"\")\n \n ", "This is where you actually make the Mosaic for Orbit 0", "from importlib import reload\nreload(planning)\n\npa = planning.get_nustar_roll(tstart, 0)\nprint(tstart)\nprint(\"NuSTAR Roll angle for Det0 in NE quadrant: {}\".format(pa))\n\n# We're actually using a SKY PA of 340. So...we'll need to rotate \ntarget_pa = 150\nextra_roll = ( 150 - pa.value ) * u.deg\nprint(f'Extra roll used: {extra_roll}')\n\n\n# Just use the first orbit...or choose one. This may download a ton of deltat.preds, which is a known \n# bug to be fixed.\norbit = orbits[0].copy()\nprint(orbit)\n#...adjust the index above to get the correct orbit. Then uncomment below.\n\nplanning.make_mosaic(orbit, make_regions=True, extra_roll = extra_roll, outfile='orbit0_mosaic.txt', write_output=True)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
albahnsen/PracticalMachineLearningClass
exercises/E7-DecisionTrees.ipynb
mit
[ "Exercise 7\nCapital Bikeshare data\nIntroduction\n\nCapital Bikeshare dataset from Kaggle: data, data dictionary\nEach observation represents the bikeshare rentals initiated during a given hour of a given day", "%matplotlib inline\nimport pandas as pd\nimport numpy as np\nfrom sklearn.cross_validation import cross_val_score\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.tree import DecisionTreeRegressor, export_graphviz\n\n# read the data and set \"datetime\" as the index\nurl = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/bikeshare.csv'\nbikes = pd.read_csv(url, index_col='datetime', parse_dates=True)\n\n# \"count\" is a method, so it's best to rename that column\nbikes.rename(columns={'count':'total'}, inplace=True)\n\n# create \"hour\" as its own feature\nbikes['hour'] = bikes.index.hour\n\nbikes.head()\n\nbikes.tail()", "hour ranges from 0 (midnight) through 23 (11pm)\nworkingday is either 0 (weekend or holiday) or 1 (non-holiday weekday)\n\nExercise 7.1\nRun these two groupby statements and figure out what they tell you about the data.", "# mean rentals for each value of \"workingday\"\nbikes.groupby('workingday').total.mean()\n\n# mean rentals for each value of \"hour\"\nbikes.groupby('hour').total.mean()", "Exercise 7.2\nRun this plotting code, and make sure you understand the output. Then, separate this plot into two separate plots conditioned on \"workingday\". (In other words, one plot should display the hourly trend for \"workingday=0\", and the other should display the hourly trend for \"workingday=1\".)", "# mean rentals for each value of \"hour\"\nbikes.groupby('hour').total.mean().plot()", "Plot for workingday == 0 and workingday == 1", "# hourly rental trend for \"workingday=0\"\nbikes[bikes.workingday==0].groupby('hour').total.mean().plot()\n\n# hourly rental trend for \"workingday=1\"\nbikes[bikes.workingday==1].groupby('hour').total.mean().plot()\n\n# combine the two plots\nbikes.groupby(['hour', 'workingday']).total.mean().unstack().plot()", "Write about your findings\nExercise 7.3\nFit a linear regression model to the entire dataset, using \"total\" as the response and \"hour\" and \"workingday\" as the only features. Then, print the coefficients and interpret them. What are the limitations of linear regression in this instance?\nExercice 7.4\nCreate a Decision Tree to forecast \"total\" by manually iterating over the features \"hour\" and \"workingday\". The algorithm must at least have 6 end nodes.\nExercise 7.5\nTrain a Decision Tree using scikit-learn. Comment about the performance of the models." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
adrianstaniec/deep-learning
13_seq2seq/sequence_to_sequence_implementation.ipynb
mit
[ "Character Sequence to Sequence\nIn this notebook, we'll build a model that takes in a sequence of letters, and outputs a sorted version of that sequence. We'll do that using what we've learned so far about Sequence to Sequence models. This notebook was updated to work with TensorFlow 1.1 and builds on the work of Dave Currie. Check out Dave's post Text Summarization with Amazon Reviews.\n<img src=\"images/sequence-to-sequence.jpg\"/>\nDataset\nThe dataset lives in the /data/ folder. At the moment, it is made up of the following files:\n * letters_source.txt: The list of input letter sequences. Each sequence is its own line. \n * letters_target.txt: The list of target sequences we'll use in the training process. Each sequence here is a response to the input sequence in letters_source.txt with the same line number.", "import numpy as np\nimport time\n\nimport helper\n\nsource_path = 'data/letters_source.txt'\ntarget_path = 'data/letters_target.txt'\n\nsource_sentences = helper.load_data(source_path)\ntarget_sentences = helper.load_data(target_path)", "Let's start by examining the current state of the dataset. source_sentences contains the entire input sequence file as text delimited by newline symbols.", "source_sentences[:50].split('\\n')", "target_sentences contains the entire output sequence file as text delimited by newline symbols. Each line corresponds to the line from source_sentences. target_sentences contains a sorted characters of the line.", "target_sentences[:50].split('\\n')", "Preprocess\nTo do anything useful with it, we'll need to turn the each string into a list of characters: \n<img src=\"images/source_and_target_arrays.png\"/>\nThen convert the characters to their int values as declared in our vocabulary:", "def extract_character_vocab(data):\n special_words = ['<PAD>', '<UNK>', '<GO>', '<EOS>']\n\n set_words = set([character for line in data.split('\\n') for character in line])\n int_to_vocab = {word_i: word for word_i, word in enumerate(special_words + list(set_words))}\n vocab_to_int = {word: word_i for word_i, word in int_to_vocab.items()}\n\n return int_to_vocab, vocab_to_int\n\n# Build int2letter and letter2int dicts\nsource_int_to_letter, source_letter_to_int = extract_character_vocab(source_sentences)\ntarget_int_to_letter, target_letter_to_int = extract_character_vocab(target_sentences)\n\n# Convert characters to ids\nsource_letter_ids = [[source_letter_to_int.get(letter, source_letter_to_int['<UNK>']) for letter in line] for line in source_sentences.split('\\n')]\ntarget_letter_ids = [[target_letter_to_int.get(letter, target_letter_to_int['<UNK>']) for letter in line] + [target_letter_to_int['<EOS>']] for line in target_sentences.split('\\n')] \n\nprint(\"Example source sequence\")\nprint(source_letter_ids[:3])\nprint(\"\\n\")\nprint(\"Example target sequence\")\nprint(target_letter_ids[:3])", "This is the final shape we need them to be in. 
We can now proceed to building the model.\nModel\nCheck the Version of TensorFlow\nThis will check to make sure you have the correct version of TensorFlow.", "from distutils.version import LooseVersion\nimport tensorflow as tf\nfrom tensorflow.python.layers.core import Dense\n\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))", "Hyperparameters", "# Number of Epochs\nepochs = 60\n# Batch Size\nbatch_size = 128\n# RNN Size\nrnn_size = 50\n# Number of Layers\nnum_layers = 2\n# Embedding Size\nencoding_embedding_size = 15\ndecoding_embedding_size = 15\n# Learning Rate\nlearning_rate = 0.001", "Input", "def get_model_inputs():\n input_data = tf.placeholder(tf.int32, [None, None], name='input')\n targets = tf.placeholder(tf.int32, [None, None], name='targets')\n lr = tf.placeholder(tf.float32, name='learning_rate')\n\n target_sequence_length = tf.placeholder(tf.int32, (None,), name='target_sequence_length')\n max_target_sequence_length = tf.reduce_max(target_sequence_length, name='max_target_len')\n source_sequence_length = tf.placeholder(tf.int32, (None,), name='source_sequence_length')\n \n return input_data, targets, lr, target_sequence_length, max_target_sequence_length, source_sequence_length\n", "Sequence to Sequence Model\nWe can now start defining the functions that will build the seq2seq model. We are building it from the bottom up with the following components:\n2.1 Encoder\n2.2 Decoder\n2.3 Seq2seq model connecting the encoder and decoder\n2.4 Build the training graph hooking up the model with the optimizer\n\n2.1 Encoder\nThe first bit of the model we'll build is the encoder. Here, we'll embed the input data, construct our encoder, then pass the embedded data to the encoder.\n\n\nEmbed the input data using tf.contrib.layers.embed_sequence\n<img src=\"images/embed_sequence.png\" />\n\n\nPass the embedded input into a stack of RNNs. Save the RNN state and ignore the output.\n<img src=\"images/encoder.png\" />", "def encoding_layer(input_data, rnn_size, num_layers,\n source_sequence_length, source_vocab_size, \n encoding_embedding_size):\n\n # Encoder embedding\n enc_embed_input = tf.contrib.layers.embed_sequence(input_data, \n source_vocab_size, \n encoding_embedding_size)\n # RNN cell\n def make_cell(rnn_size):\n return tf.contrib.rnn.LSTMCell(rnn_size,\n initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))\n\n enc_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])\n \n enc_output, enc_state = tf.nn.dynamic_rnn(enc_cell, enc_embed_input, \n sequence_length=source_sequence_length, dtype=tf.float32)\n return enc_output, enc_state", "2.2 Decoder\nThe decoder is probably the most involved part of this model. The following steps are needed to create it:\n1- Process decoder inputs\n2- Set up the decoder components\n - Embedding\n - Decoder cell\n - Dense output layer\n - Training decoder\n - Inference decoder\n\nProcess Decoder Input\nIn the training process, the target sequences will be used in two different places:\n\nUsing them to calculate the loss\nFeeding them to the decoder during training to make the model more robust.\n\nNow we need to address the second point. Let's assume our targets look like this in their letter/word form (we're doing this for readability. 
At this point in the code, these sequences would be in int form):\n<img src=\"images/targets_1.png\"/>\nWe need to do a simple transformation on the tensor before feeding it to the decoder:\n1- We will feed an item of the sequence to the decoder at each time step. Think about the last timestep -- where the decoder outputs the final word in its output. The input to that step is the item before last from the target sequence. The decoder has no use for the last item in the target sequence in this scenario. So we'll need to remove the last item. \nWe do that using tensorflow's tf.strided_slice() method. We hand it the tensor, and the index of where to start and where to end the cutting.\n<img src=\"images/strided_slice_1.png\"/>\n2- The first item in each sequence we feed to the decoder has to be the GO symbol. So we'll add that to the beginning.\n<img src=\"images/targets_add_go.png\"/>\nNow the tensor is ready to be fed to the decoder. It looks like this (if we convert from ints to letters/symbols):\n<img src=\"images/targets_after_processing_1.png\"/>", "# Process the input we'll feed to the decoder\ndef process_decoder_input(target_data, vocab_to_int, batch_size):\n '''Remove the last word id from each batch and concat the <GO> to the beginning of each batch'''\n ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])\n dec_input = tf.concat([tf.fill([batch_size, 1], vocab_to_int['<GO>']), ending], 1)\n\n return dec_input", "Set up the decoder components\n1- Embedding\nNow that we have prepared the inputs to the training decoder, we need to embed them so they can be ready to be passed to the decoder. \nWe'll create an embedding matrix like the following then have tf.nn.embedding_lookup convert our input to its embedded equivalent:\n<img src=\"images/embeddings.png\" />\n2- Decoder Cell\nThen we declare our decoder cell. Just like the encoder, we'll use a tf.contrib.rnn.LSTMCell here as well.\nWe need to declare a decoder for the training process, and a decoder for the inference/prediction process. These two decoders will share their parameters (so that all the weights and biases that are set during the training phase can be used when we deploy the model).\nFirst, we'll need to define the type of cell we'll be using for our decoder RNNs. We opted for LSTM.\n3- Dense output layer\nBefore we move to declaring our decoders, we'll need to create the output layer, which will be a tensorflow.python.layers.core.Dense layer that translates the outputs of the decoder to logits that tell us which element of the decoder vocabulary the decoder is choosing to output at each time step.\n4- Training decoder\nEssentially, we'll be creating two decoders which share their parameters. One for training and one for inference. The two are similar in that both are created using tf.contrib.seq2seq.BasicDecoder and tf.contrib.seq2seq.dynamic_decode. They differ, however, in that we feed the target sequences as inputs to the training decoder at each time step to make it more robust.\nWe can think of the training decoder as looking like this (except that it works with sequences in batches):\n<img src=\"images/sequence-to-sequence-training-decoder.png\"/>\nThe training decoder does not feed the output of each time step to the next. 
Rather, the inputs to the decoder time steps are the target sequence from the training dataset (the orange letters).\n5- Inference decoder\nThe inference decoder is the one we'll use when we deploy our model to the wild.\n<img src=\"images/sequence-to-sequence-inference-decoder.png\"/>\nWe'll hand our encoder hidden state to both the training and inference decoders and have it process its output. TensorFlow handles most of the logic for us. We just have to use the appropriate methods from tf.contrib.seq2seq and supply them with the appropriate inputs.", "def decoding_layer(target_letter_to_int, decoding_embedding_size, num_layers, rnn_size,\n target_sequence_length, max_target_sequence_length, enc_state, dec_input):\n \n # 1. Decoder Embedding\n target_vocab_size = len(target_letter_to_int)\n dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))\n dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)\n\n # 2. Construct the decoder cell\n def make_cell(rnn_size):\n return tf.contrib.rnn.LSTMCell(rnn_size,\n initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))\n\n dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])\n \n # 3. Dense layer to translate the decoder's output at each time \n # step into a choice from the target vocabulary\n output_layer = Dense(target_vocab_size,\n kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1))\n\n\n # 4. Set up a training decoder and an inference decoder\n # Training Decoder\n with tf.variable_scope(\"decode\"):\n # Helper for the training process. Used by BasicDecoder to read inputs.\n training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input,\n sequence_length=target_sequence_length,\n time_major=False)\n # Basic decoder\n training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,\n training_helper,\n enc_state,\n output_layer) \n # Perform dynamic decoding using the decoder\n training_decoder_output = tf.contrib.seq2seq.dynamic_decode(training_decoder,\n impute_finished=True,\n maximum_iterations=max_target_sequence_length)[0]\n \n # 5. Inference Decoder\n # Reuses the same parameters trained by the training process\n with tf.variable_scope(\"decode\", reuse=True):\n start_tokens = tf.tile(tf.constant([target_letter_to_int['<GO>']], dtype=tf.int32), \n [batch_size], name='start_tokens')\n\n # Helper for the inference process.\n inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings,\n start_tokens,\n target_letter_to_int['<EOS>'])\n # Basic decoder\n inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,\n inference_helper,\n enc_state,\n output_layer) \n # Perform dynamic decoding using the decoder\n inference_decoder_output = tf.contrib.seq2seq.dynamic_decode(inference_decoder,\n impute_finished=True,\n maximum_iterations=max_target_sequence_length)[0]\n return training_decoder_output, inference_decoder_output", "2.3 Seq2seq model\nLet's now go a step above, and hook up the encoder and decoder using the methods we just declared", "def seq2seq_model(input_data, targets, lr, target_sequence_length, \n max_target_sequence_length, source_sequence_length,\n source_vocab_size, target_vocab_size,\n enc_embedding_size, dec_embedding_size, \n rnn_size, num_layers):\n \n # Pass the input data through the encoder. 
We'll ignore the encoder output, but use the state\n _, enc_state = encoding_layer(input_data, \n rnn_size, \n num_layers, \n source_sequence_length,\n source_vocab_size, \n enc_embedding_size)\n \n # Prepare the target sequences we'll feed to the decoder in training mode\n dec_input = process_decoder_input(targets, target_letter_to_int, batch_size)\n \n # Pass encoder state and decoder inputs to the decoders\n training_decoder_output, inference_decoder_output = decoding_layer(target_letter_to_int, \n dec_embedding_size, \n num_layers, \n rnn_size,\n target_sequence_length,\n max_target_sequence_length,\n enc_state, \n dec_input)\n \n return training_decoder_output, inference_decoder_output", "Model outputs\ntraining_decoder_output and inference_decoder_output both contain a 'rnn_output' logits tensor that looks like this:\n<img src=\"images/logits.png\"/>\nWe'll pass the logits we get from the training decoder to tf.contrib.seq2seq.sequence_loss() to calculate the loss and ultimately the gradient.", "# Build the graph\ntrain_graph = tf.Graph()\n# Set the graph to default to ensure that it is ready for training\nwith train_graph.as_default():\n \n # Load the model inputs \n input_data, targets, lr, target_sequence_length, max_target_sequence_length, source_sequence_length = get_model_inputs()\n \n # Create the training and inference logits\n training_decoder_output, inference_decoder_output = seq2seq_model(input_data, \n targets, \n lr, \n target_sequence_length, \n max_target_sequence_length, \n source_sequence_length,\n len(source_letter_to_int),\n len(target_letter_to_int),\n encoding_embedding_size, \n decoding_embedding_size, \n rnn_size, \n num_layers) \n \n # Create tensors for the training logits and inference logits\n training_logits = tf.identity(training_decoder_output.rnn_output, 'logits')\n inference_logits = tf.identity(inference_decoder_output.sample_id, 'predictions')\n \n # Create the weights for sequence_loss\n masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, \n dtype=tf.float32, name='masks')\n\n with tf.name_scope(\"optimization\"):\n \n # Loss function\n cost = tf.contrib.seq2seq.sequence_loss(training_logits, targets, masks)\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -5., 5.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)", "Get Batches\nThere's little processing involved when we retrieve the batches. 
This is a simple example assuming batch_size = 2\nSource sequences (it's actually in int form, we're showing the characters for clarity):\n<img src=\"images/source_batch.png\" />\nTarget sequences (also in int, but showing letters for clarity):\n<img src=\"images/target_batch.png\" />", "def pad_sentence_batch(sentence_batch, pad_int):\n \"\"\"Pad sentences with <PAD> so that each sentence of a batch has the same length\"\"\"\n max_sentence = max([len(sentence) for sentence in sentence_batch])\n return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]\n\ndef get_batches(targets, sources, batch_size, source_pad_int, target_pad_int):\n \"\"\"Batch targets, sources, and the lengths of their sentences together\"\"\"\n for batch_i in range(0, len(sources)//batch_size):\n start_i = batch_i * batch_size\n sources_batch = sources[start_i:start_i + batch_size]\n targets_batch = targets[start_i:start_i + batch_size]\n pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))\n pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))\n \n # Need the lengths for the _lengths parameters\n pad_targets_lengths = []\n for target in pad_targets_batch:\n pad_targets_lengths.append(len(target))\n \n pad_source_lengths = []\n for source in pad_sources_batch:\n pad_source_lengths.append(len(source))\n \n yield pad_targets_batch, pad_sources_batch, pad_targets_lengths, pad_source_lengths", "Train\nWe're now ready to train our model. If you run into OOM (out of memory) issues during training, try to decrease the batch_size.", "# Split data to training and validation sets\ntrain_source = source_letter_ids[batch_size:]\ntrain_target = target_letter_ids[batch_size:]\nvalid_source = source_letter_ids[:batch_size]\nvalid_target = target_letter_ids[:batch_size]\n(valid_targets_batch, valid_sources_batch, valid_targets_lengths, valid_sources_lengths) = next(get_batches(valid_target, \n valid_source, \n batch_size,\n source_letter_to_int['<PAD>'],\n target_letter_to_int['<PAD>']))\n\ndisplay_step = 20 # Check training loss after every 20 batches\n\ncheckpoint = \"best_model.ckpt\" \nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n \n for epoch_i in range(1, epochs+1):\n for batch_i, (targets_batch, sources_batch, targets_lengths, sources_lengths) in enumerate(\n get_batches(train_target, train_source, batch_size,\n source_letter_to_int['<PAD>'],\n target_letter_to_int['<PAD>'])):\n \n # Training step\n _, loss = sess.run([train_op, cost], {input_data: sources_batch,\n targets: targets_batch,\n lr: learning_rate,\n target_sequence_length: targets_lengths,\n source_sequence_length: sources_lengths})\n\n # Debug message updating us on the status of the training\n if batch_i % display_step == 0 and batch_i > 0:\n \n # Calculate validation cost\n validation_loss = sess.run([cost], {input_data: valid_sources_batch,\n targets: valid_targets_batch,\n lr: learning_rate,\n target_sequence_length: valid_targets_lengths,\n source_sequence_length: valid_sources_lengths})\n \n print('Epoch {:>3}/{} Batch {:>4}/{} - Loss: {:>6.3f} - Validation loss: {:>6.3f}'\n .format(epoch_i, epochs, batch_i, len(train_source) // batch_size, loss, validation_loss[0]))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, checkpoint)\n print('Model Trained and Saved')", "Prediction", "def source_to_seq(text):\n '''Prepare the text for the model'''\n sequence_length = 7\n return [source_letter_to_int.get(word, 
source_letter_to_int['<UNK>']) for word in text]+ [source_letter_to_int['<PAD>']]*(sequence_length-len(text))\n\ninput_sentence = 'prejudice'\ntext = source_to_seq(input_sentence)\n\ncheckpoint = \"./best_model.ckpt\"\n\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(checkpoint + '.meta')\n loader.restore(sess, checkpoint)\n\n input_data = loaded_graph.get_tensor_by_name('input:0')\n logits = loaded_graph.get_tensor_by_name('predictions:0')\n source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')\n target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')\n \n #Multiply by batch_size to match the model's input parameters\n answer_logits = sess.run(logits, {input_data: [text]*batch_size, \n target_sequence_length: [len(text)]*batch_size, \n source_sequence_length: [len(text)]*batch_size})[0] \n\npad = source_letter_to_int[\"<PAD>\"] \n\nprint('Original Text:', input_sentence)\n\nprint('\\nSource')\nprint(' Word Ids: {}'.format([i for i in text]))\nprint(' Input Words: {}'.format(\" \".join([source_int_to_letter[i] for i in text])))\n\nprint('\\nTarget')\nprint(' Word Ids: {}'.format([i for i in answer_logits if i != pad]))\nprint(' Response Words: {}'.format(\" \".join([target_int_to_letter[i] for i in answer_logits if i != pad])))" ]
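To see exactly what the prediction-time preprocessing above does, here is a self-contained sketch of source_to_seq with a made-up character vocabulary (the ids are invented for illustration). Note that with the hard-coded sequence_length of 7, an input longer than 7 characters such as 'prejudice' gets a negative pad count and therefore no padding at all:

```python
# Toy character vocabulary, invented for this sketch
source_letter_to_int = {'<PAD>': 0, '<UNK>': 1, 'p': 2, 'r': 3, 'e': 4}

def source_to_seq(text, sequence_length=7):
    '''Map characters to ids, falling back to <UNK>, then right-pad with <PAD>'''
    ids = [source_letter_to_int.get(ch, source_letter_to_int['<UNK>']) for ch in text]
    # The multiplier below is negative for inputs longer than sequence_length,
    # so the list-multiply yields an empty list and no padding is appended
    return ids + [source_letter_to_int['<PAD>']] * (sequence_length - len(text))

print(source_to_seq('pre'))        # [2, 3, 4, 0, 0, 0, 0]
print(source_to_seq('prejudice'))  # nine ids, no padding appended
```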
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/niwa/cmip6/models/sandbox-1/ocnbgchem.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Ocnbgchem\nMIP Era: CMIP6\nInstitute: NIWA\nSource ID: SANDBOX-1\nTopic: Ocnbgchem\nSub-Topics: Tracers. \nProperties: 65 (37 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:30\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'niwa', 'sandbox-1', 'ocnbgchem')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport\n3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks\n4. Key Properties --&gt; Transport Scheme\n5. Key Properties --&gt; Boundary Forcing\n6. Key Properties --&gt; Gas Exchange\n7. Key Properties --&gt; Carbon Chemistry\n8. Tracers\n9. Tracers --&gt; Ecosystem\n10. Tracers --&gt; Ecosystem --&gt; Phytoplankton\n11. Tracers --&gt; Ecosystem --&gt; Zooplankton\n12. Tracers --&gt; Disolved Organic Matter\n13. Tracers --&gt; Particules\n14. Tracers --&gt; Dic Alkalinity \n1. Key Properties\nOcean Biogeochemistry key properties\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of ocean biogeochemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of ocean biogeochemistry model code (PISCES 2.0,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Model Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of ocean biogeochemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Geochemical\" \n# \"NPZD\" \n# \"PFT\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Elemental Stoichiometry\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe elemental stoichiometry (fixed, variable, mix of the two)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Fixed\" \n# \"Variable\" \n# \"Mix of both\" \n# TODO - please enter value(s)\n", "1.5. Elemental Stoichiometry Details\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe which elements have fixed/variable stoichiometry", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.6. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of all prognostic tracer variables in the ocean biogeochemistry component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.7. Diagnostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of all diagnotic tracer variables in the ocean biogeochemistry component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.8. Damping\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe any tracer damping used (such as artificial correction or relaxation to climatology,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.damping') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport\nTime stepping method for passive tracers transport in ocean biogeochemistry\n2.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime stepping framework for passive tracers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n", "2.2. Timestep If Not From Ocean\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTime step for passive tracers (if different from ocean)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks\nTime stepping framework for biology sources and sinks in ocean biogeochemistry\n3.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime stepping framework for biology sources and sinks", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n", "3.2. Timestep If Not From Ocean\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTime step for biology sources and sinks (if different from ocean)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Transport Scheme\nTransport scheme in ocean biogeochemistry\n4.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of transport scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline\" \n# \"Online\" \n# TODO - please enter value(s)\n", "4.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTransport scheme used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Use that of ocean model\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "4.3. Use Different Scheme\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDecribe transport scheme if different than that of ocean model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Boundary Forcing\nProperties of biogeochemistry boundary forcing\n5.1. Atmospheric Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how atmospheric deposition is modeled", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Atmospheric Chemistry model\" \n# TODO - please enter value(s)\n", "5.2. River Input\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how river input is modeled", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Land Surface model\" \n# TODO - please enter value(s)\n", "5.3. Sediments From Boundary Conditions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList which sediments are speficied from boundary condition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.4. Sediments From Explicit Model\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList which sediments are speficied from explicit sediment model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. 
Key Properties --&gt; Gas Exchange\n*Properties of gas exchange in ocean biogeochemistry *\n6.1. CO2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs CO2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.2. CO2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe CO2 gas exchange", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.3. O2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs O2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.4. O2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe O2 gas exchange", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.5. DMS Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs DMS gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.6. DMS Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify DMS gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.7. N2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs N2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.8. N2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify N2 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.9. N2O Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs N2O gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.10. N2O Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify N2O gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.11. CFC11 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs CFC11 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.12. CFC11 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify CFC11 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.13. CFC12 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs CFC12 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.14. CFC12 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify CFC12 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.15. SF6 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs SF6 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.16. SF6 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify SF6 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.17. 13CO2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs 13CO2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.18. 
13CO2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify 13CO2 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.19. 14CO2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs 14CO2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.20. 14CO2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify 14CO2 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.21. Other Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any other gas exchange", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Key Properties --&gt; Carbon Chemistry\nProperties of carbon chemistry biogeochemistry\n7.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how carbon chemistry is modeled", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other protocol\" \n# TODO - please enter value(s)\n", "7.2. PH Scale\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf NOT OMIP protocol, describe pH scale.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea water\" \n# \"Free\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "7.3. Constants If Not OMIP\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf NOT OMIP protocol, list carbon chemistry constants.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Tracers\nOcean biogeochemistry tracers\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of tracers in ocean biogeochemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Sulfur Cycle Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs sulfur cycle modeled ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8.3. Nutrients Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList nutrient species present in ocean biogeochemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrogen (N)\" \n# \"Phosphorous (P)\" \n# \"Silicium (S)\" \n# \"Iron (Fe)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.4. Nitrous Species If N\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf nitrogen present, list nitrous species.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrates (NO3)\" \n# \"Amonium (NH4)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.5. Nitrous Processes If N\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf nitrogen present, list nitrous processes.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dentrification\" \n# \"N fixation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9. Tracers --&gt; Ecosystem\nEcosystem properties in ocean biogeochemistry\n9.1. Upper Trophic Levels Definition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDefinition of upper trophic level (e.g. based on size) ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Upper Trophic Levels Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDefine how upper trophic level are treated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Tracers --&gt; Ecosystem --&gt; Phytoplankton\nPhytoplankton properties in ocean biogeochemistry\n10.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of phytoplankton", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"PFT including size based (specify both below)\" \n# \"Size based only (specify below)\" \n# \"PFT only (specify below)\" \n# TODO - please enter value(s)\n", "10.2. Pft\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nPhytoplankton functional types (PFT) (if applicable)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diatoms\" \n# \"Nfixers\" \n# \"Calcifiers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.3. Size Classes\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nPhytoplankton size classes (if applicable)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microphytoplankton\" \n# \"Nanophytoplankton\" \n# \"Picophytoplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11. Tracers --&gt; Ecosystem --&gt; Zooplankton\nZooplankton properties in ocean biogeochemistry\n11.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of zooplankton", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"Size based (specify below)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.2. Size Classes\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nZooplankton size classes (if applicable)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microzooplankton\" \n# \"Mesozooplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Tracers --&gt; Disolved Organic Matter\nDisolved organic matter properties in ocean biogeochemistry\n12.1. Bacteria Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there bacteria representation ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.2. Lability\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe treatment of lability in dissolved organic matter", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Labile\" \n# \"Semi-labile\" \n# \"Refractory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13. Tracers --&gt; Particules\nParticulate carbon properties in ocean biogeochemistry\n13.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is particulate carbon represented in ocean biogeochemistry?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diagnostic\" \n# \"Diagnostic (Martin profile)\" \n# \"Diagnostic (Balast)\" \n# \"Prognostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. 
Types If Prognostic\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf prognostic, type(s) of particulate matter taken into account", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"POC\" \n# \"PIC (calcite)\" \n# \"PIC (aragonite\" \n# \"BSi\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. Size If Prognostic\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No size spectrum used\" \n# \"Full size spectrum\" \n# \"Discrete size classes (specify which below)\" \n# TODO - please enter value(s)\n", "13.4. Size If Discrete\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic and discrete size, describe which size classes are used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13.5. Sinking Speed If Prognostic\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, method for calculation of sinking speed of particules", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Function of particule size\" \n# \"Function of particule type (balast)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Tracers --&gt; Dic Alkalinity\nDIC and alkalinity properties in ocean biogeochemistry\n14.1. Carbon Isotopes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich carbon isotopes are modelled (C13, C14)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"C13\" \n# \"C14)\" \n# TODO - please enter value(s)\n", "14.2. Abiotic Carbon\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs abiotic carbon modelled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14.3. Alkalinity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is alkalinity modelled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Prognostic\" \n# \"Diagnostic)\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
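Every cell above follows the same pyesdoc pattern: select a property with DOC.set_id, then record a value with DOC.set_value. As a hedged sketch, completing the first few properties might look like the following — the author and values are placeholders rather than real SANDBOX-1 metadata, DOC is the NotebookOutput object from the setup cell, and enumerated properties must use one of the listed valid choices:

```python
# Hypothetical example of completing the notebook; all values are placeholders
DOC.set_author("Jane Researcher", "jane.researcher@example.org")

DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
DOC.set_value("Placeholder overview of the ocean biogeochemistry component.")

DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
DOC.set_value("NPZD")  # must match one of the listed valid choices

# 1.N properties are lists; repeated set_value calls are assumed here
# to append one list item each
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
DOC.set_value("DIC")
DOC.set_value("Alkalinity")
```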
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
rodabt/notebooks
2_cargos_contrata.ipynb
mit
[ "Análisis de puestos", "import pandas as pd\nimport pandas_profiling\nimport numpy as np\nfrom pandasql import *\n\nfrom nltk.corpus import stopwords\nfrom nltk.tokenize import word_tokenize\nimport string\n\ndf = pd.read_pickle('sueldos_contrata.pkl')\n\npandas_profiling.ProfileReport(df)", "Limpieza de variable Estamento\nVeamos cómo es la distribución de valores de esta variable", "q = \"SELECT Estamento,count(*) as Nro FROM df GROUP BY Estamento ORDER BY 2 desc\"\nsqldf(q)", "Observaciones:\n\nMezcla de mayúsculas y minúsculas\nAbreviaciones con puntuaciones extrañas (d.,j.,h.,g.)\nExiste el concepto \"No Aplica\". Hay que investigar la razón.\n\nPara la limpieza, vamos a eliminar todas las puntuaciones y reemplazaré algunas abreviaciones. Examinemos primero estos casos raros.", "q = '''\nSELECT \n distinct Estamento,\n Cargo, \n Servicio \nFROM \n df \nWHERE \n Estamento like \"_.%\"\n'''\nsqldf(q)", "Observamos que todos pertencen al Servicio de Evaluación Ambiental. Si miramos la fuente, podemos ver que efectivamente no hay referencias del significado de los prefijos mostrados, por lo que se eliminarán.", "# No hay stop_words a borrar. Normalmente se usaría stop_words = stopwords.words('spanish')\nstop_words = []\n\n# Borramos abreviaciones sin significado\npalabras_a_borrar = ['d.','g.','h.','j.']\n\n# Normalmente se usa puntuaciones = string.punctuation\npuntuaciones = []\n\ndef clean_text(s):\n \n # 1. Pasar todo a minúsculas y separar en arreglo de palabras\n ws = s.lower().split()\n\n # 2. Eliminar espaciones innecesarios, junto con conectores gramaticales'''\n ws = [w.strip() for w in ws if w not in stop_words]\n \n # 3. Remover puntuación\n ws = [w for w in ws if w not in puntuaciones]\n\n # 4. \n ws = [w for w in ws if w not in palabras_a_borrar]\n return \" \".join(ws)\n\nestamentos = df['Estamento'].tolist()\n\nestamentos = pd.Series([clean_text(s) for s in estamentos])\n\n# Uniformemos valores con un diccionario y reemplazamos\nd = {\n 'técnico': 'tecnico',\n 'fiscalizadores': 'fiscalizador',\n 'administrativos': 'administrativo',\n 'profesionales': 'profesional',\n 'tco no profes': 'tecnico',\n 'no aplica': 'otro',\n 'auxiliares': 'auxiliar',\n 'técnicos': 'tecnico'\n}\nestamentos.replace(d,inplace=True)\n\nestamentos.sort_values().value_counts()", "No aparecen más problemas.\nLimpieza de variable Cargo", "# Primero veamos la lista de cargos con mayor frecuencia de aparición, ej. con frecuencia > 50\nq = \"SELECT Cargo,count(*) as Nro FROM df GROUP BY Cargo HAVING Nro>50 ORDER BY 2 desc\"\nsqldf(q)", "Podemos observar que hay fechas metidas entre medio. Veamos los casos:", "s = 'select servicio,count(*) as nro from df where Cargo like \"%/2016\" group by Cargo'\nsqldf(s)", "Nos encontramos con que los casos corresponden a la misma institución (qu.e además tiene el nombre malo). 
Reviewing the web page, we can see that the problem comes from the source:\n\nSo we will drop these records in the final merge", "# Move the positions into a variable for cleaning\ncargos = df['Cargo'].tolist()\n\n# To use the \"clean_text\" function we will now also strip Spanish grammatical connectors (stop words)\nstop_words = stopwords.words('spanish')\npalabras_a_borrar = []\npuntuaciones = string.punctuation\n\ncargos = pd.Series([clean_text(s) for s in cargos])\n\n# Sample to see how it turned out\ncargos[0:10].to_frame(name='cargo')\n\n# Let's look at an overall list\ndf_cargos = cargos.to_frame(name='cargo')\nq = \"SELECT cargo,count(*) as nro FROM df_cargos GROUP BY cargo ORDER BY 2 desc\"\nsqldf(q)", "To make the cleanup more efficient, we will split the position name into two components: \n\nThe main title of the position, e.g.: jefe, encargado, tecnico\nIts area of application\n\nWe will then normalize each field:\n\nUnifying gender, e.g.: jefe = jefa\nRemoving acting/substitute (subrogante) markers\nCleaning acronyms, e.g.: cdr r\nNormalizing texts equivalent in meaning: depto. = departamento", "# Create a new Data Frame by expanding on the first split. After the corrections we will join them back\ndf_cargos = df_cargos['cargo'].str.split(' ', 1, expand=True)\ndf_cargos.columns = ['nivel','ambito']\ndf_cargos.head()\n\n# Let's look at the corrections needed in the \"nivel\" field\ns='select nivel,count(*) as nro from df_cargos group by nivel having nro>1 order by 2 desc'\nsqldf(s)", "The list shows several kinds of replacements. We will build a dictionary and apply the corrections", "# Fix accents too\n# Standardize values with a dictionary and replace (suggestion: load from a dict.txt file)\nd = {\n 'director/a':'director',\n 'directora':'director',\n 'subdirector/a': 'subdirector',\n 'subdirectora': 'subdirector',\n 'jefa':'jefe',\n 'jefe/a':'jefe',\n 'jefe(a)':'jefe',\n 'jefatura': 'jefe',\n 'jefatura,': 'jefe',\n 'consultor/a,': 'consultor',\n 'consultora': 'consultor',\n 'asesora': 'asesor',\n 'asesor(a)': 'asesor',\n 'asesoria': 'asesor',\n 'asesora,':'asesor',\n 'enc.': 'encargado',\n 'encargada':'encargado',\n 'encargada,': 'encargado',\n 'encargada(s)':'encargado',\n 'encargado/a': 'encargado',\n 'encargado(a)': 'encargado',\n 'encagado': 'encargado',\n 'coordinador(a)': 'coordinador',\n 'coordinadora': 'coordinador',\n 'coordinador/a': 'coordinador',\n 'abogado/a': 'abogado',\n 'técnico': 'tecnico',\n 'tecnico(a)': 'tecnico',\n 'técnico(a)' : 'tecnico',\n 'estadístico': 'estadistico',\n 'fiscalizadora': 'fiscalizador',\n 'ejecutiva':'ejecutivo',\n 'operadora': 'operador',\n 'abogada': 'abogado',\n 'profesionalde': 'profesional',\n 'jede': 'jefe',\n 'tesorera': 'tesorero',\n 'adm.': 'administrativo',\n 'administrativo/a': 'administrativo',\n 'administrativa': 'administrativo',\n 'administrativo(a)': 'administrativo',\n 'administrtivo' : 'administrativo',\n 'analista,': 'analista',\n 'editora': 'editor',\n 'guía': 'guia',\n 'digitadora': 'digitador',\n 'examinadora': 'examinador',\n 'secretaria,': 'secretaria',\n 'transcriptor,': 'transcriptor',\n 'auditora': 'auditor',\n 'profesional,' : 'profesional',\n 'profsional' : 'profesional',\n 'prosefional' : 'profesional',\n 'reportera' : 'reportero',\n 'nutricionista,' : 'nutricionista',\n 'profecional' : 'profesional',\n 'restaurador,' : 'restaurador',\n 'psicólogo' : 'psicologo',\n 'trabajadora': 'trabajador',\n 'asistentente': 'asistente',\n 'supervisora': 'supervisor',\n 'líder' : 'lider',\n 'secretario' : 'secretariado',\n 
'secretaria' : 'secretariado',\n 'admninistrativo' : 'administrativo',\n 'administración': 'administrativo',\n 'experta': 'experto'\n}\ndf_cargos[\"nivel\"].replace(d,inplace=True)\n\ns = 'select nivel, count(*) as nro from df_cargos group by nivel having nro>1 order by 2 desc'\nprint(sqldf(s)['nivel'].tolist())", "We will also review:\n\nasist.\nsub\nanalisis\ndpto.\ngabinete\ndesarrollar\ncoordinar\nprograma\nsecretaría\noficina", "s = '''\nselect * from df where Cargo like \"asist. %\" union\nselect * from df where Cargo like \"sub %\" union\nselect * from df where Cargo like \"analisis %\" union\nselect * from df where Cargo like \"dpto. %\" union\nselect * from df where Cargo like \"gabinete %\" union\nselect * from df where Cargo like \"desarrollar %\" union\nselect * from df where Cargo like \"programa %\" union\nselect * from df where Cargo like \"secretaría %\" union\nselect * from df where Cargo like \"oficina %\"\n'''\nsqldf(s)", "There are several situations:\n\nPosition names mixed with a function description\nThe name of the target area is written instead of the position\nasist. = asistente\nsub = subjefe\n\nIn the first two cases, we will replace the position with the name of the estamento", "d = {\n 'asist.': 'asistente',\n 'secretaría' : 'administrativo',\n 'analisis': 'profesional',\n 'dpto.': 'profesional',\n 'gabinete': 'profesional',\n 'desarrollar': 'profesional',\n 'coordinar': 'profesional',\n 'programa': 'profesional',\n 'sub': 'subjefe',\n 'oficina': 'auxiliar'\n}\ndf_cargos[\"nivel\"].replace(d,inplace=True)\n\n# Now let's look at the lowest-frequency positions\ns='select nivel,count(*) as nro from df_cargos group by nivel having nro=1 order by 2 desc'\nsqldf(s)\n\nd = {\n 'contralora': 'contralor',\n 'administrativo-chofer' : 'chofer',\n 'administrativo-estafeta' : 'estafeta',\n 'administrativo-servicios' : 'administrativo',\n 'enc.unidad' : 'encargado',\n 'apoyotécnico' : 'técnico',\n}\ndf_cargos[\"nivel\"].replace(d,inplace=True)\n\n# Let's also check for leftover HTML artifacts in Cargo\ns='select * from df where Cargo like \"&nbs%\"'\nsqldf(s)", "We replace them with the estamento", "d = {'&nbs': 'profesional'}\ndf_cargos[\"nivel\"].replace(d,inplace=True)\n\n# Let's do a review\nprint(df_cargos['nivel'].unique().tolist())", "This looks much better. Now let's look at the ambito field", "s = 'select distinct ambito from df_cargos order by 1 asc'\nsqldf(s)", "Cleaning the Grado variable", "s = 'select distinct `Grado EUS` from df'\nprint(sqldf(s)['Grado EUS'].tolist())", "There are several corrections to make:\n\nRemove the degree symbol (º)\nRemove the word EUS\nExamine the records where a region name appears", "# Symbol cleanup\ngrados = df['Grado EUS']\ngrados = grados.str.replace('°','')\ngrados = grados.str.replace('º','')\ngrados = grados.str.replace(' EUS','')\ngrados = grados.str.replace('1C','1')\n\nprint(grados.unique().tolist())", "Let's examine the anomalous cases", "s = 'select servicio,count(*) as nro from df where `Grado EUS` like \"Reg%\" group by servicio'\nsqldf(s)", "All of the data belongs to this institution. 
The error comes from the source:\n\nSince we have the gross monthly pay (Remuneración Bruta Mensualizada), we can impute the grade using the median pay associated with each grade, since the grade information on each page is too imprecise to do this imputation directly", "# Replace the anomalous values with '0' and impute them later\nd = {\n 'I' : '0', \n 'II' : '0', \n 'III' : '0',\n 'Región Metropolitana de Santiago' : '0', \n 'Región del Biobío' : '0', \n 'Atacama' : '0', \n 'Región Aisén del Gral. Carlos Ibáñez del Campo' : '0', \n 'Región del Libertador Gral. Bernardo OHiggins' : '0', \n 'Región de Los Lagos' : '0', \n 'Región de la Araucanía' : '0', \n 'Región de Los Ríos' : '0', \n 'Tarapacá' : '0'\n}\ngrados.replace(d,inplace=True)\ngrados = grados.astype('int')\nprint(grados.unique().tolist())", "Cleaning the start and end date variables", "s = 'select distinct `Fecha de inicio` from df'\nprint(sqldf(s)['Fecha de inicio'].tolist())", "A preliminary examination of the data shows no strange elements in this field. We will try to convert it to a date type.", "fecha_inicio = df['Fecha de inicio']\n\ndf.columns\n\ns = 'select distinct `Fecha de término` from df'\nprint(sqldf(s)['Fecha de término'].tolist())", "For the end date, we see that the field contains the string \"Indefinido\" (open-ended) and that neither date field has null values. We will replace Indefinido with null.", "# Replace Indefinido with null\nfecha_termino = df['Fecha de término'].replace('Indefinido',np.nan)", "No more corrections are needed. We will assemble the final Data Frame.", "# Final Data Frame\ndatos = {\n 'estamento': estamentos,\n 'nivel': df_cargos['nivel'].tolist(),\n 'ambito': df_cargos['ambito'].tolist(),\n 'grado': grados.values,\n 'rbm': df['Remuneración Bruta Mensualizada'].tolist(),\n 'fecha_inicio': fecha_inicio.values,\n 'fecha_termino': fecha_termino.values,\n 'servicio': df['servicio'].values\n}\ndf_final = pd.DataFrame(datos)\ndf_final.head()\n\n# Reorder the columns\ncols = ['servicio','estamento','grado','nivel','ambito','fecha_inicio','fecha_termino','rbm']\ndf_final = df_final[cols]\ndf_final.head()\n\ns = 'select * from df_final where nivel not like \"%/2016\"'\ndf_final = sqldf(s)\n\ndf_final['fecha_inicio'] = pd.to_datetime(df_final['fecha_inicio'])\ndf_final['fecha_termino'] = pd.to_datetime(df_final['fecha_termino'])\n\ndf_final.to_pickle('sueldos_contrata_clean.pkl')" ]
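The grade imputation promised above is deferred in the notebook (the anomalous grades are parked at 0). A hedged sketch of one way to carry it out after the final assembly cell, using the per-grade medians of the gross monthly pay — the nearest-median rule is an illustrative choice, not the notebook's, and it assumes the rbm column is numeric (convert it first if needed):

```python
# Median pay per known grade, computed from the rows that have a real grade
known = df_final[df_final['grado'] != 0]
median_rbm = known.groupby('grado')['rbm'].median()

def impute_grado(rbm):
    # Assign the grade whose median pay is closest to this row's pay
    return int((median_rbm - rbm).abs().idxmin())

# Fill in only the rows that were parked at grade 0
mask = df_final['grado'] == 0
df_final.loc[mask, 'grado'] = df_final.loc[mask, 'rbm'].apply(impute_grado)
```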
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
2php/CodeToolKit
9.caffe-ssd/examples/ssd_detect.ipynb
mit
[ "Detection with SSD\nIn this example, we will load a SSD model and use it to detect objects.\n1. Setup\n\nFirst, Load necessary libs and set up caffe and caffe_root", "import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nplt.rcParams['figure.figsize'] = (10, 10)\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# Make sure that caffe is on the python path:\ncaffe_root = '../' # this file is expected to be in {caffe_root}/examples\nimport os\nos.chdir(caffe_root)\nimport sys\nsys.path.insert(0, 'python')\n\nimport caffe\ncaffe.set_device(0)\ncaffe.set_mode_gpu()", "Load LabelMap.", "from google.protobuf import text_format\nfrom caffe.proto import caffe_pb2\n\n# load PASCAL VOC labels\nlabelmap_file = 'data/VOC0712/labelmap_voc.prototxt'\nfile = open(labelmap_file, 'r')\nlabelmap = caffe_pb2.LabelMap()\ntext_format.Merge(str(file.read()), labelmap)\n\ndef get_labelname(labelmap, labels):\n num_labels = len(labelmap.item)\n labelnames = []\n if type(labels) is not list:\n labels = [labels]\n for label in labels:\n found = False\n for i in xrange(0, num_labels):\n if label == labelmap.item[i].label:\n found = True\n labelnames.append(labelmap.item[i].display_name)\n break\n assert found == True\n return labelnames", "Load the net in the test phase for inference, and configure input preprocessing.", "model_def = 'models/VGGNet/VOC0712/SSD_300x300/deploy.prototxt'\nmodel_weights = 'models/VGGNet/VOC0712/SSD_300x300/VGG_VOC0712_SSD_300x300_iter_60000.caffemodel'\n\nnet = caffe.Net(model_def, # defines the structure of the model\n model_weights, # contains the trained weights\n caffe.TEST) # use test mode (e.g., don't perform dropout)\n\n# input preprocessing: 'data' is the name of the input blob == net.inputs[0]\ntransformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})\ntransformer.set_transpose('data', (2, 0, 1))\ntransformer.set_mean('data', np.array([104,117,123])) # mean pixel\ntransformer.set_raw_scale('data', 255) # the reference model operates on images in [0,255] range instead of [0,1]\ntransformer.set_channel_swap('data', (2,1,0)) # the reference model has channels in BGR order instead of RGB", "2. SSD detection\n\nLoad an image.", "# set net to batch size of 1\nimage_resize = 300\nnet.blobs['data'].reshape(1,3,image_resize,image_resize)\n\nimage = caffe.io.load_image('examples/images/fish-bike.jpg')\nplt.imshow(image)", "Run the net and examine the top_k results", "transformed_image = transformer.preprocess('data', image)\nnet.blobs['data'].data[...] 
= transformed_image\n\n# Forward pass.\ndetections = net.forward()['detection_out']\n\n# Parse the outputs.\ndet_label = detections[0,0,:,1]\ndet_conf = detections[0,0,:,2]\ndet_xmin = detections[0,0,:,3]\ndet_ymin = detections[0,0,:,4]\ndet_xmax = detections[0,0,:,5]\ndet_ymax = detections[0,0,:,6]\n\n# Get detections with confidence higher than 0.6.\ntop_indices = [i for i, conf in enumerate(det_conf) if conf >= 0.6]\n\ntop_conf = det_conf[top_indices]\ntop_label_indices = det_label[top_indices].tolist()\ntop_labels = get_labelname(labelmap, top_label_indices)\ntop_xmin = det_xmin[top_indices]\ntop_ymin = det_ymin[top_indices]\ntop_xmax = det_xmax[top_indices]\ntop_ymax = det_ymax[top_indices]", "Plot the boxes", "colors = plt.cm.hsv(np.linspace(0, 1, 21)).tolist()\n\nplt.imshow(image)\ncurrentAxis = plt.gca()\n\nfor i in xrange(top_conf.shape[0]):\n xmin = int(round(top_xmin[i] * image.shape[1]))\n ymin = int(round(top_ymin[i] * image.shape[0]))\n xmax = int(round(top_xmax[i] * image.shape[1]))\n ymax = int(round(top_ymax[i] * image.shape[0]))\n score = top_conf[i]\n label = int(top_label_indices[i])\n label_name = top_labels[i]\n display_txt = '%s: %.2f'%(label_name, score)\n coords = (xmin, ymin), xmax-xmin+1, ymax-ymin+1\n color = colors[label]\n currentAxis.add_patch(plt.Rectangle(*coords, fill=False, edgecolor=color, linewidth=2))\n currentAxis.text(xmin, ymin, display_txt, bbox={'facecolor':color, 'alpha':0.5})" ]
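For intuition, the Transformer configured earlier is roughly equivalent to the following NumPy steps on a [0, 1] float RGB image (a sketch, not Caffe's exact code path; caffe.io.resize_image is used for the resize and the mean values are the BGR mean pixel set above):

```python
import numpy as np
import caffe

def preprocess_like_transformer(image, size=300):
    # image: HxWx3 RGB float array in [0, 1], as returned by caffe.io.load_image
    resized = caffe.io.resize_image(image, (size, size))  # match the net input
    scaled = resized * 255.0               # set_raw_scale(255): [0,1] -> [0,255]
    bgr = scaled[:, :, ::-1]               # set_channel_swap((2,1,0)): RGB -> BGR
    chw = bgr.transpose(2, 0, 1)           # set_transpose((2,0,1)): HWC -> CHW
    mean = np.array([104.0, 117.0, 123.0]).reshape(3, 1, 1)  # BGR mean pixel
    return chw - mean                      # set_mean: per-channel subtraction
```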
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
barronh/GCandPython
PNC_03Evaluation.ipynb
gpl-3.0
[ "from notebook.services.config import ConfigManager\ncfgm = ConfigManager()\ncfgm.update('livereveal', {\n 'theme': 'simple',\n 'transition': 'convex',\n 'start_slideshow_at': 'selected'\n})\n", "Python Analysis Evaluation\nAuthor: Barron H. Henderson", "# Prepare my slides\n%pylab inline\n%cd working", "Process AQS for evaluation\n\nDownload annual zip file(s)\nUnzip\nUse sed, grep, or awk to get spatial/temporal subset\nReshape data\nMissing data should be masked\nDimensions (time, point)\n\n\n\nCHECK POINT\n\n\nWhat do you think the dimensions should be for AQS observations?\n\n\n\n\n\nWhat meta-data should be present?\n\n\n\n\n\nGetting AQS Observations\n\nGet AQS observations\nRaw outputs from AQS website\nRepresentational State Transfer - good for small amounts of data\n\n\nREST format was having problems due to its transition.\n\nRaw outputs\n\nDownload directly or download inline", "!pncaqsraw4pnceval.py --help", "wktpolygon\nThis option is most relevant for regional extractions.", "from shapely.wkt import loads\ngeom = loads(\"POLYGON ((30 10, 40 35, 20 40, 10 20, 30 10))\")\nx, y = geom.exterior.xy\nplt.plot(x, y, ls = '-', marker = 'o')", "CHECK POINT\nWhat should the bounding box be as a WKT Polygon?\nANSWERS Hidden\n<div style=\"visibility: hidden\">\n\n\"POLYGON ((llcrnrlon llcrnrlat, lrcrnrlon lrcrnrlat, urcrnrlon urcrnrlat, ulcrnrlon ulcrnrlat, llcrnrlon llcrnrlat))\"\n\n</div>\n\nDownload and Process", "!pncaqsraw4pnceval.py -O --timeresolution=daily \\\n --start-date 2013-05-01 --end-date 2013-07-01 \\\n --wktpolygon \"POLYGON ((-181.25 0, 178.75 0, 178.75 90, -181.25 90, -181.25 0))\"\n\n%ls -l AQS_DATA_20130501-20130701.nc", "Review Output", "!pncdump.py --header AQS_DATA_20130501-20130701.nc", "Extract GEOS-Chem at AQS", "!pncgen -O -f \"bpch,vertgrid='GEOS-5-NATIVE',nogroup=('IJ-AVG-$',)\" \\\n --extract-file AQS_DATA_20130501-20130701.nc --stack=time -v O3 -s layer72,0 \\\n bpch/ctm.bpch.v10-01-public-Run0.2013050100 \\\n bpch/ctm.bpch.v10-01-public-Run0.2013050100 \\\n bpch_aqs_extract.nc\n\n!pncdump.py --header bpch_aqs_extract.nc\n\n!pnceval.py --help\n\n%%bash\npnceval.py --funcs NO,NP,NOP,MO,MP,MB,RMSE,IOA,AC -v O3 \\\n--pnc \" --expr O3=Ozone*1000;O3.units=\\'ppb\\' -r time,mean AQS_DATA_20130501-20130701.nc\" \\\n--pnc \" -r time,mean bpch_aqs_extract.nc\"\n\nfrom PseudoNetCDF import pnceval\nhelp(pnceval)", "Reproduced in Python\n\nRead in files\nGet variables\nCalculate MB, RMSE\nModify\nRepeat except for month specific results\nRepeat except for site specific results\n\n\n\nReproduction Provided", "from PseudoNetCDF import PNC, pnceval\n\naqs = PNC(\"--reduce=time,mean\", \"--expr=O3=Ozone*1000\", \"AQS_DATA_20130501-20130701.nc\")\ngeos = PNC(\"--reduce=time,mean\", \"bpch_aqs_extract.nc\")\naqso3 = aqs.ifiles[0].variables['O3']\ngeoso3 = geos.ifiles[0].variables['O3']\nprint(aqso3.shape)\nprint(geoso3.shape)\nprint(pnceval.RMSE(aqso3, geoso3))", "ANSWERS Hidden\n<div style=\"visibility: hidden\">\n\n```\nfrom PseudoNetCDF import PNC, pnceval\n\naqs = PNC(\"--expr=O3=Ozone*1000\", \"AQS_DATA_20130501-20130701.nc\")\ngeos = PNC(\"bpch_aqs_extract.nc\")\naqso3 = aqs.ifiles[0].variables['O3'].reshape(2, 31, 1, 1295).mean(1)\ngeoso3 = geos.ifiles[0].variables['O3']\nprint(aqso3.shape)\nprint(geoso3.shape)\nprint(pnceval.RMSE(aqso3, geoso3, axis = 2))\n```\n</div>" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ssarangi/self_driving_cars
traffic_sign_classifier/LeNet-Traffic-Sign.ipynb
apache-2.0
[ "LeNet Lab Solution\n\nSource: Yan LeCun\nLoad Data\nLoad the MNIST data, which comes pre-loaded with TensorFlow.\nYou do not need to modify this section.", "# Load pickled data\nimport pickle\nimport pandas as pd\n\n# TODO: Fill this in based on where you saved the training and testing data\n\ntraining_file = 'data/train.p'\nvalidation_file= 'data/valid.p'\ntesting_file = 'data/test.p'\n\nwith open(training_file, mode='rb') as f:\n train = pickle.load(f)\nwith open(testing_file, mode='rb') as f:\n test = pickle.load(f)\n \nX_train, y_train = train['features'], train['labels']\nX_test, y_test = test['features'], test['labels']", "The MNIST data that TensorFlow pre-loads comes as 28x28x1 images.\nHowever, the LeNet architecture only accepts 32x32xC images, where C is the number of color channels.\nIn order to reformat the MNIST data into a shape that LeNet will accept, we pad the data with two rows of zeros on the top and bottom, and two columns of zeros on the left and right (28+2+2 = 32).\nYou do not need to modify this section.", "from sklearn.model_selection import train_test_split\n\nX_train, X_validation, y_train, y_validation = train_test_split(X_train, y_train, test_size=0.2, random_state=0)\n\nprint(\"Updated Image Shape: {}\".format(X_train[0].shape))", "Visualize Data\nView a sample from the dataset.\nYou do not need to modify this section.", "import random\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nindex = random.randint(0, len(X_train))\nimage = X_train[index].squeeze()\n\nplt.figure(figsize=(1,1))\nplt.imshow(image)\nprint(y_train[index])", "Preprocess Data\nShuffle the training data.\nYou do not need to modify this section.", "from sklearn.utils import shuffle\n\nX_train, y_train = shuffle(X_train, y_train)", "Setup TensorFlow\nThe EPOCH and BATCH_SIZE values affect the training speed and model accuracy.\nYou do not need to modify this section.", "import tensorflow as tf\n\nEPOCHS = 10\nBATCH_SIZE = 128", "SOLUTION: Implement LeNet-5\nImplement the LeNet-5 neural network architecture.\nThis is the only cell you need to edit.\nInput\nThe LeNet architecture accepts a 32x32xC image as input, where C is the number of color channels. Since MNIST images are grayscale, C is 1 in this case.\nArchitecture\nLayer 1: Convolutional. The output shape should be 28x28x6.\nActivation. Your choice of activation function.\nPooling. The output shape should be 14x14x6.\nLayer 2: Convolutional. The output shape should be 10x10x16.\nActivation. Your choice of activation function.\nPooling. The output shape should be 5x5x16.\nFlatten. Flatten the output shape of the final pooling layer such that it's 1D instead of 3D. The easiest way to do is by using tf.contrib.layers.flatten, which is already imported for you.\nLayer 3: Fully Connected. This should have 120 outputs.\nActivation. Your choice of activation function.\nLayer 4: Fully Connected. This should have 84 outputs.\nActivation. Your choice of activation function.\nLayer 5: Fully Connected (Logits). This should have 10 outputs.\nOutput\nReturn the result of the 2nd fully connected layer.", "from tensorflow.contrib.layers import flatten\n\ndef LeNet(x): \n # Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer\n mu = 0\n sigma = 0.1\n \n # SOLUTION: Layer 1: Convolutional. Input = 32x32x1. 
Output = 28x28x6.\n conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 3, 6), mean = mu, stddev = sigma))\n conv1_b = tf.Variable(tf.zeros(6))\n conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b\n\n # SOLUTION: Activation.\n conv1 = tf.nn.relu(conv1)\n \n print(conv1.get_shape())\n\n # SOLUTION: Pooling. Input = 28x28x6. Output = 14x14x6.\n conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')\n\n # SOLUTION: Layer 2: Convolutional. Output = 10x10x16.\n conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma))\n conv2_b = tf.Variable(tf.zeros(16))\n conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b\n \n # SOLUTION: Activation.\n conv2 = tf.nn.relu(conv2)\n\n # SOLUTION: Pooling. Input = 10x10x16. Output = 5x5x16.\n conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')\n\n # SOLUTION: Flatten. Input = 5x5x16. Output = 400.\n fc0 = flatten(conv2)\n \n # SOLUTION: Layer 3: Fully Connected. Input = 400. Output = 120.\n fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma))\n fc1_b = tf.Variable(tf.zeros(120))\n fc1 = tf.matmul(fc0, fc1_W) + fc1_b\n \n # SOLUTION: Activation.\n fc1 = tf.nn.relu(fc1)\n\n # SOLUTION: Layer 4: Fully Connected. Input = 120. Output = 84.\n fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))\n fc2_b = tf.Variable(tf.zeros(84))\n fc2 = tf.matmul(fc1, fc2_W) + fc2_b\n \n # SOLUTION: Activation.\n fc2 = tf.nn.relu(fc2)\n\n # SOLUTION: Layer 5: Fully Connected. Input = 84. Output = 43.\n fc3_W = tf.Variable(tf.truncated_normal(shape=(84, 43), mean = mu, stddev = sigma))\n fc3_b = tf.Variable(tf.zeros(43))\n logits = tf.matmul(fc2, fc3_W) + fc3_b\n \n return logits", "Features and Labels\nTrain LeNet to classify the traffic sign data.\nx is a placeholder for a batch of input images.\ny is a placeholder for a batch of output labels.\nYou do not need to modify this section.", "x = tf.placeholder(tf.float32, (None, 32, 32, 3))\ny = tf.placeholder(tf.int32, (None))\none_hot_y = tf.one_hot(y, 43)", "Training Pipeline\nCreate a training pipeline that uses the model to classify the traffic sign data.\nYou do not need to modify this section.", "rate = 0.001\n\nlogits = LeNet(x)\ncross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits, one_hot_y)\nloss_operation = tf.reduce_mean(cross_entropy)\noptimizer = tf.train.AdamOptimizer(learning_rate = rate)\ntraining_operation = optimizer.minimize(loss_operation)", "Model Evaluation\nEvaluate the loss and accuracy of the model for a given dataset.\nYou do not need to modify this section.", "correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))\naccuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\nsaver = tf.train.Saver()\n\ndef evaluate(X_data, y_data):\n num_examples = len(X_data)\n total_accuracy = 0\n sess = tf.get_default_session()\n for offset in range(0, num_examples, BATCH_SIZE):\n batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]\n accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})\n total_accuracy += (accuracy * len(batch_x))\n return total_accuracy / num_examples", "Train the Model\nRun the training data through the training pipeline to train the model.\nBefore each epoch, shuffle the training set.\nAfter each epoch, measure the loss and accuracy of the validation set.\nSave the model after 
training.\nYou do not need to modify this section.", "with tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n num_examples = len(X_train)\n \n print(\"Training...\")\n print()\n for i in range(EPOCHS):\n X_train, y_train = shuffle(X_train, y_train)\n for offset in range(0, num_examples, BATCH_SIZE):\n end = offset + BATCH_SIZE\n batch_x, batch_y = X_train[offset:end], y_train[offset:end]\n sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})\n \n validation_accuracy = evaluate(X_validation, y_validation)\n print(\"EPOCH {} ...\".format(i+1))\n print(\"Validation Accuracy = {:.3f}\".format(validation_accuracy))\n print()\n \n saver.save(sess, './lenet')\n print(\"Model saved\")", "Evaluate the Model\nOnce you are completely satisfied with your model, evaluate the performance of the model on the test set.\nBe sure to only do this once!\nIf you were to measure the performance of your trained model on the test set, then improve your model, and then measure the performance of your model on the test set again, that would invalidate your test results. You wouldn't get a true measure of how well your model would perform against real data.\nYou do not need to modify this section.", "with tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('.'))\n\n test_accuracy = evaluate(X_test, y_test)\n print(\"Test Accuracy = {:.3f}\".format(test_accuracy))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
opengeostat/pygslib
pygslib/Ipython_templates/.ipynb_checkpoints/declus_raw-checkpoint.ipynb
mit
[ "PyGSLIB\nDeclustering\nThis is how declustering works", "#general imports\nimport matplotlib.pyplot as plt \nimport pygslib \nimport numpy as np\n\n#make the plots inline\n%matplotlib inline ", "Getting the data ready for work\nIf the data is in GSLIB format you can use the function pygslib.gslib.read_gslib_file(filename) to import the data into a Pandas DataFrame.", "#get the data in gslib format into a pandas Dataframe\nmydata= pygslib.gslib.read_gslib_file('../data/cluster.dat') \n\n# This is a 2D file, in this GSLIB version we require 3D data and drillhole name or domain code\n# so, we are adding constant elevation = 0 and a dummy BHID = 1 \nmydata['Zlocation']=0\nmydata['bhid']=1\n\n# printing to verify results\nprint (' \\n **** 5 first rows in my datafile \\n\\n ', mydata.head(n=5))\n\n#view data in a 2D projection\nplt.scatter(mydata['Xlocation'],mydata['Ylocation'], c=mydata['Primary'])\nplt.colorbar()\nplt.grid(True)\nplt.show()", "Testing declust\nThis parameters ar selected bu default... as in the file declus.par included in the gslib 77 distribution\n<pre><code>\n Parameters for DECLUS\n *********************\n\nSTART OF PARAMETERS:\n../data/cluster.dat \\file with data\n1 2 0 3 \\ columns for X, Y, Z, and variable\n-1.0e21 1.0e21 \\ trimming limits\ndeclus.sum \\file for summary output\ndeclus.out \\file for output with data & weights\n1.0 1.0 \\Y and Z cell anisotropy (Ysize=size*Yanis)\n0 \\0=look for minimum declustered mean (1=max)\n24 1.0 25.0 \\number of cell sizes, min size, max size\n5 \\number of origin offsets\n\n</code></pre>\n\n\nNote: The trimming limits are not implemented in the Fortran module. you may filter the data before using this function", "#Check the data is ok\na=mydata['Primary'].isnull()\nprint (\"Undefined values:\", len(a[a==True]))\nprint (\"Minimum value :\", mydata['Primary'].min())\nprint (\"Maximum value :\", mydata['Primary'].max())\n\nparameters_declus = { \n 'x' : mydata['Xlocation'], # data x coordinates, array('f') with bounds (na), na is number of data points\n 'y' : mydata['Ylocation'], # data y coordinates, array('f') with bounds (na)\n 'z' : mydata['Zlocation'], # data z coordinates, array('f') with bounds (na)\n 'vr' : mydata['Primary'], # variable, array('f') with bounds (na)\n 'anisy' : 1., # Y cell anisotropy (Ysize=size*Yanis), 'f' \n 'anisz' : 1., # Z cell anisotropy (Zsize=size*Zanis), 'f' \n 'minmax' : 0, # 0=look for minimum declustered mean (1=max), 'i' \n 'ncell' : 24, # number of cell sizes, 'i' \n 'cmin' : 1., # minimum cell sizes, 'i' \n 'cmax' : 25., # maximum cell sizes, 'i'. Will be update to cmin if ncell == 1\n 'noff' : 5, # number of origin offsets, 'i'. This is to avoid local minima/maxima\n 'maxcel' : 100000} # maximum number of cells, 'i'. 
This is to avoid large calculations, if MAXCEL<1 this check will be ignored\n\n\nwtopt,vrop,wtmin,wtmax,error,xinc,yinc,zinc,rxcs,rycs,rzcs,rvrcr = pygslib.gslib.declus(parameters_declus)\n\n\n# to see what the output means, print the help\nhelp(pygslib.gslib.declus)", "Plotting results", "%matplotlib inline\nimport matplotlib.pyplot as plt\n\nplt.figure(num=None, figsize=(10, 10), dpi=200, facecolor='w', edgecolor='k')\n\n#view data in a 2D projection\nplt.scatter(mydata['Xlocation'],mydata['Ylocation'], c=wtopt, s=wtopt*320, alpha=0.5)\nplt.plot(mydata['Xlocation'],mydata['Ylocation'], '.', color='k')\nl=plt.colorbar()\nl.set_label('Declustering weight')\nplt.grid(True)\nplt.show()\n\n\n\n#The declustered mean versus cell size\nplt.plot (rxcs, rvrcr)\n\nrvrcr\n\nprint ('=========================================')\nprint ('declustered mean :', vrop )\nprint ('weight minimum :', wtmin )\nprint ('weight maximum :', wtmax )\nprint ('runtime error :', error )\nprint ('cell size increments :', xinc,yinc,zinc )\nprint ('sum of weight :', np.sum(wtopt) )\nprint ('n data :', len(wtopt) )\nprint ('=========================================')\n", "Running only one cell size\nFrom the plot above, the optimal cell size is 5x5x5", "parameters_declus = { \n 'x' : mydata['Xlocation'], # data x coordinates, array('f') with bounds (na), na is number of data points\n 'y' : mydata['Ylocation'], # data y coordinates, array('f') with bounds (na)\n 'z' : mydata['Zlocation'], # data z coordinates, array('f') with bounds (na)\n 'vr' : mydata['Primary'], # variable, array('f') with bounds (na)\n 'anisy' : 1., # Y cell anisotropy (Ysize=size*Yanis), 'f' \n 'anisz' : 1., # Z cell anisotropy (Zsize=size*Zanis), 'f' \n 'minmax' : 0, # 0=look for minimum declustered mean (1=max), 'i' \n 'ncell' : 1, # number of cell sizes, 'i' \n 'cmin' : 5., # minimum cell sizes, 'i' \n 'cmax' : 5., # maximum cell sizes, 'i'. Will be updated to cmin if ncell == 1\n 'noff' : 5, # number of origin offsets, 'i'. This is to avoid local minima/maxima\n 'maxcel' : 100000} # maximum number of cells, 'i'. This is to avoid large calculations, if MAXCEL<1 this check will be ignored\n\n\nwtopt,vrop,wtmin,wtmax,error,xinc,yinc,zinc,rxcs,rycs,rzcs,rvrcr = pygslib.gslib.declus(parameters_declus)\n\n\nprint ('=========================================')\nprint ('declustered mean :', vrop )\nprint ('weight minimum :', wtmin )\nprint ('weight maximum :', wtmax )\nprint ('runtime error :', error )\nprint ('cell size increments :', xinc,yinc,zinc )\nprint ('sum of weight :', np.sum(wtopt) ) \nprint ('n data :', len(wtopt) )\nprint ('=========================================')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.24/_downloads/93b9388c9b54989a6ee795fd5dedd153/otp.ipynb
bsd-3-clause
[ "%matplotlib inline", "Plot sensor denoising using oversampled temporal projection\nThis demonstrates denoising using the OTP algorithm :footcite:LarsonTaulu2018\non data with with sensor artifacts (flux jumps) and random noise.", "# Author: Eric Larson <larson.eric.d@gmail.com>\n#\n# License: BSD-3-Clause\n\nimport os.path as op\nimport mne\nimport numpy as np\n\nfrom mne import find_events, fit_dipole\nfrom mne.datasets.brainstorm import bst_phantom_elekta\nfrom mne.io import read_raw_fif\n\nprint(__doc__)", "Plot the phantom data, lowpassed to get rid of high-frequency artifacts.\nWe also crop to a single 10-second segment for speed.\nNotice that there are two large flux jumps on channel 1522 that could\nspread to other channels when performing subsequent spatial operations\n(e.g., Maxwell filtering, SSP, or ICA).", "dipole_number = 1\ndata_path = bst_phantom_elekta.data_path()\nraw = read_raw_fif(\n op.join(data_path, 'kojak_all_200nAm_pp_no_chpi_no_ms_raw.fif'))\nraw.crop(40., 50.).load_data()\norder = list(range(160, 170))\nraw.copy().filter(0., 40.).plot(order=order, n_channels=10)", "Now we can clean the data with OTP, lowpass, and plot. The flux jumps have\nbeen suppressed alongside the random sensor noise.", "raw_clean = mne.preprocessing.oversampled_temporal_projection(raw)\nraw_clean.filter(0., 40.)\nraw_clean.plot(order=order, n_channels=10)", "We can also look at the effect on single-trial phantom localization.\nSee the tut-brainstorm-elekta-phantom\nfor more information. Here we use a version that does single-trial\nlocalization across the 17 trials are in our 10-second window:", "def compute_bias(raw):\n events = find_events(raw, 'STI201', verbose=False)\n events = events[1:] # first one has an artifact\n tmin, tmax = -0.2, 0.1\n epochs = mne.Epochs(raw, events, dipole_number, tmin, tmax,\n baseline=(None, -0.01), preload=True, verbose=False)\n sphere = mne.make_sphere_model(r0=(0., 0., 0.), head_radius=None,\n verbose=False)\n cov = mne.compute_covariance(epochs, tmax=0, method='oas',\n rank=None, verbose=False)\n idx = epochs.time_as_index(0.036)[0]\n data = epochs.get_data()[:, :, idx].T\n evoked = mne.EvokedArray(data, epochs.info, tmin=0.)\n dip = fit_dipole(evoked, cov, sphere, n_jobs=1, verbose=False)[0]\n actual_pos = mne.dipole.get_phantom_dipoles()[0][dipole_number - 1]\n misses = 1000 * np.linalg.norm(dip.pos - actual_pos, axis=-1)\n return misses\n\n\nbias = compute_bias(raw)\nprint('Raw bias: %0.1fmm (worst: %0.1fmm)'\n % (np.mean(bias), np.max(bias)))\nbias_clean = compute_bias(raw_clean)\nprint('OTP bias: %0.1fmm (worst: %0.1fmm)'\n % (np.mean(bias_clean), np.max(bias_clean),))", "References\n.. footbibliography::" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
darkomen/TFG
ipython_notebooks/06_regulador_experto/ensayo2.ipynb
cc0-1.0
[ "Análisis de los datos obtenidos\nUso de ipython para el análsis y muestra de los datos obtenidos durante la producción.Se implementa un regulador experto. Los datos analizados son del día 12 de Agosto del 2015\nLos datos del experimento:\n* Hora de inicio: 11:05\n* Hora final : 11:35\n* Filamento extruido: 435cm\n* $T: 150ºC$\n* $V_{min} tractora: 1.5 mm/s$\n* $V_{max} tractora: 3.4 mm/s$\n* Los incrementos de velocidades en las reglas del sistema experto son distintas:\n * En el caso 5 se pasa de un incremento de velocidad de +1 a un incremento de +2.", "#Importamos las librerías utilizadas\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\n\n#Mostramos las versiones usadas de cada librerías\nprint (\"Numpy v{}\".format(np.__version__))\nprint (\"Pandas v{}\".format(pd.__version__))\nprint (\"Seaborn v{}\".format(sns.__version__))\n\n#Abrimos el fichero csv con los datos de la muestra\ndatos = pd.read_csv('ensayo2.CSV')\n\n%pylab inline\n\n#Almacenamos en una lista las columnas del fichero con las que vamos a trabajar\ncolumns = ['Diametro X','Diametro Y', 'RPM TRAC']\n\n#Mostramos un resumen de los datos obtenidoss\ndatos[columns].describe()\n#datos.describe().loc['mean',['Diametro X [mm]', 'Diametro Y [mm]']]", "Representamos ambos diámetro y la velocidad de la tractora en la misma gráfica", "graf=datos.ix[:, \"Diametro X\"].plot(figsize=(16,10),ylim=(0.5,3))\ngraf.axhspan(1.65,1.85, alpha=0.2)\ngraf.set_xlabel('Tiempo (s)')\ngraf.set_ylabel('Diámetro (mm)')\n#datos['RPM TRAC'].plot(secondary_y='RPM TRAC')\n\nbox=datos.ix[:, \"Diametro X\":\"Diametro Y\"].boxplot(return_type='axes')\nbox.axhspan(1.65,1.85, alpha=0.2)", "Con esta segunda aproximación se ha conseguido estabilizar los datos. Se va a tratar de bajar ese porcentaje. Como segunda aproximación, vamos a modificar los incrementos en los que el diámetro se encuentra entre $1.80mm$ y $1.70 mm$, en ambos sentidos. (casos 3 a 6)\nComparativa de Diametro X frente a Diametro Y para ver el ratio del filamento", "plt.scatter(x=datos['Diametro X'], y=datos['Diametro Y'], marker='.')", "Filtrado de datos\nLas muestras tomadas $d_x >= 0.9$ or $d_y >= 0.9$ las asumimos como error del sensor, por ello las filtramos de las muestras tomadas.", "datos_filtrados = datos[(datos['Diametro X'] >= 0.9) & (datos['Diametro Y'] >= 0.9)]\n\n#datos_filtrados.ix[:, \"Diametro X\":\"Diametro Y\"].boxplot(return_type='axes')", "Representación de X/Y", "plt.scatter(x=datos_filtrados['Diametro X'], y=datos_filtrados['Diametro Y'], marker='.')", "Analizamos datos del ratio", "ratio = datos_filtrados['Diametro X']/datos_filtrados['Diametro Y']\nratio.describe()\n\nrolling_mean = pd.rolling_mean(ratio, 50)\nrolling_std = pd.rolling_std(ratio, 50)\nrolling_mean.plot(figsize=(12,6))\n# plt.fill_between(ratio, y1=rolling_mean+rolling_std, y2=rolling_mean-rolling_std, alpha=0.5)\nratio.plot(figsize=(12,6), alpha=0.6, ylim=(0.5,1.5))", "Límites de calidad\nCalculamos el número de veces que traspasamos unos límites de calidad. \n$Th^+ = 1.85$ and $Th^- = 1.65$", "Th_u = 1.85\nTh_d = 1.65\n\ndata_violations = datos[(datos['Diametro X'] > Th_u) | (datos['Diametro X'] < Th_d) |\n (datos['Diametro Y'] > Th_u) | (datos['Diametro Y'] < Th_d)]\n\ndata_violations.describe()\n\ndata_violations.plot(subplots=True, figsize=(12,12))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
google/starthinker
colabs/smartsheet_to_bigquery.ipynb
apache-2.0
[ "SmartSheet Sheet To BigQuery\nMove sheet data into a BigQuery table.\nLicense\nCopyright 2020 Google LLC,\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttps://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\nDisclaimer\nThis is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.\nThis code generated (see starthinker/scripts for possible source):\n - Command: \"python starthinker_ui/manage.py colab\"\n - Command: \"python starthinker/tools/colab.py [JSON RECIPE]\"\n1. Install Dependencies\nFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.", "!pip install git+https://github.com/google/starthinker\n", "2. Set Configuration\nThis code is required to initialize the project. Fill in required fields and press play.\n\nIf the recipe uses a Google Cloud Project:\n\nSet the configuration project value to the project identifier from these instructions.\n\n\nIf the recipe has auth set to user:\n\nIf you have user credentials:\nSet the configuration user value to your user credentials JSON.\n\n\n\nIf you DO NOT have user credentials:\n\nSet the configuration client value to downloaded client credentials.\n\n\n\nIf the recipe has auth set to service:\n\nSet the configuration service value to downloaded service credentials.", "from starthinker.util.configuration import Configuration\n\n\nCONFIG = Configuration(\n project=\"\",\n client={},\n service={},\n user=\"/content/user.json\",\n verbose=True\n)\n\n", "3. Enter SmartSheet Sheet To BigQuery Recipe Parameters\n\nSpecify SmartSheet token.\nLocate the ID of a sheet by viewing its properties.\nProvide a BigQuery dataset ( must exist ) and table to write the data into.\nStarThinker will automatically map the correct schema.\nModify the values below for your use case, can be done multiple times, then click play.", "FIELDS = {\n 'auth_read':'user', # Credentials used for reading data.\n 'auth_write':'service', # Credentials used for writing data.\n 'token':'', # Retrieve from SmartSheet account settings.\n 'sheet':'', # Retrieve from sheet properties.\n 'dataset':'', # Existing BigQuery dataset.\n 'table':'', # Table to create from this report.\n 'schema':'', # Schema provided in JSON list format or leave empty to auto detect.\n 'link':True, # Add a link to each row as the first column.\n}\n\nprint(\"Parameters Set To: %s\" % FIELDS)\n", "4. 
Execute SmartSheet Sheet To BigQuery\nThis does NOT need to be modified unless you are changing the recipe, click play.", "from starthinker.util.configuration import execute\nfrom starthinker.util.recipe import json_set_fields\n\nTASKS = [\n {\n 'smartsheet':{\n 'auth':{'field':{'name':'auth_read','kind':'authentication','order':0,'default':'user','description':'Credentials used for reading data.'}},\n 'token':{'field':{'name':'token','kind':'string','order':2,'default':'','description':'Retrieve from SmartSheet account settings.'}},\n 'sheet':{'field':{'name':'sheet','kind':'string','order':3,'description':'Retrieve from sheet properties.'}},\n 'link':{'field':{'name':'link','kind':'boolean','order':7,'default':True,'description':'Add a link to each row as the first column.'}},\n 'out':{\n 'bigquery':{\n 'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}},\n 'dataset':{'field':{'name':'dataset','kind':'string','order':4,'default':'','description':'Existing BigQuery dataset.'}},\n 'table':{'field':{'name':'table','kind':'string','order':5,'default':'','description':'Table to create from this report.'}},\n 'schema':{'field':{'name':'schema','kind':'json','order':6,'description':'Schema provided in JSON list format or leave empty to auto detect.'}}\n }\n }\n }\n }\n]\n\njson_set_fields(TASKS, FIELDS)\n\nexecute(CONFIG, TASKS, force=True)\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jserenson/Python_Bootcamp
StringIO.ipynb
gpl-3.0
[ "StringIO\nThe StringIO module implements an in-memory file like object. This object can then be used as input or output to most functions that would expect a standard file object.\nThe best way to show this is by example:", "import StringIO\n\n# Arbitrary String\nmessage = 'This is just a normal string.'\n\n# Use StringIO method to set as file object\nf = StringIO.StringIO(message)", "Now we have an object f that we will be able to treat just like a file. For example:", "f.read()", "We can also write to it:", "f.write(' Second line written to file like object')\n\n# Reset cursor just like you would a file\nf.seek(0)\n\n# Read again\nf.read()", "Great! Now you've seen how we can use StringIO to turn normal strings into in-memory file objects in our code. This kind of action has various use cases, especially in web scraping cases where you want to read some string you scraped as a file.\nFor more info on StringIO check out the documentation:https://docs.python.org/2/library/stringio.html" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
eds-uga/csci1360e-su17
assignments/A5/A5_Q1.ipynb
mit
[ "Q1\nThis question will focus entirely on NumPy arrays: vectorized programming, slicing, and broadcasting.\nPart A\nIn this question, you'll implement the vector dot product.\nWrite a function which:\n\nis named dot\ntakes two NumPy arrays as arguments\nreturns one number: the floating-point dot product of the two input vectors\n\nRecall how a dot product works: corresponding elements of two arrays are multiplied together, then all these products are summed.\nFor example: if I have two NumPy arrays [1, 2, 3] and [4, 5, 6], their dot product would be (1*4) + (2*5) + (3*6), or 4 + 10 + 18, or 32.\nYou can use NumPy arrays, and the np.sum() function, but no other NumPy functions.", "import numpy as np\nnp.random.seed(57442)\n\nx1 = np.random.random(10)\nx2 = np.random.random(10)\nnp.testing.assert_allclose(x1.dot(x2), dot(x1, x2))\n\nnp.random.seed(495835)\n\nx1 = np.random.random(100)\nx2 = np.random.random(100)\nnp.testing.assert_allclose(x1.dot(x2), dot(x1, x2))", "Part B\nWrite a function which:\n\nis named subarray\ntakes two arguments, both NumPy arrays: one containing data, one containing indices\nreturns one NumPy array\n\nThe function should return a NumPy array that corresponds to the elements of the input array of data selected by the indices array.\nFor example, subarray([1, 2, 3], [2]) should return a NumPy array of [3].\nYou cannot use any built-in functions, NumPy functions, or loops!", "import numpy as np\nnp.random.seed(5381)\n\nx1 = np.random.random(43)\ni1 = np.random.randint(0, 43, 10)\na1 = np.array([ 0.24317871, 0.16900041, 0.20687451, 0.38726974, 0.49798077,\n 0.32797843, 0.18801287, 0.29021025, 0.65418547, 0.78651195])\nnp.testing.assert_allclose(a1, subarray(x1, i1), rtol = 1e-5)\n\nx2 = np.random.random(74)\ni2 = np.random.randint(0, 74, 5)\na2 = np.array([ 0.96372034, 0.84256813, 0.08188566, 0.71852542, 0.92384611])\nnp.testing.assert_allclose(a2, subarray(x2, i2), rtol = 1e-5)", "Part C\nWrite a function which:\n\nis named less_than\ntakes two arguments: a NumPy array, and a floating-point number\nreturns a NumPy array\n\nYou should use a boolean mask to return only the values in the NumPy array that are less than the specified floating-point value (the second parameter). No loops are allowed, or any built-in functions or loops.\nFor example, less_than([1, 2, 3], 2.5) should return a NumPy array of [1, 2].", "import numpy as np\nnp.random.seed(85928)\n\nx = np.random.random((10, 20, 30))\nt = 0.001\ny = np.array([ 0.0005339 , 0.00085714, 0.00091265, 0.00037283])\nnp.testing.assert_allclose(y, less_than(x, t))\n\nnp.random.seed(8643)\n\nx2 = np.random.random((100, 100, 10))\nt2 = 0.0001\ny2 = np.array([ 2.91560413e-06, 6.80065620e-06, 3.63294064e-05,\n 7.50659065e-05, 1.61602031e-06, 9.37205052e-05])\nnp.testing.assert_allclose(y2, less_than(x2, t2), rtol = 1e-05)", "Part D\nWrite a function which:\n\nis named greater_than\ntakes two arguments: a NumPy array, and a threshold number (float)\nreturns a NumPy array\n\nYou should use a boolean mask to return only the values in the NumPy array that are greater than the specified threshold value (the second parameter). 
No loops are allowed, or built-in functions, or NumPy functions.\nFor example, greater_than([1, 2, 3], 2.5) should return a NumPy array of [3].", "import numpy as np\nnp.random.seed(592582)\n\nx = np.random.random((10, 20, 30))\nt = 0.999\ny = np.array([ 0.99910167, 0.99982779, 0.99982253, 0.9991043 ])\nnp.testing.assert_allclose(y, greater_than(x, t))\n\nnp.random.seed(689388)\n\nx2 = np.random.random((100, 100, 10))\nt2 = 0.9999\ny2 = np.array([ 0.99997265, 0.99991169, 0.99998906, 0.99999012, 0.99992325,\n 0.99993289, 0.99996637, 0.99996416, 0.99992627, 0.99994388,\n 0.99993102, 0.99997486, 0.99992968, 0.99997598])\nnp.testing.assert_allclose(y2, greater_than(x2, t2), rtol = 1e-05)", "Part E\nWrite a function which:\n\nis named in_between\ntakes three parameters: a NumPy array, a lower threshold (float), and an upper threshold (float)\nreturns a NumPy array\n\nYou should use a boolean mask to return only the values in the NumPy array that are in between the two specified threshold values, lower and upper. No loops are allowed, or built-in functions, or NumPy functions.\nFor example, in_between([1, 2, 3], 1, 3) should return a NumPy array of [2].\nHint: you can use your functions from Parts C and D to help!", "import numpy as np\nnp.random.seed(7472)\n\nx = np.random.random((10, 20, 30))\nlo = 0.499\nhi = 0.501\ny = np.array([ 0.50019884, 0.50039172, 0.500711 , 0.49983418, 0.49942259,\n 0.4994417 , 0.49979261, 0.50029046, 0.5008376 , 0.49985266,\n 0.50015914, 0.50068227, 0.50060399, 0.49968918, 0.50091042,\n 0.50063015, 0.50050032])\nnp.testing.assert_allclose(y, in_between(x, lo, hi))\n\nimport numpy as np\nnp.random.seed(14985)\n\nx = np.random.random((30, 40, 50))\nlo = 0.49999\nhi = 0.50001\ny = np.array([ 0.50000714, 0.49999045])\nnp.testing.assert_allclose(y, in_between(x, lo, hi))", "Part F\nWrite a function which:\n\nis named not_in_between\ntakes three parameters: a NumPy array, a lower threshold (float), and an upper threshold (float)\nreturns a NumPy array\n\nYou should use a boolean mask to return only the values in the NumPy array that are NOT in between the two specified threshold values, lower and upper. No loops are allowed, or built-in functions, or NumPy functions.\nFor example, not_in_between([1, 2, 3, 4], 1, 3) should return a NumPy array of [4].\nHint: you can use your functions from Parts C and D to help!", "import numpy as np\nnp.random.seed(475185)\n\nx = np.random.random((10, 20, 30))\nlo = 0.001\nhi = 0.999\ny = np.array([ 9.52511605e-04, 8.62993716e-04, 3.70243252e-04,\n 9.99945849e-01, 7.21751759e-04, 9.36931041e-04,\n 5.10792605e-04, 6.44911672e-04])\nnp.testing.assert_allclose(y, not_in_between(x, lo, hi))\n\nnp.random.seed(51954)\n\nx = np.random.random((30, 40, 50))\nlo = 0.00001\nhi = 0.99999\ny = np.array([ 8.46159001e-06, 9.99998669e-01, 9.99993873e-01,\n 5.58488698e-06, 9.99993348e-01])\nnp.testing.assert_allclose(y, not_in_between(x, lo, hi))", "Part G\nWrite a function which:\n\nis named reverse_array\ntakes 1 parameter: a 1D NumPy array of data\nreturns the 1D NumPy array, reversed\n\nThis function uses fancy indexing to reverse the ordering of the elements in the input array, and returns the reversed array. You cannot use the [::-1] notation, nor the built-in reversed method, or any other Python function or loops. 
You can use the list(), range(), and np.arange() functions, however, and only some or all of those (but again, no loops!).\nHint: Construct a list of indices and use NumPy fancy indexing to reverse the ordering of the elements in the input list, then return the reversed array.", "import numpy as np\nnp.random.seed(5748)\n\nx1 = np.random.random(75)\ny1 = x1[::-1] # Sorry, you're not allowed to do this!\nnp.testing.assert_allclose(y1, reverse_array(x1))\n\nnp.random.seed(68382)\n\nx2 = np.random.random(581)\ny2 = x2[::-1] # Sorry, you're not allowed to do this!\nnp.testing.assert_allclose(y2, reverse_array(x2))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
andrewosh/notebooks
worker/notebooks/thunder/tutorials/input_formats.ipynb
mit
[ "Input formats\nData in Thunder can be loaded from a variety of formats and locations. We support several input formats for both image and series data, but also encourage a set of standard, recommended formats for these data, especially when working with large datasets.\nSetup plotting", "%matplotlib inline\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set_context('notebook')\nfrom thunder import Colorize\nimage = Colorize.image", "Loading Images\nImages data are distributed collections of images or volumes. Two-dimensional images can be loaded from png or tif files, and three-dimensional volumes can be loaded from multi-page tif stacks, or flat binary volumes. For an example, we'll load a set of tif images by specifying a folder (note: these are small, highly downsampled images included with Thunder purely for demonstration and testing):\nFor these examples, first we find the location relative to the thunder installation", "import os.path as pth\nimagepath = pth.join(pth.dirname(pth.realpath(thunder.__file__)), 'utils/data/fish/images')", "Now load the images", "data = tsc.loadImages(imagepath, inputFormat='tif')", "To look at the first image:", "img = data.values().first()\nimage(img[:,:,0])", "To check the dimensions of the images:", "data.dims.count", "And to check the total number of images:", "data.nrecords", "For any of these image formats, the images can be stored on a local file system (for local use), a networked file system (accessible to all nodes of a cluster), Amazon S3, or Google Storage. To load images from S3, the location of the data must be specified as a URI, with \"s3://\" or \"s3n://\" given as the scheme.\n For instance, data stored in an S3 bucket named \"my-bucket\" under keys named \"my-data/images0.tif, my-data/images1.tif, ...\" could be retrieved by passing 's3n://my-bucket/my-data/images*.tif' to loadImages.\nIt's also easy to load only a subset of images (indexed assuming alphanumeric ordering):", "data = tsc.loadImages(imagepath, inputFormat='tif', startIdx=0, stopIdx=10)\n\ndata.nrecords", "Flat binary files provide a particularly simple format for image data. We recommended storing one image per binary file, alongside a single file conf.json specifying the dimensions and the numerical type:\n{\n \"dims\": [64, 64], \n \"dtype\": \"int16\"\n}\nIf this file is included in the folder with the binary files it will be used automatically, but the parameters can also be passed as arguments to loadImages\nYou can load an Images object directly from a numpy arrays in memory. To see that, first we'll collect the images from the tif files into a local array.", "data = tsc.loadImages(imagepath, inputFormat='tif', startIdx=0, stopIdx=10)\narrys = data.collectValuesAsArray()\narrys.shape", "We can now load this array directly as an Images object", "datanew = tsc.loadImagesFromArray(arrys)\n\ndatanew.nrecords\n\ndatanew.dims.count\n\nimg = datanew.values().first()\nimage(img[:,:,0])", "When loading n-dimensional array data, the first dimension is assumed to index the images, so a 4D array will be treated as a collection of 3D volumes, and a 3D array will be treated as a collection of 2D images.", "datanew = tsc.loadImagesFromArray(arrys[:,:,:,0])\n\ndatanew.nrecords\n\ndatanew.dims.count", "Loading Series\nA Series object is a distributed collection of one-dimensional arrays with tuple key identifiers. All arrays in a Series must have the same length. 
They can be loaded from flat text or binary files.\nText files must contain a line for each record, with numbers separated by spaces. The first numbers of each line will be interpreted as keys, and subsequent numbers will be interpreted as values. The number of keys is user-specified. As before, we'll load example series data from a small file included with Thunder purely for testing purposes (this is the iris dataset).", "seriespath = pth.join(pth.dirname(pth.realpath(thunder.__file__)), 'utils/data/iris/')\ndata = tsc.loadSeries(seriespath + 'iris.txt', inputFormat='text', nkeys=1)", "If data were split across multiple files, we could have also provided the folder name, rather than the file, and all file(s) of the given format would be loaded. Let's look at the first entry of the Series:", "data.first()", "The index is automatically calculated based on the length of the values array:", "data.index", "For comparison, you can look at the first raw line of the text file:", "dataraw = sc.textFile(seriespath + 'iris.txt')\ndataraw.first()", "Flat binary files must store each record as a contiguous sequence of bytes, with a fixed size in bytes for each record, including both keys and values. The number and numerical type of keys and values are most conveniently specified in a configuration file in the same directory as the data, but can also be specified as input arguments. Here, we show two ways of loading a binary version of the same iris data loaded previously.", "data = tsc.loadSeries(seriespath + 'iris.bin', inputFormat='binary')\ndata.first()", "In this case, the number of keys and number of values in each record, along with the data types of the keys and values, are automatically read out from a conf.json file located in the same directory as iris.bin. This file has the following simple JSON format.\n{\n \"keytype\": \"int16\", \n \"valuetype\": \"uint8\", \n \"nkeys\": 3, \n \"nvalues\": 240\n}\nWhen a conf.json file is unavailable, these parameters can also be passed as arguments to the loadSeries method:", "path = seriespath + 'iris.bin'\ndata = tsc.loadSeries(path, inputFormat='binary', nkeys=1, nvalues=4, keyType='float', valueType='float')\ndata.first()", "Flat binary files are a particularly convenient format when exporting large data sets from other scientific computing environments, e.g. Matlab. For example, the data loaded above was written out from within Matlab using\nf = fopen('iris.bin','w')\nfwrite(f, [[0:149]' data]','double')\nWhere data is a matrix containing the data, and we append a column for the indices. Note that the data must be transposed due to ordering conventions.\nBoth text and binary data can be loaded from a single file or multiple files, stored on a local file system, a networked file system, Amazon S3, or HDFS. To load multiple files at once, specify a directory as the filename, or a wildcard pattern.\nYou can load Series data from local arrays saved in either numpy npy or Matlab MAT format. This is particularly useful for local use, or for distributing a smaller data set for performing intensive computations. 
In the latter case, the number of partitions should be set to approximately 2-3 times the number of cores available on your cluster, so that different cores can work on different portions of the data.", "data = tsc.loadSeries(seriespath + '/iris.mat', inputFormat='mat', varName='data', minPartitions=5)\ndata.first()\n\ndata = tsc.loadSeries(seriespath + '/iris.npy', inputFormat='npy', minPartitions=5)\ndata.first()", "Finally, like Images, a Series object can be constructed directly from a numpy array in memory. To see this, we'll collect the data we just loaded as an array, and then use it to create a new Series object.", "arry = data.collectValuesAsArray()\n\narry.shape\n\ndatanew = tsc.loadSeriesFromArray(arry)\n\ndatanew.first()\n\ndatanew.nrecords" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
astarostin/MachineLearningSpecializationCoursera
course4/week2 - Параметрические критерии - quiz.ipynb
apache-2.0
[ "3.\nВ одном из выпусков программы \"Разрушители легенд\" проверялось, действительно ли заразительна зевота. В эксперименте участвовало 50 испытуемых, проходивших собеседование на программу. Каждый из них разговаривал с рекрутером; в конце 34 из 50 бесед рекрутер зевал. Затем испытуемых просили подождать решения рекрутера в соседней пустой комнате.\nВо время ожидания 10 из 34 испытуемых экспериментальной группы и 4 из 16 испытуемых контрольной начали зевать. Таким образом, разница в доле зевающих людей в этих двух группах составила примерно 4.4%. Ведущие заключили, что миф о заразительности зевоты подтверждён.\nМожно ли утверждать, что доли зевающих в контрольной и экспериментальной группах отличаются статистически значимо? Посчитайте достигаемый уровень значимости при альтернативе заразительности зевоты, округлите до четырёх знаков после десятичной точки.\n4.\nИмеются данные измерений двухсот швейцарских тысячефранковых банкнот, бывших в обращении в первой половине XX века. Сто из банкнот были настоящими, и сто — поддельными.\nОтделите 50 случайных наблюдений в тестовую выборку с помощью функции sklearn.cross_validation.train_test_split (зафиксируйте random state = 1). На оставшихся 150 настройте два классификатора поддельности банкнот:\n\nлогистическая регрессия по признакам X1,X2,X3;\nлогистическая регрессия по признакам X4,X5,X6.\nКаждым из классификаторов сделайте предсказания меток классов на тестовой выборке. Одинаковы ли доли ошибочных предсказаний двух классификаторов? Проверьте гипотезу, вычислите достигаемый уровень значимости. Введите номер первой значащей цифры (например, если вы получили 5.5×10−8, нужно ввести 8).\n\n5.\nВ предыдущей задаче посчитайте 95% доверительный интервал для разности долей ошибок двух классификаторов. Чему равна его ближайшая к нулю граница? Округлите до четырёх знаков после десятичной точки.\n6.\nЕжегодно более 200000 людей по всему миру сдают стандартизированный экзамен GMAT при поступлении на программы MBA. Средний результат составляет 525 баллов, стандартное отклонение — 100 баллов.\nСто студентов закончили специальные подготовительные курсы и сдали экзамен. Средний полученный ими балл — 541.4. Проверьте гипотезу о неэффективности программы против односторонней альтернативы о том, что программа работает. Отвергается ли на уровне значимости 0.05 нулевая гипотеза? Введите достигаемый уровень значимости, округлённый до 4 знаков после десятичной точки.\n7.\nОцените теперь эффективность подготовительных курсов, средний балл 100 выпускников которых равен 541.5. Отвергается ли на уровне значимости 0.05 та же самая нулевая гипотеза против той же самой альтернативы? 
Введите достигаемый уровень значимости, округлённый до 4 знаков после десятичной точки.", "import numpy as np\nimport pandas as pd\n\nimport scipy\nfrom statsmodels.stats.weightstats import *\nfrom statsmodels.stats.proportion import proportion_confint\n%pylab inline\nfrom sklearn.cross_validation import train_test_split \nfrom sklearn.linear_model import LogisticRegression", "train", "alpha = 0.05\nppf = scipy.stats.norm.ppf(1 - alpha / 2.)\nppf\n\ncdf = scipy.stats.norm.cdf(1 - alpha / 2.)\ncdf\n\npdf = scipy.stats.norm.pdf(1 - alpha / 2.)\npdf\n\nx = np.linspace(scipy.stats.norm.ppf(0.01), scipy.stats.norm.ppf(0.99), 100)\nfig, ax = plt.subplots(1, 1)\nax.plot(x, scipy.stats.norm.pdf(x), 'r-', lw=5, alpha=0.6, label='pdf')\n\nfig, ax = plt.subplots(1, 1)\nax.plot(x, scipy.stats.norm.cdf(x), 'r-', lw=5, alpha=0.6, label='cdf')\n\nfig, ax = plt.subplots(1, 1)\nax.plot(x, scipy.stats.norm.ppf(x), 'r-', lw=5, alpha=0.6, label='ppf')", "Task 3", "def proportions_diff_confint_ind(a, n1, b, n2, alpha = 0.05): \n z = scipy.stats.norm.ppf(1 - alpha / 2.)\n \n p1 = float(a) / n1\n p2 = float(b) / n2\n \n left_boundary = (p1 - p2) - z * np.sqrt(p1 * (1 - p1)/ n1 + p2 * (1 - p2)/ n2)\n right_boundary = (p1 - p2) + z * np.sqrt(p1 * (1 - p1)/ n1 + p2 * (1 - p2)/ n2)\n \n return (left_boundary, right_boundary)\n\ndef proportions_diff_z_stat_ind(a, n1, b, n2): \n p1 = float(a) / n1\n p2 = float(b) / n2 \n P = float(p1*n1 + p2*n2) / (n1 + n2)\n \n return (p1 - p2) / np.sqrt(P * (1 - P) * (1. / n1 + 1. / n2))\n\ndef proportions_diff_z_test(z_stat, alternative = 'two-sided'):\n if alternative not in ('two-sided', 'less', 'greater'):\n raise ValueError(\"alternative not recognized\\n\"\n \"should be 'two-sided', 'less' or 'greater'\")\n \n if alternative == 'two-sided':\n return 2 * (1 - scipy.stats.norm.cdf(np.abs(z_stat)))\n \n if alternative == 'less':\n return scipy.stats.norm.cdf(z_stat)\n\n if alternative == 'greater':\n return 1 - scipy.stats.norm.cdf(z_stat)\n\nprint \"95%% confidence interval for a difference between proportions: [%f, %f]\" %\\\n proportions_diff_confint_ind(10, 34, 4, 16)\n\nprint \"p-value: %f\" % proportions_diff_z_test(proportions_diff_z_stat_ind(10, 34, 4, 16), alternative='greater')", "Task 4", "data = pd.read_csv('banknotes.txt', sep='\\t')\n\ndata.head()\n\ndata.describe()\n\ny = data['real']\nX = data.drop('real', axis=1)\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=50, random_state=1)\nX1_train = X_train[['X1', 'X2', 'X3']]\nX2_train = X_train[['X4', 'X5', 'X6']]\nX1_test = X_test[['X1', 'X2', 'X3']]\nX2_test = X_test[['X4', 'X5', 'X6']]\n\nlr1 = LogisticRegression()\nlr1.fit(X1_train,y_train)\nlr2 = LogisticRegression()\nlr2.fit(X2_train,y_train)\n\npred1 = lr1.predict(X1_test)\npred2 = lr2.predict(X2_test)\n\npred1[:5]", "y_test[:5].values", "a = [1 if v == y_test.values[i] else 0 for (i, v) in enumerate(pred1)]\nb = [1 if v == y_test.values[i] else 0 for (i, v) in enumerate(pred2)]\n\nsum(a), sum(b)\n\ndef proportions_diff_confint_rel(sample1, sample2, alpha = 0.05):\n z = scipy.stats.norm.ppf(1 - alpha / 2.)\n sample = zip(sample1, sample2)\n n = len(sample)\n \n f = sum([1 if (x[0] == 1 and x[1] == 0) else 0 for x in sample])\n g = sum([1 if (x[0] == 0 and x[1] == 1) else 0 for x in sample])\n \n left_boundary = float(f - g) / n - z * np.sqrt(float((f + g)) / n**2 - float((f - g)**2) / n**3)\n right_boundary = float(f - g) / n + z * np.sqrt(float((f + g)) / n**2 - float((f - g)**2) / n**3)\n return (left_boundary, right_boundary)\n\ndef 
proportions_diff_z_stat_rel(sample1, sample2):\n sample = zip(sample1, sample2)\n n = len(sample)\n \n f = sum([1 if (x[0] == 1 and x[1] == 0) else 0 for x in sample])\n g = sum([1 if (x[0] == 0 and x[1] == 1) else 0 for x in sample])\n \n return float(f - g) / np.sqrt(f + g - float((f - g)**2) / n )\n\nprint \"95%% confidence interval for a difference between proportions: [%f, %f]\" \\\n % proportions_diff_confint_rel(a, b)\n\nprint \"p-value: %f\" % proportions_diff_z_test(proportions_diff_z_stat_rel(a, b))", "Task 6", "def get_z_score(sample_mean, pop_mean, sigma, n):\n return float((sample_mean - pop_mean)) / (float(sigma) / np.sqrt(n))\n\ndef get_p_value(z):\n return 1 - scipy.stats.norm.cdf(z)\n\nget_p_value(get_z_score(541.4, 525, 100, 100))\n\nget_p_value(get_z_score(541.5, 525, 100, 100))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
pushpajnc/models
DeepLearning/dog_app.ipynb
mit
[ "Artificial Intelligence Nanodegree\nConvolutional Neural Networks\nProject: Write an Algorithm for a Dog Identification App\n\nIn this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with '(IMPLEMENTATION)' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully! \n\nNote: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to \\n\",\n \"File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.\n\nIn addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.\n\nNote: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.\n\nThe rubric contains optional \"Stand Out Suggestions\" for enhancing the project beyond the minimum requirements. If you decide to pursue the \"Stand Out Suggestions\", you should include the code in this IPython notebook.\n\nWhy We're Here\nIn this notebook, you will make the first steps towards developing an algorithm that could be used as part of a mobile or web app. At the end of this project, your code will accept any user-supplied image as input. If a dog is detected in the image, it will provide an estimate of the dog's breed. If a human is detected, it will provide an estimate of the dog breed that is most resembling. The image below displays potential sample output of your finished project (... but we expect that each student's algorithm will behave differently!). \n\nIn this real-world setting, you will need to piece together a series of models to perform different tasks; for instance, the algorithm that detects humans in an image will be different from the CNN that infers dog breed. There are many points of possible failure, and no perfect algorithm exists. Your imperfect solution will nonetheless create a fun user experience!\nThe Road Ahead\nWe break the notebook into separate steps. 
Feel free to use the links below to navigate the notebook.\n\nStep 0: Import Datasets\nStep 1: Detect Humans\nStep 2: Detect Dogs\nStep 3: Create a CNN to Classify Dog Breeds (from Scratch)\nStep 4: Use a CNN to Classify Dog Breeds (using Transfer Learning)\nStep 5: Create a CNN to Classify Dog Breeds (using Transfer Learning)\nStep 6: Write your Algorithm\nStep 7: Test Your Algorithm\n\n\n<a id='step0'></a>\nStep 0: Import Datasets\nImport Dog Dataset\nIn the code cell below, we import a dataset of dog images. We populate a few variables through the use of the load_files function from the scikit-learn library:\n- train_files, valid_files, test_files - numpy arrays containing file paths to images\n- train_targets, valid_targets, test_targets - numpy arrays containing onehot-encoded classification labels \n- dog_names - list of string-valued dog breed names for translating labels", "from sklearn.datasets import load_files \nfrom keras.utils import np_utils\nimport numpy as np\nfrom glob import glob\n\n# define function to load train, test, and validation datasets\ndef load_dataset(path):\n data = load_files(path)\n dog_files = np.array(data['filenames'])\n dog_targets = np_utils.to_categorical(np.array(data['target']), 133)\n return dog_files, dog_targets\n\n# load train, test, and validation datasets\ntrain_files, train_targets = load_dataset('/data/dog_images/train')\nvalid_files, valid_targets = load_dataset('/data/dog_images/valid')\ntest_files, test_targets = load_dataset('/data/dog_images/test')\n\n# load list of dog names\ndog_names = [item[20:-1] for item in sorted(glob(\"/data/dog_images/train/*/\"))]\n\n# print statistics about the dataset\nprint('There are %d total dog categories.' % len(dog_names))\nprint('There are %s total dog images.\\n' % len(np.hstack([train_files, valid_files, test_files])))\nprint('There are %d training dog images.' % len(train_files))\nprint('There are %d validation dog images.' % len(valid_files))\nprint('There are %d test dog images.'% len(test_files))", "Import Human Dataset\nIn the code cell below, we import a dataset of human images, where the file paths are stored in the numpy array human_files.", "import random\nrandom.seed(8675309)\n\n# load filenames in shuffled human dataset\nhuman_files = np.array(glob(\"/data/lfw/*/*\"))\nrandom.shuffle(human_files)\n\n# print statistics about the dataset\nprint('There are %d total human images.' % len(human_files))", "<a id='step1'></a>\nStep 1: Detect Humans\nWe use OpenCV's implementation of Haar feature-based cascade classifiers to detect human faces in images. OpenCV provides many pre-trained face detectors, stored as XML files on github. 
We have downloaded one of these detectors and stored it in the haarcascades directory.\nIn the next code cell, we demonstrate how to use this detector to find human faces in a sample image.", "import cv2 \nimport matplotlib.pyplot as plt \n%matplotlib inline \n\n# extract pre-trained face detector\nface_cascade = cv2.CascadeClassifier('haarcascades/haarcascade_frontalface_alt.xml')\n\n# load color (BGR) image\nimg = cv2.imread(human_files[3])\n# convert BGR image to grayscale\ngray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)\n\n# find faces in image\nfaces = face_cascade.detectMultiScale(gray)\n\n# print number of faces detected in the image\nprint('Number of faces detected:', len(faces))\n\n# get bounding box for each detected face\nfor (x,y,w,h) in faces:\n # add bounding box to color image\n cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)\n \n# convert BGR image to RGB for plotting\ncv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n\n# display the image, along with bounding box\nplt.imshow(cv_rgb)\nplt.show()", "Before using any of the face detectors, it is standard procedure to convert the images to grayscale. The detectMultiScale function executes the classifier stored in face_cascade and takes the grayscale image as a parameter. \nIn the above code, faces is a numpy array of detected faces, where each row corresponds to a detected face. Each detected face is a 1D array with four entries that specifies the bounding box of the detected face. The first two entries in the array (extracted in the above code as x and y) specify the horizontal and vertical positions of the top left corner of the bounding box. The last two entries in the array (extracted here as w and h) specify the width and height of the box.\nWrite a Human Face Detector\nWe can use this procedure to write a function that returns True if a human face is detected in an image and False otherwise. This function, aptly named face_detector, takes a string-valued file path to an image as input and appears in the code block below.", "# returns \"True\" if face is detected in image stored at img_path\ndef face_detector(img_path):\n img = cv2.imread(img_path)\n gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)\n faces = face_cascade.detectMultiScale(gray)\n return len(faces) > 0", "(IMPLEMENTATION) Assess the Human Face Detector\nQuestion 1: Use the code cell below to test the performance of the face_detector function.\n- What percentage of the first 100 images in human_files have a detected human face?\n- What percentage of the first 100 images in dog_files have a detected human face? \nIdeally, we would like 100% of human images with a detected face and 0% of dog images with a detected face. You will see that our algorithm falls short of this goal, but still gives acceptable performance. We extract the file paths for the first 100 images from each of the datasets and store them in the numpy arrays human_files_short and dog_files_short.\nAnswer: \n* Of the first 100 images in 'human_files', the face detector detects a human face in each of the 100 files, hence the accuracy is 1.0 and 100% of the images have a human face detected in them.\n* This detector has been trained on human faces and sometimes misclassifies dog faces as human faces. It does so for 11 of the first 100 images in 'dog_files'. 
11% of the dog images have a human face detected in them.", "human_files_short = human_files[:100]\ndog_files_short = train_files[:100]\n# Do NOT modify the code above this line.\n\n## TODO: Test the performance of the face_detector algorithm \n## on the images in human_files_short and dog_files_short.\n\nhumans = 0\ndogs = 0\n\nfor img in human_files_short:\n    if face_detector(img):\n        humans = humans + 1\n\nfor img in dog_files_short:\n    if face_detector(img):\n        dogs = dogs + 1\n\naccuracy_on_humans = humans / 100.0\naccuracy_on_dogs = dogs / 100.0\n\nprint(\"Accuracy on humans is \" + str(accuracy_on_humans))\nprint(\"Accuracy on dogs is \" + str(accuracy_on_dogs))\n", "Question 2: This algorithmic choice necessitates that we communicate to the user that we accept human images only when they provide a clear view of a face (otherwise, we risk having unnecessarily frustrated users!). In your opinion, is this a reasonable expectation to pose on the user? If not, can you think of a way to detect humans in images that does not necessitate an image with a clearly presented face?\nAnswer: Asking a user to provide a clear view of the human face is not a great experience for the user. It would be better if we could associate a confidence interval with our predictions and use that to choose an operating point that balances our precision and recall. An even better approach is to add non-human faces to the dataset, especially those with similar facial features such as eyes and human-like ears, and retrain the algorithm so that it correctly identifies humans in an image.\nWe suggest the face detector from OpenCV as a potential way to detect human images in your algorithm, but you are free to explore other approaches, especially approaches that make use of deep learning :). Please use the code cell below to design and test your own face detection algorithm. If you decide to pursue this optional task, report performance on each of the datasets.\n Haar Cascades \nHaar-like features are named after Haar wavelets. The wavelet has a peculiar form, whereby it takes the value 1 on the range [0,1/2] and -1 on the range [-1/2,1]. The use of wavelets allows the images to be processed in the single dimension of intensities rather than using individual channels such as RGB pixel values.\nBecause the Haar wavelet is discontinuous it is not differentiable; this property is advantageous when detecting edges and sudden transitions, which makes it suitable for detecting objects within images, including faces. A Haar-like feature considers adjacent regions and sums up the pixels within each region. It then calculates the difference between the sums of adjacent rectangles, and this difference is used to categorize subsections of an image. For a human face, the region near the eyes is known to be darker than the region near the cheeks, so Haar-like features are commonly used to create weak classifiers for face detection. In the work by Viola-Jones, a rectangular window of target size is moved over the entire image and Haar-like features are computed for each subsection. A large number of such Haar-like features provide a set of classifiers that are organized into a hierarchy known as a Haar cascade. Such a strong classifier is able to detect human faces with high accuracy.\nHaar cascades are definitely better than a single classifier, as they exploit many weak classifiers, adaptively adjust the window sizes, and are more generalizable. 
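As a rough, purely illustrative sketch (hypothetical code, not part of this project), a single two-rectangle Haar-like feature can be evaluated from an integral image, where every rectangle sum costs only four array lookups:\n    import numpy as np\n\n    def rect_sum(ii, x, y, w, h):\n        # inclusion-exclusion on a zero-padded integral image ii of shape (H+1, W+1)\n        return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]\n\n    def haar_two_rect(gray, x, y, w, h):\n        # contrast between the left and right halves of a window (e.g. eye vs cheek regions)\n        ii = np.pad(gray.astype(np.int64), ((1, 0), (1, 0)), mode='constant').cumsum(axis=0).cumsum(axis=1)\n        return rect_sum(ii, x, y, w // 2, h) - rect_sum(ii, x + w // 2, y, w // 2, h)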
", "## (Optional) TODO: Report the performance of another \n## face detection algorithm on the LFW dataset\n### Feel free to use as many code cells as needed.", "<a id='step2'></a>\nStep 2: Detect Dogs\nIn this section, we use a pre-trained ResNet-50 model to detect dogs in images. Our first line of code downloads the ResNet-50 model, along with weights that have been trained on ImageNet, a very large, very popular dataset used for image classification and other vision tasks. ImageNet contains over 10 million URLs, each linking to an image containing an object from one of 1000 categories. Given an image, this pre-trained ResNet-50 model returns a prediction (derived from the available categories in ImageNet) for the object that is contained in the image.", "from keras.applications.resnet50 import ResNet50\n\n# define ResNet50 model\nResNet50_model = ResNet50(weights='imagenet')", "Pre-process the Data\nWhen using TensorFlow as backend, Keras CNNs require a 4D array (which we'll also refer to as a 4D tensor) as input, with shape\n$$\n(\\text{nb_samples}, \\text{rows}, \\text{columns}, \\text{channels}),\n$$\nwhere nb_samples corresponds to the total number of images (or samples), and rows, columns, and channels correspond to the number of rows, columns, and channels for each image, respectively. \nThe path_to_tensor function below takes a string-valued file path to a color image as input and returns a 4D tensor suitable for supplying to a Keras CNN. The function first loads the image and resizes it to a square image that is $224 \\times 224$ pixels. Next, the image is converted to an array, which is then resized to a 4D tensor. In this case, since we are working with color images, each image has three channels. Likewise, since we are processing a single image (or sample), the returned tensor will always have shape\n$$\n(1, 224, 224, 3).\n$$\nThe paths_to_tensor function takes a numpy array of string-valued image paths as input and returns a 4D tensor with shape \n$$\n(\\text{nb_samples}, 224, 224, 3).\n$$\nHere, nb_samples is the number of samples, or number of images, in the supplied array of image paths. It is best to think of nb_samples as the number of 3D tensors (where each 3D tensor corresponds to a different image) in your dataset!", "from keras.preprocessing import image \nfrom tqdm import tqdm\n\ndef path_to_tensor(img_path):\n    # loads RGB image as PIL.Image.Image type\n    img = image.load_img(img_path, target_size=(224, 224))\n    # convert PIL.Image.Image type to 3D tensor with shape (224, 224, 3)\n    x = image.img_to_array(img)\n    # convert 3D tensor to 4D tensor with shape (1, 224, 224, 3) and return 4D tensor\n    return np.expand_dims(x, axis=0)\n\ndef paths_to_tensor(img_paths):\n    list_of_tensors = [path_to_tensor(img_path) for img_path in tqdm(img_paths)]\n    return np.vstack(list_of_tensors)", "Making Predictions with ResNet-50\nGetting the 4D tensor ready for ResNet-50, and for any other pre-trained model in Keras, requires some additional processing. First, the RGB image is converted to BGR by reordering the channels. All pre-trained models have the additional normalization step that the mean pixel (expressed in BGR order as $[103.939, 116.779, 123.68]$ and calculated from all pixels in all images in ImageNet) must be subtracted from every pixel in each image. 
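In effect the preprocessing amounts to the following (a rough numpy sketch for intuition only; in practice use preprocess_input itself), assuming x is a float32 array of shape (1, 224, 224, 3) in RGB order:\n    import numpy as np\n\n    def caffe_style_preprocess(x):\n        x = x[..., ::-1]  # reorder channels: RGB -> BGR\n        return x - np.array([103.939, 116.779, 123.68])  # subtract the BGR mean pixel\n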
Both steps are implemented in the imported function preprocess_input. If you're curious, you can check the code for preprocess_input here.\nNow that we have a way to format our image for supplying to ResNet-50, we are ready to use the model to extract the predictions. This is accomplished with the predict method, which returns an array whose $i$-th entry is the model's predicted probability that the image belongs to the $i$-th ImageNet category. This is implemented in the ResNet50_predict_labels function below.\nBy taking the argmax of the predicted probability vector, we obtain an integer corresponding to the model's predicted object class, which we can identify with an object category through the use of this dictionary.", "from keras.applications.resnet50 import preprocess_input, decode_predictions\n\ndef ResNet50_predict_labels(img_path):\n    # returns prediction vector for image located at img_path\n    img = preprocess_input(path_to_tensor(img_path))\n    return np.argmax(ResNet50_model.predict(img))", "Write a Dog Detector\nWhile looking at the dictionary, you will notice that the categories corresponding to dogs appear in an uninterrupted sequence and correspond to dictionary keys 151-268, inclusive, to include all categories from 'Chihuahua' to 'Mexican hairless'. Thus, in order to check to see if an image is predicted to contain a dog by the pre-trained ResNet-50 model, we need only check if the ResNet50_predict_labels function above returns a value between 151 and 268 (inclusive).\nWe use these ideas to complete the dog_detector function below, which returns True if a dog is detected in an image (and False if not).", "### returns \"True\" if a dog is detected in the image stored at img_path\ndef dog_detector(img_path):\n    prediction = ResNet50_predict_labels(img_path)\n    return ((prediction <= 268) & (prediction >= 151)) ", "(IMPLEMENTATION) Assess the Dog Detector\nQuestion 3: Use the code cell below to test the performance of your dog_detector function.\n- What percentage of the images in human_files_short have a detected dog?\n- What percentage of the images in dog_files_short have a detected dog?\nAnswer: \n* 0% of the human images have a dog detected in them\n* 100% of the dog images have a dog detected in them", "### TODO: Test the performance of the dog_detector function\n### on the images in human_files_short and dog_files_short.\nhumans = 0\ndogs = 0\n\nfor img in human_files_short:\n    if dog_detector(img):\n        humans = humans + 1\n\nfor img in dog_files_short:\n    if dog_detector(img):\n        dogs = dogs + 1\n\naccuracy_on_humans = humans / 100.0\naccuracy_on_dogs = dogs / 100.0\n\nprint(\"Accuracy on humans is \" + str(accuracy_on_humans))\nprint(\"Accuracy on dogs is \" + str(accuracy_on_dogs))", "<a id='step3'></a>\nStep 3: Create a CNN to Classify Dog Breeds (from Scratch)\nNow that we have functions for detecting humans and dogs in images, we need a way to predict breed from images. In this step, you will create a CNN that classifies dog breeds. You must create your CNN from scratch (so, you can't use transfer learning yet!), and you must attain a test accuracy of at least 1%. In Step 5 of this notebook, you will have the opportunity to use transfer learning to create a CNN that attains greatly improved accuracy.\nBe careful with adding too many trainable layers! More parameters means longer training, which means you are more likely to need a GPU to accelerate the training process. 
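As a quick sanity check on model size: a Conv2D layer has (kernel_h * kernel_w * in_channels + 1) * filters trainable parameters, which you can verify against model.summary(); for instance, Conv2D(filters=16, kernel_size=2) over a 3-channel input contributes (2 * 2 * 3 + 1) * 16 = 208 parameters. 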
Thankfully, Keras provides a handy estimate of the time that each epoch is likely to take; you can extrapolate this estimate to figure out how long it will take for your algorithm to train. \nWe mention that the task of assigning breed to dogs from images is considered exceptionally challenging. To see why, consider that even a human would have great difficulty in distinguishing between a Brittany and a Welsh Springer Spaniel. \nBrittany | Welsh Springer Spaniel\n- | - \n<img src=\"images/Brittany_02625.jpg\" width=\"100\"> | <img src=\"images/Welsh_springer_spaniel_08203.jpg\" width=\"200\">\nIt is not difficult to find other dog breed pairs with minimal inter-class variation (for instance, Curly-Coated Retrievers and American Water Spaniels). \nCurly-Coated Retriever | American Water Spaniel\n- | -\n<img src=\"images/Curly-coated_retriever_03896.jpg\" width=\"200\"> | <img src=\"images/American_water_spaniel_00648.jpg\" width=\"200\">\nLikewise, recall that labradors come in yellow, chocolate, and black. Your vision-based algorithm will have to conquer this high intra-class variation to determine how to classify all of these different shades as the same breed. \nYellow Labrador | Chocolate Labrador | Black Labrador\n- | -\n<img src=\"images/Labrador_retriever_06457.jpg\" width=\"150\"> | <img src=\"images/Labrador_retriever_06455.jpg\" width=\"240\"> | <img src=\"images/Labrador_retriever_06449.jpg\" width=\"220\">\nWe also mention that random chance presents an exceptionally low bar: setting aside the fact that the classes are slightly imbalanced, a random guess will provide a correct answer roughly 1 in 133 times, which corresponds to an accuracy of less than 1%. \nRemember that the practice is far ahead of the theory in deep learning. Experiment with many different architectures, and trust your intuition. And, of course, have fun! \nPre-process the Data\nWe rescale the images by dividing every pixel in every image by 255.", "from PIL import ImageFile \nImageFile.LOAD_TRUNCATED_IMAGES = True 
\n\n# pre-process the data for Keras\ntrain_tensors = paths_to_tensor(train_files).astype('float32')/255\nvalid_tensors = paths_to_tensor(valid_files).astype('float32')/255\ntest_tensors = paths_to_tensor(test_files).astype('float32')/255", "(IMPLEMENTATION) Model Architecture\nCreate a CNN to classify dog breed. At the end of your code cell block, summarize the layers of your model by executing the line:\n    model.summary()\n\nWe have imported some Python modules to get you started, but feel free to import as many modules as you need. If you end up getting stuck, here's a hint that specifies a model that trains relatively fast on CPU and attains >1% test accuracy in 5 epochs:\n\nQuestion 4: Outline the steps you took to get to your final CNN architecture and your reasoning at each step. If you chose to use the hinted architecture above, describe why you think that CNN architecture should work well for the image classification task.\nAnswer: \nI used the architecture hinted above: three interleaved convolution and max-pooling layers, whose filters and pooling steps capture features such as edges, curves, and color-contrast information in the image, which are helpful in learning the distinguishing characteristics of dog breeds. 
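One design note: using GlobalAveragePooling2D instead of Flatten before the final Dense layer keeps the classifier head small (64 * 133 + 133 = 8,645 parameters for a 64-channel feature map, versus several million after flattening the 28x28x64 tensor), which is what makes training on CPU feasible.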
", "from keras.layers import Conv2D, MaxPooling2D, GlobalAveragePooling2D\nfrom keras.layers import Dropout, Flatten, Dense, Activation\nfrom keras.models import Sequential\n\nmodel = Sequential()\n\n### TODO: Define your architecture.\n# 1st layer: 16 2x2 filters over the 224x224x3 input images\nmodel.add(Conv2D(filters=16, kernel_size=2, padding='same', activation='relu', \n                 input_shape=(224, 224, 3)))\n\nmodel.add(MaxPooling2D(pool_size=2))\n\n# deeper layers: input_shape is only required on the first layer;\n# Keras infers the shapes of the rest\nmodel.add(Conv2D(filters=32, kernel_size=2, padding='same', activation='relu'))\nmodel.add(MaxPooling2D(pool_size=2))\n\nmodel.add(Conv2D(filters=64, kernel_size=2, padding='same', activation='relu'))\nmodel.add(MaxPooling2D(pool_size=2))\n\nmodel.add(GlobalAveragePooling2D())\nmodel.add(Dense(133, activation='softmax'))\n\nmodel.summary()", "Compile the Model", "model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])", "(IMPLEMENTATION) Train the Model\nTrain your model in the code cell below. Use model checkpointing to save the model that attains the best validation loss.\nYou are welcome to augment the training data, but this is not a requirement.", "from keras.callbacks import ModelCheckpoint \n\n### TODO: specify the number of epochs that you would like to use to train the model.\n\nepochs = 20\n\n### Do NOT modify the code below this line.\n\ncheckpointer = ModelCheckpoint(filepath='saved_models/weights.best.from_scratch.hdf5', \n                               verbose=1, save_best_only=True)\n\nmodel.fit(train_tensors, train_targets, \n          validation_data=(valid_tensors, valid_targets),\n          epochs=epochs, batch_size=20, callbacks=[checkpointer], verbose=1)", "Load the Model with the Best Validation Loss", "model.load_weights('saved_models/weights.best.from_scratch.hdf5')", "Test the Model\nTry out your model on the test dataset of dog images. Ensure that your test accuracy is greater than 1%.", "# get index of predicted dog breed for each image in test set\ndog_breed_predictions = [np.argmax(model.predict(np.expand_dims(tensor, axis=0))) for tensor in test_tensors]\n\n# report test accuracy\ntest_accuracy = 100*np.sum(np.array(dog_breed_predictions)==np.argmax(test_targets, axis=1))/len(dog_breed_predictions)\nprint('Test accuracy: %.4f%%' % test_accuracy)", "<a id='step4'></a>\nStep 4: Use a CNN to Classify Dog Breeds\nTo reduce training time without sacrificing accuracy, we show you how to train a CNN using transfer learning. In the following step, you will get a chance to use transfer learning to train your own CNN.\nObtain Bottleneck Features", "bottleneck_features = np.load('/data/bottleneck_features/DogVGG16Data.npz')\ntrain_VGG16 = bottleneck_features['train']\nvalid_VGG16 = bottleneck_features['valid']\ntest_VGG16 = bottleneck_features['test']", "Model Architecture\nThe model uses the pre-trained VGG-16 model as a fixed feature extractor, where the last convolutional output of VGG-16 is fed as input to our model. 
We only add a global average pooling layer and a fully connected layer, where the latter contains one node for each dog category and is equipped with a softmax.", "VGG16_model = Sequential()\nVGG16_model.add(GlobalAveragePooling2D(input_shape=train_VGG16.shape[1:]))\nVGG16_model.add(Dense(133, activation='softmax'))\n\nVGG16_model.summary()", "Compile the Model", "VGG16_model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])", "Train the Model", "checkpointer = ModelCheckpoint(filepath='saved_models/weights.best.VGG16.hdf5', \n verbose=1, save_best_only=True)\n\nVGG16_model.fit(train_VGG16, train_targets, \n validation_data=(valid_VGG16, valid_targets),\n epochs=20, batch_size=20, callbacks=[checkpointer], verbose=1)", "Load the Model with the Best Validation Loss", "VGG16_model.load_weights('saved_models/weights.best.VGG16.hdf5')", "Test the Model\nNow, we can use the CNN to test how well it identifies breed within our test dataset of dog images. We print the test accuracy below.", "# get index of predicted dog breed for each image in test set\nVGG16_predictions = [np.argmax(VGG16_model.predict(np.expand_dims(feature, axis=0))) for feature in test_VGG16]\n\n# report test accuracy\ntest_accuracy = 100*np.sum(np.array(VGG16_predictions)==np.argmax(test_targets, axis=1))/len(VGG16_predictions)\nprint('Test accuracy: %.4f%%' % test_accuracy)", "Predict Dog Breed with the Model", "from extract_bottleneck_features import *\n\ndef VGG16_predict_breed(img_path):\n # extract bottleneck features\n bottleneck_feature = extract_VGG16(path_to_tensor(img_path))\n # obtain predicted vector\n predicted_vector = VGG16_model.predict(bottleneck_feature)\n # return dog breed that is predicted by the model\n return dog_names[np.argmax(predicted_vector)]", "<a id='step5'></a>\nStep 5: Create a CNN to Classify Dog Breeds (using Transfer Learning)\nYou will now use transfer learning to create a CNN that can identify dog breed from images. Your CNN must attain at least 60% accuracy on the test set.\nIn Step 4, we used transfer learning to create a CNN using VGG-16 bottleneck features. In this section, you must use the bottleneck features from a different pre-trained model. To make things easier for you, we have pre-computed the features for all of the networks that are currently available in Keras. These are already in the workspace, at /data/bottleneck_features. If you wish to download them on a different machine, they can be found at:\n- VGG-19 bottleneck features\n- ResNet-50 bottleneck features\n- Inception bottleneck features\n- Xception bottleneck features\nThe files are encoded as such:\nDog{network}Data.npz\n\nwhere {network}, in the above filename, can be one of VGG19, Resnet50, InceptionV3, or Xception. 
\nThe above architectures are downloaded and stored for you in the /data/bottleneck_features/ folder.\nThis means the following will be in the /data/bottleneck_features/ folder:\nDogVGG19Data.npz\nDogResnet50Data.npz\nDogInceptionV3Data.npz\nDogXceptionData.npz\n(IMPLEMENTATION) Obtain Bottleneck Features\nIn the code block below, extract the bottleneck features corresponding to the train, test, and validation sets by running the following:\nbottleneck_features = np.load('/data/bottleneck_features/Dog{network}Data.npz')\ntrain_{network} = bottleneck_features['train']\nvalid_{network} = bottleneck_features['valid']\ntest_{network} = bottleneck_features['test']", "### TODO: Obtain bottleneck features from another pre-trained CNN.\nbottleneck_features = np.load('/data/bottleneck_features/DogXceptionData.npz')\ntrain_Xception = bottleneck_features['train']\nvalid_Xception = bottleneck_features['valid']\ntest_Xception = bottleneck_features['test']", "(IMPLEMENTATION) Model Architecture\nCreate a CNN to classify dog breed. At the end of your code cell block, summarize the layers of your model by executing the line:\n    &lt;your model's name&gt;.summary()\n\nQuestion 5: Outline the steps you took to get to your final CNN architecture and your reasoning at each step. Describe why you think the architecture is suitable for the current problem.\nAnswer: \nThe proposed CNN architecture reuses the features learned by the pre-trained Xception model provided to us. I add a softmax layer at the end so that I can specify a categorical loss and predict the outcome from the activations of the previous layers.\nTraining a model and developing a well-performing deep architecture is a time-consuming task in deep learning. The cost is significant both in terms of developer time and CPU time, and as the model becomes more complex, the cost of training grows non-linearly. Complex models need to be developed when simple models do not provide high accuracy, and developing such a model requires a lot of experience with neural network architectures. The transfer learning approach is advantageous because it allows the lower layers to be shared across developers and systems, it helps overcome the problem of limited labeled data in one domain compared to another, and it lets us iterate faster when developing deep networks.", "### TODO: Define your architecture.\nXception_model = Sequential()\n# the input shape must match the Xception bottleneck features loaded above\nXception_model.add(GlobalAveragePooling2D(input_shape=train_Xception.shape[1:]))\nXception_model.add(Dense(133, activation='softmax'))\nXception_model.compile(loss='categorical_crossentropy', optimizer='rmsprop', \n                       metrics=['accuracy'])\nXception_model.summary()", "(IMPLEMENTATION) Compile the Model", "### TODO: Compile the model.\nXception_model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])", "(IMPLEMENTATION) Train the Model\nTrain your model in the code cell below. Use model checkpointing to save the model that attains the best validation loss. \nYou are welcome to augment the training data, but this is not a requirement. 
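Note, though, that this step trains on pre-computed bottleneck features, so any image augmentation would have to be applied before feature extraction. A minimal sketch of what that could look like on the raw image tensors (hypothetical settings, not used in the runs below):\n    from keras.preprocessing.image import ImageDataGenerator\n\n    datagen = ImageDataGenerator(rotation_range=10, width_shift_range=0.1,\n                                 height_shift_range=0.1, horizontal_flip=True)\n    # augmented batches of raw images, which would then be passed through the\n    # pre-trained network to produce fresh bottleneck features:\n    # datagen.flow(train_tensors, train_targets, batch_size=20)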
", "from keras.callbacks import ModelCheckpoint \n### TODO: Train the model.\ncheckpointer = ModelCheckpoint(filepath='saved_models/weights.best.Xception.hdf5', \n                               verbose=1, save_best_only=True)\n\nhistory = Xception_model.fit(train_Xception, train_targets, \n          validation_data=(valid_Xception, valid_targets),\n          epochs=50, batch_size=100, callbacks=[checkpointer], verbose=1)", "(IMPLEMENTATION) Load the Model with the Best Validation Loss", "### TODO: Load the model weights with the best validation loss.\nXception_model.load_weights('saved_models/weights.best.Xception.hdf5')", "(IMPLEMENTATION) Test the Model\nTry out your model on the test dataset of dog images. Ensure that your test accuracy is greater than 60%.", "### TODO: Calculate classification accuracy on the test dataset.\n# get index of predicted dog breed for each image in test set\nXception_predictions = [np.argmax(Xception_model.predict(np.expand_dims(feature, axis=0))) for feature in test_Xception]\n\n# report test accuracy\ntest_accuracy = 100*np.sum(np.array(Xception_predictions)==np.argmax(test_targets, axis=1))/len(Xception_predictions)\nprint('Test accuracy: %.4f%%' % test_accuracy)", "(IMPLEMENTATION) Predict Dog Breed with the Model\nWrite a function that takes an image path as input and returns the dog breed (Affenpinscher, Afghan_hound, etc) that is predicted by your model. \nSimilar to the analogous function in Step 4, your function should have three steps:\n1. Extract the bottleneck features corresponding to the chosen CNN model.\n2. Supply the bottleneck features as input to the model to return the predicted vector. Note that the argmax of this prediction vector gives the index of the predicted dog breed.\n3. Use the dog_names array defined in Step 0 of this notebook to return the corresponding breed.\nThe functions to extract the bottleneck features can be found in extract_bottleneck_features.py, and they have been imported in an earlier code cell. 
To obtain the bottleneck features corresponding to your chosen CNN architecture, you need to use the function\nextract_{network}\n\nwhere {network}, in the above filename, should be one of VGG19, Resnet50, InceptionV3, or Xception.", "### TODO: Write a function that takes a path to an image as input\n### and returns the dog breed that is predicted by the model.\ndef Xception_predict_breed(img_path):\n # extract bottleneck features\n bottleneck_feature = extract_Xception(path_to_tensor(img_path))\n # obtain predicted vector\n predicted_vector = Xception_model.predict(bottleneck_feature)\n # return dog breed that is predicted by the model\n return (dog_names[np.argmax(predicted_vector)], predicted_vector[0][np.argmax(predicted_vector)])\n\nimport glob\nfrom PIL import Image\nfrom io import BytesIO\nfrom IPython.display import HTML\n\ndef get_thumbnail(path):\n i = Image.open(path).convert('RGB')\n i.thumbnail((150, 150), Image.LANCZOS)\n return i\n\ndef image_base64(im):\n if isinstance(im, str):\n im = get_thumbnail(im)\n with BytesIO() as buffer:\n im.save(buffer, 'jpeg')\n return base64.b64encode(buffer.getvalue()).decode()\n\ndef image_formatter(im):\n return f'<img src=\"data:image/jpeg;base64,{image_base64(im)}\">'\n\ndef file_formatter(file_name):\n return f'<img src=\"{file_name}\" width=\"150\" height=\"150\">'\n\npredictions_and_images = dict()\nfor filename in glob.iglob('dogs/*'):\n label, score = Xception_predict_breed(filename)\n print(\"Predicting that image in {0}, depicts a {1}, with a score {2}\".format(filename, label, score))\n predictions_and_images[filename] = label\n\nimport pandas as pd\nimport base64\nfrom IPython.display import HTML\npredictions_and_images\npredictions_df = pd.DataFrame.from_dict(predictions_and_images, orient='index')\npredictions_df.reset_index(inplace=True)\npredictions_df.columns = ['file', 'label']\npredictions_df['image'] = predictions_df.file.map(lambda f: get_thumbnail(f))\npredictions_df\nHTML(predictions_df[['label', 'file']].to_html(formatters={'file': file_formatter}, escape=False))", "<a id='step6'></a>\nStep 6: Write your Algorithm\nWrite an algorithm that accepts a file path to an image and first determines whether the image contains a human, dog, or neither. Then,\n- if a dog is detected in the image, return the predicted breed.\n- if a human is detected in the image, return the resembling dog breed.\n- if neither is detected in the image, provide output that indicates an error.\nYou are welcome to write your own functions for detecting humans and dogs in images, but feel free to use the face_detector and dog_detector functions developed above. You are required to use your CNN from Step 5 to predict dog breed. \nSome sample output for our algorithm is provided below, but feel free to design your own user experience!\n\n(IMPLEMENTATION) Write your Algorithm", "### TODO: Write your algorithm.\n### Feel free to use as many code cells as needed.\n \ndef predict_for_image(filename):\n label, score = Xception_predict_breed(filename)\n if (score > 0.95) :\n predictions_and_images[filename] = ('dog', label, score)\n elif face_detector(filename):\n predictions_and_images[filename] = ('human', label, score)\n else:\n predictions_and_images[filename] = ('neither', label, score)\n return predictions_and_images\n ", "<a id='step7'></a>\nStep 7: Test Your Algorithm\nIn this section, you will take your new algorithm for a spin! What kind of dog does the algorithm think that you look like? 
If you have a dog, does it predict your dog's breed accurately? If you have a cat, does it mistakenly think that your cat is a dog?\n(IMPLEMENTATION) Test Your Algorithm on Sample Images!\nTest your algorithm on at least six images on your computer. Feel free to use any images you like. Use at least two human and two dog images. \nQuestion 6: Is the output better than you expected :) ? Or worse :( ? Provide at least three possible points of improvement for your algorithm.\nAnswer: \nThe algorithm is worse than I expected. For each of the predictions, we need a confidence score associated with the prediction. As it currently stands, the label does not capture the contents of the image, and all images with humans/dogs are allocated a breed. This problem may be symptomatic of the way the label space is created. The three ways that I would fix the problem are:\n* Have a 'None' label to denote that the image does not contain any dog breeds\n* Include confidence intervals with each of the predictions\n* Improve on classifier accuracy", "## Execute your algorithm from Step 6 on\n## at least 6 images on your computer.\n## Feel free to use as many code cells as needed.\nfor filename in glob.iglob('random_images/*'):\n    type_of_animal, label, score = predict_for_image(filename)[filename]\n    print(\"Predicting that image in {0}, depicts a {1}, closest dog breed {2}, score = {3}\".format(filename, type_of_animal, label, score))\n", "Please download your notebook to submit" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ray-project/ray
doc/source/tune/examples/nevergrad_example.ipynb
apache-2.0
[ "Running Tune experiments with Nevergrad\nIn this tutorial we introduce Nevergrad, while running a simple Ray Tune experiment. Tune’s Search Algorithms integrate with Nevergrad and, as a result, allow you to seamlessly scale up a Nevergrad optimization process - without sacrificing performance.\nNevergrad provides gradient/derivative-free optimization able to handle noise over the objective landscape, including evolutionary, bandit, and Bayesian optimization algorithms. Nevergrad internally supports search spaces which are continuous, discrete, or a mixture thereof. It also provides a library of functions on which to test the optimization algorithms and compare with other benchmarks.\nIn this example we minimize a simple objective to briefly demonstrate the usage of Nevergrad with Ray Tune via NevergradSearch. It's useful to keep in mind that despite the emphasis on machine learning experiments, Ray Tune optimizes any implicit or explicit objective. Here we assume the nevergrad==0.4.3.post7 library is installed. To learn more, please refer to the Nevergrad website.", "# !pip install ray[tune]\n!pip install nevergrad==0.4.3.post7 ", "Click below to see all the imports we need for this example.\nYou can also launch directly into a Binder instance to run this notebook yourself.\nJust click on the rocket symbol at the top of the navigation.", "import time\n\nimport ray\nimport nevergrad as ng\nfrom ray import tune\nfrom ray.tune.suggest import ConcurrencyLimiter\nfrom ray.tune.suggest.nevergrad import NevergradSearch", "Let's start by defining a simple evaluation function.\nWe artificially sleep for a bit (0.1 seconds) to simulate a long-running ML experiment.\nThis setup assumes that we're running multiple steps of an experiment while trying to tune three hyperparameters,\nnamely width, height, and activation.", "def evaluate(step, width, height, activation):\n    time.sleep(0.1)\n    activation_boost = 10 if activation==\"relu\" else 1\n    return (0.1 + width * step / 100) ** (-1) + height * 0.1 + activation_boost", "Next, our objective function takes a Tune config, evaluates the score of your experiment in a training loop,\nand uses tune.report to report the score back to Tune.", "def objective(config):\n    for step in range(config[\"steps\"]):\n        score = evaluate(step, config[\"width\"], config[\"height\"], config[\"activation\"])\n        tune.report(iterations=step, mean_loss=score)\n\nray.init(configure_logging=False)", "Next we define the search algorithm built from NevergradSearch, constrained to a maximum of 4 concurrent trials with a ConcurrencyLimiter. Here we use ng.optimizers.OnePlusOne, a simple evolutionary algorithm.", "algo = NevergradSearch(\n    optimizer=ng.optimizers.OnePlusOne,\n)\nalgo = tune.suggest.ConcurrencyLimiter(algo, max_concurrent=4)", "The number of samples is the number of hyperparameter combinations that will be tried out. This Tune run is set to 1000 samples.\n(you can decrease this if it takes too long on your machine). 
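As a rough budget check (ignoring Tune's own overhead): each trial runs 100 steps at about 0.1 s per step, i.e. roughly 10 s, so 1000 samples at a concurrency of 4 amount to about 1000 * 10 / 4 = 2500 s of evaluation time, a little over 40 minutes.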
", "num_samples = 1000\n\n# If 1000 samples take too long, you can reduce this number.\n# We override this number here for our smoke tests.\nnum_samples = 10", "Finally, all that's left is to define a search space.", "search_config = {\n    \"steps\": 100,\n    \"width\": tune.uniform(0, 20),\n    \"height\": tune.uniform(-100, 100),\n    \"activation\": tune.choice([\"relu\", \"tanh\"])\n}", "Finally, we run the experiment to \"min\"imize the \"mean_loss\" of the objective by searching search_config via algo, num_samples times. The previous sentence fully characterizes the search problem we aim to solve. With this in mind, observe how efficient it is to execute tune.run().", "analysis = tune.run(\n    objective,\n    search_alg=algo,\n    metric=\"mean_loss\",\n    mode=\"min\",\n    name=\"nevergrad_exp\",\n    num_samples=num_samples,\n    config=search_config,\n)", "Here are the hyperparameters found to minimize the mean loss of the defined objective.", "print(\"Best hyperparameters found were: \", analysis.best_config)", "Optional: passing the (hyper)parameter space into the search algorithm\nWe can also pass the search space into NevergradSearch using Nevergrad's own format.", "space = ng.p.Dict(\n    width=ng.p.Scalar(lower=0, upper=20),\n    height=ng.p.Scalar(lower=-100, upper=100),\n    activation=ng.p.Choice(choices=[\"relu\", \"tanh\"])\n)\n\nalgo = NevergradSearch(\n    optimizer=ng.optimizers.OnePlusOne,\n    space=space,\n    metric=\"mean_loss\",\n    mode=\"min\"\n)\nalgo = tune.suggest.ConcurrencyLimiter(algo, max_concurrent=4)", "Again we run the experiment, this time with less passed via the config: the search space, metric, and mode are instead supplied through search_alg.", "analysis = tune.run(\n    objective,\n    search_alg=algo,\n#    metric=\"mean_loss\",\n#    mode=\"min\",\n    name=\"nevergrad_exp\",\n    num_samples=num_samples,\n    config={\"steps\": 100},\n)\n\nray.shutdown()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
shareactorIO/pipeline
gpu.ml/notebooks/08_Optimize_Model_CPU.ipynb
apache-2.0
[ "Optimize Trained CPU Model\nTypes of Optimizations Applied for Inference\n\nRemove training-only operations (checkpoint saving, drop out)\nStrip out unused nodes\nRemove debug operations\nFold batch normalization ops into weights (super cool)\nRound weights\nQuantize weights\n\nGraph Transform Tool\nhttps://petewarden.com/2016/12/30/rewriting-tensorflow-graphs-with-the-gtt/\nhttps://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/graph_transforms\nOptimize Models\nSummarize Graph Utility", "%%bash \n\nwhich summarize_graph\n\n%%bash\n\n## TODO: /root/models/linear/cpu/metagraph\n## ls -l /root/models/optimize_me/\n\nls -l /root/models/linear/cpu/unoptimized\n\n%%bash\n\nfreeze_graph\n\nimport os\nfrom tensorflow.python.tools import freeze_graph\n\n# directory holding the checkpoint and input graph (adjust to your setup)\nmodel_dir = '/root/models/linear/cpu/unoptimized'\n\ncheckpoint_prefix = os.path.join(model_dir, \"saved_checkpoint\")\ncheckpoint_state_name = \"checkpoint_state\"\ninput_graph_name = \"input_graph.pb\"\noutput_graph_name = \"output_graph.pb\"\n\ninput_graph_path = os.path.join(model_dir,\n                                input_graph_name)\ninput_saver_def_path = \"\"\ninput_binary = False\n# the linear model's output op is named 'add'\noutput_node_names = \"add\"\nrestore_op_name = \"save/restore_all\"\nfilename_tensor_name = \"save/Const:0\"\noutput_graph_path = os.path.join(model_dir, output_graph_name)\nclear_devices = False\n\nfreeze_graph.freeze_graph(input_graph_path,\n                          input_saver_def_path,\n                          input_binary, \n                          checkpoint_prefix,\n                          output_node_names,\n                          restore_op_name,\n                          filename_tensor_name,\n                          output_graph_path,\n                          clear_devices, \"\")\n\n%%bash\n\n## TODO: /root/models/linear/cpu/unoptimized/metagraph.pb\n## summarize_graph --in_graph=/root/models/optimize_me/unoptimized_cpu.pb\n\nsummarize_graph --in_graph=/root/models/linear/cpu/unoptimized/metagraph.pb", "Strip Unused Nodes", "%%bash\n\n# TODO: shuffle_batch?? x_observed_batch??
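\n# strip_unused_nodes keeps only the ops needed to compute the named outputs\n# from the named inputs, dropping training-only and debug nodes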
\ntransform_graph \\\n--in_graph=/root/models/optimize_me/unoptimized_cpu.pb \\\n--out_graph=/root/models/optimize_me/strip_unused_optimized_cpu.pb \\\n--inputs='x_observed,weights,bias' \\\n--outputs='add' \\\n--transforms='\nstrip_unused_nodes'\n\n%%bash\n\nls -l /root/models/optimize_me/\n\n%%bash\n\nsummarize_graph --in_graph=/root/models/optimize_me/strip_unused_optimized_cpu.pb\n\n%%bash\n\nbenchmark_model --graph=/root/models/optimize_me/strip_unused_optimized_cpu.pb --input_layer=weights,bias,x_observed --input_layer_type=float,float,float --input_layer_shape=:: --output_layer=add", "Fold Constants", "%%bash\n\ntransform_graph \\\n--in_graph=/root/models/optimize_me/unoptimized_cpu.pb \\\n--out_graph=/root/models/optimize_me/fold_constants_optimized_cpu.pb \\\n--inputs='x_observed,weights,bias' \\\n--outputs='add' \\\n--transforms='\nfold_constants(ignore_errors=true)'\n\n%%bash\n\nls -l /root/models/optimize_me/\n\n%%bash\n\nsummarize_graph --in_graph=/root/models/optimize_me/fold_constants_optimized_cpu.pb\n\n%%bash\n\nbenchmark_model --graph=/root/models/optimize_me/fold_constants_optimized_cpu.pb --input_layer=x_observed,bias,weights --input_layer_type=float,float,float --input_layer_shape=:: --output_layer=add", "Fold Batch Normalizations\nMust run Fold Constants first!", "%%bash\n\ntransform_graph \\\n--in_graph=/root/models/optimize_me/fold_constants_optimized_cpu.pb \\\n--out_graph=/root/models/optimize_me/fold_batch_norms_optimized_cpu.pb \\\n--inputs='x_observed,weights,bias' \\\n--outputs='add' \\\n--transforms='\nfold_batch_norms\nfold_old_batch_norms'\n\n%%bash\n\nls -l /root/models/optimize_me/\n\n%%bash\n\nsummarize_graph --in_graph=/root/models/optimize_me/fold_batch_norms_optimized_cpu.pb\n\n%%bash\n\nbenchmark_model --graph=/root/models/optimize_me/fold_batch_norms_optimized_cpu.pb --input_layer=x_observed,bias,weights --input_layer_type=float,float,float --input_layer_shape=:: --output_layer=add", "Quantize Weights\nShould run Fold Batch Norms first!", "%%bash\n\ntransform_graph \\\n--in_graph=/root/models/optimize_me/fold_batch_norms_optimized_cpu.pb \\\n--out_graph=/root/models/optimize_me/quantized_optimized_cpu.pb \\\n--inputs='x_observed,weights,bias' \\\n--outputs='add' \\\n--transforms='quantize_weights'\n\n%%bash\n\nls -l /root/models/optimize_me/\n\n%%bash\n\nsummarize_graph --in_graph=/root/models/optimize_me/quantized_optimized_cpu.pb\n\n%%bash\n\nbenchmark_model --graph=/root/models/optimize_me/quantized_optimized_cpu.pb --input_layer=x_observed,bias,weights --input_layer_type=float,float,float --input_layer_shape=:: --output_layer=add", "Perform All Common Optimizations", "%%bash\n\ntransform_graph \\\n--in_graph=/root/models/optimize_me/unoptimized_cpu.pb \\\n--out_graph=/root/models/optimize_me/fully_optimized_cpu.pb \\\n--inputs='x_observed,weights,bias' \\\n--outputs='add' \\\n--transforms='\nadd_default_attributes\nremove_nodes(op=Identity, op=CheckNumerics)\nfold_constants(ignore_errors=true)\nfold_batch_norms\nfold_old_batch_norms\nquantize_weights\nquantize_nodes\nstrip_unused_nodes\nobfuscate_names'\n\n%%bash\n\nls -l /root/models/optimize_me/\n\n%%bash\n\nsummarize_graph --in_graph=/root/models/optimize_me/fully_optimized_cpu.pb\n\n%%bash\n\nbenchmark_model --graph=/root/models/optimize_me/fully_optimized_cpu.pb --input_layer=weights,x_observed,bias --input_layer_type=float,float,float --input_layer_shape=:: --output_layer=add", "Sort by Execution Order (DAG Topological Order)\n\nMinimizes inference overhead \nInputs for a 
node guaranteed to be available", "%%bash\n\ntransform_graph \\\n--in_graph=/root/models/optimize_me/fully_optimized_cpu.pb \\\n--out_graph=/root/models/optimize_me/sort_by_execution_order_optimized_cpu.pb \\\n--inputs='x_observed,weights,bias' \\\n--outputs='add' \\\n--transforms='\nsort_by_execution_order'\n\n%%bash\n\nls -l /root/models/optimize_me/\n\n%%bash\n\nsummarize_graph --in_graph=/root/models/optimize_me/sort_by_execution_order_optimized_cpu.pb\n\n%%bash\n\nbenchmark_model --graph=/root/models/optimize_me/sort_by_execution_order_optimized_cpu.pb --input_layer=weights,x_observed,bias --input_layer_type=float,float,float --input_layer_shape=:: --output_layer=add" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
gaufung/Data_Analytics_Learning_Note
DesignPattern/AdapterPattern.ipynb
mit
[ "Adapter Pattern\n1 Code\nSuppose company A and company B need to cooperate: company A has to access company B's staff information, but the two companies use different protocol interfaces. How should this be handled? First, each company wraps its own staff-information access system in an object interface.", "class ACpnStaff(object):\n    name=\"\"\n    id=\"\"\n    phone=\"\"\n    def __init__(self,id):\n        self.id=id\n    def getName(self):\n        print (\"A protocol getName method...id:%s\"%self.id)\n        return self.name\n    def setName(self,name):\n        print (\"A protocol setName method...id:%s\"%self.id)\n        self.name=name\n    def getPhone(self):\n        print (\"A protocol getPhone method...id:%s\"%self.id)\n        return self.phone\n    def setPhone(self,phone):\n        print (\"A protocol setPhone method...id:%s\"%self.id)\n        self.phone=phone\nclass BCpnStaff(object):\n    name=\"\"\n    id=\"\"\n    telephone=\"\"\n    def __init__(self,id):\n        self.id=id\n    def get_name(self):\n        print (\"B protocol get_name method...id:%s\"%self.id)\n        return self.name\n    def set_name(self,name):\n        print (\"B protocol set_name method...id:%s\"%self.id)\n        self.name=name\n    def get_telephone(self):\n        print (\"B protocol get_telephone method...id:%s\"%self.id)\n        return self.telephone\n    def set_telephone(self,telephone):\n        print (\"B protocol set_telephone method...id:%s\"%self.id)\n        self.telephone=telephone", "To reuse company B's interface on company A's platform, calling B's staff interface directly is one option, but it would introduce uncertain risks into the existing business flow. To reduce coupling and avoid that risk, we need a helper, much like the adapter that converts mains voltage for an appliance: an adapter that translates between the two protocols and interfaces. The adapter is constructed as follows:", "class CpnStaffAdapter(object):\n    b_cpn=\"\"\n    def __init__(self,id):\n        self.b_cpn=BCpnStaff(id)\n    def getName(self):\n        return self.b_cpn.get_name()\n    def getPhone(self):\n        return self.b_cpn.get_telephone()\n    def setName(self,name):\n        self.b_cpn.set_name(name)\n    def setPhone(self,phone):\n        self.b_cpn.set_telephone(phone)", "The adapter wraps company B's staff interface while exposing the same interface shape as company A's, so company B's staff information can be accessed through company A's interface.\nA usage example follows:", "acpn_staff=ACpnStaff(\"123\")\nacpn_staff.setName(\"X-A\")\nacpn_staff.setPhone(\"10012345678\")\nprint (\"A Staff Name:%s\"%acpn_staff.getName())\nprint (\"A Staff Phone:%s\"%acpn_staff.getPhone())\nbcpn_staff=CpnStaffAdapter(\"456\")\nbcpn_staff.setName(\"Y-B\")\nbcpn_staff.setPhone(\"99987654321\")\nprint (\"B Staff Name:%s\"%bcpn_staff.getName())\nprint (\"B Staff Phone:%s\"%bcpn_staff.getPhone())", "2 Description\nThe adapter pattern is defined as follows: convert the interface of a class into another interface that the client expects, so that two classes which otherwise could not work together because of mismatched interfaces are able to cooperate. The adapter pattern bears some similarity to the decorator pattern in that both act as wrappers, but they differ in essence: decoration adds extra responsibilities to an object, whereas an adapter puts a 'disguise' on another object.\n3 Advantages\n\nThe adapter pattern lets two classes with different, even largely unrelated, interfaces run together\nIt improves class reusability; a 'disguised' class can take on a new role\nAdapters can be attached and detached flexibly\n\n4 Usages\n\nWhen an existing interface must not be modified, yet has to be made applicable to, or compatible with, a new business scenario\n\n5 Disadvantages\n\nCompared with the original interface, the adapter inevitably adds an extra layer of calls, so avoid designing adapters into a system from the start" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
google-research/google-research
milking_cowmask/Generate_ImageNet_split_shards.ipynb
apache-2.0
[ "Generate splits of ImageNet and save in sharded TFRECORD files\nLicensed under the Apache License, Version 2.0", "import os\nos.environ['UNITTEST_ON_FORGE'] = '1'\n\nimport string\nimport random\nimport pickle\nimport zipfile\nimport io\nimport itertools\n\nimport time\nimport tensorflow as tf\nimport numpy as np\nfrom sklearn.model_selection import StratifiedShuffleSplit\nfrom matplotlib import pyplot as plt\nimport tqdm\nimport tensorflow_datasets as tfds\n\ntf.enable_eager_execution()\n\n\nimport distutils\nif distutils.version.LooseVersion(tf.__version__) < '1.14':\n    raise Exception('This notebook is compatible with TensorFlow 1.14 or higher, for TensorFlow 1.13 or lower please use the previous version at https://github.com/tensorflow/tpu/blob/r1.13/tools/colab/fashion_mnist.ipynb')\n\n\nprint('Tensorflow version {}'.format(tf.__version__))", "Path to ImageNet on Placer", "# EDIT THESE\nIMAGENET_TFRECORDS_SOURCE_PATH = r'<source path in here>'\nOUT_SUBSET_SHARDS_PATH = r'<destination path in here>'\nIMAGENET_SIZE = 1281167\n", "Description of features in ImageNet tfrecord files", "feature_description = {\n    'label': tf.io.FixedLenFeature([], tf.int64, default_value=0),\n    'image': tf.io.FixedLenFeature([], tf.string),\n    'file_name': tf.io.FixedLenFeature([], tf.string),\n}", "Load tfrecord dataset", "train_files = [f for f in tf.io.gfile.listdir(IMAGENET_TFRECORDS_SOURCE_PATH) if f.startswith('imagenet2012-train.tfrecord')]\ntrain_paths = [IMAGENET_TFRECORDS_SOURCE_PATH + f for f in train_files]\nds = tf.data.TFRecordDataset(train_paths)", "Get ground truth labels and filenames for all samples in training set", "def get_labels(ds, N):\n    it = ds.prefetch(tf.data.experimental.AUTOTUNE).make_one_shot_iterator()\n\n    all_ys = []\n    all_fns = []\n    for _ in tqdm.tqdm(range(N)):\n        sample = it.next()\n        all_ys.append(sample['label'])\n        all_fns.append(sample['file_name'].numpy())\n\n    return np.array(all_ys), all_fns\n\nall_y, all_filenames = get_labels(tfds.load(name='imagenet2012', split=tfds.Split.TRAIN), IMAGENET_SIZE)\n", "Define functions for getting ImageNet subsets", "def random_name():\n    allchar = string.ascii_letters\n    name = \"\".join([random.choice(allchar) for x in range(8)])\n    return name\n\ndef dataset_subset_by_file_name(in_ds, subset_filenames):\n    kv_init = tf.lookup.KeyValueTensorInitializer(np.array(subset_filenames), np.ones((len(subset_filenames),), dtype=int),\n                                                  key_dtype=tf.string, value_dtype=tf.int64, name=random_name())\n    ht = tf.lookup.StaticHashTable(kv_init, 0, name=random_name())\n\n    def pred_fn(x):\n        f = tf.io.parse_single_example(x, feature_description)\n        return tf.equal(ht.lookup(f['file_name']), 1)\n\n    return in_ds.filter(pred_fn)\n\n\ndef imagenet_subset(n_samples, seed):\n    splitter = StratifiedShuffleSplit(1, test_size=n_samples, random_state=seed)\n    _, ndx = next(splitter.split(all_y, all_y))\n\n    sub_fn = [all_filenames[int(i)] for i in ndx]\n\n    return dataset_subset_by_file_name(ds, sub_fn)\n\n\n\ndef write_imagenet_subset_by_fn_sharded(out_dir, name, filenames, num_shards, group='brain-ams'):\n    # out_path is a directory name\n    out_path = os.path.join(out_dir, name)\n    if tf.io.gfile.exists(out_path):\n        print('Skipping already existing {}'.format(out_path))\n        return\n    print('Generating {} ...'.format(out_path))\n    tf.io.gfile.mkdir(out_path)\n    t1 = time.time()\n    sub_ds = dataset_subset_by_file_name(ds, filenames)\n\n    shard_base_path = os.path.join(out_path, '{}.tfrecord-'.format(name))
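\n    # each element is numbered and assigned to shard (index % num_shards);\n    # group_by_window then hands every shard's elements to reduce_func, which\n    # writes them out with a dedicated TFRecordWriter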
\n    def reduce_func(key, dataset):\n        filename = tf.strings.join([shard_base_path, tf.strings.as_string(key)])\n        writer = tf.data.experimental.TFRecordWriter(filename)\n        writer.write(dataset.map(lambda _, x: x))\n        return tf.data.Dataset.from_tensors(filename)\n\n    write_ds = sub_ds.enumerate()\n    write_ds = write_ds.apply(tf.data.experimental.group_by_window(\n        lambda i, _: i % num_shards, reduce_func, tf.int64.max\n    ))\n    for x in write_ds:\n        pass\n\n    t2 = time.time()\n    print('Built subset {} in {:.2f}s'.format(\n        name, t2 - t1\n    ))\n\n\ndef write_imagenet_subset_sharded(out_dir, name, ds_filenames, ds_y, n_samples, seed, num_shards, group='brain-ams'):\n    splitter = StratifiedShuffleSplit(1, test_size=n_samples, random_state=seed)\n    _, ndx = next(splitter.split(ds_y, ds_y))\n\n    sub_fn = [ds_filenames[int(i)] for i in ndx]\n\n    write_imagenet_subset_by_fn_sharded(out_dir, name, sub_fn, num_shards, group=group)\n", "Build our subsets\nSplit into train and val", "n_train_val = len(all_filenames)\nN_VAL = 50000\nVAL_SEED = 131\n\n# Validation set\ntrainval_splitter = StratifiedShuffleSplit(1, test_size=N_VAL, random_state=VAL_SEED)\ntrain_ndx, val_ndx = next(trainval_splitter.split(all_y, all_y))\n\ntrain_fn = [all_filenames[int(i)] for i in train_ndx]\ntrain_y = all_y[train_ndx]\nval_fn = [all_filenames[int(i)] for i in val_ndx]\nval_y = all_y[val_ndx]\n\nprint('Split train-val set of {} into {} train and {} val'.format(\n    n_train_val, len(train_fn), len(val_fn)\n))\n\n\n\nsplit_path = os.path.join(OUT_SUBSET_SHARDS_PATH, 'imagenet_tv{}s{}_split.pkl'.format(N_VAL, VAL_SEED))\nwith tf.io.gfile.GFile(split_path, mode='wb') as f_split:\n    split_data = dict(train_fn=train_fn, train_y=train_y, val_fn=val_fn, val_y=val_y)\n    pickle.dump(split_data, f_split)\n\n\n\n# Val\nwrite_imagenet_subset_by_fn_sharded(OUT_SUBSET_SHARDS_PATH, 'imagenet_tv{}s{}_val'.format(N_VAL, VAL_SEED),\n                                    val_fn, num_shards=256)\n\n\n# 1% subsets\nwrite_imagenet_subset_sharded(OUT_SUBSET_SHARDS_PATH, 'imagenet_tv{}s{}_{}_seed{}'.format(N_VAL, VAL_SEED, len(train_fn)//100, 12345),\n                              train_fn, train_y, len(train_fn)//100, 12345, num_shards=256)\nwrite_imagenet_subset_sharded(OUT_SUBSET_SHARDS_PATH, 'imagenet_tv{}s{}_{}_seed{}'.format(N_VAL, VAL_SEED, len(train_fn)//100, 23456),\n                              train_fn, train_y, len(train_fn)//100, 23456, num_shards=256)\nwrite_imagenet_subset_sharded(OUT_SUBSET_SHARDS_PATH, 'imagenet_tv{}s{}_{}_seed{}'.format(N_VAL, VAL_SEED, len(train_fn)//100, 34567),\n                              train_fn, train_y, len(train_fn)//100, 34567, num_shards=256)\nwrite_imagenet_subset_sharded(OUT_SUBSET_SHARDS_PATH, 'imagenet_tv{}s{}_{}_seed{}'.format(N_VAL, VAL_SEED, len(train_fn)//100, 45678),\n                              train_fn, train_y, len(train_fn)//100, 45678, num_shards=256)\nwrite_imagenet_subset_sharded(OUT_SUBSET_SHARDS_PATH, 'imagenet_tv{}s{}_{}_seed{}'.format(N_VAL, VAL_SEED, len(train_fn)//100, 56789),\n                              train_fn, train_y, len(train_fn)//100, 56789, num_shards=256)\n\n\n# 10% subsets\nwrite_imagenet_subset_sharded(OUT_SUBSET_SHARDS_PATH, 'imagenet_tv{}s{}_{}_seed{}'.format(N_VAL, VAL_SEED, len(train_fn)//10, 12345),\n                              train_fn, train_y, len(train_fn)//10, 12345, num_shards=256)\nwrite_imagenet_subset_sharded(OUT_SUBSET_SHARDS_PATH, 'imagenet_tv{}s{}_{}_seed{}'.format(N_VAL, VAL_SEED, len(train_fn)//10, 23456),\n                              train_fn, train_y, len(train_fn)//10, 23456, num_shards=256)\nwrite_imagenet_subset_sharded(OUT_SUBSET_SHARDS_PATH, 'imagenet_tv{}s{}_{}_seed{}'.format(N_VAL, VAL_SEED, len(train_fn)//10, 34567),\n                              train_fn, train_y, len(train_fn)//10, 34567, 
num_shards=256)\nwrite_imagenet_subset_sharded(OUT_SUBSET_SHARDS_PATH, 'imagenet_tv{}s{}_{}_seed{}'.format(N_VAL, VAL_SEED, len(train_fn)//10, 45678),\n                              train_fn, train_y, len(train_fn)//10, 45678, num_shards=256)\nwrite_imagenet_subset_sharded(OUT_SUBSET_SHARDS_PATH, 'imagenet_tv{}s{}_{}_seed{}'.format(N_VAL, VAL_SEED, len(train_fn)//10, 56789),\n                              train_fn, train_y, len(train_fn)//10, 56789, num_shards=256)\n", "Write train subsets (no validation split; use ImageNet validation as evaluation set)", "# 1% subsets\nwrite_imagenet_subset_sharded(OUT_SUBSET_SHARDS_PATH, 'imagenet_{}_seed{}'.format(IMAGENET_SIZE//100, 12345),\n                              all_filenames, all_y, len(all_filenames)//100, 12345, num_shards=256)\nwrite_imagenet_subset_sharded(OUT_SUBSET_SHARDS_PATH, 'imagenet_{}_seed{}'.format(IMAGENET_SIZE//100, 23456),\n                              all_filenames, all_y, len(all_filenames)//100, 23456, num_shards=256)\nwrite_imagenet_subset_sharded(OUT_SUBSET_SHARDS_PATH, 'imagenet_{}_seed{}'.format(IMAGENET_SIZE//100, 34567),\n                              all_filenames, all_y, len(all_filenames)//100, 34567, num_shards=256)\nwrite_imagenet_subset_sharded(OUT_SUBSET_SHARDS_PATH, 'imagenet_{}_seed{}'.format(IMAGENET_SIZE//100, 45678),\n                              all_filenames, all_y, len(all_filenames)//100, 45678, num_shards=256)\nwrite_imagenet_subset_sharded(OUT_SUBSET_SHARDS_PATH, 'imagenet_{}_seed{}'.format(IMAGENET_SIZE//100, 56789),\n                              all_filenames, all_y, len(all_filenames)//100, 56789, num_shards=256)\n\n\n# 10% subsets\nwrite_imagenet_subset_sharded(OUT_SUBSET_SHARDS_PATH, 'imagenet_{}_seed{}'.format(IMAGENET_SIZE//10, 12345),\n                              all_filenames, all_y, len(all_filenames)//10, 12345, num_shards=256)\nwrite_imagenet_subset_sharded(OUT_SUBSET_SHARDS_PATH, 'imagenet_{}_seed{}'.format(IMAGENET_SIZE//10, 23456),\n                              all_filenames, all_y, len(all_filenames)//10, 23456, num_shards=256)\nwrite_imagenet_subset_sharded(OUT_SUBSET_SHARDS_PATH, 'imagenet_{}_seed{}'.format(IMAGENET_SIZE//10, 34567),\n                              all_filenames, all_y, len(all_filenames)//10, 34567, num_shards=256)\nwrite_imagenet_subset_sharded(OUT_SUBSET_SHARDS_PATH, 'imagenet_{}_seed{}'.format(IMAGENET_SIZE//10, 45678),\n                              all_filenames, all_y, len(all_filenames)//10, 45678, num_shards=256)\nwrite_imagenet_subset_sharded(OUT_SUBSET_SHARDS_PATH, 'imagenet_{}_seed{}'.format(IMAGENET_SIZE//10, 56789),\n                              all_filenames, all_y, len(all_filenames)//10, 56789, num_shards=256)\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
koulakis/amazon-review-qa-analysis
Summarize the reviews.ipynb
mit
[ "Summarize the reviews\nThe idea in this solution is to provide a new feature to the customer which will reduce the need to go through several reviews in order to evaluate a product. In order to achieve that, we will attempt to extract the most predictive words or sentences from the ratings and present them in a nice format (e.g. a wordcloud).\nImplementation steps of a proof of concept\n\nExtract the summaries and split them into words\nKeep only the data with ratings 1 and 2 (labeled as 0) and 5 (labeled as 1) \nGenerate tf-idf vector features from the words\nTrain a binary logistic regression model which predicts the ratings from the vector features\nUsing this model, evaluate each word by generating the features for it as if it were a whole summary\nOrder the words by the probability generated by the model to be in the '0' or '1' category\nSelect the words with the highest probability to be '1' as the positive ones\nSelect the words with the highest probability to be '0' as the negative ones\nPick a random set of products and print the top 10 words with the highest probabilities (max of positive and negative) on a wordcloud\n\nLoading and preparing the data", "all_reviews = (spark\n    .read\n    .json('./data/raw_data/reviews_Baby_5.json.gz',)\n    .na\n    .fill({ 'reviewerName': 'Unknown' }))\n\nfrom pyspark.sql.functions import col, expr, udf, trim\nfrom pyspark.sql.types import IntegerType\nimport re\n\nremove_punctuation = udf(lambda line: re.sub('[^A-Za-z\\s]', '', line))\nmake_binary = udf(lambda rating: 0 if rating in [1, 2] else 1, IntegerType())\n\nreviews = (all_reviews\n    .filter(col('overall').isin([1, 2, 5]))\n    .withColumn('label', make_binary(col('overall')))\n    .select(col('label').cast('int'), remove_punctuation('summary').alias('summary'))\n    .filter(trim(col('summary')) != ''))", "Splitting data and balancing skewness", "train, test = reviews.randomSplit([.8, .2], seed=5436)\n\ndef multiply_dataset(dataset, n):\n    return dataset if n <= 1 else dataset.union(multiply_dataset(dataset, n - 1))\n\nreviews_good = train.filter('label == 1')\nreviews_bad = train.filter('label == 0')\n\n# integer division, so the minority class is duplicated a whole number of times\nreviews_bad_multiplied = multiply_dataset(reviews_bad, reviews_good.count() // reviews_bad.count())\n\n\ntrain_reviews = reviews_bad_multiplied.union(reviews_good)", "Benchmark: predict by distribution", "accuracy = reviews_good.count() / float(train_reviews.count())\nprint('Always predicting 5 stars accuracy: {0}'.format(accuracy))", "Learning pipeline", "from pyspark.ml.feature import Tokenizer, HashingTF, IDF, StopWordsRemover\nfrom pyspark.ml.pipeline import Pipeline\nfrom pyspark.ml.classification import LogisticRegression\n\ntokenizer = Tokenizer(inputCol='summary', outputCol='words')\n\npipeline = Pipeline(stages=[\n    tokenizer, \n    StopWordsRemover(inputCol='words', outputCol='filtered_words'),\n    HashingTF(inputCol='filtered_words', outputCol='rawFeatures', numFeatures=120000),\n    IDF(inputCol='rawFeatures', outputCol='features'),\n    LogisticRegression(regParam=.3, elasticNetParam=.01)\n])", "Testing the model accuracy", "model = pipeline.fit(train_reviews)\n\nfrom pyspark.ml.evaluation import BinaryClassificationEvaluator\n\nprediction = model.transform(test)\nBinaryClassificationEvaluator().evaluate(prediction)", "Using model to extract the most predictive words", "from pyspark.sql.functions import explode\nimport pyspark.sql.functions as F\nfrom pyspark.sql.types import FloatType\n\nwords = (tokenizer\n    .transform(reviews)\n    .select(explode(col('words')).alias('summary')))
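\n\n# score each individual word by treating it as a one-word summary, then read off\n# the fitted model's class probabilities to rank words as positive or negative predictors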
.select(col('summary').alias('word'), 'probability'))\n\nfirst = udf(lambda x: x[0].item(), FloatType())\nsecond = udf(lambda x: x[1].item(), FloatType())\n\npredictive_words = (predictors\n .select(\n 'word', \n second(col('probability')).alias('positive'), \n first(col('probability')).alias('negative'))\n .groupBy('word')\n .agg(\n F.max('positive').alias('positive'),\n F.max('negative').alias('negative')))\n\npositive_predictive_words = (predictive_words\n .select(col('word').alias('positive_word'), col('positive').alias('pos_prob'))\n .sort('pos_prob', ascending=False))\n\nnegative_predictive_words = (predictive_words\n .select(col('word').alias('negative_word'), col('negative').alias('neg_prob'))\n .sort('neg_prob', ascending=False))\n\nimport pandas as pd\n\npd.concat([\n positive_predictive_words.toPandas().head(n=20),\n negative_predictive_words.toPandas().head(n=20) ],\n axis=1)", "Summarize single product - picks the best and worst", "full_model = pipeline.fit(reviews)\n\nhighly_reviewed_products = (all_reviews\n .groupBy('asin')\n .agg(F.count('asin').alias('count'), F.avg('overall').alias('avg_rating'))\n .filter('count > 25'))\n\nbest_product = highly_reviewed_products.sort('avg_rating', ascending=False).take(1)[0][0]\n\nworst_product = highly_reviewed_products.sort('avg_rating').take(1)[0][0]\n\ndef most_contributing_summaries(product, total_reviews, ranking_model):\n reviews = total_reviews.filter(col('asin') == product).select('summary', 'overall')\n \n udf_max = udf(lambda p: max(p.tolist()), FloatType())\n \n summary_ranks = (ranking_model\n .transform(reviews)\n .select(\n 'summary', \n second(col('probability')).alias('pos_prob')))\n \n pos_summaries = { row[0]: row[1] for row in summary_ranks.sort('pos_prob', ascending=False).take(10) }\n neg_summaries = { row[0]: row[1] for row in summary_ranks.sort('pos_prob').take(10) }\n \n return pos_summaries, neg_summaries\n\nfrom wordcloud import WordCloud\nimport matplotlib.pyplot as plt\n\ndef present_product(product, total_reviews, ranking_model):\n pos_summaries, neg_summaries = most_contributing_summaries(product, total_reviews, ranking_model)\n \n pos_wordcloud = WordCloud(background_color='white', max_words=20).fit_words(pos_summaries)\n neg_wordcloud = WordCloud(background_color='white', max_words=20).fit_words(neg_summaries)\n \n fig = plt.figure(figsize=(15, 15))\n \n ax = fig.add_subplot(1,2,1)\n ax.set_title('Positive summaries')\n ax.imshow(pos_wordcloud, interpolation='bilinear')\n ax.axis('off')\n \n ax = fig.add_subplot(1,2,2)\n ax.set_title('Negative summaries')\n ax.imshow(neg_wordcloud, interpolation='bilinear')\n ax.axis('off')\n \n plt.show()\n\npresent_product(best_product, all_reviews, full_model)\n\npresent_product(worst_product, all_reviews, full_model)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
arongdari/sparse-graph-prior
notebooks/AnalysisWWWdataset.ipynb
mit
[ "Analysis of WWW dataset\nWWW dataset is identified as the most sparse graph in C&F paper. In this notebook, we will compute an empirical growth rate of edges w.r.t the number of nodes, and fit two different curves to this empirical growth.", "import os\nimport pickle\n\nimport time\nfrom collections import defaultdict\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy.sparse import csc_matrix, csr_matrix, dok_matrix\nfrom scipy.optimize import curve_fit\n\n%matplotlib inline", "Load WWW dataset with sparse matrices", "n_e = 325729\n\ndef getWWWdataset(n_e = 325729, shuffle=True): \n if shuffle:\n node_idx = np.arange(n_e)\n np.random.shuffle(node_idx)\n node_dic = {i:node_idx[i] for i in range(n_e)}\n else:\n node_dic = {i:i for i in range(n_e)}\n\n row_list = list()\n col_list = list()\n with open('../data/www/www.dat.txt', 'r') as f:\n for line in f.readlines():\n row, col = line.split()\n row = int(row.strip())\n col = int(col.strip())\n row_list.append(node_dic[row])\n col_list.append(node_dic[col])\n \n return row_list, col_list", "Compute growth rate of WWW dataset with varying size of nodes", "if not os.path.exists('www_growth.pkl'):\n n_e = 325729\n n_link = defaultdict(list)\n n_samples = 10\n\n for si in range(n_samples):\n row_list, col_list = getWWWdataset()\n www_row = csr_matrix((np.ones(len(row_list)), (row_list, col_list)), shape=(n_e, n_e))\n www_col = csc_matrix((np.ones(len(row_list)), (row_list, col_list)), shape=(n_e, n_e))\n\n n_link[0].append(0)\n\n for i in range(1, n_e):\n # counting triples by expanding tensor\n cnt = 0\n cnt += www_row.getrow(i)[:,:i].nnz\n cnt += www_col.getcol(i)[:i-1,:].nnz \n n_link[i].append(cnt + n_link[i-1][-1])\n \n pickle.dump(n_link, open('www_growth.pkl', 'wb'))\nelse:\n n_link = pickle.load(open('www_growth.pkl', 'rb'))\n \navg_cnt = [np.mean(n_link[i]) for i in range(n_e)]", "Fit the growth curve", "def func(x, a, b, c):\n return c*x**a + b\n\ndef poly2(x, a, b, c):\n return c*x**2 + b*x + a\n\npopt, pcov = curve_fit(func, np.arange(n_e), avg_cnt)\nfitted_t = func(np.arange(n_e), *popt)\n\npopt2, pcov2 = curve_fit(poly2, np.arange(n_e), avg_cnt)\nfitted_t2 = poly2(np.arange(n_e), *popt2)", "Plot the empirical and fitted growth curves", "plt.figure(figsize=(16,6))\nplt.subplot(1,2,1)\nplt.plot(avg_cnt, label='empirical')\nplt.plot(fitted_t, label='$y=%.5f x^{%.2f} + %.2f$' % (popt[2], popt[0], popt[1]))\nplt.plot(fitted_t2, label='$y=%.5f x^2 + %.5f x + %.2f$' % (popt2[2], popt2[1], popt2[0]))\n\nplt.legend(loc='upper left')\nplt.title('# of nodes vs # of links')\nplt.xlabel('# nodes')\nplt.ylabel('# links')\n\nplt.subplot(1,2,2)\nplt.plot(avg_cnt, label='empirical')\nplt.plot(fitted_t, label='$y=%.5f x^{%.2f} + %.2f$' % (popt[2], popt[0], popt[1]))\nplt.plot(fitted_t2, label='$y=%.5f x^2 + %.5f x + %.2f$' % (popt2[2], popt2[1], popt2[0]))\n\nplt.legend(loc='upper left')\nplt.title('# of nodes vs # of links (Magnified)')\nplt.xlabel('# nodes')\nplt.ylabel('# links')\nplt.axis([100000,150000,100000,350000])\n\nrow_list, col_list = getWWWdataset()\nwww_row = csr_matrix((np.ones(len(row_list)), (row_list, col_list)), shape=(n_e, n_e))\nwww_col = csc_matrix((np.ones(len(row_list)), (row_list, col_list)), shape=(n_e, n_e))\n\nentity_degree = (www_row.sum(1) + www_col.sum(0).T).tolist()\n\ne_list = np.arange(n_e)\nnp.random.shuffle(e_list)\none_entity = [entity_degree[ei][0] == 1 for ei in e_list]\ncumsum = np.cumsum(one_entity) \n\nplt.figure(figsize=(8,6))\nplt.plot(cumsum) \nplt.xlabel('# of entities')\nplt.ylabel('# 
of entities of degree one')\nplt.title('# of entities of degree one in WWW')\nplt.axis([0, n_e, 0, n_e])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.16/_downloads/plot_time_frequency_global_field_power.ipynb
bsd-3-clause
[ "%matplotlib inline", "Explore event-related dynamics for specific frequency bands\nThe objective is to show you how to explore spectrally localized\neffects. For this purpose we adapt the method described in [1]_ and use it on\nthe somato dataset. The idea is to track the band-limited temporal evolution\nof spatial patterns by using the Global Field Power (GFP).\nWe first bandpass filter the signals and then apply a Hilbert transform. To\nreveal oscillatory activity the evoked response is then subtracted from every\nsingle trial. Finally, we rectify the signals prior to averaging across trials\nby taking the magniude of the Hilbert.\nThen the GFP is computed as described in [2], using the sum of the squares\nbut without normalization by the rank.\nBaselining is subsequently applied to make the GFPs comparable between\nfrequencies.\nThe procedure is then repeated for each frequency band of interest and\nall GFPs are visualized. To estimate uncertainty, non-parametric confidence\nintervals are computed as described in [3] across channels.\nThe advantage of this method over summarizing the Space x Time x Frequency\noutput of a Morlet Wavelet in frequency bands is relative speed and, more\nimportantly, the clear-cut comparability of the spectral decomposition (the\nsame type of filter is used across all bands).\nReferences\n.. [1] Hari R. and Salmelin R. Human cortical oscillations: a neuromagnetic\n view through the skull (1997). Trends in Neuroscience 20 (1),\n pp. 44-49.\n.. [2] Engemann D. and Gramfort A. (2015) Automated model selection in\n covariance estimation and spatial whitening of MEG and EEG signals,\n vol. 108, 328-342, NeuroImage.\n.. [3] Efron B. and Hastie T. Computer Age Statistical Inference (2016).\n Cambrdige University Press, Chapter 11.2.", "# Authors: Denis A. 
Engemann <denis.engemann@gmail.com>\n#\n# License: BSD (3-clause)\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne.datasets import somato\nfrom mne.baseline import rescale\nfrom mne.stats import _bootstrap_ci", "Set parameters", "data_path = somato.data_path()\nraw_fname = data_path + '/MEG/somato/sef_raw_sss.fif'\n\n# let's explore some frequency bands\niter_freqs = [\n ('Theta', 4, 7),\n ('Alpha', 8, 12),\n ('Beta', 13, 25),\n ('Gamma', 30, 45)\n]", "We create average power time courses for each frequency band", "# set epoching parameters\nevent_id, tmin, tmax = 1, -1., 3.\nbaseline = None\n\n# get the header to extract events\nraw = mne.io.read_raw_fif(raw_fname, preload=False)\nevents = mne.find_events(raw, stim_channel='STI 014')\n\nfrequency_map = list()\n\nfor band, fmin, fmax in iter_freqs:\n # (re)load the data to save memory\n raw = mne.io.read_raw_fif(raw_fname, preload=True)\n raw.pick_types(meg='grad', eog=True) # we just look at gradiometers\n\n # bandpass filter and compute Hilbert\n raw.filter(fmin, fmax, n_jobs=1, # use more jobs to speed up.\n l_trans_bandwidth=1, # make sure filter params are the same\n h_trans_bandwidth=1, # in each band and skip \"auto\" option.\n fir_design='firwin')\n raw.apply_hilbert(n_jobs=1, envelope=False)\n\n epochs = mne.Epochs(raw, events, event_id, tmin, tmax, baseline=baseline,\n reject=dict(grad=4000e-13, eog=350e-6), preload=True)\n # remove evoked response and get analytic signal (envelope)\n epochs.subtract_evoked() # for this we need to construct new epochs.\n epochs = mne.EpochsArray(\n data=np.abs(epochs.get_data()), info=epochs.info, tmin=epochs.tmin)\n # now average and move on\n frequency_map.append(((band, fmin, fmax), epochs.average()))", "Now we can compute the Global Field Power\nWe can track the emergence of spatial patterns compared to baseline\nfor each frequency band, with a bootstrapped confidence interval.\nWe see dominant responses in the Alpha and Beta bands.", "fig, axes = plt.subplots(4, 1, figsize=(10, 7), sharex=True, sharey=True)\ncolors = plt.get_cmap('winter_r')(np.linspace(0, 1, 4))\nfor ((freq_name, fmin, fmax), average), color, ax in zip(\n frequency_map, colors, axes.ravel()[::-1]):\n times = average.times * 1e3\n gfp = np.sum(average.data ** 2, axis=0)\n gfp = mne.baseline.rescale(gfp, times, baseline=(None, 0))\n ax.plot(times, gfp, label=freq_name, color=color, linewidth=2.5)\n ax.axhline(0, linestyle='--', color='grey', linewidth=2)\n ci_low, ci_up = _bootstrap_ci(average.data, random_state=0,\n stat_fun=lambda x: np.sum(x ** 2, axis=0))\n ci_low = rescale(ci_low, average.times, baseline=(None, 0))\n ci_up = rescale(ci_up, average.times, baseline=(None, 0))\n ax.fill_between(times, gfp + ci_up, gfp - ci_low, color=color, alpha=0.3)\n ax.grid(True)\n ax.set_ylabel('GFP')\n ax.annotate('%s (%d-%dHz)' % (freq_name, fmin, fmax),\n xy=(0.95, 0.8),\n horizontalalignment='right',\n xycoords='axes fraction')\n ax.set_xlim(-1000, 3000)\n\naxes.ravel()[-1].set_xlabel('Time [ms]')" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Jackporter415/phys202-2015-work
assignments/assignment07/AlgorithmsEx02.ipynb
mit
[ "Algorithms Exercise 2\nImports", "%matplotlib inline\nfrom matplotlib import pyplot as plt\nimport seaborn as sns\nimport numpy as np", "Peak finding\nWrite a function find_peaks that finds and returns the indices of the local maxima in a sequence. Your function should:\n\nProperly handle local maxima at the endpoints of the input array.\nReturn a Numpy array of integer indices.\nHandle any Python iterable as input.", "def find_peaks(a):\n \"\"\"Find the indices of the local maxima in a sequence.\"\"\"\n \n #empty list and make the parameter into an array\n empty = []\n f = np.array(a)\n \n #Loop through the parameter and tell if it is a max\n for i in range(len(f)):\n if i == 0 and f[i] > f[i+1]:\n empty.append(i)\n\n if i == len(f)-1 and f[i]> f[i-1]:\n empty.append(i)\n if i > 0 and i < len(f)-1:\n if f[i]>f[i-1] and f[i] > f[i+1]:\n empty.append(i)\n\n \n return empty\n\n\np1 = find_peaks([2,0,1,0,2,0,1])\nassert np.allclose(p1, np.array([0,2,4,6]))\np2 = find_peaks(np.array([0,1,2,3]))\nassert np.allclose(p2, np.array([3]))\np3 = find_peaks([3,2,1,0])\nassert np.allclose(p3, np.array([0]))", "Here is a string with the first 10000 digits of $\\pi$ (after the decimal). Write code to perform the following:\n\nConvert that string to a Numpy array of integers.\nFind the indices of the local maxima in the digits of $\\pi$.\nUse np.diff to find the distances between consequtive local maxima.\nVisualize that distribution using an appropriately customized histogram.", "from sympy import pi, N\npi_digits_str = str(N(pi, 10001))[2:]\n\n\n#iterate through pi_digits_str\nf = [c for c in pi_digits_str]\n\n#find peaks in f\nx = find_peaks(f)\n\n#graph\nplt.hist(np.diff(x),10, align = 'left')\nplt.xticks(range(0,11))\nplt.title(\"Differences of Local Maxima for pi\");\nplt.xlabel(\"Difference\");\nplt.ylabel(\"Frequency\");\n\nassert True # use this for grading the pi digits histogram" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
IsaacLab/LaboratorioIntangible
T3/.ipynb_checkpoints/T3.3-Social-Minimal-Interaction-checkpoint.ipynb
agpl-3.0
[ "Social Minimal Interaction\nThere exist social processes that emerge in collective online situations –when two persons are engaged in real-time interactions– that can not be captured by a traditional offline perspective, understanding the problem in terms of an isolated individual that acts as observer exploiting its internal cognitive mechanisms to understand people.\nSome authors have pointed out the need of designing metrics capturing the ‘ability for interaction’ that subjects have as a constituent element of sensorimotor and social cognition. In these cases, dynamical processes with emergent collective properties are generated, overflowing the individual abilities of each interlocutor.\nDuring the last years, a classical experiment has been taken as inspiration for building a minimal framework known as the ‘perceptual crossing paradigm’, which has allowed a series of studies on social interactions which focus on the dynamical process of interactions as a constituent element of the emergence of the whole social system.\nPrevious analysis have been constrained to short-term dynamic responses of the player. In turn, we propose a complex systems approach based on the analysis of long-range correlations and fractal dynamics as a more suitable framework for the analysis of complex social interactions that are deployed along many scales of activity.\n1. The perceptual crossing paradigm\nFrom an experimental point of view, a minimal paradigm has been consolidated along the recent years. Perceptual crossing paradigm constitutes a simple framework for studying social online interactions, and for understanding the mechanisms that give support to social capabilities. The experiment involves two participants sitting in different rooms and interacting by moving a sensor along a shared virtual line using a computer mouse. In this experimental framework, several experiments can be designed providing us with a way to study online dyadic interaction and to analyze the perception of someone else’s agency in different situations implemented in minimal virtual worlds. Those experiments highlight that emergent coordination processes result in successful detection of agency although, on an individual level, participants can not discriminate it. Furthermore, all these results illustrate the importance of online dynamical interaction in the analysis of human social cognition.\n2. Experimental framework\nThe device of the participants consisted of a computer-mouse that moved left and right searching someone to interact. The environment consisted of a virtual one-dimensional space of 800 pixels long with both ends connected, forming a torus to avoid the singularities induced by the edges. The participant shifted a cursor in this space moving her computer-mouse. \nIn this blindfold experiment, human participants were placed in computers to interact in pairs, within a shared perceptual space, where some opponents were other human participants and some opponents were computerized agents (bots) but participants are unaware of the nature of their opponents. Concretely, participants could play against another human, an 'oscillatory agent', or a 'shadow agent'. The oscillatory agent moved according a sinusoidal function while the shadow agent replicated the movements of the player with a certain delay in time and in space.\nWhen opponents (human-human or human-bot) cross their cursors, they receive an auditive stimulation. 
No image of the cursors or their positions was displayed on the computer screen, so the auditory stimulations were the only environmental perceptions of the virtual space.\n\n2.1. Exercise\nThe script below reads the data from the experiment just described. We are going to analyze the velocity of the movement for each type of match (human-human, human-oscillatory, and human-shadow):\n\nPlot the graph of the velocity of the participant.\nObtain the main statistics of the velocity: mean, variance.\nAre there any differences related to the type of opponent?", "%matplotlib inline\nimport numpy as np\nimport scipy.io\nimport scipy.signal as signal\nfrom matplotlib import pyplot as plt\nfrom pyeeg import dfa as dfa\n\ndef readFilePerceptualCrossing(filename):\n    data = scipy.io.loadmat(filename)\n    size = len(data['dataSeries'])\n    series = [data['dataSeries'][i][0] for i in range(size)]\n    series = np.array(series)[:,:,0]\n    series = signal.decimate(series, 10, zero_phase=True)\n    series = np.diff(series)\n    oppType = [data['dataOpponentType'][i][0] for i in range(size)]\n    oppType = np.array(oppType)[:,0]\n    return [series, oppType]\n\n# Read data\n[vel_player , oppTypes] = readFilePerceptualCrossing('dataPC-player.mat')\n[vel_opponent, oppTypes] = readFilePerceptualCrossing('dataPC-opponent.mat')\n[vel_relative, oppTypes] = readFilePerceptualCrossing('dataPC-distance.mat')\nindexOscill = [i for i, x in enumerate(oppTypes) if x==\"Oscillatory\"]\nindexShadow = [i for i, x in enumerate(oppTypes) if x==\"Shadow\"]\nindexHuman = [i for i, x in enumerate(oppTypes) if x==\"Human\"]\n\nseries = vel_player\n\n# Plot figures\nplt.figure(figsize=(16, 4), dpi=72)\nindexExamples = [60, 7, 11];\nfor i,ex in enumerate(indexExamples):\n    x = series[ex,:]\n    ax = plt.subplot(1,3,(i+1))\n    plt.title(oppTypes[ex]+r\" ($\\mu$={:0.2f}\".format(np.mean(x))+r\", $\\sigma^2$={:0.2f}\".format(np.var(x))+\")\")\n    ax.set(xlabel=\"Time\", ylabel=\"Velocity\", )\n    plt.plot(x);\n", "We can display the box-plot of the velocity variability to check if there are differences between groups.\n- Try other velocity variables looking for differences between groups, e.g. velocity of the opponent, relative velocity", "# Calculate the velocity variability (standard deviation) of each series\nvel_stats=np.std(vel_player,axis=1) # velocity of the player\n#vel_stats=np.std(vel_opponent,axis=1) # velocity of the opponent\n#vel_stats=np.std(vel_relative,axis=1) # relative velocity between players\n\n# Plot figure\nplt.figure(figsize=(16, 4), dpi=72)\ndataBox = [vel_stats[indexOscill], vel_stats[indexShadow], vel_stats[indexHuman]]\nplt.boxplot(dataBox);\nplt.ylabel(\"Velocity standard deviation\")\nplt.xticks([1, 2, 3], ['Oscillatory', 'Shadow', 'Human']);\n", "3. Fractal analysis\nDespite its apparent simplicity, the perceptual crossing paradigm comprises several embedded levels of dynamic interaction, resulting in auto-correlations of the signals over different time scales.\nCritical systems typically display temporal and spatial scale invariance in the form of fractals and 1/f noise, reflecting the process of propagation of long-range interactions based on local effects. For the complex systems approach to cognitive science, self-organized criticality is appealing because it allows us to imagine systems that are able to self-regulate coordinated behaviours at different scales in a distributed manner and without a central controller.\nWe argue that 1/f noise analysis can account not only for the integratedness of the behaviour of an agential system (e.g. 
the mental, psychological characteristics of human behaviour) but also can characterize the nature of the social interaction process. In our experimental setup we have a broad range of kinds of social interaction: humans recognizing each other as such, humans interacting with bots with artificial behaviour, humans failing to recognize other humans, bots tricking humans... Can we characterize when genuine social interaction emerges? And if so, where does it lie?\nFor analyzing fractal exponents in the dynamics of social interaction we use Detrended Fluctuation Analysis (DFA). Since the slope of the fluctuations in a logarithmic plot is not always linear for all scales, we check if there is any cutoff value at which a transition to a linear relation starts. We do this by searching for negative peaks in the second derivative of F(n). We only do this on the right half of the values of n in the plot, in order to find only the cutoffs at larger scales. Once the cutoff is found, we analyze the slope of the function in the decade inferior to the cutoff value. In the cases where there is no cutoff value (as in Figure 2.c) we analyze the interval $n \\in [10^{-0.5},10^{0.5}]$.\n3.1. Exercise\nRun a DFA analysis to obtain the fractal index β.\n- Plot the fluctuation versus time-scale graphs for the three opponent types: shadow, oscillatory and human. Are there any statistical differences for each type of opponent?\n- Load the data of the movement of the opponent and re-run the analysis. Are there statistical differences now?", "def plot_dfa_perceptual(x, precision, title, drawPlot):\n    ix = np.arange(np.log2(len(x)/4), 4, -precision)\n    n = np.round(2**ix)\n    [_, n, F] = dfa(x, L=n)\n    n = n/115 # Time (seconds) = samples / sample_frequency\n    indexes = (n>10**-0.5)&(n<10**0.5) # Time interval for calculating the slope\n    P = np.polyfit(np.log(n[indexes]),np.log(F[indexes]), 1)\n    beta = 2*P[0]-1 # beta=2*alpha-1\n    if drawPlot:\n        plt.title(title+r\" ($\\beta$ = {:0.2f})\".format(beta))\n        plt.xlabel('n')\n        plt.ylabel('F(n)')\n        plt.loglog(n, F)\n        plt.loglog(n[indexes], np.power(n[indexes], P[0])*np.exp(P[1]), 'r')\n    return [beta, n, F]\n\n# Plot figures\nseries = vel_player\nplt.figure(figsize=(16, 4), dpi=72)\nindexExamples = [60, 7, 11];\nfor i,ex in enumerate(indexExamples):\n    x = series[ex,:]\n    ax = plt.subplot(1,3,(i+1))\n    plot_dfa_perceptual(x, 0.1, oppTypes[ex], True);\n", "Now, we display the boxplot of the results to get a statistical overview. For the cases of the derivative of the player's position or the opponent's position, we cannot assure a statistical difference between the distributions of β.", "# Calculate the fractal exponent beta of each series\nseries = vel_player\n\nbetas = np.zeros(len(series));\nfor i in range(len(series)):\n    [beta,_,_] = plot_dfa_perceptual(series[i,:], 0.5, oppTypes[i], False)\n    betas[i] = beta\n\n# Plot figures\nplt.figure(figsize=(16, 4), dpi=72)\ndataBox = [betas[indexOscill], betas[indexShadow], betas[indexHuman]]\nplt.boxplot(dataBox);\nplt.ylabel(r'$\\beta$');\nplt.xticks([1, 2, 3], ['Oscillatory', 'Shadow', 'Human']);\n", "4. Interaction measures\nWe propose that genuine social interaction should be manifested by emerging integratedness in collective variables capturing the dynamics of these interactions. Concretely, we propose the changes in the distance between the two participants as a candidate variable for testing this hypothesis. 
On the other hand, if social engagement truly arises from interaction dynamics, individual variables such as the changes in the position of the agent or the opponent should not present significant changes in their levels of integratedness, and thus in the exponents obtained from the 1/f analysis.\nIn order to analyze the interaction between the subjects, we take the time series of the distance between the two players (or the player and the bot agent). We compute the first derivative of the distance to obtain the variations in the distance, i.e. whether the players are approaching or distancing themselves at each moment of time. Then we use a DFA algorithm [Peng et al. (2000)] to compute the correlations in the data series of distance variations.", "# Data\nseries = vel_relative\n\n# Plot figures\nplt.figure(figsize=(16, 4), dpi=72)\nindexExamples = [60, 7, 11];\nfor i,ex in enumerate(indexExamples):\n    ax = plt.subplot(1,3,(i+1))\n    plot_dfa_perceptual(series[ex,:], 0.1, oppTypes[ex], True);", "The boxplot displays statistical differences.\nWhen the opponent is the oscillatory agent (dashed lines), we find that the values of β in the time series are around 1.5. This means that the interactions are closer to a brown noise structure, meaning that the interaction is more rigid and structured than in the other cases. This makes sense since the movement of the oscillatory agent is going to constrain the interactions into its cyclic movement structure.\nOn the other hand, when the opponent is the shadow agent (dash-dot lines), we have the opposite situation, and the interaction dynamics tend to display values of β greater than but close to 0. This means that the history of interaction is more random and uncorrelated. Finally, when the opponent is another human player (solid lines), the exponents of the interaction dynamics are around a value of β close to 1, indicating that they follow a pink noise structure between randomness and coherence. This suggests that the dynamics emerge from a situation where the movement of both players is softly assembled into a coherent coordination.\nThe 1/f spectrum results show that, in terms of the changes in the relative position of the player to its opponent, the interaction process is completely different when genuine social interaction is happening than when the player is interacting with an object with trivial (oscillatory) or complex (shadow) patterns of movement. It is interesting that pink noise emerges for a collective variable (the derivative of the distance) only in the case of human-human interaction, suggesting the hypothesis that social interaction is based on the emergence of the soft assembling of the activity of the pair of players. In the cases when this assembling is more rigid or too weak, the emergent system disappears.", "# Data\nseries = vel_relative\n\n# Plot figures\nplt.figure(figsize=(16, 4), dpi=72)\nfor i in range(len(series)):\n    [beta,_,_] = plot_dfa_perceptual(series[i,:], 0.5, oppTypes[i], False)\n    betas[i] = beta\ndataBox = [betas[indexOscill], betas[indexShadow], betas[indexHuman]]\nplt.boxplot(dataBox);\nplt.ylabel(r'$\\beta$');\nplt.xticks([1, 2, 3], ['Oscillatory', 'Shadow', 'Human']);", "References\n\nAuvray, Malika, Charles Lenay, and John Stewart. \"Perceptual interactions in a minimalist virtual environment.\" New ideas in psychology 27.1 (2009): 32-47.\nBedia, Manuel G., et al. \"Quantifying long-range correlations and 1/f patterns in a minimal experiment of social interaction.\" Frontiers in psychology 5 (2014)." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
sdpython/ensae_teaching_cs
_doc/notebooks/td2a_eco/td2a_eco_exercices_de_manipulation_de_donnees.ipynb
mit
[ "2A.eco - Mise en pratique des séances 1 et 2 - Utilisation de pandas et visualisation\nTrois exercices pour manipuler les donner, manipulation de texte, données vélib.", "from jyquickhelper import add_notebook_menu\nadd_notebook_menu()", "Données\nLes données sont téléchargeables à cette adresse : td2a_eco_exercices_de_manipulation_de_donnees.zip. Le code suivant permet de les télécharger automatiquement.", "from pyensae.datasource import download_data\nfiles = download_data(\"td2a_eco_exercices_de_manipulation_de_donnees.zip\",\n url=\"https://github.com/sdpython/ensae_teaching_cs/raw/master/_doc/notebooks/td2a_eco/data/\")\nfiles", "Exercice 1 - manipulation des textes\nDurée : 20 minutes\n\nImporter la base de données relatives aux joueurs de la Coupe du Monde 2014 >> Players_WC2014.xlsx\nDéterminer le nombre de joueurs dans chaque équipe et créer un dictionnaire { équipe : Nombre de joueurs}\nDéterminer quels sont les 3 joueurs qui ont couvert le plus de distance. Y a t il un biais de sélection ?\nParmis les joueurs qui sont dans le premier décile des joueurs plus rapides, qui a passé le plus clair de son temps à courrir sans la balle ?\n\nExercice 2 - Les villes\nDurée : 40 minutes\n\nImporter la base des villes villes.xls\nLes noms de variables et les observations contiennent des espaces inutiles (exemple : 'MAJ ') : commnecer par nettoyer l'ensemble des chaines de caractères (à la fois dans les noms de colonnes et dans les observations)\nTrouver le nombre de codes INSEE différents (attention aux doublons)\nComment calculer rapidement la moyenne, le nombre et le maximum pour chaque variable numérique ? (une ligne de code)\nCompter le nombre de villes dans chaque Region et en faire un dictionnaire où la clé est la région et la valeur le nombre de villes\n\nReprésenter les communes en utilisant \n\nmatplotlib \nune librairie de cartographie (ex : folium) \n\n\n\nExercice 3 - Disponibilité des vélibs\nDurée : 30 minutes\n\n\nImporter les données sous la forme d'un dataFrame \n\nvelib_t1.txt - avec les données des stations à un instant $t$\nvelib_t2.txt - avec les données des stations à un instant $t + 1$\n\n\n\nReprésenter la localisation des stations vélib dans Paris\n\nreprésenter les stations en fonction du nombre de places avec un gradient\n\n\n\nComparer pour une station donnée l'évolution de la disponibilité (en fusionnant les deux bases $t$ et $t+1$)\n\nreprésenter les stations qui ont connu une évolution significative (plus de 5 changements) avec un gradient de couleurs" ]
[ "markdown", "code", "markdown", "code", "markdown" ]
shugert/DeepLearning
Pixel Regression - Step by Step.ipynb
mit
[ "Author: <a href=\"http://www.shugert.com.mx\">Samuel Noriega</a> | See full post at <a href=\"https://3blades.io/blog\">3blades</a>\nPixel Regression.\nCan we recover an image by learning a deep regression map from pixels $(x,y)$ to colors $(r,g,b)$?\nOur target image will be Mona Lisa:", "import matplotlib.image as mpimg\nimport matplotlib.pylab as plt\nimport numpy as np\n%matplotlib inline\n\nim = mpimg.imread(\"data/monalisa.jpg\")\n\nplt.imshow(im)\nplt.show()\nim.shape", "Our training dataset will be composed of pixels locations and input and pixel values as output:", "X_train = []\nY_train = []\nfor i in range(im.shape[0]):\n for j in range(im.shape[1]):\n X_train.append([float(i),float(j)])\n Y_train.append(im[i][j])\n \nX_train = np.array(X_train)\nY_train = np.array(Y_train)\nprint 'Samples:', X_train.shape[0]\nprint '(x,y):', X_train[0],'\\n', '(r,g,b):',Y_train[0]", "Our objective is to train a deep MLP that is able to reconstruct the image:", "import keras\nfrom keras.models import Sequential\nfrom keras.layers.core import Dense, Activation, Dropout\nfrom keras.optimizers import Adam, RMSprop, Nadam\n\n\n# Model architecture\nmodel = Sequential()\n\nmodel.add(Dense(500, input_dim=2, init='uniform'))\nmodel.add(Activation('relu'))\n\nmodel.add(Dense(500, init='uniform'))\nmodel.add(Activation('relu'))\n\nmodel.add(Dense(500, init='uniform'))\nmodel.add(Activation('relu'))\n\nmodel.add(Dense(500, init='uniform'))\nmodel.add(Activation('relu'))\n\nmodel.add(Dense(500, init='uniform'))\nmodel.add(Activation('relu'))\n\nmodel.add(Dense(3, init='uniform'))\nmodel.add(Activation('linear'))\n\nmodel.summary()\n\n# Compile model\nmodel.compile(loss='mean_squared_error',\n optimizer=Nadam(),\n metrics=['accuracy'])\n# Why use NAdam Optimizer?\n# Much like Adam is essentially RMSprop with momentum, Nadam is Adam RMSprop with Nesterov momentum.\n\n# use this cell to find the best model architecture\nhistory = model.fit(X_train, Y_train, nb_epoch=1000, shuffle=True, verbose=1, batch_size=500)\nY = model.predict(X_train, batch_size=10000)\nk = 0\nim_out = im[:]\nfor i in range(im.shape[0]):\n for j in range(im.shape[1]):\n im_out[i,j]= Y[k]\n k += 1\n \nprint \"Mona Lisa by DL\"\nplt.imshow(im_out)\nplt.show()\n\nplt.imshow(im_out)\nplt.show()\n\n# summarize history for accuracy\nplt.plot(history.history['acc'], 'b')\nplt.title('Model Accuracy')\nplt.xlabel('epoch')\nplt.show()\n\n# summarize history for loss\nplt.plot(history.history['loss'], 'r')\nplt.title('Model Loss')\nplt.xlabel('epoch')\nplt.show()", "Conclusions\nSeveral test were made in order to achieve the presented result. Optimization methods play an important role in the image out quality. In this case I decided to use Nadam optimizer (Much like Adam is essentially RMSprop with momentum, Nadam is Adam RMSprop with Nesterov momentum.) and the result out ligued results from other optimizers such at Adam itself, RMSprop, Adagrad, Adadelta adn SDG.\nThe numer of neurons and layers used on the model with a uniform init played an important role on the speed of the script as well as the quality of the output." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Olsthoorn/TransientGroundwaterFlow
Syllabus_in_notebooks/Sec7_head-simulation-convolution.ipynb
gpl-3.0
[ "Groundwater system behavior simulation with convolution\nIHE, transient groundwater\nOlsthoorn, 2019-01-05\nA solution, which shows the deline of the head in a strip due to bleeding to the fixed heads at both ends.\n$$ s(x, t) = A \\frac 4 \\pi \\sum _{j=1} ^\\infty \\left{\n\\frac {(-1)^{j-1}} {2 j - 1}\n\\cos \\left[(2 j -1) \\frac \\pi 2 \\frac x b \\right]\n\\exp \\left[ -(2 j - 1)^2 \\left( \\frac \\pi 2 \\right)^2 \\frac {kD} {b^2 S} t \\right]\n\\right} $$\nThe solution starts at an initial head equal to $A$ and then decays to a uniform head equal to 0. The head at $s(\\pm b, t) = 0$.\nTo make sure that both reflex decay of an initial head of $a$ above the fixed heads at $x \\pm L/2 = b$, we have to subtract the sudden head solutions from the initial head $A$\nWe may use convolution to simulate the impact of a time series of recharge surplus values on the groundwater head in a region modelled as a cross section representing a uniform aquifer of width $2b$ bounded at both sides by fixed-head boundaries.\nTo do this we may set $A = \\frac {p \\Delta t} S $ in which $ p \\Delta t$ is the total net recharge using a simulation time step $\\Delta t$, of for instance 1 day. It is the effect of a sudden net recharge impuls $ p \\Delta t$. Clearly this recharge well generally not arrive immediately at the water table. Therefore, the recharge should in general be spread over some time to taka into account storage and distribution of the reharge by the vadose zone. We can achieve this by filtering the input reharge time series, so as to compute a moving average over some period, which converts intput into the soil to arrival at the water table, of which we observe the effect by means of observation wells.\nSuch pre-filtering is readily done by the function scipy.signal.lfilter(b, a, x), where x is the input time series, and b are the moving average weights, given as an np.ndarray. a=1 in our case. So if we simulate a daily time series, and b is to represent a uniform moving average over say 30 days, then b would be an array of 30 elements all unity divided by 30, so that the sum of the weights equals 1.\nNote that this filtering is also called convolution.\nNext we want to used the filtered input as input to a simultation of the water table. This can also be done with convolution. \nThe expression above is a step response it tells what happens after after a sudden uniform unit head increas does during the time after the event. If we consider the recharge surplus of each day as such an event, and hence assume $\\Delta t = 1$ d for now, we can just do superpostion of the effects on a given time due to all events in the past. As $s(x, \\tau)$ represents the effect at time t after the event, it also represents the effect now due to an event at time $t - \\tau$. Hence by letting $s(x, \\tau)$ backward in time from the current time $t$ and multiplying it with the corresponding recharge surplusses, i.e. $p_{t - \\tau} \\delta \\tau$ and summing (superimposing the result, we obtain the total impact of the past on the current time $t$. This is convolution, en efficient way of superposition. It can be done using the mentiond lfilter(..) function, which resides in module `scipy.special'.", "import numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.special import erfc\nfrom scipy.signal import lfilter\nimport pandas as pd", "System response to recharge\nThe groundwter system's response to a unit constant recharge (see above expression) is captured in a function for convenience. 
It's called step(..)", "def step(x, t, S, kD, b):\n    'Return the expression as a step response for all x and a single t'\n    if isinstance(x, np.ndarray):\n        h = np.zeros_like(x)\n    elif isinstance(t, np.ndarray):\n        h = np.zeros_like(t)\n    else:\n        h = 0\n    a = 1 / S\n    for j in range(1,20):\n        h += a * 4 / np.pi * ((-1)**(j-1) / (2 * j - 1) *\n            np.cos((2 * j - 1) * np.pi / 2 * x / b) *\n            np.exp(- (2 * j - 1)**2 * (np.pi / 2)**2 * kD /(b**2 * S) * t))\n    return h", "Basic behavior of the function in the expression above\nThe expression above is the decay from an initial uniform head. This decay is shown here for the whole cross section and different values of time $t$.\nFirst the data for an arbitrary case.\nThen the simulation. Note that we multiply the step by $a \\, S$ to get the result for an initial head equal to $a$, as the step response divides by $S$ internally to make sure it represents the response to a unit recharge pulse.", "L = 150 # m (strip width)\nb = L/2 # [m] half width of strip\nx = np.linspace(-L/2, L/2, 201) # points, taking left at zero.\nkD = 600 # m2/d\nS = 0.1 # [-]\na = 1.0 # m, sudden head change at x = -L/2\ntimes = np.linspace(0, 0.5, 11)[1:] # \n\nplt.title('Symmetric solution for head decay in strip')\nplt.xlabel('x [m]')\nplt.ylabel('head [m]')\nplt.grid()\nplt.xlim((-b, b))\nfor t in times:\n    h = a * S * step(x, t, S, kD, b)\n    plt.plot(x, h, label='t={:.1f}'.format(t))\nplt.legend()\nplt.show()", "Get Meteo\nThe meteo data are in a simple text file. They are most easily read in by pandas to yield a pandas DataFrame.\nThe DataFrame is dressed up with column names and its index is also given a name.\nFinally we add a column days to the DataFrame, holding the number of days since the first date in the file. We'll use these days as our simulation time. We will nevertheless use the index of the DataFrame as the time axis in our plots to reveal the true dates.", "if False:\n    meteo_file = 'PE-00-08.txt' # in mm/d\n    meteo = pd.read_csv(meteo_file, index_col=0, header=None, parse_dates=True, dayfirst=True)\n    meteo['P', 'E'] /= 1000. # to m/d\nelse:\n    meteo_file = 'DeBiltPE.txt' # in m/d\n    meteo = pd.read_csv(meteo_file, index_col=0, header=0, parse_dates=True)\n\nmeteo.columns = ['P', 'E'] # give the columns a name\nmeteo.index.name = 'time' # give the index a name\n\n# add column 'days' that are days since first day of the data (for convenience only)\nmeteo['days'] = (meteo.index - meteo.index[0]) / np.timedelta64(1, 'D')\n\n# show what we've got (P and E are in m/d here)\n# first 5 lines\nmeteo.iloc[:5]", "Convolution\nThe steady state head is\n$$ s_{steady} = \\frac p {2 kD} (b^2 - x^2) $$\nFor comparison, we will also plot this steady head for $p$ equal to the average recharge.", "# new data\n\nL = 6000 # m (strip width)\nb = L/2 # [m] half width of strip\nx = 0. 
# point taken at cross section\nkD = 600 # m2/d\nS = 0.25 # [-]\na = 1.0 # m, sudden head change at x = -L/2\ntimes = np.linspace(0, 0.5, 11)[1:] # \n\n# Get the dynamic response\n\n# Get time tau directly from the column days, then the vector tau is of equal length to the total\n# time series, which will yield maximum accuracy (no truncation errors).\ntau = meteo['days'].values\n\n# Then compute the response\nresponse = step(x, tau, S, kD, b)\n\n# Do the convolution\n\nrch = meteo['P'] - meteo['E'] # recharge in m/d\n\n# Design a vadose-zone filter (here just uniform, but any shape can be chosen)\nn = 1 # length of vadose filter\nvadose_filter = np.ones(n) / n\n\n# Filter the recharge to get the recharge at the water table\nrch = lfilter(vadose_filter, 1, rch)\n\n# steady state for mean recharge\nsteady = np.mean(rch) / (2 * kD) * (b**2 - x**2)\n\n# add the simulated result as a column to the data frame (this is not necessary, though)\nmeteo['sim'] = lfilter(response, 1, rch)\n\n# set up the plot\nplt.title('simulated head $b$={:.0f} m, $kD$={:.0f} m$^2$/d, $S$={:.3f} [-]'.format(b, kD, S))\nplt.xlabel('time -->')\nplt.ylabel('head [m]')\nplt.grid()\n\n# plot simulated result\nplt.plot(meteo.index, meteo['sim'], label='simulated')\n\n# plot the simulation for a constant recharge equal to the average value\nplt.plot(meteo.index, lfilter(response, 1, np.mean(rch) * np.ones_like(rch)), label='constant mean recharge')\n\n# plot the steady state result for the case of average recharge\nplt.plot(meteo.index[[0, -1]], [steady, steady], 'k', lw=3, label='steady state')\n\nplt.legend()\nplt.show()", "Discussion\nThis simulation by convolution with a vadose-zone prefilter is extremely efficient in simulating the response of a general groundwater system to time-varying recharge (values positive and negative). Different groundwater systems can be simulated by adapting the parameters: the prefilter, which tells the rate at which recharge reaches the water table, the size of the basin (half-width $b$), the transmissivity of the aquifer and its storage coefficient, which must, of course, be the specific yield because we deal with a free water table.\nThis simple simulator is effective in showing how the characteristics of a groundwater system carry on in its response to recharge over longer periods.\nThe steady-state result for the average recharge functions as a check.\nThe transient result of constant recharge serves to clearly show the characteristic time of the system or, for that matter, its half time, i.e. the time it takes to reach half the value of the steady-state head." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tclaudioe/Scientific-Computing
SC1/03_floating_point_arithmetic.ipynb
bsd-3-clause
[ "<center>\n <h1> INF-285 - Computación Científica / ILI-285 - Computación Científica I</h1>\n <h2> Floating Point Arithmetic </h2>\n <h2> <a href=\"#acknowledgements\"> [S]cientific [C]omputing [T]eam </a> </h2>\n <h2> Version: 1.17</h2>\n</center>\nTable of Contents\n\nIntroduction\nThe nature of floating point numbers\nVisualization of floating point numbers\nWhat is the first integer that is not representable in double precision?\nLoss of significance\nLoss of significance in funcion evaluation\nAnother analysis (example from textbook)\nAcknowledgements", "import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline", "<div id='intro' />\n\nIntroduction\nHello! This notebook is an introduction to how our computers handle the representation of real numbers using double-precision floating-point format. To understand the contents of this notebook you should at least have a basic notion of how binary numbers work.\nThe aforementioned format occupies 64 bits which are divided as follows:\n\n1 bit for the sign\n11 bits for the exponent\n52 bits for the mantissa\n\nThis means that the very next representable number after $1$ is $1 + 2^{-52}$, and their difference, $2^{-52}$, is $\\epsilon_{mach}$.\nAdditionally, if you'd like to quickly go from a base-2 integer to a base-10 integer and viceversa, Python has some functions that can help you.", "int('0b11', 2)\n\nbin(9)\n\nbin(2**53)", "<div id='nature' />\n\nThe nature of floating point numbers\nAs we know, float representations of real numbers are just a finite and bounded representation of them. Interestingly, these floating numbers are distributed across the real number line. \nTo see this, it's important to keep in mind the following property:\n\\begin{equation} \\left|\\frac{\\text{fl}(x)-x}{x}\\right| \\leq \\frac{1}{2} \\epsilon_{\\text{mach}} \\end{equation}\nwhere $\\text{fl}(x)$ is the float representation of $x \\in R$. What it says is that the relative error in representing any non-zero real number x, is bounded by a quantity that depends on the system representation ($\\epsilon_{\\text{mach}}$).\nMaybe now you're thinking: what relationship does this have with the distribution of floating point numbers? You see, if we rewrite the previous property as follows:\n\\begin{equation} |\\text{fl}(x)-x| \\leq \\frac{1}{2} \\epsilon_{\\text{mach}} |x| \\end{equation}\nit becomes clear: The absolute error (distance) between a real number and its floating point representation is proportional to the real number's magnitude.\nIntuitively speaking, if the representation error of a number increases as its magnitude increases, then it's quite natural that the distance between a floating point number and the next representable floating point number will increase as the magnitude of such number increases (and conversely). Can you prove it? For now we will prove it experimentally.\nWe will use a library named bitstring to handle different number representations. 
You can install it with:\npip install bitstring", "import bitstring as bs", "The next two functions are self-explanatory:\n\nnext_float(f) computes the next representable float number.\nepsilon(f) computes the difference between f and the next representable float number", "def next_float(f):\n    # pack the value as a double-precision float\n    b = bs.pack('>d', f)\n    \n    # extract the mantissa as an unsigned int\n    # and add 1 to it\n    m = b[12:].uint\n    m += 1\n    \n    # put the result back in its place\n    b[12:] = m\n    \n    return b.float\n\ndef epsilon(f):\n    next_f = next_float(f)\n    return next_f - f", "So if we compute epsilon(1) we should get machine epsilon. Let's try it:", "epsilon(1)", "In order to prove our hypothesis, we will create an array of values: [1e-32, 1e-31, ..., 1e31, 1e32] and compute their corresponding epsilon.", "#values between 10**-32 and 10**+32\nvalues = np.array([10**i for i in range(-32,32)]).astype(float)\n\n#corresponding epsilons\nvepsilon = np.vectorize(epsilon)\neps = vepsilon(values)", "We now include a comparison between a linear scale plot and a loglog scale plot. Which one is more useful here?", "fig = plt.figure(figsize=(10,5))\n\nplt.subplot(121)\nplt.plot(values, eps,'.',markersize=20)\nplt.xlabel('Values')\nplt.ylabel('Corresponding Epsilons')\nplt.title('Epsilons v/s Values')\nplt.grid(True)\n\nplt.subplot(122)\nplt.loglog(values, eps,'.')\nplt.xlabel('Values')\nplt.ylabel('Corresponding Epsilons')\nplt.title('Epsilons v/s Values')\nplt.grid(True)\n\nfig.tight_layout()\nplt.show()", "As you can see, the hypothesis was right. In other words: Floating point numbers are not linearly distributed across the real numbers, and the distance between them is proportional to their magnitude. Tiny numbers (~ 0) are closer to each other than big numbers are.\n<div id='visualization' />\n\nVisualization of floating point numbers\nWith the help of the bitstring library we can write a function to visualize floating point numbers in their binary representation", "def to_binary(f):\n    b = bs.pack('>d', f)\n    b = b.bin\n    # show sign + exponent + mantissa\n    print(b[0]+' '+b[1:12]+ ' '+b[12:])", "Let's see some interesting examples", "to_binary(1.)", "int('0b01111111111', 2)", "to_binary(1.+epsilon(1.))", "to_binary(+0.)", "to_binary(-0.)", "to_binary(np.inf)", "to_binary(-np.inf)", "to_binary(np.nan)", "to_binary(-np.nan)", "to_binary(2.**-1074)", "print(2.**-1074)", "to_binary(2.**-1075)", "print(2.**-1075)", "to_binary(9.4)", "<div id='firstinteger' />\n\nWhat is the first integer that is not representable in double precision?\nRecall that $\\epsilon_{\\text{mach}}=2^{-52}$ in double precision.", "to_binary(1)\nto_binary(1+2**-52)", "This means that if we want to store any number in the interval $[1,1+\\epsilon_{\\text{mach}}]$, only the numbers $1$ and $1+\\epsilon_{\\text{mach}}$ will be stored. For example, compare the exponent and the mantissa in the previous cell with the following outputs:", "for i in np.arange(11):\n    to_binary(1+i*2**-55)", "We can now scale this difference such that the scaling factor multiplied by $\\epsilon_{\\text{mach}}$ is one. The factor will be $2^{52}$. This means $2^{52}\\,\\epsilon_{\\text{mach}}=1$. Repeating the same example as before, but with the scaling factor, we obtain:", "for i in np.arange(11):\n    to_binary((1+i*2**-55)*2**52)", "Which means we can only exactly store the numbers:", "to_binary(2**52)\nto_binary(2**52+1)", "The distance from $2^{52}$ to the next representable number is $1$! 
So, what would happen if we were to store $2^{53}+1$?", "to_binary(2**53)\nto_binary(2**53+1)", "We can't store the integer $2^{53}+1$! Thus, the first integer not representable is $2^{53}+1$.\n<div id='loss' />\n\nLoss of significance\nAs we mentioned, there's a small leap between 1 and the next representable number, which means that if you want to represent a number between those two, you won't be able to do so; that number is nonexistent as far as the computer is concerned, so it'll have to round it to a representable number before storing it in memory.", "a = 1.\nb = 2.**(-52) #emach\nresult_1 = a + b # arithmetic result is 1.0000000000000002220446049250313080847263336181640625\nresult_1b = result_1-1.0\nprint(\"{0:.1000}\".format(result_1))\nprint(result_1b)\nprint(b)\n\nc = 2.**(-53)\nresult_2 = a + c # arithmetic result is 1.00000000000000011102230246251565404236316680908203125\nnp.set_printoptions(precision=16)\nprint(\"{0:.1000}\".format(result_2))\nprint(result_2-a)\n\nto_binary(result_2)\nto_binary(result_2-a)\n\nd = 2.**(-53) + 2.**(-54)\n\nresult_3 = a + d # arithmetic result is 1.000000000000000166533453693773481063544750213623046875\nprint(\"{0:.1000}\".format(result_3))\nto_binary(result_3)\nto_binary(d)", "As you can see, if you try to save a number between $1$ and $1 + \\epsilon _{mach}$, it will have to be rounded (according to some criteria) to a representable number before being stored, thus creating a difference between the <i>real</i> number and the <i>stored</i> number. This is an example of loss of significance.\nDoes that mean that the \"leap\" between representable numbers is <i>always</i> going to be $\\epsilon _{mach}$? Of course not! Some numbers will require smaller leaps, and some others will require bigger leaps. \nIn any interval of the form $[2^n,2^{n+1}]$ for a representable $n\\in \\mathbb{Z}$, the leap is constant. For example, all the numbers between $2^{-1}$ and $2^0$ (but excluding $2^0$) have a distance of $\\epsilon _{mach}/2$ between them. All the numbers between $2^0$ and $2^1$ (excluding $2^1$) have a distance of $\\epsilon _{mach}$ between them. Those between $2^1$ and $2^2$ (not including $2^2$) have a distance of $2\\,\\epsilon _{mach}$ between them, and so on and so forth.", "e = 2.**(-1)\nf = b/2. # emach/2\n\nresult_4 = e + f # 0.50000000000000011102230246251565404236316680908203125\nprint(\"{0:.1000}\".format(result_4))\n\nresult_5 = e + b # 0.5000000000000002220446049250313080847263336181640625\nprint(\"{0:.1000}\".format(result_5))\n\ng = b/4.\n\nresult_6 = e + g # 0.500000000000000055511151231257827021181583404541015625\nprint(\"{0:.1000}\".format(result_6))", "We'll let the students find some representable numbers and some non-representable numbers. It's important to note that loss of significance can occur in many operations and functions other than the simple addition of two numbers.", "num_1 = a\nnum_2 = b\nresult = a + b\nprint(\"{0:.1000}\".format(result))", "<div id='func' />\n\nLoss of significance in function evaluation\nLoss of significance is present in the representation of functions too. A classical example (which you can see in the guide book) is the following function: \n\\begin{equation}f(x)= \\frac{1 - \\cos x}{\\sin^{2}x} \\end{equation}\nApplying trigonometric identities, we can obtain the 'equivalent' function:\n\\begin{equation}f(x)= \\frac{1}{1 + \\cos x} \\end{equation}\nBoth of these functions are apparently equal. 
Nevertheless, their graphs tell a different story when $x$ is equal to zero.", "x = np.arange(-10,10,0.1)\ny = (1-np.cos(x))/(np.sin(x)**2)\nplt.figure()\nplt.plot(x,y,'.')\nplt.grid(True)\nplt.show()\n\nx = np.arange(-10,10,0.1)\ny = 1/(1+np.cos(x))\nplt.figure()\nplt.plot(x,y,'.')\nplt.grid(True)\nplt.show()\n\nx = np.arange(-1,1,0.01)\ny = (1-np.cos(x))/np.sin(x)**2\nplt.figure()\nplt.plot(x,y,'.',markersize=10)\nplt.grid(True)\nplt.show()\n\ny = 1/(1+np.cos(x))\nplt.figure()\nplt.plot(x,y,'.',markersize=10)\nplt.grid(True)\nplt.show()", "When $x$ is equal to zero, the first function has an indeterminate form, and just before reaching that point the computer subtracts numbers that are almost equal. This leads to a loss of significance, driving the computed expression to zero near this point. However, rewriting this expression as the second function eliminates this subtraction, fixing the error in its calculation when $x=0$.\nIn conclusion, two representations of a function can be equal to us, but different for the computer!\n<div id='another' />\n\nAnother analysis (example from textbook)", "f1 = lambda x: (1.-np.cos(x))/(np.sin(x)**2)\nf2 = lambda x: 1./(1+np.cos(x))\nf3 = lambda x: (1.-np.cos(x))\nf4 = lambda x: (np.sin(x)**2)\nx = np.logspace(-19,0,20)[-1:0:-1]\no1 = f1(x)\no2 = f2(x)\no3 = f3(x)\no4 = f4(x)\n\nprint(\"x, f1(x), f2(x), f3(x), f4(x)\")\nfor i in np.arange(len(x)):\n    print(\"%1.10f, %1.10f, %1.10f, %1.25f, %1.25f\" % (x[i],o1[i],o2[i],o3[i],o4[i]))", "Libraries\nPlease make sure you make all of them your BFF!!\n\nNumpy - IEEE 754 Floating Point Special Values: https://docs.scipy.org/doc/numpy-1.10.0/user/misc.html\nMatplotlib: http://matplotlib.org/examples/pylab_examples/simple_plot.html\nNice Trick: https://www.dataquest.io/blog/jupyter-notebook-tips-tricks-shortcuts/\n\n<div id='acknowledgements' />\n\nAcknowledgements\n\nMaterial originally created by professor Claudio Torres (ctorres@inf.utfsm.cl) and assistants: Laura Bermeo, Alvaro Salinas, Axel Simonsen and Martín Villanueva. v.1.1. DI UTFSM. March 2016.\nUpdate April 2020 - v1.14 - C.Torres : Fixing some issues.\nUpdate April 2020 - v1.15 - C.Torres : Adding subplot.\nUpdate April 2020 - v1.16 - C.Torres : Adding value of numerator and denominator in example of f1 = lambda x: (1.-np.cos(x))/(np.sin(x)** 2).\nUpdate April 2020 - v1.17 - C.Torres : Adding section \"What is the first integer that is not representable in double precision?\"" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
neuropsychology/NeuroKit.py
examples/Bio/bio.ipynb
mit
[ "Biosignals Processing in Python\nWelcome to the course for biosignals processing using NeuroKit and python. You'll find the necessary files to run this example in the examples section.\nPreprocessing\nPreparation", "# Import packages\nimport neurokit as nk\nimport pandas as pd\nimport numpy as np\nimport seaborn as sns\nimport matplotlib\n\n# Plotting preferences\n%matplotlib inline\nmatplotlib.rcParams['figure.figsize'] = [14.0, 10.0] # Bigger figures\nsns.set_style(\"whitegrid\") # White background\nsns.set_palette(sns.color_palette(\"hls\")) # Better colours\n\n# Download data\ndf = pd.read_csv(\"https://raw.githubusercontent.com/neuropsychology/NeuroKit.py/master/examples/Bio/bio_100Hz.csv\")\n# Plot it\ndf.plot()", "df contains 2.5 minutes of data recorded at 100Hz (2.5 x 60 x 100 = 15000 data points). There are 4 channels, EDA, ECG, RSP and the Photosensor used to localize events. In the present case, there are four events, corresponding to emotionally negative and neutral pictures presented for 3 seconds.\nProcessing\nBiosignals processing can be done quite easily using NeuroKit with the bio_process() function. Simply provide the appropriate biosignal channels and additional channels that you want to keep (for example, the photosensor), and bio_process() will take care of the rest. It will returns a dict containing a dataframe df, including the raw as well as processed signals, and features relevant to each provided signal.", "# Process the signals\nbio = nk.bio_process(ecg=df[\"ECG\"], rsp=df[\"RSP\"], eda=df[\"EDA\"], add=df[\"Photosensor\"], sampling_rate=100)\n# Plot the processed dataframe, normalizing all variables for viewing purpose\nnk.z_score(bio[\"df\"]).plot()", "Woah, there's a lot going on there! From 3 variables of interest (ECG, RSP and EDA), bio_process() produced 18 signals. Moreover, the bio dict contains three other dicts, ECG, RSP and EDA containing other features that cannot be simply added in a dataframe. Let's see what we can do with that.\nBio Features Extraction\nECG Signal quality", "bio[\"ECG\"][\"Average_Signal_Quality\"] # Get average quality", "As you can see, the average quality of the ECG signal is 99%. See this TO BE DONE tutorial for how to record a good signal.\nHeart Beats / Cardiac Cycles\nLet's take a look at each individual heart beat, synchronized by their R peak. You can plot all of them by doing the following:", "pd.DataFrame(bio[\"ECG\"][\"Cardiac_Cycles\"]).plot(legend=False) # Plot all the heart beats", "Heart Rate Variability (HRV)\nA large number of HRV indices can be found by checking out bio[\"ECG\"][\"HRV\"].\nRespiratory Sinus Arrythmia (RSA)\nOne of the most popular RSA algorithm (P2T) implementation can be found in the main data frame.", "nk.z_score(bio[\"df\"][[\"ECG_Filtered\", \"RSP_Filtered\", \"RSA\"]])[1000:2500].plot()\n", "find_events returns a dict containing onsets and durations of each event. Here, it correctly detected only one event. Then, we're gonna crop our data according to that event. The create_epochs function returns a list containing epochs of data corresponding to each event. As we have only one event, we're gonna select the 0th element of that list. \nEvent-Related Analysis\nThis experiment consisted of 4 events (when the photosensor signal goes down), which were 2 types of images that were shown to the participant: \"Negative\" vs \"Neutral\". 
Back to the experiment: the following list gives the condition order.", "condition_list = [\"Negative\", \"Neutral\", \"Neutral\", \"Negative\"]", "Find Events\nFirst, we must find event onsets within our photosensor's signal using the find_events() function. Specify a cut direction (should it select events that are higher or lower than the threshold?).", "events = nk.find_events(df[\"Photosensor\"], cut=\"lower\")\nevents", "As we can see, find_events() returns a dict containing onsets and durations for each event. Here, each event lasts for approximately 300 data points (= 3 seconds sampled at 100Hz). \nCreate Epochs\nThen, we have to split our dataframe into epochs, i.e. segments of data around the event. We set our epochs to start one second before the event start (onset=-100) and to last for 700 data points, in our case equal to 7 s (since the signal is sampled at 100Hz).", "epochs = nk.create_epochs(bio[\"df\"], events[\"onsets\"], duration=700, onset=-100)", "Let's plot the first epoch.", "nk.z_score(epochs[0][[\"ECG_Filtered\", \"EDA_Filtered\", \"Photosensor\"]]).plot()", "Extract Event-Related Features\nWe can then iterate through the epochs and store the interesting results in a new dict that will be, at the end, converted to a dataframe.", "data = {} # Initialize an empty dict\nfor epoch_index in epochs:\n    data[epoch_index] = {} # Initialize an empty dict for the current epoch\n    epoch = epochs[epoch_index]\n\n    # ECG: label-based slicing with .loc (.ix is deprecated in recent pandas)\n    baseline = epoch[\"ECG_RR_Interval\"].loc[-100:0].mean() # Baseline\n    rr_max = epoch[\"ECG_RR_Interval\"].loc[0:400].max() # Maximum RR interval\n    data[epoch_index][\"HRV_MaxRR\"] = rr_max - baseline # Corrected for baseline\n\n    # EDA - SCR\n    scr_max = epoch[\"SCR_Peaks\"].loc[0:600].max() # Maximum SCR peak\n    if np.isnan(scr_max):\n        scr_max = 0 # If no SCR, consider the magnitude to be 0\n    data[epoch_index][\"SCR_Magnitude\"] = scr_max\n\ndata = pd.DataFrame.from_dict(data, orient=\"index\") # Convert to a dataframe\ndata[\"Condition\"] = condition_list # Add the conditions\ndata # Print\n", "Plot Results", "sns.boxplot(x=\"Condition\", y=\"HRV_MaxRR\", data=data)", "In accordance with the literature, we observe that the RR interval is higher in the negative than in the neutral condition.", "sns.boxplot(x=\"Condition\", y=\"SCR_Magnitude\", data=data)", "Along the same lines, the skin conductance response (SCR) is higher in the negative condition compared to the neutral one. Overall, these results suggest that the acquired biosignals are sensitive to the cognitive processing of emotional stimuli." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
dev/_downloads/aec45e1f20057e833cee12bb6bd292dc/10_evoked_overview.ipynb
bsd-3-clause
[ "%matplotlib inline", "The Evoked data structure: evoked/averaged data\nThis tutorial covers the basics of creating and working with :term:evoked\ndata. It introduces the :class:~mne.Evoked data structure in detail,\nincluding how to load, query, subset, export, and plot data from an\n:class:~mne.Evoked object. For details on creating an :class:~mne.Evoked\nobject from (possibly simulated) data in a :class:NumPy array\n&lt;numpy.ndarray&gt;, see tut-creating-data-structures.\nAs usual, we'll start by importing the modules we need:", "import mne", "Creating Evoked objects from Epochs\n:class:~mne.Evoked objects typically store EEG or MEG signals that have\nbeen averaged over multiple :term:epochs, which is a common technique for\nestimating stimulus-evoked activity. The data in an :class:~mne.Evoked\nobject are stored in an :class:array &lt;numpy.ndarray&gt; of shape\n(n_channels, n_times) (in contrast to an :class:~mne.Epochs object,\nwhich stores data of shape (n_epochs, n_channels, n_times)). Thus, to\ncreate an :class:~mne.Evoked object, we'll start by epoching some raw data,\nand then averaging together all the epochs from one condition:", "root = mne.datasets.sample.data_path() / 'MEG' / 'sample'\nraw_file = root / 'sample_audvis_raw.fif'\nraw = mne.io.read_raw_fif(raw_file, verbose=False)\n\nevents = mne.find_events(raw, stim_channel='STI 014')\n# we'll skip the \"face\" and \"buttonpress\" conditions to save memory\nevent_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,\n 'visual/right': 4}\nepochs = mne.Epochs(raw, events, tmin=-0.3, tmax=0.7, event_id=event_dict,\n preload=True)\nevoked = epochs['auditory/left'].average()\n\ndel raw # reduce memory usage", "You may have noticed that MNE informed us that \"baseline correction\" has been\napplied. This happened automatically during creation of the\n:class:~mne.Epochs object, but may also be initiated (or disabled)\nmanually. We will discuss this in more detail later.\nThe information about the baseline period of :class:~mne.Epochs is\ntransferred to derived :class:~mne.Evoked objects to maintain provenance as\nyou process your data:", "print(f'Epochs baseline: {epochs.baseline}')\nprint(f'Evoked baseline: {evoked.baseline}')", "Basic visualization of Evoked objects\nWe can visualize the average evoked response for left-auditory stimuli using\nthe :meth:~mne.Evoked.plot method, which yields a butterfly plot of each\nchannel type:", "evoked.plot()", "Like the plot() methods for :meth:Raw &lt;mne.io.Raw.plot&gt; and\n:meth:Epochs &lt;mne.Epochs.plot&gt; objects,\n:meth:evoked.plot() &lt;mne.Evoked.plot&gt; has many parameters for customizing\nthe plot output, such as color-coding channel traces by scalp location, or\nplotting the :term:global field power alongside the channel traces.\nSee tut-visualize-evoked for more information on visualizing\n:class:~mne.Evoked objects.\nSubsetting Evoked data\n.. sidebar:: Evokeds are not memory-mapped\n:class:~mne.Evoked objects use a :attr:~mne.Evoked.data attribute\n rather than a :meth:~mne.Epochs.get_data method; this reflects the fact\n that the data in :class:~mne.Evoked objects are always loaded into\n memory and never memory-mapped_ from their location on disk (because they\n are typically much smaller than :class:~mne.io.Raw or\n :class:~mne.Epochs objects).\nUnlike :class:~mne.io.Raw and :class:~mne.Epochs objects,\n:class:~mne.Evoked objects do not support selection by square-bracket\nindexing. 
Instead, data can be subsetted by indexing the\n:attr:~mne.Evoked.data attribute:", "print(evoked.data[:2, :3]) # first 2 channels, first 3 timepoints", "To select based on time in seconds, the :meth:~mne.Evoked.time_as_index\nmethod can be useful, although beware that depending on the sampling\nfrequency, the number of samples in a span of given duration may not always\nbe the same (see the time-as-index section of the tutorial on\nRaw data &lt;tut-raw-class&gt; for details).\nSelecting, dropping, and reordering channels\nBy default, when creating :class:~mne.Evoked data from an\n:class:~mne.Epochs object, only the primary data channels will be retained:\neog, ecg, stim, and misc channel types will be dropped. You\ncan control which channel types are retained via the picks parameter of\n:meth:epochs.average() &lt;mne.Epochs.average&gt;, by passing 'all' to\nretain all channels, or by passing a list of integers, channel names, or\nchannel types. See the documentation of :meth:~mne.Epochs.average for\ndetails.\nIf you've already created the :class:~mne.Evoked object, you can use the\n:meth:~mne.Evoked.pick, :meth:~mne.Evoked.pick_channels,\n:meth:~mne.Evoked.pick_types, and :meth:~mne.Evoked.drop_channels methods\nto modify which channels are included in an :class:~mne.Evoked object.\nYou can also use :meth:~mne.Evoked.reorder_channels for this purpose; any\nchannel names not provided to :meth:~mne.Evoked.reorder_channels will be\ndropped. Note that channel selection methods modify the object in-place, so\nin interactive/exploratory sessions you may want to create a\n:meth:~mne.Evoked.copy first.", "evoked_eeg = evoked.copy().pick_types(meg=False, eeg=True)\nprint(evoked_eeg.ch_names)\n\nnew_order = ['EEG 002', 'MEG 2521', 'EEG 003']\nevoked_subset = evoked.copy().reorder_channels(new_order)\nprint(evoked_subset.ch_names)", "Similarities among the core data structures\n:class:~mne.Evoked objects have many similarities with :class:~mne.io.Raw\nand :class:~mne.Epochs objects, including:\n\n\nThey can be loaded from and saved to disk in .fif format, and their\n data can be exported to a :class:NumPy array &lt;numpy.ndarray&gt; (but through\n the :attr:~mne.Evoked.data attribute instead of a get_data()\n method). :class:Pandas DataFrame &lt;pandas.DataFrame&gt; export is also\n available through the :meth:~mne.Evoked.to_data_frame method.\n\n\nYou can change the name or type of a channel using\n :meth:evoked.rename_channels() &lt;mne.Evoked.rename_channels&gt; or\n :meth:evoked.set_channel_types() &lt;mne.Evoked.set_channel_types&gt;.\n Both methods take :class:dictionaries &lt;dict&gt; where the keys are existing\n channel names, and the values are the new name (or type) for that channel.\n Existing channels that are not in the dictionary will be unchanged.\n\n\n:term:SSP projector &lt;projector&gt; manipulation is possible through\n :meth:~mne.Evoked.add_proj, :meth:~mne.Evoked.del_proj, and\n :meth:~mne.Evoked.plot_projs_topomap methods, and the\n :attr:~mne.Evoked.proj attribute. 
See tut-artifact-ssp for more\n information on SSP.\n\n\nLike :class:~mne.io.Raw and :class:~mne.Epochs objects,\n :class:~mne.Evoked objects have :meth:~mne.Evoked.copy,\n :meth:~mne.Evoked.crop, :meth:~mne.Evoked.time_as_index,\n :meth:~mne.Evoked.filter, and :meth:~mne.Evoked.resample methods.\n\n\nLike :class:~mne.io.Raw and :class:~mne.Epochs objects,\n :class:~mne.Evoked objects have evoked.times,\n :attr:evoked.ch_names &lt;mne.Evoked.ch_names&gt;, and :class:info &lt;mne.Info&gt;\n attributes.\n\n\nLoading and saving Evoked data\nSingle :class:~mne.Evoked objects can be saved to disk with the\n:meth:evoked.save() &lt;mne.Evoked.save&gt; method. One difference between\n:class:~mne.Evoked objects and the other data structures is that multiple\n:class:~mne.Evoked objects can be saved into a single .fif file, using\n:func:mne.write_evokeds. The example data &lt;sample-dataset&gt;\nincludes such a .fif file: the data have already been epoched and\naveraged, and the file contains separate :class:~mne.Evoked objects for\neach experimental condition:", "evk_file = root / 'sample_audvis-ave.fif'\nevokeds_list = mne.read_evokeds(evk_file, verbose=False)\nprint(evokeds_list)\nprint(type(evokeds_list))", "Notice that :func:mne.read_evokeds returned a :class:list of\n:class:~mne.Evoked objects, and each one has an evoked.comment\nattribute describing the experimental condition that was averaged to\ngenerate the estimate:", "for evok in evokeds_list:\n print(evok.comment)", "If you want to load only some of the conditions present in a .fif file,\n:func:~mne.read_evokeds has a condition parameter, which takes either a\nstring (matched against the comment attribute of the evoked objects on disk),\nor an integer selecting the :class:~mne.Evoked object based on the order\nit is stored in the file. Passing lists of integers or strings is also\npossible. If only one object is selected, the :class:~mne.Evoked object\nwill be returned directly (rather than inside a list of length one):", "right_vis = mne.read_evokeds(evk_file, condition='Right visual')\nprint(right_vis)\nprint(type(right_vis))", "Previously, when we created an :class:~mne.Evoked object by averaging\nepochs, baseline correction was applied by default when we extracted epochs\nfrom the ~mne.io.Raw object (the default baseline period is (None, 0),\nwhich ensures zero mean for times before the stimulus event). In contrast, if\nwe plot the first :class:~mne.Evoked object in the list that was loaded\nfrom disk, we'll see that the data have not been baseline-corrected:", "evokeds_list[0].plot(picks='eeg')", "This can be remedied by either passing a baseline parameter to\n:func:mne.read_evokeds, or by applying baseline correction after loading,\nas shown here:", "# Original baseline (none set)\nprint(f'Baseline after loading: {evokeds_list[0].baseline}')\n\n# Apply a custom baseline correction\nevokeds_list[0].apply_baseline((None, 0))\nprint(f'Baseline after calling apply_baseline(): {evokeds_list[0].baseline}')\n\n# Visualize the evoked response\nevokeds_list[0].plot(picks='eeg')", "Notice that :meth:~mne.Evoked.apply_baseline operated in-place. 
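As a minimal sketch, the same correction can instead be requested at load time (reusing evk_file from above):\n\n```python\n# Apply the baseline while reading, instead of calling apply_baseline() afterwards\nevokeds_corrected = mne.read_evokeds(evk_file, baseline=(None, 0), verbose=False)\nprint(evokeds_corrected[0].baseline)\n```\n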
Similarly,\n:class:~mne.Evoked objects may have been saved to disk with or without\n:term:projectors &lt;projector&gt; applied; you can pass proj=True to the\n:func:~mne.read_evokeds function, or use the :meth:~mne.Evoked.apply_proj\nmethod after loading.\nCombining Evoked objects\nOne way to pool data across multiple conditions when estimating evoked\nresponses is to do so prior to averaging (recall that MNE-Python can select\nbased on partial matching of epoch labels separated by /; see\ntut-section-subselect-epochs for more information):", "left_right_aud = epochs['auditory'].average()\nprint(left_right_aud)", "This approach will weight each epoch equally and create a single\n:class:~mne.Evoked object. Notice that the printed representation includes\n(average, N=145), indicating that the :class:~mne.Evoked object was\ncreated by averaging across 145 epochs. In this case, the event types were\nfairly close in number:", "left_aud = epochs['auditory/left'].average()\nright_aud = epochs['auditory/right'].average()\nprint([evok.nave for evok in (left_aud, right_aud)])", "However, this may not always be the case. If for statistical reasons it is\nimportant to average the same number of epochs from different conditions,\nyou can use :meth:~mne.Epochs.equalize_event_counts prior to averaging.\nAnother approach to pooling across conditions is to create separate\n:class:~mne.Evoked objects for each condition, and combine them afterwards.\nThis can be accomplished with the function :func:mne.combine_evoked, which\ncomputes a weighted sum of the :class:~mne.Evoked objects given to it. The\nweights can be manually specified as a list or array of float values, or can\nbe specified using the keyword 'equal' (weight each :class:~mne.Evoked\nobject by $\\frac{1}{N}$, where $N$ is the number of\n:class:~mne.Evoked objects given) or the keyword 'nave' (weight each\n:class:~mne.Evoked object proportional to the number of epochs averaged\ntogether to create it):", "left_right_aud = mne.combine_evoked([left_aud, right_aud], weights='nave')\nassert left_right_aud.nave == left_aud.nave + right_aud.nave", "Note that the nave attribute of the resulting :class:~mne.Evoked object\nwill reflect the effective number of averages, and depends on both the\nnave attributes of the contributing :class:~mne.Evoked objects and the\nweights with which they are combined. Keeping track of effective nave is\nimportant for inverse imaging, because nave is used to scale the noise\ncovariance estimate, which in turn affects the magnitude of estimated source\nactivity (see minimum_norm_estimates for more information, especially\nthe whitening_and_scaling section). Note that\n:func:mne.grand_average does not adjust nave to reflect the effective\nnumber of averaged epochs; it simply sets nave to the number of evokeds\nthat were averaged together. For this reason, it is best to use\n:func:mne.combine_evoked rather than :func:mne.grand_average if you\nintend to perform inverse imaging on the resulting :class:~mne.Evoked\nobject.\nOther uses of Evoked objects\nAlthough the most common use of :class:~mne.Evoked objects is to store\naverages of epoched data, there are a few other uses worth noting here.\nFirst, the method :meth:epochs.standard_error() &lt;mne.Epochs.standard_error&gt;\nwill create an :class:~mne.Evoked object (just like\n:meth:epochs.average() &lt;mne.Epochs.average&gt; does), but the data in the\n:class:~mne.Evoked object will be the standard error across epochs instead\nof the average. 
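A minimal sketch, reusing the epochs object created earlier in this tutorial:\n\n```python\n# The data in the resulting Evoked are the standard error of the mean across epochs\naud_sem = epochs['auditory'].standard_error()\nprint(aud_sem)\n```\n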
To indicate this difference, :class:~mne.Evoked objects\nhave a :attr:~mne.Evoked.kind attribute that takes values 'average' or\n'standard error' as appropriate.\nAnother use of :class:~mne.Evoked objects is to represent a single trial\nor epoch of data, usually when looping through epochs. This can be easily\naccomplished with the :meth:epochs.iter_evoked() &lt;mne.Epochs.iter_evoked&gt;\nmethod, and can be useful for applications where you want to do something\nthat is only possible for :class:~mne.Evoked objects. For example, here\nwe use the :meth:~mne.Evoked.get_peak method (which is not available for\n:class:~mne.Epochs objects) to get the peak response in each trial:", "for ix, trial in enumerate(epochs[:3].iter_evoked()):\n channel, latency, value = trial.get_peak(ch_type='eeg',\n return_amplitude=True)\n latency = int(round(latency * 1e3)) # convert to milliseconds\n value = int(round(value * 1e6)) # convert to µV\n print('Trial {}: peak of {} µV at {} ms in channel {}'\n .format(ix, value, latency, channel))", ".. REFERENCES" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ewulczyn/talk_page_abuse
src/data_generation/Admin Filtering for Article Talk.ipynb
apache-2.0
[ "%load_ext autoreload\n%autoreload 2\n%matplotlib inline\nimport warnings\nwarnings.filterwarnings('ignore')\nimport pandas as pd\nfrom db_utils import query_hive_ssh\nimport re\nimport copy\nfrom diff_utils import *\nimport time\nimport numpy as np", "Random Sample\nConsider comments made since min_timestamp. Take n random comments from non-bot users.\nParams", "n = 50000\nmin_timestamp = '2000-01-01T00:00:00Z' # start of time", "Query", "t1 = time.time()\nquery = \"\"\"\nSELECT \n *\nFROM\n enwiki.article_talk_diff_no_bot_sample\nWHERE\n rev_timestamp > '%(min_timestamp)s'\n AND ns = 'article'\nLIMIT %(n)d\n\"\"\"\n\nparams = {\n 'n': int(n * 1.7),\n 'min_timestamp': min_timestamp\n }\n\ndf = query_hive_ssh(query % params, '../../data/raw_random_sample.tsv', priority = True, quoting=3, delete = True)\ndf.columns = [c.split('.')[1] for c in df.columns]\nt2 = time.time()\nprint('Query and Download Time:', (t2-t1) / 60.0)\n\ndfc = clean(df[300:500])\n\nshow_comments(dfc, 100)", "There do not seem to be any admin messages to remove" ]
[ "code", "markdown", "code", "markdown", "code", "markdown" ]
schemaorg/schemaorg
software/scripts/Schema_org_Dashboard.ipynb
apache-2.0
[ "<a href=\"https://colab.research.google.com/github/schemaorg/schemaorg/blob/main/scripts/Schema_org_Dashboard.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nThis notebook is part of the Schema.org project codebase at https://github.com/schemaorg/schemaorg and licensed under the same terms. bold text\nThe purpose of this notebook is to show how to work programmatically with schema.org's definitions. \nSee also https://colab.research.google.com/drive/1GVQaP5t8G-NRLAmEvVSp8k5MnsrfttDP for another approach to this, and this 2016 dashboard for some useful SPARQL queries to migrate here.\nSPARQL\nHow to query schema.org schemas using SPARQL", "# run this once per session to bring in a required library\n\n!pip --quiet install sparqlwrapper | grep -v 'already satisfied'\n\nfrom SPARQLWrapper import SPARQLWrapper, JSON\nimport pandas as pd\nimport io\nimport requests\n\n# This function shows how to use rdflib to query a REMOTE sparql dataset\n\nq1 = \"\"\"SELECT distinct ?prop ?type1 ?type2 WHERE {\n ?type1 rdfs:subClassOf* <https://schema.org/Organization> . \n ?type2 rdfs:subClassOf* <https://schema.org/Person> . \n ?prop <https://schema.org/domainIncludes> ?type1 .\n ?prop <https://schema.org/domainIncludes> ?type2 .\n}\"\"\"\n\npd.set_option('display.max_colwidth', None)\n\n# data\nwd_endpoint = 'https://query.wikidata.org/sparql'\nsdo_endpoint = \"https://dydra.com/danbri/schema-org-v11/sparql\"\n\n# utility function\ndef df_from_query(querystring=q1, endpoint=sdo_endpoint):\n sparql = SPARQLWrapper(endpoint)\n sparql.setQuery(querystring)\n sparql.setReturnFormat(JSON)\n results = sparql.query().convert()\n return( pd.json_normalize(results['results']['bindings']) )\n\n# This shows how to use rdflib to query a LOCAL sparql dataset\n# TODO: Need a function that loads https://webschemas.org/version/latest/schemaorg-current-https.nt into a named graph SPARQL store \n\n\nimport rdflib\nimport json\nfrom collections import Counter\nfrom rdflib import Graph, plugin, ConjunctiveGraph\nfrom rdflib.serializer import Serializer\n\ndef toDF(result):\n return pd.DataFrame(result, columns=result.vars)\n\n# Fetch Schema.org definitions\n\nsdo_current_https_url = \"https://webschemas.org/version/latest/schemaorg-current-https.nq\"\nsdo_all_https_url = \"https://webschemas.org/version/latest/schemaorg-all-https.nq\"\n\n# TODO - is this the only way to figure out what is in the attic? except both files use same NG URL\ng = ConjunctiveGraph(store=\"IOMemory\")\ng.parse( sdo_all_https_url, format=\"nquads\", publicID=\"https://schema.org/\")\n#g.parse( sdo_current_https_url, format=\"nquads\", publicID=\"https://schema.org/\")\n\n\nresult = toDF( g.query(\"select * where { GRAPH ?g { ?article_type rdfs:subClassOf <https://schema.org/NewsArticle> ; rdfs:label ?label }}\") )\n\n\nresult\n\ntoDF( g.query(\"select * where { ?attic_term <https://schema.org/isPartOf> <https://attic.schema.org> ; rdfs:label ?label }\") )\n\ngrandchild_count_query = \"\"\"SELECT ?child (count(?grandchild) as ?nGrandchildren) where { ?child rdfs:subClassOf <https://schema.org/Thing> . 
OPTIONAL { ?grandchild rdfs:subClassOf ?child } } GROUP BY ?child order by desc(count(?grandchild))\"\"\"\nres = g.query (grandchild_count_query)\nmydf = toDF( res )\n#mydf.plot(kind='bar')\n\nmydf.columns\n\n# https://www.shanelynn.ie/bar-plots-in-python-using-pandas-dataframes/\n\nmydf['nGrandchildren']\n\nprint(mydf)\nmydf['nGrandchildren'].plot(kind='bar')\n\nresult\n\nx = df_from_query(q1)\nx", "Examples\nHow to access schema.org examples", "# First we clone the entire schema.org repo, then we collect up the examples from .txt files:\n\n!git clone https://github.com/schemaorg/schemaorg\n\n!find . -name \\*example\\*.txt -exec ls {} \\;", "TODOs:\n * can we load all the examples into a multi-graph SPARQL store? (in rdflib not remote endpoint); put them into 'core' and 'pending' named graphs or similar.\n * then load triples from latest webschemas, https://webschemas.org/version/latest/schemaorg-current-https.jsonld into a named graph.\n * find triples in 'core' examples that are not in the vocabulary (then same with pending)" ]
[ "markdown", "code", "markdown", "code", "markdown" ]
DiXiT-eu/collatex-tutorial
unit5/4_collate-outside-the-notebook.ipynb
gpl-3.0
[ "Collate outside the notebook\nPython files, input files, output file\n\n\nSet up a PyCharm project\nCreate a Python file\nRun a script\nIn PyCharm\nIn the terminal\n\n\nInput files\nOutput file\nExercise\n\n\nHere it is another way to run the scripts you produced in the previous tutorials (note: even if technically they mean different things, we will use interchangeably the words code, script and program). This tutorial assumes that you went already through tutorials on Collate plain texts (1 and 2) and on the different Collation ouputs. Everything that we will do here, is possible also in Jupyter notebook and certain section, as Input files is a recap of something already seen in the previous tutorials.\nIn the Command line tutorial, we have briefly seen how to run a Python program. In the terminal, type\npython myfile.py\n\nreplacing “myfile.py” with the name of your Python program.\nAgain on file system hygiene: directory 'Scripts'\nIn this tutorial, we will create Python programs. Where to save the files that you will create? Remember that we created a directory for this workshop, called 'Workshop'. Now let's create a sub-directory, called 'Scripts', to store all our Python programs. \n\nSet up a PyCharm project\nIf you are using PyCharm for these exercises it is worth setting up a project that will automatically save the files you create to the 'Scripts' directory you just created (see above). To do this open PyCharm and from the File menu select New Project. In the dialogue box that appears navigate to the 'scripts' directory you made for this workshop by clicking the button with '...' on it, on the right of the location box. Then click create. This will create a new project that will save all of the files to the folder you have selected.\nCreate a Python file\nLet's do this step by step. First of all, create a python file.\n\nOpen PyCharm, if you downloaded it before, or another text editor: Notepad++ for Windows or TextWrangler for Mac OS X.\nCreate a new file and copy paste the code we used before:", "from collatex import *\ncollation = Collation()\ncollation.add_plain_witness( \"A\", \"The quick brown fox jumped over the lazy dog.\")\ncollation.add_plain_witness( \"B\", \"The brown fox jumped over the dog.\" )\ncollation.add_plain_witness( \"C\", \"The bad fox jumped over the lazy dog.\")\ntable = collate(collation)\nprint(table)", "Now save the file, as 'collate.py', inside the directory 'Scripts' (see above). If you setup a project in PyCharm then the files should automatically be saved in the correct place.\n\nRun the script\nIn PyCharm\n\nIn Pycharm you can run the script using the button, or run from the menu.\nThe result will appear in a window at the bottom of the page.\n\nIn the terminal\n<img src=\"images/run-script.png\" width=\"50%\" style=\"border:1px solid black\" align=\"right\"/>\n\n\nOpen the terminal and navigate to the folder where your script is, using the 'cd' command (again, refer to the Command line tutorial, if you don't know what this means). Then type\npython collate.py\n\nIf you are not in the directory where your script is, you should specify the path for that file. If you are in the Home directory, for example, the command would look like\npython Workshop/Scripts/collate.py\n\n\n\nThe result will appear below in the terminal.\n\n\nInput files\nIn the first tutorial, we saw how to use texts stored in files as witnesses for the collation. 
We used the open command to open each text file and assign its contents to a variable with an appropriately chosen name; and don't forget the encoding=\"utf-8\" bit!\nLet's try to do the same in our script 'collate.py', using the data in fixtures/Darwin/txt (only the first paragraph: _par1) and producing an output in XML/TEI. The code will look like this:", "from collatex import *\ncollation = Collation()\nwitness_1859 = open( \"../fixtures/Darwin/txt/darwin1859_par1.txt\", encoding='utf-8' ).read()\nwitness_1860 = open( \"../fixtures/Darwin/txt/darwin1860_par1.txt\", encoding='utf-8' ).read()\nwitness_1861 = open( \"../fixtures/Darwin/txt/darwin1861_par1.txt\", encoding='utf-8' ).read()\nwitness_1866 = open( \"../fixtures/Darwin/txt/darwin1866_par1.txt\", encoding='utf-8' ).read()\nwitness_1869 = open( \"../fixtures/Darwin/txt/darwin1869_par1.txt\", encoding='utf-8' ).read()\nwitness_1872 = open( \"../fixtures/Darwin/txt/darwin1872_par1.txt\", encoding='utf-8' ).read()\ncollation.add_plain_witness( \"1859\", witness_1859 )\ncollation.add_plain_witness( \"1860\", witness_1860 )\ncollation.add_plain_witness( \"1861\", witness_1861 )\ncollation.add_plain_witness( \"1866\", witness_1866 )\ncollation.add_plain_witness( \"1869\", witness_1869 )\ncollation.add_plain_witness( \"1872\", witness_1872 )\ntable = collate(collation, output='tei')\nprint(table)", "Now save the file (just 'save', or 'save as' with another name, such as 'collate-darwin-tei.py', if you want to keep both scripts) and then\nrun the new script (run in PyCharm; or type python collate.py or python collate-darwin-tei.py in the terminal). This may take a bit longer than the fox and dog example.\nThe result will appear below.\n\nOutput file\nLooking at the result this way is not very practical, especially if we want to save it. It is better to store the result in a new file, which we will call 'outfile' (but you can give it another name if you prefer). We need to add this chunk of code in order to create and open 'outfile':", "outfile = open('outfile.txt', 'w', encoding='utf-8')", "If we are going to produce an output in XML/TEI, we can specify that 'outfile' will be an XML file, and the same goes for any other format. Below are two examples, the first for an XML output file, the second for a JSON output file:", "outfile = open('outfile.xml', 'w', encoding='utf-8')\noutfile = open('outfile.json', 'w', encoding='utf-8')", "Now we add the outfile chunk to our code above. 
The new script is:", "from collatex import *\ncollation = Collation()\nwitness_1859 = open( \"../fixtures/Darwin/txt/darwin1859_par1.txt\", encoding='utf-8' ).read()\nwitness_1860 = open( \"../fixtures/Darwin/txt/darwin1860_par1.txt\", encoding='utf-8' ).read()\nwitness_1861 = open( \"../fixtures/Darwin/txt/darwin1861_par1.txt\", encoding='utf-8' ).read()\nwitness_1866 = open( \"../fixtures/Darwin/txt/darwin1866_par1.txt\", encoding='utf-8' ).read()\nwitness_1869 = open( \"../fixtures/Darwin/txt/darwin1869_par1.txt\", encoding='utf-8' ).read()\nwitness_1872 = open( \"../fixtures/Darwin/txt/darwin1872_par1.txt\", encoding='utf-8' ).read()\noutfile = open('outfile-tei.xml', 'w', encoding='utf-8')\ncollation.add_plain_witness( \"1859\", witness_1859 )\ncollation.add_plain_witness( \"1860\", witness_1860 )\ncollation.add_plain_witness( \"1861\", witness_1861 )\ncollation.add_plain_witness( \"1866\", witness_1866 )\ncollation.add_plain_witness( \"1869\", witness_1869 )\ncollation.add_plain_witness( \"1872\", witness_1872 )\ntable = collate(collation, output='tei')\nprint(table, file=outfile)", "When we run the script, the result won't appear below anymore. But a new file, 'outfile-tei.xml', has been created in the directory 'Scripts'. Check what's inside!\nIf you want to change the location of the output file, you can specify a different path. If, for example, you want your output file on the Desktop, you would write", "outfile = open('C:/Users/Elena/Desktop/output.xml', 'w', encoding='utf-8')", "N.b.: you can create an output file also when running your script in the Jupyter notebook! Depending on the path you specify, it will be created in your 'Notebook' directory or elsewhere.\nExercise\nCreate a new Python script that produces an output in JSON, using the data in 'fixtures/Woolf/Lighthouse-1' (remember? We used the same data in another tutorial). Take care to correctly specify the input files, the output file (and its extension) and the output format." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
marshal789/Lectures-On-Machine-Learning
Naive_Bayes/Supervised+Learning+Naive+Bayes.ipynb
mit
[ "Naive Bayes Classifiers\nIn this lecture we will learn how to use Naive Bayes Classifier to perform a Multi Class Classification on a data set we are already familiar with: the Iris Data Set. \n This Lecture will consist of 7 main parts:\n\nPart 1: Note on Notation and Math Terms\nPart 2: Bayes' Theorem\nPart 3: Introduction to Naive Bayes\nPart 4: Naive Bayes Classifier Mathematics Overview\nPart 5: Constructing a classifier from the probability model\nPart 6: Gaussian Naive Bayes\nPart 7: Gaussian Naive Bayes with SciKit Learn\n\nPart 1: Note on Notation and Math Terms\nThere are a few more advanced notations and mathematical terms used during the explanation of naive Bayes Classification.\nYou should be familiar with the following:\nProduct of Sequence\nThe product of a sequence of terms can be written with the product symbol, which derives from the capital letter Π (Pi) in the Greek alphabet. The meaning of this notation is given by:\n $$\\prod_{i=1}^4 i = 1\\cdot 2\\cdot 3\\cdot 4, $$\nthat is\n $$\\prod_{i=1}^4 i = 24. $$\nArg Max\nIn mathematics, the argument of the maximum (abbreviated arg max or argmax) is the set of points of the given argument for which the given function attains its maximum value. In contrast to global maximums, which refer to a function's largest outputs, the arg max refers to the inputs which create those maximum outputs.\nThe arg max is defined by\n$$\\operatorname*{arg\\,max}_x f(x) := {x \\mid \\forall y : f(y) \\le f(x)}$$\nIn other words, it is the set of points x for which f(x) attains its largest value. This set may be empty, have one element, or have multiple elements. For example, if f(x) is 1−|x|, then it attains its maximum value of 1 at x = 0 and only there, so\n$$\\operatorname*{arg\\,max}_x (1-|x|) = {0}$$\nPart 2: Bayes' Theorem\nFirst, for a quick introduction to Bayes' Theorem, check out the Bayes' Theorem Lecture in the statistics appendix portion of this course, in order ot fully understand Naive Bayes, you'll need a complete understanding of the Bayes' Theorem.\nPart 3: Introduction to Naive Bayes\nNaive Bayes is probably one of the practical machine learning algorithms. Despite its name, it is actually performs very well considering its classification performance. It proves to be quite robust to irrelevant features, which it ignores. It learns and predicts very fast and it does not require lots of storage. So, why is it then called naive?\nThe naive was added to the account for one assumption that is required for Bayes to work optimally: all features must be independent of each other. In reality, this is usually not the case, however, it still returns very good accuracy in practice even when the independent assumption does not hold.\nNaive Bayes classifiers have worked quite well in many real-world situations, famously document classification and spam filtering. We will be working with the Iris Flower data set in this lecture.\nPart 4: Naive Bayes Classifier Mathematics Overview\nNaive Bayes methods are a set of supervised learning algorithms based on applying Bayes’ theorem with the “naive” assumption of independence between every pair of features. 
Given a class variable y and a dependent feature vector x<sub>1</sub> through x<sub>n</sub>, Bayes’ theorem states the following relationship:\n$$P(y \\mid x_1, \\dots, x_n) = \\frac{P(y) P(x_1, \\dots, x_n \\mid y)}\n {P(x_1, \\dots, x_n)}$$\nUsing the naive independence assumption that\n$$P(x_i | y, x_1, \\dots, x_{i-1}, x_{i+1}, \\dots, x_n) = P(x_i | y)$$\nfor all i, this relationship is simplified to:\n$$P(y \\mid x_1, \\dots, x_n) = \\frac{P(y) \\prod_{i=1}^{n} P(x_i \\mid y)}\n {P(x_1, \\dots, x_n)}$$\nWe now have a relationship between the target and the features using Bayes' Theorem along with a Naive Assumption that all features are independent.\nPart 5: Constructing a classifier from the probability model\nSo far we have derived the independent feature model, the Naive Bayes probability model. The Naive Bayes classifier combines this model with a decision rule; this decision rule decides which hypothesis is most probable, which in our example is the most probable class of flower.\nPicking the hypothesis that is most probable is known as the maximum a posteriori or MAP decision rule. The corresponding classifier, a Bayes classifier, is the function that assigns a class label as follows:\nSince P(x<sub>1</sub>, ..., x<sub>n</sub>) is constant given the input, we can use the following classification rule:\n$$P(y \\mid x_1, \\dots, x_n) \\propto P(y) \\prod_{i=1}^{n} P(x_i \\mid y)$$\n$$\\Downarrow$$\n$$\\hat{y} = \\arg\\max_y P(y) \\prod_{i=1}^{n} P(x_i \\mid y),$$\nand we can use Maximum A Posteriori (MAP) estimation to estimate P(y) and P(x<sub>i</sub> | y); the former is then the relative frequency of class y in the training set.\nThere are different naive Bayes classifiers that differ mainly by the assumptions they make regarding the distribution of P(x<sub>i</sub> | y).\nPart 6: Gaussian Naive Bayes\nWhen dealing with continuous data, a typical assumption is that the continuous values associated with each class are distributed according to a Gaussian distribution. Go back to the normal distribution lecture to review the formulas for the Gaussian/Normal Distribution.\nAs an example of using the Gaussian Distribution, suppose the training data contain a continuous attribute, x. We first segment the data by the class, and then compute the mean and variance of x in each class. Let μ<sub>c</sub> be the mean of the values in x associated with class c, and let σ<sup>2</sup><sub>c</sub> be the variance of the values in x associated with class c. Then, the probability distribution of some value given a class, p(x=v|c), can be computed by plugging v into the equation for a Normal distribution parameterized by μ<sub>c</sub> and σ<sup>2</sup><sub>c</sub>. That is:\n$$p(x=v|c)=\\frac{1}{\\sqrt{2\\pi\\sigma^2_c}}\\,e^{ -\\frac{(v-\\mu_c)^2}{2\\sigma^2_c} }$$\nThe key to Naive Bayes is making the (rather large) assumption that the presences (or absences) of\neach data feature are independent of one another, conditional on the data having a certain label.\nPart 7: Gaussian Naive Bayes with SciKit Learn\nA quick note: we will actually only use the SciKit Learn library in this lecture:", "import pandas as pd\nfrom pandas import Series,DataFrame\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\n# Gaussian Naive Bayes\nfrom sklearn import datasets\nfrom sklearn import metrics\nfrom sklearn.naive_bayes import GaussianNB", "Now that we have our module imports, let's go ahead and import the Iris Data Set. 
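(A brief aside before loading the data: the Gaussian class-conditional likelihood from Part 6 is easy to evaluate directly. The numbers below are illustrative only and are not estimated from the Iris data.)\n\n```python\nimport numpy as np\n\ndef gaussian_likelihood(v, mu, var):\n    # p(x = v | c) for a Gaussian with class mean mu and class variance var\n    return np.exp(-(v - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)\n\nprint(gaussian_likelihood(5.1, mu=5.0, var=0.12))\n```\n\nNow, back to the Iris Data Set. 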
We have previously worked with this dataset, so go ahead and look at Lectures on MultiClass Classification for a complete breakdown on this data set!", "# load the iris datasets\niris = datasets.load_iris()\n\n# Grab features (X) and the Target (Y)\nX = iris.data\n\nY = iris.target\n\n# Show the Built-in Data Description\nprint(iris.DESCR)", "Since we have already done a general analysis of this data in earlier lectures, let's go ahead and move on to using the Naive Bayes Method to separate this data set into multiple classes.\nFirst we create the model (it is fit to the training data in a later step):", "# Create a Gaussian Naive Bayes model\nmodel = GaussianNB()", "Now that we have our model, we will continue by separating the data into training and testing sets:", "# train_test_split moved from sklearn.cross_validation to sklearn.model_selection in scikit-learn 0.18+\nfrom sklearn.model_selection import train_test_split\n# Split the data into Training and Testing sets\nX_train, X_test, Y_train, Y_test = train_test_split(X, Y)", "Now we fit our model using the training data set:", "# Fit the training model\nmodel.fit(X_train, Y_train)", "Now we predict the outcomes from the Testing Set:", "# Predicted outcomes\npredicted = model.predict(X_test)\n\n# Actual Expected Outcomes\nexpected = Y_test", "Finally we can see the metrics for performance:", "print(metrics.accuracy_score(expected, predicted))", "It looks like we have about 94.7% accuracy using the Naive Bayes method (the exact value depends on the random train/test split)!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
stevetjoa/stanford-mir
fourier_transform.ipynb
mit
[ "%matplotlib inline\nimport seaborn\nimport numpy, scipy, matplotlib.pyplot as plt, librosa, IPython.display as ipd", "&larr; Back to Index\nFourier Transform\nLet's download an audio file:", "import urllib\nfilename = 'c_strum.wav'\nurllib.urlretrieve('http://audio.musicinformationretrieval.com/c_strum.wav', filename=filename)\nx, sr = librosa.load(filename)\n\nprint(x.shape)\nprint(sr)", "Listen to the audio file:", "ipd.Audio(x, rate=sr)", "Fourier Transform\nThe Fourier Transform (Wikipedia) is one of the most fundamental operations in applied mathematics and signal processing.\nIt transforms our time-domain signal into the frequency domain. Whereas the time domain expresses our signal as a sequence of samples, the frequency domain expresses our signal as a superposition of sinusoids of varying magnitudes, frequencies, and phase offsets.\nTo compute a Fourier transform in NumPy or SciPy, use scipy.fft:", "X = scipy.fft(x)\nX_mag = numpy.absolute(X)\nf = numpy.linspace(0, sr, len(X_mag)) # frequency variable", "Plot the spectrum:", "plt.figure(figsize=(13, 5))\nplt.plot(f, X_mag) # magnitude spectrum\nplt.xlabel('Frequency (Hz)')", "Zoom in:", "plt.figure(figsize=(13, 5))\nplt.plot(f[:5000], X_mag[:5000])\nplt.xlabel('Frequency (Hz)')", "&larr; Back to Index" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
CentroGeo/Analisis-Espacial
muap/.ipynb_checkpoints/MUAP-checkpoint.ipynb
gpl-2.0
[ "Unidad de Area Modificable\nEn esta práctica vamos a explorar rápidamente la forma en la que las escalas de análisis influyen en los resultados\nPara esto vamos a trabajar con los datos de desaparecidos en la República Mexicana entre 2004 y 2014.\nLos datos vienen en dos shapefiles, uno a nivel estatal y otro a nivel municipal. Importemos los archivos en GeoPandas para analizarlos:", "import sys\nreload(sys)\nsys.setdefaultencoding('utf-8')\n\nfrom geopandas import GeoDataFrame\nestatal = GeoDataFrame.from_file('data/des_rezago_estado.shp')\nmunicipal = GeoDataFrame.from_file('data/muns_geo_des.shp')", "Vamos a ver un poco los datos:", "estatal.head()\n\nmunicipal.head()", "En las columnas 2006-2014 tenemos los datos de desaparecidos para cada año. En las demás columnos tenemos alguna información sobre las condiciones socioeconómicas de cada unidad espacial.\nEl ejercicio que vamos a hacer en esta ocasión es muy sencillo, vamos a hacer una regresión lineal del total de desaparecidos contra alguna variable socioeconómica y vamos a observar cómo cambia el resultado con la escala de análisis.\nEl primer paso es crear y calcular una columna con el total de desaparecidos:", "des_estado = estatal[['cvegeo','2006','2007','2008','2009','2010','2011','2012','2013','2014']]\ndes_estado.head()", "Aquí simplemente seleccionamos las columnas que nos interesan", "des_estado['total_des'] = des_estado.sum(axis=1)\ndes_estado.head()", "Y aquí añadimos una nueva columna a nuestra selección con la suma de desaparecidos.\nAhora vamos a unir la suma a nuestros datos originales:", "import pandas as pd\nestatal = pd.merge(estatal,des_estado[['total_des','cvegeo']],on='cvegeo' )\nestatal.head()", "Repetimos para los municipios (ahora en un solo paso):", "import pandas as pd\ndes_mun = municipal[['cvegeo_x','2006','2007','2008','2009','2010','2011','2012','2013','2014']]\ndes_mun['total_des'] = des_mun.sum(axis=1)\nmunicipal = pd.merge(municipal,des_mun[['total_des','cvegeo_x']],on='cvegeo_x' )\nmunicipal.head()", "Ahora sí, vamos a hacer una regresión a nivel estatal:", "from pandas.stats.api import ols\nmodel = ols(y=estatal['total_des'], x=estatal['rezago'])\nmodel" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
bicv/LogGabor
LogGabor_fit_example.ipynb
mit
[ "LogGabor user guide\nTable of content\n\n\nWhat is the LogGabor package? \n\n\nInstalling \n\n\nImporting the library\n\n\nProperties of log-Gabor filters\n\n\nTesting filter generation \n\n\nTesting on a sample image \n\n\nBuilding a pyramid \n\n\nAn example of fitting images with log-Gabor filters \n\n\nImporting the library", "%load_ext autoreload\n%autoreload 2\nfrom LogGabor import LogGabor\nparameterfile = 'https://raw.githubusercontent.com/bicv/LogGabor/master/default_param.py'\nlg = LogGabor(parameterfile)\nlg.set_size((32, 32))", "To install the dependencies related to running this notebook, see Installing notebook dependencies.\nBack to top", "import os\nimport numpy as np\nnp.set_printoptions(formatter={'float': '{: 0.3f}'.format})\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfig_width = 12\nfigsize=(fig_width, .618*fig_width)", "Perspectives: Better fits of the filters\nBasically, it is possible to infer the best possible log-Gabor function, even if it's parameters do not fall on the grid\nDefining a reference log-gabor (look in the corners!)", "def twoD_Gaussian(xy, x_pos, y_pos, theta, sf_0):\n FT_lg = lg.loggabor(x_pos, y_pos, sf_0=np.absolute(sf_0), B_sf=lg.pe.B_sf, theta=theta, B_theta=lg.pe.B_theta)\n return lg.invert(FT_lg).ravel()\n\n# Create x and y indices\nx = np.arange(lg.pe.N_X)\ny = np.arange(lg.pe.N_Y)\nx, y = xy = np.meshgrid(x, y)\n\n#create data\nx_pos, y_pos, theta, sf_0 = 14.6, 8.5, 12 * np.pi / 180., .1\ndata = twoD_Gaussian(xy, x_pos, y_pos, theta=theta, sf_0=sf_0)\n\n\n# plot twoD_Gaussian data generated above\n#plt.figure()\n#plt.imshow(data.reshape(lg.pe.N_X, lg.pe.N_Y))\n#plt.colorbar()\n\n# add some noise to the data and try to fit the data generated beforehand\ndata /= np.abs(data).max()\ndata_noisy = data + .25*np.random.normal(size=data.shape)\n# getting best match\nC = lg.linear_pyramid(data_noisy.reshape(lg.pe.N_X, lg.pe.N_Y))\nidx = lg.argmax(C)\ninitial_guess = [idx[0], idx[1], lg.theta[idx[2]], lg.sf_0[idx[3]]]\nprint ('initial_guess :', initial_guess, ', idx :', idx)\n\nimport scipy.optimize as opt\n\npopt, pcov = opt.curve_fit(twoD_Gaussian, xy, data_noisy, p0=initial_guess)\n\ndata_fitted = twoD_Gaussian(xy, *popt)\n\nextent = (0, lg.pe.N_X, 0, lg.pe.N_Y)\nprint ('popt :', popt, ', true : ', x_pos, y_pos, theta, sf_0)\nfig, axs = plt.subplots(1, 3, figsize=(15, 5))\n_ = axs[0].contourf(data.reshape(lg.pe.N_X, lg.pe.N_Y), 8, extent=extent, cmap=plt.cm.viridis, origin='upper')\n_ = axs[1].imshow(data_noisy.reshape(lg.pe.N_X, lg.pe.N_Y), cmap=plt.cm.viridis, extent=extent)\n_ = axs[2].contourf(data_fitted.reshape(lg.pe.N_X, lg.pe.N_Y), 8, extent=extent, cmap=plt.cm.viridis, origin='upper')\nfor ax in axs: ax.axis('equal')", "Back to top\nperforming a fit", "from LogGabor import LogGaborFit\nlg = LogGaborFit(parameterfile)\nlg.set_size((32, 32))\n\nx_pos, y_pos, theta, sf_0 = 14.6, 8.5, 12 * np.pi / 180., .1\ndata = lg.invert(lg.loggabor(x_pos, y_pos, sf_0=np.absolute(sf_0), B_sf=lg.pe.B_sf, theta=theta, B_theta=lg.pe.B_theta))\ndata /= np.abs(data).max()\ndata_noisy = data + .25*np.random.normal(size=data.shape)\n\n\ndata_fitted, params = lg.LogGaborFit(data_noisy.reshape(lg.pe.N_X, lg.pe.N_Y))\n\ndata_fitted.shape\n\n\nparams.pretty_print()\n\nextent = (0, lg.pe.N_X, 0, lg.pe.N_Y)\nprint ('params :', params, ', true : ', x_pos, y_pos, theta, sf_0)\nfig, axs = plt.subplots(1, 3, figsize=(15, 5))\n_ = axs[0].contourf(data.reshape(lg.pe.N_X, lg.pe.N_Y), 8, extent=extent, cmap=plt.cm.viridis, origin='upper')\n_ = 
axs[1].imshow(data_noisy.reshape(lg.pe.N_X, lg.pe.N_Y), cmap=plt.cm.viridis, extent=extent)\n_ = axs[2].contourf(data_fitted.reshape(lg.pe.N_X, lg.pe.N_Y), 8, extent=extent, cmap=plt.cm.viridis, origin='upper')\nfor ax in axs: ax.axis('equal')", "With periodic boundaries, check that the filter \"re-enters\" the image from the other border:", "data_fitted, params = lg.LogGaborFit(data_noisy.reshape(lg.pe.N_X, lg.pe.N_Y), do_border=False)\nextent = (0, lg.pe.N_X, 0, lg.pe.N_Y)\nprint('params :', params, ', true : ', x_pos, y_pos, theta, sf_0)\nfig, axs = plt.subplots(1, 3, figsize=(15, 5))\n_ = axs[0].contourf(data.reshape(lg.pe.N_X, lg.pe.N_Y), 8, extent=extent, cmap=plt.cm.viridis, origin='upper')\n_ = axs[1].imshow(data_noisy.reshape(lg.pe.N_X, lg.pe.N_Y), cmap=plt.cm.viridis, extent=extent)\n_ = axs[2].contourf(data_fitted.reshape(lg.pe.N_X, lg.pe.N_Y), 8, extent=extent, cmap=plt.cm.viridis, origin='upper')\nfor ax in axs: ax.axis('equal')", "Back to top\nTODO: validation of fits\nBack to top\nmore bookkeeping", "%load_ext watermark\n%watermark -i -h -m -v -p numpy,matplotlib,scipy,imageio,SLIP,LogGabor -r -g -b", "Back to top\nBack to the LogGabor user guide" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]