repo_name: stringlengths (6-77)
path: stringlengths (8-215)
license: stringclasses (15 values)
cells: list
types: list
mlhhu2017/identifyDigit
bekcic/gaussian_multivar.ipynb
mit
[ "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport itertools\n\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import confusion_matrix as conf_mat\nfrom mnist_util import *", "Basic setup\nGetting the training and test data and calculating means, variances (axis=None, axis=0) and covariances.", "training, test = load_sorted_data('data_notMNIST')\n\nPRIORS = {'id': lambda data: [np.identity(len(data[0][0])) for _ in range(len(data))],\n 'var': lambda data: variance(data),\n 'var_1': lambda data: variance(data, axis=0),\n 'cov': lambda data: covariance(data)}\n\nmeans = mean(training)\n\nsigma = {}\nfor key in PRIORS:\n sigma[key] = PRIORS[key](training)", "Plotting\nHere you can see the means and variances of the numbers in the training dataset.", "plot_all_numbers(means, elements_per_line=5, plot_title=\"Means of the training dataset\")\nplot_all_numbers(sigma['var_1'], elements_per_line=5, plot_title=\"Variances of the training dataset\")", "PDFs\nUsing the multivariate Gaussian (normal) distribution.\nHere I try 4 different PDFs in total, swapping out the $\Sigma$ (prior).\n\nFirst one is $\Sigma =$ id.\nSecond one is $\Sigma =$ variances.\nThird one is $\Sigma =$ variances (axis=0).\nFourth one is $\Sigma =$ covariances.", "pdfs = {}\nfor key in sigma:\n pdfs[key] = multivariates(training, sigma[key])", "First test run\nNow I plot the first 20 numbers of each test dataset (0-9), then predict the corresponding number with each of the four PDFs, and finally show the number of errors per PDF for each number.\n(The second code snippet below may take a while. 
It is processing the entire test data.)", "tmp = flatten_lists([test[i][:20] for i in range(10)])\ntmp = [np.array(x) for x in tmp]\nplot_all_numbers(tmp, elements_per_line=20, plot_title=\"First 20 of each number from the test dataset\")\n\npreds = {}\nfor key in pdfs:\n preds[key] = [tell_all_numbers(pdfs[key], nums) for nums in test]\n\n\nfor i in range(10):\n print(\"Right guess: {0}\".format(i))\n for key in preds:\n print(\"{0}:\\t{1}\\tERRORS: {2}\".format(key, preds[key][i][:20], len([x for x in preds[key][i][:20] if x != i])))\n \n print(\"\")", "Visualizing the result\nNow I am going to visualize the results with confusion matrices.", "class_names = [str(i) for i in range(10)]\n\nconfusion_matrix = {}\n\ntraining_labels = flatten_lists([[i]*len(preds['id'][i]) for i in range(10)])\n\n\n#plot_confusion_matrix(conf_matrix, classes=class_names)\n\nfor key in preds:\n # sklearn's confusion_matrix expects (y_true, y_pred)\n confusion_matrix[key] = conf_mat(list(training_labels), flatten_lists(preds[key]))\n\n\nfor key in confusion_matrix:\n plt.figure(figsize=(10,10))\n plot_confusion_matrix(normalize(confusion_matrix[key]), classes=class_names, title=\"{0} Confusion Matrix\".format(key))\n plt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
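The digit notebook above depends on private helpers from its `mnist_util` module (`mean`, `variance`, `multivariates`, `tell_all_numbers`), so it cannot run standalone. A minimal numpy-only sketch of the same idea - fit one multivariate Gaussian per class and classify by the highest log-density - might look like the following; the toy 2-D data and every function name here are illustrative, not the notebook's actual API.

```python
import numpy as np

def fit_class_gaussians(groups):
    """For each class, estimate a mean vector and a (regularized) covariance."""
    params = []
    for X in groups:  # X: (n_samples, n_features) array for one class
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        params.append((mu, cov))
    return params

def log_pdf(x, mu, cov):
    """Log-density of a multivariate normal, via slogdet and a linear solve."""
    d = len(mu)
    diff = x - mu
    _, logdet = np.linalg.slogdet(cov)
    maha = diff @ np.linalg.solve(cov, diff)
    return -0.5 * (d * np.log(2 * np.pi) + logdet + maha)

def predict(x, params):
    """Assign x to the class whose Gaussian gives the highest log-density."""
    scores = [log_pdf(x, mu, cov) for mu, cov in params]
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
class0 = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
class1 = rng.normal(loc=5.0, scale=1.0, size=(200, 2))
params = fit_class_gaussians([class0, class1])
print(predict(np.array([4.8, 5.2]), params))  # → 1 (the class centred at (5, 5))
```

Swapping the estimated covariance for the identity or a diagonal variance matrix reproduces the notebook's four `$\Sigma$` variants.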
google/applied-machine-learning-intensive
content/06_other_models/06_bayesian_modeling/colab.ipynb
apache-2.0
[ "<a href=\"https://colab.research.google.com/github/google/applied-machine-learning-intensive/blob/master/content/06_other_models/03_bayes/colab.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nCopyright 2020 Google LLC.", "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Bayesian Models\nBayesian models are at the heart of many ML applications, and they can be implemented in regression or classification. For example, the \"Naive Bayes\" algorithm has proven to be an excellent spam detection method. Bayesian inference is often used in applications of modeling stochastic, temporal, or time-series data, such as finance, healthcare, sales, marketing, and economics. Bayesian networks are also at the heart of reinforcement learning (RL) algorithms, which drive complex automation, like autonomous vehicles. And Bayesian optimization is used to maximize the effectiveness of AI game opponents like alphaGO. Bayesian models make effective use of information, and it is possible to parameterize and update these models using prior and posterior probability functions.\nThere are many libraries that implement probabilistic programming including TensorFlow Probability. 
\nIn this Colab we will implement a Bayesian model using a Naive Bayes classifier to predict the likelihood of spam in a sample of text data.\nLoad Packages", "from zipfile import ZipFile\nimport urllib.request\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom sklearn.naive_bayes import BernoulliNB\nfrom sklearn.metrics import classification_report\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.model_selection import train_test_split", "Naive Bayes\nWhat is Naive Bayes? There are two aspects: the first is naive, and the second is Bayes'. Let's first review the second part: Bayes' theorem from probability.\n$$ P(x)P(y|x) = P(y)P(x|y) $$\nUsing this theorem, we can solve for the conditional probability of event $y$, given condition $x$. Furthermore, Bayes' rule can be extended to incorporate $n$ features as follows:\n$$ P(y|x_1, ..., x_n) = \frac{P(y)P(x_1, ..., x_n|y)}{P(x_1, ..., x_n)}$$\nThis joint conditional probability can then be simplified by multiplying the individual conditional probabilities for each feature and taking the maximum likelihood. Naive Bayes returns the value of $y$, i.e. the category, that maximizes the following argument:\n$$ \hat{y} = argmax_y(P(y)\prod_{i=1}^nP(x_i|y)) $$\nDon't worry too much if this is a bit too much algebra. The actual implementations don't require us to remember everything!\nBut Wait, Why \"Naive\"?\nIn this context, \"naive\" refers to the assumption that the conditional features are pairwise independent. In other words, it assumes the features of your model are independent (or at least have low multicollinearity). This is typically not the case, and it is a source of error. In practice, Naive Bayes works well for classification, but not for estimation. Furthermore, it is not robust to interactions between variables. 
This comes up quite frequently in natural language processing (NLP), and so the usefulness of Naive Bayes is limited to simpler applications. Sometimes simple is better, like in spam filtering where Naive Bayes can perform reasonably well with limited training data.\nSpam Filtering", "def LoadZip(url, file_name, cols=['type', 'message']):\n # Download file.\n urllib.request.urlretrieve(url, 'spam.zip')\n # Open zip in memory.\n with ZipFile('spam.zip') as myzip:\n with myzip.open(file_name) as myfile:\n df = pd.read_csv(myfile, sep='\\t', header=None)\n\n df.columns=cols\n display(df.head())\n display(df.shape)\n return df\n\nurl = ('https://archive.ics.uci.edu/ml/machine-learning-databases/00228/'\n 'smsspamcollection.zip')\ndf = LoadZip(url, 'SMSSpamCollection')", "First let's analyze the number of spam vs. ham. For reference, \"ham\" is the opposite of \"spam\", so a non-spam message.", "sns.countplot(df['type'])\nplt.show()", "Here we notice a class imbalance with under 1000 spam messages out of over 5000 total messages.\nNow we create a list of keywords that might indicate spam and generate features columns for each keyword.", "features = pd.DataFrame()\nkeywords = ['selected', 'win','deal', 'free', 'trip', 'urgent', 'require',\n 'need', 'cash', 'asap']\n\n# Use regex search built into pandas.\nfor k in keywords:\n features[k]=df['message'].str.contains(k, case=False)", "Let's look at the correlation of features.", "features['allcaps'] = df['message'].str.isupper()\nsns.heatmap(features.corr())\n\nplt.show()", "The heatmap shows only weak correlations between variables like 'cash', 'win', 'free', and 'urgent'. Therefore, we can assume there is independence between each keyword. 
In actuality, we are violating this assumption.\nTrain a Model to Predict Spam", "np.random.seed(seed=0)\nX = features\ny = df['type']\nX_train, X_test, y_train, y_test = train_test_split(X,y)\nsns.countplot(y_test)\nplt.show()", "Using features, we will now make predictions on whether an individual message is spam or ham.", "def classifyNB(X_train,y_train, X_test, y_test, cols=['spam', 'ham']):\n nb = BernoulliNB()\n\n nb.fit(X_train,y_train)\n\n y_pred = nb.predict(X_test)\n class_names = cols\n print('Classification Report')\n print(classification_report(y_test, y_pred, target_names=class_names))\n cm = confusion_matrix(y_test, y_pred, labels=class_names)\n df_cm = pd.DataFrame(cm, index=class_names, columns=class_names)\n\n sns.heatmap(df_cm, cmap='Blues', annot=True, fmt=\"d\",\n xticklabels=True, yticklabels=True, cbar=False, square=True)\n # rows of sklearn's confusion matrix are the true labels\n plt.ylabel('Actual')\n plt.xlabel('Predicted')\n plt.suptitle(\"Confusion Matrix\")\n plt.show()\n \nclassifyNB(X_train,y_train,X_test,y_test)", "The confusion matrix reads as follows (treating spam as the positive class):\n\n1182 ham messages correctly predicted\n114 ham messages were predicted to be spam (false positives, Type I error)\n71 spam messages were correctly predicted\n26 spam messages were erroneously predicted to be ham (false negatives, Type II error)\n\nPrecision and Recall\nRemember that precision and recall are derived from the ground truth. Review the diagram below for clarification.", "%%html\n\n<a title=\"Walber [CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0)], via Wikimedia Commons\" \n href=\"https://commons.wikimedia.org/wiki/File:Precisionrecall.svg\">\n <img width=\"256\" alt=\"Precisionrecall\" \n src=\"https://upload.wikimedia.org/wikipedia/commons/thumb/2/26/Precisionrecall.svg/256px-Precisionrecall.svg.png\">\n</a>", "For email, what's more important: spam detection or ham protection?\nIn the case of your inbox, I don't think anyone wants to have legitimate email end up in the spam folder. 
On the other hand, your organization may be the target of phishing, and it may be important to filter out all spam aggressively. The answer to the question depends on the situation.\nResources\n\nNaive Bayes docs\nSpam dataset\nSentiment reviews\nPaper on classifiers\nBayesian inference\n\nExercises\nExercise 1\nLet's load some user reviews data and do a sentiment analysis. Download the text data from this UCI ML archive.\nCreate a classifier using Naive Bayes for one of the three datasets in the cell below. See how it performs on the other two sets of reviews. Comment on your approach to building features and why that may or may not work well for each dataset.", "url = ('https://archive.ics.uci.edu/ml/machine-learning-databases/'\n'00331/sentiment%20labelled%20sentences.zip')\n\ncols = ['message', 'sentiment']\nfolder = 'sentiment labelled sentences'\nprint('\\nYelp')\ndf_yelp = LoadZip(url, folder+'/yelp_labelled.txt', cols)\nprint('\\nAmazon')\ndf_amazon = LoadZip(url, folder+'/amazon_cells_labelled.txt', cols)\nprint('\\nImdb')\ndf_imdb = LoadZip(url, folder+'/imdb_labelled.txt', cols)", "Student Solution", "# Your answer goes here", "" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
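The argmax formula in the Colab above is essentially all that `BernoulliNB` computes. A numpy-only sketch of a Bernoulli Naive Bayes with Laplace smoothing follows; the toy "keyword present" features and all function names are illustrative stand-ins, not the Colab's actual data or sklearn's internals.

```python
import numpy as np

def fit_bernoulli_nb(X, y, alpha=1.0):
    """Estimate log P(y) and log P(x_i=1 | y) with Laplace smoothing."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    classes = np.unique(y)
    log_prior, log_p, log_q = [], [], []
    for c in classes:
        Xc = X[y == c]
        theta = (Xc.sum(axis=0) + alpha) / (len(Xc) + 2 * alpha)  # P(x_i=1|y=c)
        log_prior.append(np.log(len(Xc) / len(X)))
        log_p.append(np.log(theta))       # feature present
        log_q.append(np.log(1 - theta))   # feature absent
    return classes, np.array(log_prior), np.array(log_p), np.array(log_q)

def predict_bernoulli_nb(model, X):
    """y_hat = argmax_y [ log P(y) + sum_i log P(x_i|y) ]."""
    classes, log_prior, log_p, log_q = model
    X = np.asarray(X, dtype=float)
    # joint log-likelihood for every class, using x and (1-x) as indicators
    jll = log_prior + X @ log_p.T + (1 - X) @ log_q.T
    return classes[np.argmax(jll, axis=1)]

# Toy "keyword present" features: [win, free, cash]
X = [[1, 1, 1], [1, 0, 1], [0, 1, 1], [0, 0, 0], [0, 0, 0], [1, 0, 0]]
y = ['spam', 'spam', 'spam', 'ham', 'ham', 'ham']
model = fit_bernoulli_nb(X, y)
print(predict_bernoulli_nb(model, [[1, 1, 0], [0, 0, 0]]))  # → ['spam' 'ham']
```

Working in log space avoids underflow from multiplying many small probabilities, which is also how sklearn implements the product in the argmax formula.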
QuantCrimAtLeeds/PredictCode
notebooks/More on SaTScan.ipynb
artistic-2.0
[ "# Allow us to load `open_cp` without installing\nimport sys, os.path\nsys.path.insert(0, os.path.abspath(\"..\"))", "Comparison with SaTScan\nHaving discovered further trouble replicating the results of SaTScan, we introduce some more support for reading and writing SaTScan files, and test various corner cases.\nThe class AbstractSTScan works with \"generic time\" (so just numbers, now interpreted as some time unit before an epoch time). This allows us to concentrate on the details. We also introduce a more complicated rule about cases where the boundary of a disc contains more than one point (see below).\nThe class STScanNumpy takes the same data and settings as AbstractSTScan, but uses a vectorized numpy programming style to improve performance. Like the original implementation, and unlike AbstractSTScan, it does nothing special about events which fall on the boundary of discs.\nIn the following, we set the \"population\" limits to 50% (the SaTScan default) and set the maximum radius and time lengths to be effectively infinite, given the inputs. 
Hence these results should be directly comparable with the results from SaTScan for a \"Prospective Analyses\" / \"Space-Time\" and \"Space-Time Permutation\" with otherwise default options.\nSetup\n\nImport what we need\nVarious functions to build test data and get into the various classes which implement the different algorithms.", "%matplotlib inline\nimport matplotlib.pyplot as plt\n\nimport open_cp.stscan, open_cp.stscan2\nimport numpy as np\n\ndef make_random_data(s=100):\n times = np.floor(np.random.random(size=s) * 200)\n times.sort()\n times = np.flipud(times)\n coords = np.random.random(size=(2,s)) * 100\n return coords, times\n\ndef build_ab_scan(coords, times):\n ab_scan = open_cp.stscan2.AbstractSTScan(coords, times)\n ab_scan.geographic_radius_limit = 1000\n ab_scan.geographic_population_limit = 0.5\n ab_scan.time_max_interval = 200\n ab_scan.time_population_limit = 0.5\n return ab_scan\n\ndef build_stscan_numpy(coords, times):\n stsn = open_cp.stscan2.STScanNumpy(coords, times)\n stsn.geographic_radius_limit = 1000\n stsn.geographic_population_limit = 0.5\n stsn.time_max_interval = 200\n stsn.time_population_limit = 0.5\n return stsn\n\ndef to_timed_points(coords, times):\n \"\"\"Convert to days before 2017-04-01 and use `STSTrainerSlow`.\"\"\"\n timestamps = (np.timedelta64(1,\"D\") / np.timedelta64(1,\"s\")) * times * np.timedelta64(1,\"s\")\n timestamps = np.datetime64(\"2017-04-01T00:00\") - timestamps\n return open_cp.data.TimedPoints(timestamps, coords)\n\ndef build_trainer(coords, times):\n trainer = open_cp.stscan.STSTrainerSlow()\n trainer.data = to_timed_points(coords, times)\n trainer.time_max_interval = np.timedelta64(200,\"D\")\n trainer.time_population_limit = 0.5\n trainer.geographic_population_limit = 0.5\n trainer.geographic_radius_limit = 1000\n return trainer\n\ndef build_trainer_fast(coords, times):\n trainer = open_cp.stscan.STSTrainer()\n trainer.data = to_timed_points(coords, times)\n trainer.time_max_interval = 
np.timedelta64(200,\"D\")\n trainer.time_population_limit = 0.5\n trainer.geographic_population_limit = 0.5\n trainer.geographic_radius_limit = 1000\n return trainer", "Comparison\nWe find that most of the time, we obtain the same clusters. But sometimes we don't. This is down to:\n\nNon-deterministic ordering. If we compare things in different orders, we can break ties in different ways.\nAs the discs are always centred on events, it is possible for different discs to contain the same events. As we generate further clusters by finding the next most significant cluster which is disjoint for current clusters, if we again process things in a different order, then we can obtain different disks.\n\nFrom this point of view, obtaining perfect agreement with SaTScan seems an almost hopeless ideal!", "coords, times = make_random_data()\nab_scan = build_ab_scan(coords, times)\nall_clusters = list(ab_scan.find_all_clusters())\nfor c in all_clusters:\n print(c.centre, c.radius, c.time, c.statistic)\n\nstsn = build_stscan_numpy(coords, times)\nall_clusters = list(stsn.find_all_clusters())\nfor c in all_clusters:\n print(c.centre, c.radius, c.time, c.statistic)\n\ntrainer = build_trainer(coords, times)\nresult = trainer.predict(time=np.datetime64(\"2017-04-01T00:00\"))\nfor c, t, s in zip(result.clusters, result.time_ranges, result.statistics):\n assert np.datetime64(\"2017-04-01T00:00\") == t[1]\n t = (np.datetime64(\"2017-04-01T00:00\") - t[0]) / np.timedelta64(1,\"D\")\n print(c, t, s)\n\ntrainer_fast = build_trainer_fast(coords, times)\nresult_fast = trainer_fast.predict(time=np.datetime64(\"2017-04-01T00:00\"))\nfor c, t, s in zip(result_fast.clusters, result_fast.time_ranges, result_fast.statistics):\n assert np.datetime64(\"2017-04-01T00:00\") == t[1]\n t = (np.datetime64(\"2017-04-01T00:00\") - t[0]) / np.timedelta64(1,\"D\")\n print(c, t, s)\n\n# Plots the old \"trainer\" implementation against the abstract numpy implementation\n\nimport matplotlib.patches\n\nfig, ax = 
plt.subplots(figsize=(8,8))\nax.set(xlim=[-10,110], ylim=[-10,110])\nfor c in result.clusters:\n cir = matplotlib.patches.Circle(c.centre, c.radius, alpha=0.5)\n ax.add_patch(cir)\nfor c in all_clusters:\n cir = matplotlib.patches.Circle(c.centre, c.radius, alpha=0.5, color=\"red\")\n ax.add_patch(cir)", "Timings\nThe newer code in AbstractSTScan is a bit quicker. The Numpy code is definitely quicker.", "%timeit( list(ab_scan.find_all_clusters()) )\n\n%timeit( list(stsn.find_all_clusters()) )\n\n%timeit( trainer.predict() )\n\n%timeit( trainer_fast.predict() )", "Optionally save\nWe can write the data out in SaTScan format for comparison purposes. Be sure to adjust the Advanced Analysis options in SaTScan to reflect the settings we used above (no limit on the size of clusters, but a population limit of 50% for both space and time).", "#ab_scan.to_satscan(\"satscan_test2\", 1000)", "Gridded data\nWhere we have found quite different behaviour from SaTScan is in \"boundary\" behaviour. Consider the case when a disc's boundary (its circumference) contains more than one event. The STSTrainer class always considers all events inside or on the edge of the disc. But SaTScan will sometimes consider events inside the disc, and then only some of the events on the boundary.\nNotice in particular that we can expect this to happen a lot if the input data is on a regular grid.\nWe try to replicate this behaviour in AbstractSTScan by considering all possibilities of events on the boundary being counted or not. Unfortunately, we then seem to beat SaTScan at its own game, and consider too many subsets, resulting in finding clusters which SaTScan does not.\nThe first example below shows where AbstractSTScan is more aggressive than SaTScan. 
The 2nd example shows where SaTScan does indeed fail to consider all events in a disc, and gets the same result as AbstractSTScan.\nGenerate example random data\nWe use the grid abilities of STSTrainer.", "def trainer_to_data(trainer):\n coords = trainer.data.coords\n times = (np.datetime64(\"2017-04-01T00:00\") - trainer.data.timestamps) / np.timedelta64(1,\"s\")\n times /= (np.timedelta64(1,\"D\") / np.timedelta64(1,\"s\"))\n times = np.floor(times)\n \n return coords, times\n\nnp.testing.assert_array_almost_equal(trainer_to_data(trainer)[0], coords)\nnp.testing.assert_array_almost_equal(trainer_to_data(trainer)[1], times)\n\ntrainer = build_trainer(*make_random_data())\nregion = open_cp.RectangularRegion(xmin=0, ymin=0, xmax=100, ymax=100)\nab_scan = build_ab_scan( *trainer_to_data( trainer.grid_coords(region, grid_size=20) ) )\n\nall_clusters = list(ab_scan.find_all_clusters())\nfor c in all_clusters:\n print(c.centre, c.radius, c.time, c.statistic)\n\n#ab_scan.to_satscan(\"satscan_test1\", 1000)", "Reload some data\nHere's one we prepared earlier. 
It shows a case where our aggressive algorithm finds a cluster which SaTScan does not.", "def find_satscan_ids_for_mask(in_disc, time):\n in_disc &= ab_scan.timestamps <= time\n in_disc = set( (x,y) for x,y in ab_scan.coords[:,in_disc].T )\n return [i for i in satscan_data.geo if satscan_data.geo[i] in in_disc]\n\ndef find_mask(centre, radius):\n return np.sum((ab_scan.coords - np.array(centre)[:,None])**2, axis=0) <= radius**2\n\ndef to_our_indexes(sat_scan_indexes):\n out = set()\n for i in sat_scan_indexes:\n x, y = satscan_data.geo[i]\n m = (ab_scan.coords[0] == x) & (ab_scan.coords[1] == y)\n for j in np.arange(ab_scan.coords.shape[1])[m]:\n out.add(j)\n return out\n\nsatscan_data = open_cp.stscan2.SaTScanData(\"satscan_test3\", 1000)\nab_scan = build_ab_scan( *satscan_data.to_coords_time() )\n\nall_clusters = list(ab_scan.find_all_clusters())\nfor c in all_clusters:\n print(c.centre, c.radius, c.time, c.statistic)\n\n# Cluster which SaTScan finds -- In this case, seemingly SaTScan includes all events\nin_disc = find_mask([30,30], 20)\nfind_satscan_ids_for_mask(in_disc, 70)\n\n# Our cluster-- all events in or on the disc\nin_disc = find_mask([50,30], 20)\nfind_satscan_ids_for_mask(in_disc, 45)\n\n# The subset of events our algorithm chooses to use\nin_disc = all_clusters[0].mask\nfind_satscan_ids_for_mask(in_disc, 45)\n\n# The numpy code should, mostly, replicate what SaTScan does\nstsn = build_stscan_numpy( *satscan_data.to_coords_time() )\n\nall_clusters = list(stsn.find_all_clusters())\nfor c in all_clusters:\n print(c.centre, c.radius, c.time, c.statistic)", "2nd Example\nThis example actually seems to show SaTScan not including all points in a disc. 
SaTScan reports:\n1.Location IDs included.: 23, 6, 16\n Coordinates / radius..: (30,30) / 20.00\n Time frame............: 993 to 1000\n Number of cases.......: 3\n Expected cases........: 0.42\n Observed / expected...: 7.14\n Test statistic........: 3.352053\n P-value...............: 0.202\n Recurrence interval...: 5.0 units\n\nNow, we note that:\n- Event 23 occurs at times 967 and 924, which are both outside the time window.\n- The disc centred at (30,30) of radius 20 contains events 6, 11, 16, 23 and 24.\n- If we manually compute the statistic for this disc and time, we get the same value as SaTScan (to be precise, if we change the space window to only include events 6, 16 and 23, we obtain the same \"expected\" count).\n- The Numpy accelerated code fails to find this cluster, as it includes all events in the disc.", "satscan_data = open_cp.stscan2.SaTScanData(\"satscan_test1\", 1000)\ncoords, times = satscan_data.to_coords_time()\nab_scan = build_ab_scan(coords, times)\n\nall_clusters = list(ab_scan.find_all_clusters())\nfor c in all_clusters:\n print(c.centre, c.radius, c.time, c.statistic)\n\nin_disc = find_mask([30,30], 20)\nfind_satscan_ids_for_mask(in_disc,7)\n\nin_disc = find_mask([30,30], 20)\nfind_satscan_ids_for_mask(in_disc,10000)\n\nsatscan_data.geo[6], satscan_data.geo[16], satscan_data.geo[11], satscan_data.geo[23], satscan_data.geo[24]\n\ntime_mask = times <= 7\nspace_mask = np.sum( (coords - np.array([30,30])[:,None])**2, axis=0) <= 20**2\n\nexpected = np.sum(space_mask) * np.sum(time_mask) / 100\nactual = np.sum(space_mask & time_mask)\nactual, expected, ab_scan._statistic(actual, expected, 100)\n\n# The above statistic is smaller than the one SaTScan finds, because the expected count is too large\n# But if we limit the spatial region to the ids SaTScan claims, we obtain a perfect match\nexpected = len(to_our_indexes([23, 6, 16])) * np.sum(time_mask) / 100\nexpected\n\n# The numpy accelerated code doesn't find the same clusters\nstsn = 
build_stscan_numpy(coords, times)\n\nall_clusters = list(stsn.find_all_clusters())\nfor c in all_clusters:\n print(c.centre, c.radius, c.time, c.statistic)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
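The SaTScan notebooks above call the private `ab_scan._statistic(actual, expected, total)`. Assuming it implements the standard Kulldorff space-time-permutation log-likelihood ratio (an assumption - open_cp's exact formula is not shown here, and `scan_statistic` is a hypothetical stand-alone helper), the quoted counts for the (30,30) / 20.00 cluster (3 observed, 0.42 expected, 100 total) approximately reproduce the reported test statistic:

```python
from math import log

def scan_statistic(actual, expected, total):
    """Space-time permutation scan log-likelihood ratio (Kulldorff-style).

    `actual` is the observed case count in the space-time window, `expected`
    the count expected under independence of space and time, and `total` the
    number of cases overall. Only over-populated windows score above zero.
    """
    if actual <= expected:
        return 0.0
    stat = actual * log(actual / expected)
    if actual < total:
        stat += (total - actual) * log((total - actual) / (total - expected))
    return stat

# Counts quoted in the SaTScan report above: 3 observed, 0.42 expected, 100 total
print(round(scan_statistic(3, 0.42, 100), 3))  # → 3.352
```

The computed value agrees with SaTScan's reported 3.352053 to within the rounding of the expected count, consistent with the notebook's manual check.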
rabernat/xgcm
doc/grid_metrics.ipynb
mit
[ "Grid Metrics\nMost modern circulation models discretize the partial differential equations needed to simulate the earth system on a logically rectangular grid. This means the grid for a single time step can be represented as a 3-dimensional array of cells. Even for more complex grid geometries like here, subdomains are usually organized in this manner. A notable exception is models with unstructured grids, which currently cannot be processed with the data model of xarray and xgcm.\nOur grid operators work on the logically rectangular grid of an ocean model, meaning that e.g. differences are evaluated on the 'neighboring' cells in either direction; but even though these cells are adjacent, they can have different sizes and geometries.\nIn order to convert operators acting on the logically rectangular grid into physically meaningful output, models need 'metrics' - information about the grid cell geometry in physical space.\nIn the case of a perfectly rectangular cuboid, the only metrics needed would be three of the edge distances; all other distances can be reconstructed exactly. However, most ocean models have slightly distorted cells due to the curvature of the earth, so to accurately represent the volume of a cell we require more metrics. \nEach grid point has three kinds of fundamental metrics associated with it, which differ in the number of axes they describe:\n\nDistances: A distance is associated with a single axis (e.g. ('X',),('Y',) or ('Z',)). Each distance describes the distance from the point to either face of the cell associated with the grid point. \nAreas: An area is associated with a pair of axes (e.g. ('X', 'Y'), ('Y', 'Z') and ('X', 'Z')). 
Each grid point intersects three areas.\nVolume: The cell volume is unique for each cell and associated with all three axes (('X', 'Y', 'Z')).\n\nUsing metrics with xgcm\nOnce the user assigns the metrics (given as coordinates in most model output) to the grid object, xgcm is able to automatically select and apply these to calculate e.g. derivatives and integrals from model data. \n<div class=\"alert alert-info\">\n\n*Note*: xgcm does not currently check for alignment of missing values between data and metrics. The user needs to check and mask values appropriately\n\n</div>", "import xarray as xr\nimport numpy as np\nfrom xgcm import Grid\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# hack to make file name work with nbsphinx and binder\nimport os\nfname = '../datasets/mitgcm_example_dataset_v2.nc'\nif not os.path.exists(fname):\n fname = '../' + fname\n \nds = xr.open_dataset(fname)\nds", "For mitgcm output we need to first incorporate partial cell thicknesses into new metric coordinates", "ds['drW'] = ds.hFacW * ds.drF #vertical cell size at u point\nds['drS'] = ds.hFacS * ds.drF #vertical cell size at v point\nds['drC'] = ds.hFacC * ds.drF #vertical cell size at tracer point", "To assign the metrics, the user has to provide a dictionary with keys and entries corresponding to the spatial orientation of the metrics and a list of the appropriate variable names in the dataset.", "metrics = {\n ('X',): ['dxC', 'dxG'], # X distances \n ('Y',): ['dyC', 'dyG'], # Y distances \n ('Z',): ['drW', 'drS', 'drC'], # Z distances \n ('X', 'Y'): ['rA', 'rAz', 'rAs', 'rAw'] # Areas \n}\ngrid = Grid(ds, metrics=metrics)", "Grid aware integration\nIt is now possible to integrate over any grid axis. 
For example, we can integrate over the Z axis to compute the \ndiscretized version of:\n$$ \int_{-H}^0 u dz$$\nin one line:", "grid.integrate(ds.UVEL, 'Z').plot();", "This is equivalent to doing (ds.UVEL * ds.drW).sum('Z'), with the advantage of not having to remember the name of the appropriate metric and the matching dimension. The only thing the user needs to input is the axis to integrate over.", "a = grid.integrate(ds.UVEL, 'Z')\nb = (ds.UVEL * ds.drW).sum('Z')\nxr.testing.assert_equal(a, b)", "We can do the exact same thing on a tracer field (which is located on a different grid point) by using the exact same syntax:", "grid.integrate(ds.SALT, 'Z').plot();", "It also works in two dimensions:", "a = grid.integrate(ds.UVEL, ['X', 'Y'])\na.plot(y='Z')\n\n# Equivalent to integrating over area\nb = (ds.UVEL * ds.rAw).sum(['XG', 'YC'])\nxr.testing.assert_equal(a, b)", "And finally in 3 dimensions:", "print('Spatial integral of zonal velocity: ',grid.integrate(ds.UVEL, ['X', 'Y', 'Z']).values)", "But wait, we did not provide a cell volume when setting up the Grid. What happened?\nWhenever no matching metric is provided, xgcm defaults to reconstructing it from the other available metrics - in this case, the area and the z distance of the tracer cell:", "a = grid.integrate(ds.SALT, ['X', 'Y', 'Z'])\nb = (ds.SALT * ds.rA * ds.drC).sum(['XC', 'YC', 'Z'])\nxr.testing.assert_allclose(a, b)", "Grid-aware (weighted) average\nxgcm can also calculate the weighted average along each axis and combinations of axes. 
\nSee for example the vertical average of salinity:\n$$ \frac{\int_{-H}^0 S dz}{\int_{-H}^0 dz} $$", "# depth mean salinity\ngrid.average(ds.SALT, ['Z']).plot();", "Equivalently, this can be computed with the xgcm operations:\n(ds.SALT * ds.drF).sum('Z') / ds.drF.sum('Z')\nSee also the zonal velocity:", "# depth mean zonal velocity\ngrid.average(ds.UVEL, ['Z']).plot();", "This works with multiple dimensions as well:", "# horizontal average of zonal velocity\ngrid.average(ds.UVEL, ['X','Y']).plot(y='Z');\n\n# volume-weighted average salinity of the global ocean\nprint('Volume weighted average of salinity: ',grid.average(ds.SALT, ['X','Y', 'Z']).values)", "Cumulative integration\nUsing the metric-aware cumulative integration cumint, we can calculate the barotropic transport streamfunction even more easily and intuitively, in one line:", "# the streamfunction is the cumulative integral of the vertically integrated zonal velocity along y\npsi = grid.cumint(-grid.integrate(ds.UVEL,'Z'),'Y', boundary='fill')\n\nmaskZ = grid.interp(ds.hFacS, 'X').isel(Z=0)\n(psi / 1e6).squeeze().where(maskZ).plot.contourf(levels=np.arange(-160, 40, 5));", "Here, cumint is performing the discretized form of:\n$$ \psi = \int_{y_0}^{y} -U dy' $$\nwhere $U = \int_{-H}^0 u dz$, and under the hood looks like the following operation: \ngrid.cumsum( -grid.integrate(ds.UVEL,'Z') * ds.dyG, 'Y', boundary='fill')\nExcept that, once again, one does not have to remember the matching metric while using cumint.\nComputing derivatives\nIn a similar fashion to integration, xgcm uses metrics to compute derivatives. \nFor this example we show vertical shear, i.e.
the derivative of some quantity in the vertical.\nAt its core, derivative is based on diff, which shifts a data array \nto a new grid point, as shown\nhere.\nBecause of this shifting, we need to either define new metrics which live at the right points on the grid, \nor first interpolate the desired quantities, anticipating the shift.\nHere we choose the latter, and interpolate velocities and temperature onto the vertical cell faces of the grid \ncells.\nThe resulting quantities are in line with the vertical velocity w, which is shown in the vertical grid of the C \ngrid here.", "uvel_l = grid.interp(ds.UVEL,'Z')\nvvel_l = grid.interp(ds.VVEL,'Z')\ntheta_l = grid.interp(ds.THETA,'Z')", "The subscript \"l\" is used to denote a leftward shift on the vertical axis, following this nomenclature.\nAs a first example, we show zonal velocity shear in the top layer, which is the finite difference version of: \n$$ \frac{\partial u}{\partial z}\Big|_{z=-25m} $$", "zonal_shear = grid.derivative(uvel_l,'Z')\nzonal_shear.isel(Z=0).plot();", "and the underlying xgcm operations are: \ngrid.diff( uvel_l, 'Z' ) / ds.drW\nwhich is shown to be equivalent below:", "expected_result = (grid.diff( uvel_l, 'Z') ) /ds.drW\nxr.testing.assert_equal(zonal_shear, expected_result.reset_coords(drop=True))", "A note on dimensions: here we first interpolated from \"Z\"->\"Zl\" and \nthe derivative operation shifted the result back from \"Zl\"->\"Z\".", "print('1. ', ds.UVEL.dims)\nprint('2. ', uvel_l.dims)\nprint('3. ', zonal_shear.dims)", "For reference, the vertical profiles of the horizontal averages of zonal velocity \nand zonal velocity shear are shown below.", "fig,axs = plt.subplots(1,2,figsize=(12,8))\ntitles=['Horizontal average of zonal velocity, $u$',\n 'Horizontal average of zonal velocity shear, $\partial u/\partial z$']\nfor ax,fld,title in zip(axs,[ds.UVEL,zonal_shear],titles):\n \n # Only select non-land (a.k.a.
wet) points\n fld = fld.where(ds.maskW).isel(time=0).copy()\n grid.average(fld,['X','Y']).plot(ax=ax,y='Z')\n ax.grid();\n ax.set_title(title);", "And finally, for meridional velocity and temperature in the top layer:", "grid.derivative(vvel_l,'Z').isel(Z=0).plot();\n\ngrid.derivative(theta_l,'Z').isel(Z=0).plot();", "<div class=\"alert alert-info\">\n\n**Note:** The `.derivative` function performs a centered finite difference operation. \nKeep in mind that this is different from \n[finite volume differencing schemes](https://mitgcm.readthedocs.io/en/latest/algorithm/finitevol-meth.html) \nas used in many ocean models.\nSee [this section](https://xgcm.readthedocs.io/en/latest/example_mitgcm.html#Divergence) \nof documentation for some examples of how xgcm can be helpful in performing these operations.\n\n</div>\n\nMetric weighted interpolation\nFinally, grid metrics allow us to implement area-weighted interpolation schemes quite easily. First, however, we need to once again define new metrics in the horizontal:", "ds['dxF'] = grid.interp(ds.dxC,'X')\nds['dyF'] = grid.interp(ds.dyC,'Y')\n\nmetrics = {\n ('X',): ['dxC', 'dxG','dxF'], # X distances \n ('Y',): ['dyC', 'dyG','dyF'], # Y distances \n ('Z',): ['drW', 'drS', 'drC'], # Z distances \n ('X', 'Y'): ['rA', 'rAz', 'rAs', 'rAw'] # Areas \n}\ngrid = Grid(ds, metrics=metrics)", "Here we show temperature interpolated in the X direction: from the tracer location to where zonal velocity is located, i.e. from t to u in the horizontal view of the C grid shown here.", "grid.interp(ds.THETA.where(ds.maskC),'X',metric_weighted=['X','Y']).isel(Z=0).plot();", "This area weighted interpolation conserves tracer content in the horizontal, at least to first order \nas defined by the underlying interpolation operation. \nNote that in this example the difference between an area weighted interpolation and standard interpolation\n(i.e. arithmetic mean for first order) is quite small because the underlying field is smooth." ]
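The thickness-weighted vertical average used throughout the notebook above — `grid.average(ds.SALT, ['Z'])`, equivalent to `(ds.SALT * ds.drF).sum('Z') / ds.drF.sum('Z')` — reduces to a simple weighted mean. A minimal plain-numpy sketch, using made-up layer thicknesses and a made-up salinity column (both hypothetical, not the tutorial's dataset):

```python
import numpy as np

# Hypothetical vertical grid: cell thicknesses (m), analogous to ds.drF,
# and a salinity profile (psu) at cell centers, analogous to ds.SALT
drF = np.array([10.0, 20.0, 40.0, 80.0])
salt = np.array([35.0, 34.8, 34.6, 34.5])

# grid.average(SALT, ['Z']) boils down to the thickness-weighted mean:
depth_mean_salt = (salt * drF).sum() / drF.sum()
print(depth_mean_salt)  # ≈ 34.6
```

Note how the thick deep layers dominate the result, which is exactly why a plain unweighted `mean()` would be wrong on a stretched vertical grid.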
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
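The `cumint` operation from the streamfunction example above is, under the hood, a cumulative sum of metric-weighted values along one axis. A plain-numpy sketch with hypothetical depth-integrated transports `U` and grid spacings `dyG` (illustrative numbers, not model output):

```python
import numpy as np

# Hypothetical depth-integrated zonal transport U (m^2/s) at four y-points,
# and the meridional grid spacing dyG (m) between them
U = np.array([1.0, 2.0, -1.0, 0.5])
dyG = np.array([1000.0, 1000.0, 1000.0, 1000.0])

# psi = cumint(-U, 'Y') corresponds to cumsum(-U * dyG) along y
psi = np.cumsum(-U * dyG)
print(psi)  # [-1000. -3000. -2000. -2500.]
```

Each entry of `psi` is the streamfunction integrated from the southern boundary up to that point; xgcm's metric machinery simply supplies the correct `dyG` automatically.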
ageron/ml-notebooks
14_recurrent_neural_networks.ipynb
apache-2.0
[ "Chapter 14 – Recurrent Neural Networks\nThis notebook contains all the sample code and solutions to the exercises in chapter 14.\n<table align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/ageron/handson-ml/blob/master/14_recurrent_neural_networks.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n</table>\n\nWarning: this is the code for the 1st edition of the book. Please visit https://github.com/ageron/handson-ml2 for the 2nd edition code, with up-to-date notebooks using the latest library versions. In particular, the 1st edition is based on TensorFlow 1, while the 2nd edition uses TensorFlow 2, which is much simpler to use.\nSetup\nFirst, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:", "# To support both python 2 and python 3\nfrom __future__ import division, print_function, unicode_literals\n\n# Common imports\nimport numpy as np\nimport os\n\ntry:\n # %tensorflow_version only exists in Colab.\n %tensorflow_version 1.x\nexcept Exception:\n pass\n\n# to make this notebook's output stable across runs\ndef reset_graph(seed=42):\n tf.reset_default_graph()\n tf.set_random_seed(seed)\n np.random.seed(seed)\n\n# To plot pretty figures\n%matplotlib inline\nimport matplotlib\nimport matplotlib.pyplot as plt\nplt.rcParams['axes.labelsize'] = 14\nplt.rcParams['xtick.labelsize'] = 12\nplt.rcParams['ytick.labelsize'] = 12\n\n# Where to save the figures\nPROJECT_ROOT_DIR = \".\"\nCHAPTER_ID = \"rnn\"\nIMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, \"images\", CHAPTER_ID)\nos.makedirs(IMAGES_PATH, exist_ok=True)\n\ndef save_fig(fig_id, tight_layout=True, fig_extension=\"png\", resolution=300):\n path = os.path.join(IMAGES_PATH, fig_id + \".\" + fig_extension)\n print(\"Saving figure\", fig_id)\n if tight_layout:\n plt.tight_layout()\n 
plt.savefig(path, format=fig_extension, dpi=resolution)", "Then of course we will need TensorFlow:", "import tensorflow as tf", "Basic RNNs\nManual RNN", "reset_graph()\n\nn_inputs = 3\nn_neurons = 5\n\nX0 = tf.placeholder(tf.float32, [None, n_inputs])\nX1 = tf.placeholder(tf.float32, [None, n_inputs])\n\nWx = tf.Variable(tf.random_normal(shape=[n_inputs, n_neurons],dtype=tf.float32))\nWy = tf.Variable(tf.random_normal(shape=[n_neurons,n_neurons],dtype=tf.float32))\nb = tf.Variable(tf.zeros([1, n_neurons], dtype=tf.float32))\n\nY0 = tf.tanh(tf.matmul(X0, Wx) + b)\nY1 = tf.tanh(tf.matmul(Y0, Wy) + tf.matmul(X1, Wx) + b)\n\ninit = tf.global_variables_initializer()\n\nimport numpy as np\n\nX0_batch = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 0, 1]]) # t = 0\nX1_batch = np.array([[9, 8, 7], [0, 0, 0], [6, 5, 4], [3, 2, 1]]) # t = 1\n\nwith tf.Session() as sess:\n init.run()\n Y0_val, Y1_val = sess.run([Y0, Y1], feed_dict={X0: X0_batch, X1: X1_batch})\n\nprint(Y0_val)\n\nprint(Y1_val)", "Using static_rnn()\nNote: tf.contrib.rnn was partially moved to the core API in TensorFlow 1.2. 
Most of the *Cell and *Wrapper classes are now available in tf.nn.rnn_cell, and the tf.contrib.rnn.static_rnn() function is available as tf.nn.static_rnn().", "n_inputs = 3\nn_neurons = 5\n\nreset_graph()\n\nX0 = tf.placeholder(tf.float32, [None, n_inputs])\nX1 = tf.placeholder(tf.float32, [None, n_inputs])\n\nbasic_cell = tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons)\noutput_seqs, states = tf.nn.static_rnn(basic_cell, [X0, X1],\n dtype=tf.float32)\nY0, Y1 = output_seqs\n\ninit = tf.global_variables_initializer()\n\nX0_batch = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 0, 1]])\nX1_batch = np.array([[9, 8, 7], [0, 0, 0], [6, 5, 4], [3, 2, 1]])\n\nwith tf.Session() as sess:\n init.run()\n Y0_val, Y1_val = sess.run([Y0, Y1], feed_dict={X0: X0_batch, X1: X1_batch})\n\nY0_val\n\nY1_val\n\nfrom datetime import datetime\n\nroot_logdir = os.path.join(os.curdir, \"tf_logs\")\n\ndef make_log_subdir(run_id=None):\n if run_id is None:\n run_id = datetime.utcnow().strftime(\"%Y%m%d%H%M%S\")\n return \"{}/run-{}/\".format(root_logdir, run_id)\n\ndef save_graph(graph=None, run_id=None):\n if graph is None:\n graph = tf.get_default_graph()\n logdir = make_log_subdir(run_id)\n file_writer = tf.summary.FileWriter(logdir, graph=graph)\n file_writer.close()\n return logdir\n\nsave_graph()\n\n%load_ext tensorboard\n\n%tensorboard --logdir {root_logdir}", "Packing sequences", "n_steps = 2\nn_inputs = 3\nn_neurons = 5\n\nreset_graph()\n\nX = tf.placeholder(tf.float32, [None, n_steps, n_inputs])\nX_seqs = tf.unstack(tf.transpose(X, perm=[1, 0, 2]))\n\nbasic_cell = tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons)\noutput_seqs, states = tf.nn.static_rnn(basic_cell, X_seqs,\n dtype=tf.float32)\noutputs = tf.transpose(tf.stack(output_seqs), perm=[1, 0, 2])\n\ninit = tf.global_variables_initializer()\n\nX_batch = np.array([\n # t = 0 t = 1 \n [[0, 1, 2], [9, 8, 7]], # instance 1\n [[3, 4, 5], [0, 0, 0]], # instance 2\n [[6, 7, 8], [6, 5, 4]], # instance 3\n [[9, 0, 1], [3, 2, 1]], # 
instance 4\n ])\n\nwith tf.Session() as sess:\n init.run()\n outputs_val = outputs.eval(feed_dict={X: X_batch})\n\nprint(outputs_val)\n\nprint(np.transpose(outputs_val, axes=[1, 0, 2])[1])", "Using dynamic_rnn()", "n_steps = 2\nn_inputs = 3\nn_neurons = 5\n\nreset_graph()\n\nX = tf.placeholder(tf.float32, [None, n_steps, n_inputs])\n\nbasic_cell = tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons)\noutputs, states = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32)\n\ninit = tf.global_variables_initializer()\n\nX_batch = np.array([\n [[0, 1, 2], [9, 8, 7]], # instance 1\n [[3, 4, 5], [0, 0, 0]], # instance 2\n [[6, 7, 8], [6, 5, 4]], # instance 3\n [[9, 0, 1], [3, 2, 1]], # instance 4\n ])\n\nwith tf.Session() as sess:\n init.run()\n outputs_val = outputs.eval(feed_dict={X: X_batch})\n\nprint(outputs_val)\n\nsave_graph()\n\n%tensorboard --logdir {root_logdir}", "Setting the sequence lengths", "n_steps = 2\nn_inputs = 3\nn_neurons = 5\n\nreset_graph()\n\nX = tf.placeholder(tf.float32, [None, n_steps, n_inputs])\nbasic_cell = tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons)\n\nseq_length = tf.placeholder(tf.int32, [None])\noutputs, states = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32,\n sequence_length=seq_length)\n\ninit = tf.global_variables_initializer()\n\nX_batch = np.array([\n # step 0 step 1\n [[0, 1, 2], [9, 8, 7]], # instance 1\n [[3, 4, 5], [0, 0, 0]], # instance 2 (padded with zero vectors)\n [[6, 7, 8], [6, 5, 4]], # instance 3\n [[9, 0, 1], [3, 2, 1]], # instance 4\n ])\nseq_length_batch = np.array([2, 1, 2, 2])\n\nwith tf.Session() as sess:\n init.run()\n outputs_val, states_val = sess.run(\n [outputs, states], feed_dict={X: X_batch, seq_length: seq_length_batch})\n\nprint(outputs_val)\n\nprint(states_val)", "Training a sequence classifier\nNote: the book uses tensorflow.contrib.layers.fully_connected() rather than tf.layers.dense() (which did not exist when this chapter was written). 
It is now preferable to use tf.layers.dense(), because anything in the contrib module may change or be deleted without notice. The dense() function is almost identical to the fully_connected() function. The main differences relevant to this chapter are:\n* several parameters are renamed: scope becomes name, activation_fn becomes activation (and similarly the _fn suffix is removed from other parameters such as normalizer_fn), weights_initializer becomes kernel_initializer, etc.\n* the default activation is now None rather than tf.nn.relu.", "reset_graph()\n\nn_steps = 28\nn_inputs = 28\nn_neurons = 150\nn_outputs = 10\n\nlearning_rate = 0.001\n\nX = tf.placeholder(tf.float32, [None, n_steps, n_inputs])\ny = tf.placeholder(tf.int32, [None])\n\nbasic_cell = tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons)\noutputs, states = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32)\n\nlogits = tf.layers.dense(states, n_outputs)\nxentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y,\n logits=logits)\nloss = tf.reduce_mean(xentropy)\noptimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)\ntraining_op = optimizer.minimize(loss)\ncorrect = tf.nn.in_top_k(logits, y, 1)\naccuracy = tf.reduce_mean(tf.cast(correct, tf.float32))\n\ninit = tf.global_variables_initializer()", "Warning: tf.examples.tutorials.mnist is deprecated. 
We will use tf.keras.datasets.mnist instead.", "(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()\nX_train = X_train.astype(np.float32).reshape(-1, 28*28) / 255.0\nX_test = X_test.astype(np.float32).reshape(-1, 28*28) / 255.0\ny_train = y_train.astype(np.int32)\ny_test = y_test.astype(np.int32)\nX_valid, X_train = X_train[:5000], X_train[5000:]\ny_valid, y_train = y_train[:5000], y_train[5000:]\n\ndef shuffle_batch(X, y, batch_size):\n rnd_idx = np.random.permutation(len(X))\n n_batches = len(X) // batch_size\n for batch_idx in np.array_split(rnd_idx, n_batches):\n X_batch, y_batch = X[batch_idx], y[batch_idx]\n yield X_batch, y_batch\n\nX_test = X_test.reshape((-1, n_steps, n_inputs))\n\nn_epochs = 100\nbatch_size = 150\n\nwith tf.Session() as sess:\n init.run()\n for epoch in range(n_epochs):\n for X_batch, y_batch in shuffle_batch(X_train, y_train, batch_size):\n X_batch = X_batch.reshape((-1, n_steps, n_inputs))\n sess.run(training_op, feed_dict={X: X_batch, y: y_batch})\n acc_batch = accuracy.eval(feed_dict={X: X_batch, y: y_batch})\n acc_test = accuracy.eval(feed_dict={X: X_test, y: y_test})\n print(epoch, \"Last batch accuracy:\", acc_batch, \"Test accuracy:\", acc_test)", "Multi-layer RNN", "reset_graph()\n\nn_steps = 28\nn_inputs = 28\nn_outputs = 10\n\nlearning_rate = 0.001\n\nX = tf.placeholder(tf.float32, [None, n_steps, n_inputs])\ny = tf.placeholder(tf.int32, [None])\n\nn_neurons = 100\nn_layers = 3\n\nlayers = [tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons,\n activation=tf.nn.relu)\n for layer in range(n_layers)]\nmulti_layer_cell = tf.nn.rnn_cell.MultiRNNCell(layers)\noutputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)\n\nstates_concat = tf.concat(axis=1, values=states)\nlogits = tf.layers.dense(states_concat, n_outputs)\nxentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)\nloss = tf.reduce_mean(xentropy)\noptimizer = 
tf.train.AdamOptimizer(learning_rate=learning_rate)\ntraining_op = optimizer.minimize(loss)\ncorrect = tf.nn.in_top_k(logits, y, 1)\naccuracy = tf.reduce_mean(tf.cast(correct, tf.float32))\n\ninit = tf.global_variables_initializer()\n\nn_epochs = 10\nbatch_size = 150\n\nwith tf.Session() as sess:\n init.run()\n for epoch in range(n_epochs):\n for X_batch, y_batch in shuffle_batch(X_train, y_train, batch_size):\n X_batch = X_batch.reshape((-1, n_steps, n_inputs))\n sess.run(training_op, feed_dict={X: X_batch, y: y_batch})\n acc_batch = accuracy.eval(feed_dict={X: X_batch, y: y_batch})\n acc_test = accuracy.eval(feed_dict={X: X_test, y: y_test})\n print(epoch, \"Last batch accuracy:\", acc_batch, \"Test accuracy:\", acc_test)", "Time series", "t_min, t_max = 0, 30\nresolution = 0.1\n\ndef time_series(t):\n return t * np.sin(t) / 3 + 2 * np.sin(t*5)\n\ndef next_batch(batch_size, n_steps):\n t0 = np.random.rand(batch_size, 1) * (t_max - t_min - n_steps * resolution)\n Ts = t0 + np.arange(0., n_steps + 1) * resolution\n ys = time_series(Ts)\n return ys[:, :-1].reshape(-1, n_steps, 1), ys[:, 1:].reshape(-1, n_steps, 1)\n\nt = np.linspace(t_min, t_max, int((t_max - t_min) / resolution))\n\nn_steps = 20\nt_instance = np.linspace(12.2, 12.2 + resolution * (n_steps + 1), n_steps + 1)\n\nplt.figure(figsize=(11,4))\nplt.subplot(121)\nplt.title(\"A time series (generated)\", fontsize=14)\nplt.plot(t, time_series(t), label=r\"$t . \\sin(t) / 3 + 2 . 
\\sin(5t)$\")\nplt.plot(t_instance[:-1], time_series(t_instance[:-1]), \"b-\", linewidth=3, label=\"A training instance\")\nplt.legend(loc=\"lower left\", fontsize=14)\nplt.axis([0, 30, -17, 13])\nplt.xlabel(\"Time\")\nplt.ylabel(\"Value\")\n\nplt.subplot(122)\nplt.title(\"A training instance\", fontsize=14)\nplt.plot(t_instance[:-1], time_series(t_instance[:-1]), \"bo\", markersize=10, label=\"instance\")\nplt.plot(t_instance[1:], time_series(t_instance[1:]), \"w*\", markersize=10, label=\"target\")\nplt.legend(loc=\"upper left\")\nplt.xlabel(\"Time\")\n\n\nsave_fig(\"time_series_plot\")\nplt.show()\n\nX_batch, y_batch = next_batch(1, n_steps)\n\nnp.c_[X_batch[0], y_batch[0]]", "Using an OutputProjectionWrapper\nLet's create the RNN. It will contain 100 recurrent neurons and we will unroll it over 20 time steps since each training instance will be 20 inputs long. Each input will contain only one feature (the value at that time). The targets are also sequences of 20 inputs, each containing a single value:", "reset_graph()\n\nn_steps = 20\nn_inputs = 1\nn_neurons = 100\nn_outputs = 1\n\nX = tf.placeholder(tf.float32, [None, n_steps, n_inputs])\ny = tf.placeholder(tf.float32, [None, n_steps, n_outputs])\n\ncell = tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons, activation=tf.nn.relu)\noutputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)", "At each time step we now have an output vector of size 100. But what we actually want is a single output value at each time step. 
The simplest solution is to wrap the cell in an OutputProjectionWrapper.", "reset_graph()\n\nn_steps = 20\nn_inputs = 1\nn_neurons = 100\nn_outputs = 1\n\nX = tf.placeholder(tf.float32, [None, n_steps, n_inputs])\ny = tf.placeholder(tf.float32, [None, n_steps, n_outputs])\n\ncell = tf.contrib.rnn.OutputProjectionWrapper(\n tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons, activation=tf.nn.relu),\n output_size=n_outputs)\n\noutputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)\n\nlearning_rate = 0.001\n\nloss = tf.reduce_mean(tf.square(outputs - y)) # MSE\noptimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)\ntraining_op = optimizer.minimize(loss)\n\ninit = tf.global_variables_initializer()\n\nsaver = tf.train.Saver()\n\nn_iterations = 1500\nbatch_size = 50\n\nwith tf.Session() as sess:\n init.run()\n for iteration in range(n_iterations):\n X_batch, y_batch = next_batch(batch_size, n_steps)\n sess.run(training_op, feed_dict={X: X_batch, y: y_batch})\n if iteration % 100 == 0:\n mse = loss.eval(feed_dict={X: X_batch, y: y_batch})\n print(iteration, \"\\tMSE:\", mse)\n \n saver.save(sess, \"./my_time_series_model\") # not shown in the book\n\nwith tf.Session() as sess: # not shown in the book\n saver.restore(sess, \"./my_time_series_model\") # not shown\n\n X_new = time_series(np.array(t_instance[:-1].reshape(-1, n_steps, n_inputs)))\n y_pred = sess.run(outputs, feed_dict={X: X_new})\n\ny_pred\n\nplt.title(\"Testing the model\", fontsize=14)\nplt.plot(t_instance[:-1], time_series(t_instance[:-1]), \"bo\", markersize=10, label=\"instance\")\nplt.plot(t_instance[1:], time_series(t_instance[1:]), \"w*\", markersize=10, label=\"target\")\nplt.plot(t_instance[1:], y_pred[0,:,0], \"r.\", markersize=10, label=\"prediction\")\nplt.legend(loc=\"upper left\")\nplt.xlabel(\"Time\")\n\nsave_fig(\"time_series_pred_plot\")\nplt.show()", "Without using an OutputProjectionWrapper", "reset_graph()\n\nn_steps = 20\nn_inputs = 1\nn_neurons = 100\n\nX = 
tf.placeholder(tf.float32, [None, n_steps, n_inputs])\nn_outputs = 1\ny = tf.placeholder(tf.float32, [None, n_steps, n_outputs])\n\ncell = tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons, activation=tf.nn.relu)\nrnn_outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)\n\nlearning_rate = 0.001\n\nstacked_rnn_outputs = tf.reshape(rnn_outputs, [-1, n_neurons])\nstacked_outputs = tf.layers.dense(stacked_rnn_outputs, n_outputs)\noutputs = tf.reshape(stacked_outputs, [-1, n_steps, n_outputs])\n\nloss = tf.reduce_mean(tf.square(outputs - y))\noptimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)\ntraining_op = optimizer.minimize(loss)\n\ninit = tf.global_variables_initializer()\nsaver = tf.train.Saver()\n\nn_iterations = 1500\nbatch_size = 50\n\nwith tf.Session() as sess:\n init.run()\n for iteration in range(n_iterations):\n X_batch, y_batch = next_batch(batch_size, n_steps)\n sess.run(training_op, feed_dict={X: X_batch, y: y_batch})\n if iteration % 100 == 0:\n mse = loss.eval(feed_dict={X: X_batch, y: y_batch})\n print(iteration, \"\\tMSE:\", mse)\n \n X_new = time_series(np.array(t_instance[:-1].reshape(-1, n_steps, n_inputs)))\n y_pred = sess.run(outputs, feed_dict={X: X_new})\n \n saver.save(sess, \"./my_time_series_model\")\n\ny_pred\n\nplt.title(\"Testing the model\", fontsize=14)\nplt.plot(t_instance[:-1], time_series(t_instance[:-1]), \"bo\", markersize=10, label=\"instance\")\nplt.plot(t_instance[1:], time_series(t_instance[1:]), \"w*\", markersize=10, label=\"target\")\nplt.plot(t_instance[1:], y_pred[0,:,0], \"r.\", markersize=10, label=\"prediction\")\nplt.legend(loc=\"upper left\")\nplt.xlabel(\"Time\")\n\nplt.show()", "Generating a creative new sequence", "with tf.Session() as sess: # not shown in the book\n saver.restore(sess, \"./my_time_series_model\") # not shown\n\n sequence = [0.] 
* n_steps\n for iteration in range(300):\n X_batch = np.array(sequence[-n_steps:]).reshape(1, n_steps, 1)\n y_pred = sess.run(outputs, feed_dict={X: X_batch})\n sequence.append(y_pred[0, -1, 0])\n\nplt.figure(figsize=(8,4))\nplt.plot(np.arange(len(sequence)), sequence, \"b-\")\nplt.plot(t[:n_steps], sequence[:n_steps], \"b-\", linewidth=3)\nplt.xlabel(\"Time\")\nplt.ylabel(\"Value\")\nplt.show()\n\nwith tf.Session() as sess:\n saver.restore(sess, \"./my_time_series_model\")\n\n sequence1 = [0. for i in range(n_steps)]\n for iteration in range(len(t) - n_steps):\n X_batch = np.array(sequence1[-n_steps:]).reshape(1, n_steps, 1)\n y_pred = sess.run(outputs, feed_dict={X: X_batch})\n sequence1.append(y_pred[0, -1, 0])\n\n sequence2 = [time_series(i * resolution + t_min + (t_max-t_min/3)) for i in range(n_steps)]\n for iteration in range(len(t) - n_steps):\n X_batch = np.array(sequence2[-n_steps:]).reshape(1, n_steps, 1)\n y_pred = sess.run(outputs, feed_dict={X: X_batch})\n sequence2.append(y_pred[0, -1, 0])\n\nplt.figure(figsize=(11,4))\nplt.subplot(121)\nplt.plot(t, sequence1, \"b-\")\nplt.plot(t[:n_steps], sequence1[:n_steps], \"b-\", linewidth=3)\nplt.xlabel(\"Time\")\nplt.ylabel(\"Value\")\n\nplt.subplot(122)\nplt.plot(t, sequence2, \"b-\")\nplt.plot(t[:n_steps], sequence2[:n_steps], \"b-\", linewidth=3)\nplt.xlabel(\"Time\")\nsave_fig(\"creative_sequence_plot\")\nplt.show()", "Deep RNN\nMultiRNNCell", "reset_graph()\n\nn_inputs = 2\nn_steps = 5\n\nX = tf.placeholder(tf.float32, [None, n_steps, n_inputs])\n\nn_neurons = 100\nn_layers = 3\n\nlayers = [tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons)\n for layer in range(n_layers)]\nmulti_layer_cell = tf.nn.rnn_cell.MultiRNNCell(layers)\noutputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)\n\ninit = tf.global_variables_initializer()\n\nX_batch = np.random.rand(2, n_steps, n_inputs)\n\nwith tf.Session() as sess:\n init.run()\n outputs_val, states_val = sess.run([outputs, states], feed_dict={X: 
X_batch})\n\noutputs_val.shape", "Distributing a Deep RNN Across Multiple GPUs\nDo NOT do this:", "with tf.device(\"/gpu:0\"): # BAD! This is ignored.\n layer1 = tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons)\n\nwith tf.device(\"/gpu:1\"): # BAD! Ignored again.\n layer2 = tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons)", "Instead, you need a DeviceCellWrapper:", "import tensorflow as tf\n\nclass DeviceCellWrapper(tf.nn.rnn_cell.RNNCell):\n def __init__(self, device, cell):\n self._cell = cell\n self._device = device\n\n @property\n def state_size(self):\n return self._cell.state_size\n\n @property\n def output_size(self):\n return self._cell.output_size\n\n def __call__(self, inputs, state, scope=None):\n with tf.device(self._device):\n return self._cell(inputs, state, scope)\n\nreset_graph()\n\nn_inputs = 5\nn_steps = 20\nn_neurons = 100\n\nX = tf.placeholder(tf.float32, shape=[None, n_steps, n_inputs])\n\ndevices = [\"/cpu:0\", \"/cpu:0\", \"/cpu:0\"] # replace with [\"/gpu:0\", \"/gpu:1\", \"/gpu:2\"] if you have 3 GPUs\ncells = [DeviceCellWrapper(dev,tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons))\n for dev in devices]\nmulti_layer_cell = tf.nn.rnn_cell.MultiRNNCell(cells)\noutputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)", "Alternatively, since TensorFlow 1.1, you can use the tf.contrib.rnn.DeviceWrapper class (alias tf.nn.rnn_cell.DeviceWrapper since TF 1.2).", "init = tf.global_variables_initializer()\n\nwith tf.Session() as sess:\n init.run()\n print(sess.run(outputs, feed_dict={X: np.random.rand(2, n_steps, n_inputs)}))", "Dropout", "reset_graph()\n\nn_inputs = 1\nn_neurons = 100\nn_layers = 3\nn_steps = 20\nn_outputs = 1\n\nX = tf.placeholder(tf.float32, [None, n_steps, n_inputs])\ny = tf.placeholder(tf.float32, [None, n_steps, n_outputs])", "Note: the input_keep_prob parameter can be a placeholder, making it possible to set it to any value you want during training, and to 1.0 during testing (effectively turning dropout 
off). This is a much more elegant solution than what was recommended in earlier versions of the book (i.e., writing your own wrapper class or having a separate model for training and testing). Thanks to Shen Cheng for bringing this to my attention.", "keep_prob = tf.placeholder_with_default(1.0, shape=())\ncells = [tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons)\n for layer in range(n_layers)]\ncells_drop = [tf.nn.rnn_cell.DropoutWrapper(cell, input_keep_prob=keep_prob)\n for cell in cells]\nmulti_layer_cell = tf.nn.rnn_cell.MultiRNNCell(cells_drop)\nrnn_outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)\n\nlearning_rate = 0.01\n\nstacked_rnn_outputs = tf.reshape(rnn_outputs, [-1, n_neurons])\nstacked_outputs = tf.layers.dense(stacked_rnn_outputs, n_outputs)\noutputs = tf.reshape(stacked_outputs, [-1, n_steps, n_outputs])\n\nloss = tf.reduce_mean(tf.square(outputs - y))\noptimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)\ntraining_op = optimizer.minimize(loss)\n\ninit = tf.global_variables_initializer()\nsaver = tf.train.Saver()\n\nn_iterations = 1500\nbatch_size = 50\ntrain_keep_prob = 0.5\n\nwith tf.Session() as sess:\n init.run()\n for iteration in range(n_iterations):\n X_batch, y_batch = next_batch(batch_size, n_steps)\n _, mse = sess.run([training_op, loss],\n feed_dict={X: X_batch, y: y_batch,\n keep_prob: train_keep_prob})\n if iteration % 100 == 0: # not shown in the book\n print(iteration, \"Training MSE:\", mse) # not shown\n \n saver.save(sess, \"./my_dropout_time_series_model\")\n\nwith tf.Session() as sess:\n saver.restore(sess, \"./my_dropout_time_series_model\")\n\n X_new = time_series(np.array(t_instance[:-1].reshape(-1, n_steps, n_inputs)))\n y_pred = sess.run(outputs, feed_dict={X: X_new})\n\nplt.title(\"Testing the model\", fontsize=14)\nplt.plot(t_instance[:-1], time_series(t_instance[:-1]), \"bo\", markersize=10, label=\"instance\")\nplt.plot(t_instance[1:], time_series(t_instance[1:]), \"w*\", 
markersize=10, label=\"target\")\nplt.plot(t_instance[1:], y_pred[0,:,0], \"r.\", markersize=10, label=\"prediction\")\nplt.legend(loc=\"upper left\")\nplt.xlabel(\"Time\")\n\nplt.show()", "Oops, it seems that Dropout does not help at all in this particular case. :/\nLSTM", "reset_graph()\n\nlstm_cell = tf.nn.rnn_cell.BasicLSTMCell(num_units=n_neurons)\n\nn_steps = 28\nn_inputs = 28\nn_neurons = 150\nn_outputs = 10\nn_layers = 3\n\nlearning_rate = 0.001\n\nX = tf.placeholder(tf.float32, [None, n_steps, n_inputs])\ny = tf.placeholder(tf.int32, [None])\n\nlstm_cells = [tf.nn.rnn_cell.BasicLSTMCell(num_units=n_neurons)\n for layer in range(n_layers)]\nmulti_cell = tf.nn.rnn_cell.MultiRNNCell(lstm_cells)\noutputs, states = tf.nn.dynamic_rnn(multi_cell, X, dtype=tf.float32)\ntop_layer_h_state = states[-1][1]\nlogits = tf.layers.dense(top_layer_h_state, n_outputs, name=\"softmax\")\nxentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)\nloss = tf.reduce_mean(xentropy, name=\"loss\")\noptimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)\ntraining_op = optimizer.minimize(loss)\ncorrect = tf.nn.in_top_k(logits, y, 1)\naccuracy = tf.reduce_mean(tf.cast(correct, tf.float32))\n \ninit = tf.global_variables_initializer()\n\nstates\n\ntop_layer_h_state\n\nn_epochs = 10\nbatch_size = 150\n\nwith tf.Session() as sess:\n init.run()\n for epoch in range(n_epochs):\n for X_batch, y_batch in shuffle_batch(X_train, y_train, batch_size):\n X_batch = X_batch.reshape((-1, n_steps, n_inputs))\n sess.run(training_op, feed_dict={X: X_batch, y: y_batch})\n acc_batch = accuracy.eval(feed_dict={X: X_batch, y: y_batch})\n acc_test = accuracy.eval(feed_dict={X: X_test, y: y_test})\n print(epoch, \"Last batch accuracy:\", acc_batch, \"Test accuracy:\", acc_test)\n\nlstm_cell = tf.nn.rnn_cell.LSTMCell(num_units=n_neurons, use_peepholes=True)\n\ngru_cell = tf.nn.rnn_cell.GRUCell(num_units=n_neurons)", "Embeddings\nThis section is based on TensorFlow's Word2Vec 
tutorial.\nFetch the data", "import urllib.request\n\nimport errno\nimport os\nimport zipfile\n\nWORDS_PATH = \"datasets/words\"\nWORDS_URL = 'http://mattmahoney.net/dc/text8.zip'\n\ndef mkdir_p(path):\n \"\"\"Create directories, ok if they already exist.\n \n This is for python 2 support. In python >=3.2, simply use:\n >>> os.makedirs(path, exist_ok=True)\n \"\"\"\n try:\n os.makedirs(path)\n except OSError as exc:\n if exc.errno == errno.EEXIST and os.path.isdir(path):\n pass\n else:\n raise\n\ndef fetch_words_data(words_url=WORDS_URL, words_path=WORDS_PATH):\n os.makedirs(words_path, exist_ok=True)\n zip_path = os.path.join(words_path, \"words.zip\")\n if not os.path.exists(zip_path):\n urllib.request.urlretrieve(words_url, zip_path)\n with zipfile.ZipFile(zip_path) as f:\n data = f.read(f.namelist()[0])\n return data.decode(\"ascii\").split()\n\nwords = fetch_words_data()\n\nwords[:5]", "Build the dictionary", "from collections import Counter\n\nvocabulary_size = 50000\n\nvocabulary = [(\"UNK\", None)] + Counter(words).most_common(vocabulary_size - 1)\nvocabulary = np.array([word for word, _ in vocabulary])\ndictionary = {word: code for code, word in enumerate(vocabulary)}\ndata = np.array([dictionary.get(word, 0) for word in words])\n\n\" \".join(words[:9]), data[:9]\n\n\" \".join([vocabulary[word_index] for word_index in [5241, 3081, 12, 6, 195, 2, 3134, 46, 59]])\n\nwords[24], data[24]", "Generate batches", "from collections import deque\n\ndef generate_batch(batch_size, num_skips, skip_window):\n global data_index\n assert batch_size % num_skips == 0\n assert num_skips <= 2 * skip_window\n batch = np.ndarray(shape=[batch_size], dtype=np.int32)\n labels = np.ndarray(shape=[batch_size, 1], dtype=np.int32)\n span = 2 * skip_window + 1 # [ skip_window target skip_window ]\n buffer = deque(maxlen=span)\n for _ in range(span):\n buffer.append(data[data_index])\n data_index = (data_index + 1) % len(data)\n for i in range(batch_size // num_skips):\n target = 
skip_window # target label at the center of the buffer\n targets_to_avoid = [ skip_window ]\n for j in range(num_skips):\n while target in targets_to_avoid:\n target = np.random.randint(0, span)\n targets_to_avoid.append(target)\n batch[i * num_skips + j] = buffer[skip_window]\n labels[i * num_skips + j, 0] = buffer[target]\n buffer.append(data[data_index])\n data_index = (data_index + 1) % len(data)\n return batch, labels\n\nnp.random.seed(42)\n\ndata_index = 0\nbatch, labels = generate_batch(8, 2, 1)\n\nbatch, [vocabulary[word] for word in batch]\n\nlabels, [vocabulary[word] for word in labels[:, 0]]", "Build the model", "batch_size = 128\nembedding_size = 128 # Dimension of the embedding vector.\nskip_window = 1 # How many words to consider left and right.\nnum_skips = 2 # How many times to reuse an input to generate a label.\n\n# We pick a random validation set to sample nearest neighbors. Here we limit the\n# validation samples to the words that have a low numeric ID, which by\n# construction are also the most frequent.\nvalid_size = 16 # Random set of words to evaluate similarity on.\nvalid_window = 100 # Only pick dev samples in the head of the distribution.\nvalid_examples = np.random.choice(valid_window, valid_size, replace=False)\nnum_sampled = 64 # Number of negative examples to sample.\n\nlearning_rate = 0.01\n\nreset_graph()\n\n# Input data.\ntrain_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])\nvalid_dataset = tf.constant(valid_examples, dtype=tf.int32)\n\nvocabulary_size = 50000\nembedding_size = 150\n\n# Look up embeddings for inputs.\ninit_embeds = tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0)\nembeddings = tf.Variable(init_embeds)\n\ntrain_inputs = tf.placeholder(tf.int32, shape=[None])\nembed = tf.nn.embedding_lookup(embeddings, train_inputs)\n\n# Construct the variables for the NCE loss\nnce_weights = tf.Variable(\n tf.truncated_normal([vocabulary_size, embedding_size],\n stddev=1.0 / 
np.sqrt(embedding_size)))\nnce_biases = tf.Variable(tf.zeros([vocabulary_size]))\n\n# Compute the average NCE loss for the batch.\n# tf.nce_loss automatically draws a new sample of the negative labels each\n# time we evaluate the loss.\nloss = tf.reduce_mean(\n tf.nn.nce_loss(nce_weights, nce_biases, train_labels, embed,\n num_sampled, vocabulary_size))\n\n# Construct the Adam optimizer\noptimizer = tf.train.AdamOptimizer(learning_rate)\ntraining_op = optimizer.minimize(loss)\n\n# Compute the cosine similarity between minibatch examples and all embeddings.\nnorm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), axis=1, keepdims=True))\nnormalized_embeddings = embeddings / norm\nvalid_embeddings = tf.nn.embedding_lookup(normalized_embeddings, valid_dataset)\nsimilarity = tf.matmul(valid_embeddings, normalized_embeddings, transpose_b=True)\n\n# Add variable initializer.\ninit = tf.global_variables_initializer()", "Train the model", "num_steps = 10001\n\nwith tf.Session() as session:\n init.run()\n\n average_loss = 0\n for step in range(num_steps):\n print(\"\\rIteration: {}\".format(step), end=\"\\t\")\n batch_inputs, batch_labels = generate_batch(batch_size, num_skips, skip_window)\n feed_dict = {train_inputs : batch_inputs, train_labels : batch_labels}\n\n # We perform one update step by evaluating the training op (including it\n # in the list of returned values for session.run()\n _, loss_val = session.run([training_op, loss], feed_dict=feed_dict)\n average_loss += loss_val\n\n if step % 2000 == 0:\n if step > 0:\n average_loss /= 2000\n # The average loss is an estimate of the loss over the last 2000 batches.\n print(\"Average loss at step \", step, \": \", average_loss)\n average_loss = 0\n\n # Note that this is expensive (~20% slowdown if computed every 500 steps)\n if step % 10000 == 0:\n sim = similarity.eval()\n for i in range(valid_size):\n valid_word = vocabulary[valid_examples[i]]\n top_k = 8 # number of nearest neighbors\n nearest = (-sim[i, 
:]).argsort()[1:top_k+1]\n log_str = \"Nearest to %s:\" % valid_word\n for k in range(top_k):\n close_word = vocabulary[nearest[k]]\n log_str = \"%s %s,\" % (log_str, close_word)\n print(log_str)\n\n final_embeddings = normalized_embeddings.eval()", "Let's save the final embeddings (of course you can use a TensorFlow Saver if you prefer):", "np.save(\"./my_final_embeddings.npy\", final_embeddings)", "Plot the embeddings", "def plot_with_labels(low_dim_embs, labels):\n assert low_dim_embs.shape[0] >= len(labels), \"More labels than embeddings\"\n plt.figure(figsize=(18, 18)) #in inches\n for i, label in enumerate(labels):\n x, y = low_dim_embs[i,:]\n plt.scatter(x, y)\n plt.annotate(label,\n xy=(x, y),\n xytext=(5, 2),\n textcoords='offset points',\n ha='right',\n va='bottom')\n\nfrom sklearn.manifold import TSNE\n\ntsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)\nplot_only = 500\nlow_dim_embs = tsne.fit_transform(final_embeddings[:plot_only,:])\nlabels = [vocabulary[i] for i in range(plot_only)]\nplot_with_labels(low_dim_embs, labels)", "Machine Translation\nThe basic_rnn_seq2seq() function creates a simple Encoder/Decoder model: it first runs an RNN to encode encoder_inputs into a state vector, then runs a decoder initialized with the last encoder state on decoder_inputs. 
Encoder and decoder use the same RNN cell type but they don't share parameters.", "import tensorflow as tf\nreset_graph()\n\nn_steps = 50\nn_neurons = 200\nn_layers = 3\nnum_encoder_symbols = 20000\nnum_decoder_symbols = 20000\nembedding_size = 150\nlearning_rate = 0.01\n\nX = tf.placeholder(tf.int32, [None, n_steps]) # English sentences\nY = tf.placeholder(tf.int32, [None, n_steps]) # French translations\nW = tf.placeholder(tf.float32, [None, n_steps - 1, 1])\nY_input = Y[:, :-1]\nY_target = Y[:, 1:]\n\nencoder_inputs = tf.unstack(tf.transpose(X)) # list of 1D tensors\ndecoder_inputs = tf.unstack(tf.transpose(Y_input)) # list of 1D tensors\n\nlstm_cells = [tf.nn.rnn_cell.BasicLSTMCell(num_units=n_neurons)\n for layer in range(n_layers)]\ncell = tf.nn.rnn_cell.MultiRNNCell(lstm_cells)\n\noutput_seqs, states = tf.contrib.legacy_seq2seq.embedding_rnn_seq2seq(\n encoder_inputs,\n decoder_inputs,\n cell,\n num_encoder_symbols,\n num_decoder_symbols,\n embedding_size)\n\nlogits = tf.transpose(tf.unstack(output_seqs), perm=[1, 0, 2])\n\nlogits_flat = tf.reshape(logits, [-1, num_decoder_symbols])\nY_target_flat = tf.reshape(Y_target, [-1])\nW_flat = tf.reshape(W, [-1])\nxentropy = W_flat * tf.nn.sparse_softmax_cross_entropy_with_logits(labels=Y_target_flat, logits=logits_flat)\nloss = tf.reduce_mean(xentropy)\noptimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)\ntraining_op = optimizer.minimize(loss)\n\ninit = tf.global_variables_initializer()", "Exercise solutions\n1. to 6.\nSee Appendix A.\n7. Embedded Reber Grammars\nFirst we need to build a function that generates strings based on a grammar. The grammar will be represented as a list of possible transitions for each state. 
A transition specifies the string to output (or a grammar to generate it) and the next state.", "np.random.seed(42)\n\ndefault_reber_grammar = [\n [(\"B\", 1)], # (state 0) =B=>(state 1)\n [(\"T\", 2), (\"P\", 3)], # (state 1) =T=>(state 2) or =P=>(state 3)\n [(\"S\", 2), (\"X\", 4)], # (state 2) =S=>(state 2) or =X=>(state 4)\n [(\"T\", 3), (\"V\", 5)], # and so on...\n [(\"X\", 3), (\"S\", 6)],\n [(\"P\", 4), (\"V\", 6)],\n [(\"E\", None)]] # (state 6) =E=>(terminal state)\n\nembedded_reber_grammar = [\n [(\"B\", 1)],\n [(\"T\", 2), (\"P\", 3)],\n [(default_reber_grammar, 4)],\n [(default_reber_grammar, 5)],\n [(\"T\", 6)],\n [(\"P\", 6)],\n [(\"E\", None)]]\n\ndef generate_string(grammar):\n state = 0\n output = []\n while state is not None:\n index = np.random.randint(len(grammar[state]))\n production, state = grammar[state][index]\n if isinstance(production, list):\n production = generate_string(grammar=production)\n output.append(production)\n return \"\".join(output)", "Let's generate a few strings based on the default Reber grammar:", "for _ in range(25):\n print(generate_string(default_reber_grammar), end=\" \")", "Looks good. Now let's generate a few strings based on the embedded Reber grammar:", "for _ in range(25):\n print(generate_string(embedded_reber_grammar), end=\" \")", "Okay, now we need a function to generate strings that do not respect the grammar. 
We could generate a random string, but the task would be a bit too easy, so instead we will generate a string that respects the grammar, and we will corrupt it by changing just one character:", "def generate_corrupted_string(grammar, chars=\"BEPSTVX\"):\n good_string = generate_string(grammar)\n index = np.random.randint(len(good_string))\n good_char = good_string[index]\n bad_char = np.random.choice(sorted(set(chars) - set(good_char)))\n return good_string[:index] + bad_char + good_string[index + 1:]", "Let's look at a few corrupted strings:", "for _ in range(25):\n print(generate_corrupted_string(embedded_reber_grammar), end=\" \")", "It's not possible to feed a string directly to an RNN: we need to convert it to a sequence of vectors, first. Each vector will represent a single letter, using a one-hot encoding. For example, the letter \"B\" will be represented as the vector [1, 0, 0, 0, 0, 0, 0], the letter E will be represented as [0, 1, 0, 0, 0, 0, 0] and so on. Let's write a function that converts a string to a sequence of such one-hot vectors. 
Note that if the string is shorter than n_steps, it will be padded with zero vectors (later, we will tell TensorFlow how long each string actually is using the sequence_length parameter).", "def string_to_one_hot_vectors(string, n_steps, chars=\"BEPSTVX\"):\n char_to_index = {char: index for index, char in enumerate(chars)}\n output = np.zeros((n_steps, len(chars)), dtype=np.int32)\n for index, char in enumerate(string):\n output[index, char_to_index[char]] = 1.\n return output\n\nstring_to_one_hot_vectors(\"BTBTXSETE\", 12)", "We can now generate the dataset, with 50% good strings, and 50% bad strings:", "def generate_dataset(size):\n good_strings = [generate_string(embedded_reber_grammar)\n for _ in range(size // 2)]\n bad_strings = [generate_corrupted_string(embedded_reber_grammar)\n for _ in range(size - size // 2)]\n all_strings = good_strings + bad_strings\n n_steps = max([len(string) for string in all_strings])\n X = np.array([string_to_one_hot_vectors(string, n_steps)\n for string in all_strings])\n seq_length = np.array([len(string) for string in all_strings])\n y = np.array([[1] for _ in range(len(good_strings))] +\n [[0] for _ in range(len(bad_strings))])\n rnd_idx = np.random.permutation(size)\n return X[rnd_idx], seq_length[rnd_idx], y[rnd_idx]\n\nX_train, l_train, y_train = generate_dataset(10000)", "Let's take a look at the first training instance:", "X_train[0]", "It's padded with a lot of zeros because the longest string in the dataset is that long. How long is this particular string?", "l_train[0]", "What class is it?", "y_train[0]", "Perfect! We are ready to create the RNN to identify good strings. 
We build a sequence classifier very similar to the one we built earlier to classify MNIST images, with two main differences:\n* First, the input strings have variable length, so we need to specify the sequence_length when calling the dynamic_rnn() function.\n* Second, this is a binary classifier, so we only need one output neuron that will output, for each input string, the estimated log probability that it is a good string. For multiclass classification, we used sparse_softmax_cross_entropy_with_logits() but for binary classification we use sigmoid_cross_entropy_with_logits().", "reset_graph()\n\npossible_chars = \"BEPSTVX\"\nn_inputs = len(possible_chars)\nn_neurons = 30\nn_outputs = 1\n\nlearning_rate = 0.02\nmomentum = 0.95\n\nX = tf.placeholder(tf.float32, [None, None, n_inputs], name=\"X\")\nseq_length = tf.placeholder(tf.int32, [None], name=\"seq_length\")\ny = tf.placeholder(tf.float32, [None, 1], name=\"y\")\n\ngru_cell = tf.nn.rnn_cell.GRUCell(num_units=n_neurons)\noutputs, states = tf.nn.dynamic_rnn(gru_cell, X, dtype=tf.float32,\n sequence_length=seq_length)\n\nlogits = tf.layers.dense(states, n_outputs, name=\"logits\")\ny_pred = tf.cast(tf.greater(logits, 0.), tf.float32, name=\"y_pred\")\ny_proba = tf.nn.sigmoid(logits, name=\"y_proba\")\n\nxentropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits)\nloss = tf.reduce_mean(xentropy, name=\"loss\")\noptimizer = tf.train.MomentumOptimizer(learning_rate=learning_rate,\n momentum=momentum,\n use_nesterov=True)\ntraining_op = optimizer.minimize(loss)\n\ncorrect = tf.equal(y_pred, y, name=\"correct\")\naccuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name=\"accuracy\")\n\ninit = tf.global_variables_initializer()\nsaver = tf.train.Saver()", "Now let's generate a validation set so we can track progress during training:", "X_val, l_val, y_val = generate_dataset(5000)\n\nn_epochs = 50\nbatch_size = 50\n\nwith tf.Session() as sess:\n init.run()\n for epoch in range(n_epochs):\n X_batches = 
np.array_split(X_train, len(X_train) // batch_size)\n l_batches = np.array_split(l_train, len(l_train) // batch_size)\n y_batches = np.array_split(y_train, len(y_train) // batch_size)\n for X_batch, l_batch, y_batch in zip(X_batches, l_batches, y_batches):\n loss_val, _ = sess.run(\n [loss, training_op],\n feed_dict={X: X_batch, seq_length: l_batch, y: y_batch})\n acc_train = accuracy.eval(feed_dict={X: X_batch, seq_length: l_batch, y: y_batch})\n acc_val = accuracy.eval(feed_dict={X: X_val, seq_length: l_val, y: y_val})\n print(\"{:4d} Train loss: {:.4f}, accuracy: {:.2f}% Validation accuracy: {:.2f}%\".format(\n epoch, loss_val, 100 * acc_train, 100 * acc_val))\n saver.save(sess, \"./my_reber_classifier\")", "Now let's test our RNN on two tricky strings: the first one is bad while the second one is good. They only differ by the second to last character. If the RNN gets this right, it shows that it managed to notice the pattern that the second letter should always be equal to the second to last letter. That requires a fairly long short-term memory (which is the reason why we used a GRU cell).", "test_strings = [\n \"BPBTSSSSSSSXXTTVPXVPXTTTTTVVETE\",\n \"BPBTSSSSSSSXXTTVPXVPXTTTTTVVEPE\"]\nl_test = np.array([len(s) for s in test_strings])\nmax_length = l_test.max()\nX_test = [string_to_one_hot_vectors(s, n_steps=max_length)\n for s in test_strings]\n\nwith tf.Session() as sess:\n saver.restore(sess, \"./my_reber_classifier\")\n y_proba_val = y_proba.eval(feed_dict={X: X_test, seq_length: l_test})\n\nprint()\nprint(\"Estimated probability that these are Reber strings:\")\nfor index, string in enumerate(test_strings):\n print(\"{}: {:.2f}%\".format(string, 100 * y_proba_val[index][0]))", "Ta-da! It worked fine. The RNN found the correct answers with high confidence. :)\n8. and 9.\nComing soon..." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
phoebe-project/phoebe2-docs
2.3/tutorials/settings.ipynb
gpl-3.0
[ "Advanced: Settings\nThe Bundle also contains a few Parameters that provide settings for that Bundle. Note that these are not system-wide and only apply to the current Bundle. They are however maintained when saving and loading a Bundle.\nSetup\nLet's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).", "#!pip install -I \"phoebe>=2.3,<2.4\"", "As always, let's do imports and initialize a logger and a new Bundle.", "import phoebe\nfrom phoebe import u # units\n\nlogger = phoebe.logger()\n\nb = phoebe.default_binary()", "Accessing Settings\nSettings are found with their own context in the Bundle and can be accessed through the get_setting method", "b.get_setting()", "or via filtering/twig access", "b['setting']", "and can be set as any other Parameter in the Bundle\nAvailable Settings\nNow let's look at each of the available settings and what they do\nphoebe_version\nphoebe_version is a read-only parameter in the settings to store the version of PHOEBE used.\ndict_set_all\ndict_set_all is a BooleanParameter (defaults to False) that controls whether attempting to set a value to a ParameterSet via dictionary access will set all the values in that ParameterSet (if True) or raise an error (if False)", "b['dict_set_all@setting']\n\nb['teff@component']", "In our default binary there are temperature ('teff') parameters for each of the components ('primary' and 'secondary'). If we were to do:\nb['teff@component'] = 6000\nthis would raise an error. 
Under-the-hood, this is simply calling:\nb.set_value('teff@component', 6000)\nwhich of course would also raise an error.\nIn order to set both temperatures to 4000, you would either have to loop over the components or call the set_value_all method:", "b.set_value_all('teff@component', 4000)\nprint(b['value@teff@primary@component'], b['value@teff@secondary@component'])", "If you want dictionary access to use set_value_all instead of set_value, you can enable this parameter", "b['dict_set_all@setting'] = True\nb['teff@component'] = 8000\nprint(b['value@teff@primary@component'], b['value@teff@secondary@component'])", "Now let's disable this so it doesn't confuse us while looking at the other options", "b.set_value_all('teff@component', 6000)\nb['dict_set_all@setting'] = False", "dict_filter\ndict_filter is a Parameter that accepts a dictionary. This dictionary will then always be sent to the filter call which is done under-the-hood during dictionary access.", "b['incl']", "In our default binary, there are several inclination parameters - one for each component ('primary', 'secondary', 'binary') and one with the constraint context (to keep the inclinations aligned).\nThis can be inconvenient... if you want to set the value of the binary's inclination, you must always provide extra information (like '@component').\nInstead, we can always have the dictionary access search in the component context by doing the following", "b['dict_filter@setting'] = {'context': 'component'}\n\nb['incl']", "Now we no longer see the constraint parameters.\nAll parameters are always accessible with method access:", "b.filter(qualifier='incl')", "Now let's reset this option... 
keeping in mind that we no longer have access to the 'setting' context through twig access, we'll have to use methods to clear the dict_filter", "b.set_value('dict_filter@setting', {})", "run_checks_compute (/figure/solver/solution)\nThe run_checks_compute option allows setting the default compute option(s) sent to b.run_checks, including warnings in the logger raised by interactive checks (see phoebe.interactive_checks_on).\nSimilar options also exist for checks at the figure, solver, and solution level.", "b['run_checks_compute@setting']\n\nb.add_dataset('lc')\nb.add_compute('legacy')\nprint(b.run_checks())\n\nb['run_checks_compute@setting'] = ['phoebe01']\n\nprint(b.run_checks())", "auto_add_figure, auto_remove_figure\nThe auto_add_figure and auto_remove_figure determine whether new figures are automatically added to the Bundle when new datasets, distributions, etc are added. This is False by default within Python, but True by default within the UI clients.", "b['auto_add_figure']\n\nb['auto_add_figure'].description\n\nb['auto_remove_figure']\n\nb['auto_remove_figure'].description", "web_client, web_client_url\nThe web_client and web_client_url settings determine whether the client is opened in a web-browser or with the installed desktop client whenever calling b.ui or b.ui_figures. For more information, see the UI from Jupyter tutorial.", "b['web_client']\n\nb['web_client'].description\n\nb['web_client_url']\n\nb['web_client_url'].description" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
kit-cel/lecture-examples
nt1/vorlesung/9_mimo/mimo.ipynb
gpl-2.0
[ "Content and Objectives\n\nShow several aspects of MIMO\nCapacity is estimated by approximating expectation by the weak law of large numbers and averaging along multiple channel realizations\nNumber of non-zero singular values is characterized when channel coefficients H_ij are circularly symmetric Gaussian\nSymbols at receiver are shown with and without post-processing (zero-forcing, MMSE)\n\nImport", "# importing\nimport numpy as np\n\nimport matplotlib.pyplot as plt\nimport matplotlib\n\n# showing figures inline\n%matplotlib inline\n\n# plotting options \nfont = {'size' : 30}\nplt.rc('font', **font)\nplt.rc('text', usetex=matplotlib.checkdep_usetex(True))\n\nmatplotlib.rc('figure', figsize=(18, 8))", "Capacity of MIMO Systems\nDetermining the capacity of MIMO by estimating capacity using the law of large numbers (\"time averaging approximates expectation\").\nParameters", "# number of transmit and receive antennas\n# vector input --> pairwise combining\nN_T = [ 1, 2, 2, 4, 4 ] \nN_R = [ 1, 2, 4, 4, 32 ]\n\nassert len( N_T ) == len( N_R ), 'Number of transmit and receive antenna pairs has to be equal!'\n\n# snr in dB\nsnr_db = np.arange( 0, 110, 10 ) \nsnr = 10**( snr_db / 10 )\n\n# number of realizations used to approximate expectation\nN_real = 100", "Approximating Expectation", "# initialize capacity array\nC = np.zeros( ( len(N_T), len(snr_db) ) )\n\n# loop for snr values\nfor ind_snr, val_snr in enumerate( snr ):\n \n # loop for antenna combinations\n for ind_n in np.arange( len( N_T ) ):\n\n # read transmit and receive antennas\n n_T = N_T[ ind_n ]\n n_R = N_R[ ind_n ]\n\n # instantaneous capacity; collecting along several realizations\n C_inst = []\n\n # loop for realizations\n for _n_real in np.arange( N_real ):\n\n # sample channel matrix with i.i.d. 
CN(0,1) entries\n H = np.matrix( 1 / np.sqrt(2) * ( np.random.randn( n_R, n_T ) + 1j * np.random.randn( n_R, n_T ) ) )\n\n # get matrix in \"log-det formula\" and determine instantaneous capacity\n # real part is applied since det(A) is always real for hermitian A (which is true in our scenario)\n det_arg = np.eye( n_R, dtype=complex ) + val_snr / n_T * H @ H.H\n\n C_inst.append( np.log2( np.real( np.linalg.det( det_arg ) ) ) )\n \n \n C[ ind_n, ind_snr ] = np.average( C_inst )", "Plotting Capacity", "# combine antenna constellations\nN = zip( N_T, N_R )\n \nfor ind_n in np.arange( len( N_T ) ):\n plt.plot( snr_db, C[ ind_n, : ], linewidth=2.0, label='$N_T=${}, $N_R$={}'.format( N_T[ind_n], N_R[ind_n] ) )\nplt.xlabel( 'SNR (dB)' )\nplt.ylabel( '$\\\\hat{C}$ (bit/channel use)' )\n\nplt.grid( True )\nplt.legend( loc = 'upper left' )\nplt.autoscale(enable=True, tight=True)\nplt.savefig('mimo_capacity.pdf',bbox_inches='tight')", "Non-Zero Singular Values of MIMO Systems\nThe following simulation shows the number of non-zero singular values in MIMO systems.\nNOTE: Values < 0.25 are assumed to be equal to zero, thereby over-estimating zero singular values\nFinding Number of non-zero Singular Values", "# specific constellation (8,8)\nN_T = 8\nN_R = 8\n\n# number of realizations for histogram\nN_real = 1000\n\n# deviation for counting singular values as zero\ndev_sv = .25\n\n# initialize empty array\nnumb_sv_collect = np.zeros( N_real )\n\n# loop for realizations\nfor ind_n, val_n in enumerate( np.arange( N_real) ):\n\n # generate channel matrix as i.i.d. 
CN(0,1)\n H = np.matrix( 1 / np.sqrt( 2 ) * ( np.random.randn( N_R, N_T ) + 1j * np.random.randn( N_R, N_T ) ) )\n\n L, X = np.linalg.eig( np.dot( H, H.H ) )\n\n numb_sv_collect[ ind_n ] = np.size( np.where( np.abs( L ) > dev_sv ) )\n", "Plotting Histogram of non-zero Singular Values", "plt.hist( numb_sv_collect, N_T, range=[0, N_T ], density = True, align = 'right', label='$N_{{T}}=${}, $N_{{R}}=${}'.format( N_T, N_R ) , color='#009682' )\n\nplt.xlabel('$n$')\nplt.ylabel('$\\\\hat{P}(R=n)$')\nplt.grid()\nplt.legend(loc='upper left')\nplt.xlim( ( .5, N_T +.5) )\nplt.ylim( (0, 1) )\nplt.savefig('histogram_rank.pdf',bbox_inches='tight')", "MIMO Systems: Superposition of Systems\nShowing that, by construction, symbols are overlapping, thereby generating chaotic-looking constellation diagrams.\nEXERCISE: Can you reason which values will be generated? (Hint: Use matrix-vector notation and consider r_1.)\nShowing Effects of Superposition of Symbols at the Receiver without Detection", "# snr in dB\nsnr_db = 100\nsigma2 = 10**( - snr_db / 10 )\n\n# specify constellation\nN_T = 4\nN_R = 4\n\n# constellation using 4-qam\nconstellation_points = [ 1+1j, -1+1j, -1-1j, 1-1j ]\nconstellation_points /= np.linalg.norm( constellation_points ) / np.sqrt( len( constellation_points ) )\n\n# number of symbols for receiver constellation diagram\nN_syms_mimo = 5\n\n# sample channel matrix as i.i.d. 
CN(0,1)\nH = 1/np.sqrt(2) * ( np.random.randn(N_R,N_T) + 1j * np.random.randn(N_R,N_T) )\n\n# loop for mimo symbols\nfor _n in np.arange( N_syms_mimo ):\n \n # sample input vector\n s = np.random.choice( constellation_points, N_T )\n \n # sample noise vector\n noise = np.sqrt( sigma2 / 2 ) * ( np.random.randn(N_R) + 1j * np.random.randn(N_R) )\n\n # determine receive vector\n r = np.dot( H, s ) + noise\n \n # plot receive symbols\n plt.plot( np.real(r), np.imag(r), 'x', ms='12', mew='4')\n\n# replot last points to apply legend\nplt.plot( np.real(r), np.imag(r), 'x', ms='12', mew='4', label='$\mathbf{r}$') \n \n# plot transmit symbols for illustration\nplt.plot( np.real(constellation_points), np.imag(constellation_points), 'o', ms='12', mew='4', label='$\mathbf{s}$', c=(0,0,0))\n\nplt.grid( True )\nmax_ax = np.ceil(np.max([np.max(np.abs(np.real(r))), np.max(np.abs(np.imag(r)))]))\nplt.xlabel('$\mathrm{Re}\{\mathbf{r}\}$')\nplt.ylabel('$\mathrm{Im}\{\mathbf{r}\}$') \n\nplt.title('MIMO receive symbols without processing')\nplt.gca().set_aspect('equal', adjustable='box')\nplt.xlim( (-max_ax, max_ax) )\nplt.ylim( (-max_ax, max_ax) )\nplt.legend( loc='upper left')\n#plt.savefig('mimo_symbols_snr_100.pdf',bbox_inches='tight')", "Now Applying Zero-Forcing and MMSE", "# get detector matrix\nH = np.matrix( H )\n\nH_zf = np.linalg.pinv( H )\nH_mmse = np.linalg.pinv( H.H @ H + N_T * 10**( - snr_db / 10 ) * np.eye( N_T ) ) @ H.H\n\n\n# loop for mimo symbols\nfor _n in np.arange( N_syms_mimo ):\n \n # sample input vector\n s = np.random.choice( constellation_points, N_T )\n \n # sample noise vector\n noise = np.sqrt( sigma2 / 2 ) * ( np.random.randn(N_R) + 1j * np.random.randn(N_R) )\n\n # determine receive vector\n r = ( np.dot( H, s ) + noise ).getA1()\n \n # apply detection matrices (zero-forcing and MMSE)\n y_zf = np.dot( H_zf, r ).getA1()\n y_mmse = np.dot( H_mmse, r ).getA1()\n \n \n # plot receive symbols\n plt.plot( np.real( r ), np.imag( r ), 'x', ms='18', mew='4')\n plt.plot( np.real( y_zf ), np.imag( y_zf ), 'o', ms='12', mew='4')\n plt.plot( np.real( y_mmse ), np.imag( y_mmse ), 'D', ms='12', mew='4') \n\n\n\n# replot last points to apply legend\nplt.plot( np.real( r ), np.imag( r ), 'x', ms='18', mew='4', label='$\mathbf{r}$') \nplt.plot( np.real( y_zf ), np.imag( y_zf ), 'o', ms='12', mew='4', label='$\mathbf{r}_{{\mathrm{ZF}}}$')\nplt.plot( np.real( y_mmse ), np.imag( y_mmse ), 'D', ms='12', mew='4', label='$\mathbf{r}_{{\mathrm{MMSE}}}$')\n\nplt.grid( True )\nmax_ax = np.ceil(np.max([np.max(np.abs(np.real(r))), np.max(np.abs(np.imag(r)))]))\nplt.gca().set_aspect('equal', adjustable='box')\nplt.xlim( (-max_ax, max_ax) )\nplt.ylim( (-max_ax, max_ax) )\nplt.xlabel( '$\mathrm{Re}\{\mathbf{r}\}$' )\nplt.ylabel( '$\mathrm{Im}\{\mathbf{r}\}$' ) \n\nplt.title('MIMO receive symbols after detection')\nplt.legend( loc = 'upper left' )\n#plt.savefig('mimo_symbols_detection_snr_100.pdf',bbox_inches='tight')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
canclini/bikesharing
worldfeed.ipynb
mit
[ "Visualizing origins and language of RSS feeds\nThe idea is to consume the big rss feed as a stream from Satori, enrich each message with Spark Structured Streaming and visualize it in a browser app.\nThe Satori stream gathers more than 6.5 million rss feeds. The stream is consumed as websocket with a python function and written into the kafka topic world-feed.\nSpark streaming is then used to read from the topic into a streaming dataframe which is enhanced with 2 informations:\n\nCountry of the server the message is comming from\nThe language of the message\n\nThe stream aggregates the message count by a 15 minute time window, the country_code and the language. All updates are written every 5 seconds into a second kafka topic enriched-feed.\nThe visualisation is done with a small node.js app consuming the kafka topic and sending the message via websockets to all connected browsers where a reactJS app is handling the update of charts.\nSystem Architecture\n\nImplementation\nsatori2Kafka\na generic function consuming a satori stream identified by channel, endpoint and appkey and writing each message to a kafka topic. 
The message is in the JSON format.\nThe stream is slowed down for development purposes.", "from __future__ import print_function\nimport socket\nimport json\nimport sys\nimport threading\nimport time\nfrom satori.rtm.client import make_client, SubscriptionMode\nfrom kafka import KafkaProducer\n\ndef satori2kafka(channel,endpoint, appkey, topic, delay=1):\n # Kafka\n producer = KafkaProducer(bootstrap_servers=['localhost:9092'])\n \n with make_client(\n endpoint=endpoint, appkey=appkey) as client:\n print('Connected!')\n mailbox = []\n got_message_event = threading.Event()\n\n class SubscriptionObserver(object):\n def on_subscription_data(self, data):\n for message in data['messages']:\n mailbox.append(message)\n got_message_event.set()\n\n subscription_observer = SubscriptionObserver()\n client.subscribe(\n channel,\n SubscriptionMode.SIMPLE,\n subscription_observer)\n\n if not got_message_event.wait(10):\n print(\"Timeout while waiting for a message\")\n sys.exit(1)\n\n while True:\n # send and remove the messages gathered so far\n # (iterating mailbox directly would re-send old messages on every pass)\n while mailbox:\n message = mailbox.pop(0)\n msg = json.dumps(message, ensure_ascii=False)\n producer.send(topic, msg.encode())\n # do not send the messages too fast for development\n time.sleep(delay)", "helper functions for data enrichment\nLanguage Identification\nsimple usage of the python langid library to identify in which language the message is written. The function returns a 2 letter iso code for the identified language.", "import langid\n\ndef get_language_from_text(text):\n lang, prob = langid.classify(text)\n return lang", "Country Identification\nwith the extracted hostname from the url the ip address can be fetched using a nameserver lookup on the local machine. The geoip2 library allows a lookup with the IP address in the Maxmind geolocation database. 
The iso country code is returned by the function.", "from geolite2 import geolite2\nimport socket\nfrom urllib.parse import urlparse\n\n\ndef get_country_from_url(url):\n try:\n hostname = urlparse(url)[1]\n ip = socket.gethostbyname(hostname)\n result = geolite2.reader().get(ip)\n country_iso_code = result['country']['iso_code']\n except:\n country_iso_code = \"unknown\"\n finally:\n geolite2.close()\n return country_iso_code", "Spark Structured Streaming\nRead Stream\nwith spark structured streaming we connect to a kafka topic and continuously append each message to a dataframe. Each kafka record consists of a key, a value, and a timestamp.\nthe value contains our satori message in the JSON format. For further processing we apply the jsonSchema to the value field and create a new streaming dataframe where we keep the timestamp from the kafka record together with the satori structured message.", "from pyspark.sql.functions import *\nfrom pyspark.sql.types import *\n\n# Since we know the data format already, \n# let's define the schema to speed up processing \n# (no need for Spark to infer schema)\njsonSchema = StructType([ StructField(\"publishedTimestamp\", TimestampType(), True), \n StructField(\"url\", StringType(), True),\n StructField(\"feedURL\", StringType(), True),\n StructField(\"title\", StringType(), True),\n StructField(\"description\", StringType(), True)\n ])\n# define 'parsed' as a structured stream from the \n# kafka records in the topic 'world-feed'.\nparsed = (\n spark\n .readStream \n .format(\"kafka\")\n .option(\"kafka.bootstrap.servers\", \"localhost:9092\")\n .option(\"subscribe\", \"world-feed\")\n .load()\n # keep timestamp and the json from the field value in a new field 'parsed_value'\n .select(col(\"timestamp\"),from_json(col(\"value\").cast(\"string\"),jsonSchema).alias(\"parsed_value\"))\n)\n# print the current schema \nparsed.printSchema()\n\n# get rid of the struct 'parsed_value' and keep only the fields beneath.\nworldfeed = 
parsed.select(\"timestamp\",\"parsed_value.*\")\n\n# print the schema which is used for further processing\nworldfeed.printSchema()\n\n# show that the dataframe is streaming\nworldfeed.isStreaming", "data enrichment\nthe dataframe function withColumn() allows us to add a new column to a dataframe by applying a function to existing columns. For this, an existing function has to be converted to a UserDefinedFunction. This function can than be applied to a distributed dataframe.\nThe get_country_from_url() functions is too big to be serialized. It is therefore loaded from a library.\nBe aware that any library used in such a function has to be made available on the worker nodes executing the job.", "from pyspark.sql.types import StringType\nfrom pyspark.sql.functions import udf\n\n# as get_country_from_url can not be serialized it is loaded from a library. \n# Therefore worldfeed.location_lookup has to be installed on all worker nodes.\n\nfrom worldfeed.location_lookup import get_country_from_url\n\n# transform the helpers to UDFs.\nlanguage_classify_udf = udf(get_language_from_text, StringType())\nget_country_from_url_udf = udf(get_country_from_url, StringType())\n\n# create a new dataframe with the additional columns 'language' and 'server_country'\nenriched_df = (\n worldfeed\n .withColumn('language', language_classify_udf(worldfeed['description']))\n .withColumn('server_country', get_country_from_url_udf(worldfeed['feedURL']))\n)\n\n# print the new schema\nenriched_df.printSchema()", "start the streaming query\nbased on the enriched_df dataframe, a query is written which aggregates the data, reformats it into a kafka readable format and writes it to a kafka topic.", "spark.conf.set(\"spark.sql.shuffle.partitions\", \"2\") # keep the size of shuffles small\n\nquery = (\n enriched_df\n # watermarking allows to update passed timeframes with late arrivals\n # after 30 minutes the timeframe is frozen and can be removed from memory\n .withWatermark(\"timestamp\", \"30 
minutes\")\n \n # aggregation happens by server_country, the language and a 1 minute time window\n .groupBy(\n enriched_df.server_country,\n enriched_df.language, \n window(enriched_df.timestamp, \"1 minutes\"))\n # the messages are counted\n .count()\n \n # the query result is written to a kafka topic, \n # therefore the output has to consist of a 'key' and a 'value'\n # key: \n .select(to_json(struct(\"server_country\", \"window\")).alias(\"key\"),\n # value: (json format the mentioned fields)\n to_json(struct(\"window.start\",\"window.end\",\"server_country\", \"language\", \"count\")).alias(\"value\"))\n .writeStream\n # only write every 5 seconds\n .trigger(processingTime='5 seconds')\n\n # output to console for debug\n # .format(\"console\")\n\n # output to kafka \n .format(\"kafka\")\n .option(\"kafka.bootstrap.servers\", \"localhost:9092\")\n .option(\"topic\", \"enriched-feed\")\n .option(\"checkpointLocation\", \"./checkpoints\")\n # End kafka related output\n # only write the rows that were updated since the last update\n .outputMode(\"update\") \n .queryName(\"worldfeed\")\n .start()\n)", "start the satori2kafka stream", "channel = \"big-rss\"\nendpoint = \"wss://open-data.api.satori.com\"\nappkey = \"8e7f2BeFE8C8c6e8A4A41976a2dE5Fa9\"\ntopic = \"world-feed\"\n\nsatori2kafka(channel, endpoint, appkey, topic)\n# has to be manually cancelled", "start the node.js app\nmake sure to start the node.js app in visualize. Command: \nbash\nnode server/server.js\nthen point your browser to http://localhost:3001\nDevelopment notes\nThis node app is using browserify to compile the JS code for the browser. If changes in the JS- and JSX-files are made, the code needs to get compiled again.\nTo compile the JS-files automatically every time a change is made to a JS- or JSX-file, start gulp. Command: \nbash\ngulp\n\nhelpers\ncheck if there are any active streaming queries. query.stop() terminates the query. 
Only 1 query of the same instance can run simultaneously.", "spark.streams.active\nquery.stop()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
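The streaming query in the notebook record above counts messages per (server_country, language) pair within tumbling event-time windows before writing the counts to Kafka. As an editorial aside, the grouping logic can be sketched without Spark in plain Python; everything below (the `window_start` helper and the sample event tuples) is invented for illustration, and real Structured Streaming adds watermarking and incremental state handling on top of this:

```python
from collections import Counter
from datetime import datetime, timedelta

def window_start(ts, minutes):
    """Floor an event timestamp to the start of its tumbling window,
    mimicking the window(timestamp, "N minutes") grouping column."""
    return ts.replace(second=0, microsecond=0) - timedelta(minutes=ts.minute % minutes)

# (server_country, language, event time) triples standing in for enriched rows
events = [
    ("ch", "en", datetime(2017, 6, 1, 12, 0, 30)),
    ("ch", "en", datetime(2017, 6, 1, 12, 0, 55)),
    ("de", "de", datetime(2017, 6, 1, 12, 1, 10)),
]

# count per (country, language, window) key, like groupBy(...).count()
counts = Counter()
for country, lang, ts in events:
    counts[(country, lang, window_start(ts, 1))] += 1

print(sorted(counts.values()))  # -> [1, 2]
```

The two "ch"/"en" events share the 12:00 window key while the "de"/"de" event lands in the 12:01 window, which is exactly the shape of the rows the query above serializes to JSON.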
cloudmesh/book
notebooks/scikit-learn/scikit-learn-k-means.ipynb
apache-2.0
[ "Installation\nSource: ...\nScikit-learn requires:\nPython (&gt;= 2.6 or &gt;= 3.3),\nNumPy (&gt;= 1.6.1),\nSciPy (&gt;= 0.9).\n\nIf you already have a working installation of numpy and scipy, the easiest way to install scikit-learn is using pip:", "! pip install numpy\n\n! pip install scipy -U\n\n! pip install -U scikit-learn", "Import", "from time import time\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom sklearn import metrics\nfrom sklearn.cluster import KMeans\nfrom sklearn.datasets import load_digits\nfrom sklearn.decomposition import PCA\nfrom sklearn.preprocessing import scale", "Create samples", "np.random.seed(42)\n\ndigits = load_digits()\ndata = scale(digits.data)\n\nn_samples, n_features = data.shape\nn_digits = len(np.unique(digits.target))\nlabels = digits.target\n\nsample_size = 300\n\nprint(\"n_digits: %d, \\t n_samples %d, \\t n_features %d\"\n % (n_digits, n_samples, n_features))\n\n\nprint(79 * '_')\nprint('% 9s' % 'init'\n ' time inertia homo compl v-meas ARI AMI silhouette')\n\n\ndef bench_k_means(estimator, name, data):\n t0 = time()\n estimator.fit(data)\n print('% 9s %.2fs %i %.3f %.3f %.3f %.3f %.3f %.3f'\n % (name, (time() - t0), estimator.inertia_,\n metrics.homogeneity_score(labels, estimator.labels_),\n metrics.completeness_score(labels, estimator.labels_),\n metrics.v_measure_score(labels, estimator.labels_),\n metrics.adjusted_rand_score(labels, estimator.labels_),\n metrics.adjusted_mutual_info_score(labels, estimator.labels_),\n metrics.silhouette_score(data, estimator.labels_,\n metric='euclidean',\n sample_size=sample_size)))\n\nbench_k_means(KMeans(init='k-means++', n_clusters=n_digits, n_init=10),\n name=\"k-means++\", data=data)\n\nbench_k_means(KMeans(init='random', n_clusters=n_digits, n_init=10),\n name=\"random\", data=data)\n\n# in this case the seeding of the centers is deterministic, hence we run the\n# kmeans algorithm only once with n_init=1\npca = 
PCA(n_components=n_digits).fit(data)\nbench_k_means(KMeans(init=pca.components_, \n n_clusters=n_digits, n_init=1),\n name=\"PCA-based\",\n data=data)\nprint(79 * '_')", "Visualize", "reduced_data = PCA(n_components=2).fit_transform(data)\nkmeans = KMeans(init='k-means++', n_clusters=n_digits, n_init=10)\nkmeans.fit(reduced_data)\n\n# Step size of the mesh. Decrease to increase the quality of the VQ.\nh = .02 # point in the mesh [x_min, x_max]x[y_min, y_max].\n\n# Plot the decision boundary. For that, we will assign a color to each\nx_min, x_max = reduced_data[:, 0].min() - 1, reduced_data[:, 0].max() + 1\ny_min, y_max = reduced_data[:, 1].min() - 1, reduced_data[:, 1].max() + 1\nxx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))\n\n# Obtain labels for each point in mesh. Use last trained model.\nZ = kmeans.predict(np.c_[xx.ravel(), yy.ravel()])\n\n# Put the result into a color plot\nZ = Z.reshape(xx.shape)\nplt.figure(1)\nplt.clf()\nplt.imshow(Z, interpolation='nearest',\n extent=(xx.min(), xx.max(), yy.min(), yy.max()),\n cmap=plt.cm.Paired,\n aspect='auto', origin='lower')\n\nplt.plot(reduced_data[:, 0], reduced_data[:, 1], 'k.', markersize=2)\n# Plot the centroids as a white X\ncentroids = kmeans.cluster_centers_\nplt.scatter(centroids[:, 0], centroids[:, 1],\n marker='x', s=169, linewidths=3,\n color='w', zorder=10)\nplt.title('K-means clustering on the digits dataset (PCA-reduced data)\\n'\n 'Centroids are marked with white cross')\nplt.xlim(x_min, x_max)\nplt.ylim(y_min, y_max)\nplt.xticks(())\nplt.yticks(())\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
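The bench_k_means harness in the record above treats KMeans as a black box. For orientation, the Lloyd iteration that fit() repeats can be written out in a few lines of plain Python; this is an illustrative sketch only, since scikit-learn's KMeans additionally performs k-means++ seeding, n_init restarts, and tolerance-based stopping:

```python
def kmeans_step(points, centroids):
    """One Lloyd iteration: assign every point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    k = len(centroids)
    clusters = [[] for _ in range(k)]
    for p in points:
        nearest = min(range(k),
                      key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centroids[j])))
        clusters[nearest].append(p)
    # empty clusters keep their previous centroid
    return [tuple(sum(axis) / len(pts) for axis in zip(*pts)) if pts else centroids[j]
            for j, pts in enumerate(clusters)]

points = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
centroids = kmeans_step(points, [(0.0, 0.0), (10.0, 10.0)])
print(centroids)  # [(0.0, 0.5), (10.0, 10.5)]
```

Iterating kmeans_step until the centroids stop moving is the core of Lloyd's algorithm; the inertia_ value the benchmark prints is the summed squared distance of each point to its final centroid.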
SKA-ScienceDataProcessor/crocodile
examples/notebooks/wtowers-predict.ipynb
apache-2.0
[ "Illustration of prediction using $w$-towers\nUsing $w$-stacking on sub-grids for predicting visibilities", "%matplotlib inline\n\nimport sys\nsys.path.append('../..')\n\nfrom matplotlib import pylab as plt\nfrom ipywidgets import interact\n\nimport itertools\nimport numpy\nimport numpy.linalg\nimport scipy\nimport scipy.special\nimport time\n\nfrom crocodile.synthesis import *\nfrom crocodile.simulate import *\nfrom util.visualize import *\nfrom arl.test_support import create_named_configuration, export_visibility_to_hdf5\nfrom arl.data_models import *", "Generate baseline coordinates for an observation with the VLA over 6 hours, with a visibility recorded every 10 minutes. The phase center is fixed at a declination of 45 degrees. We assume that the imaged sky stays at that position over the course of the observation.\nNote how this gives rise to fairly large $w$-values.", "vlas = create_named_configuration('VLAA')\nha_range = numpy.arange(numpy.radians(0),\n numpy.radians(90),\n numpy.radians(90 / 10))\ndec = numpy.radians(45)\nvobs = xyz_to_baselines(vlas.data['xyz'], ha_range, dec)\nwvl = 5\nuvw_in = vobs / wvl", "General parameters", "# Imaging parameterisation\ntheta = 0.125\nlam = 18*1024\nwstep = 10\nmargin = 100\nsubgrid_size = 256\nassert subgrid_size > margin\n\n# Scale for kernel size? This will make kernel size predictable, but reduce our\n# field of view (and therefore decrease image sharpness).\nscaleByDet = False\n\n# Use kernel without transformed l/m coordinates? This effectively boils down to\n# a second order approximation of the w kernel function phase. 
If scaleByDet\n# is set as well, this means we do everything with the exact kernel we would use at\n# the phase centre.\napproxKernel = False\n\ngrid_size = int(numpy.ceil(theta*lam))\nprint(\"Grid size: %dx%d\" % (grid_size, grid_size))\nsubgrid_count = numpy.ceil(grid_size / (subgrid_size - margin))\n\nprint(\"Subgrid count: %dx%d (margin overhead %.1f%%)\" % (subgrid_count,subgrid_count,\n 100*(subgrid_count*subgrid_size/grid_size)**2-100))\n\n# Show theoretically \"optimal\" subgrid count for comparison\nif margin > 0:\n def copt(m):\n return -m - 2 * m * scipy.special.lambertw(-1 / 2 / m / numpy.sqrt(numpy.e), -1).real\n print(\"Optimal subgrid size for margin %d: %dx%d\" % (margin, copt(margin), copt(margin)))", "Determine Transformation\nNow we assume that we want to shift the image centre somewhere else. We transform the image at the same time in a way that optimises w-shape:", "dl = 0\ndm = 0\ndn = numpy.sqrt(1 - dl**2 - dm**2)\n\nprint(\"Elevation: %.f deg (if phase centre is zenith)\\n\" % \\\n numpy.rad2deg(numpy.arcsin(dn)))\n\nT = kernel_transform(dl, dm)\nTdet = (numpy.sqrt(numpy.linalg.det(T)) if scaleByDet else 1)\n\nT *= Tdet\nprint(\"Transformation:\\n %s [l, m] + [%f, %f]\" % (T, dl, dm))\nprint(\"Determinant: \", numpy.linalg.det(T))", "Visualise where our transformed field-of-view sits in the original lm-space:", "plt.rcParams['figure.figsize'] = 8, 8\nax = plt.subplot(111)\ncoords = 0.01 * numpy.transpose(numpy.meshgrid(range(-3, 4), range(-3, 4)))\nfacet_centres = numpy.sin(2 * numpy.pi * 0.025 *\n numpy.vstack(numpy.transpose(numpy.meshgrid(range(-10, 11), range(-10, 11)))))\nfor dp in facet_centres:\n if dp[0]**2+dp[1]**2 >= 1: continue\n Tx = kernel_transform(*dp)\n if scaleByDet: Tx *= numpy.sqrt(numpy.linalg.det(Tx))\n xys = numpy.dot(coords, Tx) + dp\n plt.scatter(*numpy.transpose(xys),c='gray')\nxys = numpy.dot(coords, T) + numpy.array([dl, 
dm])\nplt.scatter(*numpy.transpose(xys),c='red')\nax.set_xlim(-1,1)\nax.set_ylim(-1,1)\nax.set_xlabel('l')\nax.set_ylabel('m')\nplt.show()", "Create visibilities\nWe place dots where the transformed field of view is going to end up.", "support = 10\noversampling = 16 * 1024\naaf = anti_aliasing_function(grid_size, 0, 1.3*support)\naaf_gcf = kernel_oversample(aaf, oversampling, support)\naaf_gcf /= numpy.sum(aaf_gcf[0])\n\n# Generate test pattern: Grid of points\nsources = []\nfor il, im in itertools.product(range(-3, 4), range(-3, 4)):\n sources.append((1, theta/8*il, theta/8*im))\n# Extra dot to mark upper-right corner\nsources.append((1, theta*0.25, theta*0.25))\n# Extra dot to mark upper-left corner\n#sources.append((1, theta*-0.35, theta*0.28))\nimport random\ngrid_relevant = grid_size // 8 * 7\nfor i in range(1):\n x = random.randint(0,grid_relevant) - grid_relevant//2\n y = random.randint(0,grid_relevant) - grid_relevant//2\n sources.append((1, y/lam, x/lam))\n\n# Make image and simulate visibilities for reference\nimage = numpy.zeros((grid_size, grid_size))\nfor intensity, l, m in sources:\n assert abs(int(m*lam) - m*lam) < 1e-13\n image[grid_size//2 + int(m*lam),\n grid_size//2 + int(l*lam)] += intensity\ndef simulate(uvw_in, antialias=False):\n vis_in = numpy.zeros(len(uvw_in), dtype=complex)\n for intensity, l, m in sources:\n p = numpy.dot([l, m], T) + numpy.array([dl, dm])\n if antialias:\n intensity /= scipy.special.pro_ang1(0, 0, support, (2-1/grid_size/4) * l / theta)[0]\n intensity /= scipy.special.pro_ang1(0, 0, support, (2-1/grid_size/4) * m / theta)[0]\n vis_in += intensity * simulate_point(uvw_in, *p)\n return vis_in\nvis_in = simulate(uvw_in)\n\ndef simulate_grid(N, uvw_mid, antialias):\n u,v = (N / theta) * coordinates2(N)\n w = numpy.zeros_like(u) # TODO\n uvw = numpy.transpose([u.flatten(),v.flatten(),w.flatten()]) + uvw_mid\n vis = simulate(uvw, antialias)\n return vis.reshape((N,N))\n\nplt.rcParams['figure.figsize'] = 14, 
14\nshow_image(image, \"image\", theta);", "We will now attempt to predict those visibilities using FFTs. We split the visibilities into a number of w-bins:", "# Determine weights (globally)\nwt = doweight(theta, lam, uvw_in, numpy.ones(len(uvw_in)))\n\n# Conjugate visibility. This does not change its meaning.\nwhere = uvw_in[:,1] < 0.0\nuvw = numpy.array(uvw_in)\nvis = numpy.array(vis_in)\nuvw[where] = -uvw_in[where]\nvis[where] = numpy.conj(vis_in[where])\n# Determine w-planes\nwplane = numpy.around(uvw[:,2] / wstep).astype(int)\nwplanes = numpy.arange(numpy.min(wplane), numpy.max(wplane)+1)", "Prepare for imaging\nFirst apply the image-space linear transformation by applying the inverse in visibility space. Then create the w-kernel and apply re-centering so kernels do not go out of bounds.", "# Apply visibility transformations (l' m') = T (l m) + (dl dm)\nvis = visibility_shift(uvw, vis, -dl,-dm)\nuvw = uvw_transform(uvw, numpy.linalg.inv(T))\n\n# Generate Fresnel pattern for shifting between two w-planes\n# As this is the same between all w-planes, we can share it\n# between the whole loop.\nif approxKernel:\n l,m = kernel_coordinates(subgrid_size, theta)\n if not scaleByDet:\n l /= numpy.sqrt(numpy.linalg.det(T))\n m /= numpy.sqrt(numpy.linalg.det(T))\nelse:\n l,m = kernel_coordinates(grid_size, theta, T=T, dl=dl, dm=dm)\nwkern = w_kernel_function(l, m, wstep)\n\n# Center kernels by moving the grid pattern in one direction and adding the opposite offset to visibilities\nif not approxKernel:\n wkern = kernel_recentre(wkern, theta, wstep, dl*Tdet, dm*Tdet)\nuvw = visibility_recentre(uvw, dl*Tdet, dm*Tdet)\n\nwkern = fft(extract_mid(ifft(wkern), subgrid_size))\n\n# Check kernel in grid space at maximum w to make sure that we managed to center it\nplt.rcParams['figure.figsize'] = 16, 8\nshow_grid(ifft(wkern**(numpy.max(uvw[:,2])/wstep)), \"wkern\", theta)\nshow_grid(ifft(wkern**(numpy.min(uvw[:,2])/wstep)), \"wkern\", theta)", "Predict\nSimply combines the 
principles of predict and w-towers: We shift the sub-grid to the appropriate $w$-value, then proceed to degrid from it.", "# FFT image to obtain the grid\ngrid = fft(image / numpy.outer(aaf, aaf))\nplt.rcParams['figure.figsize'] = 16, 12\n\n# What we are going to generate visibilities into\nvis_out = numpy.empty_like(vis)\n\n# Shall we do imaging alongside predict? Useful for checking results\nimage_as_well = False\nif image_as_well:\n grid_sum = numpy.zeros((grid_size, grid_size), dtype=complex)\n\nstart_time = time.time()\nubin = numpy.floor(uvw[:,0]*theta/(subgrid_size-margin)+0.5).astype(int)\nvbin = numpy.floor(uvw[:,1]*theta/(subgrid_size-margin)+0.5).astype(int)\nsrc = numpy.ndarray((len(vis), 0))\nfor ub in range(numpy.min(ubin), numpy.max(ubin)+1):\n for vb in range(numpy.min(vbin), numpy.max(vbin)+1):\n \n # Find visibilities\n bin_sel = numpy.logical_and(ubin == ub, vbin == vb)\n if not numpy.any(bin_sel):\n continue\n \n # Determine bin dimensions\n xy_min = (subgrid_size-margin) * numpy.array([ub, vb], dtype=int) - (subgrid_size-margin) // 2\n xy_max = (subgrid_size-margin) * numpy.array([ub+1, vb+1], dtype=int) - (subgrid_size-margin) // 2\n uv_mid = (xy_max + xy_min) // 2 / theta\n\n # Get sub-grid coordinates\n mid = int(lam*theta)//2\n x0, y0 = mid + xy_min.astype(int) - (margin + 1) // 2\n x1, y1 = mid + xy_max.astype(int) + margin // 2\n assert(x1 - x0 == subgrid_size and y1 - y0 == subgrid_size)\n x0b, y0b = numpy.amax([[x0, y0], [0,0]], axis=0)\n x1b, y1b = numpy.amin([[x1, y1], [grid_size,grid_size]], axis=0)\n\n # Get sub-grid in image space\n bin_grid = numpy.zeros((subgrid_size, subgrid_size), dtype=complex)\n bin_grid[y0b-y0:y1b-y0, x0b-x0:x1b-x0] = grid[y0b:y1b, x0b:x1b]\n bin_image = ifft(bin_grid)\n if image_as_well:\n bin_image_sum = numpy.zeros((subgrid_size, subgrid_size), dtype=complex)\n last_wp = 0\n for wp in wplanes:\n\n # Filter out visibilities for u/v-bin and w-plane\n slc = numpy.logical_and(bin_sel, wplane == wp)\n if not 
numpy.any(slc):\n continue\n \n # Bring image sum into this w-plane\n if last_wp != wp:\n if image_as_well:\n bin_image_sum *= wkern**(wp-last_wp)\n bin_image *= wkern**(wp-last_wp)\n last_wp = wp\n \n # Debugging\n if False:\n print(\"w =\", wp*wstep)\n show_grid(fft(bin_image), \"bin_grid\", theta)\n ref_grid = simulate_grid(subgrid_size, [uv_mid[0], uv_mid[1], wp*wstep], True)\n show_grid(ref_grid, \"ref_grid\", theta)\n show_grid(fft(bin_image) - ref_grid, \"diff_grid\", theta)\n\n # Grid relative to mid-point\n uvw_mid = numpy.hstack([uv_mid, [wp*wstep]])\n puvw = uvw[slc] - uvw_mid\n vis_out[slc] = conv_predict(theta, subgrid_size / theta, puvw, src, fft(bin_image), kv=aaf_gcf)\n vis_out[slc] *= w_kernel_function(dl, dm, puvw[:,2])\n \n # Use predicted visibilities for imaging, if requested\n if image_as_well:\n ivis = vis_out[slc] * wt[slc]\n # ivis = vis[slc] * wt[slc] # To use DFT-predicted visibilities\n pgrid = conv_imaging(theta, subgrid_size / theta, puvw, src, ivis, kv=aaf_gcf)\n bin_image_sum += ifft(pgrid)\n\n if image_as_well:\n # Transfer into w=0 plane, FFT image sum\n bin_image_sum /= wkern**last_wp\n bin_grid_out = fft(bin_image_sum)\n\n # Add to grid, keeping bounds in mind\n grid_sum[y0b:y1b, x0b:x1b] += \\\n bin_grid_out[y0b-y0:y1b-y0, x0b-x0:x1b-x0]\n\nplt.rcParams['figure.figsize'] = 16, 12\nprint(\"Done in %.1fs, %d vis/s\" % (time.time() - start_time, len(vis) / (time.time() - start_time)))\n\nerror = numpy.abs(vis - vis_out) / numpy.mean(numpy.abs(vis))\nerror_raw = (vis - vis_out) / numpy.mean(numpy.abs(vis))\nprint(\"Error: \", numpy.sqrt( numpy.mean( error**2 ) ) )\nif image_as_well:\n image_out = numpy.abs(ifft(grid_sum)) / numpy.outer(aaf, aaf)\n show_image(image_out, \"image\", theta)", "Errors\nThere are a couple of sources for accuracy loss here. The main worry for $w$-towers is subgrids: Due to the Fresnel pattern wrapping around the edges of the sub-grid we can get errors there for large $w$ values. 
Let us plot the error depending on visibility coordinate, and highlight these regions:", "wrap = (subgrid_size - margin) / theta\noffs = wrap / 2\nhighlight = (numpy.abs(numpy.mod(uvw[:,0], wrap) - offs) < 150) & (numpy.abs(uvw[:,2]) > 1000)\nhighlight2 = (numpy.abs(numpy.mod(uvw[:,1], wrap) - offs) < 150) & (numpy.abs(uvw[:,2]) > 1000)\nsets = [range(len(vis)), highlight, highlight2]\n\nfor a, axis in enumerate(['u','v','w']):\n plt.yscale('log')\n for s in sets:\n plt.scatter(uvw[s,a], error[s])\n if axis == 'u':\n plt.xticks((subgrid_size - margin)/theta*(0.5+numpy.arange(numpy.min(ubin), numpy.max(ubin)+1)))\n elif axis == 'v':\n plt.xticks((subgrid_size - margin)/theta*(0.5+numpy.arange(numpy.min(vbin), numpy.max(vbin)+1)))\n else:\n if len(wplanes) < 100:\n plt.xticks(wstep*wplanes)\n plt.xlabel(axis + ' [$\\lambda$]')\n plt.ylabel('error')\n plt.grid(True)\n plt.show()", "Clearly we have more outliers on the margins than otherwise. We can pull this out more by considering coordinates modulo the chunk size.\nWe can do the same thing into the $w$ direction to track how much accuracy we lose for visibilities that do not directly fall on a $w$ plane. This actually turns out to be the main villain here.", "for a, axis in enumerate(['u','v','w']):\n plt.yscale('log')\n if axis == 'w':\n wrap = wstep\n off = 0\n else:\n wrap = (subgrid_size - margin)/theta\n off = wrap/2\n for i, s in enumerate(sets):\n x = numpy.mod(uvw[s,a]+off, wrap)-off\n y = error[s]\n plt.scatter(x, y)\n plt.xlabel(axis + ' [$\\lambda$]')\n plt.ylabel('error')\n plt.grid(True)\n plt.show()", "Oversampling error\nSomething else we might want to check is the error depending on oversampling. However, we are oversampling very finely already, and the error from $w$ is a lot more pronounced, so there is not much to see here. 
But it is still a good idea to check:", "for a, axis in enumerate(['u','v']):\n plt.yscale('log')\n wrap = 1/theta/oversampling\n off = 0\n for s in sets:\n plt.scatter(numpy.mod(uvw[s,a]+off, wrap)-off, error[s])\n plt.xlim((-wrap/10,11*wrap/10))\n plt.xlabel(axis + ' [$\\lambda$]')\n plt.ylabel('error')\n plt.grid(True)\n plt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
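The sub-grid bookkeeping at the top of the w-towers record above (grid size from theta*lam, sub-grid count, and the quoted margin overhead) is plain arithmetic and can be cross-checked without the crocodile/ARL imports. A stdlib-only restatement with the notebook's parameters; the helper name is ours, not from the notebook:

```python
import math

def subgrid_layout(theta, lam, subgrid_size, margin):
    """Pixel grid size, sub-grids per axis, and the fractional area
    overhead caused by the overlapping sub-grid margins."""
    grid_size = math.ceil(theta * lam)
    count = math.ceil(grid_size / (subgrid_size - margin))
    overhead = (count * subgrid_size / grid_size) ** 2 - 1
    return grid_size, count, overhead

# theta = 0.125, lam = 18*1024, subgrid_size = 256, margin = 100 as in the notebook
grid_size, count, overhead = subgrid_layout(0.125, 18 * 1024, 256, 100)
print(grid_size, count, round(100 * overhead, 1))  # 2304 15 177.8
```

This matches the "margin overhead %.1f%%" the notebook prints: 15x15 sub-grids of 256 pixels cover a 2304-pixel grid, at the cost of about 177.8% extra gridded area.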
zaqwes8811/micro-apps
self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/Appendix-A-Installation.ipynb
mit
[ "Table of Contents\nInstallation", "from __future__ import division, print_function", "This book is written in Jupyter Notebook, a browser-based interactive Python environment that mixes Python, text, and math. I chose it because of the interactive features - I found Kalman filtering nearly impossible to learn until I started working in an interactive environment. It is difficult to form an intuition about many of the parameters until you can change them and immediately see the output. An interactive environment also allows you to play 'what if' scenarios. \"What if I set $\\mathbf{Q}$ to zero?\" It is trivial to find out with Jupyter Notebook.\nAnother reason I chose it is that most textbooks leave many things opaque. For example, there might be a beautiful plot next to some pseudocode. That plot was produced by software, but software that is not available to the reader. I want everything that went into producing this book to be available to you. How do you plot a covariance ellipse? You won't know if you read most books. With Jupyter Notebook all you have to do is look at the source code.\nEven if you choose to read the book online you will want Python and the SciPy stack installed so that you can write your own Kalman filters. There are many different ways to install these libraries, and I cannot cover them all, but I will cover a few typical scenarios.\nInstalling the SciPy Stack\nThis book requires IPython, Jupyter, NumPy, SciPy, SymPy, and Matplotlib. The SciPy stack of NumPy, SciPy, and Matplotlib depends on third-party Fortran and C code, and is not trivial to install from source code. The SciPy website strongly urges using a pre-built installation, and I concur with this advice.\nJupyter notebook is the software that allows you to run Python inside of the browser - the book is a collection of Jupyter notebooks. IPython provides the infrastructure for Jupyter and data visualization. 
NumPy and Scipy are packages which provide the linear algebra implementation that the filters use. Sympy performs symbolic math - I use it to find derivatives of algebraic equations. Finally, Matplotlib provides plotting capability. \nI use the Anaconda distribution from Continuum Analytics. This is an excellent distribution that combines all of the packages listed above, plus many others. IPython recommends this package to install Ipython. Installation is very straightforward, and it can be done alongside other Python installations you might already have on your machine. It is free to use. You may download it from here: http://continuum.io/downloads I strongly recommend using the latest Python 3 version that they provide. For now I support Python 2.7, but perhaps not much longer. \nThere are other choices for installing the SciPy stack. You can find instructions here: http://scipy.org/install.html It can be very cumbersome, and I do not support it or provide any instructions on how to do it.\nMany Linux distributions come with these packages pre-installed. However, they are often somewhat dated and they will need to be updated as the book depends on recent versions of all. Updating a specific Linux installation is beyond the scope of this book. An advantage of the Anaconda distribution is that it does not modify your local Python installation, so you can install it and not break your linux distribution. Some people have been tripped up by this. They install Anaconda, but the installed Python remains the default version and then the book's software doesn't run correctly.\nI do not run regression tests on old versions of these libraries. In fact, I know the code will not run on older versions (say, from 2014-2015). I do not want to spend my life doing tech support for a book, thus I put the burden on you to install a recent version of Python and the SciPy stack. \nYou will need Python 2.7 or later installed. 
Almost all of my work is done in Python 3.6, but I periodically test on 2.7. I do not promise any specific check in will work in 2.7 however. I use Python's from __future__ import ... statement to help with compatibility. For example, all prints need to use parenthesis. If you try to add, say, print x into the book your script will fail; you must write print(x) as in Python 3.X.\nPlease submit a bug report at the book's github repository if you have installed the latest Anaconda and something does not work - I will continue to ensure the book will run with the latest Anaconda release. I'm rather indifferent if the book will not run on an older installation. I'm sorry, but I just don't have time to provide support for everyone's different setups. Packages like jupyter notebook are evolving rapidly, and I cannot keep up with all the changes and remain backwards compatible as well. \nIf you need older versions of the software for other projects, note that Anaconda allows you to install multiple versions side-by-side. Documentation for this is here:\nhttps://conda.io/docs/user-guide/tasks/manage-python.html\nInstalling FilterPy\nFilterPy is a Python library that implements all of the filters used in this book, and quite a few others. Installation is easy using pip. Issue the following from the command prompt:\n pip install filterpy\n\nFilterPy is written by me, and the latest development version is always available at https://github.com/rlabbe/filterpy.\nDownloading and Running the Book\nThe book is stored in a github repository. From the command line type the following:\ngit clone --depth=1 https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python.git\n\nThis will create a directory named Kalman-and-Bayesian-Filters-in-Python. The depth parameter just gets you the latest version. 
Unless you need to see my entire commit history this is a lot faster and saves space.\nIf you do not have git installed, browse to https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python where you can download the book via your browser.\nNow, from the command prompt change to the directory that was just created, and then run Jupyter notebook:\ncd Kalman-and-Bayesian-Filters-in-Python\njupyter notebook\n\nA browser window should launch showing you all of the chapters in the book. Browse to the first chapter by clicking on it, then open the notebook in that subdirectory by clicking on the link.\nMore information about running the notebook can be found here:\nhttp://jupyter-notebook-beginner-guide.readthedocs.org/en/latest/execute.html\nCompanion Software\nCode that is specific to the book is stored with the book in the subdirectory ./kf_book. This code is in a state of flux; I do not wish to document it here yet. I do mention in the book when I use code from this directory, so it should not be a mystery.\nIn the kf_book subdirectory there are Python files with a name like xxx_internal.py. I use these to store functions that are useful for a specific chapter. This allows me to hide away Python code that is not particularly interesting to read - I may be generating a plot or chart, and I want you to focus on the contents of the chart, not the mechanics of how I generate that chart with Python. If you are curious as to the mechanics of that, just go and browse the source.\nSome chapters introduce functions that are useful for the rest of the book. Those functions are initially defined within the Notebook itself, but the code is also stored in a Python file that is imported if needed in later chapters. I do document when I do this where the function is first defined, but this is still a work in progress. I try to avoid this because then I always face the issue of code in the directory becoming out of sync with the code in the book. 
However, IPython Notebook does not give us a way to refer to code cells in other notebooks, so this is the only mechanism I know of to share functionality across notebooks.\nThere is an undocumented directory called experiments. This is where I write and test code prior to putting it in the book. There is some interesting stuff in there, and feel free to look at it. As the book evolves I plan to create examples and projects, and a lot of this material will end up there. Small experiments will eventually just be deleted. If you are just interested in reading the book you can safely ignore this directory. \nThe subdirectory ./kf_book contains a css file containing the style guide for the book. The default look and feel of IPython Notebook is rather plain. Work is being done on this. I have followed the examples set by books such as Probabilistic Programming and Bayesian Methods for Hackers. I have also been very influenced by Professor Lorena Barba's fantastic work, available here. I owe all of my look and feel to the work of these projects. \nUsing Jupyter Notebook\nA complete tutorial on Jupyter Notebook is beyond the scope of this book. Many are available online. In short, Python code is placed in cells. These are prefaced with text like In [1]:, and the code itself is in a boxed area. If you press CTRL-ENTER while focus is inside the box the code will run and the results will be displayed below the box. Like this:", "print(3+7.2)", "If you have this open in Jupyter Notebook now, go ahead and modify that code by changing the expression inside the print statement and pressing CTRL+ENTER. The output should be changed to reflect what you typed in the code cell.\nSymPy\nSymPy is a Python package for performing symbolic mathematics. The full scope of its abilities are beyond this book, but it can perform algebra, integrate and differentiate equations, find solutions to differential equations, and much more. 
For example, we use it to compute Jacobians of matrices and expected value integrals.\nFirst, a simple example. We will import SymPy, initialize its pretty print functionality (which will print equations using LaTeX). We will then declare a symbol for SymPy to use.", "import sympy\nsympy.init_printing(use_latex='mathjax')\n\nphi, x = sympy.symbols('\\phi, x')\nphi", "Notice how it prints the symbol phi using LaTeX. Now let's do some math. What is the derivative of $\\sqrt{\\phi}$?", "sympy.diff('sqrt(phi)')", "We can factor equations", "sympy.factor(phi**3 -phi**2 + phi - 1)", "and we can expand them.", "((phi+1)*(phi-4)).expand()", "You can evaluate an equation for specific values of its variables:", "w =x**2 -3*x +4\nprint(w.subs(x, 4))\nprint(w.subs(x, 12))", "You can also use strings for equations that use symbols that you have not defined:", "x = sympy.expand('(t+1)*2')\nx", "Now let's use SymPy to compute the Jacobian of a matrix. Given the function\n$$h=\\sqrt{(x^2 + z^2)}$$\nfind the Jacobian with respect to x, y, and z.", "x, y, z = sympy.symbols('x y z')\n\nH = sympy.Matrix([sympy.sqrt(x**2 + z**2)])\n\nstate = sympy.Matrix([x, y, z])\nH.jacobian(state)", "Now let's compute the discrete process noise matrix $\\mathbf Q$ given the continuous process noise matrix \n$$\\mathbf Q = \\Phi_s \\begin{bmatrix}0&0&0\\\\0&0&0\\\\0&0&1\\end{bmatrix}$$\nThe integral is \n$$\\mathbf Q = \\int_0^{\\Delta t} \\mathbf F(t)\\mathbf Q\\mathbf F^T(t)\\, dt$$\nwhere \n$$\\mathbf F(\\Delta t) = \\begin{bmatrix}1 & \\Delta t & {\\Delta t}^2/2 \\\\ 0 & 1 & \\Delta t\\\\ 0& 0& 1\\end{bmatrix}$$", "dt = sympy.symbols('\\Delta{t}')\nF_k = sympy.Matrix([[1, dt, dt**2/2],\n [0, 1, dt],\n [0, 0, 1]])\nQ = sympy.Matrix([[0,0,0],\n [0,0,0],\n [0,0,1]])\n\nsympy.integrate(F_k*Q*F_k.T,(dt, 0, dt))", "Various Links\nhttps://ipython.org/\nhttps://jupyter.org/\nhttps://www.scipy.org/" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
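The last SymPy cell in the record above integrates $\mathbf F(t)\mathbf Q\mathbf F^T(t)$ symbolically. With Q = diag(0, 0, 1) as defined in that cell, the integrand's (0,0) entry is (t**2/2)**2 = t**4/4, so the (0,0) entry of the result is Delta_t**5/20. A quick numerical cross-check with nothing but the standard library; this is our verification sketch (composite Simpson's rule), not code from the book:

```python
def integrand_00(t):
    # (0,0) entry of F(t) Q F(t)^T with Q = diag(0, 0, 1): (t**2 / 2) ** 2
    return (t ** 2 / 2) ** 2

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

dt = 0.5
numeric = simpson(integrand_00, 0.0, dt)
closed_form = dt ** 5 / 20  # the Delta_t**5 / 20 entry SymPy reports
print(abs(numeric - closed_form) < 1e-9)  # True
```

The same check can be repeated for any other entry of the matrix by swapping in the corresponding product of F(t)'s third column with itself.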
Karuntg/SDSS_SSC
Analysis_2020/recalibration_gray.ipynb
gpl-3.0
[ "Compare photometry in the new Stripe82 catalog\nto Gaia DR2 photometry and derive corrections for\ngray systematics using Gmag photometry\ninput: N2020_stripe82calibStars.dat\noutput: stripe82calibStars_v3.1.dat\nfiles with RA/Dec corrections: ZPcorrectionsRA_v3.1_final.dat and ZPcorrectionsDec_v3.1_final.dat\nmakes paper plots:\nGmagCorrection_RA_Hess.png GmagCorrection_Dec_Hess.png\nGmagCorrectionTest_Gmag_Hess.png", "%matplotlib inline\nfrom astropy.table import Table\nfrom astropy.coordinates import SkyCoord\nfrom astropy import units as u\nfrom astropy.table import hstack\nimport matplotlib.pyplot as plt \nimport numpy as np\nfrom astroML.plotting import hist\n# for astroML installation see https://www.astroml.org/user_guide/installation.html\n\n## automatically reload any modules read below that might have changed (e.g. plots)\n%load_ext autoreload\n%autoreload 2\n# importing ZI and KT tools: \nimport ZItools as zit\nimport KTtools as ktt", "<a id='dataReading'></a>\nDefine paths and catalogs", "ZIdataDir = \"/Users/ivezic/Work/Science/CalibrationV2/SDSS_SSC/Data\"\n\n# the original SDSS catalog from 2007\nsdssOldCat = ZIdataDir + \"/\" + \"stripe82calibStars_v2.6.dat\"\n# INPUT: Karun's new catalog from 2020\nsdssNewCatIn = ZIdataDir + \"/\" + \"N2020_stripe82calibStars.dat\"\nreadFormat = 'csv'\n# OUTPUT: with Gmag-based gray corrections \nsdssNewCatOut = ZIdataDir + \"/\" + \"stripe82calibStars_v3.1.dat\" \n# Gaia DR2 \nGaiaDR2Cat = ZIdataDir + \"/\" + \"Stripe82_GaiaDR2.dat\"\n# Gaia DR2 with BP and RP data\nGaiaDR2CatBR = ZIdataDir + \"/\" + \"Stripe82_GaiaDR2_BPRP.dat\" \n\n# both new and old files use identical data structure\ncolnamesSDSS = ['calib_fla', 'ra', 'dec', 'raRMS', 'decRMS', 'nEpochs', 'AR_val', \n 'u_Nobs', 'u_mMed', 'u_mMean', 'u_mErr', 'u_rms_scatt', 'u_chi2',\n 'g_Nobs', 'g_mMed', 'g_mMean', 'g_mErr', 'g_rms_scatt', 'g_chi2',\n 'r_Nobs', 'r_mMed', 'r_mMean', 'r_mErr', 'r_rms_scatt', 'r_chi2',\n 'i_Nobs', 'i_mMed', 'i_mMean', 
'i_mErr', 'i_rms_scatt', 'i_chi2',\n 'z_Nobs', 'z_mMed', 'z_mMean', 'z_mErr', 'z_rms_scatt', 'z_chi2']\n\n%%time\n# old\nsdssOld = Table.read(sdssOldCat, format='ascii', names=colnamesSDSS) \nnp.size(sdssOld)\n\n%%time\n# new \nsdssNew = Table.read(sdssNewCatIn, format=readFormat, names=colnamesSDSS)\nnp.size(sdssNew)", "Simple positional match using ra/dec", "sdssOld_coords = SkyCoord(ra = sdssOld['ra']*u.degree, dec= sdssOld['dec']*u.degree) \nsdssNew_coords = SkyCoord(ra = sdssNew['ra']*u.degree, dec= sdssNew['dec']*u.degree) \n# this is matching sdssNew to sdssOld, so that indices are into the sdssOld catalog\n# makes sense in this case since the sdssOld catalog is (a little bit) bigger \n# than sdssNew (1006849 vs 1005470)\nidx, d2d, d3d = sdssNew_coords.match_to_catalog_sky(sdssOld_coords) \n\n# object separation is an object with units, \n# I add that as a column so that one can \n# select based on separation to the nearest matching object\nnew_old = hstack([sdssNew, sdssOld[idx]], table_names = ['new', 'old'])\nnew_old['sep_2d_arcsec'] = d2d.arcsec\n# good matches between the old and new catalogs\nMAX_DISTANCE_ARCSEC = 0.5\nsdss = new_old[(new_old['sep_2d_arcsec'] < MAX_DISTANCE_ARCSEC)]\nprint(np.size(sdss))", "apply standard cuts as in old catalog:", "mOK3 = sdss[sdss['ra_new']<1]\nmOK3 = zit.selectCatalog(sdss, mOK3)\n\nprint(996147/1006849)\nprint(993774/1006849)\nprint(991472/1006849)", "now match to Gaia DR2...", "colnamesGaia = ['ra', 'dec', 'nObs', 'Gmag', 'flux', 'fluxErr', 'pmra', 'pmdec']\ncolnamesGaia = colnamesGaia + ['BPmag', 'BPeI', 'RPmag', 'RPeI', 'BRef']\ngaia = Table.read(GaiaDR2CatBR, format='ascii', names=colnamesGaia)\ngaia['raG'] = gaia['ra']\ngaia['decG'] = gaia['dec'] \ngaia['GmagErr'] = gaia['fluxErr'] / gaia['flux'] \ngaia['BR'] = gaia['BPmag'] - gaia['RPmag'] \ngaia['GBP'] = gaia['Gmag'] - gaia['BPmag']\ngaia['GRP'] = gaia['Gmag'] - gaia['RPmag']\n\nsdss_coords = SkyCoord(ra = sdss['ra_old']*u.degree, dec= sdss['dec_old']*u.degree) 
\ngaia_coords = SkyCoord(ra = gaia['raG']*u.degree, dec= gaia['decG']*u.degree) \n\n# this is matching gaia to sdss, so that indices are into sdss catalog\n# makes sense in this case since the sdss catalog is bigger than gaia\nidxG, d2dG, d3dG = gaia_coords.match_to_catalog_sky(sdss_coords) \n\n# object separation is an object with units, \n# I add that as a column so that one can \n# select based on separation to the nearest matching object\ngaia_sdss = hstack([gaia, sdss[idxG]], table_names = ['gaia', 'sdss'])\ngaia_sdss['sepSG_2d_arcsec'] = d2dG.arcsec\n\n### code for generating new quantities, such as dra, ddec, colors, differences in mags, etc\ndef derivedColumns(matches):\n matches['dra'] = (matches['ra_new']-matches['ra_old'])*3600\n matches['ddec'] = (matches['dec_new']-matches['dec_old'])*3600\n matches['ra'] = matches['ra_old']\n ra = matches['ra'] \n matches['raW'] = np.where(ra > 180, ra-360, ra) \n matches['dec'] = matches['dec_old']\n matches['u'] = matches['u_mMed_old']\n matches['g'] = matches['g_mMed_old']\n matches['r'] = matches['r_mMed_old']\n matches['i'] = matches['i_mMed_old']\n matches['z'] = matches['z_mMed_old']\n matches['ug'] = matches['u_mMed_old'] - matches['g_mMed_old']\n matches['gr'] = matches['g_mMed_old'] - matches['r_mMed_old']\n matches['ri'] = matches['r_mMed_old'] - matches['i_mMed_old']\n matches['gi'] = matches['g_mMed_old'] - matches['i_mMed_old']\n matches['du'] = matches['u_mMed_old'] - matches['u_mMed_new']\n matches['dg'] = matches['g_mMed_old'] - matches['g_mMed_new']\n matches['dr'] = matches['r_mMed_old'] - matches['r_mMed_new']\n matches['di'] = matches['i_mMed_old'] - matches['i_mMed_new']\n matches['dz'] = matches['z_mMed_old'] - matches['z_mMed_new']\n # Gaia \n matches['draGold'] = -3600*(matches['ra_old'] - matches['raG']) \n matches['draGnew'] = -3600*(matches['ra_new'] - matches['raG']) \n matches['ddecGold'] = -3600*(matches['dec_old'] - matches['decG']) \n matches['ddecGnew'] = -3600*(matches['dec_new'] - 
matches['decG']) \n # photometric\n matches['gGr_old'] = matches['Gmag'] - matches['r_mMed_old']\n matches['gGr_new'] = matches['Gmag'] - matches['r_mMed_new']\n matches['gRPr_new'] = matches['RPmag'] - matches['r_mMed_new']\n return\n\nderivedColumns(gaia_sdss) ", "Select good matches and compare both catalogs to Gaia DR2", "# doGaiaAll(mOK)\ndef doGaiaGmagCorrection(d, Cstr, Gmax=20.0, yMax=0.03):\n # Cstr = 'gGr_old' or 'gGr_new' \n gi = d['gi']\n Gr = d[Cstr]\n Gmag = d['Gmag']\n zit.qpBM(d, 'gi', -1, 4.5, Cstr, -2, 1.0, 56) \n\n xBin, nPts, medianBin, sigGbin = zit.fitMedians(gi, Gr, -0.7, 4.0, 47, 0) \n data = np.array([xBin, medianBin, sigGbin])\n Ndata = xBin.size\n ### HERE WE ARE FITTING 7-th ORDER POLYNOMIAL TO Gmag-rSDSS vs. g-i ###\n # get best-fit parameters \n thetaCloc = zit.best_theta(data,7)\n # generate best fit lines on a fine grid \n xfit = np.linspace(-1.1, 4.3, 1000)\n yfit = zit.polynomial_fit(thetaCloc, xfit) \n ## added \"Poly\" because switched to piecewise linear interpolation below\n d['gGrFitPoly'] = zit.polynomial_fit(thetaCloc, gi)\n d['dgGrPoly'] = d[Cstr] - d['gGrFitPoly'] \n ### PIECEWISE LINEAR INTERPOLATION (AS FOR ALL OTHER COLORS AND SURVEYS)\n d['gGrFit'] = np.interp(gi, xBin, medianBin)\n d['dgGr'] = d[Cstr] - d['gGrFit'] \n \n # SELECT FOR RECALIBRATION wrt RA and Dec\n giMin = 0.4\n giMax = 3.0 \n Dc = d[(d['gi']>giMin)&(d['gi']<giMax)]\n print('N before and after color cut:', np.size(d), np.size(Dc))\n DcB = Dc[(Dc['Gmag']>14.5)&(Dc['Gmag']<Gmax)]\n DcB['GrResid'] = DcB['dgGr'] - np.median(DcB['dgGr'])\n zit.printStats(DcB['dgGr'])\n DcBok = DcB[np.abs(DcB['dgGr'])<0.1]\n print(np.size(DcB), np.size(DcBok))\n\n zit.qpBM(DcBok, 'Gmag', 14.5, Gmax, 'GrResid', -1*yMax, yMax, 56) \n zit.qpBM(DcBok, 'dec', -1.3, 1.3, 'GrResid', -1*yMax, yMax, 126) \n zit.qpBM(DcBok, 'raW', -51.5, 60, 'GrResid', -1*yMax, yMax, 112) \n \n return thetaCloc, DcBok \n\n## first limit astrometric distance and \n## require at least 4 epochs as in the 
old catalog\nMAX_DISTANCE_ARCSEC = 0.5\nm1 = gaia_sdss[(gaia_sdss['sepSG_2d_arcsec'] < MAX_DISTANCE_ARCSEC)]\na1 = m1['g_Nobs_new']\na2 = m1['r_Nobs_new']\na3 = m1['i_Nobs_new']\nmOK = m1[(a1>3)&(a2>3)&(a3>3)]\nprint(len(new_old))\nprint(len(m1))\nprint(len(mOK))\n\ndef plotAstro2Ddiagrams(d):\n ### plots \n plotNameRoot = 'astroVSpm_RA_pm'\n plotName = plotNameRoot + '.png' \n kw = {\"Xstr\":'pmra', \"Xmin\":-40, \"Xmax\":40, \"Xlabel\":'R.A. proper motion (mas/yr)', \\\n \"Ystr\":'draGnew', \"Ymin\":-0.5, \"Ymax\":0.5, \"Ylabel\":'raw SDSS R.A. - Gaia R.A. (arcsec)', \\\n \"XminBin\":-35, \"XmaxBin\":35, \"nBin\":70, \\\n \"plotName\":plotName, \"Nsigma\":0, \"offset\":-0.1, \"symbSize\":0.05}\n kw[\"nBinX\"] = 90\n kw[\"nBinY\"] = 40\n kw[\"cmap\"] = 'plasma'\n ktt.plotdelMagBW_KT(d, kw)\n print('made plot', plotName)\n\n # need to fit draGnew vs. pmra and correct for the mean trend, then plot vs. r mag\n pmra = d['pmra']\n draGnew = d['draGnew']\n xBin, nPts, medianBin, sigGbin = zit.fitMedians(pmra, draGnew, -60, 60, 120, 0) \n ### PIECEWISE LINEAR INTERPOLATION \n d['draGnewFit'] = np.interp(d['pmra'], xBin, medianBin)\n draCorr = d['draGnew'] - d['draGnewFit'] \n draCorrOK = np.where(np.abs(draCorr) < 0.25, draCorr, 0)\n d['draGnewCorr'] = draCorrOK \n\n plotNameRoot = 'astroVSpm_RA_r'\n plotName = plotNameRoot + '.png' \n kw = {\"Xstr\":'r_mMed_new', \"Xmin\":14, \"Xmax\":21, \"Xlabel\":'SDSS r magnitude', \\\n \"Ystr\":'draGnewCorr', \"Ymin\":-0.12, \"Ymax\":0.12, \"Ylabel\":'corr. SDSS R.A. - Gaia R.A. (arcsec)', \\\n \"XminBin\":14, \"XmaxBin\":21, \"nBin\":30, \\\n \"plotName\":plotName, \"Nsigma\":0, \"offset\":0.050, \"symbSize\":0.05}\n kw[\"nBinX\"] = 30\n kw[\"nBinY\"] = 24\n kw[\"cmap\"] = 'plasma'\n ktt.plotdelMagBW_KT(d, kw)\n print('made plot', plotName)\n\n\n plotNameRoot = 'astroVSpm_Dec_pm'\n plotName = plotNameRoot + '.png' \n kw = {\"Xstr\":'pmdec', \"Xmin\":-40, \"Xmax\":40, \"Xlabel\":'Dec. 
proper motion (mas/yr)', \\\n \"Ystr\":'ddecGnew', \"Ymin\":-0.5, \"Ymax\":0.5, \"Ylabel\":'raw SDSS Dec. - Gaia Dec. (arcsec)', \\\n \"XminBin\":-35, \"XmaxBin\":35, \"nBin\":70, \\\n \"plotName\":plotName, \"Nsigma\":0, \"offset\":-0.1, \"symbSize\":0.05}\n kw[\"nBinX\"] = 90\n kw[\"nBinY\"] = 40\n kw[\"cmap\"] = 'plasma'\n ktt.plotdelMagBW_KT(d, kw)\n print('made plot', plotName)\n\n### produce astrometric plots showing correlation with proper motions\nplotAstro2Ddiagrams(mOK)\n# print(np.std(mOK['draGnew']), np.std(mOK['ddecGnew'])) \n#mOK\n\nx = mOK['draGnewCorr']\nxOK = x[np.abs(x)<0.25]\nprint(np.std(xOK), zit.sigG(xOK))\n\nzit.qpBM(mOK, 'pmra', -50, 50, 'draGnew', -0.6, 0.6, 50) \n\nzit.qpBM(mOK, 'pmdec', -50, 50, 'ddecGnew', -0.6, 0.6, 50) \n\ntheta, mOKc = doGaiaGmagCorrection(mOK, 'gGr_new')\nthetaLoc = theta\n\n## for zero point calibration, in addition to color cut in doGaiaAll, take 16 < G < 19.5 \nmOKcB = mOKc[(mOKc['Gmag']>16)&(mOKc['Gmag']<19.5)]\nmOKcB['GrResid'] = mOKcB['dgGr'] - np.median(mOKcB['dgGr'])\nmOKcBok = mOKcB[np.abs(mOKcB['dgGr'])<0.1]\nprint(np.size(mOKc), np.size(mOKcB), np.size(mOKcBok))\n\nprint(np.std(mOKcBok['GrResid']), zit.sigG(mOKcBok['GrResid'])) \n\nzit.qpBM(mOKcBok, 'dec', -1.3, 1.3, 'GrResid', -0.03, 0.03, 260) \n\nzit.qpBM(mOKcBok, 'raW', -51.5, 60, 'GrResid', -0.03, 0.03, 112) ", "Final Figures for the Paper\nwith Karun's 2D histogram implementation", "def plotGmag2Ddiagrams(d):\n ### plots \n plotNameRoot = 'GrVSgi'\n plotName = plotNameRoot + '.png' \n kw = {\"Xstr\":'gi', \"Xmin\":0.0, \"Xmax\":3.5, \"Xlabel\":'SDSS g-i', \\\n \"Ystr\":'gGr_new', \"Ymin\":-1.25, \"Ymax\":0.25, \"Ylabel\":'Gaia Gmag - SDSS r', \\\n \"XminBin\":-0.5, \"XmaxBin\":4.0, \"nBin\":90, \\\n \"plotName\":plotName, \"Nsigma\":3, \"offset\":0.0, \"symbSize\":0.05}\n kw[\"nBinX\"] = 90\n kw[\"nBinY\"] = 40\n kw[\"cmap\"] = 'plasma'\n ktt.plotdelMagBW_KT(d, kw)\n print('made plot', plotName)\n\ndef plotGmag2DdiagramsX(d, kw):\n # Gaia G\n 
print('-----------')\n print(' stats for SDSS r binning medians:')\n plotName = plotNameRoot + '_Gmag.png' \n kwOC = {\"Xstr\":'Gmag', \"Xmin\":14.3, \"Xmax\":21.01, \"Xlabel\":'Gaia G (mag)', \\\n \"Ystr\":kw['Ystr'], \"Ymin\":-0.06, \"Ymax\":0.06, \"Ylabel\":Ylabel, \\\n \"XminBin\":14.5, \"XmaxBin\":21.0, \"nBin\":130, \\\n \"plotName\":plotName, \"Nsigma\":3, \"offset\":0.01, \"symbSize\":kw['symbSize']}\n zit.plotdelMag(goodC, kwOC)\n plotName = plotNameRoot + '_Gmag_Hess.png' \n kwOC[\"plotName\"] = plotName\n kwOC[\"nBinX\"] = 130\n kwOC[\"nBinY\"] = 50\n kwOC[\"cmap\"] = 'plasma'\n ktt.plotdelMagBW_KT(goodC, kwOC)\n print('made plot', plotName)\n print('------------------------------------------------------------------')\n\ndef plotGmagCorrections(d, kw):\n ### REDEFINE residuals to correspond to \"SDSS-others\", as other cases\n d['redef'] = -1*d[kw['Ystr']] \n kw['Ystr'] = 'redef'\n goodC = d[np.abs(d['redef'])<0.1]\n \n ### plots \n plotNameRoot = kw['plotNameRoot']\n # RA\n print(' stats for RA binning medians:')\n plotName = plotNameRoot + '_RA.png'\n Ylabel = 'residuals for (Gmag$_{SDSS}$ - Gmag$_{GaiaDR2}$) '\n kwOC = {\"Xstr\":'raW', \"Xmin\":-52, \"Xmax\":60.5, \"Xlabel\":'R.A. 
(deg)', \\\n \"Ystr\":kw['Ystr'], \"Ymin\":-0.07, \"Ymax\":0.07, \"Ylabel\":Ylabel, \\\n \"XminBin\":-51.5, \"XmaxBin\":60, \"nBin\":112, \\\n \"plotName\":plotName, \"Nsigma\":3, \"offset\":0.01, \"symbSize\":kw['symbSize']}\n zit.plotdelMag(goodC, kwOC)\n plotName = plotNameRoot + '_RA_Hess.png' \n kwOC[\"plotName\"] = plotName\n kwOC[\"nBinX\"] = 112\n kwOC[\"nBinY\"] = 50\n kwOC[\"cmap\"] = 'plasma'\n ktt.plotdelMagBW_KT(goodC, kwOC) \n print('made plot', plotName)\n\n # Dec\n print('-----------')\n print(' stats for Dec binning medians:')\n plotName = plotNameRoot + '_Dec.png'\n kwOC = {\"Xstr\":'dec', \"Xmin\":-1.3, \"Xmax\":1.3, \"Xlabel\":'Declination (deg)', \\\n \"Ystr\":kw['Ystr'], \"Ymin\":-0.07, \"Ymax\":0.07, \"Ylabel\":Ylabel, \\\n \"XminBin\":-1.266, \"XmaxBin\":1.264, \"nBin\":252, \\\n \"plotName\":plotName, \"Nsigma\":3, \"offset\":0.01, \"symbSize\":kw['symbSize']}\n zit.plotdelMag(goodC, kwOC)\n plotName = plotNameRoot + '_Dec_Hess.png' \n kwOC[\"plotName\"] = plotName\n kwOC[\"nBinX\"] = 252 \n kwOC[\"nBinY\"] = 50\n kwOC[\"cmap\"] = 'plasma'\n ktt.plotdelMagBW_KT(goodC, kwOC)\n print('made plot', plotName) \n \n # Gaia G\n print('-----------')\n print(' stats for SDSS r binning medians:')\n plotName = plotNameRoot + '_Gmag.png' \n kwOC = {\"Xstr\":'Gmag', \"Xmin\":14.3, \"Xmax\":21.01, \"Xlabel\":'Gaia G (mag)', \\\n \"Ystr\":kw['Ystr'], \"Ymin\":-0.06, \"Ymax\":0.06, \"Ylabel\":Ylabel, \\\n \"XminBin\":14.5, \"XmaxBin\":21.0, \"nBin\":130, \\\n \"plotName\":plotName, \"Nsigma\":3, \"offset\":0.01, \"symbSize\":kw['symbSize']}\n zit.plotdelMag(goodC, kwOC)\n plotName = plotNameRoot + '_Gmag_Hess.png' \n kwOC[\"plotName\"] = plotName\n kwOC[\"nBinX\"] = 130\n kwOC[\"nBinY\"] = 50\n kwOC[\"cmap\"] = 'plasma'\n ktt.plotdelMagBW_KT(goodC, kwOC)\n print('made plot', plotName)\n print('------------------------------------------------------------------')\n \n\nmOK['GrResid'] = mOK['dgGr'] - np.median(mOK['dgGr']) + 0.006\nmOKok = 
mOK[np.abs(mOK['dgGr'])<0.1]\nprint(np.size(mOK), np.size(mOKok))\n\nkeywords = {\"Ystr\":'GrResid', \"plotNameRoot\":'GmagCorrection', \"symbSize\":0.05}\nplotGmagCorrections(mOKok, keywords) \n\n!cp GmagCorrection_Gmag_Hess.png GmagCorrectionTest_Gmag_Hess.png\n\nmOKokX = mOKok[(mOKok['Gmag']>15)&(mOKok['Gmag']<15.5)]\nprint(np.median(mOKokX['GrResid']))\n\nmOKokX = mOKok[(mOKok['Gmag']>16)&(mOKok['Gmag']<16.2)]\nprint(np.median(mOKokX['GrResid']))\n\nkeywords = {\"Ystr\":'GrResid', \"plotNameRoot\":'GmagCorrection', \"symbSize\":0.05}\nplotGmagCorrections(mOKcBok, keywords) \n\n# for calibration: giMin = 0.4 & giMax = 3.0 \nmOKB = mOK[(mOK['Gmag']>16)&(mOK['Gmag']<19.5)]\nplotGmag2Ddiagrams(mOKB)\n\nmOKB", "Final Gmag-based recalibration\nRecalibrate R.A. residuals", "RAbin, RAnPts, RAmedianBin, RAsigGbin = zit.fitMedians(mOKcBok['raW'], mOKcBok['GrResid'], -51.5, 60.0, 112, 1)", "Recalibrate Dec residuals", "decOK = mOKcBok['dec_new']\nGrResid = mOKcBok['GrResid']\nfig,ax = plt.subplots(1,1,figsize=(8,6))\nax.scatter(decOK, GrResid, s=0.01, c='blue')\nax.set_xlim(-1.3,1.3)\nax.set_ylim(-0.06,0.06)\nax.set_ylim(-0.04,0.04)\n\nax.set_xlabel('Declination (deg)')\nax.set_ylabel('Gaia G - SDSS G')\nxBin, nPts, medianBin, sigGbin = zit.fitMedians(decOK, GrResid, -1.266, 1.264, 252, 0)\nax.scatter(xBin, medianBin, s=30.0, c='black', alpha=0.9)\nax.scatter(xBin, medianBin, s=15.0, c='yellow', alpha=0.5)\nTwoSigP = medianBin + 2*sigGbin\nTwoSigM = medianBin - 2*sigGbin \nax.plot(xBin, TwoSigP, c='yellow')\nax.plot(xBin, TwoSigM, c='yellow')\nxL = np.linspace(-100,100)\nax.plot(xL, 0*xL+0.00, c='yellow')\nax.plot(xL, 0*xL+0.01, c='red')\nax.plot(xL, 0*xL-0.01, c='red')\ndCleft = -1.3\nax.plot(0*xL+dCleft, xL, c='red')\nalltheta = []\nfor i in range(0,12):\n decCol = -1.2655 + (i+1)*0.2109\n ax.plot(0*xL+decCol, xL, c='red')\n xR = xBin[(xBin>dCleft)&(xBin<decCol)]\n yR = medianBin[(xBin>dCleft)&(xBin<decCol)]\n dyR = sigGbin[(xBin>dCleft)&(xBin<decCol)]\n data = 
np.array([xR, yR, dyR])\n theta2 = zit.best_theta(data,5)\n alltheta.append(theta2)\n yfit = zit.polynomial_fit(theta2, xR)\n ax.plot(xR, yfit, c='cyan', lw=2)\n dCleft = decCol\n rrr = yR - yfit\n # print(i, np.median(rrr), np.std(rrr)) # 2 milimag scatter \n # print(i, theta2)\nplt.savefig('GmagDecCorrections.png')\n\n# let's now correct all mags with this correction\nthetaRecalib = alltheta\n\ndecLeft = -1.3\nfor i in range(0,12):\n decRight = -1.2655 + (i+1)*0.2109\n decArr = np.linspace(decLeft, decRight, 100)\n thetaBin = thetaRecalib[i] \n ZPfit = zit.polynomial_fit(thetaBin, decArr)\n if (i==0):\n decCorrGrid = decArr\n ZPcorr = ZPfit\n else: \n decCorrGrid = np.concatenate([decCorrGrid, decArr]) \n ZPcorr = np.concatenate([ZPcorr, ZPfit])\n decLeft = decRight\n\nmOKtest = mOK[mOK['r_Nobs_new']>3]\n\n# Dec correction\ndecGrid2correct = mOKtest['dec_new']\nZPcorrectionsDec = np.interp(decGrid2correct, decCorrGrid, ZPcorr)\n# RA correction \nraWGrid2correct = mOKtest['raW'] \nZPcorrectionsRA = np.interp(raWGrid2correct, RAbin, RAmedianBin)\nprint(np.std(ZPcorrectionsDec), np.std(ZPcorrectionsRA))\n\nfig,ax = plt.subplots(1,1,figsize=(8,6))\nax.scatter(decGrid2correct, ZPcorrectionsDec, s=0.01, c='blue')\nax.plot(decCorrGrid, ZPcorr, c='red')\nax.set_xlim(-1.3,1.3)\nax.set_ylim(-0.02,0.02)\nax.set_xlabel('Declination (deg)')\nax.set_ylabel('Correction')\n\nfig,ax = plt.subplots(1,1,figsize=(8,6))\nax.scatter(raWGrid2correct, ZPcorrectionsRA, s=0.01, c='blue')\nax.plot(RAbin, RAmedianBin, c='red')\nax.set_xlim(-52,61)\nax.set_ylim(-0.05,0.05)\nax.set_xlabel('RA (deg)')\nax.set_ylabel('Correction') \nnp.min(ZPcorrectionsRA)\n\nmOKtest['ZPcorrectionsRA'] = ZPcorrectionsRA\nmOKtest['ZPcorrectionsDec'] = ZPcorrectionsDec\nmOKtest['r_mMed_new'] = mOKtest['r_mMed_new'] + mOKtest['ZPcorrectionsRA'] + mOKtest['ZPcorrectionsDec']\nmOKtest['gGr_new'] = mOKtest['Gmag'] - mOKtest['r_mMed_new']\nmOKtest['gGrFit'] = zit.polynomial_fit(thetaCloc, 
mOKtest['gi'])\nmOKtest['dgGr'] = mOKtest['gGr_new'] - mOKtest['gGrFit']\n\nd = mOKtest\ngi = d['gi']\nGr = d['gGr_new']\nGmag = d['Gmag']\nzit.qpBM(d, 'gi', -1, 4.5, 'gGr_new', -2, 1.0, 56) \n\nthetaCtest, DcBokTest_new = doGaiaGmagCorrection(mOKtest, 'gGr_new')\n\nkeywords = {\"Ystr\":'gGr_new', \"plotNameRoot\":'GmagCorrectionTest', \"symbSize\":0.05}\nmOKtest2 = mOKtest[(mOKtest['gi']>0.4)&(mOKtest['gi']<3.0)]\nx = mOKtest2[(mOKtest2['Gmag']>14.5)&(mOKtest2['Gmag']<15.5)]\nmOKtest2['gGr_new'] = mOKtest2['gGr_new'] - np.median(x['gGr_new']) \nplotGmagCorrections(mOKtest2, keywords) ", "Now save correction arrays, then apply to original file, and then test", "# final refers to the July 2020 analysis, before the paper submission\n#np.savetxt('ZPcorrectionsRA_v3.1_final.dat', (RAbin, RAmedianBin)) \n#np.savetxt('ZPcorrectionsDec_v3.1_final.dat', (decCorrGrid, ZPcorr))\n\nsdssOut = sdss[sdss['ra_new']<1]\nsdssOut = zit.selectCatalog(sdss, sdssOut)\n\nsdssOut.sort('calib_fla_new') \n\n# read back gray zero point recalibration files \nzpRAgrid, zpRA = np.loadtxt('ZPcorrectionsRA_v3.1_final.dat') \nzpDecgrid, zpDec = np.loadtxt('ZPcorrectionsDec_v3.1_final.dat') \n\nsdssOut\n\n# Dec correction\ndecGrid2correct = sdssOut['dec_new']\nZPcorrectionsDec = np.interp(decGrid2correct, zpDecgrid, zpDec)\n\n# RA correction \nra = sdssOut['ra_new'] \nraWGrid2correct = np.where(ra > 180, ra-360, ra) \nZPcorrectionsRA = np.interp(raWGrid2correct, zpRAgrid, zpRA)\nprint('gray std RA/Dec:', np.std(ZPcorrectionsRA), np.std(ZPcorrectionsDec)) \n\nfor b in ('u', 'g', 'r', 'i', 'z'):\n for mtype in ('_mMed_new', '_mMean_new'):\n mstr = b + mtype\n # applying here gray corrections \n sdssOut[mstr] = sdssOut[mstr] + ZPcorrectionsRA + ZPcorrectionsDec \n\nSSCindexRoot = 'CALIBSTARS_'\noutFile = ZIdataDir + \"/\" + \"stripe82calibStars_v3.1_noheader_final.dat\"\nnewSSC = open(outFile,'w')\ndf = sdssOut\nNgood = 0\nfor i in range(0, np.size(df)):\n Ngood += 1\n NoldCat = 
df['calib_fla_new'][i]\n strNo = f'{Ngood:07}'\n SSCindex = SSCindexRoot + strNo \n SSCrow = zit.getSSCentry(df, i)\n zit.SSCentryToOutFileRow(SSCrow, SSCindex, newSSC) \nnewSSC.close()\nprint(Ngood, 'rows in file', outFile)", "paper plot showing the jump in Gaia Gmag", "np.size(zpDec)", "TEMP code for color corrections to go from 3.1 to 3.2 and 3.3", "### need to figure out where were ZPcorrections2_rz_Dec.dat etc produced ... \n## color corrections \nfor mtype in ('_mMed', '_mMean'):\n ## u band from u-r color\n color = 'ur'\n zpcFilename = 'ZPcorrections_' + color + '_RA.dat'\n zpcRAgrid, zpcRA = np.loadtxt(zpcFilename) \n zpcFilename = 'ZPcorrections_' + color + '_Dec.dat'\n zpcDecgrid, zpcDec = np.loadtxt(zpcFilename) \n ZPcorrectionsRA = np.interp(raWGrid2correct, zpcRAgrid, zpcRA) \n ZPcorrectionsDec = np.interp(decGrid2correct, zpcDecgrid, zpcDec) \n print('u-r std RA/Dec:', np.std(ZPcorrectionsRA), np.std(ZPcorrectionsDec)) \n mstr = 'u' + mtype\n sdssOut[mstr] = sdssOut[mstr] - ZPcorrectionsRA - ZPcorrectionsDec \n ## g band from g-r color\n color = 'gr'\n zpcFilename = 'ZPcorrections_' + color + '_RA.dat'\n zpcRAgrid, zpcRA = np.loadtxt(zpcFilename) \n zpcFilename = 'ZPcorrections_' + color + '_Dec.dat'\n zpcDecgrid, zpcDec = np.loadtxt(zpcFilename) \n ZPcorrectionsRA = np.interp(raWGrid2correct, zpcRAgrid, zpcRA) \n ZPcorrectionsDec = np.interp(decGrid2correct, zpcDecgrid, zpcDec) \n print('g-r std RA/Dec:', np.std(ZPcorrectionsRA), np.std(ZPcorrectionsDec)) \n mstr = 'g' + mtype\n sdssOut[mstr] = sdssOut[mstr] - ZPcorrectionsRA - ZPcorrectionsDec \n ## i band from r-i color\n color = 'ri'\n zpcFilename = 'ZPcorrections_' + color + '_RA.dat'\n zpcRAgrid, zpcRA = np.loadtxt(zpcFilename) \n zpcFilename = 'ZPcorrections_' + color + '_Dec.dat'\n zpcDecgrid, zpcDec = np.loadtxt(zpcFilename) \n ZPcorrectionsRA = np.interp(raWGrid2correct, zpcRAgrid, zpcRA) \n ZPcorrectionsDec = np.interp(decGrid2correct, zpcDecgrid, zpcDec) \n mstr = 'i' + mtype\n print('r-i 
std RA/Dec:', np.std(ZPcorrectionsRA), np.std(ZPcorrectionsDec)) \n sdssOut[mstr] = sdssOut[mstr] + ZPcorrectionsRA + ZPcorrectionsDec \n ## i band from r-z color\n color = 'rz'\n zpcFilename = 'ZPcorrections_' + color + '_RA.dat'\n zpcRAgrid, zpcRA = np.loadtxt(zpcFilename) \n zpcFilename = 'ZPcorrections_' + color + '_Dec.dat'\n zpcDecgrid, zpcDec = np.loadtxt(zpcFilename) \n ZPcorrectionsRA = np.interp(raWGrid2correct, zpcRAgrid, zpcRA) \n ZPcorrectionsDec = np.interp(decGrid2correct, zpcDecgrid, zpcDec) \n mstr = 'z' + mtype\n print('r-z std RA/Dec:', np.std(ZPcorrectionsRA), np.std(ZPcorrectionsDec)) \n sdssOut[mstr] = sdssOut[mstr] + ZPcorrectionsRA + ZPcorrectionsDec " ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
lukas/ml-class
examples/notebooks/Lesson-1-Sentiment-Analysis-Data-Exploration.ipynb
gpl-2.0
[ "Data Exploration\nGoals\n\nIntroduction to Sentiment Analysis use case\nHow to quickly understand a real world data set\n\nIntroduction\nA useful application of machine learning is \"sentiment analysis\". Here we are trying to determine if a person feels positively or negatively about what they're writing about. One important application of sentiment analysis is for marketing departments to understand what people are saying about them on social media. Nearly every medium or large company with any sort of social media presence does some sort of sentiment analysis like the task we are about to do.\nHere we have a collection of tweets from the tech conference SXSW talking about Apple brands. These tweets are hand labeled by humans using a tool I built called CrowdFlower. Our goal is to build a classifier that can generalize the human labels to more tweets.\nThe labels are what's known as training data, and we're going to use it to teach our classifier what text is positive sentiment and what text is negative sentiment.\nLet's take a look at our data. Machine learning classes tend to talk mostly about algorithms, but in practice, machine learning practitioners usually spend most of their time looking at their data.\nThis is a real data set, not a toy one, and I've left it uncleaned up so you will have to work through a few of the messy issues that almost always happen in the real world.", "# Our data file is in ../scikit/tweets.csv\n# in a Comma Separated Values format\n# this command uses the shell to print out the first ten lines\n!head ../scikit/tweets.csv", "Ok, that looks good - if a little messy. 
Let's open the file with some python\nLoading Data", "import pandas as pd # this loads the pandas library, a very useful data exploration library\nimport numpy as np # this loads numpy, a very useful numerical computing library\n\n# Puts tweets into a data frame\ndf = pd.read_csv('../scikit/tweets.csv') # read the file into a pandas data frame\nprint(df.head()) # print the first few rows of the data frame", "Data frames are pretty cool, for example I can index the column by name.", "tweets = df['tweet_text'] # sets tweets to be the first column, titled 'tweet_text'\nprint(tweets.head())", "Check for understanding\nSome questions that I immediately asked myself (and you should too)\n1. How many rows are in our data set?\n2. How many different types of labels are there? What are they?\n3. What year was this data collected?\nIf you were my student and you were sitting in front of me, I would make you actually do this. Unfortunately I can't force you to answer these questions yourself, but you will have more fun and learn more if you do.\nYou will probably need to google around a little to figure out how to use the dataframe to answer these questions. You can check out the cool pandas tutorial at https://pandas.pydata.org/pandas-docs/stable/10min.html - it will be useful for many things besides this tutorial!\nQuestion 1: How many rows in the dataset?", "print(tweets.shape) # print the shape of the variable tweets", "Looks like there are 9093 rows in our dataset\nQuestion 2: How many different types of labels are there? What are they?", "# we make target the list of labels from the third column\ntarget = df['is_there_an_emotion_directed_at_a_brand_or_product']\n\n# describe is a cool function for quick data exploration\ntarget.describe()", "Hmmm... 
looks like there are 4 values for the sentiment of the tweets with \"No emotion toward brand or product\" being the most common.", "target.value_counts()", "Interesting, there is a label \"I can't tell\" along with \"Positive emotion\", \"Negative emotion\" and \"No emotion toward brand or product\"\nQuestion 3: What year was this data collected?", "tweets[0]", "Hm, it's a 3G iphone, when was that? 2010?", "tweets[200]", "Ok - the ipad2 was released in 2011, these tweets must be from 2011.\nData Cleanup\nIf we dig into the data set one thing we'll notice is that some of the tweets are actually empty.", "print(tweets[6])", "It is best practice to not change the input data. It's better to clearly show the ways that you've modified your data in your code. In this case, we can use pandas to easily pull out the rows where the tweets are empty. Here we are indexing into our data frame with the results of a pd.notnull function - this notation is really convenient.", "fixed_tweets = tweets[pd.notnull(tweets)]", "We also need to remove the same rows of labels so that our \"tweets\" and \"target\" lists have the same length.", "fixed_target = target[pd.notnull(tweets)]", "Take a second to think about why I wrote \nfixed_target = target[pd.notnull(tweets)] instead of fixed_target = target[pd.notnull(target)]\nKey Takeaways\n\nThe most important thing to do when building a machine learning model is to actually look at your data. \nClean up your data in code, not in the original file\n\nQuestions\n\nHow messy is this data? It was labeled by humans - how many mislabels?\nWhy is there a \"Can't Tell\" label - what kind of tweets get that?\nAre all the tweets in English?" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
dhyeon/ingredient2vec
src/src_old/plot_ingredients_recipes.ipynb
apache-2.0
[ "import sys\nprint(sys.version)", "Build the function modules needed for the implementation\nload_embeddings(path), nemb[1:]\nload_vocab(path), vocab[1:]\nload_recipes(path), recipes\nrun_tsne(nemb,multicore), tsne.fit_transform(nemb)\nbuild_food2cuisine(recipes, vocab)\nmake_plot(name, points, labels, legend_labels, legend_order, legend_label_to_color, pretty_legend_label, publish)", "import os\nimport sklearn.manifold\nimport matplotlib.pyplot as plt\nimport h5py\nimport plotly.plotly as py\nimport plotly.graph_objs as go\nimport plotly.offline as offline\nimport numpy as np\nimport collections\nimport pandas as pd\nimport itertools\nimport seaborn as sns\nimport time\nimport json\nimport re\nfrom sklearn.manifold import TSNE\n# %load_ext wurlitzer\n\noffline.init_notebook_mode()\n\nflatten = lambda l: [item for sublist in l for item in sublist]\n\ndef load_embeddings(path):\n f = h5py.File(path, 'r')\n nemb = f['nemb'][:]\n f.close()\n return nemb[1:]\n\n\ndef load_vocab(path):\n vocab = []\n with open(path, 'r') as f:\n for line in f.readlines():\n split = line.split(' ')\n vocab.append((split[0].replace('\\'', ''), int(split[1].rstrip())))\n # ignore UNK at position 0\n return vocab[1:]\n\ndef load_recipes(path):\n recipes = []\n with open(path, 'r') as f:\n for line in f:\n if line[0] == '#':\n pass\n else:\n recipes.append(line.rstrip().split(','))\n return recipes\n\ndef run_tsne(nemb, multicore=True):\n if multicore:\n tsne = TSNE(n_jobs=4)\n else:\n tsne = sklearn.manifold.TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000, verbose=1)\n return tsne.fit_transform(nemb)\n \ndef build_food2cuisine(recipes, vocab):\n foods = [tup[0] for tup in vocab]\n food_counters = {food: collections.Counter() for food in foods}\n cuisine_counter = collections.Counter()\n for line in recipes:\n cuisine = line[0]\n cuisine_counter.update([cuisine])\n for food in line[1:]:\n if food in foods:\n food_counters[food].update([cuisine])\n food2cuisine = {}\n for food, food_counter in food_counters.items():\n 
for cuisine in cuisine_counter.keys():\n food_counter[cuisine] = np.float32(food_counter[cuisine]) / np.float32(cuisine_counter[cuisine])\n sorted_food_counter = sorted(food_counter.items(), key=lambda a: a[1])[::-1]\n print(food, sorted_food_counter[0:2])\n food2cuisine.update({food: sorted_food_counter[0][0]})\n return food2cuisine\n\n\n# These are the \"Tableau 20\" colors as RGB. \ntableau20 = [(31, 119, 180), (174, 199, 232), (255, 127, 14), (255, 187, 120), \n (44, 160, 44), (152, 223, 138), (214, 39, 40), (255, 152, 150), \n (148, 103, 189), (197, 176, 213), (140, 86, 75), (196, 156, 148), \n (227, 119, 194), (247, 182, 210), (127, 127, 127), (199, 199, 199), \n (188, 189, 34), (219, 219, 141), (23, 190, 207), (158, 218, 229)] \ntableau20_rgb = ['rgb' + str(triplet) for triplet in tableau20]\n\n# Prettify ingredients\npretty_food = lambda s: ' '.join(s.split('_')).capitalize().lstrip()\n# Prettify cuisine names\npretty_cuisine = lambda s: ''.join(map(lambda x: x if x.islower() else \" \"+x, s)).lstrip()\n\n\ndef make_plot(name, points, labels, legend_labels, legend_order, legend_label_to_color, pretty_legend_label, publish):\n lst = zip(points, labels, legend_labels)\n full = sorted(lst, key=lambda x: x[2])\n traces = []\n for legend_label, group in itertools.groupby(full, lambda x: x[2]):\n group_points = []\n group_labels = []\n for tup in group:\n point, label, _ = tup\n group_points.append(point)\n group_labels.append(label)\n group_points = np.stack(group_points)\n traces.append(go.Scattergl(\n x = group_points[:, 0],\n y = group_points[:, 1],\n mode = 'markers',\n marker = dict(\n color = legend_label_to_color[legend_label],\n size = 8,\n opacity = 0.6,\n #line = dict(width = 1)\n ),\n text = ['{} ({})'.format(label, pretty_legend_label(legend_label)) for label in group_labels],\n hoverinfo = 'text',\n name = legend_label\n )\n )\n # order the legend\n ordered = [[trace for trace in traces if trace.name == lab] for lab in legend_order]\n traces_ordered 
= flatten(ordered)\n def _set_name(trace):\n trace.name = pretty_legend_label(trace.name)\n return trace\n traces_ordered = list(map(_set_name, traces_ordered))\n layout = go.Layout(\n xaxis=dict(\n autorange=True,\n showgrid=False,\n zeroline=False,\n showline=False,\n autotick=True,\n ticks='',\n showticklabels=False\n ),\n yaxis=dict(\n autorange=True,\n showgrid=False,\n zeroline=False,\n showline=False,\n autotick=True,\n ticks='',\n showticklabels=False\n )\n )\n fig = go.Figure(data=traces_ordered, layout=layout)\n if publish:\n plotter = py.iplot\n else:\n plotter = offline.plot\n plotter(fig, filename=name + '.html')", "Load Data", "# path = '/home/jaan/fit/food2vec/2017-01-24'\npath = 'data/'\n\n# 104534 embedded vectors - unique recipes?\nnemb = load_embeddings(os.path.join(path, 'embeddings.h5'))\n\n# 2088 vocabs - unique ingredients\nvocab = load_vocab(os.path.join(path, 'vocab.txt'))\n\nfood2id = {tup[0]: i for i, tup in enumerate(vocab)}", "Plot ingredients\nUsing t-SNE for dimension reduction", "# don't plot UNK at position 0\nlow_dim_embs = run_tsne(nemb.astype(np.float64), multicore=False)\n\nrecipes = load_recipes('data/kaggle_and_nature.csv')\n\nfor i in recipes[:10]:\n print i\n\nfood2cuisine = build_food2cuisine(recipes, vocab)\n\nfor i in food2cuisine[:10]:\n print i\n\nwith open('data/food2cuisine.json', 'w') as f:\n json.dump(food2cuisine, f, indent=2)\n\ncuisines = list(set(food2cuisine.values()))\n# np.random.seed(1234)\n# tableau20_sample = np.random.choice(tableau20_rgb, len(cuisines), replace=False)\n# cuisine2color = {cuisine: tableau20_sample[i] for i, cuisine in enumerate(cuisines)}\ncuisine2color = {\n 'African': sns.xkcd_rgb[\"grey\"],\n 'LatinAmerican': sns.xkcd_rgb[\"forest green\"],\n 'NorthAmerican': sns.xkcd_rgb[\"light pink\"],\n 'MiddleEastern': sns.xkcd_rgb[\"mustard yellow\"],\n 'EastAsian': sns.xkcd_rgb[\"orange\"],\n 'SouthAsian': sns.xkcd_rgb[\"magenta\"],\n 'SoutheastAsian': sns.xkcd_rgb[\"purple\"],\n 'NorthernEuropean': 
sns.xkcd_rgb[\"blue\"],\n 'EasternEuropean': sns.xkcd_rgb[\"deep blue\"],\n 'WesternEuropean': sns.xkcd_rgb[\"sky blue\"],\n 'SouthernEuropean': sns.xkcd_rgb[\"olive\"],\n}\nfood2color = {food: cuisine2color[food2cuisine[food]] for food in food2cuisine.keys()}\n\nlegend_order = [\n'African',\n'LatinAmerican',\n'NorthAmerican',\n'EastAsian',\n'SouthAsian',\n'SoutheastAsian',\n'MiddleEastern',\n'NorthernEuropean',\n'EasternEuropean',\n'WesternEuropean',\n'SouthernEuropean',\n]\n\nlabels = [item[0] for item in vocab]\nlegend_labels = [food2cuisine[food] for food in labels]\nlabels = [item[0] for item in vocab]\nlabels = map(pretty_food, labels)\n# legend_order = cuisine2color.keys()\n\nprint labels\nprint legend_labels\n\n\n\"\"\"\nmake_plot(name='food2vec_food_embeddings_tsne',\n points=low_dim_embs, \n labels=labels, \n legend_labels=legend_labels, \n legend_order=legend_order, \n legend_label_to_color=cuisine2color, \n pretty_legend_label=pretty_cuisine,\n publish=False)\n\"\"\"\n\nlen(vocab)", "Plot recipes\nNB: TSNE Takes ~10-30 minutes on 50k recipes", "def build_recipe_embedding(recipes, nemb, food2id):\n \"\"\"Get the recipe embedding.\n \n A recipe's embedding is the mean of its ingredients' embeddings.\n \n Args:\n recipes: list of recipes in the form [cuisine, food1, food2, ...]\n nemb: normalized embeddings\n food2id: map from food string to index in normalized embeddings\n Returns:\n List of tuples, each tuple has form (cuisine, ingredients, recipe embedding)\n \"\"\"\n recipe_embeddings = []\n for line in recipes:\n cuisine = line.pop(0)\n foods = line\n # check that we have learned the embeddings for all the ingredients\n filtered_foods = [food for food in foods if food in food2id]\n if len(filtered_foods) > 0:\n food_ids = list(map(lambda x: food2id[x], filtered_foods))\n embedding = np.mean(nemb[food_ids], axis=0)\n recipe_embeddings.append((cuisine, foods, embedding))\n return recipe_embeddings\n\nrecipe_embeddings = build_recipe_embedding(recipes, 
nemb, food2id)\n\n# subset = np.random.choice(range(len(recipe_embeddings)), 2000, replace=False)\n# small = [recipe_embeddings[idx] for idx in subset]\n\ncuisine_labels, ingredients, embeddings = zip(*recipe_embeddings)\ncuisine_labels = list(cuisine_labels)\nrecipe_nemb = np.vstack(embeddings)\n\ncuisine_counter = collections.Counter(cuisine_labels)\n\ncuisine_counter\n\nrecipe_emb_path = os.path.join(path, 'low_dim_recipe_embs.npz')\n\n%load_ext wurlitzer\nt0 = time.time()\nlow_dim_recipe_embs = run_tsne(recipe_nemb.astype(np.float64))\nnp.savez_compressed(recipe_emb_path, low_dim_recipe_embs)\nprint('time to run tsne on %d points: %.3f mins' % (len(recipe_nemb), (time.time() - t0) / 60.))\n\nwith open(recipe_emb_path, 'rb') as f:\n low_dim_recipe_embs = np.load(f)['arr_0']\n\n# low_dim_recipe_embs = run_fast_tsne(embeddings)\n# low_dim_recipe_embs = tsne.bh_sne(embeddings)\n# t0 = time.time()\n# low_dim_recipe_embs = bhtsne.run_bh_tsne(nemb, no_dims=2, perplexity=50, theta=0.5, randseed=-1, verbose=False,initial_dims=50, use_pca=True, max_iter=1000)\n# print 'time to run tsne on %d points: %.3f mins' % (len(recipe_nemb), (time.time() - t0) / 60.)\nlow_dim_recipe_embs_list = low_dim_recipe_embs.tolist()\n\n# clean_string = lambda x: re.sub(r'([^\\s\\w]|_)+', '', x)\nrecipe_labels = [', '.join([pretty_food(food) for food in foods]).lower().capitalize() for foods in ingredients]\n\nmake_plot(name='food2vec_recipe_embeddings_tsne',\n points=low_dim_recipe_embs_list, \n labels=recipe_labels, \n legend_labels=cuisine_labels, \n legend_order=legend_order, \n legend_label_to_color=cuisine2color, \n pretty_legend_label=pretty_cuisine,\n publish=False)", "Cuisine embeddings", "# cuisine embedding as the average of recipe embeddings:\nrecipe_embeddings[0]\nsorted_recipe_embeddings = sorted(recipe_embeddings, key=lambda x: x[0])\ncuisine_embeddings = []\nfor cuisine_name, group in itertools.groupby(sorted_recipe_embeddings, lambda x: x[0]):\n cuisine_recipe_emb = []\n for 
tup in group:\n _, _, recipe_emb = tup\n cuisine_recipe_emb.append(recipe_emb)\n all_cuisine_recipe_emb = np.stack(cuisine_recipe_emb)\n cuisine_emb = np.mean(all_cuisine_recipe_emb, axis=0)\n cuisine_embeddings.append((cuisine_name, cuisine_emb)) \n\n\n# cuisine embedding as the average of food embeddings with highest relative prevalence in that cuisine\n# def reverse_dict(mydict):\n# reversed_dict = collections.defaultdict(list)\n# for key,value in mydict.iteritems():\n# reversed_dict[value].append(key)\n# return reversed_dict\n# cuisine2foods = reverse_dict(food2cuisine)\n# cuisine_embeddings = []\n# for cuisine, foods in cuisine2foods.items():\n# food_ids = [food2id[food] for food in foods]\n# food_embs = nemb[food_ids]\n# cuisine_embeddings.append((cuisine, np.mean(food_embs, axis=0)))\n\nt0 = time.time()\ncuisine_names, cuisine_emb = zip(*cuisine_embeddings)\ncuisine_emb = np.asarray(cuisine_emb)\nlow_dim_cuisine_embs = run_tsne(cuisine_emb, multicore=False)\nprint('time to run tsne on %d points: %.3f mins' % (len(cuisine_emb), (time.time() - t0) / 60.))\n\nmake_plot(name='food2vec_cuisine_embeddings_tsne',\n points=low_dim_cuisine_embs, \n labels=cuisine_names, \n legend_labels=cuisine_names, \n legend_order=legend_order, \n legend_label_to_color=cuisine2color, \n pretty_legend_label=pretty_cuisine,\n publish=False)", "Write foods to json", "foods = [tup[0] for tup in vocab]\nfood2prettyfood = [{\"value\": food, \"text\": pretty_food(food)} for food in foods]\nfood2prettyfood.append([{\"value\": tup[0], \"text\": pretty_cuisine(tup[0])} for tup in cuisine_embeddings])\nwith open(os.path.join(path, 'foods.json'), 'w') as f:\n json.dump(food2prettyfood, f, indent=4)\n\ndef write_to_js(words, embeddings, path):\n word_vecs = {}\n for word, embedding in zip(words, embeddings):\n word_vecs[word] = embedding.tolist()\n with open(path, 'w') as f:\n f.write('var wordVecs=')\n json.dump(word_vecs, f)\n f.write(';')\n# lower precision, faster\n# nemb = 
nemb.astype(np.float16)\nwords = [pretty_food(food) for food in foods] + [pretty_cuisine(tup[0]) for tup in cuisine_embeddings]\nall_emb = np.vstack([nemb, cuisine_emb])\n# '../../word2vecjson/data/foodVecs.js'\nwrite_to_js(words, all_emb, path=os.path.join(path, 'foodVecs.js'))\n\n# print list of foods for autocomplete in assets/js/initm.js\nstring = str({word: None for word in words})\nwith open(os.path.join(path, 'javascript_dict.txt'), 'w') as f:\n f.write(string.replace('None', 'null'))" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/asl-ml-immersion
notebooks/tfx_pipelines/guided_projects/guided_project_2.ipynb
apache-2.0
[ "Guided Project 2\nLearning Objective:\n\nLearn how to adapt the tfx template to an existing model.\n\nIn this guided project, we will use the tfx template tool to create a TFX pipeline around the covertype dataset.\nThe goal is to adapt the template pipeline to make use of the model code for the covertype dataset we already developed in\nthe first part of this course.", "import os", "Step 1. Environment setup\nEnvirnonment Variables\nSetup the your Kubeflow pipelines endopoint below the same way you did in guided project 1.", "ENDPOINT = \"\" # Enter your ENDPOINT here.\n\nPATH = %env PATH\n%env PATH={PATH}:/home/jupyter/.local/bin\n\nshell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null\nGOOGLE_CLOUD_PROJECT = shell_output[0]\n\n%env GOOGLE_CLOUD_PROJECT={GOOGLE_CLOUD_PROJECT}\n\n# Docker image name for the pipeline image.\nCUSTOM_TFX_IMAGE = \"gcr.io/\" + GOOGLE_CLOUD_PROJECT + \"/tfx-pipeline\"\nCUSTOM_TFX_IMAGE", "skaffold tool setup", "%%bash\n\nLOCAL_BIN=\"/home/jupyter/.local/bin\"\nSKAFFOLD_URI=\"https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64\"\n\ntest -d $LOCAL_BIN || mkdir -p $LOCAL_BIN\n\nwhich skaffold || (\n curl -Lo skaffold $SKAFFOLD_URI &&\n chmod +x skaffold &&\n mv skaffold $LOCAL_BIN\n)", "Modify the PATH environment variable so that skaffold is available:\nAt this point, you shoud see the skaffold tool with the command which:", "!which skaffold", "Step 2. Copy the predefined template to your project directory.\nIn this step, we will create a working pipeline project directory and \nfiles by copying additional files from a predefined template.\nYou may give your pipeline a different name by changing the PIPELINE_NAME below. 
\nThis will also become the name of the project directory where your files will be put.", "PIPELINE_NAME = \"guided_project_2\"\nPROJECT_DIR = os.path.join(os.path.expanduser(\".\"), PIPELINE_NAME)\nPROJECT_DIR", "TFX includes the taxi template with the TFX python package. \nIf you are planning to solve a point-wise prediction problem,\nincluding classification and regresssion, this template could be used as a starting point.\nThe tfx template copy CLI command copies predefined template files into your project directory.", "!tfx template copy \\\n --pipeline-name={PIPELINE_NAME} \\\n --destination-path={PROJECT_DIR} \\\n --model=taxi\n\n%cd {PROJECT_DIR}", "Step 3. Browse your copied source files\nThe TFX template provides basic scaffold files to build a pipeline, including Python source code,\nsample data, and Jupyter Notebooks to analyse the output of the pipeline. \nThe taxi template uses the same Chicago Taxi dataset and ML model as \nthe Airflow Tutorial.\nHere is brief introduction to each of the Python files:\npipeline - This directory contains the definition of the pipeline\n* configs.py — defines common constants for pipeline runners\n* pipeline.py — defines TFX components and a pipeline\nmodels - This directory contains ML model definitions.\n* features.py, features_test.py — defines features for the model\n* preprocessing.py, preprocessing_test.py — defines preprocessing jobs using tf::Transform\nmodels/estimator - This directory contains an Estimator based model.\n* constants.py — defines constants of the model\n* model.py, model_test.py — defines DNN model using TF estimator\nmodels/keras - This directory contains a Keras based model.\n* constants.py — defines constants of the model\n* model.py, model_test.py — defines DNN model using Keras\nbeam_dag_runner.py, kubeflow_dag_runner.py — define runners for each orchestration engine\nRunning the tests:\nYou might notice that there are some files with _test.py in their name. 
\nThese are unit tests of the pipeline and it is recommended to add more unit \ntests as you implement your own pipelines. \nYou can run unit tests by supplying the module name of test files with -m flag. \nYou can usually get a module name by deleting .py extension and replacing / with ..\nFor example:", "!python -m models.features_test\n!python -m models.keras.model_test", "Step 4. Create the artifact store bucket\nNote: You probably already have completed this step in guided project 1, so you may\nmay skip it if this is the case.\nComponents in the TFX pipeline will generate outputs for each run as\nML Metadata Artifacts, and they need to be stored somewhere.\nYou can use any storage which the KFP cluster can access, and for this example we\nwill use Google Cloud Storage (GCS).\nLet us create this bucket if you haven't created it in guided project 1.\nIts name will be &lt;YOUR_PROJECT&gt;-kubeflowpipelines-default.", "GCS_BUCKET_NAME = GOOGLE_CLOUD_PROJECT + \"-kubeflowpipelines-default\"\nGCS_BUCKET_NAME\n\n!gsutil ls gs://{GCS_BUCKET_NAME} | grep {GCS_BUCKET_NAME} || gsutil mb gs://{GCS_BUCKET_NAME}", "Step 5. Change the dataset\nNow we need to have the TFX pipeline dataset point to the GCS bucket\nwhere our covertype data is located. For that\n\n\nopen kubeflow_dag_runner.py\n\n\nSet the variable DATA_PATH to gs://workshop-datasets/covertype/small which contains the covertype dataset\n\n\nStep 6. Change the pre-processing module\nAt this step we want to reuse the features.py and preprocessing.py we already have for the covertype dataset. To do that, we will just copy these files over the template ones:", "FEATURE_PY = \"../../pipeline/solutions/pipeline/features.py\"\nPREPROC_PY = \"../../pipeline/solutions/pipeline/preprocessing.py\"\n\n!cp {FEATURE_PY} ./models/features.py\n!cp {PREPROC_PY} ./models/preprocessing.py", "Now when you run the tests in the two cells below they should fail \nbecause they were written for the template taxi dataset. 
\nExercise: Modify the tests features_test.py and preprocessing_test.py as well as possibly the original modules until the tests pass.", "!python -m models.features_test\n\n!python -m models.preprocessing_test", "Step 6. Change the model\nSimilarly as for the pre-processing we want to reuse the model we develop for the covertype dataset,\nso we will simply copy it over the template model:", "MODEL_PY = \"../../tfx_pipelines/pipeline/solutions/pipeline/model.py\"\n\n!cp {MODEL_PY} ./models/keras/model.py", "Exercise: Move the constants defined in model.py into constants.py and import them from model.py to respect the template structure.\nStep 7. Run the covertype TFX pipeline\nLet's create a TFX pipeline using the tfx pipeline create command.\nNote: When creating a pipeline for KFP, we need a container image which will \nbe used to run our pipeline. And skaffold will build the image for us. Because skaffold\npulls base images from the docker hub, it will take 5~10 minutes when we build\nthe image for the first time, but it will take much less time from the second build.", "!tfx pipeline create \\\n--pipeline-path=kubeflow_dag_runner.py \\\n--endpoint={ENDPOINT} \\\n--build-target-image={CUSTOM_TFX_IMAGE}", "While creating a pipeline, Dockerfile and build.yaml will be generated to build a Docker image.\nDon't forget to add these files to the source control system (for example, git) along with other source files.\nA pipeline definition file for argo will be generated, too. \nThe name of this file is ${PIPELINE_NAME}.tar.gz. \nFor example, it will be guided_project_2.tar.gz if the name of your pipeline is guided_project_1. \nIt is recommended NOT to include this pipeline definition file into source control, because it will be generated from other Python files and will be updated whenever you update the pipeline. 
For your convenience, this file is already listed in .gitignore which is generated automatically.\nNow start an execution run with the newly created pipeline using the tfx run create command.\nNote: You may see the following error Error importing tfx_bsl_extension.coders. Please ignore it.", "!tfx run create --pipeline-name={PIPELINE_NAME} --endpoint={ENDPOINT}", "Or, you can also run the pipeline in the KFP Dashboard. The new execution run will be listed \nunder Experiments in the KFP Dashboard. \nClicking into the experiment will allow you to monitor progress and visualize \nthe artifacts created during the execution run.\nHowever, we recommend visiting the KFP Dashboard. You can access the KFP Dashboard from \nthe Cloud AI Platform Pipelines menu in Google Cloud Console. Once you visit the dashboard, \nyou will be able to find the pipeline, and access a wealth of information about the pipeline. \nFor example, you can find your runs under the Experiments menu, and when you open your\nexecution run under Experiments you can find all your artifacts from the pipeline under Artifacts menu.\nNote: If your pipeline run fails, you can see detailed logs for each TFX component in the Experiments tab in the KFP Dashboard.\nOne of the major sources of failure is permission related problems. \nPlease make sure your KFP cluster has permissions to access Google Cloud APIs.\nThis can be configured when you create a KFP cluster in GCP,\nor see Troubleshooting document in GCP.\nStep 8. Add components for data validation.\nIn this step, you will add components for data validation including StatisticsGen, SchemaGen, and ExampleValidator.\nIf you are interested in data validation, please see \nGet started with Tensorflow Data Validation.\nDouble-click to change directory to pipeline and double-click again to open pipeline.py. 
\nFind and uncomment the 3 lines which add StatisticsGen, SchemaGen, and ExampleValidator to the pipeline.\n(Tip: search for comments containing TODO(step 5):). Make sure to save pipeline.py after you edit it.\nYou now need to update the existing pipeline with modified pipeline definition. Use the tfx pipeline update command to update your pipeline, followed by the tfx run create command to create a new execution run of your updated pipeline.", "# Update the pipeline\n!tfx pipeline update \\\n--pipeline-path=kubeflow_dag_runner.py \\\n--endpoint={ENDPOINT}\n\n# You can run the pipeline the same way.\n!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}", "Check pipeline outputs\nVisit the KFP dashboard to find pipeline outputs in the page for your pipeline run. Click the Experiments tab on the left, and All runs in the Experiments page. You should be able to find the latest run under the name of your pipeline.\nSee link below to access the dashboard:", "print(\"https://\" + ENDPOINT)", "Step 9. Add components for training\nIn this step, you will add components for training and model validation including Transform, Trainer, ResolverNode, Evaluator, and Pusher.\nDouble-click to open pipeline.py. Find and uncomment the 5 lines which add Transform, Trainer, ResolverNode, Evaluator and Pusher to the pipeline. 
(Tip: search for TODO(step 6):)\nHints: \n\n\nIn pipeline.py make sure you turn the cache off by setting enable_cache=False for debugging purposes (otherwise components that have been previously run won't be re-executed).\n\n\nIn pipeline.py, you'll need to set infer_feature_shape=False, otherwise you'll run into sparse/dense tensor mismatch.\n\n\nIn pipeline.py, you'll need to set the label big_tipper from the default template to the right label for our dataset in \n\n\npython\ntfma.EvalConfig(\n model_specs=[tfma.ModelSpec(label_key='big_tipper')],\n\nIn configs.py, adapt the values of the variables TRAIN_NUM_STEPS and EVAL_NUM_STEPS to match what we had in the original covertype pipeline.\n\nAs you did before, you now need to update the existing pipeline with the modified pipeline definition. The instructions are the same as Step 5. Update the pipeline using tfx pipeline update, and create an execution run using tfx run create.\nVerify that the pipeline DAG has changed accordingly in the Kubeflow UI:", "!tfx pipeline update \\\n--pipeline-path=kubeflow_dag_runner.py \\\n--endpoint={ENDPOINT}\n\nprint(\"https://\" + ENDPOINT)\n\n!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}", "When this execution run finishes successfully, you have now created and run your first TFX pipeline in AI Platform Pipelines!\nStep 10. Try Cloud AI Platform Training and Prediction with KFP\nTFX interoperates with several managed GCP services, such as Cloud AI Platform for Training and Prediction. You can set your Trainer component to use Cloud AI Platform Training, a managed service for training ML models. Moreover, when your model is built and ready to be served, you can push your model to Cloud AI Platform Prediction for serving. 
In this step, we will set our Trainer and Pusher component to use Cloud AI Platform services.\nBefore editing files, you might first have to enable AI Platform Training & Prediction API.\nDouble-click pipeline to change directory, and double-click to open configs.py. Uncomment the definition of GOOGLE_CLOUD_REGION, GCP_AI_PLATFORM_TRAINING_ARGS and GCP_AI_PLATFORM_SERVING_ARGS. We will use our custom built container image to train a model in Cloud AI Platform Training, so we should set masterConfig.imageUri in GCP_AI_PLATFORM_TRAINING_ARGS to the same value as CUSTOM_TFX_IMAGE above.\nChange directory one level up, and double-click to open kubeflow_dag_runner.py. Uncomment ai_platform_training_args and ai_platform_serving_args.\nUpdate the pipeline and create an execution run as we did in step 5 and 6.", "!tfx pipeline update \\\n--pipeline-path=kubeflow_dag_runner.py \\\n--endpoint={ENDPOINT}\n\n!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}", "You can find your training jobs in Cloud AI Platform Jobs. If your pipeline completed successfully, you can find your model in Cloud AI Platform Models.\nLicense\nCopyright 2021 Google LLC\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttps://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tensorflow/tfx-addons
tfx_addons/feature_selection/example/Palmer_Penguins_example_colab.ipynb
apache-2.0
[ "<a href=\"https://colab.research.google.com/github/deutranium/tfx-addons/blob/main/tfx_addons/feature_selection/example/Palmer_Penguins_example_colab.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nTFX Feature Selection Component\nYou may find the source code for the same here\nThis example demonstrate the use of feature selection component. This project allows the user to select different algorithms for performing feature selection on datasets artifacts in TFX pipelines\nBase code taken from: https://github.com/tensorflow/tfx/blob/master/docs/tutorials/tfx/components_keras.ipynb\nSetup\nInstall TFX\nNote: In Google Colab, because of package updates, the first time you run this cell you must restart the runtime (Runtime > Restart runtime ...).", "!pip install -U tfx\n\n# getting the code directly from the repo\nx = !pwd\n\nif 'feature_selection' not in str(x):\n !git clone -b main https://github.com/tensorflow/tfx-addons.git\n %cd tfx-addons/tfx_addons/feature_selection", "Import packages\nImporting the necessary packages, including the standard TFX component classes", "import os\nimport pprint\nimport tempfile\nimport urllib\n\nimport absl\nimport tensorflow as tf\nimport tensorflow_model_analysis as tfma\ntf.get_logger().propagate = False\nimport importlib\npp = pprint.PrettyPrinter()\n\nfrom tfx import v1 as tfx\nimport importlib\nfrom tfx.components import CsvExampleGen\nfrom tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext\n\n%load_ext tfx.orchestration.experimental.interactive.notebook_extensions.skip\n\n# importing the feature selection component\nfrom component import FeatureSelection\n\n\n# This is the root directory for your TFX pip package installation.\n_tfx_root = tfx.__path__[0]", "Palmer Penguins example pipeline\nDownload Example Data\nWe download the example dataset for use in our TFX pipeline.\nThe dataset we're using is the 
Palmer Penguins dataset which is also used in other\nTFX examples.\nThere are four numeric features in this dataset:\n\nculmen_length_mm\nculmen_depth_mm\nflipper_length_mm\nbody_mass_g\n\nAll features were already normalized to have range [0,1]. We will build a pipeline\nthat selects 2 features to be eliminated from the dataset in order to improve the performance of the model in predicting the species of penguins.", "# getting the dataset\n_data_root = tempfile.mkdtemp(prefix='tfx-data')\nDATA_PATH = 'https://raw.githubusercontent.com/tensorflow/tfx/master/tfx/examples/penguin/data/labelled/penguins_processed.csv'\n \n_data_filepath = os.path.join(_data_root, \"data.csv\")\nurllib.request.urlretrieve(DATA_PATH, _data_filepath)", "Run TFX Components\nIn the cells that follow, we create TFX components one by one and generate examples using the ExampleGen component.", "context = InteractiveContext()\n\n#create and run exampleGen component\nexample_gen = CsvExampleGen(input_base=_data_root )\ncontext.run(example_gen)\n\n#create and run statisticsGen component\nstatistics_gen = tfx.components.StatisticsGen(\n examples=example_gen.outputs['examples'])\ncontext.run(statistics_gen)\n\n# using the feature selection component\n#feature selection component\n\nfeature_selector = FeatureSelection(orig_examples = example_gen.outputs['examples'],\n module_file='example.modules.penguins_module')\ncontext.run(feature_selector)\n\n# Display Selected Features\ncontext.show(feature_selector.outputs['feature_selection']._artifacts[0])", "As seen above, .selected_features contains the features selected after running the component with the specified parameters.\nTo get the info about updated Example artifact, one can view it as follows:", "context.show(feature_selector.outputs['updated_data']._artifacts[0])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
isb-cgc/examples-Python
notebooks/BCGSC microRNA expression.ipynb
apache-2.0
[ "microRNA expression (BCGSC RPKM)\nThe goal of this notebook is to introduce you to the microRNA expression BigQuery table.\nThis table contains all available TCGA Level-3 microRNA expression data produced by BCGSC's microRNA pipeline using the Illumina HiSeq platform, as of July 2016. The most recent archive (eg bcgsc.ca_THCA.IlluminaHiSeq_miRNASeq.Level_3.1.9.0) for each of the 32 tumor types was downloaded from the DCC, and data extracted from all files matching the pattern %.isoform.quantification.txt. The isoform-quantification values were then processed through a Perl script provided by BCGSC which produces normalized expression levels for mature microRNAs. Each of these mature microRNAs is identified by name (eg hsa-mir-21) and by MIMAT accession number (eg MIMAT0000076).\nIn order to work with BigQuery, you need to import the python bigquery module (gcp.bigquery) and you need to know the name(s) of the table(s) you are going to be working with:", "import gcp.bigquery as bq\nmiRNA_BQtable = bq.Table('isb-cgc:tcga_201607_beta.miRNA_Expression')", "From now on, we will refer to this table using this variable ($miRNA_BQtable), but we could just as well explicitly give the table name each time.\nLet's start by taking a look at the table schema:", "%bigquery schema --table $miRNA_BQtable", "Now let's count up the number of unique patients, samples and aliquots mentioned in this table. We will do this by defining a very simple parameterized query. 
(Note that when using a variable for the table name in the FROM clause, you should not also use the square brackets that you usually would if you were specifying the table name as a string.)", "%%sql --module count_unique\n\nDEFINE QUERY q1\nSELECT COUNT (DISTINCT $f, 25000) AS n\nFROM $t\n\nfieldList = ['ParticipantBarcode', 'SampleBarcode', 'AliquotBarcode']\nfor aField in fieldList:\n field = miRNA_BQtable.schema[aField]\n rdf = bq.Query(count_unique.q1,t=miRNA_BQtable,f=field).results().to_dataframe()\n print \" There are %6d unique values in the field %s. \" % ( rdf.iloc[0]['n'], aField)\n\nfieldList = ['mirna_id', 'mirna_accession']\nfor aField in fieldList:\n field = miRNA_BQtable.schema[aField]\n rdf = bq.Query(count_unique.q1,t=miRNA_BQtable,f=field).results().to_dataframe()\n print \" There are %6d unique values in the field %s. \" % ( rdf.iloc[0]['n'], aField)", "These counts show that the mirna_id field is not a unique identifier and should be used in combination with the MIMAT accession number.\nAnother thing to note about this table is that these expression values are obtained from two different platforms -- approximately 15% of the data is from the Illumina GA platform, and 85% from the Illumina HiSeq:", "%%sql\n\nSELECT\n Platform,\n COUNT(*) AS n\nFROM\n $miRNA_BQtable\nGROUP BY\n Platform\nORDER BY\n n DESC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
antoniomezzacapo/qiskit-tutorial
to_sort/circuit_drawing.ipynb
apache-2.0
[ "<img src=\"../../images/qiskit-heading.gif\" alt=\"Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook\" width=\"500 px\" align=\"left\">\nmatplotlib_circuit_drawer: matplotlib-based quantum circuit drawer\nQiskit-terra ver 0.5.5 intoduces matplotlib_circuit_drawer, which is a drop-in replacement of latex_circuit_drawer. If LaTeX is installed, circuit_drawer draws circuits by latex_circuit_drawer. Otherwise, it draws them by matplotlib_circuit_drawer. We explain the details of matplotlib_circuit_drawer in this notebook.\nContributors\nTakashi Imamichi, Naoki Kanazawa\nSetup\nYou first import Qiskit and matplotlib_drawer. We recommend the inline mode of matplotlib to draw circuits.", "from math import pi\nfrom qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister\nfrom qiskit.tools.visualization import matplotlib_circuit_drawer as drawer, qx_color_scheme\n\n# We recommend the following options for Jupter notebook\n%matplotlib inline", "Create a quantum circuit\nYou can design a quantum circuit with quantum registers, quantum gates and classical registers.", "# Create a Quantum Register called \"q\" with 3 qubits\nqr = QuantumRegister(3, 'q')\n\n# Create a Classical Register called \"c\" with 3 bits\ncr = ClassicalRegister(3, 'c')\n\n# Create a Quantum Circuit called involving \"qr\" and \"cr\"\ncircuit = QuantumCircuit(qr, cr)\n\ncircuit.x(qr[0]).c_if(cr, 3)\ncircuit.z(qr[0])\ncircuit.u2(pi/2, 2*pi/3, qr[1])\n\ncircuit.cu1(pi, qr[0], qr[1])\n\n# Barrier to seperator the input from the circuit\ncircuit.barrier(qr[0])\ncircuit.barrier(qr[1])\ncircuit.barrier(qr[2])\n\n# Toffoli gate from qubit 0,1 to qubit 2\ncircuit.ccx(qr[0], qr[1], qr[2])\n\n# CNOT (Controlled-NOT) gate from qubit 0 to qubit 1\ncircuit.cx(qr[0], qr[1])\n\ncircuit.swap(qr[0], qr[2])\n\n# measure gate from qr to cr\ncircuit.measure(qr, cr)", "Extract OpenQASM\nYou can obtain a OpenQASM representation of your code.", "QASM_source = 
circuit.qasm()\nprint(QASM_source)", "Visualize Circuit\nYou can visualize your circuit using matplotlib_drawer, which plots the unrolled circuit in the specified basis. You can adjust the size by specifying scale.", "drawer(circuit)\n\ndrawer(circuit, basis='u1,u2,u3,id,cx', scale=1.0)", "Use Stylesheet\nYou can configure your plot appearance by using a style sheet written in dictionary format.\nShow Barriers\nYou can visualize barriers by the plotbarrier key.", "my_style = {'plotbarrier': True}\ndrawer(circuit, style=my_style)", "Bundle Classical Registers\nYou can combine classical registers into a single line by the cregbundle key.", "my_style = {'cregbundle': True}\ndrawer(circuit, style=my_style)", "Show Index\nThe index of each operation can be shown by the showindex key.", "my_style = {'showindex': True}\ndrawer(circuit, style=my_style)", "Reduce Gaps Between Gates\nYou can reduce redundant gaps between gates by the compress key.", "my_style = {'compress': True}\ndrawer(circuit, style=my_style)", "Fold a Long Circuit\nWhen the horizontal size of screen is limited, you can fold a circuit into multiple lines.\nThe maximum number of gates in a single line is specified by the fold key (default: 20). 
If the value of the key is less than 2, no folding option will be applied.", "my_style = {'fold': 6}\ndrawer(circuit, style=my_style)", "Show Rotation Parameters in the unit of $\pi$\nYou can show rotation parameters in the unit of $\pi$ by the usepiformat key.", "my_style = {'usepiformat': True}\ndrawer(circuit, style=my_style)", "Use Emoji and LaTeX Symbols as Gate Symbols\nYou can use unicode characters and latex symbols supported by matplotlib for gates.", "qr = QuantumRegister(1, 'q')\ncircuit_xyz = QuantumCircuit(qr)\ncircuit_xyz.x(qr[0])\ncircuit_xyz.y(qr[0])\ncircuit_xyz.z(qr[0])\ndrawer(circuit_xyz)\n\nmy_style = {'displaytext': {'x': '😺', 'y': '\Sigma', 'z': '✈'}}\ndrawer(circuit_xyz, style=my_style)", "Different style of gates: cz, cu1\nCZ and CU1 gates can be visualized in different formats by disabling the latexdrawerstyle key.", "qr = QuantumRegister(2, 'q')\ncircuit_cucz = QuantumCircuit(qr)\ncircuit_cucz.cz(qr[0], qr[1])\ncircuit_cucz.cu1(pi, qr[0], qr[1])\ndrawer(circuit_cucz)\n\nmy_style = {'latexdrawerstyle': False}\ndrawer(circuit_cucz, style=my_style)", "All Options\nYou can use your own style sheet to suit the result to your GUI. 
By combining above options, composer of IBM Q Experience can be reproduced.", "qr = QuantumRegister(3, 'q')\ncr = ClassicalRegister(3, 'c')\ncircuit_all = QuantumCircuit(qr, cr)\n\ncircuit_all.x(qr[0])\ncircuit_all.y(qr[0])\ncircuit_all.z(qr[0])\ncircuit_all.barrier(qr[0])\ncircuit_all.barrier(qr[1])\ncircuit_all.barrier(qr[2])\ncircuit_all.h(qr[0])\ncircuit_all.s(qr[0])\ncircuit_all.sdg(qr[0])\ncircuit_all.t(qr[0])\ncircuit_all.tdg(qr[0])\ncircuit_all.iden(qr[0])\ncircuit_all.reset(qr[0])\ncircuit_all.rx(pi, qr[0])\ncircuit_all.ry(pi, qr[0])\ncircuit_all.rz(pi, qr[0])\ncircuit_all.u0(pi, qr[0])\ncircuit_all.u1(pi, qr[0])\ncircuit_all.u2(pi, pi, qr[0])\ncircuit_all.u3(pi, pi, pi, qr[0])\ncircuit_all.swap(qr[0], qr[1])\ncircuit_all.cx(qr[0], qr[1])\ncircuit_all.cy(qr[0], qr[1])\ncircuit_all.cz(qr[0], qr[1])\ncircuit_all.ch(qr[0], qr[1])\ncircuit_all.cu1(pi, qr[0], qr[1])\ncircuit_all.cu3(pi, pi, pi, qr[0], qr[1])\ncircuit_all.crz(pi, qr[0], qr[1])\ncircuit_all.ccx(qr[0], qr[1], qr[2])\ncircuit_all.cswap(qr[0], qr[1], qr[2])\n\ncircuit_all.measure(qr, cr)\n\ndrawer(circuit_all)", "You can configure the color scheme. Composer style sheet is prepared as qx_color_scheme.", "cmp_style = qx_color_scheme()\ncmp_style\n\ndrawer(circuit_all, style=cmp_style)", "Save the circuit image to a file\nThe following line saves the image to 'circuit.pdf' by specifying a parameter filename.", "cmp_style.update({\n 'usepiformat': True,\n 'showindex': True,\n 'cregbundle': True,\n 'compress': True,\n 'fold': 17\n})\ndrawer(circuit_all, filename='circuit.pdf', style=cmp_style)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
galtay/tensorflow_examples
01_linear_regression.ipynb
gpl-3.0
[ "Imports", "import pandas as pd\nimport numpy as np\nimport tensorflow as tf\nfrom tensorflow.contrib import keras\nfrom sklearn import datasets\nfrom sklearn import linear_model\nimport statsmodels.api as sm\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport seaborn as sns", "Simple Mock Data\nLets create a simple mock dataset with one independent variable and one dependent variable with a little noise.", "Nsamp = 50\nNfeatures = 1\nxarr = np.linspace(-0.5, 0.5, Nsamp)\nnp.random.seed(83749)\nbeta_0 = -2.0\nbeta_1 = 4.3\nyarr = (beta_0 + beta_1 * xarr) + (np.random.normal(size=Nsamp) * 0.5)\n\nmdl = linear_model.LinearRegression(fit_intercept=False)\nmdl = mdl.fit(np.c_[np.ones(Nsamp), xarr], yarr)\nmdl.coef_\n\nfig, ax = plt.subplots(figsize=(5,5))\nplt.scatter(xarr, yarr, s=10, color='blue')\nplt.plot(xarr, mdl.coef_[0] + mdl.coef_[1] * xarr, color='red')\n\nph_x = tf.placeholder(tf.float32, [None, Nfeatures], name='features')\nph_y = tf.placeholder(tf.float32, [None, 1], name='output')\nph_x, ph_y\n\n# Set model weights\nv_W = tf.Variable(tf.random_normal([Nfeatures, 1]), name='weights')\nv_b = tf.Variable(tf.zeros([1]), name='bias')\nv_z = tf.matmul(ph_x, v_W) + v_b\ncost_1 = tf.squared_difference(v_z, ph_y)\ncost_2 = tf.reduce_mean(cost_1)\n\nlearning_rate=0.1\ntrain_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost_2)\n# Construct model and encapsulating all ops into scopes, making\n# Tensorboard's Graph visualization more convenient\n#with tf.name_scope('Model'):\n# # Model\n# pred = tf.matmul(x, W) + b # basic linear regression\n#with tf.name_scope('Loss'):\n# # Minimize error (mean squared error)\n# cost = tf.reduce_mean(-tf.reduce_sum(y - pred)*tf.log(pred), reduction_indices=1))\n#with tf.name_scope('SGD'):\n# # Gradient Descent\n# optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)\n#with tf.name_scope('Accuracy'):\n# # Accuracy\n# acc = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))\n# acc = 
tf.reduce_mean(tf.cast(acc, tf.float32))\n\n\n\ninit = tf.global_variables_initializer()\n\nmerged = tf.summary.merge_all()\n\n\n\n# Launch the graph\nfeed_dict = {ph_x: xarr.reshape(Nsamp, 1), ph_y: yarr.reshape(Nsamp,1)}\nwith tf.Session() as sess:\n train_writer = tf.summary.FileWriter('/tmp/tensorflow/logs', sess.graph)\n \n sess.run(init)\n z_out = sess.run(v_z, feed_dict=feed_dict)\n cost_1_out = sess.run(cost_1, feed_dict=feed_dict)\n cost_2_out = sess.run(cost_2, feed_dict=feed_dict)\n for i in range(300):\n train_step_out = sess.run(train_step, feed_dict=feed_dict)\n W_out = sess.run(v_W, feed_dict=feed_dict)\n b_out = sess.run(v_b, feed_dict=feed_dict)\n\nprint(W_out)\nprint(b_out)\n", "Boston Housing Dataset\n\n\nfeautres: raw features variables in DataFrame\n\n\ntarget: raw target variable in DataFrame", "boston = datasets.load_boston()\nprint(boston['DESCR'])\nfeatures = pd.DataFrame(data=boston['data'], columns=boston['feature_names'])\ntarget = pd.DataFrame(data=boston['target'], columns=['MEDV'])\n\nfeatures.head(5)\n\ntarget.head(5)\n\nhh = features.hist(figsize=(14,18))", "Center and Normalize", "from sklearn.preprocessing import StandardScaler\nscalerX = StandardScaler()\nscalerX.fit(features)\ndfXn = pd.DataFrame(data=scalerX.transform(features), columns=features.columns)\nscalerY = StandardScaler()\nscalerY.fit(target)\ndfYn = pd.DataFrame(data=scalerY.transform(target), columns=target.columns)\n\ndfXn.head(5)\n\ndfYn.head(5)", "Statsmodels Linear Regression", "dfXn1 = dfXn.copy()\ndfXn1.insert(loc=0, column='intercept', value=1)\nresults = sm.OLS(dfYn, dfXn1).fit()\nprint(results.summary())\n\ndfYn.max()\ntarget.max()\n\nplt.scatter(dfYn.values, results.fittedvalues.values)\n\nfrom sklearn import linear_model\nmdl = linear_model.LinearRegression(fit_intercept=False)\nmdl = mdl.fit(dfXn1.values, dfYn.values)\n\nprint('n_params (statsmodels): ', len(results.params))\nprint('n params (sklearn linear): ', 
len(mdl.coef_.flatten()))\n\nprint(results.params)\nprint()\nprint(mdl.coef_)\n\nnp.all(np.abs(mdl.coef_ - results.params.values) < 1.0e-10)\n\nplt.scatter(dfYn.values, mdl.predict(dfXn1.values).flatten())", "Linear Regression, Keras", "from keras.models import Sequential\nfrom keras.layers import Dense, InputLayer\nfrom keras.optimizers import SGD, Adam, RMSprop\nfrom keras.losses import mean_squared_error\n\nnfeatures = features.shape[1]\nmodel = Sequential()\nmodel.add(InputLayer(input_shape=(nfeatures,), name='input'))\nmodel.add(Dense(1, kernel_initializer='uniform', activation='linear', name='dense_1'))\nmodel.summary()\nweights_initial = model.get_weights()\n\nprint('weights_initial - input nodes: \\n', weights_initial[0])\nprint('weights_initial - bias node: ', weights_initial[1])\n\nmodel.compile(optimizer=RMSprop(lr=0.001), loss='mean_squared_error')\n\ndfYn.shape\n\nmodel.set_weights(weights_initial)\nhistory = model.fit(dfXn.values, dfYn.values, epochs=5000, batch_size=dfYn.shape[0], verbose=0)\n\nplt.plot(history.history['loss'])\n\nmodel.get_weights()\n\nmdl.coef_\n\nplt.scatter(model.get_weights()[0].flatten(), mdl.coef_.flatten()[1:])\n\nfig, ax = plt.subplots(figsize=(10,10))\nplt.scatter(dfYn.values, mdl.predict(dfXn1.values).flatten(), color='red', alpha=0.6, marker='o')\nplt.scatter(dfYn.values, model.predict(dfXn.values), color='blue', alpha=0.6, marker='+')\n\n# tf Graph Input\n\n# input data \nn_samples, n_features = features.shape\nx = tf.placeholder(tf.float32, [None, n_features], name='InputData')\n# output data \ny = tf.placeholder(tf.float32, [None, 1], name='TargetData')\n\n# Set model weights\nW = tf.Variable(tf.random_normal([n_features, 1]), name='Weights')\nb = tf.Variable(tf.zeros([1]), name='Bias')\nz = tf.matmul(x,W) + b\ncost_1 = tf.squared_difference(z,y)\ncost_2 = tf.reduce_mean(cost_1)\n\nlearning_rate=0.1\ntrain_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost_2)\n# Construct model and encapsulating all 
ops into scopes, making\n# Tensorboard's Graph visualization more convenient\n#with tf.name_scope('Model'):\n# # Model\n# pred = tf.matmul(x, W) + b # basic linear regression\n#with tf.name_scope('Loss'):\n# # Minimize error (mean squared error)\n# cost = tf.reduce_mean(-tf.reduce_sum(y - pred)*tf.log(pred), reduction_indices=1))\n#with tf.name_scope('SGD'):\n# # Gradient Descent\n# optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)\n#with tf.name_scope('Accuracy'):\n# # Accuracy\n# acc = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))\n# acc = tf.reduce_mean(tf.cast(acc, tf.float32))\n\n\n\n\ntf.Session().run(y, feed_dict={x: features.values, y: target.values}).shape\n\ninit = tf.global_variables_initializer()\n# Launch the graph\nwith tf.Session() as sess:\n sess.run(init)\n z_out = sess.run(z, feed_dict={x: features.values, y:target.values})\n cost_1_out = sess.run(cost_1, feed_dict={x: features.values, y:target.values})\n cost_2_out = sess.run(cost_2, feed_dict={x: features.values, y:target.values})\n for i in range(100):\n train_step_out = sess.run(train_step, feed_dict={x: features.values, y:target.values})\nprint(cost_1_out[0:5,:])\nprint(cost_2_out)\nprint(train_step_out)\n\nsess = tf.Session()\n\nsess.run(c)\n\nx\n\ny\n\nW\n\nb" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.20/_downloads/2212671cb1d04d466a35eb15470863da/plot_forward_sensitivity_maps.ipynb
bsd-3-clause
[ "%matplotlib inline", "Display sensitivity maps for EEG and MEG sensors\nSensitivity maps can be produced from forward operators that\nindicate how well different sensor types will be able to detect\nneural currents from different regions of the brain.\nTo get started with forward modeling see tut-forward.", "# Author: Eric Larson <larson.eric.d@gmail.com>\n#\n# License: BSD (3-clause)\n\nimport mne\nfrom mne.datasets import sample\nimport matplotlib.pyplot as plt\n\nprint(__doc__)\n\ndata_path = sample.data_path()\n\nraw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'\nfwd_fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'\n\nsubjects_dir = data_path + '/subjects'\n\n# Read the forward solutions with surface orientation\nfwd = mne.read_forward_solution(fwd_fname)\nmne.convert_forward_solution(fwd, surf_ori=True, copy=False)\nleadfield = fwd['sol']['data']\nprint(\"Leadfield size : %d x %d\" % leadfield.shape)", "Compute sensitivity maps", "grad_map = mne.sensitivity_map(fwd, ch_type='grad', mode='fixed')\nmag_map = mne.sensitivity_map(fwd, ch_type='mag', mode='fixed')\neeg_map = mne.sensitivity_map(fwd, ch_type='eeg', mode='fixed')", "Show gain matrix a.k.a. 
leadfield matrix with sensitivity map", "picks_meg = mne.pick_types(fwd['info'], meg=True, eeg=False)\npicks_eeg = mne.pick_types(fwd['info'], meg=False, eeg=True)\n\nfig, axes = plt.subplots(2, 1, figsize=(10, 8), sharex=True)\nfig.suptitle('Lead field matrix (500 dipoles only)', fontsize=14)\nfor ax, picks, ch_type in zip(axes, [picks_meg, picks_eeg], ['meg', 'eeg']):\n im = ax.imshow(leadfield[picks, :500], origin='lower', aspect='auto',\n cmap='RdBu_r')\n ax.set_title(ch_type.upper())\n ax.set_xlabel('sources')\n ax.set_ylabel('sensors')\n fig.colorbar(im, ax=ax, cmap='RdBu_r')\n\nfig_2, ax = plt.subplots()\nax.hist([grad_map.data.ravel(), mag_map.data.ravel(), eeg_map.data.ravel()],\n bins=20, label=['Gradiometers', 'Magnetometers', 'EEG'],\n color=['c', 'b', 'k'])\nfig_2.legend()\nax.set(title='Normal orientation sensitivity',\n xlabel='sensitivity', ylabel='count')\n\ngrad_map.plot(time_label='Gradiometer sensitivity', subjects_dir=subjects_dir,\n clim=dict(lims=[0, 50, 100]))" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
maxvogel/NetworKit-mirror2
Doc/Notebooks/NetworKit_UserGuide.ipynb
mit
[ "NetworKit User Guide\nAbout NetworKit\nNetworKit is an open-source toolkit for high-performance\nnetwork analysis. Its aim is to provide tools for the analysis of large\nnetworks in the size range from thousands to billions of edges. For this\npurpose, it implements efficient graph algorithms, many of them parallel to\nutilize multicore architectures. These are meant to compute standard measures\nof network analysis, such as degree sequences, clustering coefficients and\ncentrality. In this respect, NetworKit is comparable\nto packages such as NetworkX, albeit with a focus on parallelism \nand scalability. NetworKit is also a testbed for algorithm engineering and\ncontains a few novel algorithms from recently published research, especially\nin the area of community detection.\nIntroduction\nThis notebook provides an interactive introduction to the features of NetworKit, consisting of text and executable code. We assume that you have read the Readme and successfully built the core library and the Python module. Code cells can be run one by one (e.g. by selecting the cell and pressing shift+enter), or all at once (via the Cell->Run All command). Try running all cells now to verify that NetworKit has been properly built and installed.\nPreparation\nThis notebook creates some plots. To show them in the notebook, matplotlib must be imported and we need to activate matplotlib's inline mode:", "%matplotlib inline\nimport matplotlib.pyplot as plt", "NetworKit is a hybrid built from C++ and Python code: Its core functionality is implemented in C++ for performance reasons, and then wrapped for Python using the Cython toolchain. This allows us to expose high-performance parallel code as a normal Python module. On the surface, NetworKit is just that and can be imported accordingly:", "from networkit import * ", "IPython lets us use familiar shell commands in a Python interpreter. 
Use one of them now to change into the directory of your NetworKit download:", "cd ../../", "Reading and Writing Graphs\nLet us start by reading a network from a file on disk: PGPgiantcompo.graph. In the course of this tutorial, we are going to work on the PGPgiantcompo network, a social network/web of trust in which nodes are PGP keys and an edge represents a signature from one key on another. It is distributed with NetworKit as a good starting point.\nThere is a convenient function in the top namespace which tries to guess the input format and select the appropriate reader:", "G = readGraph(\"input/PGPgiantcompo.graph\", Format.METIS)", "There is a large variety of formats for storing graph data in files. For NetworKit, the currently best supported format is the METIS adjacency format. Various example graphs in this format can be found here. The readGraph function tries to be an intelligent wrapper for various reader classes. In this example, it uses the METISGraphReader which is located in the graphio submodule, alongside other readers. These classes can also be used explicitly:", "graphio.METISGraphReader().read(\"input/PGPgiantcompo.graph\")\n# is the same as: readGraph(\"input/PGPgiantcompo.graph\", Format.METIS)", "It is also possible to specify the format for readGraph() and writeGraph(). Supported formats can be found via [graphio.]Format. However, graph formats are most likely only supported as far as the NetworKit::Graph can hold and use the data. Please note that not all graph formats are supported for reading and writing.\nThus, it is possible to use NetworKit to convert graphs between formats. 
Let's say I need the previously read PGP graph in the Graphviz format:", "graphio.writeGraph(G,\"output/PGPgiantcompo.graphviz\", Format.GraphViz)", "NetworKit also provides a function to convert graphs directly:", "graphio.convertGraph(Format.LFR, Format.GML, \"input/example.edgelist\", \"output/example.gml\")", "The Graph Object\nGraph is the central class of NetworKit. An object of this type represents an undirected, optionally weighted network. Let us inspect several of the methods which the class provides.", "n = G.numberOfNodes()\nm = G.numberOfEdges()\nprint(n, m)\n\nG.toString()", "Nodes are simply integer indices, and edges are pairs of such indices.", "V = G.nodes()\nprint(V[:10])\nE = G.edges()\nprint(E[:10])\n\nG.hasEdge(42,11)", "This network is unweighted, meaning that each edge has the default weight of 1.", "G.weight(42,11)", "Connected Components\nA connected component is a set of nodes in which each pair of nodes is connected by a path. The following function determines the connected components of a graph:", "cc = components.ConnectedComponents(G)\ncc.run()\nprint(\"number of components \", cc.numberOfComponents())\nv = 0\nprint(\"component of node \", v , \": \" , cc.componentOfNode(0))\n#print(\"map of component sizes: \", cc.getComponentSizes())", "Degree Distribution\nNode degree, the number of edges connected to a node, is one of the most studied properties of networks. Types of networks are often characterized in terms of their distribution of node degrees. 
We obtain and visualize the degree distribution of our example network as follows.", "dd = sorted(centrality.DegreeCentrality(G).run().scores(), reverse=True)\nplt.xscale(\"log\")\nplt.xlabel(\"degree\")\nplt.yscale(\"log\")\nplt.ylabel(\"number of nodes\")\nplt.plot(dd)", "Search and Shortest Paths\nA simple breadth-first search from a starting node can be performed as follows:", "v = 0\nbfs = graph.BFS(G, v)\nbfs.run()\n\nbfsdist = bfs.getDistances()", "The return value is a list of distances from v to other nodes - indexed by node id. For example, we can now calculate the mean distance from the starting node to all other nodes:", "sum(bfsdist) / len(bfsdist)", "Similarly, Dijkstra's algorithm yields shortest path distances from a starting node to all other nodes in a weighted graph. Because PGPgiantcompo is an unweighted graph, the result is the same here:", "dijkstra = graph.Dijkstra(G, v)\ndijkstra.run()\nspdist = dijkstra.getDistances()\nsum(spdist) / len(spdist)", "Core Decomposition\nA $k$-core decomposition of a graph is performed by successively peeling away nodes with degree less than $k$. The remaining nodes form the $k$-core of the graph.", "K = readGraph(\"input/karate.graph\",Format.METIS)\ncoreDec = centrality.CoreDecomposition(K)\ncoreDec.run()", "Core decomposition assigns a core number to each node, being the maximum $k$ for which a node is contained in the $k$-core. For this small graph, core numbers have the following range:", "set(coreDec.scores())\n\nviztasks.drawGraph(K, nodeSizes=[(k**2)*20 for k in coreDec.scores()])", "Community Detection\nThis section demonstrates the community detection capabilities of NetworKit. Community detection is concerned with identifying groups of nodes which are significantly more densely connected to each other than to the rest of the network.\nCode for community detection is contained in the community module. 
The module provides a top-level function to quickly perform community detection with a suitable algorithm and print some stats about the result.", "community.detectCommunities(G)", "The function prints some statistics and returns the partition object representing the communities in the network as an assignment of node to community label. Let's capture the result of the last function call.", "communities = community.detectCommunities(G)", "Modularity is the primary measure for the quality of a community detection solution. The value is in the range [-0.5,1] and usually depends both on the performance of the algorithm and the presence of distinctive community structures in the network.", "community.Modularity().getQuality(communities, G)", "The Partition Data Structure\nThe result of community detection is a partition of the node set into disjoint subsets. It is represented by the Partition data structure, which provides several methods for inspecting and manipulating a partition of a set of elements (which need not be the nodes of a graph).", "type(communities)\n\nprint(\"{0} elements assigned to {1} subsets\".format(communities.numberOfElements(), communities.numberOfSubsets()))\n\n\nprint(\"the biggest subset has size {0}\".format(max(communities.subsetSizes())))", "The contents of a partition object can be written to file in a simple format, in which each line i contains the subset id of node i.", "community.writeCommunities(communities, \"output/communties.partition\")", "Choice of Algorithm\nThe community detection function used a good default choice for an algorithm: PLM, our parallel implementation of the well-known Louvain method. It yields a high-quality solution at reasonably fast running times. Let us now apply a variation of this algorithm.", "community.detectCommunities(G, algo=community.PLP(G))", "We have switched on refinement, and we can see how modularity is slightly improved. 
For a small network like this, this takes only marginally longer.\nVisualizing the Result\nWe can easily plot the distribution of community sizes as follows. While the distribution is skewed, it does not seem to fit a power-law, as shown by a log-log plot.", "sizes = communities.subsetSizes()\nsizes.sort(reverse=True)\nax1 = plt.subplot(2,1,1)\nax1.set_ylabel(\"size\")\nax1.plot(sizes)\n\nax2 = plt.subplot(2,1,2)\nax2.set_xscale(\"log\")\nax2.set_yscale(\"log\")\nax2.set_ylabel(\"size\")\nax2.plot(sizes)", "Subgraph\nNetworKit supports the creation of Subgraphs depending on an original graph and a set of nodes. This might be useful in case you want to analyze certain communities of a graph. Let's say that community 2 of the above result is of further interest, so we want a new graph that consists of nodes and intra cluster edges of community 2.", "from networkit.graph import Subgraph\nc2 = communities.getMembers(2)\nsg = Subgraph()\ng2 = sg.fromNodes(G,c2)\n\ncommunities.subsetSizeMap()[2]\n\ng2.numberOfNodes()", "As we can see, the number of nodes in our subgraph matches the number of nodes of community 2. The subgraph can be used like any other graph object, e.g. further community analysis:", "communities2 = community.detectCommunities(g2)", "Centrality\nCentrality measures the relative importance of a node within a graph. Code for centrality analysis is grouped into the centrality module.\nBetweenness Centrality\nWe implement Brandes' algorithm for the exact calculation of betweenness centrality. While the algorithm is efficient, it still needs to calculate shortest paths between all pairs of nodes, so its scalability is limited. 
We demonstrate it here on the small Karate club graph.", "K = readGraph(\"input/karate.graph\", Format.METIS)\n\nbc = centrality.Betweenness(K)\nbc.run()", "We have now calculated centrality values for the given graph, and can retrieve them either as an ordered ranking of nodes or as a list of values indexed by node id.", "bc.ranking()[:10] # the 10 most central nodes", "Approximation of Betweenness\nSince exact calculation of betweenness scores is often out of reach, NetworKit provides an approximation algorithm based on path sampling. Here we estimate betweenness centrality in PGPgiantcompo, with a probabilistic guarantee that the error is no larger than an additive constant $\\epsilon$.", "abc = centrality.ApproxBetweenness(G, epsilon=0.1)\nabc.run()", "The 10 most central nodes according to betweenness are then", "abc.ranking()[:10]", "Eigenvector Centrality and PageRank\nEigenvector centrality and its variant PageRank assign relative importance to nodes according to their connections, incorporating the idea that edges to high-scoring nodes contribute more. PageRank is a version of eigenvector centrality which introduces a damping factor, modeling a random web surfer which at some point stops following links and jumps to a random page. In PageRank theory, centrality is understood as the probability of such a web surfer to arrive on a certain page. Our implementation of both measures is based on parallel power iteration, a relatively simple eigensolver.", "# Eigenvector centrality\nec = centrality.EigenvectorCentrality(K)\nec.run()\nec.ranking()[:10] # the 10 most central nodes\n\n# PageRank\npr = centrality.PageRank(K, 1e-6)\npr.run()\npr.ranking()[:10] # the 10 most central nodes", "NetworkX Compatibility\nNetworkX is a popular Python package for network analysis. 
To let both packages complement each other, and to enable the adaptation of existing NetworkX-based code, we support the conversion of the respective graph data structures.", "import networkx as nx\nnxG = nxadapter.nk2nx(G) # convert from NetworKit.Graph to networkx.Graph\nprint(nx.degree_assortativity_coefficient(nxG))", "Generating Graphs\nAn important subfield of network science is the design and analysis of generative models. A variety of generative models have been proposed with the aim of reproducing one or several of the properties we find in real-world complex networks. NetworKit includes generator algorithms for several of them.\nThe Erdös-Renyi model is the most basic random graph model, in which each edge exists with the same uniform probability. NetworKit provides an efficient generator:", "ERG = generators.ErdosRenyiGenerator(1000, 0.1).generate()", "A simple way to generate a random graph with community structure is to use the ClusteredRandomGraphGenerator. It uses a simple variant of the Erdös-Renyi model: The node set is partitioned into a given number of subsets. Nodes within the same subset have a higher edge probability.", "CRG = generators.ClusteredRandomGraphGenerator(200, 4, 0.2, 0.002).generate()\n\ncommunity.detectCommunities(CRG)", "The Chung-Lu model (also called configuration model) generates a random graph which corresponds to a given degree sequence, i.e. has the same expected degree sequence. It can therefore be used to replicate some of the properties of a given real network, while others are not retained, such as high clustering and the specific community structure.", "degreeSequence = [G.degree(v) for v in G.nodes()]\nclgen = generators.ChungLuGenerator(degreeSequence)\n", "Settings\nIn this section we discuss global settings.\nLogging\nWhen using NetworKit from the command line, the verbosity of console output can be controlled via several loglevels, from least to most verbose: FATAL, ERROR, WARN, INFO, DEBUG and TRACE. 
(Currently, logging is only available on the console and not visible in the IPython Notebook).", "getLogLevel() # the default loglevel\n\nsetLogLevel(\"TRACE\") # set to most verbose mode\nsetLogLevel(\"ERROR\") # set back to default", "Please note that the default build setting is optimized (--optimize=Opt) and thus every LOG statement below INFO is removed. If you need DEBUG and TRACE statements, please build the extension module by appending --optimize=Dbg when calling the setup script.\nParallelism\nThe degree of parallelism can be controlled and monitored in the following way:", "setNumberOfThreads(4) # set the maximum number of available threads\n\ngetMaxNumberOfThreads() # see maximum number of available threads\n\ngetCurrentNumberOfThreads() # the number of threads currently executing", "Support\nNetworKit is an open-source project that improves with suggestions and contributions from its users. The email list networkit@ira.uni-karlsruhe.de is the place for general discussion and questions. Also feel free to contact the authors with questions on how NetworKit can be applied to your research.\n-- Christian L. Staudt and Henning Meyerhenke" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
OpenWeavers/openanalysis
doc/Libraries/3 - Introduction to Graph Analysis with networkx.ipynb
gpl-3.0
[ "Introduction to Graph Analysis with networkx\nGraph theory deals with various properties and algorithms concerned with Graphs. Although it is very easy to implement a Graph ADT in Python, we will use networkx library for Graph Analysis as it has inbuilt support for visualizing graphs. In future versions of networkx, graph visualization might be removed. When this happens, it is required to modify some parts of this chapter\nStandard import statement\nThroughout this tutorial, we assume that you have imported networkx as follows", "import networkx as nx", "Creating Graphs\nCreate an empty graph with no nodes and no edges.", "G = nx.Graph()", "By definition, a Graph is a collection of nodes (vertices) along with identified pairs of nodes (called edges, links, etc). In NetworkX, nodes can be any hashable object e.g. a text string, an image, an XML object, another Graph, a customized node object, etc. (Note: Python's None object should not be used as a node as it determines whether optional function arguments have been assigned in many functions.)\nNodes\nThe graph G can be grown in several ways. NetworkX includes many graph generator functions and facilities to read and write graphs in many formats. To get started, we'll look at simple manipulations. You can add one node at a time,", "G.add_node(1)", "add a list of nodes,", "G.add_nodes_from([2,3])", "Edges\nG can also be grown by adding one edge at a time,", "G.add_edge(1,2)\ne=(2,3)\nG.add_edge(*e) # Unpacking tuple", "by adding a list of edges,", "G.add_edges_from([(1,2),(1,3)])", "we add new nodes/edges and NetworkX quietly ignores any that are already present. \nAt this stage the graph G consists of 3 nodes and 3 edges, as can be seen by:", "G.number_of_nodes()\n\nG.number_of_edges()", "Accessing edges\nIn addition to the methods Graph.nodes, Graph.edges, and Graph.neighbors, iterator versions (e.g. 
Graph.edges_iter) can save you from creating large lists when you are just going to iterate through them anyway.\nFast direct access to the graph data structure is also possible using subscript notation.\nWarning\nDo not change the returned dict--it is part of the graph data structure and direct manipulation may leave the graph in an inconsistent state.", "G.nodes()\n\nG.edges()\n\nG[1]\n\nG[1][2]", "You can safely set the attributes of an edge using subscript notation if the edge already exists.", "G[1][2]['weight'] = 10\n\nG[1][2]", "Fast examination of all edges is achieved using adjacency iterators. Note that for undirected graphs this actually looks at each edge twice.", "FG=nx.Graph()\nFG.add_weighted_edges_from([(1,2,0.125),(1,3,0.75),(2,4,1.2),(3,4,0.375)])\nfor n,nbrs in FG.adjacency_iter():\n for nbr,eattr in nbrs.items():\n data=eattr['weight']\n if data<0.5: print('(%d, %d, %.3f)' % (n,nbr,data))\n\nlist(FG.adjacency_iter())", "Convenient access to all edges is achieved with the edges method.", "for (u,v,d) in FG.edges(data='weight'):\n if d<0.5: print('(%d, %d, %.3f)'%(n,nbr,d))", "Adding attributes to graphs, nodes, and edges\nAttributes such as weights, labels, colors, or whatever Python object you like, can be attached to graphs, nodes, or edges.\nEach graph, node, and edge can hold key/value attribute pairs in an associated attribute dictionary (the keys must be hashable). 
By default these are empty, but attributes can be added or changed using add_edge, add_node or direct manipulation of the attribute dictionaries named G.graph, G.node and G.edge for a graph G.\nGraph attributes\nAssign graph attributes when creating a new graph", "G = nx.Graph(day=\"Friday\")\nG.graph", "Or you can modify attributes later", "G.graph['day']='Monday'\nG.graph", "Node attributes\nAdd node attributes using add_node(), add_nodes_from() or G.node", "G.add_node(1,time = '5pm')\n\nG.add_nodes_from([3], time='2pm')\n\nG.node[1]\n\nG.node[1]['room'] = 714\n\nG.nodes(data=True)", "Note that adding a node to G.node does not add it to the graph, use G.add_node() to add new nodes.\nEdge Attributes\nAdd edge attributes using add_edge(), add_edges_from(), subscript notation, or G.edge.", "G.add_edge(1, 2, weight=4.7 )\n\nG[1][2]\n\nG.add_edges_from([(3,4),(4,5)], color='red')\n\nG.add_edges_from([(1,2,{'color':'blue'}), (2,3,{'weight':8})])\n\nG[1][2]['weight'] = 4.7\n\nG.edge[1][2]['weight'] = 4\n\nG.edges(data=True)", "Converting Graph to Adjacency matrix\nYou can use nx.to_numpy_matrix(G) to convert G to numpy matrix. If the graph is weighted, the elements of the matrix are weights. If an edge doesn't exsist, its value will be 0, not Infinity. You have to manually modify those values to Infinity (float('inf'))", "nx.to_numpy_matrix(G)\n\nnx.to_numpy_matrix(FG)", "Drawing graphs\nNetworkX is not primarily a graph drawing package but basic drawing with Matplotlib as well as an interface to use the open source Graphviz software package are included. 
These are part of the networkx.drawing package and will be imported if possible.", "%matplotlib inline\nimport matplotlib.pyplot as plt\n\nnx.draw(FG)", "Now we shall draw the graph using the graphviz layout:", "from networkx.drawing.nx_agraph import graphviz_layout\npos = graphviz_layout(FG)\nplt.axis('off')\nnx.draw_networkx_nodes(FG,pos,node_color='g',alpha = 0.8) # draws nodes\nnx.draw_networkx_edges(FG,pos,edge_color='b',alpha = 0.6) # draws edges\nnx.draw_networkx_edge_labels(FG,pos,edge_labels = nx.get_edge_attributes(FG,'weight')) # edge labels\nnx.draw_networkx_labels(FG,pos) # node labels", "Going Further\nWe have only seen the basic graph functionalities. In addition to this, NetworkX provides many graph algorithms and many types of graphs. The interested reader can look at the Official Documentation" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
TomTranter/OpenPNM
examples/simulations/Advection-Diffusion.ipynb
mit
[ "Advection-Diffusion\nIn this example, we will learn how to perform an advection-diffusion simulation of a given chemical species through a Cubic network. The algorithm can be applied to more complex networks in the same manner as described in this example. For the sake of simplicity, a one layer 3D cubic network is used here. In OpenPNM, 4 different space discretization schemes for the advection-diffusion problem are available and consist of:\n\nUpwind\nHybrid\nPowerlaw\nExponential\n\nDepending on the Peclet number characterizing the transport (ratio of advective to diffusive fluxes), the solutions obtained using these schemes may differ. In order to achieve a high numerical accuracy, the user should use either the powerlaw or the exponential schemes.\nGenerating network\nFirst, we need to generate a Cubic network. For now, we stick to a one layer 3D network, but you might as well try more complex networks!", "import numpy as np\nimport openpnm as op\nnp.random.seed(10)\n%matplotlib inline\nws = op.Workspace()\nws.settings[\"loglevel\"] = 40\nnp.set_printoptions(precision=5)\nnet = op.network.Cubic(shape=[1, 20, 30], spacing=1e-4)", "Adding geometry\nNext, we need to add a geometry to the generated network. A geometry contains information about the size of the pores/throats in a network. OpenPNM has tons of prebuilt geometries that represent the microstructure of different materials such as Toray090 carbon papers, sandstone, electrospun fibers, etc. For now, we stick to a sample geometry called StickAndBall that assigns random values to pore/throat diameters.", "geom = op.geometry.StickAndBall(network=net, pores=net.Ps, throats=net.Ts)", "Adding phase\nNext, we need to add a phase to our simulation. A phase object(s) contain(s) thermophysical information about the working fluid(s) in the simulation. OpenPNM has tons of prebuilt phases as well! 
For this simulation, we use air as our working fluid.", "air = op.phases.Air(network=net)", "Adding physics\nFinally, we need to add a physics. A physics object contains information about the working fluid in the simulation that depend on the geometry of the network. A good example is diffusive conductance, which not only depends on the thermophysical properties of the working fluid, but also depends on the geometry of pores/throats.", "phys_air = op.physics.Standard(network=net, phase=air, geometry=geom)", "Performing Stokes flow\nNote that the advection diffusion algorithm assumes that velocity field is given. Naturally, we solve Stokes flow inside a pore network model to obtain the pressure field, and eventually the velocity field. Therefore, we need to run the StokesFlow algorithm prior to running our advection diffusion. There's a separate tutorial on how to run StokesFlow in OpenPNM, but here's a simple code snippet that does the job for us.", "sf = op.algorithms.StokesFlow(network=net, phase=air)\nsf.set_value_BC(pores=net.pores('left'), values=200.0)\nsf.set_value_BC(pores=net.pores('right'), values=0.0)\nsf.run();", "It is essential that you attach the results from StokesFlow (i.e. pressure field) to the corresponding phase, since the results from any algorithm in OpenPNM are by default only attached to the algorithm object (in this case to sf). Here's how you can update your phase:", "air.update(sf.results())", "Performing advection-diffusion\nNow that everything is set up, it's time to perform our advection-diffusion simulation. For this purpose, we need to add corresponding algorithm to our simulation. 
As mentioned above, OpenPNM supports 4 different discretizations that may be used with the AdvectionDiffusion and Dispersion algorithms.\nSetting the discretization scheme can be performed when defining the physics model as follows:", "mod = op.models.physics.ad_dif_conductance.ad_dif\nphys_air.add_model(propname='throat.ad_dif_conductance', model=mod, s_scheme='powerlaw')", "Then, the advection-diffusion algorithm is defined by:", "ad = op.algorithms.AdvectionDiffusion(network=net, phase=air)", "Note that network and phase are required parameters for pretty much every algorithm we add, since we need to specify on which network and for which phase do we want to run the algorithm.\nNote that you can also specify the discretization scheme by modifying the settings of our AdvectionDiffusion algorithm. You can choose between upwind, hybrid, powerlaw, and exponential.\nIt is important to note that the scheme specified within the algorithm's settings is only used when calling the rate method for post processing.\nAdding boundary conditions\nNext, we need to add some boundary conditions to the simulation. By default, OpenPNM assumes zero flux for the boundary pores.", "inlet = net.pores('left') \noutlet = net.pores(['right', 'top', 'bottom'])\nad.set_value_BC(pores=inlet, values=100.0)\nad.set_value_BC(pores=outlet, values=0.0)", "set_value_BC applies the so-called \"Dirichlet\" boundary condition to the specified pores. Note that unless you want to apply a single value to all of the specified pores (like we just did), you must pass a list (or ndarray) as the values parameter.\nRunning the algorithm\nNow, it's time to run the algorithm. This is done by calling the run method attached to the algorithm object.", "ad.run();", "Post processing\nWhen an algorithm is successfully run, the results are attached to the same object. To access the results, you need to know the quantity for which the algorithm was solving. 
For instance, AdvectionDiffusion solves for the quantity pore.concentration, which is somewhat intuitive. However, if you ever forget it, or wanted to manually check the quantity, you can take a look at the algorithm settings:", "print(ad.settings)", "Now that we know the quantity for which AdvectionDiffusion was solved, let's take a look at the results:", "c = ad['pore.concentration']", "Heatmap\nSince the network is 2d, we can simply reshape the results in form of a 2d array similar to the shape of the network and plot the heatmap of it using matplotlib.", "print('Network shape:', net._shape)\nc2d = c.reshape((net._shape))\n\n#NBVAL_IGNORE_OUTPUT\nimport matplotlib.pyplot as plt\nplt.imshow(c2d[0,:,:]);\nplt.title('Concentration (mol/m$^3$)');\nplt.colorbar();" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
AkshanshChahal/BTP
Satellite/Lat_Lon_Pixel.ipynb
mit
[ "Latitude, Longitude for any pixel in a GeoTiff File\nHow to generate the latitude and longitude for a pixel at any given position in a GeoTiff file.", "from osgeo import ogr, osr, gdal\n\n# opening the geotiff file\nds = gdal.Open('G:\\BTP\\Satellite\\Data\\Test2\\LE07_L1GT_147040_20050506_20170116_01_T2\\LE07_L1GT_147040_20050506_20170116_01_T2_B1.TIF')\n\ncol, row, band = ds.RasterXSize, ds.RasterYSize, ds.RasterCount\nprint(col, row, band)\n\nxoff, a, b, yoff, d, e = ds.GetGeoTransform()\nprint(xoff, a, b, yoff, d, e)\n\n# details about the params: GDAL affine transform parameters\n# xoff,yoff = left corner \n# a,e = width,height of pixels\n# b,d = rotation of the image (zero if image is north up)\n\ndef pixel2coord(x, y):\n \"\"\"Returns global coordinates from coordinates x,y of the pixel\"\"\"\n xp = a * x + b * y + xoff\n yp = d * x + e * y + yoff\n return(xp, yp)\n\nx,y = pixel2coord(col/2,row/2)\nprint (x, y)", "These global coordinates are in a projected coordinate system, which is a representation of the spheroidal earth's surface, but flattened and distorted onto a plane.\nTo convert these into latitude and longitude, we need to convert these coordinates into a geographic coordinate system.", "# get the existing coordinate system\nold_cs= osr.SpatialReference()\nold_cs.ImportFromWkt(ds.GetProjectionRef())\n\n# create the new coordinate system\nwgs84_wkt = \"\"\"\nGEOGCS[\"WGS 84\",\n DATUM[\"WGS_1984\",\n SPHEROID[\"WGS 84\",6378137,298.257223563,\n AUTHORITY[\"EPSG\",\"7030\"]],\n AUTHORITY[\"EPSG\",\"6326\"]],\n PRIMEM[\"Greenwich\",0,\n AUTHORITY[\"EPSG\",\"8901\"]],\n UNIT[\"degree\",0.01745329251994328,\n AUTHORITY[\"EPSG\",\"9122\"]],\n AUTHORITY[\"EPSG\",\"4326\"]]\"\"\"\nnew_cs = osr.SpatialReference()\nnew_cs.ImportFromWkt(wgs84_wkt)\n\n# create a transform object to convert between coordinate systems\ntransform = osr.CoordinateTransformation(old_cs,new_cs) \n\n# converting into geographic coordinate system\nlonx, latx, z = 
transform.TransformPoint(x,y)\n\nprint (latx, lonx, z)\n\n# rb = ds.GetRasterBand(1)\npx,py = col/2,row/2 # the pixel location\npix = ds.ReadAsArray(px,py,1,1) \nprint pix[0][0] # pixel value", "Reverse Geocoding\nConverting a lat/long to a physical address or location. \nWe want the name of the DISTRICT.\n--------------------------------------------------------------------------------\nAPI 1: Not so accurate\n--------------------------------------------------------------------------------", "import reverse_geocoder as rg\n\ncoordinates = (latx,lonx)\nresults = rg.search(coordinates)\nprint results\nprint type(results)\nprint type(results[0])\nresults[0]\n\nk = 4 # If we want k*k pixels in total from the image\n\n\nfor i in range(0,col,col/k):\n for j in range(0,row,row/k):\n \n # fetching the lat and lon coordinates \n x,y = pixel2coord(i,j)\n lonx, latx, z = transform.TransformPoint(x,y)\n \n # fetching the name of district\n coordinates = (latx,lonx)\n results = rg.search(coordinates)\n \n # The pixel value for that location\n px,py = i,j \n pix = ds.ReadAsArray(px,py,1,1) \n pix = pix[0][0]\n \n # printing\n s = \"The pixel value for the location Lat: {0:5.1f}, Long: {1:5.1f} ({2:15}) is {3:7}\".format(latx,lonx,results[0][\"name\"],pix)\n print (s)", "--------------------------------------------------------------------------------\nAPI 2\n--------------------------------------------------------------------------------", "import geocoder\n\ng = geocoder.google([latx,lonx], method='reverse')\nprint type(g)\nprint g\nprint g.city\nprint g.state\nprint g.state_long\nprint g.country\nprint g.country_long\nprint g.address", "The above wrapper for the Google API is not good enough for us. 
It's not providing us with the district.\nLet's try another Python library available for the Google Geo API", "from pygeocoder import Geocoder\n\nresults = Geocoder.reverse_geocode(latx, lonx)\nprint results.city\nprint results.country\nprint results.street_address\nprint results.administrative_area_level_1\nprint results.administrative_area_level_2 ## THIS GIVES THE DISTRICT !! <----------------\nprint results.administrative_area_level_3", "This is what we need, we are getting the district name for the given lat,lon coordinates", "## Converting the unicode string to ascii string\nv = results.country\nprint type(v)\nv = v.encode(\"ascii\")\nprint type(v)\nprint v", "Now let's check for an image from Rajasthan", "k = 4 # If we want k*k pixels in total from the image\n\n\nfor i in range(0,col,col/k):\n for j in range(0,row,row/k):\n \n # fetching the lat and lon coordinates \n x,y = pixel2coord(i,j)\n lonx, latx, z = transform.TransformPoint(x,y)\n \n # fetching the name of district\n results = Geocoder.reverse_geocode(latx, lonx)\n \n # The pixel value for that location\n px,py = i,j \n pix = ds.ReadAsArray(px,py,1,1) \n pix = pix[0][0]\n \n # printing\n if results.country.encode('ascii') == 'India':\n s = \"Lat: {0:5.1f}, Long: {1:5.1f}, District: {2:12}, Pixel Val: {3:7}\".format(latx,lonx,results.administrative_area_level_2,pix)\n print (s)", "Bing Maps REST API", "import requests # To make the REST API Call\nimport json\n\n(latx,lonx)\n\nurl = \"http://dev.virtualearth.net/REST/v1/Locations/\"\npoint = str(latx)+\",\"+str(lonx)\nkey = \"Aktjg1X8bLQ_KhLQbVueYMhXDEMo7OaTweIkBvFojInYE4tVxoTp1bGKWbtU_OPJ\"\nresponse = requests.get(url+point+\"?key=\"+key)\nprint(response.status_code)\n\ndata = response.json()\nprint(type(data))\n\ndata\n\ns = data[\"resourceSets\"][0][\"resources\"][0][\"address\"][\"adminDistrict2\"]\ns = s.encode(\"ascii\")\ns\n\nurl = \"http://dev.virtualearth.net/REST/v1/Locations/\"\nkey = \"Aktjg1X8bLQ_KhLQbVueYMhXDEMo7OaTweIkBvFojInYE4tVxoTp1bGKWbtU_OPJ\"", "Bing API Test\nFor 100 
pixel locations", "k = 10 # If we want k*k pixels in total from the image\n\n\nfor i in range(0,col,col/k):\n for j in range(0,row,row/k):\n \n ############### fetching the lat and lon coordinates #######################################\n x,y = pixel2coord(i,j)\n lonx, latx, z = transform.TransformPoint(x,y)\n \n ############### fetching the name of district ##############################################\n point = str(latx)+\",\"+str(lonx)\n response = requests.get(url+point+\"?key=\"+key)\n data = response.json()\n s = data[\"resourceSets\"][0][\"resources\"][0][\"address\"]\n if s[\"countryRegion\"].encode(\"ascii\") != \"India\":\n print (\"Outside Indian Territory\")\n continue\n district = s[\"adminDistrict2\"].encode(\"ascii\")\n \n ############### The pixel value for that location ##########################################\n px,py = i,j \n pix = ds.ReadAsArray(px,py,1,1) \n pix = pix[0][0]\n \n # printing\n s = \"Lat: {0:5.1f}, Long: {1:5.1f}, District: {2:12}, Pixel Val: {3:7}\".format(latx,lonx,district,pix)\n print (s)", "We have another player in the game!\nWe can reverse geocode using the Python libraries shapely and fiona with a shapefile for all the district boundaries of India", "import fiona\nfrom shapely.geometry import Point, shape\n\n# Change this for Win7\nbase = \"/Users/macbook/Documents/BTP/Satellite/Data/Maps/Districts/Census_2011\"\nfc = fiona.open(base+\"/2011_Dist.shp\")\n\ndef reverse_geocode(pt):\n for feature in fc:\n if shape(feature['geometry']).contains(pt):\n return feature['properties']['DISTRICT']\n return \"NRI\"\n\nk = 10 # If we want k*k pixels in total from the image\n\n\nfor i in range(0,col,col/k):\n for j in range(0,row,row/k):\n \n ############### fetching the lat and lon coordinates #######################################\n x,y = pixel2coord(i,j)\n lonx, latx, z = transform.TransformPoint(x,y)\n \n ############### fetching the name of district ##############################################\n point = Point(lonx,latx)\n 
district = reverse_geocode(point)\n if district==\"NRI\":\n print (\"Outside Indian Territory\")\n continue\n \n ############### The pixel value for that location ##########################################\n px,py = i,j \n pix = ds.ReadAsArray(px,py,1,1) \n pix = pix[0][0]\n \n # printing\n s = \"Lat: {0:5.1f}, Long: {1:5.1f}, District: {2:12}, Pixel Val: {3:7}\".format(latx,lonx,district,pix)\n print (s)", "Now we can proceed to GenFeatures Notebook" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
apatti/apatti_ml
AmazonReviewValidator.ipynb
mit
[ "<a href=\"https://colab.research.google.com/github/apatti/apatti_ml/blob/master/AmazonReviewValidator.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nData Generation", "!pip install bs4\n!pip install requests\n\nimport requests\nfrom bs4 import BeautifulSoup\nimport re\nfrom datetime import datetime\n\n\nreview_url = \"https://www.amazon.com/BERTER-Knee-Brace-Men-Women/product-reviews/B087G62NTF/ref=cm_cr_arp_d_viewopt_sr?ie=UTF8&reviewerType=all_reviews&sortBy=recent&pageNumber=1&filterByStar=all_stars\"\n\n\nclass ReviewCollector:\n HEADERS = ({'User-Agent':\n 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) \\\n AppleWebKit/537.36 (KHTML, like Gecko) \\\n Chrome/90.0.4430.212 Safari/537.36',\n 'Accept-Language': 'en-US, en;q=0.5'});\n\n def __init__(self):\n self.reviews=[]\n \n def loadReviews(self,url):\n next_url = url\n page=1\n while next_url:\n #print(\"Loading page {0}\".format(page))\n self.__getHtmlContent(next_url)\n self.__getCustomerReviews();\n next_url = self.__getNextPageUrl()\n page +=1\n\n print(\"Loaded {0} reviews\".format(len(self.reviews)))\n return\n \n def __getNextPageUrl(self):\n next_page_tag = self.soup.find('li',class_=\"a-last\")\n if next_page_tag and next_page_tag.find(\"a\"):\n #print(next_page_tag.find(\"a\").get('href'))\n return \"https://www.amazon.com\"+next_page_tag.find(\"a\").get('href')\n\n return None\n\n def __getHtmlContent(self,url):\n r = requests.get(url,headers=ReviewCollector.HEADERS) \n self.soup = BeautifulSoup(r.text, 'html.parser')\n #print(self.soup)\n \n def __extractStars(self,tag):\n starTag = tag.find('i',{\"data-hook\": \"review-star-rating\"})\n if starTag:\n return self.__starsToInt(starTag.get(\"title\"))\n else:\n return self.__starsToInt(tag.find('i',class_=\"review-rating\").get_text())\n\n def __starsToInt(self, starString):\n if not starString:\n return -1\n\n starsMatch = re.match(\"([0-9]).0 out of\",starString)\n if 
starsMatch:\n return int(starsMatch.group(1))\n else:\n return -1\n \n def __extractCountryDate(self,reviewDateString):\n match = re.match(\"Reviewed in (.*) on (.*)\",reviewDateString)\n if match:\n return match.group(1),match.group(2) #country, date dateformat: datetime.strptime(match.group(2), '%B %d, %Y')\n else:\n return None,None\n\n def __extractBody(self,tag):\n bodyTag = tag.find(\"span\",{'data-hook':\"review-body\"})\n return bodyTag.get_text() if bodyTag else tag.find(\"span\",class_=\"a-size-base cr-lightbox-review-bod\").get_text()\n\n def __extractTitle(self,tag):\n titleTag = tag.find(\"a\",{\"data-hook\":\"review-title\"})\n if titleTag:\n return titleTag.get_text().replace(\"\\n\",\"\")\n else:\n return tag.find(\"span\",{\"data-hook\":\"review-title\"}).get_text()\n\n def __getCustomerReviews(self):\n for customer_review in self.soup.find_all(\"div\",class_=\"a-section celwidget\"):\n review = {}\n review['author'] = customer_review.find('span',class_=\"a-profile-name\").get_text()\n review['stars'] = self.__extractStars(customer_review)\n review['verified_purchase'] = True if customer_review.find('span',{\"data-hook\":\"avp-badge\"}) else False\n review['title'] = self.__extractTitle(customer_review)\n review['content'] = self.__extractBody(customer_review).strip()\n review['country'],review['dateString'] = self.__extractCountryDate(customer_review.find(\"span\",{\"data-hook\":\"review-date\"}).get_text())\n self.reviews.append(review)\n\n pass\n\n\nreviewHelper = ReviewCollector()\nreviewHelper.loadReviews(review_url)\n\nimport pandas as pd\nimport csv\n\n\n\nkeys = reviewHelper.reviews[0].keys()\n\nwith open('review.csv', 'w', newline='') as output_file:\n dict_writer = csv.DictWriter(output_file, keys)\n dict_writer.writeheader()\n dict_writer.writerows(reviewHelper.reviews)\n\nfrom google.colab import files\n\nfiles.download('review.csv')\n\nuploaded = files.upload()\n\nfor fn in uploaded.keys():\n print('User uploaded file \"{name}\" with 
length {length} bytes'.format(\n name=fn, length=len(uploaded[fn])))\n\ntry:\n df = pd.DataFrame(reviewHelper.reviews) \nexcept:\n df = pd.read_csv('review.csv')\n\ndf['date']=pd.to_datetime(df['dateString'], format='%B %d, %Y')\ndf.sort_values(\"date\",inplace=True)\ndf.info()", "Visualization & Observations\nStars over time", "mva_df = df.set_index('date')\nmva_df = mva_df['stars'].to_frame()\n\n\n# calculating simple moving average\n# using .rolling(window).mean() ,\n# with window size = 5\nmva_df['SMA5'] = mva_df['stars'].rolling(5).mean()\n\n# calculating exponential moving average\n# using .ewm(span).mean() , with window size = 5\nmva_df['EWMA5'] = mva_df['stars'].ewm(span=5).mean()\n\nmva_df['CMA5'] = mva_df['stars'].expanding().mean()\n\n \n# removing all the NULL values using \n# dropna() method\nmva_df.dropna(inplace=True)\n \n# printing Dataframe\nmva_df\n\nmva_df[['EWMA5','CMA5']].plot(label='Stars Over time', \n figsize=(16, 8))", "Grouping data based on time interval.\n\nQuarterly interval", "mva_df.resample('Q').stars.median()\n\nmva_df.groupby([pd.Grouper(freq='Q'), 'stars']).agg(unique_items=('stars', 'sum'))\n", "Applying NLP to review content\nSentiment analysis\nGet the general sentiment from review content and visualize it over time.\nUsing huggingface transformers for sentiment analysis.", "!pip install transformers\nfrom transformers import pipeline\n\nclassifier = pipeline(\"sentiment-analysis\") \n\ndef getSentiment(s):\n sentiment= classifier(s.content)\n label = sentiment[0].get('label')\n score = sentiment[0].get('score')\n #print(sentiment)\n if s.stars==5:\n s[\"sentiment_label\"] = \"POSITIVE\"\n s[\"sentiment_score\"] = 0.99\n return s\n \n s[\"sentiment_label\"] = label\n\n s[\"sentiment_score\"] = score if label==\"POSITIVE\" else 0\n return s\ndf=df.apply(getSentiment,axis=1)\n#classifier(df.content.tolist())\n\nsent_df = df.set_index('date')\n\nsent_df = 
sent_df['sentiment_score'].to_frame()\n\n#sent_df[['sentiment_score']].plot(label='Stars Over time', \n# figsize=(16, 8))\n\nsent_df['SMA5'] = sent_df['sentiment_score'].rolling('90D').mean()\n\nsent_df['CMA5'] = sent_df['sentiment_score'].expanding().mean()\n\n \n# removing all the NULL values using \n# dropna() method\nsent_df.dropna(inplace=True)\n \n# printing Dataframe\nsent_df\n\nsent_df[['SMA5','CMA5']].plot(label='Stars Over time', \n figsize=(16, 8))\nsent_df.index\n\n\nimport matplotlib.pyplot as plt\nimport datetime as dt\n\nfig = plt.figure(figsize=(20,5))\nax = fig.add_subplot(111)\nax.scatter(sent_df.index,sent_df['sentiment_score'], label='Review Sentiment')\nax.plot(sent_df.index,sent_df['SMA5'], color ='r', label='Rolling Mean')\nax.plot(sent_df.index,sent_df['CMA5'], color='y', label='Expanding Mean')\nax.set_xlim([dt.date(2019,1,15),dt.date(2022,5,21)])\nax.set(title='Amazon reviews over Time', xlabel='Date', ylabel='Sentiment')\nax.legend(loc='best')\nfig.tight_layout()", "Topic modeling\nIf we find diverse topics in the review contents then the current product reviews might be dubious", "!pip install -U sentence-transformers\n!pip install bertopic\n\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom nltk.stem.porter import PorterStemmer\nfrom nltk import word_tokenize, WordNetLemmatizer\nfrom nltk.corpus import stopwords\nimport nltk\nnltk.download('stopwords')\nnltk.download('wordnet')\nnltk.download('punkt')", "clean data", "stopwordSet = set(stopwords.words(\"english\"))\n\nlemma = WordNetLemmatizer()\n\ndef cleanup_sentences(sentence):\n text = re.sub('[^a-zA-Z0-9]',\" \", sentence) #removing non alpha numeric\n text = text.lower() # convert to lower case.\n text = word_tokenize(text, language=\"english\") # tokenize\n text = [lemma.lemmatize(word) for word in text if(word) not in stopwordSet] # Lemmatizing words and removing stopwords\n text = \" \".join(text) \n return text\n\ndf['content_cleaned'] = 
df['content'].apply(cleanup_sentences)\n\nfrom bertopic import BERTopic\nfrom sentence_transformers import SentenceTransformer\n\n#sentence-transformers/all-roberta-large-v1: This is a sentence-transformers model: \n#It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.\nsentence_model = SentenceTransformer(\"sentence-transformers/all-roberta-large-v1\")\ntopic_model = BERTopic(embedding_model=sentence_model, calculate_probabilities = True, verbose = True, diversity = 0.2)\n\n#train\ndocs = df['content_cleaned'].to_list()\ntopics, probabilities = topic_model.fit_transform(docs)\n\ntopic_freq = topic_model.get_topic_info()\ntopic_freq.head(5)", "The majority belong to one topic and hence we could conclude that the reviews are genuine", "topic_model.visualize_barchart()\n\nfor i in range(3):\n print(topic_model.get_topic(i))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
zerothi/ts-tbt-sisl-tutorial
TB_04/run.ipynb
gpl-3.0
[ "import sisl\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline", "In this example we will begin to cover some of the extraction utilities in sisl allowing one to really go in-depth on analysis of calculations.\nWe will begin by creating a large graphene flake, then subsequently a hole will be created by removing a circular shape.\nSubsequent to the calculations you are encouraged to explore the sisl toolbox for ways to extract important information regarding your system.", "graphene = sisl.geom.graphene(orthogonal=True)\n\nelec = graphene.tile(25, axis=0)\nH = sisl.Hamiltonian(elec)\nH.construct(([0.1, 1.43], [0., -2.7]))\nH.write('ELEC.nc')\n\ndevice = elec.tile(15, axis=1)\ndevice = device.remove(\n device.close(\n device.center(what='cell'), R=10.)\n)", "We will also (for physical reasons) remove all dangling bonds, and secondly we will create a list of atomic indices which correspond to the atoms at the edge of the hole.", "dangling = [ia for ia in device.close(device.center(what='cell'), R=14.)\n if len(device.close(ia, R=1.43)) < 3]\ndevice = device.remove(dangling)\nedge = []\nfor ia in device.close(device.center(what='cell'), R=14.):\n if len(device.close(ia, R=1.43)) < 4:\n edge.append(ia)\nedge = np.array(edge)\n\n# Pretty-print the list of atoms (for use in sdata)\n# Note we add 1 to get fortran indices\nprint(sisl.utils.list2str(edge + 1))\n\nHdev = sisl.Hamiltonian(device)\nHdev.construct(([0.1, 1.43], [0, -2.7]))\nHdev.geometry.write('device.xyz')\nHdev.write('DEVICE.nc')", "Exercises\nPlease carefully go through the RUN.fdf file before running TBtrans, determine what each flag means and what it tells TBtrans to output.\nNow run TBtrans:\ntbtrans RUN.fdf\n\nIn addition to the previous example, many more files are now being created (for all files, the siesta.TBT.AV* files are the $k$-averaged equivalents). 
You should, while reading the below list, also be able to specify which of the fdf-flags are responsible for the creation of which files.\n\nsiesta.TBT.ADOS_*\n The $k$-resolved spectral density of states injected from the named electrode\nsiesta.TBT.BDOS_*\n The $k$-resolved bulk Green function density of states for the named electrode\nsiesta.TBT.BTRANS_*\n The $k$-resolved bulk transmission through the named electrode\nsiesta.TBT.DOS\n Green function density of states\nsiesta.TBT.TEIG_&lt;1&gt;-&lt;2&gt;\n Transmission eigenvalues for electrons injected from &lt;1&gt; and emitted in &lt;2&gt;.\n\nThis exercise mainly shows a variety of methods useful for extracting data from the *.TBT.nc files in a simple and consistent manner. You are encouraged to play with the routines, because the next example forces you to utilize them.", "tbt = sisl.get_sile('siesta.TBT.nc')", "As this system is not a pristine periodic system we have a variety of options available for analysis.\nFirst and foremost we plot the transmission:", "plt.plot(tbt.E, tbt.transmission(),label='Av');\nplt.plot(tbt.E, tbt.transmission(kavg=tbt.kindex([0,0,0])), label=r'$\Gamma$'); plt.legend()\nplt.ylabel('Transmission'); plt.xlabel('Energy [eV]'); plt.ylim([0, None]);", "The contained data in the *.TBT.nc file is very much dependent on the flags in the fdf-file. To ease the overview of the available output and what is contained in the file, one can execute the following block to see the content of the file.\nCheck whether the bulk transmission is present in the output file and if so, add it to the plot above to compare the bulk transmission with the device transmission. \nThere are two electrodes, hence two bulk transmissions. Is there a difference between the two? 
If so, why, if not, why not?", "print(tbt.info())", "Density of states\nWe may also plot the Green function density of states as well as the spectral density of states from each electrode:", "DOS_all = [tbt.DOS(), tbt.ADOS(0), tbt.ADOS(1)]\nplt.plot(tbt.E, DOS_all[0], label='G');\nplt.plot(tbt.E, DOS_all[1], label=r'$A_L$');\nplt.plot(tbt.E, DOS_all[2], label=r'$A_R$');\nplt.ylim([0, None]); plt.ylabel('Total DOS [1/eV]'); plt.xlabel('Energy [eV]'); plt.legend();", "Can you from the above three quantities determine whether there are any localized states in the examined system?\nHINT: What is the sum of the spectral density of states ($\mathbf A_i$) compared to the Green function ($\mathbf G$) density of states?\nExamining DOS on individual atoms\nThe total DOS is a measure of the DOS spread out in the entire atomic region. However, TBtrans calculates and stores all DOS related quantities in orbital resolution. I.e. we are able to post-process the DOS and examine the atom and/or orbital resolved DOS.\nTo do this the .DOS and .ADOS routines have two important keywords, 1) atom and 2) orbital which may be used to extract a subset of the DOS quantities. For details on extracting these subset quantities please read the documentation by executing the following line:\nhelp(tbt.DOS)\n\nThe following code will extract the DOS only on the atoms in the hole edge.", "DOS_edge = [tbt.DOS(atoms=edge), tbt.ADOS(0, atoms=edge), tbt.ADOS(1, atoms=edge)]\nplt.plot(tbt.E, DOS_edge[0], label='G');\nplt.plot(tbt.E, DOS_edge[1], label=r'$A_L$');\nplt.plot(tbt.E, DOS_edge[2], label=r'$A_R$');\nplt.ylim([0, None]); plt.ylabel('DOS on edge atoms [1/eV]'); plt.xlabel('Energy [eV]'); plt.legend();", "Comparing the two previous figures leaves us with little knowledge of the DOS ratio. I.e. both plots show the total DOS and they are summed over a different number of atoms (or orbitals if you wish). 
Instead of showing the total DOS we can normalize the DOS by the number of atoms; $1/N_a$ where $N_a$ is the number of atoms in the selected DOS region. With this normalization we can compare the average DOS on all atoms with the average DOS on only edge atoms. \n\nThe tbtncSileTBtrans can readily do this for you.\nRead about the norm keyword in .DOS, and also look at the documentation for the .norm function:\nhelp(tbt.DOS)\nhelp(tbt.norm)\n\n\nNow create a plot with DOS normalized per atom by editing the below lines, feel free to add the remaining DOS plots to have them all:", "N_all = tbt.norm(<change here>)\nN_edge = tbt.norm(<change here>)\nplt.plot(tbt.E, DOS_all[0] / N_all, label=r'$G$');\nplt.plot(tbt.E, DOS_edge[0] / N_edge, label=r'$G_E$');\nplt.ylim([0, None]); plt.ylabel('DOS [1/eV/atom]'); plt.xlabel('Energy [eV]'); plt.legend();", "DOS depending on the distance from the hole\nWe can further analyze the DOS evolution for atoms farther and farther from the hole.\nThe following DOS analysis will extract DOS (from $\\mathbf G$) for the edge atoms, then for the nearest neighbours to the edge atoms, and for the next-nearest neighbours to the edge atoms.\nTry and extend the plot to contain the DOS of the next-next-nearest neighbours to the edge atoms.", "# Get nearest neighbours to the edge atoms\nn_edge = Hdev.edges(edge, exclude=edge)\n# Get next-nearest neighbours to the edge atoms\nnn_edge = Hdev.edges(n_edge, exclude=np.hstack((edge, n_edge)))\n# Try and create the next-next-nearest neighbours to the edge atoms and add it to the plot\nplt.plot(tbt.E, tbt.DOS(atoms=edge, norm='atom'), label='edge: G');\nplt.plot(tbt.E, tbt.DOS(atoms=n_edge, norm='atom'), label='n-edge: G');\nplt.plot(tbt.E, tbt.DOS(atoms=nn_edge, norm='atom'), label='nn-edge: G');\nplt.ylim([0, None]); plt.ylabel('DOS [1/eV/atom]'); plt.xlabel('Energy [eV]'); plt.legend();", "Learned methods\n\nfdf-flags for TBtrans to specify which quantities to calculate\nOpening files via 
get_sile\nExtraction of DOS with different normalizations.\nExtraction of DOS for a subset of atoms/orbitals.\nDetermining coupling atoms" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
slundberg/shap
notebooks/api_examples/plots/heatmap.ipynb
mit
[ "heatmap plot\nThis notebook is designed to demonstrate (and so document) how to use the shap.plots.heatmap function. It uses an XGBoost model trained on the classic UCI adult income dataset (which is a classification task to predict if people made over $50k annually in the 1990s).", "import xgboost\nimport shap\n\n# train XGBoost model\nX,y = shap.datasets.adult()\nmodel = xgboost.XGBClassifier(n_estimators=100, max_depth=2).fit(X, y)\n\n# compute SHAP values\nexplainer = shap.Explainer(model, X)\nshap_values = explainer(X[:1000])", "Passing a matrix of SHAP values to the heatmap plot function creates a plot with the instances on the x-axis, the model inputs on the y-axis, and the SHAP values encoded on a color scale. By default the samples are ordered using shap.order.hclust, which orders the samples based on a hierarchical clustering by their explanation similarity. This results in samples that have the same model output for the same reason getting grouped together (such as people with a high impact from capital gain in the plot below).\nThe output of the model is shown above the heatmap matrix (centered around the explanation's .base_value), and the global importance of each model input shown as a bar plot on the right hand side of the plot (by default this is the shap.order.abs.mean measure of overall importance).", "shap.plots.heatmap(shap_values)", "Increasing the max_display parameter allows for more features to be shown:", "shap.plots.heatmap(shap_values, max_display=12)", "Changing sort order and global feature importance values\nWe can change the way the overall importance of features is measured (and so also their sort order) by passing a set of values to the feature_values parameter. 
By default feature_values=shap.Explanation.abs.mean(0), but below we show how to instead sort by the maximum absolute value of a feature over all the samples:", "shap.plots.heatmap(shap_values, feature_values=shap_values.abs.max(0))", "We can also control the ordering of the instances using the instance_order parameter. By default it is set to shap.Explanation.hclust(0) to group samples with similar explanations together. Below we show how sorting by the sum of the SHAP values over all features gives a complementary perspective on the data:", "shap.plots.heatmap(shap_values, instance_order=shap_values.sum(1))", "<hr>\nHave an idea for more helpful examples? Pull requests that add to this documentation notebook are encouraged!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
DataPilot/notebook-miner
summary_of_work/9. Bottom Up Exploration.ipynb
apache-2.0
[ "Bottom Up Exploration\nThis notebook documents the bottom-up strategy exploration to determine notebook similarity. It is based on the notion that it is easier to aggregate than to break down a 'black box.'\nThe biggest challenge is working with the AST structure. Because it is a tree, we need to merge leaves with their parents, working our way up.\nGOAL\nThe Goal is to come up with a similarity function for entire notebooks", "# Necessary imports \nimport os\nimport time\nfrom nbminer.notebook_miner import NotebookMiner\nfrom nbminer.cells.cells import Cell\nfrom nbminer.features.ast_features import ASTFeatures\nfrom nbminer.stats.summary import Summary\nfrom nbminer.stats.multiple_summary import MultipleSummary\n\n#Loading in the notebooks\npeople = os.listdir('../testbed/Final')\nnotebooks = []\nfor person in people:\n person = os.path.join('../testbed/Final', person)\n if os.path.isdir(person):\n direc = os.listdir(person)\n notebooks.extend([os.path.join(person, filename) for filename in direc if filename.endswith('.ipynb')])\nnotebook_objs = [NotebookMiner(file) for file in notebooks]\na = ASTFeatures(notebook_objs)\n\n\n# For each notebook, break notebook up into top level AST nodes\nfor i, nb in enumerate(a.nb_features):\n a.nb_features[i] = nb.get_new_notebook()\n\nimport networkx\nfrom collections import deque\nimport ast\n\n# Function that returns a networkx graph from a top level AST node\ndef return_graph(node):\n dgraph = networkx.DiGraph()\n nodes = deque()\n nodes.append(node.body[0])\n dgraph.add_node(node.body[0])\n while len(nodes) != 0:\n cur_node = nodes.pop()\n for node in ast.iter_child_nodes(cur_node):\n dgraph.add_node(node)\n dgraph.add_edge(cur_node,node)\n nodes.append(node)\n return dgraph\n\n# Function that returns a list of these graphs for all nodes in all notebooks\ndef return_all_graphs(a):\n graphs = []\n roots = []\n cells = []\n for i, nb in enumerate(a.nb_features):\n for cell in nb.get_all_cells():\n 
graphs.append(return_graph(cell.get_feature('ast')))\n roots.append(cell.get_feature('ast').body[0])\n cells.append(cell)\n return graphs, roots, cells\n# Call to retrieve all graphs\nall_graphs, all_roots, all_cells = return_all_graphs(a)\n\nlen(all_graphs)", "Size of the AST trees\nTo look at the size of the AST trees, I'll create a histogram using the max shortest path from the root node to another node in the graph", "max_values = []\nfor n in range(len(all_graphs)):\n max_values.append( max(networkx.shortest_path_length(all_graphs[n],all_roots[n]).values()))\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.hist(max_values, bins = 20)", "Which node is huge\nThis is a fairly good result. Most of the nodes have few levels. However, some of these nodes are really big, 5 have more than 13 levels, and one seems to have over 44. Let's take a look at the worst case we got", "sorted_indices = [i[0] for i in sorted(enumerate(max_values), key=lambda x:x[1])]\n\nmax_values[sorted_indices[-1]]\nast.dump(all_cells[sorted_indices[-1]].get_feature('ast'))\n\n# Code that created it\nprint (all_cells[sorted_indices[-1]].get_feature('original_code'))", "Which nodes are big\nOk, that piece of code is kind of ridiculous, how about some of the smaller big ASTs.", "print (all_cells[sorted_indices[-2]].get_feature('original_code'))\n\nprint (ast.dump(all_cells[sorted_indices[-3]].get_feature('ast')))\n\nprint (all_cells[sorted_indices[-4]].get_feature('original_code'))\n\nprint (all_cells[sorted_indices[-5]].get_feature('original_code'))", "Exploring the immediate children\nIn order to do bottom up, we need to look at the leaf nodes and their parents. This is made simpler by the order that can be found for certain node types. Descriptions of each node type can be found at http://greentreesnakes.readthedocs.io/en/latest/nodes.html\nMany nodes will always have the same number of children. However, some nodes have variable numbers -- Assign is a good example of this. 
All three of the below are valid:\n- x = 1\n- x, y = 1, 2\n- x, y, z = 1, 2, 3\nLet's look at both the size and form of a node's children to see if we can come up with a first pass at the bottom up approach.", "len(all_graphs), len(all_roots)\n\ndef traverse_graph(g, cur_d):\n for node in networkx.dfs_preorder_nodes(g):\n t_node = type(node)\n if t_node not in cur_d:\n cur_d[t_node] = []\n child_set = set()\n for child in g[node]:\n child_set.add(type(child))\n cur_d[t_node].append(child_set)\n return cur_d\nmy_dict = {}\nfor g in all_graphs:\n my_dict = traverse_graph(g, my_dict)\n\n\nimport numpy as np\nfor key in my_dict.keys():\n print (key, np.unique(np.array([len(s) for s in my_dict[key]])), len(my_dict[key]))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
fonnesbeck/scientific-python-workshop
notebooks/Programming with Python.ipynb
cc0-1.0
[ "Programming with Python\nControl Flow Statements\nThe control flow of a program determines the order in which lines of code are executed. All else being equal, Python code is executed linearly, in the order that lines appear in the program. However, all is not usually equal, and so the appropriate control flow is frequently specified with the help of control flow statements. These include loops, conditional statements and calls to functions. Let’s look at a few of these here.\nfor statements\nOne way to repeatedly execute a block of statements (i.e. loop) is to use a for statement. These statements iterate over the number of elements in a specified sequence, according to the following syntax:", "for letter in 'ciao':\n print('give me a {0}'.format(letter.upper()))", "Recall that strings are simply regarded as sequences of characters. Hence, the above for statement loops over each letter, converting each to upper case with the upper() method and printing it. \nSimilarly, as shown in the introduction, list comprehensions may be constructed using for statements:", "[i**2 for i in range(10)]", "Here, the expression loops over range(10) -- the sequence from 0 to 9 -- and squares each before placing it in the returned list.\nif statements\nAs the name implies, if statements execute particular sections of code depending on some tested condition. For example, to code an absolute value function, one might employ conditional statements:", "def absval(some_list):\n\n # Create empty list\n absolutes = [] \n\n # Loop over elements in some_list\n for value in some_list:\n\n # Conditional statement\n if value<0:\n # Negative value\n absolutes.append(-value)\n\n else:\n # Positive value\n absolutes.append(value)\n \n return absolutes ", "Here, each value in some_list is tested for the condition that it is negative, in which case it is multiplied by -1, otherwise it is appended as-is. 
\nFor conditions that have more than two possible values, the elif clause can be used:", "x = 5\nif x < 0:\n print('x is negative')\nelif x % 2:\n print('x is positive and odd')\nelse:\n print('x is even and non-negative')", "while statements\nA different type of conditional loop is provided by the while statement. Rather than iterating a specified number of times, according to a given sequence, while executes its block of code repeatedly, until its condition is no longer true. \nFor example, suppose we want to sample from a truncated normal distribution, where we are only interested in positive-valued samples. The following function is one solution:", "# Import function\nfrom numpy.random import normal\n\ndef truncated_normals(how_many, l):\n\n # Create empty list\n values = []\n\n # Loop until we have specified number of samples\n while (len(values) < how_many):\n\n # Sample from standard normal\n x = normal(0,1)\n\n # Append if not truncated\n if x > l: values.append(x)\n\n return values \n\ntruncated_normals(15, 0)", "This function iteratively samples from a standard normal distribution, and appends it to the output array if it is positive, stopping to return the array once the specified number of values have been added.\nObviously, the body of the while statement should contain code that eventually renders the condition false, otherwise the loop will never end! An exception to this is if the body of the statement contains a break or return statement; in either case, the loop will be interrupted.\nGenerators\nWhen a Python function is called, it creates a namespace for the function, executes the code that comprises the function (creating objects inside the namespace as required), and returns some result to its caller. After the return, everything inside the namespace (including the namespace itself) is gone, and is created anew when the function is called again. 
\nHowever, one particular class of functions in Python breaks this pattern, returning a value to the caller while still active, and able to return subsequent values as needed. Python generators employ yield statements in place of return, allowing a sequence of values to be generated without having to create a new function namespace each time. In other languages, this construct is known as a coroutine. \nFor example, we may want to have a function that returns a sequence of values; let's consider, for a simple illustration, the Fibonacci sequence:\n$$F_i = F_{i-2} + F_{i-1}$$\nIt's certainly possible to write a standard Python function that returns however many Fibonacci numbers that we need:", "import numpy as np\n\ndef fibonacci(size):\n F = np.empty(size, 'int')\n a, b = 0, 1\n for i in range(size):\n F[i] = a\n a, b = b, a + b\n return F", "and this works just fine:", "fibonacci(20)", "However, what if we need one number at a time, or if we need a million or 10 million values? In the first case, you would somehow have to store the values from the last iteration, and restore the state to the function each time it is called. In the second case, you would have to generate and then store a very large number of values, most of which you may not need right now.\nA more sensible solution is to create a generator, which calculates a single value in the sequence, then returns control back to the caller. This allows the generator to be called again, resuming the sequence generation where it left off. Here's the Fibonacci function, implemented as a generator:", "def gfibonacci(size):\n a, b = 0, 1\n for _ in range(size):\n yield a\n a, b = b, a + b", "Notice that there is no return statement at all; just yield, which is where a value is returned each time one is requested. The yield statement is what defines a generator. 
\nWhen we call our generator, rather than a sequence of Fibonacci numbers, we get a generator object:", "f = gfibonacci(100)\nf", "A generator has a __next__() method that can be called via the builtin function next(). The call to next executes the generator until the yield statement is reached, returning the next generated value, and then pausing until another call to next occurs:", "next(f), next(f), next(f)", "A generator is a type of iterator. If we call a function that supports iterables using a generator as an argument, it will know how to use the generator.", "np.array(list(f))", "What happens when we reach the \"end\" of a generator?", "a_few_fibs = gfibonacci(2)\n\nnext(a_few_fibs)\n\nnext(a_few_fibs)\n\nnext(a_few_fibs)", "Thus, generators signal when there are no further values to generate by throwing a StopIteration exception. We must either handle this exception, or create a generator that is infinite, which we can do in this example by replacing a for loop with a while loop:", "def infinite_fib():\n a, b = 0, 1\n while True:\n yield a\n a, b = b, a + b\n\nf = infinite_fib()\nvals = [next(f) for _ in range(10000)]\nvals[-1]", "Error Handling\nInevitably, some code you write will generate errors, at least in some situations. Unless we explicitly anticipate and handle these errors, they will cause your code to halt (sometimes this is a good thing!). Errors are handled using try/except blocks.\nIf code executed in the try block generates an error, code execution moves to the except block. If the exception that is specified corresponds to that which has been raised, the code in the except block is executed before continuing; otherwise, the exception is carried out and the code is halted.", "absval(-5)", "In the call to absval, we passed a single negative integer, whereas the function expects some sort of iterable data structure. 
Other than changing the function itself, we can avoid this error using exception handling.", "x = -5\ntry:\n print(absval(x))\nexcept TypeError:\n print('The argument to absval must be iterable!')\n\nx = -5\ntry:\n print(absval(x))\nexcept TypeError:\n print(absval([x]))", "We can raise exceptions manually by using the raise expression.", "raise ValueError('This is the wrong value')", "Importing and Manipulating Data\nPython includes operations for importing and exporting data from files and binary objects, and third-party packages exist for database connectivity. The easiest way to import data from a file is to parse a delimited text file, which can usually be exported from spreadsheets and databases. In fact, file is a built-in type in Python. Data may be read from and written to regular files by specifying them as file objects:", "microbiome = open('../data/microbiome.csv')", "Here, a file containing microbiome data in a comma-delimited format is opened, and assigned to an object, called microbiome. The next step is to transfer the information in the file to a usable data structure in Python. Since this dataset contains five variables, the name of the taxon, the patient identifier (de-identified), the group, the bacteria count in tissue and the bacteria count in stool, it is convenient to use a dictionary. This allows each variable to be specified by name. \nFirst, a dictionary object is initialized, with appropriate keys and corresponding lists, initially empty. Since the file has a header, we can use it to generate an empty dict:", "column_names = next(microbiome).rstrip('\\n').split(',')\ncolumn_names", "Compatibility Corner: In Python 2, open would not return a generator, but rather a file object with a next method. 
In Python 3, a generator is returned, which requires the use of the built-in function next.", "mb_dict = {name:[] for name in column_names}\n\nmb_dict\n\nfor line in microbiome:\n taxon, patient, group, tissue, stool = line.rstrip('\\n').split(',')\n mb_dict['Taxon'].append(taxon)\n mb_dict['Patient'].append(int(patient))\n mb_dict['Group'].append(int(group))\n mb_dict['Tissue'].append(int(tissue))\n mb_dict['Stool'].append(int(stool))", "For each line in the file, data elements are split by the comma delimiter, using the split method that is built-in to string objects. Each datum is subsequently appended to the appropriate list stored in the dictionary. After all the data is parsed, it is polite to close the file:", "microbiome.close()", "The data can now be readily accessed by indexing the appropriate variable by name:", "mb_dict['Tissue'][:10]", "A second approach to importing data involves interfacing directly with a relational database management system. Relational databases are far more efficient for storing, maintaining and querying data than plain text files or spreadsheets, particularly for large datasets or multiple tables. A number of third parties have created packages for database access in Python. For example, sqlite3 is a package that provides connectivity for SQLite databases:", "import sqlite3\ndb = sqlite3.connect(database='../data/baseball-archive-2011.sqlite')\n\n# create a cursor object to communicate with database\ncur = db.cursor() \n\n# run query\ncur.execute('SELECT playerid, HR, SB FROM Batting WHERE yearID=1970')\n\n# fetch data, and assign to variable\nbaseball = cur.fetchall() \nbaseball[:10]", "Functions\nPython uses the def statement to encapsulate code into a callable function. 
Here again is a very simple Python function:", "# Function for calculating the mean of some data\ndef mean(data):\n\n # Initialize sum to zero\n sum_x = 0.0\n\n # Loop over data\n for x in data:\n\n # Add to sum\n sum_x += x \n \n # Divide by number of elements in list, and return\n return sum_x / len(data)", "As we can see, arguments are specified in parentheses following the function name. If there are sensible \"default\" values, they can be specified as a keyword argument.", "def var(data, sample=True):\n\n # Get mean of data from function above\n x_bar = mean(data)\n\n # Do sum of squares in one line\n sum_squares = sum([(x - x_bar)**2 for x in data])\n\n # Divide by n-1 and return\n if sample:\n return sum_squares/(len(data)-1)\n return sum_squares/len(data)", "Non-keyword arguments must always precede keyword arguments, and must always be presented in order; order is not important for keyword arguments.\nArguments can also be passed to functions as a tuple/list/dict using the asterisk notation.", "def some_computation(a=-1, b=4.3, c=7):\n return (a + b) / float(c)\n\nargs = (5, 4, 3)\nsome_computation(*args)\n\nkwargs = {'b':4, 'a':5, 'c':3}\nsome_computation(**kwargs)", "The lambda statement creates anonymous one-line functions that can simply be assigned to a name.", "import numpy as np\nnormalize = lambda data: (np.array(data) - np.mean(data)) / np.std(data)", "or not:", "(lambda data: (np.array(data) - np.mean(data)) / np.std(data))([5,8,3,8,3,1,2,1])", "Python has several built-in, higher-order functions that are useful.", "list(filter(lambda x: x > 5, range(10)))\n\nabs([5,-6])\n\nlist(map(abs, [5, -6]))", "Example: Least Squares Estimation\nLet's try coding a statistical function. Suppose we want to estimate the parameters of a simple linear regression model. The objective of regression analysis is to specify an equation that will predict some response variable $Y$ based on a set of predictor variables $X$. 
This is done by fitting parameter values $\\beta$ of a regression model using extant data for $X$ and $Y$. This equation has the form:\n$$Y = X\\beta + \\epsilon$$\nwhere $\\epsilon$ is a vector of errors. One way to fit this model is using the method of least squares, which is given by:\n$$\\hat{\\beta} = (X^{\\prime} X)^{-1}X^{\\prime} Y$$\nWe can write a function that calculates this estimate, with the help of some functions from other modules:", "from numpy.linalg import inv\nfrom numpy import transpose, array, dot", "We will call this function solve, requiring the predictor and response variables as arguments. For simplicity, we will restrict the function to univariate regression, whereby only a single slope and intercept are estimated:", "def solve(x,y):\n 'Estimates regression coefficients from data'\n\n '''\n The first step is to specify the design matrix. For this, \n we need to create a vector of ones (corresponding to the intercept term), \n and along with x, create an n x 2 array:\n '''\n X = array([[1]*len(x), x])\n\n '''\n An array is a data structure from the numpy package, similar to a list, \n but allowing for multiple dimensions. Next, we calculate the transpose of x, \n using another numpy function, transpose\n '''\n Xt = transpose(X)\n\n '''\n Finally, we use the matrix multiplication function dot, also from numpy \n to calculate the dot product. The inverse function is provided by the LinearAlgebra \n package. Provided that x is not singular (which would raise an exception), this \n yields estimates of the intercept and slope, as an array\n '''\n b_hat = dot(inv(dot(X,Xt)), dot(X,y))\n\n return b_hat ", "Here is solve in action:", "solve((10,5,10,11,14),(-4,3,0,23,0.6))", "Object-oriented Programming\nAs previously stated, Python is an object-oriented programming (OOP) language, in contrast to procedural languages like FORTRAN and C. As the name implies, object-oriented languages employ objects to create convenient abstractions of data structures. 
This allows for more flexible programs, fewer lines of code, and a more natural programming paradigm in general. An object is simply a modular unit of data and associated functions, related to the state and behavior, respectively, of some abstract entity. Object-oriented languages group similar objects into classes. For example, consider a Python class representing a bird:", "class Bird:\n # Class representing a bird\n\n name = 'bird'\n \n def __init__(self, sex):\n # Initialization method\n \n self.sex = sex\n\n def fly(self):\n # Makes bird fly\n\n print('Flying!')\n \n def nest(self):\n # Makes bird build nest\n\n print('Building nest ...')\n \n @classmethod\n def get_name(cls):\n # Class methods are shared among instances\n \n return cls.name", "You will notice that this bird class is simply a container for two functions (called methods in Python), fly and nest, as well as one attribute, name. The methods represent functions in common with all members of this class. You can run this code in Python, and create birds:", "Tweety = Bird('male')\nTweety.name\n\nTweety.fly()\n\nFoghorn = Bird('male')\nFoghorn.nest()", "A classmethod can be called without instantiating an object.", "Bird.get_name()", "Whereas standard methods cannot:", "Bird.fly()", "As many instances of the bird class can be generated as desired, though it may quickly become boring. One of the important benefits of using object-oriented classes is code re-use. For example, we may want more specific kinds of birds, with unique functionality:", "class Duck(Bird):\n # Duck is a subclass of bird\n\n name = 'duck'\n \n def swim(self):\n # Ducks can swim\n\n print('Swimming!')\n\n def quack(self,n):\n # Ducks can quack\n \n print('Quack! ' * n)", "Notice that this new duck class refers to the bird class in parentheses after the class declaration; this is called inheritance. 
The subclass duck automatically inherits all of the variables and methods of the superclass, but allows new functions or variables to be added. In addition to flying and nest-building, our duck can also swim and quack:", "Daffy = Duck('male')\nDaffy.swim()\n\nDaffy.quack(3)\n\nDaffy.nest()", "Along with adding new variables and methods, a subclass can also override existing variables and methods of the superclass. For example, one might define fly in the duck subclass to return an entirely different string. It is easy to see how inheritance promotes code re-use, sometimes dramatically reducing development time. Classes which are very similar need not be coded repetitiously, but rather, just extended from a single superclass. \nThis brief introduction to object-oriented programming is intended only to introduce new users of Python to this programming paradigm. There are many more salient object-oriented topics, including interfaces, composition, and introspection. I encourage interested readers to refer to any number of current Python and OOP books for a more comprehensive treatment.\nIn Python, everything is an object\nEverything (and I mean everything) in Python is an object, in the sense that they possess attributes, such as methods and variables, that we usually associate with more \"structured\" objects like those we created above.\nCheck it out:", "dir(1)\n\n(1).bit_length()", "This has implications for how assignment works in Python.\nLet's create a trivial class:", "class Thing: pass", "and instantiate it:", "x = Thing()\nx", "Here, x is simply a \"label\" for the object that we created when calling Thing. That object resides at the memory location that is identified when we print x. Notice that if we create another Thing, we create a new object, and give it a label. 
We know it is a new object because it has its own memory location.", "y = Thing()\ny", "What if we assign x to z?", "z = x\nz", "We see that the object labeled with z is the same as the object labeled with x. So, we say that z is a label (or name) with a binding to the object created by Thing.\nSo, there are no \"variables\", in the sense of a container for values, in Python. There are only labels and bindings.", "x.name = 'thing x'\n\nz.name", "This can get you into trouble. Consider the following (seemingly innocuous) way of creating a dictionary of empty lists:", "evil_dict = dict.fromkeys(column_names, [])\nevil_dict\n\nevil_dict['Tissue'].append(5)\n\nevil_dict", "Why did this happen?\nReferences\n\nLearn Python the Hard Way \nLearn X in Y Minutes (where X=Python) \n29 common beginner Python errors on one page\nUnderstanding Python's Execution Model" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
zklgame/CatEyeNets
test/StyleTransfer-TensorFlow.ipynb
mit
[ "Style Transfer\nIn this notebook we will implement the style transfer technique from \"Image Style Transfer Using Convolutional Neural Networks\" (Gatys et al., CVPR 2015).\nThe general idea is to take two images, and produce a new image that reflects the content of one but the artistic \"style\" of the other. We will do this by first formulating a loss function that matches the content and style of each respective image in the feature space of a deep network, and then performing gradient descent on the pixels of the image itself.\nThe deep network we use as a feature extractor is SqueezeNet, a small model that has been trained on ImageNet. You could use any network, but we chose SqueezeNet here for its small size and efficiency.\nHere's an example of the images you'll be able to produce by the end of this notebook:\n\nSetup", "\n%load_ext autoreload\n%autoreload 2\nfrom scipy.misc import imread, imresize\nimport numpy as np\n\nfrom scipy.misc import imread\nimport matplotlib.pyplot as plt\n\n# Helper functions to deal with image preprocessing\nfrom cs231n.image_utils import load_image, preprocess_image, deprocess_image\n\n%matplotlib inline\n\ndef get_session():\n \"\"\"Create a session that dynamically allocates memory.\"\"\"\n # See: https://www.tensorflow.org/tutorials/using_gpu#allowing_gpu_memory_growth\n config = tf.ConfigProto()\n config.gpu_options.allow_growth = True\n session = tf.Session(config=config)\n return session\n\ndef rel_error(x,y):\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\n# Older versions of scipy.misc.imresize yield different results\n# from newer versions, so we check to make sure scipy is up to date.\ndef check_scipy():\n import scipy\n vnum = int(scipy.__version__.split('.')[1])\n assert vnum >= 16, \"You must install SciPy >= 0.16.0 to complete this notebook.\"\n\ncheck_scipy()", "Load the pretrained SqueezeNet model. 
This model has been ported from PyTorch, see cs231n/classifiers/squeezenet.py for the model architecture. \nTo use SqueezeNet, you will need to first download the weights by changing into the cs231n/datasets directory and running get_squeezenet_tf.sh . Note that if you ran get_assignment3_data.sh then SqueezeNet will already be downloaded.", "import os\n\nfrom cs231n.classifiers.squeezenet import SqueezeNet\nimport tensorflow as tf\n\ntf.reset_default_graph() # remove all existing variables in the graph \nsess = get_session() # start a new Session\n\n# Load pretrained SqueezeNet model\nSAVE_PATH = 'cs231n/datasets/squeezenet.ckpt'\nif not os.path.exists(SAVE_PATH):\n raise ValueError(\"You need to download SqueezeNet!\")\nmodel = SqueezeNet(save_path=SAVE_PATH, sess=sess)\n\n# Load data for testing\ncontent_img_test = preprocess_image(load_image('styles/tubingen.jpg', size=192))[None]\nstyle_img_test = preprocess_image(load_image('styles/starry_night.jpg', size=192))[None]\nanswers = np.load('style-transfer-checks-tf.npz')\n", "Computing Loss\nWe're going to compute the three components of our loss function now. The loss function is a weighted sum of three terms: content loss + style loss + total variation loss. You'll fill in the functions that compute these weighted terms below.\nContent loss\nWe can generate an image that reflects the content of one image and the style of another by incorporating both in our loss function. We want to penalize deviations from the content of the content image and deviations from the style of the style image. We can then use this hybrid loss function to perform gradient descent not on the parameters of the model, but instead on the pixel values of our original image.\nLet's first write the content loss function. Content loss measures how much the feature map of the generated image differs from the feature map of the source image. 
We only care about the content representation of one layer of the network (say, layer $\\ell$), that has feature maps $A^\\ell \\in \\mathbb{R}^{1 \\times C_\\ell \\times H_\\ell \\times W_\\ell}$. $C_\\ell$ is the number of filters/channels in layer $\\ell$, $H_\\ell$ and $W_\\ell$ are the height and width. We will work with reshaped versions of these feature maps that combine all spatial positions into one dimension. Let $F^\\ell \\in \\mathbb{R}^{N_\\ell \\times M_\\ell}$ be the feature map for the current image and $P^\\ell \\in \\mathbb{R}^{N_\\ell \\times M_\\ell}$ be the feature map for the content source image where $M_\\ell=H_\\ell\\times W_\\ell$ is the number of elements in each feature map. Each row of $F^\\ell$ or $P^\\ell$ represents the vectorized activations of a particular filter, convolved over all positions of the image. Finally, let $w_c$ be the weight of the content loss term in the loss function.\nThen the content loss is given by:\n$L_c = w_c \\times \\sum_{i,j} (F_{ij}^{\\ell} - P_{ij}^{\\ell})^2$", "def content_loss(content_weight, content_current, content_original):\n \"\"\"\n Compute the content loss for style transfer.\n \n Inputs:\n - content_weight: scalar constant we multiply the content_loss by.\n - content_current: features of the current image, Tensor with shape [1, height, width, channels]\n - content_original: features of the content image, Tensor with shape [1, height, width, channels]\n \n Returns:\n - scalar content loss\n \"\"\"\n pass\n", "Test your content loss.
You should see errors less than 0.001.", "def content_loss_test(correct):\n content_layer = 3\n content_weight = 6e-2\n c_feats = sess.run(model.extract_features()[content_layer], {model.image: content_img_test})\n bad_img = tf.zeros(content_img_test.shape)\n feats = model.extract_features(bad_img)[content_layer]\n student_output = sess.run(content_loss(content_weight, c_feats, feats))\n error = rel_error(correct, student_output)\n print('Maximum error is {:.3f}'.format(error))\n\ncontent_loss_test(answers['cl_out'])", "Style loss\nNow we can tackle the style loss. For a given layer $\\ell$, the style loss is defined as follows:\nFirst, compute the Gram matrix G which represents the correlations between the responses of each filter, where F is as above. The Gram matrix is an approximation to the covariance matrix -- we want the activation statistics of our generated image to match the activation statistics of our style image, and matching the (approximate) covariance is one way to do that. 
There are a variety of ways you could do this, but the Gram matrix is nice because it's easy to compute and in practice shows good results.\nGiven a feature map $F^\\ell$ of shape $(1, C_\\ell, M_\\ell)$, the Gram matrix has shape $(1, C_\\ell, C_\\ell)$ and its elements are given by:\n$$G_{ij}^\\ell = \\sum_k F^{\\ell}_{ik} F^{\\ell}_{jk}$$\nAssuming $G^\\ell$ is the Gram matrix from the feature map of the current image, $A^\\ell$ is the Gram matrix from the feature map of the source style image, and $w_\\ell$ a scalar weight term, then the style loss for the layer $\\ell$ is simply the weighted Euclidean distance between the two Gram matrices:\n$$L_s^\\ell = w_\\ell \\sum_{i, j} \\left(G^\\ell_{ij} - A^\\ell_{ij}\\right)^2$$\nIn practice we usually compute the style loss at a set of layers $\\mathcal{L}$ rather than just a single layer $\\ell$; then the total style loss is the sum of style losses at each layer:\n$$L_s = \\sum_{\\ell \\in \\mathcal{L}} w_\\ell L_s^\\ell$$\nBegin by implementing the Gram matrix computation below:", "def gram_matrix(features, normalize=True):\n \"\"\"\n Compute the Gram matrix from features.\n \n Inputs:\n - features: Tensor of shape (1, H, W, C) giving features for\n a single image.\n - normalize: optional, whether to normalize the Gram matrix\n If True, divide the Gram matrix by the number of neurons (H * W * C)\n \n Returns:\n - gram: Tensor of shape (C, C) giving the (optionally normalized)\n Gram matrices for the input image.\n \"\"\"\n pass\n", "Test your Gram matrix code.
You should see errors less than 0.001.", "def gram_matrix_test(correct):\n gram = gram_matrix(model.extract_features()[5])\n student_output = sess.run(gram, {model.image: style_img_test})\n error = rel_error(correct, student_output)\n print('Maximum error is {:.3f}'.format(error))\n\ngram_matrix_test(answers['gm_out'])", "Next, implement the style loss:", "def style_loss(feats, style_layers, style_targets, style_weights):\n \"\"\"\n Computes the style loss at a set of layers.\n \n Inputs:\n - feats: list of the features at every layer of the current image, as produced by\n the extract_features function.\n - style_layers: List of layer indices into feats giving the layers to include in the\n style loss.\n - style_targets: List of the same length as style_layers, where style_targets[i] is\n a Tensor giving the Gram matrix of the source style image computed at\n layer style_layers[i].\n - style_weights: List of the same length as style_layers, where style_weights[i]\n is a scalar giving the weight for the style loss at layer style_layers[i].\n \n Returns:\n - style_loss: A Tensor containing the scalar style loss.\n \"\"\"\n # Hint: you can do this with one for loop over the style layers, and should\n # not be very much code (~5 lines). You will need to use your gram_matrix function.\n pass\n", "Test your style loss implementation.
The error should be less than 0.001.", "def style_loss_test(correct):\n style_layers = [1, 4, 6, 7, 10]\n style_weights = [300000, 1000, 15, 3]\n \n feats = model.extract_features()\n style_target_vars = []\n for idx in style_layers:\n style_target_vars.append(gram_matrix(feats[idx]))\n style_targets = sess.run(style_target_vars,\n {model.image: style_img_test})\n \n s_loss = style_loss(feats, style_layers, style_targets, style_weights)\n student_output = sess.run(s_loss, {model.image: content_img_test})\n error = rel_error(correct, student_output)\n print('Error is {:.3f}'.format(error))\n\nstyle_loss_test(answers['sl_out'])", "Total-variation regularization\nIt turns out that it's helpful to also encourage smoothness in the image. We can do this by adding another term to our loss that penalizes wiggles or \"total variation\" in the pixel values. \nYou can compute the \"total variation\" as the sum of the squares of differences in the pixel values for all pairs of pixels that are next to each other (horizontally or vertically). Here we sum the total-variation regularization for each of the 3 input channels (RGB), and weight the total summed loss by the total variation weight, $w_t$:\n$L_{tv} = w_t \\times \\sum_{c=1}^3\\sum_{i=1}^{H-1} \\sum_{j=1}^{W-1} \\left( (x_{i,j+1, c} - x_{i,j,c})^2 + (x_{i+1, j,c} - x_{i,j,c})^2 \\right)$\nIn the next cell, fill in the definition for the TV loss term. To receive full credit, your implementation should not have any loops.", "def tv_loss(img, tv_weight):\n \"\"\"\n Compute total variation loss.\n \n Inputs:\n - img: Tensor of shape (1, H, W, 3) holding an input image.\n - tv_weight: Scalar giving the weight w_t to use for the TV loss.\n \n Returns:\n - loss: Tensor holding a scalar giving the total variation loss\n for img weighted by tv_weight.\n \"\"\"\n # Your implementation should be vectorized and not require any loops!\n pass\n", "Test your TV loss implementation.
Error should be less than 0.001.", "def tv_loss_test(correct):\n tv_weight = 2e-2\n t_loss = tv_loss(model.image, tv_weight)\n student_output = sess.run(t_loss, {model.image: content_img_test})\n error = rel_error(correct, student_output)\n print('Error is {:.3f}'.format(error))\n\ntv_loss_test(answers['tv_out'])", "Style Transfer\nLet's put it all together and make some beautiful images! The style_transfer function below combines all the losses you coded up above and optimizes for an image that minimizes the total loss.", "def style_transfer(content_image, style_image, image_size, style_size, content_layer, content_weight,\n style_layers, style_weights, tv_weight, init_random = False):\n \"\"\"Run style transfer!\n \n Inputs:\n - content_image: filename of content image\n - style_image: filename of style image\n - image_size: size of smallest image dimension (used for content loss and generated image)\n - style_size: size of smallest style image dimension\n - content_layer: layer to use for content loss\n - content_weight: weighting on content loss\n - style_layers: list of layers to use for style loss\n - style_weights: list of weights to use for each layer in style_layers\n - tv_weight: weight of total variation regularization term\n - init_random: initialize the starting image to uniform random noise\n \"\"\"\n # Extract features from the content image\n content_img = preprocess_image(load_image(content_image, size=image_size))\n feats = model.extract_features(model.image)\n content_target = sess.run(feats[content_layer],\n {model.image: content_img[None]})\n\n # Extract features from the style image\n style_img = preprocess_image(load_image(style_image, size=style_size))\n style_feat_vars = [feats[idx] for idx in style_layers]\n style_target_vars = []\n # Compute list of TensorFlow Gram matrices\n for style_feat_var in style_feat_vars:\n style_target_vars.append(gram_matrix(style_feat_var))\n # Compute list of NumPy Gram matrices by evaluating the TensorFlow
graph on the style image\n style_targets = sess.run(style_target_vars, {model.image: style_img[None]})\n\n # Initialize generated image to content image\n \n if init_random:\n img_var = tf.Variable(tf.random_uniform(content_img[None].shape, 0, 1), name=\"image\")\n else:\n img_var = tf.Variable(content_img[None], name=\"image\")\n\n # Extract features on generated image\n feats = model.extract_features(img_var)\n # Compute loss\n c_loss = content_loss(content_weight, feats[content_layer], content_target)\n s_loss = style_loss(feats, style_layers, style_targets, style_weights)\n t_loss = tv_loss(img_var, tv_weight)\n loss = c_loss + s_loss + t_loss\n \n # Set up optimization hyperparameters\n initial_lr = 3.0\n decayed_lr = 0.1\n decay_lr_at = 180\n max_iter = 200\n\n # Create and initialize the Adam optimizer\n lr_var = tf.Variable(initial_lr, name=\"lr\")\n # Create train_op that updates the generated image when run\n with tf.variable_scope(\"optimizer\") as opt_scope:\n train_op = tf.train.AdamOptimizer(lr_var).minimize(loss, var_list=[img_var])\n # Initialize the generated image and optimization variables\n opt_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope=opt_scope.name)\n sess.run(tf.variables_initializer([lr_var, img_var] + opt_vars))\n # Create an op that will clamp the image values when run\n clamp_image_op = tf.assign(img_var, tf.clip_by_value(img_var, -1.5, 1.5))\n \n f, axarr = plt.subplots(1,2)\n axarr[0].axis('off')\n axarr[1].axis('off')\n axarr[0].set_title('Content Source Img.')\n axarr[1].set_title('Style Source Img.')\n axarr[0].imshow(deprocess_image(content_img))\n axarr[1].imshow(deprocess_image(style_img))\n plt.show()\n plt.figure()\n \n # Hardcoded handcrafted \n for t in range(max_iter):\n # Take an optimization step to update img_var\n sess.run(train_op)\n if t < decay_lr_at:\n sess.run(clamp_image_op)\n if t == decay_lr_at:\n sess.run(tf.assign(lr_var, decayed_lr))\n if t % 100 == 0:\n print('Iteration {}'.format(t))\n img 
= sess.run(img_var)\n plt.imshow(deprocess_image(img[0], rescale=True))\n plt.axis('off')\n plt.show()\n print('Iteration {}'.format(t))\n img = sess.run(img_var) \n plt.imshow(deprocess_image(img[0], rescale=True))\n plt.axis('off')\n plt.show()", "Generate some pretty pictures!\nTry out style_transfer on the three different parameter sets below. Make sure to run all three cells. Feel free to add your own, but make sure to include the results of style transfer on the third parameter set (starry night) in your submitted notebook.\n\nThe content_image is the filename of content image.\nThe style_image is the filename of style image.\nThe image_size is the size of smallest image dimension of the content image (used for content loss and generated image).\nThe style_size is the size of smallest style image dimension.\nThe content_layer specifies which layer to use for content loss.\nThe content_weight gives weighting on content loss in the overall loss function. Increasing the value of this parameter will make the final image look more realistic (closer to the original content).\nstyle_layers specifies a list of which layers to use for style loss. \nstyle_weights specifies a list of weights to use for each layer in style_layers (each of which will contribute a term to the overall style loss). We generally use higher weights for the earlier style layers because they describe more local/smaller scale features, which are more important to texture than features over larger receptive fields. In general, increasing these weights will make the resulting image look less like the original content and more distorted towards the appearance of the style image.\ntv_weight specifies the weighting of total variation regularization in the overall loss function. Increasing this value makes the resulting image look smoother and less jagged, at the cost of lower fidelity to style and content. 
\n\nBelow the next three cells of code (in which you shouldn't change the hyperparameters), feel free to copy and paste the parameters to play around them and see how the resulting image changes.", "# Composition VII + Tubingen\nparams1 = {\n 'content_image' : 'styles/tubingen.jpg',\n 'style_image' : 'styles/composition_vii.jpg',\n 'image_size' : 192,\n 'style_size' : 512,\n 'content_layer' : 3,\n 'content_weight' : 5e-2, \n 'style_layers' : (1, 4, 6, 7, 8),\n 'style_weights' : (20000, 500, 12, 1),\n 'tv_weight' : 5e-2\n}\n\nstyle_transfer(**params1)\n\n# Scream + Tubingen\nparams2 = {\n 'content_image':'styles/tubingen.jpg',\n 'style_image':'styles/the_scream.jpg',\n 'image_size':192,\n 'style_size':224,\n 'content_layer':3,\n 'content_weight':3e-2,\n 'style_layers':[1, 4, 6, 7, 10],\n 'style_weights':[200000, 800, 12, 1],\n 'tv_weight':2e-2\n}\n\nstyle_transfer(**params2)\n\n# Starry Night + Tubingen\nparams3 = {\n 'content_image' : 'styles/tubingen.jpg',\n 'style_image' : 'styles/starry_night.jpg',\n 'image_size' : 192,\n 'style_size' : 192,\n 'content_layer' : 3,\n 'content_weight' : 6e-2,\n 'style_layers' : [1, 4, 6, 7, 10],\n 'style_weights' : [300000, 1000, 15, 3],\n 'tv_weight' : 2e-2\n}\n\nstyle_transfer(**params3)", "Feature Inversion\nThe code you've written can do another cool thing. In an attempt to understand the types of features that convolutional networks learn to recognize, a recent paper [1] attempts to reconstruct an image from its feature representation. We can easily implement this idea using image gradients from the pretrained network, which is exactly what we did above (but with two different feature representations).\nNow, if you set the style weights to all be 0 and initialize the starting image to random noise instead of the content source image, you'll reconstruct an image from the feature representation of the content source image. 
You're starting with total noise, but you should end up with something that looks quite a bit like your original image.\n(Similarly, you could do \"texture synthesis\" from scratch if you set the content weight to 0 and initialize the starting image to random noise, but we won't ask you to do that here.) \n[1] Aravindh Mahendran, Andrea Vedaldi, \"Understanding Deep Image Representations by Inverting them\", CVPR 2015", "# Feature Inversion -- Starry Night + Tubingen\nparams_inv = {\n 'content_image' : 'styles/tubingen.jpg',\n 'style_image' : 'styles/starry_night.jpg',\n 'image_size' : 192,\n 'style_size' : 192,\n 'content_layer' : 3,\n 'content_weight' : 6e-2,\n 'style_layers' : [1, 4, 6, 7, 10],\n 'style_weights' : [0, 0, 0, 0], # we discard any contributions from style to the loss\n 'tv_weight' : 2e-2,\n 'init_random': True # we want to initialize our image to be random\n}\n\nstyle_transfer(**params_inv)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
taiducvu/NudityDetection
MODEL.ipynb
apache-2.0
[ "NUDITY DETECTION MODEL\nStage 1: Preprocessing Data\nOur input images have varied resolutions while a model often requires fixed-size inputs, so we need to preprocess them to a uniform size. To do that, we take the two steps below:\n + Dropping 87.5 per cent of the central region of the image\n + Resizing them to the size $34 \\times 34 \\times 3$", "%matplotlib inline\n%load_ext autoreload\n%autoreload 2\n\nimport tensorflow as tf\nfrom scipy.misc import imread, imresize\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib import gridspec\nfrom Dataset.data import preprocess_image\n\nimage = tf.placeholder(\"uint8\", [None, None, 3])\nresult_image = preprocess_image(image, 34, 34)\nraw_image = imread('/home/taivu/workspace/NudityDetection/Dataset/train/normal/34.jpg')\n\ninit = tf.global_variables_initializer()\nwith tf.Session() as sess:\n sess.run(init)\n result_img = sess.run(result_image, feed_dict={image:raw_image})\n\n################ Plot the raw image and the processed image ##########################\ngs = gridspec.GridSpec(1, 2, width_ratios=[3, 2]) \n\nfig = plt.figure()\na1 = fig.add_subplot(gs[0])\na1.set_title(\"Raw image\")\nplt.imshow(raw_image)\na2 = fig.add_subplot(gs[1])\na2.set_title(\"Processed image\")\nplt.imshow(result_img, shape =(34, 34))\nplt.show()", "Stage 2: Build a model\nThe model consists of 6 hidden layers including 2 convolutional layers, 2 pool layers, and 2 fully-connected layers. The details of the model are shown in the figure below\n\nStage 2.1: Training the model\n\nTo train this model, we used a training set of 4000 images in which the ratio of nudity to normal images is 1:1. In addition, we also evaluated the model during the training process using a validation set of 2000 images with a similar ratio to the training set.\nCross-entropy is the loss function that assesses the difference between the predicted labels of the model and the real labels of the samples.
Its notation: $L$\n $$ L = - \\log\\left ( \\frac{e^{f_y}}{\\sum_{j}e^{f_j}} \\right ) $$ in which $f_y$ is the activation of the neuron that represents the real class of a sample\nThe Mini-batch Gradient Descent algorithm is used to optimize the weights of the model\n $$\\mathit{w}_{t} = \\mathit{w}_{t-1} - \\alpha \\frac{1}{m} \\frac{\\partial L}{\\partial w}$$ in which $\\mathit{w}_{t}$ is the weights of the model at time $t$ of the optimizing process.\nThe hyper-parameters of the model, such as the number of images in each mini-batch $m$ and the learning rate $\\alpha$, are set empirically", "import tensorflow as tf\nfrom vng_train import train\n\n# Do train the model\ntrain()", "Stage 2.2: Run the model\n\nAfter the training process, the trained weights of the model are saved to a hard drive for reuse in the future\nTo run the model, we need to re-construct the model and then load the trained weights into it so that the model will not optimize its weights again in this stage. After the input images are fed forward through the model, it will only classify them into two classes (Nudity or Normal)", "%matplotlib inline\n%load_ext autoreload\n%autoreload 2\nimport tensorflow as tf\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom execute_model import evaluate\n\n# Evaluate the model\nimg, pre_lb, real_lb = evaluate()\n\n\nfig = plt.figure(figsize=(20,20))\n\nfor i in range(50):\n a = fig.add_subplot(10, 5, i + 1)\n a.set_title('PL:%d'%(pre_lb[i]))\n a.set_yticklabels([])\n a.set_xticklabels([])\n plt.imshow(img[i])\nplt.pause(1)\nplt.show()\n\n#print pre_lb\n#print real_lb\nnum_err = np.absolute(pre_lb - real_lb)\nprint('The number of error samples: %d'%np.sum(num_err))\n\n%matplotlib inline\n%load_ext autoreload\n%autoreload 2\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\nfrom execute_model import evaluate\nimport numpy as np\n\n# Evaluate the model\nimg, pre_lb, real_lb = evaluate('/home/taivu/workspace/NudityDetection/Dataset/normal_test_set.tfrecords')\n#print('Predicted labels: ',pre_lb)\n#print('Real labels: ',real_lb)\n\nfig = plt.figure(figsize=(20,20))\nfor i in range(50):\n a = fig.add_subplot(10, 5, i + 1)\n a.set_title('PL:%d'%(pre_lb[i]))\n a.set_yticklabels([])\n a.set_xticklabels([])\n plt.imshow(img[i])\nplt.show()\n\nnum_err = np.absolute(pre_lb - real_lb)\nprint('The number of error samples: %d'%np.sum(num_err))\n\n\n\n%matplotlib inline\n%load_ext autoreload\n%autoreload 2\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\nfrom execute_model import evaluate\n\n# Evaluate the model\nimg, pre_lb, real_lb = evaluate('/home/taivu/workspace/NudityDetection/Dataset/nudity_test_set.tfrecords', False)\n\nfig = plt.figure(figsize=(20,20))\n\nfor i in range(50):\n a = fig.add_subplot(10, 5, i + 1)\n a.set_title('PL:%d'%(pre_lb[i]))\n a.set_yticklabels([])\n a.set_xticklabels([])\n plt.imshow(img[i])\nplt.show()\n\nnum_err = np.absolute(pre_lb - real_lb)\nprint('The number of error samples: %d'%np.sum(num_err))\n\n\n############################PIPELINE INPUT DATA########################################\n%matplotlib inline\n%load_ext autoreload\n%autoreload 2\n\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\nfrom execute_model import evaluate\nimport numpy as np\n\neval_img, pre_label, _ = evaluate('/home/taivu/workspace/AddPic', True)\n\n#fig = plt.figure(figsize=(40,80))\n\n#for i in range(80):\n# a = fig.add_subplot(16, 5, i + 1)\n# a.set_title('PL:%d'%(pre_label[i]))\n# a.set_yticklabels([])\n# a.set_xticklabels([])\n# plt.imshow(eval_img[i])\n#plt.show()", "Stage 3: Optimize the model\n\nApply the Transfer Learning method", "# Test program\n%matplotlib inline\n%load_ext autoreload\n%autoreload 2\nfrom test_program import test_program\n\ntest_program()\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
fifabsas/talleresfifabsas
python/Extras/Big_Data/analisis.ipynb
mit
[ "import pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline\nplt.rcParams[\"axes.grid\"] = True\n\nsoy = pd.read_csv(\"soy.csv\",delimiter=\"\\t\")\ngini = pd.read_csv(\"gini.csv\")\ngdp = pd.read_csv(\"gdpcap.csv\")\ndata = pd.concat([gdp,gini]).groupby(\"Country Name\")\n\nax = soy.plot(x=\"Month\",y=\"Price[USD]\",legend=False)\nax.set_ylabel('Price [USD]')\nax.set_xlabel(\"Month\")\ndata = pd.DataFrame(data.get_group(\"Argentina\").iloc[:,5:-1].dropna(axis=1).T.reset_index().values, \n columns=[\"Date\",\"GDP\",\"GINI\"], dtype=float)\nplt.figure(2)\nplt.plot(data.values[:,1],data.values[:,2],\"ro-\")\nfor i in data.values:\n plt.annotate(s=int(i[0]),xy=(i[1],i[2]))\n", "Education statistics\nData from the GCBA (Buenos Aires city government) http://data.buenosaires.gob.ar/dataset/estadistica-educativa", "# Get the data directly from the web page. No need to download it first!\neduca = pd.read_csv(r\"https://recursos-data.buenosaires.gob.ar/ckan2/estadistica-educativa/estadistica-educativa.csv\", \n delimiter=\";\")\nprint(educa.shape) # prints the number of rows first, then the number of columns\neduca.head() # prints the first 5 rows\n\n# print the columns to see what data we have\neduca.columns", "The file https://recursos-data.buenosaires.gob.ar/ckan2/estadistica-educativa/documentacion-estadistica-educativa.pdf describes the meaning of each column. We will take the mother's education level, the repetition rate, residents of PBA, and investment per student as the relevant variables", "features = [\"nivel_educ_madre\",\"iecep\",\"tasa_repeticion_2012\",\"domiciliados_pba\",\"inversion_alumnos_2013\"]\n\n# Now, to analyze the data, we use seaborn's pairplot,\n# which lets us draw 2-D histograms and add a linear regression\nsns.pairplot(educa[educa.tipo_gestion == \"Estatal\"], \n vars=features, kind=\"reg\")" ]
[ "code", "markdown", "code", "markdown", "code" ]
superliaoyong/plist-forsource
python 第四课课件 一.ipynb
apache-2.0
[ "Life is short, I use Python\n\nPython Lesson 4\nLesson plan\n1. numpy\n2. pandas\n3. matplotlib\n\nnumpy\nArrays vs. lists: a list can store data of any type, while an array can only store data of a single type", "import array\n\na = array.array('i', range(10))\n\n# the data type must be uniform\na[1] = 's'\n\na\n\nimport numpy as np", "Converting an existing list to an array", "a_list = list(range(10))\nb = np.array(a_list)\ntype(b)", "Creating arrays", "a = np.zeros(10, dtype=int)\nprint(type(a))\n# check the array's dtype\na.dtype\n\na = np.zeros((4,4), dtype=int)\nprint(type(a))\n# check the array's dtype\nprint(a.dtype)\na\n\nnp.ones((4,4), dtype=float)\n\nnp.full((3,3), 3.14)\n\na\n\nnp.zeros_like(a)\n\nnp.ones_like(a)\n\nnp.full_like(a, 4.12, dtype=float)", "random", "import random\n\nprint(random.randint(5,10))\nprint(random.random())\n\nnp.random.random((3,3))\n\n# used very often\nnp.random.randint(0,10, (5,5))", "Taking values from a range", "list(range(0,10,2))\n\nnp.arange(0,3,2)\n\n# used often\nnp.linspace(0, 3, 10)\n\n# n x n identity matrix\nnp.eye(5)", "| Data type | Description |\n|:---------------|:-------------|\n| bool_ | Boolean (True or False) stored as a byte |\n| int_ | Default integer type (same as C long; normally either int64 or int32)| \n| intc | Identical to C int (normally int32 or int64)| \n| intp | Integer used for indexing (same as C ssize_t; normally either int32 or int64)| \n| int8 | Byte (-128 to 127)| \n| int16 | Integer (-32768 to 32767)|\n| int32 | Integer (-2147483648 to 2147483647)|\n| int64 | Integer (-9223372036854775808 to 9223372036854775807)| \n| uint8 | Unsigned integer (0 to 255)| \n| uint16 | Unsigned integer (0 to 65535)| \n| uint32 | Unsigned integer (0 to 4294967295)| \n| uint64 | Unsigned integer (0 to 18446744073709551615)| \n| float_ | Shorthand for float64.| \n| float16 | Half precision float: sign bit, 5 bits exponent, 10 bits mantissa| \n| float32 | Single precision float: sign bit, 8 bits exponent, 23 bits mantissa| \n| float64 | Double precision float: sign bit, 11 bits exponent, 52 bits mantissa| \n| complex_ | Shorthand for complex128.| \n| complex64 | Complex number, represented by two 32-bit floats| \n| complex128| Complex number, represented by two 64-bit floats|\nAccessing array elements", "# element access in a nested list\nvar = [[1,2,3], [3,4,5], [5,6,7]]\nvar[0][0]\n\n# element access in an array\na = np.array(var)\na[-1][0]\n\na\n\n# these two access styles are equivalent\na[2, 0], a[2][0]\n\na\n\n# array slicing\na[:2, :2]\n\n# not equivalent to the expression above\na[:2][:2]", "Array attributes", "a\n\n# number of dimensions\nprint(a.ndim)\n# shape\nprint(a.shape)\n# size\nprint(a.size)\n# dtype\nprint(a.dtype)\n# a.itemsize\nprint(a.itemsize)\n# nbytes\nprint(a.nbytes)", "Operations", "a = np.array(list(range(10)))\na\n\nprint(a + 10)\nprint(a - 10)\nprint(a * 100)\n\na = np.full((3,3), 1.0, dtype=float)\na + 10 # equivalent to np.add(a, 10)", "| Operator | Equivalent ufunc | Description |\n|---------------|---------------------|---------------------------------------|\n|+ |np.add |Addition (e.g., 1 + 1 = 2) |\n|- |np.subtract |Subtraction (e.g., 3 - 2 = 1) |\n|- |np.negative |Unary negation (e.g., -2) |\n|* |np.multiply |Multiplication (e.g., 2 * 3 = 6) |\n|/ |np.divide |Division (e.g., 3 / 2 = 1.5) |\n|// |np.floor_divide |Floor division (e.g., 3 // 2 = 1) |\n|** |np.power |Exponentiation (e.g., 2 ** 3 = 8) |\n|% |np.mod |Modulus/remainder (e.g., 9 % 4 = 1)|", "a = np.linspace(0, np.pi, 5)\nb = np.sin(a)\nprint(a)\nprint(b)", "Statistics", "# summing\nprint(sum([1,2,3,4,5,6]))\n# summing a 1-D array\na = np.full(10, 2.3)\nprint(sum(a))\n# summing a multi-dimensional array\na = np.array([[1,2],[3,4]])\nprint(sum(a))\n\n# summing with np.sum\nnp.sum(a)\nnp.sum(a, axis=1)\nnp.max(a, axis=1)\n\nn = np.random.rand(10000)", "Notebook tip\n%timeit code ; use this to measure how efficiently a piece of code runs", "%timeit sum(n)\n\n%timeit np.sum(n)", "As the code above shows, np.sum runs faster and is the recommended choice\nComparison", "a = np.array(range(10))\na\n\na > 3\n\na != 3\n\na == a", "| Operator | Equivalent ufunc || Operator | Equivalent ufunc |\n|---------------|---------------------||---------------|---------------------|\n|== |np.equal ||!= |np.not_equal |\n|&lt; |np.less ||&lt;= |np.less_equal |\n|&gt; |np.greater ||&gt;= |np.greater_equal |", "np.all(a>-1)\n\nnp.any(a>-1)", "Reshaping", "a = np.full((2,10), 1, dtype=float)\na\n\na.reshape(4, 5)", "Sorting", "l = [\n [1,2,3],\n [34,12,4],\n [32,2,33]\n]\na = np.array(l)\na\n\nnp.sort(a)\na.sort(axis=0)\na", "Concatenation", "a = np.array([1, 2, 3])\nb = np.array([[0, 2, 4], [1, 3, 5]])\n\n# concatenate along rows\nnp.concatenate([b,b,b], axis=0)\n\n# concatenate along columns\nnp.concatenate([b,b,b], axis=1)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
sainathadapa/fastai-courses
deeplearning1/nbs-custom-mine/lesson1_01.ipynb
apache-2.0
[ "Using Convolutional Neural Networks\nWelcome to the first week of the first deep learning certificate! We're going to use convolutional neural networks (CNNs) to allow our computer to see - something that is only possible thanks to deep learning.\nIntroduction to this week's task: 'Dogs vs Cats'\nWe're going to try to create a model to enter the Dogs vs Cats competition at Kaggle. There are 25,000 labelled dog and cat photos available for training, and 12,500 in the test set that we have to try to label for this competition. According to the Kaggle web-site, when this competition was launched (end of 2013): \"State of the art: The current literature suggests machine classifiers can score above 80% accuracy on this task\". So if we can beat 80%, then we will be at the cutting edge as of 2013!\nBasic setup\nThere isn't too much to do to get started - just a few simple configuration steps.\nThis shows plots in the web page itself - we always want to use this when using jupyter notebook:", "%matplotlib inline", "Define path to data: (It's a good idea to put it in a subdirectory of your notebooks folder, and then exclude that directory from git control by adding it to .gitignore.)", "path = \"data/dogscats/\"\n#path = \"data/dogscats/sample/\"", "A few basic libraries that we'll need for the initial exercises:", "from __future__ import division,print_function\n\nimport os, json\nfrom glob import glob\nimport numpy as np\nnp.set_printoptions(precision=4, linewidth=100)\nfrom matplotlib import pyplot as plt", "We have created a file most imaginatively called 'utils.py' to store any little convenience functions we'll want to use. We will discuss these as we use them.", "import utils; reload(utils)\nfrom utils import plots", "Use a pretrained VGG model with our Vgg16 class\nOur first step is simply to use a model that has been fully created for us, which can recognise a wide variety (1,000 categories) of images.
We will use 'VGG', which won the 2014 Imagenet competition, and is a very simple model to create and understand. The VGG Imagenet team created both a larger, slower, slightly more accurate model (VGG 19) and a smaller, faster model (VGG 16). We will be using VGG 16 since the much slower performance of VGG19 is generally not worth the very minor improvement in accuracy.\nWe have created a python class, Vgg16, which makes using the VGG 16 model very straightforward. \nThe punchline: state of the art custom model in 7 lines of code\nHere's everything you need to do to get >97% accuracy on the Dogs vs Cats dataset - we won't analyze how it works behind the scenes yet, since at this stage we're just going to focus on the minimum necessary to actually do useful work.", "# As large as you can, but no larger than 64 is recommended. \n# If you have an older or cheaper GPU, you'll run out of memory, so will have to decrease this.\nbatch_size=64\n\n# Import our class, and instantiate\nimport vgg16; reload(vgg16)\nfrom vgg16 import Vgg16\n\nvgg = Vgg16()\n# Grab a few images at a time for training and validation.\n# NB: They must be in subdirectories named based on their category\nbatches = vgg.get_batches(path+'train', batch_size=batch_size)\nval_batches = vgg.get_batches(path+'valid', batch_size=batch_size*2)\nvgg.finetune(batches)\nvgg.fit(batches, val_batches, nb_epoch=1)", "The code above will work for any image recognition task, with any number of categories! All you have to do is to put your images into one folder per category, and run the code above.\nLet's take a look at how this works, step by step...\nUse Vgg16 for basic image recognition\nLet's start off by using the Vgg16 class to recognise the main imagenet category for each image.\nWe won't be able to enter the Cats vs Dogs competition with an Imagenet model alone, since 'cat' and 'dog' are not categories in Imagenet - instead each individual breed is a separate category. 
However, we can use it to see how well it can recognise the images, which is a good first step.\nFirst, create a Vgg16 object:", "vgg = Vgg16()", "Vgg16 is built on top of Keras (which we will be learning much more about shortly!), a flexible, easy to use deep learning library that sits on top of Theano or Tensorflow. Keras reads groups of images and labels in batches, using a fixed directory structure, where images from each category for training must be placed in a separate folder.\nLet's grab batches of data from our training folder:", "batches = vgg.get_batches(path+'train', batch_size=4)", "(BTW, when Keras refers to 'classes', it doesn't mean python classes - but rather it refers to the categories of the labels, such as 'pug', or 'tabby'.)\nBatches is just a regular python iterator. Each iteration returns both the images themselves, as well as the labels.", "imgs,labels = next(batches)", "As you can see, the labels for each image are an array, containing a 1 in the first position if it's a cat, and in the second position if it's a dog. This approach to encoding categorical variables, where an array containing just a single 1 in the position corresponding to the category, is very common in deep learning. It is called one hot encoding. \nThe arrays contain two elements, because we have two categories (cat, and dog). If we had three categories (e.g. 
cats, dogs, and kangaroos), then the arrays would each contain two 0's, and one 1.", "plots(imgs, titles=labels)", "We can now pass the images to Vgg16's predict() function to get back probabilities, category indexes, and category names for each image's VGG prediction.", "vgg.predict(imgs, True)", "The category indexes are based on the ordering of categories used in the VGG model - e.g. here are the first four:", "vgg.classes[:4]", "(Note that, other than creating the Vgg16 object, none of these steps are necessary to build a model; they are just showing how to use the class to view imagenet predictions.)\nUse our Vgg16 class to finetune a Dogs vs Cats model\nTo change our model so that it outputs \"cat\" vs \"dog\", instead of one of 1,000 very specific categories, we need to use a process called \"finetuning\". Finetuning looks from the outside to be identical to normal machine learning training - we provide a training set with data and labels to learn from, and a validation set to test against. The model learns a set of parameters based on the data provided.\nHowever, the difference is that we start with a model that is already trained to solve a similar problem. The idea is that many of the parameters should be very similar, or the same, between the existing model, and the model we wish to create. Therefore, we only select a subset of parameters to train, and leave the rest untouched. This happens automatically when we call fit() after calling finetune().\nWe create our batches just like before, and make the validation set available as well. 
A 'batch' (or mini-batch as it is commonly known) is simply a subset of the training data - we use a subset at a time when training or predicting, in order to speed up training, and to avoid running out of memory.", "batch_size=64\n\nbatches = vgg.get_batches(path+'train', batch_size=batch_size)\nval_batches = vgg.get_batches(path+'valid', batch_size=batch_size)", "Calling finetune() modifies the model such that it will be trained based on the data in the batches provided - in this case, to predict either 'dog' or 'cat'.", "vgg.finetune(batches)", "Finally, we fit() the parameters of the model using the training data, reporting the accuracy on the validation set after every epoch. (An epoch is one full pass through the training data.)", "vgg.fit(batches, val_batches, nb_epoch=1)", "That shows all of the steps involved in using the Vgg16 class to create an image recognition model using whatever labels you are interested in. For instance, this process could classify paintings by style, or leaves by type of disease, or satellite photos by type of crop, and so forth.\nNext up, we'll dig one level deeper to see what's going on in the Vgg16 class.\nCreate a VGG model from scratch in Keras\nFor the rest of this tutorial, we will not be using the Vgg16 class at all. Instead, we will recreate from scratch the functionality we just used. This is not necessary if all you want to do is use the existing model - but if you want to create your own models, you'll need to understand these details. 
It will also help you in the future when you debug any problems with your models, since you'll understand what's going on behind the scenes.\nModel setup\nWe need to import all the modules we'll be using from numpy, scipy, and keras:", "from numpy.random import random, permutation\nfrom scipy import misc, ndimage\nfrom scipy.ndimage.interpolation import zoom\n\nimport keras\nfrom keras import backend as K\nfrom keras.utils.data_utils import get_file\nfrom keras.models import Sequential, Model\nfrom keras.layers.core import Flatten, Dense, Dropout, Lambda\nfrom keras.layers import Input\nfrom keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D\nfrom keras.optimizers import SGD, RMSprop\nfrom keras.preprocessing import image", "Let's import the mappings from VGG ids to imagenet category ids and descriptions, for display purposes later.", "FILES_PATH = 'http://files.fast.ai/models/'; CLASS_FILE='imagenet_class_index.json'\n# Keras' get_file() is a handy function that downloads files, and caches them for re-use later\nfpath = get_file(CLASS_FILE, FILES_PATH+CLASS_FILE, cache_subdir='models')\nwith open(fpath) as f: class_dict = json.load(f)\n# Convert dictionary with string indexes into an array\nclasses = [class_dict[str(i)][1] for i in range(len(class_dict))]", "Here's a few examples of the categories we just imported:", "classes[:5]", "Model creation\nCreating the model involves creating the model architecture, and then loading the model weights into that architecture. We will start by defining the basic pieces of the VGG architecture.\nVGG has just one type of convolutional block, and one type of fully connected ('dense') block. 
Here's the convolutional block definition:", "def ConvBlock(layers, model, filters):\n for i in range(layers): \n model.add(ZeroPadding2D((1,1)))\n model.add(Convolution2D(filters, 3, 3, activation='relu'))\n model.add(MaxPooling2D((2,2), strides=(2,2)))", "...and here's the fully-connected definition.", "def FCBlock(model):\n model.add(Dense(4096, activation='relu'))\n model.add(Dropout(0.5))", "When the VGG model was trained in 2014, the creators subtracted the average of each of the three (R,G,B) channels first, so that the data for each channel had a mean of zero. Furthermore, their software expected the channels to be in B,G,R order, whereas Python by default uses R,G,B. We need to preprocess our data to make these two changes, so that it is compatible with the VGG model:", "# Mean of each channel as provided by VGG researchers\nvgg_mean = np.array([123.68, 116.779, 103.939]).reshape((3,1,1))\n\ndef vgg_preprocess(x):\n x = x - vgg_mean # subtract mean\n return x[:, ::-1] # reverse channel axis rgb->bgr", "Now we're ready to define the VGG model architecture - look at how simple it is, now that we have the basic blocks defined!", "def VGG_16():\n model = Sequential()\n model.add(Lambda(vgg_preprocess, input_shape=(3,224,224)))\n\n ConvBlock(2, model, 64)\n ConvBlock(2, model, 128)\n ConvBlock(3, model, 256)\n ConvBlock(3, model, 512)\n ConvBlock(3, model, 512)\n\n model.add(Flatten())\n FCBlock(model)\n FCBlock(model)\n model.add(Dense(1000, activation='softmax'))\n return model", "We'll learn about what these different blocks do later in the course. For now, it's enough to know that:\n\nConvolution layers are for finding patterns in images\nDense (fully connected) layers are for combining patterns across an image\n\nNow that we've defined the architecture, we can create the model like any python object:", "model = VGG_16()", "As well as the architecture, we need the weights that the VGG creators trained. 
The weights are the part of the model that is learnt from the data, whereas the architecture is pre-defined based on the nature of the problem. \nDownloading pre-trained weights is much preferred to training the model ourselves, since otherwise we would have to download the entire Imagenet archive, and train the model for many days! It's very helpful when researchers release their weights, as they did here.", "fpath = get_file('vgg16.h5', FILES_PATH+'vgg16.h5', cache_subdir='models')\nmodel.load_weights(fpath)", "Getting imagenet predictions\nThe setup of the imagenet model is now complete, so all we have to do is grab a batch of images and call predict() on them.", "batch_size = 4", "Keras provides functionality to create batches of data from directories containing images; all we have to do is to define the size to resize the images to, what type of labels to create, whether to randomly shuffle the images, and how many images to include in each batch. We use this little wrapper to define some helpful defaults appropriate for imagenet data:", "def get_batches(dirname, gen=image.ImageDataGenerator(), shuffle=True, \n batch_size=batch_size, class_mode='categorical'):\n return gen.flow_from_directory(path+dirname, target_size=(224,224), \n class_mode=class_mode, shuffle=shuffle, batch_size=batch_size)", "From here we can use exactly the same steps as before to look at predictions from the model.", "batches = get_batches('train', batch_size=batch_size)\nval_batches = get_batches('valid', batch_size=batch_size)\nimgs,labels = next(batches)\n\n# This shows the 'ground truth'\nplots(imgs, titles=labels)", "The VGG model returns 1,000 probabilities for each image, representing the probability that the model assigns to each possible imagenet category for each image. 
By finding the index with the largest probability (with np.argmax()) we can find the predicted label.", "def pred_batch(imgs):\n preds = model.predict(imgs)\n idxs = np.argmax(preds, axis=1)\n\n print('Shape: {}'.format(preds.shape))\n print('First 5 classes: {}'.format(classes[:5]))\n print('First 5 probabilities: {}\\n'.format(preds[0, :5]))\n print('Predictions prob/class: ')\n \n for i in range(len(idxs)):\n idx = idxs[i]\n print (' {:.4f}/{}'.format(preds[i, idx], classes[idx]))\n\npred_batch(imgs)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.23/_downloads/cc2f4b498fc65366ac39d017e939eec5/xdawn_denoising.ipynb
bsd-3-clause
[ "%matplotlib inline", "XDAWN Denoising\nXDAWN filters are trained from epochs, signal is projected in the sources\nspace and then projected back in the sensor space using only the first two\nXDAWN components. The process is similar to an ICA, but is\nsupervised in order to maximize the signal to signal + noise ratio of the\nevoked response :footcite:RivetEtAl2009, RivetEtAl2011.\n<div class=\"alert alert-danger\"><h4>Warning</h4><p>As this denoising method exploits the known events to\n maximize SNR of the contrast between conditions it can lead\n to overfitting. To avoid a statistical analysis problem you\n should split epochs used in fit with the ones used in\n apply method.</p></div>", "# Authors: Alexandre Barachant <alexandre.barachant@gmail.com>\n#\n# License: BSD (3-clause)\n\n\nfrom mne import (io, compute_raw_covariance, read_events, pick_types, Epochs)\nfrom mne.datasets import sample\nfrom mne.preprocessing import Xdawn\nfrom mne.viz import plot_epochs_image\n\nprint(__doc__)\n\ndata_path = sample.data_path()", "Set parameters and read data", "raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\ntmin, tmax = -0.1, 0.3\nevent_id = dict(vis_r=4)\n\n# Setup for reading the raw data\nraw = io.read_raw_fif(raw_fname, preload=True)\nraw.filter(1, 20, fir_design='firwin') # replace baselining with high-pass\nevents = read_events(event_fname)\n\nraw.info['bads'] = ['MEG 2443'] # set bad channels\npicks = pick_types(raw.info, meg=True, eeg=False, stim=False, eog=False,\n exclude='bads')\n# Epoching\nepochs = Epochs(raw, events, event_id, tmin, tmax, proj=False,\n picks=picks, baseline=None, preload=True,\n verbose=False)\n\n# Plot image epoch before xdawn\nplot_epochs_image(epochs['vis_r'], picks=[230], vmin=-500, vmax=500)", "Now, we estimate a set of xDAWN filters for the epochs (which contain only\nthe vis_r class).", "# Estimates signal covariance\nsignal_cov = 
compute_raw_covariance(raw, picks=picks)\n\n# Xdawn instance\nxd = Xdawn(n_components=2, signal_cov=signal_cov)\n\n# Fit xdawn\nxd.fit(epochs)", "Epochs are denoised by calling apply, which by default keeps only the\nsignal subspace corresponding to the first n_components specified in the\nXdawn constructor above.", "epochs_denoised = xd.apply(epochs)\n\n# Plot image epoch after Xdawn\nplot_epochs_image(epochs_denoised['vis_r'], picks=[230], vmin=-500, vmax=500)", "References\n.. footbibliography::" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
macks22/gensim
docs/notebooks/translation_matrix.ipynb
lgpl-2.1
[ "Tranlation Matrix Tutorial\nWhat is it ?\nSuppose we are given a setofword pairs and their associated vector representaion ${x_{i},z_{i}}{i=1}^{n}$, where $x{i} \\in R^{d_{1}}$ is the distibuted representation of word $i$ in the source language, and ${z_{i} \\in R^{d_{2}}}$ is the vector representation of its translation. Our goal is to find a transformation matrix $W$ such that $Wx_{i}$ approximates $z_{i}$. In practice, $W$ can be learned by the following optimization prolem:\n<center>$\\min \\limits_{W} \\sum \\limits_{i=1}^{n} ||Wx_{i}-z_{i}||^{2}$</center>\nResources\nTomas Mikolov, Quoc V Le, Ilya Sutskever. 2013.Exploiting Similarities among Languages for Machine Translation\nGeorgiana Dinu, Angelikie Lazaridou and Marco Baroni. 2014.Improving zero-shot learning by mitigating the hubness problem", "import os\n\nfrom gensim import utils\nfrom gensim.models import translation_matrix\nfrom gensim.models import KeyedVectors", "For this tutorial, we'll be training our model using the English -> Italian word pairs from the OPUS collection. This corpus contains 5000 word pairs. Each pair is a English word and corresponding Italian word.\ndataset download: \nOPUS_en_it_europarl_train_5K.txt", "train_file = \"OPUS_en_it_europarl_train_5K.txt\"\n\nwith utils.smart_open(train_file, \"r\") as f:\n word_pair = [tuple(utils.to_unicode(line).strip().split()) for line in f]\nprint word_pair[:10]", "This tutorial uses 300-dimensional vectors of English words as source and vectors of Italian words as target.(those vector trained by the word2vec toolkit with cbow. 
The context window was set to 5 words on either side of the target, the sub-sampling option was set to 1e-05, and the probability of a target word was estimated with the negative sampling method, drawing 10 samples from the noise distribution.)\ndataset download:\nEN.200K.cbow1_wind5_hs0_neg10_size300_smpl1e-05.txt\nIT.200K.cbow1_wind5_hs0_neg10_size300_smpl1e-05.txt", "# Load the source language word vectors\nsource_word_vec_file = \"EN.200K.cbow1_wind5_hs0_neg10_size300_smpl1e-05.txt\"\nsource_word_vec = KeyedVectors.load_word2vec_format(source_word_vec_file, binary=False)\n\n# Load the target language word vectors\ntarget_word_vec_file = \"IT.200K.cbow1_wind5_hs0_neg10_size300_smpl1e-05.txt\"\ntarget_word_vec = KeyedVectors.load_word2vec_format(target_word_vec_file, binary=False)", "Training the translation matrix", "transmat = translation_matrix.TranslationMatrix(word_pair, source_word_vec, target_word_vec)\ntransmat.train(word_pair)\nprint \"the shape of the translation matrix is: \", transmat.translation_matrix.shape", "Prediction time: for any given new word, we can map it to the other language space by computing $z = Wx$; then we find the word whose representation is closest to $z$ in the target language space, using cosine similarity as the distance metric.\npart one:\nLet's look at some number translations. We use the English words (one, two, three, four and five) as a test.", "# the pair is (English, Italian); we can see whether the translated word is right or not\nwords = [(\"one\", \"uno\"), (\"two\", \"due\"), (\"three\", \"tre\"), (\"four\", \"quattro\"), (\"five\", \"cinque\")]\nsource_word, target_word = zip(*words)\ntranslated_word = transmat.translate(source_word, 5)\n\nfor k, v in translated_word.iteritems():\n print \"word \", k, \" and translated word\", v", "part two:\nLet's look at some fruit translations. 
We use the English words (apple, orange, grape, banana and mango) as a test.", "words = [(\"apple\", \"mela\"), (\"orange\", \"arancione\"), (\"grape\", \"acino\"), (\"banana\", \"banana\"), (\"mango\", \"mango\")]\nsource_word, target_word = zip(*words)\ntranslated_word = transmat.translate(source_word, 5)\nfor k, v in translated_word.iteritems():\n print \"word \", k, \" and translated word\", v", "part three:\nLet's look at some animal translations. We use the English words (dog, pig, cat, horse and birds) as a test.", "words = [(\"dog\", \"cane\"), (\"pig\", \"maiale\"), (\"cat\", \"gatto\"), (\"horse\", \"cavallo\"), (\"birds\", \"uccelli\")]\nsource_word, target_word = zip(*words)\ntranslated_word = transmat.translate(source_word, 5)\nfor k, v in translated_word.iteritems():\n print \"word \", k, \" and translated word\", v", "The Creation Time for the Translation Matrix\nTo test the creation time, we extracted more word pairs from a dictionary built from Europarl (en-it). We obtained about 20K word pairs and their corresponding word vectors. Or you can download them from this: word_dict.pkl", "import pickle\nword_dict = \"word_dict.pkl\"\nwith utils.smart_open(word_dict, \"r\") as f:\n word_pair = pickle.load(f)\nprint \"the length of word pair \", len(word_pair)\n\nimport time\n\ntest_case = 10\nword_pair_length = len(word_pair)\nstep = word_pair_length / test_case\n\nduration = []\nsizeofword = []\n\nfor idx in xrange(0, test_case):\n sub_pair = word_pair[: (idx + 1) * step]\n\n startTime = time.time()\n transmat = translation_matrix.TranslationMatrix(sub_pair, source_word_vec, target_word_vec)\n transmat.train(sub_pair)\n endTime = time.time()\n \n sizeofword.append(len(sub_pair))\n duration.append(endTime - startTime)\n\nimport plotly\nfrom plotly.graph_objs import Scatter, Layout\n\nplotly.offline.init_notebook_mode(connected=True)\n\nplotly.offline.iplot({\n \"data\": [Scatter(x=sizeofword, y=duration)],\n \"layout\": Layout(title=\"time for creation\"),\n}, 
filename=\"tm_creation_time.html\")", "You will see a two dimensional coordination whose horizontal axis is the size of corpus and vertical axis is the time to train a translation matrix (the unit is second). As the size of corpus increases, the time increases linearly.\nLinear Relationship Between Languages\nTo have a better understanding of the principles behind, we visualized the word vectors using PCA, we noticed that the vector representations of similar words in different languages were related by a linear transformation.", "from sklearn.decomposition import PCA\n\nimport plotly\nfrom plotly.graph_objs import Scatter, Layout, Figure\nplotly.offline.init_notebook_mode(connected=True)\n\nwords = [(\"one\", \"uno\"), (\"two\", \"due\"), (\"three\", \"tre\"), (\"four\", \"quattro\"), (\"five\", \"cinque\")]\nen_words_vec = [source_word_vec[item[0]] for item in words]\nit_words_vec = [target_word_vec[item[1]] for item in words]\n\nen_words, it_words = zip(*words)\n\npca = PCA(n_components=2)\nnew_en_words_vec = pca.fit_transform(en_words_vec)\nnew_it_words_vec = pca.fit_transform(it_words_vec)\n\n# remove the code, use the plotly for ploting instead\n# fig = plt.figure()\n# fig.add_subplot(121)\n# plt.scatter(new_en_words_vec[:, 0], new_en_words_vec[:, 1])\n# for idx, item in enumerate(en_words):\n# plt.annotate(item, xy=(new_en_words_vec[idx][0], new_en_words_vec[idx][1]))\n\n# fig.add_subplot(122)\n# plt.scatter(new_it_words_vec[:, 0], new_it_words_vec[:, 1])\n# for idx, item in enumerate(it_words):\n# plt.annotate(item, xy=(new_it_words_vec[idx][0], new_it_words_vec[idx][1]))\n# plt.show()\n\n# you can also using plotly lib to plot in one figure\ntrace1 = Scatter(\n x = new_en_words_vec[:, 0],\n y = new_en_words_vec[:, 1],\n mode = 'markers+text',\n text = en_words,\n textposition = 'top'\n)\ntrace2 = Scatter(\n x = new_it_words_vec[:, 0],\n y = new_it_words_vec[:, 1],\n mode = 'markers+text',\n text = it_words,\n textposition = 'top'\n)\nlayout = Layout(\n 
showlegend = False\n)\ndata = [trace1, trace2]\n\nfig = Figure(data=data, layout=layout)\nplot_url = plotly.offline.iplot(fig, filename='relatie_position_for_number.html')", "The figure shows that the word vectors for the English numbers one to five and the corresponding Italian words uno to cinque have similar geometric arrangements. So the relationship between the vector spaces that represent these two languages can be captured by a linear mapping. \nIf we know the translations of one and four from English to Italian, we can learn a transformation matrix that helps us to translate five or other numbers.", "words = [(\"one\", \"uno\"), (\"two\", \"due\"), (\"three\", \"tre\"), (\"four\", \"quattro\"), (\"five\", \"cinque\")]\nen_words, it_words = zip(*words)\nen_words_vec = [source_word_vec[item[0]] for item in words]\nit_words_vec = [target_word_vec[item[1]] for item in words]\n\n# translate the English word five to Italian\ntranslated_word = transmat.translate([en_words[4]], 3)\nprint \"translation of five: \", translated_word\n\n# the translated words of five\nfor item in translated_word[en_words[4]]:\n it_words_vec.append(target_word_vec[item])\n\npca = PCA(n_components=2)\nnew_en_words_vec = pca.fit_transform(en_words_vec)\nnew_it_words_vec = pca.fit_transform(it_words_vec)\n\n# removed the matplotlib code; use plotly for plotting instead\n# fig = plt.figure()\n# fig.add_subplot(121)\n# plt.scatter(new_en_words_vec[:, 0], new_en_words_vec[:, 1])\n# for idx, item in enumerate(en_words):\n# plt.annotate(item, xy=(new_en_words_vec[idx][0], new_en_words_vec[idx][1]))\n\n# fig.add_subplot(122)\n# plt.scatter(new_it_words_vec[:, 0], new_it_words_vec[:, 1])\n# for idx, item in enumerate(it_words):\n# plt.annotate(item, xy=(new_it_words_vec[idx][0], new_it_words_vec[idx][1]))\n# # annotate the translation of five; the red text annotation is the translation of five\n# for idx, item in enumerate(translated_word[en_words[4]]):\n# plt.annotate(item, xy=(new_it_words_vec[idx + 5][0], 
new_it_words_vec[idx + 5][1]),\n# xytext=(new_it_words_vec[idx + 5][0] + 0.1, new_it_words_vec[idx + 5][1] + 0.1),\n# color=\"red\",\n# arrowprops=dict(facecolor='red', shrink=0.1, width=1, headwidth=2),)\n# plt.show()\n\ntrace1 = Scatter(\n x = new_en_words_vec[:, 0],\n y = new_en_words_vec[:, 1],\n mode = 'markers+text',\n text = en_words,\n textposition = 'top'\n)\ntrace2 = Scatter(\n x = new_it_words_vec[:5, 0],\n y = new_it_words_vec[:5, 1],\n mode = 'markers+text',\n text = it_words,\n textposition = 'top'\n)\nlayout = Layout(\n showlegend = False,\n annotations = [dict(\n x = new_it_words_vec[5][0],\n y = new_it_words_vec[5][1],\n text = translated_word[en_words[4]][0],\n arrowcolor = \"black\",\n arrowsize = 1.5,\n arrowwidth = 1,\n arrowhead = 0.5\n ), dict(\n x = new_it_words_vec[6][0],\n y = new_it_words_vec[6][1],\n text = translated_word[en_words[4]][1],\n arrowcolor = \"black\",\n arrowsize = 1.5,\n arrowwidth = 1,\n arrowhead = 0.5\n ), dict(\n x = new_it_words_vec[7][0],\n y = new_it_words_vec[7][1],\n text = translated_word[en_words[4]][2],\n arrowcolor = \"black\",\n arrowsize = 1.5,\n arrowwidth = 1,\n arrowhead = 0.5\n )]\n)\ndata = [trace1, trace2]\n\nfig = Figure(data=data, layout=layout)\nplot_url = plotly.offline.iplot(fig, filename='relatie_position_for_numbers.html')", "You will probably see two kinds of nodes in different colors, one for the English words and the other for the Italian. For the translation of the word five, we return the top 3 similar words [u'cinque', u'quattro', u'tre']. 
We can easily see that the translation is convincing.\nLet's look at some animal words; the figure shows that most of the words also share similar geometric arrangements.", "words = [(\"dog\", \"cane\"), (\"pig\", \"maiale\"), (\"cat\", \"gatto\"), (\"horse\", \"cavallo\"), (\"birds\", \"uccelli\")]\nen_words_vec = [source_word_vec[item[0]] for item in words]\nit_words_vec = [target_word_vec[item[1]] for item in words]\n\nen_words, it_words = zip(*words)\n\npca = PCA(n_components=2)\nnew_en_words_vec = pca.fit_transform(en_words_vec)\nnew_it_words_vec = pca.fit_transform(it_words_vec)\n\n# removed the matplotlib code; use plotly for plotting instead\n# fig = plt.figure()\n# fig.add_subplot(121)\n# plt.scatter(new_en_words_vec[:, 0], new_en_words_vec[:, 1])\n# for idx, item in enumerate(en_words):\n# plt.annotate(item, xy=(new_en_words_vec[idx][0], new_en_words_vec[idx][1]))\n\n# fig.add_subplot(122)\n# plt.scatter(new_it_words_vec[:, 0], new_it_words_vec[:, 1])\n# for idx, item in enumerate(it_words):\n# plt.annotate(item, xy=(new_it_words_vec[idx][0], new_it_words_vec[idx][1]))\n# plt.show()\n\ntrace1 = Scatter(\n x = new_en_words_vec[:, 0],\n y = new_en_words_vec[:, 1],\n mode = 'markers+text',\n text = en_words,\n textposition = 'top'\n)\ntrace2 = Scatter(\n x = new_it_words_vec[:, 0],\n y = new_it_words_vec[:, 1],\n mode = 'markers+text',\n text = it_words,\n textposition = 'top'\n)\nlayout = Layout(\n showlegend = False\n)\ndata = [trace1, trace2]\n\nfig = Figure(data=data, layout=layout)\nplot_url = plotly.offline.iplot(fig, filename='relatie_position_for_animal.html')\n\nwords = [(\"dog\", \"cane\"), (\"pig\", \"maiale\"), (\"cat\", \"gatto\"), (\"horse\", \"cavallo\"), (\"birds\", \"uccelli\")]\nen_words, it_words = zip(*words)\nen_words_vec = [source_word_vec[item[0]] for item in words]\nit_words_vec = [target_word_vec[item[1]] for item in words]\n\n# translate the English word birds to Italian\ntranslated_word = transmat.translate([en_words[4]], 3)\nprint 
\"translation of birds: \", translated_word\n\n# the translated words of birds\nfor item in translated_word[en_words[4]]:\n it_words_vec.append(target_word_vec[item])\n\npca = PCA(n_components=2)\nnew_en_words_vec = pca.fit_transform(en_words_vec)\nnew_it_words_vec = pca.fit_transform(it_words_vec)\n\n# # remove the code, use the plotly for ploting instead\n# fig = plt.figure()\n# fig.add_subplot(121)\n# plt.scatter(new_en_words_vec[:, 0], new_en_words_vec[:, 1])\n# for idx, item in enumerate(en_words):\n# plt.annotate(item, xy=(new_en_words_vec[idx][0], new_en_words_vec[idx][1]))\n\n# fig.add_subplot(122)\n# plt.scatter(new_it_words_vec[:, 0], new_it_words_vec[:, 1])\n# for idx, item in enumerate(it_words):\n# plt.annotate(item, xy=(new_it_words_vec[idx][0], new_it_words_vec[idx][1]))\n# # annote for the translation of five, the red text annotation is the translation of five\n# for idx, item in enumerate(translated_word[en_words[4]]):\n# plt.annotate(item, xy=(new_it_words_vec[idx + 5][0], new_it_words_vec[idx + 5][1]),\n# xytext=(new_it_words_vec[idx + 5][0] + 0.1, new_it_words_vec[idx + 5][1] + 0.1),\n# color=\"red\",\n# arrowprops=dict(facecolor='red', shrink=0.1, width=1, headwidth=2),)\n# plt.show()\n\ntrace1 = Scatter(\n x = new_en_words_vec[:, 0],\n y = new_en_words_vec[:, 1],\n mode = 'markers+text',\n text = en_words,\n textposition = 'top'\n)\ntrace2 = Scatter(\n x = new_it_words_vec[:5, 0],\n y = new_it_words_vec[:5, 1],\n mode = 'markers+text',\n text = it_words[:5],\n textposition = 'top'\n)\nlayout = Layout(\n showlegend = False,\n annotations = [dict(\n x = new_it_words_vec[5][0],\n y = new_it_words_vec[5][1],\n text = translated_word[en_words[4]][0],\n arrowcolor = \"black\",\n arrowsize = 1.5,\n arrowwidth = 1,\n arrowhead = 0.5\n ), dict(\n x = new_it_words_vec[6][0],\n y = new_it_words_vec[6][1],\n text = translated_word[en_words[4]][1],\n arrowcolor = \"black\",\n arrowsize = 1.5,\n arrowwidth = 1,\n arrowhead = 0.5\n ), dict(\n x = 
new_it_words_vec[7][0],\n y = new_it_words_vec[7][1],\n text = translated_word[en_words[4]][2],\n arrowcolor = \"black\",\n arrowsize = 1.5,\n arrowwidth = 1,\n arrowhead = 0.5\n )]\n)\ndata = [trace1, trace2]\n\nfig = Figure(data=data, layout=layout)\nplot_url = plotly.offline.iplot(fig, filename='relatie_position_for_animal.html')", "You will probably see two kinds of nodes in different colors, one for the English words and the other for the Italian. For the translation of the word birds, we return the top 3 similar words [u'uccelli', u'garzette', u'iguane']. We can easily see that the animal word translations are as convincing as the number translations.\nTranslation Matrix Revisited\nAs discussed in this PR, the Translation Matrix can be used not only to translate words from one source language to another target language, but also to translate new document vectors back to an old model's space.\nFor example, suppose we have trained 15k documents using doc2vec (we call this model1), and we are going to train 35k new documents using doc2vec (we call this model2). We can include those 15k documents as reference documents alongside the new 35k documents. Then we can get 15k document vectors from model1 and 50k document vectors from model2, and both models have vectors for those 15k documents. We can use those vectors to build a mapping from model1 to model2. Finally, with this relation, we can back-map model2's vectors to model1's space. Therefore, the 35k new document vectors are learned using this method.\nIn this notebook, we use the IMDB dataset as an example. For more information about this dataset, please refer to this. 
And some of the code is borrowed from this notebook", "import gensim\nfrom gensim.models.doc2vec import TaggedDocument\nfrom gensim.models import Doc2Vec\nfrom collections import namedtuple\nfrom gensim import utils\n\ndef read_sentimentDocs():\n SentimentDocument = namedtuple('SentimentDocument', 'words tags split sentiment')\n\n alldocs = [] # will hold all docs in original order\n with utils.smart_open('aclImdb/alldata-id.txt', encoding='utf-8') as alldata:\n for line_no, line in enumerate(alldata):\n tokens = gensim.utils.to_unicode(line).split()\n words = tokens[1:]\n tags = [line_no] # `tags = [tokens[0]]` would also work at extra memory cost\n split = ['train','test','extra','extra'][line_no // 25000] # 25k train, 25k test, 25k extra\n sentiment = [1.0, 0.0, 1.0, 0.0, None, None, None, None][line_no // 12500] # [12.5K pos, 12.5K neg]*2 then unknown\n alldocs.append(SentimentDocument(words, tags, split, sentiment))\n\n train_docs = [doc for doc in alldocs if doc.split == 'train']\n test_docs = [doc for doc in alldocs if doc.split == 'test']\n doc_list = alldocs[:] # for reshuffling per pass\n\n print('%d docs: %d train-sentiment, %d test-sentiment' % (len(doc_list), len(train_docs), len(test_docs)))\n\n return train_docs, test_docs, doc_list\n\ntrain_docs, test_docs, doc_list = read_sentimentDocs()\n\nsmall_corpus = train_docs[:15000]\nlarge_corpus = train_docs + test_docs\n\nprint len(train_docs), len(test_docs), len(doc_list), len(small_corpus), len(large_corpus)", "Here, we train two Doc2vec models; the parameters can be chosen by yourself. We train on 15k documents for model1 and 50k documents for model2. But you should mix some of the 15k documents from model1 into model2's training data, as discussed before.", "# Due to limited computing power, this was not run in the notebook. 
\n# You can train this on a server and save the models to disk.\nimport multiprocessing\nfrom random import shuffle\n\ncores = multiprocessing.cpu_count()\nmodel1 = Doc2Vec(dm=1, dm_concat=1, size=100, window=5, negative=5, hs=0, min_count=2, workers=cores)\nmodel2 = Doc2Vec(dm=1, dm_concat=1, size=100, window=5, negative=5, hs=0, min_count=2, workers=cores)\n\nsmall_train_docs = train_docs[:15000]\n# train for small corpus\nmodel1.build_vocab(small_train_docs)\nfor epoch in xrange(50):\n shuffle(small_train_docs)\n model1.train(small_train_docs, total_examples=len(small_train_docs), epochs=1)\nmodel1.save(\"small_doc_15000_iter50.bin\")\n\nlarge_train_docs = train_docs + test_docs\n# train for large corpus\nmodel2.build_vocab(large_train_docs)\nfor epoch in xrange(50):\n shuffle(large_train_docs)\n model2.train(large_train_docs, total_examples=len(large_train_docs), epochs=1)\n# save the model\nmodel2.save(\"large_doc_50000_iter50.bin\")", "For the IMDB dataset, we train a classifier on the training data, which has 25k documents with positive and negative labels, and then use this classifier to predict labels for the test data. 
This lets us see what accuracy the document vectors learned by each method can achieve.", "import os\nimport numpy as np\nfrom sklearn.linear_model import LogisticRegression\n\ndef test_classifier_error(train, train_label, test, test_label):\n classifier = LogisticRegression()\n classifier.fit(train, train_label)\n score = classifier.score(test, test_label)\n print \"the classifier score:\", score\n return score", "In experiment one, we use the vectors learned by the Doc2vec method. To evaluate these document vectors, we split the 50k documents into two parts, one for training and the other for testing.", "# you can change the data folder\nbasedir = \"/home/robotcator/doc2vec\"\n\nmodel2 = Doc2Vec.load(os.path.join(basedir, \"large_doc_50000_iter50.bin\"))\nm2 = []\nfor i in range(len(large_corpus)):\n m2.append(model2.docvecs[large_corpus[i].tags])\n\ntrain_array = np.zeros((25000, 100))\ntrain_label = np.zeros((25000, 1))\ntest_array = np.zeros((25000, 100))\ntest_label = np.zeros((25000, 1))\n\nfor i in range(12500):\n train_array[i] = m2[i]\n train_label[i] = 1\n\n train_array[i + 12500] = m2[i + 12500]\n train_label[i + 12500] = 0\n\n test_array[i] = m2[i + 25000]\n test_label[i] = 1\n\n test_array[i + 12500] = m2[i + 37500]\n test_label[i + 12500] = 0\n\nprint \"The vectors are learned by the doc2vec method\"\ntest_classifier_error(train_array, train_label, test_array, test_label)", "In experiment two, the document vectors are learned by the back-mapping method, which learns a linear mapping between model1 and model2. 
Just as the translation matrix maps words between languages, if we provide the vectors of the additional 35k documents in model2, we can infer their vectors in model1.", "from gensim.models import translation_matrix\n# you can change the data folder\nbasedir = \"/home/robotcator/doc2vec\"\n\nmodel1 = Doc2Vec.load(os.path.join(basedir, \"small_doc_15000_iter50.bin\"))\nmodel2 = Doc2Vec.load(os.path.join(basedir, \"large_doc_50000_iter50.bin\"))\n\nl = model1.docvecs.count\nl2 = model2.docvecs.count\nm1 = np.array([model1.docvecs[large_corpus[i].tags].flatten() for i in range(l)])\n\n# learn the mapping between the two models\nmodel = translation_matrix.BackMappingTranslationMatrix(large_corpus[:15000], model1, model2)\nmodel.train(large_corpus[:15000])\n\nfor i in range(l, l2):\n inferred_vec = model.infer_vector(model2.docvecs[large_corpus[i].tags])\n m1 = np.vstack((m1, inferred_vec.flatten()))\n\ntrain_array = np.zeros((25000, 100))\ntrain_label = np.zeros((25000, 1))\ntest_array = np.zeros((25000, 100))\ntest_label = np.zeros((25000, 1))\n\n# of these documents, 25k have positive labels and 25k have negative labels\nfor i in range(12500):\n train_array[i] = m1[i]\n train_label[i] = 1\n\n train_array[i + 12500] = m1[i + 12500]\n train_label[i + 12500] = 0\n\n test_array[i] = m1[i + 25000]\n test_label[i] = 1\n\n test_array[i + 12500] = m1[i + 37500]\n test_label[i + 12500] = 0\n\nprint \"The vectors are learned by the back-mapping method\"\ntest_classifier_error(train_array, train_label, test_array, test_label)", "As we can see, the vectors learned by the back-mapping method perform reasonably well, but there is still room for improvement.\nVisualization\nWe pick some documents and extract their vectors from both model1 and model2; we can see that they share a similar geometric arrangement.", "from sklearn.decomposition import PCA\n\nimport plotly\nfrom plotly.graph_objs import Scatter, Layout, Figure\nplotly.offline.init_notebook_mode(connected=True)\n\nm1_part 
= m1[14995: 15000]\nm2_part = m2[14995: 15000]\n\nm1_part = np.array(m1_part).reshape(len(m1_part), 100)\nm2_part = np.array(m2_part).reshape(len(m2_part), 100)\n\npca = PCA(n_components=2)\nreduced_vec1 = pca.fit_transform(m1_part)\nreduced_vec2 = pca.fit_transform(m2_part)\n\ntrace1 = Scatter(\n x = reduced_vec1[:, 0],\n y = reduced_vec1[:, 1],\n mode = 'markers+text',\n text = ['doc' + str(i) for i in range(len(reduced_vec1))],\n textposition = 'top'\n)\ntrace2 = Scatter(\n x = reduced_vec2[:, 0],\n y = reduced_vec2[:, 1],\n mode = 'markers+text',\n text = ['doc' + str(i) for i in range(len(reduced_vec1))],\n textposition = 'top'\n)\nlayout = Layout(\n showlegend = False\n)\ndata = [trace1, trace2]\n\nfig = Figure(data=data, layout=layout)\nplot_url = plotly.offline.iplot(fig, filename='doc_vec_vis')\n\nm1_part = m1[14995: 15002]\nm2_part = m2[14995: 15002]\n\nm1_part = np.array(m1_part).reshape(len(m1_part), 100)\nm2_part = np.array(m2_part).reshape(len(m2_part), 100)\n\npca = PCA(n_components=2)\nreduced_vec1 = pca.fit_transform(m1_part)\nreduced_vec2 = pca.fit_transform(m2_part)\n\ntrace1 = Scatter(\n x = reduced_vec1[:, 0],\n y = reduced_vec1[:, 1],\n mode = 'markers+text',\n text = ['sdoc' + str(i) for i in range(len(reduced_vec1))],\n textposition = 'top'\n)\ntrace2 = Scatter(\n x = reduced_vec2[:, 0],\n y = reduced_vec2[:, 1],\n mode = 'markers+text',\n text = ['tdoc' + str(i) for i in range(len(reduced_vec1))],\n textposition = 'top'\n)\nlayout = Layout(\n showlegend = False\n)\ndata = [trace1, trace2]\n\nfig = Figure(data=data, layout=layout)\nplot_url = plotly.offline.iplot(fig, filename='doc_vec_vis')\n", "You will probably see points in two colors. For model1, the sdoc0 to sdoc4 document vectors were learned by Doc2vec, while sdoc5 and sdoc6 were learned by back-mapping. For model2, tdoc0 to tdoc6 were all learned by Doc2vec. 
We can see that the points learned by the back-mapping method largely preserve their positions relative to the points learned by Doc2vec." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
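The gensim notebook above learns a linear mapping between two embedding spaces from the vectors of shared documents, then applies it to unseen vectors. The least-squares idea behind that mapping can be sketched in pure Python on made-up 2-D toy vectors — this is only an illustration of the math, not gensim's `TranslationMatrix` implementation:

```python
# Sketch of the mapping idea behind gensim's translation matrix: given paired
# vectors (x_i, y_i) from two embedding spaces, fit W minimising ||XW - Y||^2
# via the normal equations, then map an unseen vector. Toy 2-D data only.

def transpose(m):
    return [list(row) for row in zip(*m)]

def matmul(a, b):
    bt = transpose(b)
    return [[sum(x * y for x, y in zip(row, col)) for col in bt] for row in a]

def solve_2x2(a, b):
    # Cramer's rule is fine for this 2x2 toy system.
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [(b[0] * a[1][1] - b[1] * a[0][1]) / det,
            (b[1] * a[0][0] - b[0] * a[1][0]) / det]

def fit_linear_map(X, Y):
    # Normal equations (X^T X) W = X^T Y, solved one column of W at a time.
    xtx = matmul(transpose(X), X)
    xty = matmul(transpose(X), Y)
    cols = [solve_2x2(xtx, [row[j] for row in xty]) for j in range(len(xty[0]))]
    return transpose(cols)

# "Source-space" vectors and their images under a known 90-degree rotation.
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, -1.0]]
true_W = [[0.0, 1.0], [-1.0, 0.0]]
Y = matmul(X, true_W)

W = fit_linear_map(X, Y)             # recovers true_W exactly here
mapped = matmul([[3.0, 2.0]], W)[0]  # map an unseen source vector
```

Here the fit recovers the true map exactly because the toy targets are perfectly linear; with real embeddings the mapping is only approximate, which is why the notebook evaluates the back-mapped vectors with a classifier.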
barjacks/pythonrecherche
03 intro python II/01 Python II .ipynb
mit
[ "Python II\n\nReview: the most important functions\nMuch more powerful functions: modules and libraries\nA closer look at these simple functions\nBuilding our own functions\nStructure and troubleshooting\n\n1 The most important functions\nAn overview of the 64 most important simple Python functions is listed here.", "lst = [11,2,34, 4,5,5111]\n\nlen(lst)\n\nlen([11,2,'sort',4,5,5111])\n\nsorted(lst)\n\nlst\n\nlst.sort()\n\nlst\n\nmin(lst)\n\nmax(lst)\n\nstr(1212)\n\nsum([1,2,2])\n\nlst\n\nlst.remove(4)\n\nlst.append(4)\n\nstring = 'hello, wie geht, es Dir?'\n\nstring.split(',')", "2 Much more powerful functions: modules and libraries\nModules & Libraries", "import urllib \nimport requests\nimport glob\nimport pandas\nfrom bs4 import BeautifulSoup\nimport re\n# etc. etc.\n\ndef sort(string):\n elem = input('Bitte geben Sie den Suchbegriff ein: ')\n if elem in string:\n return 'Treffer'\n else:\n return 'Kein Treffer'\n\nstring_test = \"«Guten Tag, ich bin der, der Sie vor einer Stunde geweckt hat», sagte der Moderator des Podiums in Stockholm, als er am Montagmittag den US-Wissenschaftler Richard H. Thaler anrief. Für seine Erforschung der Psychologie hinter wirtschaftlichen Entscheidungen bekommt dieser den Nobelpreis für Wirtschaft. Das gab die Königlich-Schwedische Wissenschaftsakademie bekannt. Der 72-Jährige lehrt an der Universität Chicago. 
Der Verhaltensökonom habe gezeigt, dass begrenzte Rationalität, soziale.\"\n\nstring_test\n\ndef suche(elem, string):\n #elem = input('Bitte geben Sie den Suchbegriff ein: ')\n if elem in string:\n return 'Treffer'\n else:\n return 'Kein Treffer'\n \n\nstrings = ['Stockholm', 'blödes Wort', 'Rationalität', 'soziale']\n\nsuche(strings[1], string_test)\n\nfor st in strings:\n ergebnis = suche(st, string_test)\n print(st, ergebnis)\n\n# suche(string_test) # TypeError: suche() needs both a search term and a text\n\nlst = [1,3,5]\n\nlen(lst)", "3 But how are functions, modules and libraries built?", "import os\n\n# Unfortunately this doesn't work with all built-in functions\nos.path.split??\n\n# Example: sort\ndef sort(list):\n for index in range(1,len(list)):\n value = list[index]\n i = index-1\n while i>=0:\n if value < list[i]:\n list[i+1] = list[i]\n list[i] = value\n i -= 1\n else:\n break\n return list\n\n# A really complex one. If you couldn't use the urllib module or urlretrieve,\n# you would have to type in all of this yourself.\n\ndef urlretrieve(url, filename=None, reporthook=None, data=None):\n url_type, path = splittype(url)\n\n with contextlib.closing(urlopen(url, data)) as fp:\n headers = fp.info()\n\n # Just return the local path and the \"headers\" for file://\n # URLs. 
No sense in performing a copy unless requested.\n if url_type == \"file\" and not filename:\n return os.path.normpath(path), headers\n\n # Handle temporary file setup.\n if filename:\n tfp = open(filename, 'wb')\n else:\n tfp = tempfile.NamedTemporaryFile(delete=False)\n filename = tfp.name\n _url_tempfiles.append(filename)\n\n with tfp:\n result = filename, headers\n bs = 1024*8\n size = -1\n read = 0\n blocknum = 0\n if \"content-length\" in headers:\n size = int(headers[\"Content-Length\"])\n\n if reporthook:\n reporthook(blocknum, bs, size)\n\n while True:\n block = fp.read(bs)\n if not block:\n break\n read += len(block)\n tfp.write(block)\n blocknum += 1\n if reporthook:\n reporthook(blocknum, bs, size)\n\n if size >= 0 and read < size:\n raise ContentTooShortError(\n \"retrieval incomplete: got only %i out of %i bytes\"\n % (read, size), result)\n\n return result\n\nimport urllib.request\n\nwith urllib.request.urlopen('http://tagesanzeiger.ch/') as response:\n html = response.read()\n\nhtml", "4 Building our own functions\nLet's build whole sentences from lists of strings", "lst = ['ich', 'habe', None, 'ganz', 'kalt']\n\ndef join(mylist):\n long_str = ''\n for elem in mylist:\n try:\n long_str = long_str + elem + \" \"\n except TypeError:\n pass # skip elements that are not strings, e.g. None\n return long_str.strip()\n\njoin(lst)", "And to call it, I wrap my list in parentheses ()", "join(lst)\n\nstring = ' ich habe ganz kalt '\n\nstring.strip()", "Let's build a simple search", "satz = \"Die Unabhängigkeit der Notenbanken von der Politik gilt bisher als anerkannter Grundpfeiler der modernen Wirtschafts- und Geldpolitik in fortgeschrittenen Volkswirtschaften. 
Zu gross wäre sonst das Risiko, dass gewählte Politiker die Notenpresse anwerfen, wenn es ihren persönlichen Zielen gerade gelegen kommt, und dass dadurch die Stabilität des Geldes und das Vertrauen in das Zahlungsmittel untergraben wird.\"\n\nsort(satz)\n\ndef find(string):\n elem = input('Bitte geben Sie den Suchbegriff ein: ')\n if elem in string:\n return 'Treffer'\n else:\n return 'Kein Treffer'\n\nfind(satz)", "5 Structure and troubleshooting\n\nFirst the imports\nThen your own functions\nThen the actual code", "print('Use this in your code to find out exactly where the error happens.')\n\n# Example: sort\ndef sort(list):\n for index in range(1,len(list)):\n value = list[index]\n print(value)\n i = index-1\n print(i)\n while i>=0:\n if value < list[i]:\n list[i+1] = list[i]\n list[i] = value\n i -= 1\n else:\n break\n return list\n\nsort(lst)\n\nlst" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
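The course notebook above defines its search helper in several variants, some of which read the search term via `input()`, which makes them hard to test. A small sketch of a testable variant, reusing the notebook's names and return values with a made-up example text (not part of the original course material):

```python
# Testable variant of the notebook's search helper: the search term and the
# text are both parameters, so no interactive input() call is required.
def suche(elem, string):
    if elem in string:
        return 'Treffer'   # "hit"
    return 'Kein Treffer'  # "no hit"

# Made-up example text, echoing the notebook's German news snippet.
text = ('Der Moderator des Podiums in Stockholm rief am Montagmittag '
        'den US-Wissenschaftler Richard H. Thaler an.')
begriffe = ['Stockholm', 'Notenbank']
ergebnisse = [suche(b, text) for b in begriffe]
```

Because the function takes both arguments, it can be exercised in a loop over many search terms, exactly as the notebook does with its `strings` list.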
GoogleCloudPlatform/vertex-ai-samples
notebooks/community/gapic/automl/showcase_automl_image_classification_online_proxy.ipynb
apache-2.0
[ "# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Vertex client library: AutoML image classification model for online prediction using Cloud Function\n<table align=\"left\">\n <td>\n <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_image_classification_online_proxy.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Colab logo\"> Run in Colab\n </a>\n </td>\n <td>\n <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_image_classification_online_proxy.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">\n View on GitHub\n </a>\n </td>\n</table>\n<br/><br/><br/>\nOverview\nThis tutorial demonstrates how to use the Vertex client library for Python to create image classification models and do online prediction using Google Cloud's AutoML.\nDataset\nThe dataset used for this tutorial is the Flowers dataset from TensorFlow Datasets. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. 
The trained model predicts which of five flower types an image contains: daisy, dandelion, rose, sunflower, or tulip.\nObjective\nIn this tutorial, you create an AutoML image classification model and deploy it for online prediction from a Python script using the Vertex client library. You can alternatively create and deploy models using the gcloud command-line tool or online using the Google Cloud Console.\nThe steps performed include:\n\nCreate a Vertex Dataset resource.\nTrain the model.\nView the model evaluation.\nDeploy the Model resource to a serving Endpoint resource.\nMake a prediction.\nUndeploy the Model.\n\nCosts\nThis tutorial uses billable components of Google Cloud (GCP):\n\nVertex AI\nCloud Storage\n\nLearn about Vertex AI\npricing and Cloud Storage\npricing, and use the Pricing\nCalculator\nto generate a cost estimate based on your projected usage.\nInstallation\nInstall the latest version of the Vertex client library.", "import os\nimport sys\n\n# Google Cloud Notebook\nif os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n USER_FLAG = \"--user\"\nelse:\n USER_FLAG = \"\"\n\n! pip3 install -U google-cloud-aiplatform $USER_FLAG", "Install the latest GA version of the google-cloud-storage library as well.", "! pip3 install -U google-cloud-storage $USER_FLAG", "Restart the kernel\nOnce you've installed the Vertex client library and google-cloud-storage, you need to restart the notebook kernel so it can find the packages.", "if not os.getenv(\"IS_TESTING\"):\n # Automatically restart kernel after installs\n import IPython\n\n app = IPython.Application.instance()\n app.kernel.do_shutdown(True)", "Before you begin\nGPU runtime\nMake sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU\nSet up your Google Cloud project\nThe following steps are required, regardless of your notebook environment.\n\n\nSelect or create a Google Cloud project. 
When you first create an account, you get a $300 free credit towards your compute/storage costs.\n\n\nMake sure that billing is enabled for your project.\n\n\nEnable the Vertex APIs and Compute Engine APIs.\n\n\nThe Google Cloud SDK is already installed in Google Cloud Notebook.\n\n\nEnter your project ID in the cell below. Then run the cell to make sure the\nCloud SDK uses the right project for all the commands in this notebook.\n\n\nNote: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.", "PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}\n\nif PROJECT_ID == \"\" or PROJECT_ID is None or PROJECT_ID == \"[your-project-id]\":\n # Get your GCP project id from gcloud\n shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID:\", PROJECT_ID)\n\n! gcloud config set project $PROJECT_ID", "Region\nYou can also change the REGION variable, which is used for operations\nthroughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.\n\nAmericas: us-central1\nEurope: europe-west4\nAsia Pacific: asia-east1\n\nYou may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the Vertex locations documentation", "REGION = \"us-central1\" # @param {type: \"string\"}", "Timestamp\nIf you are in a live tutorial session, you might be using a shared test account or project. 
To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.", "from datetime import datetime\n\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")", "Authenticate your Google Cloud account\nIf you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.\nIf you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.\nOtherwise, follow these steps:\nIn the Cloud Console, go to the Create service account key page.\nClick Create service account.\nIn the Service account name field, enter a name, and click Create.\nIn the Grant this service account access to project section, click the Role drop-down list. Type \"Vertex\" into the filter box, and select Vertex Administrator. Type \"Storage Object Admin\" into the filter box, and select Storage Object Admin.\nClick Create. A JSON file that contains your key downloads to your local environment.\nEnter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.", "# If you are running this notebook in Colab, run this cell and follow the\n# instructions to authenticate your GCP account. 
This provides access to your\n# Cloud Storage bucket and lets you submit training jobs and prediction\n# requests.\n\n# If on Google Cloud Notebook, then don't execute this code\nif not os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n if \"google.colab\" in sys.modules:\n from google.colab import auth as google_auth\n\n google_auth.authenticate_user()\n\n # If you are running this notebook locally, replace the string below with the\n # path to your service account key and run this cell to authenticate your GCP\n # account.\n elif not os.getenv(\"IS_TESTING\"):\n %env GOOGLE_APPLICATION_CREDENTIALS ''", "Set up variables\nNext, set up some variables used throughout the tutorial.\nImport libraries and define constants\nImport Vertex client library\nImport the Vertex client library into our Python environment.", "import time\n\nfrom google.cloud.aiplatform import gapic as aip\nfrom google.protobuf import json_format\nfrom google.protobuf.json_format import MessageToJson, ParseDict\nfrom google.protobuf.struct_pb2 import Struct, Value", "Vertex constants\nSetup up the following constants for Vertex:\n\nAPI_ENDPOINT: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.\nPARENT: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.", "# API service endpoint\nAPI_ENDPOINT = \"{}-aiplatform.googleapis.com\".format(REGION)\n\n# Vertex location root path for your dataset, model and endpoint resources\nPARENT = \"projects/\" + PROJECT_ID + \"/locations/\" + REGION", "AutoML constants\nSet constants unique to AutoML datasets and training:\n\nDataset Schemas: Tells the Dataset resource service which type of dataset it is.\nData Labeling (Annotations) Schemas: Tells the Dataset resource service how the data is labeled (annotated).\nDataset Training Schemas: Tells the Pipeline resource service the task (e.g., classification) to train the model for.", "# Image Dataset type\nDATA_SCHEMA = 
\"gs://google-cloud-aiplatform/schema/dataset/metadata/image_1.0.0.yaml\"\n# Image Labeling type\nLABEL_SCHEMA = \"gs://google-cloud-aiplatform/schema/dataset/ioformat/image_classification_single_label_io_format_1.0.0.yaml\"\n# Image Training task\nTRAINING_SCHEMA = \"gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_image_classification_1.0.0.yaml\"", "Tutorial\nNow you are ready to start creating your own AutoML image classification model.\nSet up clients\nThe Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.\nYou will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.\n\nDataset Service for Dataset resources.\nModel Service for Model resources.\nPipeline Service for training.\nEndpoint Service for deployment.\nPrediction Service for serving.", "# client options same for all services\nclient_options = {\"api_endpoint\": API_ENDPOINT}\n\n\ndef create_dataset_client():\n client = aip.DatasetServiceClient(client_options=client_options)\n return client\n\n\ndef create_model_client():\n client = aip.ModelServiceClient(client_options=client_options)\n return client\n\n\ndef create_pipeline_client():\n client = aip.PipelineServiceClient(client_options=client_options)\n return client\n\n\ndef create_endpoint_client():\n client = aip.EndpointServiceClient(client_options=client_options)\n return client\n\n\ndef create_prediction_client():\n client = aip.PredictionServiceClient(client_options=client_options)\n return client\n\n\nclients = {}\nclients[\"dataset\"] = create_dataset_client()\nclients[\"model\"] = create_model_client()\nclients[\"pipeline\"] = create_pipeline_client()\nclients[\"endpoint\"] = create_endpoint_client()\nclients[\"prediction\"] = create_prediction_client()\n\nfor client in clients.items():\n print(client)", "Dataset\nNow that your clients are ready, 
your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it.\nCreate Dataset resource instance\nUse the helper function create_dataset to create the instance of a Dataset resource. This function does the following:\n\nUses the dataset client service.\nCreates an Vertex Dataset resource (aip.Dataset), with the following parameters:\ndisplay_name: The human-readable name you choose to give it.\nmetadata_schema_uri: The schema for the dataset type.\nCalls the client dataset service method create_dataset, with the following parameters:\nparent: The Vertex location root path for your Database, Model and Endpoint resources.\ndataset: The Vertex dataset object instance you created.\nThe method returns an operation object.\n\nAn operation object is how Vertex handles asynchronous calls for long running operations. While this step usually goes fast, when you first use it in your project, there is a longer delay due to provisioning.\nYou can use the operation object to get status on the operation (e.g., create Dataset resource) or to cancel the operation, by invoking an operation method:\n| Method | Description |\n| ----------- | ----------- |\n| result() | Waits for the operation to complete and returns a result object in JSON format. |\n| running() | Returns True/False on whether the operation is still running. |\n| done() | Returns True/False on whether the operation is completed. |\n| canceled() | Returns True/False on whether the operation was canceled. |\n| cancel() | Cancels the operation (this may take up to 30 seconds). 
|", "TIMEOUT = 90\n\n\ndef create_dataset(name, schema, labels=None, timeout=TIMEOUT):\n start_time = time.time()\n try:\n dataset = aip.Dataset(\n display_name=name, metadata_schema_uri=schema, labels=labels\n )\n\n operation = clients[\"dataset\"].create_dataset(parent=PARENT, dataset=dataset)\n print(\"Long running operation:\", operation.operation.name)\n result = operation.result(timeout=TIMEOUT)\n print(\"time:\", time.time() - start_time)\n print(\"response\")\n print(\" name:\", result.name)\n print(\" display_name:\", result.display_name)\n print(\" metadata_schema_uri:\", result.metadata_schema_uri)\n print(\" metadata:\", dict(result.metadata))\n print(\" create_time:\", result.create_time)\n print(\" update_time:\", result.update_time)\n print(\" etag:\", result.etag)\n print(\" labels:\", dict(result.labels))\n return result\n except Exception as e:\n print(\"exception:\", e)\n return None\n\n\nresult = create_dataset(\"flowers-\" + TIMESTAMP, DATA_SCHEMA)", "Now save the unique dataset identifier for the Dataset resource instance you created.", "# The full unique ID for the dataset\ndataset_id = result.name\n# The short numeric ID for the dataset\ndataset_short_id = dataset_id.split(\"/\")[-1]\n\nprint(dataset_id)", "Data preparation\nThe Vertex Dataset resource for images has some requirements for your data:\n\nImages must be stored in a Cloud Storage bucket.\nEach image file must be in an image format (PNG, JPEG, BMP, ...).\nThere must be an index file stored in your Cloud Storage bucket that contains the path and label for each image.\nThe index file must be either CSV or JSONL.\n\nCSV\nFor image classification, the CSV index file has the requirements:\n\nNo heading.\nFirst column is the Cloud Storage path to the image.\nSecond column is the label.\n\nLocation of Cloud Storage training data.\nNow set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage.", "IMPORT_FILE = (\n 
\"gs://cloud-samples-data/vision/automl_classification/flowers/all_data_v2.csv\"\n)", "Quick peek at your data\nYou will use a version of the Flowers dataset that is stored in a public Cloud Storage bucket, using a CSV index file.\nStart by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.", "if \"IMPORT_FILES\" in globals():\n FILE = IMPORT_FILES[0]\nelse:\n FILE = IMPORT_FILE\n\ncount = ! gsutil cat $FILE | wc -l\nprint(\"Number of Examples\", int(count[0]))\n\nprint(\"First 10 rows\")\n! gsutil cat $FILE | head", "Import data\nNow, import the data into your Vertex Dataset resource. Use this helper function import_data to import the data. The function does the following:\n\nUses the Dataset client.\nCalls the client method import_data, with the following parameters:\nname: The human readable name you give to the Dataset resource (e.g., flowers).\n\nimport_configs: The import configuration.\n\n\nimport_configs: A Python list containing a dictionary, with the key/value entries:\n\ngcs_sources: A list of URIs to the paths of the one or more index files.\nimport_schema_uri: The schema identifying the labeling type.\n\nThe import_data() method returns a long running operation object. This will take a few minutes to complete. 
If you are in a live tutorial, this would be a good time to ask questions, or take a personal break.", "def import_data(dataset, gcs_sources, schema):\n config = [{\"gcs_source\": {\"uris\": gcs_sources}, \"import_schema_uri\": schema}]\n print(\"dataset:\", dataset_id)\n start_time = time.time()\n try:\n operation = clients[\"dataset\"].import_data(\n name=dataset_id, import_configs=config\n )\n print(\"Long running operation:\", operation.operation.name)\n\n result = operation.result()\n print(\"result:\", result)\n print(\"time:\", int(time.time() - start_time), \"secs\")\n print(\"error:\", operation.exception())\n print(\"meta :\", operation.metadata)\n print(\n \"after: running:\",\n operation.running(),\n \"done:\",\n operation.done(),\n \"cancelled:\",\n operation.cancelled(),\n )\n\n return operation\n except Exception as e:\n print(\"exception:\", e)\n return None\n\n\nimport_data(dataset_id, [IMPORT_FILE], LABEL_SCHEMA)", "Train the model\nNow train an AutoML image classification model using your Vertex Dataset resource. To train the model, do the following steps:\n\nCreate an Vertex training pipeline for the Dataset resource.\nExecute the pipeline to start the training.\n\nCreate a training pipeline\nYou may ask, what do we use a pipeline for? You typically use pipelines when the job (such as training) has multiple steps, generally in sequential order: do step A, do step B, etc. 
By putting the steps into a pipeline, we gain the benefits of:\n\nBeing reusable for subsequent training jobs.\nCan be containerized and run as a batch job.\nCan be distributed.\nAll the steps are associated with the same pipeline job for tracking progress.\n\nUse this helper function create_pipeline, which takes the following parameters:\n\npipeline_name: A human readable name for the pipeline job.\nmodel_name: A human readable name for the model.\ndataset: The Vertex fully qualified dataset identifier.\nschema: The dataset labeling (annotation) training schema.\ntask: A dictionary describing the requirements for the training job.\n\nThe helper function calls the Pipeline client service's method create_pipeline, which takes the following parameters:\n\nparent: The Vertex location root path for your Dataset, Model and Endpoint resources.\ntraining_pipeline: The full specification for the pipeline training job.\n\nNow let's look deeper into the minimal requirements for constructing a training_pipeline specification:\n\ndisplay_name: A human readable name for the pipeline job.\ntraining_task_definition: The dataset labeling (annotation) training schema.\ntraining_task_inputs: A dictionary describing the requirements for the training job.\nmodel_to_upload: A human readable name for the model.\ninput_data_config: The dataset specification.\ndataset_id: The Vertex dataset identifier only (non-fully qualified) -- this is the last part of the fully-qualified identifier.\nfraction_split: If specified, the percentages of the dataset to use for training, test and validation. 
Otherwise, the percentages are automatically selected by AutoML.", "def create_pipeline(pipeline_name, model_name, dataset, schema, task):\n\n dataset_id = dataset.split(\"/\")[-1]\n\n input_config = {\n \"dataset_id\": dataset_id,\n \"fraction_split\": {\n \"training_fraction\": 0.8,\n \"validation_fraction\": 0.1,\n \"test_fraction\": 0.1,\n },\n }\n\n training_pipeline = {\n \"display_name\": pipeline_name,\n \"training_task_definition\": schema,\n \"training_task_inputs\": task,\n \"input_data_config\": input_config,\n \"model_to_upload\": {\"display_name\": model_name},\n }\n\n try:\n pipeline = clients[\"pipeline\"].create_training_pipeline(\n parent=PARENT, training_pipeline=training_pipeline\n )\n print(pipeline)\n except Exception as e:\n print(\"exception:\", e)\n return None\n return pipeline", "Construct the task requirements\nNext, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the task field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the json_format.ParseDict method for the conversion.\nThe minimal fields we need to specify are:\n\nmulti_label: Whether True/False this is a multi-label (vs single) classification.\nbudget_milli_node_hours: The maximum time to budget (billed) for training the model, where 1000 = 1 hour. 
For image classification, the budget must be a minimum of 8 hours.\nmodel_type: The type of deployed model:\nCLOUD: For deploying to Google Cloud.\nMOBILE_TF_LOW_LATENCY_1: For deploying to the edge and optimizing for latency (response time).\nMOBILE_TF_HIGH_ACCURACY_1: For deploying to the edge and optimizing for accuracy.\nMOBILE_TF_VERSATILE_1: For deploying to the edge and optimizing for a trade off between latency and accuracy.\ndisable_early_stopping: Whether True/False to let AutoML use its judgement to stop training early or train for the entire budget.\n\nFinally, create the pipeline by calling the helper function create_pipeline, which returns an instance of a training pipeline object.", "PIPE_NAME = \"flowers_pipe-\" + TIMESTAMP\nMODEL_NAME = \"flowers_model-\" + TIMESTAMP\n\ntask = json_format.ParseDict(\n {\n \"multi_label\": False,\n \"budget_milli_node_hours\": 8000,\n \"model_type\": \"CLOUD\",\n \"disable_early_stopping\": False,\n },\n Value(),\n)\n\nresponse = create_pipeline(PIPE_NAME, MODEL_NAME, dataset_id, TRAINING_SCHEMA, task)", "Now save the unique identifier of the training pipeline you created.", "# The full unique ID for the pipeline\npipeline_id = response.name\n# The short numeric ID for the pipeline\npipeline_short_id = pipeline_id.split(\"/\")[-1]\n\nprint(pipeline_id)", "Get information on a training pipeline\nNow get pipeline information for just this training pipeline instance. 
The helper function gets the pipeline information for just this pipeline by calling the pipeline client service's get_training_pipeline method, with the following parameter:\n\nname: The Vertex fully qualified pipeline identifier.\n\nWhen the model is done training, the pipeline state will be PIPELINE_STATE_SUCCEEDED.", "def get_training_pipeline(name, silent=False):\n    response = clients[\"pipeline\"].get_training_pipeline(name=name)\n    if silent:\n        return response\n\n    print(\"pipeline\")\n    print(\" name:\", response.name)\n    print(\" display_name:\", response.display_name)\n    print(\" state:\", response.state)\n    print(\" training_task_definition:\", response.training_task_definition)\n    print(\" training_task_inputs:\", dict(response.training_task_inputs))\n    print(\" create_time:\", response.create_time)\n    print(\" start_time:\", response.start_time)\n    print(\" end_time:\", response.end_time)\n    print(\" update_time:\", response.update_time)\n    print(\" labels:\", dict(response.labels))\n    return response\n\n\nresponse = get_training_pipeline(pipeline_id)", "Deployment\nTraining the above model may take upwards of 20 minutes.\nOnce your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it.
You can get this from the returned pipeline instance as the field model_to_upload.name.", "while True:\n    response = get_training_pipeline(pipeline_id, True)\n    if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:\n        print(\"Training job has not completed:\", response.state)\n        model_to_deploy_id = None\n        if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:\n            raise Exception(\"Training Job Failed\")\n    else:\n        model_to_deploy = response.model_to_upload\n        model_to_deploy_id = model_to_deploy.name\n        print(\"Training Time:\", response.end_time - response.start_time)\n        break\n    time.sleep(60)\n\nprint(\"model to deploy:\", model_to_deploy_id)", "Model information\nNow that your model is trained, you can get some information on your model.\nEvaluate the Model resource\nNow find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model.\nList evaluations for all slices\nUse this helper function list_model_evaluations, which takes the following parameter:\n\nname: The Vertex fully qualified model identifier for the Model resource.\n\nThis helper function uses the model client service's list_model_evaluations method, which takes the same parameter.
The response object from the call is a list, where each element is an evaluation metric.\nFor each evaluation (you probably only have one), the helper prints all the key names for each metric in the evaluation, and for a small set (logLoss and auPrc) prints the values.", "def list_model_evaluations(name):\n    response = clients[\"model\"].list_model_evaluations(parent=name)\n    for evaluation in response:\n        print(\"model_evaluation\")\n        print(\" name:\", evaluation.name)\n        print(\" metrics_schema_uri:\", evaluation.metrics_schema_uri)\n        metrics = json_format.MessageToDict(evaluation._pb.metrics)\n        for metric in metrics.keys():\n            print(metric)\n        print(\"logLoss\", metrics[\"logLoss\"])\n        print(\"auPrc\", metrics[\"auPrc\"])\n\n    return evaluation.name\n\n\nlast_evaluation = list_model_evaluations(model_to_deploy_id)", "Deploy the Model resource\nNow deploy the trained Vertex Model resource you created with AutoML. This requires two steps:\n\n\nCreate an Endpoint resource for deploying the Model resource to.\n\n\nDeploy the Model resource to the Endpoint resource.\n\n\nCreate an Endpoint resource\nUse this helper function create_endpoint to create an endpoint to deploy the model to for serving predictions, with the following parameter:\n\ndisplay_name: A human readable name for the Endpoint resource.\n\nThe helper function uses the endpoint client service's create_endpoint method, which takes the following parameter:\n\ndisplay_name: A human readable name for the Endpoint resource.\n\nCreating an Endpoint resource returns a long running operation, since it may take a few moments to provision the Endpoint resource for serving. You call response.result(), which is a synchronous call and will return when the Endpoint resource is ready.
The helper function returns the Vertex fully qualified identifier for the Endpoint resource: response.name.", "ENDPOINT_NAME = \"flowers_endpoint-\" + TIMESTAMP\n\n\ndef create_endpoint(display_name):\n endpoint = {\"display_name\": display_name}\n response = clients[\"endpoint\"].create_endpoint(parent=PARENT, endpoint=endpoint)\n print(\"Long running operation:\", response.operation.name)\n\n result = response.result(timeout=300)\n print(\"result\")\n print(\" name:\", result.name)\n print(\" display_name:\", result.display_name)\n print(\" description:\", result.description)\n print(\" labels:\", result.labels)\n print(\" create_time:\", result.create_time)\n print(\" update_time:\", result.update_time)\n return result\n\n\nresult = create_endpoint(ENDPOINT_NAME)", "Now get the unique identifier for the Endpoint resource you created.", "# The full unique ID for the endpoint\nendpoint_id = result.name\n# The short numeric ID for the endpoint\nendpoint_short_id = endpoint_id.split(\"/\")[-1]\n\nprint(endpoint_id)", "Compute instance scaling\nYou have several choices on scaling the compute instances for handling your online prediction requests:\n\nSingle Instance: The online prediction requests are processed on a single compute instance.\n\nSet the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to one.\n\n\nManual Scaling: The online prediction requests are split across a fixed number of compute instances that you manually specified.\n\n\nSet the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to the same number of nodes. 
When a model is first deployed to the instance, the fixed number of compute instances are provisioned and online prediction requests are evenly distributed across them.\n\n\nAuto Scaling: The online prediction requests are split across a scalable number of compute instances.\n\nSet the minimum (MIN_NODES) number of compute instances to provision when a model is first deployed and to de-provision, and set the maximum (MAX_NODES) number of compute instances to provision, depending on load conditions.\n\nThe minimum number of compute instances corresponds to the field min_replica_count and the maximum number of compute instances corresponds to the field max_replica_count, in your subsequent deployment request.", "MIN_NODES = 1\nMAX_NODES = 1", "Deploy Model resource to the Endpoint resource\nUse this helper function deploy_model to deploy the Model resource to the Endpoint resource you created for serving predictions, with the following parameters:\n\nmodel: The Vertex fully qualified model identifier of the model to upload (deploy) from the training pipeline.\ndeployed_model_display_name: A human readable name for the deployed model.\nendpoint: The Vertex fully qualified endpoint identifier to deploy the model to.\n\nThe helper function calls the Endpoint client service's method deploy_model, which takes the following parameters:\n\nendpoint: The Vertex fully qualified Endpoint resource identifier to deploy the Model resource to.\ndeployed_model: The requirements specification for deploying the model.\ntraffic_split: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.\nIf only one model, then specify as { \"0\": 100 }, where \"0\" refers to this model being uploaded and 100 means 100% of the traffic.\nIf there are existing models on the endpoint, for which the traffic will be split, then use model_id to specify as { \"0\": percent, model_id: percent, ...
}, where model_id is the model id of an existing model deployed to the endpoint. The percents must add up to 100.\n\nLet's now dive deeper into the deployed_model parameter. This parameter is specified as a Python dictionary with the minimum required fields:\n\nmodel: The Vertex fully qualified model identifier of the (upload) model to deploy.\ndisplay_name: A human readable name for the deployed model.\ndisable_container_logging: This disables logging of container events, such as execution failures (default is container logging is enabled). Container logging is typically enabled when debugging the deployment and then disabled when deployed for production.\nautomatic_resources: This refers to how many redundant compute instances (replicas). For this example, we set it to one (no replication).\n\nTraffic Split\nLet's now dive deeper into the traffic_split parameter. This parameter is specified as a Python dictionary. This might at first be a tad bit confusing. Let me explain: you can deploy more than one instance of your model to an endpoint, and then set how much (percent) goes to each instance.\nWhy would you do that? Perhaps you already have a previous version deployed in production -- let's call that v1. You got better model evaluation on v2, but you don't know for certain that it is really better until you deploy to production. So in the case of traffic split, you might want to deploy v2 to the same endpoint as v1, but it only gets, say, 10% of the traffic. That way, you can monitor how well it does without disrupting the majority of users -- until you make a final decision.\nResponse\nThe method returns a long running operation response. We will wait synchronously for the operation to complete by calling response.result(), which will block until the model is deployed.
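The traffic_split bookkeeping described above can be made concrete with a small standalone sketch. This is only an illustration of the rule that the percentages must sum to 100; the deployed-model ids used here are made up:

```python
def validate_traffic_split(split):
    """Check that a traffic_split dict assigns percentages summing to 100.

    Keys are deployed-model ids ("0" refers to the model currently being
    deployed); values are integer percentages of endpoint traffic.
    """
    if not split:
        raise ValueError("traffic_split must not be empty")
    total = sum(split.values())
    if total != 100:
        raise ValueError(f"traffic percentages sum to {total}, expected 100")
    return True

# Single model: all traffic goes to the newly deployed model.
single = validate_traffic_split({"0": 100})

# Canary-style split between the new model and an existing one
# ("1234567890" is a hypothetical deployed-model id).
canary = validate_traffic_split({"0": 10, "1234567890": 90})
```

A check like this is cheap insurance before issuing the deploy request, since an invalid split is only rejected server-side.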
If this is the first time a model is deployed to the endpoint, it may take a few additional minutes to complete provisioning of resources.", "DEPLOYED_NAME = \"flowers_deployed-\" + TIMESTAMP\n\n\ndef deploy_model(\n    model, deployed_model_display_name, endpoint, traffic_split={\"0\": 100}\n):\n\n    deployed_model = {\n        \"model\": model,\n        \"display_name\": deployed_model_display_name,\n        \"automatic_resources\": {\n            \"min_replica_count\": MIN_NODES,\n            \"max_replica_count\": MAX_NODES,\n        },\n    }\n\n    response = clients[\"endpoint\"].deploy_model(\n        endpoint=endpoint, deployed_model=deployed_model, traffic_split=traffic_split\n    )\n\n    print(\"Long running operation:\", response.operation.name)\n    result = response.result()\n    print(\"result\")\n    deployed_model = result.deployed_model\n    print(\" deployed_model\")\n    print(\" id:\", deployed_model.id)\n    print(\" model:\", deployed_model.model)\n    print(\" display_name:\", deployed_model.display_name)\n    print(\" create_time:\", deployed_model.create_time)\n\n    return deployed_model.id\n\n\ndeployed_model_id = deploy_model(model_to_deploy_id, DEPLOYED_NAME, endpoint_id)", "Make an online prediction request\nNow make an online prediction with your deployed model.\nGet test item\nYou will use an arbitrary example out of the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction.", "test_item = !gsutil cat $IMPORT_FILE | head -n1\nif len(str(test_item[0]).split(\",\")) == 3:\n    _, test_item, test_label = str(test_item[0]).split(\",\")\nelse:\n    test_item, test_label = str(test_item[0]).split(\",\")\n\nprint(test_item, test_label)", "Predict through Cloud Function proxy\nEndpoints in Vertex are pinned to the project they are deployed in.
That means, in general, only processes operating within the same project can access the endpoint for a prediction.\nThere are several approaches for a web-based application to obtain predictions from an endpoint:\n\n\nThe backend application server for the web-based application is deployed in the same project.\n\n\nThe backend application server is deployed in a different project which has access rights granted to the project where the endpoint is deployed.\n\n\nUse a proxy which has access rights to the project where the endpoint is deployed.\n\n\nIn this example, the proxy method is demonstrated using Cloud Functions.\nCreate a cloud function\nYou start by creating a cloud function, which will consist of:\n\nA folder (e.g., function)\nmain.py: The implemented function as a Python script.\nrequirements.txt: The environment requirements for executing the Python script.", "! rm -rf function\n! mkdir function\n\n%%writefile function/main.py\nimport logging\nfrom operator import itemgetter\nimport os\n\nfrom flask import jsonify\nfrom google.cloud.aiplatform import gapic as aip\nfrom google.protobuf import json_format\nfrom google.protobuf.struct_pb2 import Value\nimport requests\nimport tensorflow as tf\n\nIMG_WIDTH = 128\nCOLUMNS = ['dandelion', 'daisy', 'tulips', 'sunflowers', 'roses']\n\naip_client = aip.PredictionServiceClient(client_options={\n    'api_endpoint': 'us-central1-prediction-aiplatform.googleapis.com'\n})\naip_endpoint_name = f'projects/{os.environ[\"GCP_PROJECT\"]}/locations/us-central1/endpoints/{os.environ[\"ENDPOINT_ID\"]}'\n\n\ndef get_prediction(instance):\n    logging.info('Sending prediction request to Vertex ...')\n    try:\n        pb_instance = json_format.ParseDict(instance, Value())\n        response = aip_client.predict(endpoint=aip_endpoint_name,\n                                      instances=[pb_instance])\n        return list(response.predictions[0])\n    except Exception as err:\n        logging.error(f'Prediction request failed: {type(err)}: {err}')\n        return None\n\n\ndef preprocess_image(image_url):\n 
logging.info(f'Fetching image from URL: {image_url}')\n try:\n image_response = requests.get(image_url)\n image_response.raise_for_status()\n assert image_response.headers.get('Content-Type') == 'image/jpeg'\n except (ConnectionError, requests.exceptions.RequestException,\n AssertionError):\n logging.error(f'Error fetching image from URL: {image_url}')\n return None\n\n logging.info('Decoding and preprocessing image ...')\n image = tf.io.decode_jpeg(image_response.content, channels=3)\n image = tf.image.resize_with_pad(image, IMG_WIDTH, IMG_WIDTH)\n image = image / 255.\n return image.numpy().tolist() # Make it JSON-serializable\n\ndef classify_flower(request):\n # Set CORS headers for the preflight request\n if request.method == 'OPTIONS':\n # Allows POST requests from any origin with the Content-Type\n # header and caches preflight response for an 3600s\n headers = {\n 'Access-Control-Allow-Origin': '*',\n 'Access-Control-Allow-Methods': 'POST',\n 'Access-Control-Allow-Headers': 'Content-Type',\n 'Access-Control-Max-Age': '3600'\n }\n return ('', 204, headers)\n\n # Disallow non-POSTs\n if request.method != 'POST':\n return ('Not found', 404)\n\n # Set CORS headers for the main request\n headers = {'Access-Control-Allow-Origin': '*'}\n\n request_json = request.get_json(silent=True)\n if not request_json or not 'image_url' in request_json:\n return ('Invalid request', 400, headers)\n\n instance = preprocess_image(request_json['image_url'])\n if not instance:\n return ('Invalid request', 400, headers)\n\n raw_prediction = get_prediction(instance)\n if not raw_prediction:\n return ('Error getting prediction', 500, headers)\n\n probabilities = zip(COLUMNS, raw_prediction)\n sorted_probabilities = sorted(probabilities,\n key=itemgetter(1),\n reverse=True)\n return (jsonify(sorted_probabilities), 200, headers)\n\n\n%%writefile function/requirements.txt\nFlask==1.0.2\nrequests==2.21.0\ntensorflow-cpu~=2.1.0\ngoogle-cloud-aiplatform", "Deploy your Cloud 
Function\nNote: the Cloud Functions API and the Cloud Build API need to be enabled for your project:\nhttps://console.developers.google.com/apis/api/cloudbuild.googleapis.com/overview?project=", "! gcloud functions deploy classify_flower \\\n  --region $REGION \\\n  --source=function \\\n  --runtime=python37 \\\n  --memory=2048MB \\\n  --trigger-http \\\n  --allow-unauthenticated \\\n  --set-env-vars ENDPOINT_ID=${endpoint_id}", "Construct the URL to your deployed Cloud Function\nNext, you will construct the URL for your Cloud Function. You will use this URL to route predictions through your Cloud Function, acting as a proxy to your deployed model.", "CLOUD_FUNCTION_URL = \"https://{}-{}.cloudfunctions.net/classify_flower\".format(\n    REGION, PROJECT_ID\n)\nprint(CLOUD_FUNCTION_URL)", "Construct the prediction request file\nWrite the test image URL into a JSON file (request.json) that will be posted to the Cloud Function.", "image = test_item.replace(\"gs://\", \"https://storage.googleapis.com/\")\nprint(image)\n\nimport json\n\nwith open(\"request.json\", \"w\") as f:\n    json.dump({\"image_url\": image}, f)\n\n! cat request.json", "Make the prediction request\nSend the JSON request file to the deployed Cloud Function with curl.", "! curl -X POST \\\n-H \"Authorization: Bearer \"$(gcloud auth application-default print-access-token) \\\n-H \"Content-Type: application/json; charset=utf-8\" \\\n-d @request.json \\\n$CLOUD_FUNCTION_URL", "Undeploy the Model resource\nNow undeploy your Model resource from the serving Endpoint resource.
Use this helper function undeploy_model, which takes the following parameters:\n\ndeployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed to.\nendpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model is deployed to.\n\nThis function calls the endpoint client service's method undeploy_model, with the following parameters:\n\ndeployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed.\nendpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource is deployed.\ntraffic_split: How to split traffic among the remaining deployed models on the Endpoint resource.\n\nSince this is the only deployed model on the Endpoint resource, you simply can leave traffic_split empty by setting it to {}.", "def undeploy_model(deployed_model_id, endpoint):\n response = clients[\"endpoint\"].undeploy_model(\n endpoint=endpoint, deployed_model_id=deployed_model_id, traffic_split={}\n )\n print(response)\n\n\nundeploy_model(deployed_model_id, endpoint_id)", "Cleaning up\nTo clean up all GCP resources used in this project, you can delete the GCP\nproject you used for the tutorial.\nOtherwise, you can delete the individual resources you created in this tutorial:\n\nDataset\nPipeline\nModel\nEndpoint\nBatch Job\nCustom Job\nHyperparameter Tuning Job\nCloud Storage Bucket", "delete_dataset = True\ndelete_pipeline = True\ndelete_model = True\ndelete_endpoint = True\ndelete_batchjob = True\ndelete_customjob = True\ndelete_hptjob = True\ndelete_bucket = True\n\n# Delete the dataset using the Vertex fully qualified identifier for the dataset\ntry:\n if delete_dataset and \"dataset_id\" in globals():\n clients[\"dataset\"].delete_dataset(name=dataset_id)\nexcept Exception as e:\n print(e)\n\n# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline\ntry:\n if delete_pipeline 
and \"pipeline_id\" in globals():\n clients[\"pipeline\"].delete_training_pipeline(name=pipeline_id)\nexcept Exception as e:\n print(e)\n\n# Delete the model using the Vertex fully qualified identifier for the model\ntry:\n if delete_model and \"model_to_deploy_id\" in globals():\n clients[\"model\"].delete_model(name=model_to_deploy_id)\nexcept Exception as e:\n print(e)\n\n# Delete the endpoint using the Vertex fully qualified identifier for the endpoint\ntry:\n if delete_endpoint and \"endpoint_id\" in globals():\n clients[\"endpoint\"].delete_endpoint(name=endpoint_id)\nexcept Exception as e:\n print(e)\n\n# Delete the batch job using the Vertex fully qualified identifier for the batch job\ntry:\n if delete_batchjob and \"batch_job_id\" in globals():\n clients[\"job\"].delete_batch_prediction_job(name=batch_job_id)\nexcept Exception as e:\n print(e)\n\n# Delete the custom job using the Vertex fully qualified identifier for the custom job\ntry:\n if delete_customjob and \"job_id\" in globals():\n clients[\"job\"].delete_custom_job(name=job_id)\nexcept Exception as e:\n print(e)\n\n# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job\ntry:\n if delete_hptjob and \"hpt_job_id\" in globals():\n clients[\"job\"].delete_hyperparameter_tuning_job(name=hpt_job_id)\nexcept Exception as e:\n print(e)\n\nif delete_bucket and \"BUCKET_NAME\" in globals():\n ! gsutil rm -r $BUCKET_NAME" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
karlstroetmann/Algorithms
Python/Chapter-07/ListMap.ipynb
gpl-2.0
[ "from IPython.core.display import HTML\nwith open('../style.css') as file:\n    css = file.read()\nHTML(css)", "Implementing Maps as Lists of Key-Value-Pairs\nThe class ListNode implements a node of a <em style=\"color:blue\">linked list</em> of \nkey-value pairs. Every node has three member variables:\n- mKey stores the key,\n- mValue stores the value associated with this key, and\n- mNextPtr stores a reference to the next node. If there is no next node, then \n  mNextPtr is None.\nObjects of class ListNode are used to represent linked lists. \nThe constructor of the class ListNode creates a single node that stores the given key and its associated value.", "class ListNode:\n    def __init__(self, key, value):\n        self.mKey = key\n        self.mValue = value\n        self.mNextPtr = None", "Given a key, the method find traverses the given list until it finds a node that stores the given key. In this case, it returns the associated value. Otherwise, None is returned.", "def find(self, key):\n    ptr = self\n    while True:\n        if ptr.mKey == key:\n            return ptr.mValue\n        if ptr.mNextPtr != None:\n            ptr = ptr.mNextPtr\n        else:\n            return\n        \nListNode.find = find\ndel find", "Given the first node of a linked list $L$, the function $L.\\texttt{insert}(k, v)$ inserts the key-value pair $(k, v)$ into the list $L$. If there is already a key-value pair in $L$ that has the same key, then the old value is overwritten. It returns a boolean that is true if a new node has been allocated.", "def insert(self, key, value):\n    while True:\n        if self.mKey == key:\n            self.mValue = value\n            return False\n        elif self.mNextPtr != None:\n            self = self.mNextPtr\n        else:\n            self.mNextPtr = ListNode(key, value)\n            return True\n\nListNode.insert = insert\ndel insert", "Given the first node of a linked list $L$, the function $L.\\texttt{delete}(k)$ deletes the first key-value pair of the form $(k, v)$ from the list $L$. If there is no such pair, the list $L$ is unchanged.
It returns a pair such that:\n- The first component of this pair is a pointer to the changed list.\n If the list becomes empty, the first component is None.\n- The second component is a Boolean that is True if a node has been deleted.", "def delete(self, key):\n previous = None\n ptr = self\n while True:\n if ptr.mKey == key:\n if previous == None:\n return ptr.mNextPtr, True\n else:\n previous.mNextPtr = ptr.mNextPtr\n return self, True\n elif ptr.mNextPtr != None:\n previous = ptr\n ptr = ptr.mNextPtr\n else:\n return self, False\n\nListNode.delete = delete\ndel delete", "Given the first node of a linked list $L$, the function $L.\\texttt{toString}()$ returns a string representing $L$.", "def toString(self):\n if self.mNextPtr != None:\n return f'{self.mKey} ↦ {self.mValue}, ' + self.mNextPtr.__str__()\n else:\n return f'{self.mKey} ↦ {self.mValue}'\n\nListNode.__str__ = toString\ndel toString", "The class ListMap implements a map using a linked list of key-value pairs. Basically, it is a wrapper for the class\nListNode. Furthermore, an object of type ListMap is iterable.", "class ListMap:\n def __init__(self):\n self.mPtr = None\n \n def find(self, key):\n if self.mPtr != None:\n return self.mPtr.find(key)\n \n def insert(self, key, value):\n if self.mPtr != None:\n return self.mPtr.insert(key, value)\n else:\n self.mPtr = ListNode(key, value)\n return True\n \n def delete(self, key):\n if self.mPtr != None:\n self.mPtr, flag = self.mPtr.delete(key)\n return flag\n return False\n \n def __iter__(self):\n return MapIterator(self.mPtr)\n \n def __str__(self):\n if self.mPtr != None:\n return '{' + self.mPtr.__str__() + '}'\n else:\n return '{}'\n \n def __repr__(self):\n return self.__str__()", "A MapIterator is an iterator that iterates over the elements of a linked list. It maintains a pointer mPtr that points to the next element.\nIt is implemented via the function __next__. 
This function either returns the next key-value pair or, if there are no more key-value pairs left, raises a\nStopIteration exception.\nIf the __iter__ method of a class $C$ returns an iterator, then we can use a \nfor-loop to iterate over the elements contained in class $C$.", "class MapIterator:\n    def __init__(self, ptr):\n        self.mPtr = ptr\n        \n    def __next__(self):\n        if self.mPtr == None:\n            raise StopIteration\n        key   = self.mPtr.mKey\n        value = self.mPtr.mValue\n        self.mPtr = self.mPtr.mNextPtr\n        return key, value\n\ndef main(n = 100):\n    S = ListMap()\n    for i in range(2, n + 1):\n        S.insert(i, True)\n    for i in range(2, n // 2 + 1):\n        for j in range(i, n // i + 1):\n            S.delete(i * j)\n    print([p for p, _ in S])  # iterates over ListMap\n    print(S.find(83))\n    print(S.find(99))\n\nmain()" ]
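The MapIterator above follows Python's standard iterator protocol: __iter__ returns an iterator object whose __next__ method raises StopIteration when it is exhausted, and the for-loop machinery drives both calls. A minimal self-contained sketch of the same protocol, independent of the ListMap classes:

```python
class CountDown:
    """Iterable counting n, n-1, ..., 1 via the iterator protocol."""

    def __init__(self, n):
        self.n = n

    def __iter__(self):
        # A for-loop calls __iter__ once to obtain a fresh iterator.
        return CountDownIterator(self.n)


class CountDownIterator:
    def __init__(self, n):
        self.current = n

    def __next__(self):
        # Raising StopIteration signals the for-loop to terminate.
        if self.current <= 0:
            raise StopIteration
        value = self.current
        self.current -= 1
        return value


values = [v for v in CountDown(3)]
```

As with MapIterator, the iterator object itself never needs an __iter__ method for the for-loop to work: the loop calls iter() only on the iterable, then repeatedly calls __next__ on the returned iterator.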
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/awi/cmip6/models/awi-cm-1-0-mr/toplevel.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Toplevel\nMIP Era: CMIP6\nInstitute: AWI\nSource ID: AWI-CM-1-0-MR\nSub-Topics: Radiative Forcings. \nProperties: 85 (42 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:37\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'awi', 'awi-cm-1-0-mr', 'toplevel')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Flux Correction\n3. Key Properties --&gt; Genealogy\n4. Key Properties --&gt; Software Properties\n5. Key Properties --&gt; Coupling\n6. Key Properties --&gt; Tuning Applied\n7. Key Properties --&gt; Conservation --&gt; Heat\n8. Key Properties --&gt; Conservation --&gt; Fresh Water\n9. Key Properties --&gt; Conservation --&gt; Salt\n10. Key Properties --&gt; Conservation --&gt; Momentum\n11. Radiative Forcings\n12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2\n13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4\n14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O\n15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3\n16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3\n17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC\n18. Radiative Forcings --&gt; Aerosols --&gt; SO4\n19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon\n20. 
Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon\n21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate\n22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect\n23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect\n24. Radiative Forcings --&gt; Aerosols --&gt; Dust\n25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic\n26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic\n27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt\n28. Radiative Forcings --&gt; Other --&gt; Land Use\n29. Radiative Forcings --&gt; Other --&gt; Solar \n1. Key Properties\nKey properties of the model\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop level overview of coupled model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of coupled model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Flux Correction\nFlux correction properties of the model\n2.1. Details\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how flux corrections are applied in the model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.flux_correction.details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Genealogy\nGenealogy and history of the model\n3.1. 
Year Released\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nYear the model was released", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.2. CMIP3 Parent\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCMIP3 parent if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.3. CMIP5 Parent\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCMIP5 parent if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.4. Previous Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nPreviously known as", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Software Properties\nSoftware properties of model\n4.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.4. Components Structure\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how model realms are structured into independent software components (coupled via a coupler) and internal software components.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.5. Coupler\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nOverarching coupling framework for model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OASIS\" \n# \"OASIS3-MCT\" \n# \"ESMF\" \n# \"NUOPC\" \n# \"Bespoke\" \n# \"Unknown\" \n# \"None\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Coupling\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of coupling in the model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. 
Atmosphere Double Flux\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "5.3. Atmosphere Fluxes Calculation Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nWhere are the air-sea fluxes calculated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Atmosphere grid\" \n# \"Ocean grid\" \n# \"Specific coupler grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "5.4. Atmosphere Relative Winds\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Tuning Applied\nTuning methodology for model\n6.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. 
In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics/diagnostics of the global mean state used in tuning model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.4. Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics/diagnostics used in tuning model/component (such as 20th century)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.5. Energy Balance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.6. Fresh Water Balance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how fresh water balance was obtained in the full system: in the various components independently or at the components coupling stage?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Key Properties --&gt; Conservation --&gt; Heat\nGlobal heat conservation properties of the model\n7.1. Global\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how heat is conserved globally", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Atmos Ocean Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the atmosphere/ocean coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Atmos Land Interface\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how heat is conserved at the atmosphere/land coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.4. 
Atmos Sea-ice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the atmosphere/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.5. Ocean Seaice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the ocean/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.6. Land Ocean Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the land/ocean coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Key Properties --&gt; Conservation --&gt; Fresh Water\nGlobal fresh water conservation properties of the model\n8.1. Global\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how fresh water is conserved globally", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Atmos Ocean Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how fresh water is conserved at the atmosphere/ocean coupling interface", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.3. Atmos Land Interface\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how fresh water is conserved at the atmosphere/land coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.4. Atmos Sea-ice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.5. Ocean Seaice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how fresh water is conserved at the ocean/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.6. Runoff\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how runoff is distributed and conserved", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.7. 
Iceberg Calving\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how iceberg calving is modeled and conserved", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.8. Endoreic Basins\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how endoreic basins (no ocean access) are treated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.9. Snow Accumulation\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how snow accumulation over land and over sea-ice is treated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Key Properties --&gt; Conservation --&gt; Salt\nGlobal salt conservation properties of the model\n9.1. Ocean Seaice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how salt is conserved at the ocean/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Key Properties --&gt; Conservation --&gt; Momentum\nGlobal momentum conservation properties of the model\n10.1. 
Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how momentum is conserved in the model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Radiative Forcings\nRadiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)\n11.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of radiative forcings (GHG and aerosols) implementation in model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2\nCarbon dioxide forcing\n12.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4\nMethane forcing\n13.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O\nNitrous oxide forcing\n14.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3\nTropospheric ozone forcing\n15.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3\nStratospheric ozone forcing\n16.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC\nOzone-depleting and non-ozone-depleting fluorinated gases forcing\n17.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.2. Equivalence Concentration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDetails of any equivalence concentrations used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"Option 1\" \n# \"Option 2\" \n# \"Option 3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.3. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Radiative Forcings --&gt; Aerosols --&gt; SO4\nSO4 aerosol forcing\n18.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.2. 
Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon\nBlack carbon aerosol forcing\n19.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon\nOrganic carbon aerosol forcing\n20.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. 
via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate\nNitrate forcing\n21.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "21.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect\nCloud albedo effect forcing (RFaci)\n22.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.2. Aerosol Effect On Ice Clouds\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRadiative effects of aerosols on ice clouds are represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "22.3. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23. 
Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect\nCloud lifetime effect forcing (ERFaci)\n23.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. Aerosol Effect On Ice Clouds\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRadiative effects of aerosols on ice clouds are represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "23.3. RFaci From Sulfate Only\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRadiative forcing from aerosol cloud interactions from sulfate aerosol only?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "23.4. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "24. Radiative Forcings --&gt; Aerosols --&gt; Dust\nDust forcing\n24.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic\nTropospheric volcanic forcing\n25.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.2. Historical Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in historical simulations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.3. Future Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in future simulations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.4. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic\nStratospheric volcanic forcing\n26.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.2. Historical Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in historical simulations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.3. Future Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in future simulations", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.4. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt\nSea salt forcing\n27.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28. Radiative Forcings --&gt; Other --&gt; Land Use\nLand use forcing\n28.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "28.2. Crop Change Only\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLand use change represented via crop change only?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "28.3. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29. Radiative Forcings --&gt; Other --&gt; Solar\nSolar forcing\n29.1. 
Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow solar forcing is provided", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"irradiance\" \n# \"proton\" \n# \"electron\" \n# \"cosmic ray\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "29.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
pombredanne/https-gitlab.lrde.epita.fr-vcsn-vcsn
doc/notebooks/automaton.coaccessible.ipynb
gpl-3.0
[ "automaton.coaccessible\nCreate a new automaton from the coaccessible part of the input, i.e., the subautomaton whose states can reach a final state.\nPreconditions:\n- None\nPostconditions:\n- Result.is_coaccessible\nSee also:\n- automaton.is_coaccessible\n- automaton.accessible\n- automaton.trim\nExamples", "import vcsn", "The following automaton has states that cannot reach any final state:", "%%automaton a\ncontext = \"lal_char(abc), b\"\n$ -> 0\n0 -> 1 a\n1 -> $\n2 -> 0 a\n1 -> 3 a\n\na.is_coaccessible()", "Calling coaccessible returns the same automaton, but without its non-coaccessible states:", "a.coaccessible()\n\na.coaccessible().is_coaccessible()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
ara-ta3/ml4se
Chapter3.ipynb
mit
[ "%matplotlib inline\n\n# -*- coding: utf-8 -*-\n#\n# 最尤推定による回帰分析\n#\n# 2015/05/19 ver1.0\n#\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\nfrom pandas import Series, DataFrame\n\nfrom numpy.random import normal\n\n#------------#\n# Parameters #\n#------------#\nN=10 # サンプルを取得する位置 x の個数\nM=[0,1,3,9] # 多項式の次数\n\n\n# データセット {x_n,y_n} (n=1...N) を用意\ndef create_dataset(num):\n dataset = DataFrame(columns=['x','y'])\n for i in range(num):\n x = float(i)/float(num-1)\n y = np.sin(2*np.pi*x) + normal(scale=0.3)\n dataset = dataset.append(Series([x,y], index=['x','y']),\n ignore_index=True)\n return dataset\n\n# 最大対数尤度(Maximum log likelihood)を計算\ndef log_likelihood(dataset, f):\n dev = 0.0\n n = float(len(dataset))\n for index, line in dataset.iterrows():\n x, y = line.x, line.y\n dev += (y - f(x))**2\n err = dev * 0.5\n beta = n / dev\n lp = -beta*err + 0.5*n*np.log(0.5*beta/np.pi)\n return lp\n\n# 最尤推定で解を求める(解法は最小二乗法と同じ)\ndef resolve(dataset, m):\n t = dataset.y\n phi = DataFrame()\n for i in range(0,m+1):\n p = dataset.x**i\n p.name=\"x**%d\" % i\n phi = pd.concat([phi,p], axis=1)\n tmp = np.linalg.inv(np.dot(phi.T, phi))\n ws = np.dot(np.dot(tmp, phi.T), t)\n\n def f(x):\n y = 0.0\n for i, w in enumerate(ws):\n y += w * (x ** i)\n return y\n\n sigma2 = 0.0\n for index, line in dataset.iterrows():\n sigma2 += (f(line.x)-line.y)**2\n sigma2 /= len(dataset)\n\n return (f, ws, np.sqrt(sigma2))\n\n# Main\ndef main():\n train_set = create_dataset(N)\n test_set = create_dataset(N)\n df_ws = DataFrame()\n\n # 多項式近似の曲線を求めて表示\n fig = plt.figure()\n for c, m in enumerate(M):\n f, ws, sigma = resolve(train_set, m)\n df_ws = df_ws.append(Series(ws,name=\"M=%d\" % m))\n\n subplot = fig.add_subplot(2,2,c+1)\n subplot.set_xlim(-0.05,1.05)\n subplot.set_ylim(-1.5,1.5)\n subplot.set_title(\"M=%d\" % m)\n\n # トレーニングセットを表示\n subplot.scatter(train_set.x, train_set.y, marker='o', color='blue')\n\n # 真の曲線を表示\n linex = np.linspace(0,1,101)\n liney = 
np.sin(2*np.pi*linex)\n subplot.plot(linex, liney, color='green', linestyle='--')\n\n # 多項式近似の曲線を表示\n linex = np.linspace(0,1,101)\n liney = f(linex)\n label = \"Sigma=%.2f\" % sigma\n subplot.plot(linex, liney, color='red', label=label)\n subplot.plot(linex, liney+sigma, color='red', linestyle='--')\n subplot.plot(linex, liney-sigma, color='red', linestyle='--')\n subplot.legend(loc=1)\n\n fig.show()\n\n # 多項式近似に対する最大対数尤度を計算\n df = DataFrame()\n train_mlh = []\n test_mlh = []\n for m in range(0,9): # 多項式の次数\n f, ws, sigma = resolve(train_set, m)\n train_mlh.append(log_likelihood(train_set, f))\n test_mlh.append(log_likelihood(test_set, f))\n df = pd.concat([df,\n DataFrame(train_mlh, columns=['Training set']),\n DataFrame(test_mlh, columns=['Test set'])],\n axis=1)\n df.plot(title='Log likelihood for N=%d' % N, grid=True, style=['-','--'])\n plt.show()\n\n \n \nif __name__ == '__main__':\n main()\n\nN = 100\n\nmain()\n\nM = [3,10,20,50]\n\nmain()\n\ndef create_dataset(num):\n dataset = DataFrame(columns=['x','y'])\n for i in range(num):\n x = float(i)/float(num-1)\n y = np.sin(2*np.pi*x) + normal(scale=0.3)\n dataset = dataset.append(Series([x,y], index=['x','y']),\n ignore_index=True)\n return dataset\n\n\ndef create_dataset(num):\n dataset = DataFrame(columns=['x','y'])\n for i in range(num):\n x = float(i)/float(num-1)\n y = 0 + normal(scale=0.3)\n dataset = dataset.append(Series([x,y], index=['x','y']),\n ignore_index=True)\n return dataset\n", "main()", "main()\n\nmain()\n\nmain()\n\nM = [0,1,2,3]\n\nmain()\n\nmain()\n\n# -*- coding: utf-8 -*-\n#\n# 最尤推定による正規分布の推定\n#\n# 2015/04/23 ver1.0\n#\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\nfrom pandas import Series, DataFrame\n\nfrom numpy.random import normal\nfrom scipy.stats import norm\n\ndef gauss():\n fig = plt.figure()\n for c, datapoints in enumerate([2,4,10,100]): # サンプル数\n ds = normal(loc=0, scale=1, size=datapoints)\n mu = np.mean(ds) # 平均の推定値\n sigma = 
np.sqrt(np.var(ds)) # 標準偏差の推定値\n\n subplot = fig.add_subplot(2,2,c+1)\n subplot.set_title(\"N=%d\" % datapoints)\n # 真の曲線を表示\n linex = np.arange(-10,10.1,0.1)\n orig = norm(loc=0, scale=1)\n subplot.plot(linex, orig.pdf(linex), color='green', linestyle='--')\n # 推定した曲線を表示\n est = norm(loc=mu, scale=np.sqrt(sigma))\n label = \"Sigma=%.2f\" % sigma\n subplot.plot(linex, est.pdf(linex), color='red', label=label)\n subplot.legend(loc=1)\n # サンプルの表示\n subplot.scatter(ds, orig.pdf(ds), marker='o', color='blue')\n subplot.set_xlim(-4,4)\n subplot.set_ylim(0)\n fig.show()\n\nif __name__ == '__main__':\n gauss()\n", "標準偏差の推定値は実際より小さくなるらしい\nけど、↑ではなってないw", "gauss()\n\ngauss()", "標準偏差が小さくなるのは裾野のデータの発生確率が低いため\n\n推定量\n\n何らかの理屈にもとづいて推定値を計算する方法が得られた時に、その計算方法を推定量と呼ぶらしい\n方法なのに量?\n一致性と不偏性を持つのが良い推定量\n一致性\nデータを大きくしていった時に真の値に近づいていくこと\n一致性を持つ推定量を一致推定量というらしい\n不偏性\n何回か取得した推定値の平均が真の値に近づいていくこと", "# -*- coding: utf-8 -*-\n#\n# 推定量の一致性と不偏性の確認\n#\n# 2015/06/01 ver1.0\n#\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\nfrom pandas import Series, DataFrame\nfrom numpy.random import normal\n\ndef draw_subplot(subplot, linex1, liney1, linex2, liney2, ylim):\n subplot.set_ylim(ylim)\n subplot.set_xlim(min(linex1), max(linex1)+1)\n subplot.scatter(linex1, liney1)\n subplot.plot(linex2, liney2, color='red', linewidth=4, label=\"mean\")\n subplot.legend(loc=0)\n\ndef bias():\n mean_linex = []\n mean_mu = []\n mean_s2 = []\n mean_u2 = []\n raw_linex = []\n raw_mu = []\n raw_s2 = []\n raw_u2 = []\n for n in np.arange(2,51): # 観測データ数Nを変化させて実行\n for c in range(2000): # 特定のNについて2000回の推定を繰り返す\n ds = normal(loc=0, scale=1, size=n)\n raw_mu.append(np.mean(ds))\n raw_s2.append(np.var(ds))\n raw_u2.append(np.var(ds)*n/(n-1))\n raw_linex.append(n)\n mean_mu.append(np.mean(raw_mu)) # 標本平均の平均\n mean_s2.append(np.mean(raw_s2)) # 標本分散の平均\n mean_u2.append(np.mean(raw_u2)) # 不偏分散の平均\n mean_linex.append(n)\n\n # プロットデータを40個に間引きする\n raw_linex = raw_linex[0:-1:50]\n raw_mu = raw_mu[0:-1:50]\n 
raw_s2 = raw_s2[0:-1:50]\n raw_u2 = raw_u2[0:-1:50]\n\n # 標本平均の結果表示\n fig1 = plt.figure()\n subplot = fig1.add_subplot(1,1,1)\n subplot.set_title('Sample mean')\n draw_subplot(subplot, raw_linex, raw_mu, mean_linex, mean_mu, (-1.5,1.5))\n\n # 標本分散の結果表示\n fig2 = plt.figure()\n subplot = fig2.add_subplot(1,1,1)\n subplot.set_title('Sample variance')\n draw_subplot(subplot, raw_linex, raw_s2, mean_linex, mean_s2, (-0.5,3.0))\n\n # 不偏分散の結果表示\n fig3 = plt.figure()\n subplot = fig3.add_subplot(1,1,1)\n subplot.set_title('Unbiased variance')\n draw_subplot(subplot, raw_linex, raw_u2, mean_linex, mean_u2, (-0.5,3.0))\n\n fig1.show()\n fig2.show()\n fig3.show()\n \nif __name__ == '__main__':\n bias()\n " ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
raymondberg/doodles
events/chipy-dojo-2016-01/coding_dojo.ipynb
gpl-2.0
[ "import pandas as pd\nimport csv\nimport numpy as np\nimport matplotlib.pyplot\nimport math\n\n\ntitle_file = \"titles.csv\"\nrelease_date_file = \"release_dates.csv\"\ncast_file = \"cast.csv\"\n\nfiles = [title_file, release_date_file,cast_file]\ndataframes = {}\n\nfor f in files:\n csv_file_object = pd.read_csv(open(f), header=0)\n dataframes[f] = csv_file_object \n", "Question 1: How many movies are listed in the titles dataframe?\n210591\nBonus\nUnique movies == 192839", "dataframes['titles.csv'].count()\nlen(dataframes['titles.csv'].ix[:,0].unique())", "Question 2: What are the earliest two films listed in the titles dataframe?\nMiss Jerry (1894) && Reproduction of the Corbett and Fitzsimmons Fight (1897)", "dataframes['titles.csv'].sort_values(['year']).iloc[0:2]", "Question 3: How many movies have the title \"Hamlet\"?\n19", "len(np.where(dataframes['titles.csv']['title'] == 'Hamlet')[0])", "Question 4: How many movies are titled \"North by Northwest\"?\n1", "len(np.where(dataframes['titles.csv']['title'] == 'North by Northwest')[0])", "Question 5: When was the first movie titled \"Hamlet\" made?\n1910", "dataframes['titles.csv'].loc[(dataframes['titles.csv']['title']=='Hamlet'), 'year'].sort_values().iloc[0]", "Question 6: List all of the \"Treasure Island\" movies from earliest to most recent\n\n1918\n1920\n1934\n1950\n1972\n1973\n1985\n1999", "dataframes['titles.csv'].loc[(dataframes['titles.csv']['title']=='Treasure Island'), 'year'].sort_values()", "Question 7. How many movies were made in the year 1950?\n1033", "dataframes['titles.csv'].loc[(dataframes['titles.csv']['year']==1950), 'year'].count()", "Question 8. How many movies were made from 1950 through 1959?\n11999", "dataframes['titles.csv'].loc[(dataframes['titles.csv']['year']>=1950) & (dataframes['titles.csv']['year']<=1959), 'year'].count()", "Question 9. 
In what years has a movie titled \"Batman\" been released?\n2", "len(dataframes['titles.csv'].loc[(dataframes['titles.csv']['title']=='Batman'), 'year'].unique())", "Question 10. How many roles were there in the movie \"Inception\"?\n72", "dataframes['cast.csv'].loc[(dataframes['cast.csv']['title']=='Inception'), 'character'].count()", "WE GIVE UP", "dataframes['titles.csv'].groupby('year')\n#dataframes['titles.csv']['decade']= pd.Series(math.floor(dataframes['titles.csv']['year'].apply(int)/10.0)*10)\n#math.floor(dataframes['titles.csv']['year'].apply(float))\n " ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
raman-sharma/stanford-mir
numpy_basics.ipynb
mit
[ "import numpy, scipy, matplotlib.pyplot as plt, pandas, librosa", "&larr; Back to Index\nNumPy and SciPy\nThe quartet of NumPy, SciPy, Matplotlib, and IPython is a popular combination in the Python world. We will use each of these libraries in this workshop.\nTutorial\nNumPy is one of the most popular libraries for numerical computing in the world. It is used in several disciplines including image processing, finance, bioinformatics, and more. This entire workshop is based upon NumPy and its derivatives.\nIf you are new to NumPy, follow this NumPy Tutorial.\nSciPy is a Python library for scientific computing which builds on top of NumPy. If NumPy is like the Matlab core, then SciPy is like the Matlab toolboxes. It includes support for linear algebra, sparse matrices, spatial data structures, statistics, and more.\nWhile there is a SciPy Tutorial, it isn't critical that you follow it for this workshop.\nSpecial Arrays", "print numpy.arange(5)\n\nprint numpy.linspace(0, 5, 10, endpoint=False)\n\nprint numpy.zeros(5)\n\nprint numpy.ones(5)\n\nprint numpy.ones((5,2))\n\nprint scipy.randn(5) # random Gaussian, zero-mean unit-variance\n\nprint scipy.randn(5,2)", "Slicing Arrays", "x = numpy.arange(10)\nprint x[2:4]\n\nprint x[-1]", "The optional third parameter indicates the increment value:", "print x[0:8:2]\n\nprint x[4:2:-1]", "If you omit the start index, the slice implicitly starts from zero:", "print x[:4]\n\nprint x[:999]\n\nprint x[::-1]", "Array Arithmetic", "x = numpy.arange(5)\ny = numpy.ones(5)\nprint x+2*y", "dot computes the dot product, or inner product, between arrays or matrices.", "x = scipy.randn(5)\ny = numpy.ones(5)\nprint numpy.dot(x, y)\n\nx = scipy.randn(5,3)\ny = numpy.ones((3,2))\nprint numpy.dot(x, y)", "Boolean Operations", "x = numpy.arange(10)\nprint x < 5\n\ny = numpy.ones(10)\nprint x < y", "Distance Metrics", "from scipy.spatial import distance\nprint distance.euclidean([0, 0], [3, 4])\nprint distance.sqeuclidean([0, 0], [3, 4])\nprint 
distance.cityblock([0, 0], [3, 4])\nprint distance.chebyshev([0, 0], [3, 4])", "The cosine distance measures the angle between two vectors:", "print distance.cosine([67, 0], [89, 0])\nprint distance.cosine([67, 0], [0, 89])", "Sorting\nNumPy arrays have a method, sort, which sorts the array in-place.", "x = scipy.randn(5)\nprint x\nx.sort()\nprint x", "numpy.argsort returns an array of indices, ind, such that x[ind] is a sorted version of x.", "x = scipy.randn(5)\nprint x\nind = numpy.argsort(x)\nprint ind\nprint x[ind]", "&larr; Back to Index" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
statsmodels/statsmodels.github.io
v0.13.1/examples/notebooks/generated/chi2_fitting.ipynb
bsd-3-clause
[ "Least squares fitting of models to data\nThis is a quick introduction to statsmodels for physical scientists (e.g. physicists, astronomers) or engineers.\nWhy is this needed?\nBecause most of statsmodels was written by statisticians and they use a different terminology and sometimes methods, making it hard to know which classes and functions are relevant and what their inputs and outputs mean.", "import numpy as np\nimport pandas as pd\nimport statsmodels.api as sm", "Linear models\nAssume you have data points with measurements y at positions x as well as measurement errors y_err.\nHow can you use statsmodels to fit a straight line model to this data?\nFor an extensive discussion see Hogg et al. (2010), \"Data analysis recipes: Fitting a model to data\" ... we'll use the example data given by them in Table 1.\nSo the model is f(x) = a * x + b and on Figure 1 they print the result we want to reproduce ... the best-fit parameter and the parameter errors for a \"standard weighted least-squares fit\" for this data are:\n* a = 2.24 +- 0.11\n* b = 34 +- 18", "data = \"\"\"\n x y y_err\n201 592 61\n244 401 25\n 47 583 38\n287 402 15\n203 495 21\n 58 173 15\n210 479 27\n202 504 14\n198 510 30\n158 416 16\n165 393 14\n201 442 25\n157 317 52\n131 311 16\n166 400 34\n160 337 31\n186 423 42\n125 334 26\n218 533 16\n146 344 22\n\"\"\"\ntry:\n from StringIO import StringIO\nexcept ImportError:\n from io import StringIO\ndata = pd.read_csv(StringIO(data), delim_whitespace=True).astype(float)\n\n# Note: for the results we compare with the paper here, they drop the first four points\ndata.head()", "To fit a straight line use the weighted least squares class WLS ... the parameters are called:\n* exog = sm.add_constant(x)\n* endog = y\n* weights = 1 / sqrt(y_err)\nNote that exog must be a 2-dimensional array with x as a column and an extra column of ones. 
Adding this column of ones means you want to fit the model y = a * x + b, leaving it off means you want to fit the model y = a * x.\nAnd you have to use the option cov_type='fixed scale' to tell statsmodels that you really have measurement errors with an absolute scale. If you do not, statsmodels will treat the weights as relative weights between the data points and internally re-scale them so that the best-fit model will have chi**2 / ndf = 1.", "exog = sm.add_constant(data[\"x\"])\nendog = data[\"y\"]\nweights = 1.0 / (data[\"y_err\"] ** 2)\nwls = sm.WLS(endog, exog, weights)\nresults = wls.fit(cov_type=\"fixed scale\")\nprint(results.summary())", "Check against scipy.optimize.curve_fit", "# You can use `scipy.optimize.curve_fit` to get the best-fit parameters and parameter errors.\nfrom scipy.optimize import curve_fit\n\n\ndef f(x, a, b):\n return a * x + b\n\n\nxdata = data[\"x\"]\nydata = data[\"y\"]\np0 = [0, 0] # initial parameter estimate\nsigma = data[\"y_err\"]\npopt, pcov = curve_fit(f, xdata, ydata, p0, sigma, absolute_sigma=True)\nperr = np.sqrt(np.diag(pcov))\nprint(\"a = {0:10.3f} +- {1:10.3f}\".format(popt[0], perr[0]))\nprint(\"b = {0:10.3f} +- {1:10.3f}\".format(popt[1], perr[1]))", "Check against self-written cost function", "# You can also use `scipy.optimize.minimize` and write your own cost function.\n# This does not give you the parameter errors though ... you'd have\n# to estimate the HESSE matrix separately ...\nfrom scipy.optimize import minimize\n\n\ndef chi2(pars):\n \"\"\"Cost function.\"\"\"\n y_model = pars[0] * data[\"x\"] + pars[1]\n chi = (data[\"y\"] - y_model) / data[\"y_err\"]\n return np.sum(chi ** 2)\n\n\nresult = minimize(fun=chi2, x0=[0, 0])\npopt = result.x\nprint(\"a = {0:10.3f}\".format(popt[0]))\nprint(\"b = {0:10.3f}\".format(popt[1]))", "Non-linear models", "# TODO: we could use the examples from here:\n# http://probfit.readthedocs.org/en/latest/api.html#probfit.costfunc.Chi2Regression" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
LSSTC-DSFP/LSSTC-DSFP-Sessions
Sessions/Session09/Day4/workbook_QPO.ipynb
mit
[ "import numpy as np\nimport scipy.fftpack as fftpack\nfrom astropy.table import Table\nimport matplotlib.pyplot as plt\nimport matplotlib.font_manager as font_manager\n%matplotlib inline\nfont_prop = font_manager.FontProperties(size=16)", "Using Fourier Analysis to Analyze Quasi-Periodic Oscillations\nBy Abigail Stevens\nProblem 1: damped harmonic oscillator example\nGenerating a light curve", "a = Table()\na.meta['dt'] = 0.0001 # time step, in seconds\na.meta['duration'] = 200 # length of time, in seconds\na.meta['omega'] = 2*np.pi # angular frequency, in radians\na.meta['phi'] = 0.0 # offset angle, in radians", "1a. Compute the time steps and a cosine harmonic with the above-defined properties.\n1b. Compute four exponentially damped versions of the harmonic oscillation.\n$$D(t_i) = e^{-\\zeta t_i}H(t_i)$$\nPick your own four $\\zeta$s! I recommend values between 0.01 and 1.\n1c. Plot them all on top of each other.\n1d. Take the power spectrum of the harmonic and 4 damped harmonic time series.\nThe power $P$ at each frequency $\\nu_i$, for the Fourier transform $F$, is $$P(\\nu_i)=|F(\\nu_i)|^2$$\nComputing the Fourier frequencies and the index of the Nyquist frequency.", "freq = fftpack.fftfreq(len(a), d=a.meta['dt'])\nnyq_ind = int(len(a)/2.) # the index of the last positive Fourier frequency", "1e. Plot them!\nNotice the trend between the width of the peak in the power spectrum, and the strength of the damping factor.\nFor bonus points, put in a key/legend with the corresponding $\\zeta$ value for each curve.\nProblem 2: Analyzing NICER data of the black hole X-ray binary MAXI J1535-571\nImport it with astropy tables from the file \"J1535_evt.fits\".\nThe data have come to us as an 'event list', meaning that it's a list of the time at which a photon was detected (in seconds, in spacecraft clock time) and what the energy of the photon was (a channel number; channel/100=photon energy in keV). \n2a. 
Turn this rag-tag list of photons into an evenly-spaced light curve\n2a.i. First, clean it up a little by only keeping photons with energies greater than 1 keV and less than 12 keV, using a mask. \n2a.ii. Then, make an evenly-spaced light curve array with np.histogram. Pick a light curve time resolution of dt=1/8 seconds to start with. To put your light curve in units of count rate, divide the histogram (counts per bin) by dt (seconds per bin); to avoid typecasting errors, do this by multiplying by int(1/dt).\n(yes, it takes a second or two; you're using half a million time bins in your light curve!)\n2b. Let's try taking the power spectrum of it.\n$$P(\nu_i)=|F(\nu_i)|^2$$\nwhere $F$ is the FFT of your light curve.", "freq = fftpack.fftfreq(len(psd), d=dt)\nnyq_ind = int(len(psd)/2.) # the index of the last positive Fourier frequency", "Plot it!\nIt's ugly! But more importantly, you can't get useful science out of it. \nWhat's going on?\n\nThere are gaps in the light curve due to the orbit of the spacecraft (and occasionally stuff gets in the way). This has the effect of inserting top-hat windows into our function, which give the lumpy bumps at ~0.25 Hz. So, we need to break the light curve up into shorter segments that won't have weird drop-outs.\nThere is a giant DC component at $\nu=0$. This is not astrophysical in origin, but from the mean of the light curve.\nPower spectra are often plotted on log-log scales, but the power gets really noisy and 'scatter-y' at higher frequencies. \nThe astute observer will notice that we can only go up to a Nyquist frequency of 4 Hz. There are interesting astrophysical signals above 4 Hz, but if we did smaller dt while keeping the very long segment length, we'd have >1 million time bins, which can be asking a lot of a laptop processor. \n\n2c. Segments!\n2c.i. Turn your light curve code from 2a.ii. into a function:", "def make_lc(events, start, end, dt):\n", "2c.ii. 
Sometimes, the detector is on and recording photons, but it's pointed too close to the Earth, or a structure on the spacecraft is occulting part of the view, or the instrument is moving through a zone of high particle background, or other things. The times when these things happen are recorded, and in data reduction you make a list of Good Time Intervals, or GTIs, which is when you can use good science data. I made a list of GTIs for this data file that are longer than 4 seconds long, which you can read in from \"J1535_gti.fits\".\n2c.iii. Not only do we want to only use data in the GTIs, but we want to split the light curve up into multiple equal-length segments, take the power spectrum of each segment, and average them together. By using shorter time segments, we can use finer dt on the light curves without having so many bins for the computation that our computer grinds to a halt. There is the added bonus that the noise amplitudes will tend to cancel each other out, and the signal amplitudes will add, and we get better signal-to-noise!\nAs you learned in Jess's lecture yesterday, setting the length of the segment determines the lowest frequency you can probe, but for stellar-mass compact objects where we're usually interested in variability above ~0.1 Hz, this is an acceptable trade-off.", "time = np.asarray(j1535['TIME']) ## Doing this so that we can re-run\nseg_length = 32. 
# seconds\ndt = 1./128.# seconds\nn_bins = int(seg_length/dt) # Number of time bins in a segment of light curve\npsd_avg = np.zeros(n_bins) # initiating, to keep running sum (then average at end)\nn_seg = 0\nfor (start_gti, stop_gti) in zip(gti_tab['START'], gti_tab['STOP']):\n start_time = start_gti\n end_time = start_time + seg_length\n while end_time <= stop_gti:\n ## Make a mask of events in this segment\n \n ## Keep the stuff not in this segment for next time\n\n ## Make the light curve\n\n ## Turn that into a power spectrum\n\n ## Keep a running sum (to average at end)\n\n ## Print out progress\n if n_seg % 5 == 0:\n print(n_seg)\n ## Incrementing for next loop\n n_seg += 1\n start_time += seg_length\n end_time += seg_length\n \n## Divide summed powers by n_seg to get the average\n", "Plot it! Use similar code I gave you above to make the array of Fourier frequencies and get the index of the Nyquist frequency.\n2d. Mean-subtracted\nSo, you can see something just to the left of 10 Hz much clearer, but there's this giant whopping signal at the lowest frequency bin. This is what I've heard called the 'DC component', which arises from the mean count rate of the light curve. To get rid of it, subtract the mean from your light curve segment before taking the Fourier transform. Otherwise, keep the same code as above for 2c.iii. You may want to put some of this in a function for future use in this notebook.\n2e. Error on average power\nThe average power at a particular frequency has a chi-squared distribution with two degrees of freedom about the true underlying power spectrum. So, the error is the power value divided by the square root of the number of segments. A big reason why we love power spectra(/periodograms) is because this is so straightforward!
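In code, the error step is one line (a minimal sketch; `psd_avg` and `n_seg` stand for the averaged power and segment count accumulated in the 2c.iii loop, with toy values here):

```python
import numpy as np

# Toy averaged power spectrum: pretend we averaged over 16 segments.
n_seg = 16
psd_avg = np.array([40.0, 10.0, 4.0, 2.5, 2.1])  # averaged powers per frequency bin

# Chi-squared (2 dof) statistics: error = power / sqrt(number of segments)
err_psd = psd_avg / np.sqrt(n_seg)

print(err_psd)  # each error is 1/4 of the power here, since sqrt(16) = 4
```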
\nOne way to intuitively check if your errors are way-overestimated or way-underestimated is whether the size of the error bar looks commeasurate with the amount of bin-to-bin scatter of power at neighbouring frequencies.\nPlotting, this time with ax.errorbar instead of ax.plot.\nThe thing at ~8 Hz is a low-frequency QPO, and the hump at-and-below 1 Hz is broadband noise (which we'll discuss in detail this afternoon)!! Now that you've got the basic analysis step complete, we'll focus on plotting the data in a meaningful way so you can easily extract information about the QPO and noise.\n2f. Re-binning\nWe often plot power spectra on log-log scaled axes (so, log on both the X and Y), and you'll notice that there's a big noisy part above 10 Hz. It is common practice to bin up the power spectrum geometrically (which is like making it equal-spaced in when log-plotted). \nFor this example, I'll use a re-binning factor of 1.03. If new bin 1 has the width of one old bin, new bin 2 will be some 1.03 bins wider. New bin 3 will be 1.03 times wider than that (the width of new bin 2), etc. For the first couple bins, this will round to one old bin (since you can only have an integer number of bins), but eventually a new bin will be two old bins, then more and more as you move higher in frequency. If the idea isn't quite sticking, try drawing out a representation of old bins and how the new bins get progressively larger by the rebinning factor.\nFor a given new bin x that spans indices a to b in the old bin array: \n$$\\nu_{x} = \\frac{1}{b-a}\\sum_{i=a}^{b}\\nu_{i}$$\n$$P_{x} = \\frac{1}{b-a}\\sum_{i=a}^{b}P_{i}$$\n$$\\delta P_{x} = \\frac{1}{b-a}\\sqrt{\\sum_{i=a}^{b}(\\delta P_{i})^{2}}$$", "def rebin(freq, power, err_power, rebin_factor=1.05):\n \"\"\"\n Re-bin the power spectrum in frequency space by some re-binning factor\n (rebin_factor > 1). 
This is sometimes called 'geometric re-binning' or \n 'logarithmic re-binning', as opposed to linear re-binning \n (e.g., grouping by 2)\n\n Parameters\n ----------\n freq : np.array of floats\n 1-D array of the Fourier frequencies.\n\n power : np.array of floats\n 1-D array of the power at each Fourier frequency, with any/arbitrary\n normalization.\n\n err_power : np.array of floats\n 1-D array of the error on the power at each Fourier frequency, with the\n same normalization as the power.\n\n rebin_factor : float\n The factor by which the data are geometrically re-binned.\n\n Returns\n -------\n rb_freq : np.array of floats\n 1-D array of the re-binned Fourier frequencies.\n\n rb_power : np.array of floats\n 1-D array of the power at the re-binned Fourier frequencies, with the\n same normalization as the input power array.\n\n rb_err : np.array of floats\n 1-D array of the error on the power at the re-binned Fourier\n frequencies, with the same normalization as the input error on power.\n \"\"\"\n assert rebin_factor >= 1.0\n\n rb_power = np.asarray([]) # Array of re-binned power\n rb_freq = np.asarray([]) # Array of re-binned frequencies\n rb_err = np.asarray([]) # Array of error in re-binned power\n real_index = 1.0 # The unrounded next index in power\n int_index = 1 # The int of real_index, added to current_m every iteration\n current_m = 1 # Current index in power\n prev_m = 0 # Previous index m\n\n ## Loop through the length of the array power, new bin by new bin, to\n ## compute the average power and frequency of that new geometric bin.\n while current_m < len(power):\n\n\n\n return rb_freq, rb_power, rb_err", "Apply this to the data (using JUST the frequency, power, and error at positive Fourier frequencies). Start with a rebin factor of 1.03.\nPlay around with a few different values of rebin_factor to see how it changes the plotted power spectrum. 1 should give back exactly what you put in, and 1.1 tends to bin things up quite a lot.\nCongratulations! 
You can make great-looking power spectra! Now, go back to part 2d. and try 4 or 5 different combinations of dt and seg_length. What happens when you pick too big of a dt to see the QPO frequency? What if your seg_length is really short?\nOne of the most important things to notice is that for a real astrophysical signal, the QPO (and low-frequency noise) are present for a variety of different dt and seg_length parameters. \n2g. Normalization\nThe final thing standing between us and a publication-ready power spectrum plot is the normalization of the power along the y-axis. The normalization that's commonly used is fractional rms-squared normalization, sometimes just called the rms normalization. For a power spectrum created from counts/second unit light curves, the equation is:\n$$P_{frac.rms2} = P \\times \\frac{2*dt}{N * mean^2}$$\nP is the power we already have,\ndt is the time step of the light curve,\nN is n_bins, the number of bins in one segment, and\nmean is the mean count rate (in counts/s) of the light curve (so, you will need to go back to 2d. and re-run keeping a running sum-then-average of the mean count rate).\nDon't forget to apply the normalization to the error, and re-bin after!\n2h. Poisson noise level\nNotice that the Poisson noise is a power law with slope 0 at high frequencies. With this normalization, we can predict the power of the Poisson noise level from the mean counts/s rate of the light curve! \n$$P_{noise} = 2/mean$$\nCompute this noise level, and plot it with the power spectrum.\nYour horizontal Poisson noise line should be really close to the power at and above ~10 Hz.\n2i. For plotting purposes, we sometimes subtract the Poisson noise level from the power before plotting.\nOnce we've done this and removed the noise, we can also plot the data in units of Power, instead of Power/Hz, by multiplying the power by the frequency. 
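Putting 2g and 2h together in code (a sketch only; `psd_avg`, `dt`, `n_bins`, the averaged mean rate, and the segment count are assumed from the earlier loop, with toy stand-in values here):

```python
import numpy as np

# Toy values standing in for quantities accumulated in the segment loop
dt = 1.0 / 128.0        # light curve time resolution (s)
n_bins = 4096           # bins per segment (32 s / dt)
mean_rate = 200.0       # mean count rate (counts/s), averaged over segments
n_seg = 16              # number of averaged segments
psd_avg = np.array([3.2e7, 1.1e7, 8.0e6])  # averaged raw powers (toy numbers)
err_psd = psd_avg / np.sqrt(n_seg)         # error on the average power

# Fractional rms-squared normalization: P * 2*dt / (N * mean^2)
norm = 2.0 * dt / (n_bins * mean_rate ** 2)
psd_rms = psd_avg * norm
err_rms = err_psd * norm   # the error gets the same normalization

# Predicted Poisson noise level in this normalization
noise_level = 2.0 / mean_rate

print(psd_rms, noise_level)
```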
Recall that following the propagation of errors, you will need to multiply the error by the frequency as well, but not subtract the Poisson noise level there.\nBeautiful! This lets us see the components clearly above the noise and see their relative contributions to the power spectrum (and thus to the light curve).\nRecap of what you learned in problem 2:\nYou are now able to take a light curve, break it into appropriate segments using the given Good Time Intervals, compute the average power spectrum (without weird aliasing artefacts or a strong DC component), and plot it in such away that you can see the signals clearly.\nProblem 3: It's pulsar time\nWe are going to take these skills and now work on two different observations of the same source, the ultra-luminous X-ray pulsar Swift J0243.6+6124. The goal is for you to see how different harmonics in the pulse shape manifest in the power spectrum.\n3a. Load the data and GTI\nUsing the files J0243-122_evt.fits and J0243-134_evt.fits, and the corresponding x_gti.fits.\n3b. Apply a mask to remove energies below 0.5 keV and above 12 keV.\n3c. Make the average power spectrum for each data file.\nGo through in the same way as 2d (using the make_lc function you already wrote). Re-bin and normalize (using the rebin function you already wrote). The spin period is 10 seconds, so I don't recommend using a segment length shorter than that (try 64 seconds). Since the period is quite long (for a pulsar), you can use a longer dt, like 1/8 seconds. Use the same segment length and dt for both data sets.", "seg_length = 64. # seconds\ndt = 1./8.# seconds\nn_bins = int(seg_length/dt) # Number of time bins in a segment of light curve", "If you didn't turn the power spectrum code from 2d into a function, do that here.\nData set 1\nData set 2\nPlot together!\n3d. Make a phase-folded light curve\n3d.i. Determine the spin period from the frequency of the lowest (fundamental) tone in the power spectrum. Remember that period=1/f. 
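Pulling the period out of the averaged power spectrum can be sketched like this (assuming positive-frequency `freq` and `psd_avg` arrays as built earlier; toy arrays are used here so the peak is obvious):

```python
import numpy as np

# Toy positive-frequency power spectrum with its strongest peak at 0.1 Hz
freq = np.array([0.05, 0.1, 0.15, 0.2])    # Hz
psd_avg = np.array([2.0, 50.0, 3.0, 2.5])  # arbitrary power units

fund_freq = freq[np.argmax(psd_avg)]  # frequency of the strongest (fundamental) tone
period = 1.0 / fund_freq              # seconds

print(period)  # 10.0 s, like the pulsar's 10 s spin period
```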
Hint: np.argmax is a great function for quick, brute-force things.\n3d.ii. Use the modulo operator of the light curve (starting it at time zero) to determine the relative phase of each photon event, then divide by the period to have relative phase from 0 to 1.\n3d.iii. Make an array of 20 phase bins and put the relative phases in their phase bins with np.histogram.\n3d.iv. Plot the light curve next to its accompanying power spectrum!\nData set 1:\nData set 2:\nThough these are very quickly made phase-folded light curves, you can see how the light curve with stronger harmonic content shows more power at the harmonic frequency in the power spectrum, and the light curve that's more asymmetric in rise and fall times (number 1) shows powe at higher harmonics!\nIf you want to see what a real phase-folded pulse profile looks like for these data, check out the beautiful plots in Wilson-Hodge et al. 2018: https://ui.adsabs.harvard.edu/abs/2018ApJ...863....9W/abstract\nData set 1 has an observation ID that ends in 122 and corresponds to MJD 58089.626, and data set 2 has an observation ID that ends in 134 and corresponds to MJD 58127.622.\nBonus challenges:\n1. Dynamical power spectrum (/spectrogram):\nInstead of averaging the power spectra at each segment, save it in an array (if one power spectrum has length n_bins, the array will end up with size (n_bins, n_seg). Apply the normalization and re-binning to each segment, then make a 3d plot with frequency along the y-axis, segment (which corresponds to elapsed time) along the x-axis, and power as the colormap. This approach is useful if you think the QPO turns on and off rapidly (high-frequency QPOs do this) or is changing its frequency on short timescales. If the frequency is changing, this can artificially broaden the Lorentzian-shaped peak we see in the average power spectrum. Or, sometimes it's intrinsically broad. A look at the dynamical power spectrum will tell you! 
This will be most interesting on the black hole J1535 data, but could be done for both objects.\n2. Energy bands:\nMake and plot power spectra of the same object using light curves of different energy bands. For example, try 1-2 keV, 2-4 keV, and 4-12 keV. Try to only loop through the event list once as you do the analysis for all three bands. What do you notice about the energy dependence of the signal?\n3. Optimization:\nOptimize the memory and/or time usage of the algorithm we wrote that reads through the light curve and makes an average power spectrum. You can use skills you learned at this and previous DSFP sessions!\n4. Modeling:\nUsing astropy.modeling (or your own preferred modeling package), fit the power spectrum of the black hole J1535 with a Lorentzian for the QPO, a few Lorentzians for the low-frequency broadband noise, and a power law for the Poisson noise level. In papers we often report the centroid frequency and the full-width at half maximum (FWHM) of the QPO Lorentzian model. How would you rule out the presence of a QPO at, e.g., 12 Hz?" ]
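As an appendix, here is one possible way to fill in the body of the `rebin` loop from 2f (a sketch under stated assumptions, not the canonical solution; it skips the DC bin at index 0 and uses `round` to pick each new bin's integer width):

```python
import numpy as np

def rebin(freq, power, err_power, rebin_factor=1.05):
    """Geometrically re-bin a power spectrum (sketch implementation)."""
    rb_freq, rb_power, rb_err = [], [], []
    real_index = 1.0   # unrounded width (in old bins) of the next new bin
    current_m = 1      # current index in power (skip the DC bin at index 0)
    while current_m < len(power):
        prev_m = current_m
        # Each new bin spans int(round(real_index)) old bins, at least one
        current_m = min(prev_m + max(int(round(real_index)), 1), len(power))
        sl = slice(prev_m, current_m)
        n = current_m - prev_m
        rb_freq.append(np.mean(freq[sl]))                       # mean frequency
        rb_power.append(np.mean(power[sl]))                     # mean power
        rb_err.append(np.sqrt(np.sum(err_power[sl] ** 2)) / n)  # propagated error
        real_index *= rebin_factor  # next new bin is rebin_factor times wider
    return np.asarray(rb_freq), np.asarray(rb_power), np.asarray(rb_err)

# Tiny check: with rebin_factor = 1 and few bins, each old bin maps to one new bin
f = np.arange(6, dtype=float)
p = np.ones(6)
e = np.ones(6)
rf, rp, re = rebin(f, p, e, rebin_factor=1.0)
print(len(rf))  # 5: one new bin per old bin (DC bin at index 0 is skipped)
```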
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
iRipVanWinkle/ml
Data Science UA - September 2017/Lecture 03 - Basic Statistics/Overview_of_distributions.ipynb
mit
[ "Introduction to Statistics\nMany statistical tools and techniques used in data analysis are based on probability, a measure on a scale from 0 to 1 of the likelihood that an event occurs (0: the event never occurs; 1: the event always occurs). \nVariables in the columns of a data set can be thought of as random variables: their values vary due to chance. \nThe distribution of the outcomes (values) of a random variable is described using a probability distribution (function). In statistics, there are several common probability distributions, corresponding to various \"shapes\". The ones most commonly used to model real-life random events are the Uniform, Normal, Binomial, Exponential, Poisson, and Lognormal distributions.\nAbout this Notebook\nThis notebook presents several common probability distributions and how to work with them in Python.\nThe Uniform Distribution\nThe uniform distribution is a probability distribution where each value within a certain range is equally likely to occur and values outside of the range never occur.\nImporting Needed packages\nLet's generate some uniform data and plot a density curve using the scipy.stats library", "# Uncomment next command if you need to install a missing module\n#!pip install statsmodels\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport scipy.stats as stats\n%matplotlib inline", "Generate 10,000 uniform samples and plot their density curve:", "uniform_data = stats.uniform.rvs(size = 10000, # Generate 10000 numbers\n loc = 0, # From 0 \n scale = 20) # To 20\n\npd.DataFrame(uniform_data).plot(kind = \"density\", # Plot the distribution\n figsize = (9,9),\n xlim = (-5,25))", "Notes:\nThe plot above is an approximation of the theoretical uniform distribution because it is based on a sample of observations: we generated 10,000 data points from a uniform distribution spanning the range 0 to 20.
\nIn the density plot, we see that the density of sample data is almost level, i.e., all values have the same probability of occurring. \nNote: the area under a probability density curve is always equal to 1.\nMore useful scipy functions:\n\n\nstats.distribution.rvs() generates random numbers from the specified distribution. The arguments to rvs() vary depending on the type of distribution considered, e.g., the starting and ending points and the sample size for a uniform distribution.\n\n\n stats.distribution.cdf() returns the probability that an observation drawn from a distribution is below a specified value (a.k.a., the cumulative distribution function) calculated as the area under the distribution's density curve to the left of the specified value (on the x axis). For example, in the uniform distribution above, there is a 25% chance that an observation will be in the range 0 to 5 and a 75% chance it will fall in the range 5 to 20. We can confirm this with cdf():", "stats.uniform.cdf(x = 5, # quantile to check\n loc = 0, # start value of the distribution\n scale = 20) # end value of the distribution \n", "stats.distribution.ppf() is the inverse of cdf(): it returns the quantile (x axis cutoff value) associated with a given probability. For example, to get the cutoff value for which there is a 20% chance of drawing an observation below that value, we can use ppf():", "stats.uniform.ppf( q = 0.2, # Probability cutoff\n loc = 0, # start value of the distribution\n scale = 20) # end value of the distribution ", "stats.distribution.pdf() returns the probability density (height of the distribution) at a given x value. 
Note that all values within the range of a uniform distribution have the same probability density and all values outside the range of the distribution have a probability density of 0.\n\nProbability distribution functions in scipy also support median(), mean(), var() and std().", "for x in range(-2,24,4):\n print(\"Density at x value \" + str(x))\n print( stats.uniform.pdf(x, loc = 0, scale = 20) ) \n", "Generating Random Numbers and Setting The Seed\nTo generate random real numbers in a range with equal probability one can draw numbers from a uniform distribution using the stats.distribution.rvs() described above.\nPython also comes with a library called \"random\" which is equipped with various operations that involve randomization.", "import random\nprint(random.randint(0,10)) # Get a random integer in the specified range\nprint(random.choice([2,4,6,9])) # Get a random element from a sequence\n\nprint(random.random()) # Get a real number between 0 and 1\n\nprint(random.uniform(0,10)) # Get a real in the specified range", "Setting the Seed\nRegardless of the method used to generate random numbers, the result of a random process can differ from one run to the next. If having different results each time is not desired, e.g., if results need to be reproduced exactly, one can ensure that the results are the same each time by setting the random number generator's seed value to a specific figure. random.seed() allows us to set the random number generator's seed.\nNotice that we generate the exact same numbers with both calls to random.uniform() because the same seed is set before each call.
This reproducibility illustrates the fact that these random numbers aren't truly random, but rather \"pseudorandom\".\nMany Python library functions that use randomness have an optional random seed argument built in, e.g., the rvs() function has an argument random_state, which allows us to set the seed.\nNote: there is a separate internal seed from the numpy library, which can be set with numpy.random.seed() when using functions from numpy and libraries built on top of it such as pandas and scipy.", "random.seed(10) # Set the seed to 10 \n\nprint([random.uniform(0,20) for x in range(4)])\n\nrandom.seed(10) # Set the seed to the same value\n\nprint([random.uniform(0,20) for x in range(4)])", "The Normal Distribution\nThe normal (or Gaussian) distribution is a continuous probability distribution with a bell-shaped curve and is characterized by its center point (mean) and spread (standard deviation). Most observations from a normal distribution lie close to the mean, i.e., about 68% of the data lies within 1 standard deviation of the mean, 95% lies within 2 standard deviations and 99.7% within 3 standard deviations.\nNote that many common statistical tests assume distributions are normal. (In the scipy module the normal distribution is referred to as norm.)", "prob_under_minus1 = stats.norm.cdf(x = -1, \n loc = 0, \n scale = 1) \n\nprob_over_1 = 1 - stats.norm.cdf(x= 1, \n loc = 0, \n scale= 1) \n\nbetween_prob = 1 - (prob_under_minus1 + prob_over_1)\n\nprint(prob_under_minus1, prob_over_1, between_prob)", "The figures above show that approximately 16% of the data in a normal distribution with mean 0 and standard deviation 1 is below -1, 16% is above 1 and 68% is between -1 and 1.
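The 68–95–99.7 rule quoted above can be verified directly with `cdf()` (a quick sketch):

```python
import scipy.stats as stats

# Probability mass within 1, 2, and 3 standard deviations of the mean
for n_sd in (1, 2, 3):
    frac = stats.norm.cdf(n_sd) - stats.norm.cdf(-n_sd)
    print(n_sd, round(frac, 4))
# 1 sd -> ~0.6827, 2 sd -> ~0.9545, 3 sd -> ~0.9973
```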
Let's plot the normal distribution:", "# Plot normal distribution areas*\n\n#plt.rcParams[\"figure.figsize\"] = (9,9)\n \nplt.fill_between(x=np.arange(-4,-1,0.01), \n y1= stats.norm.pdf(np.arange(-4,-1,0.01)) ,\n facecolor='red',\n alpha=0.35)\n\nplt.fill_between(x=np.arange(1,4,0.01), \n y1= stats.norm.pdf(np.arange(1,4,0.01)) ,\n facecolor='red',\n alpha=0.35)\n\nplt.fill_between(x=np.arange(-1,1,0.01), \n y1= stats.norm.pdf(np.arange(-1,1,0.01)) ,\n facecolor='blue',\n alpha=0.35)\n\nplt.text(x=-1.8, y=0.03, s= round(prob_under_minus1,3))\nplt.text(x=-0.2, y=0.1, s= round(between_prob,3))\nplt.text(x=1.4, y=0.03, s= round(prob_over_1,3))", "The plot above shows the bell shape of the normal distribution as well as the area below, above and within one standard deviation of the mean. One can also check the quantiles of a normal distribution with stats.norm.ppf():", "print( stats.norm.ppf(q=0.025) ) # Find the quantile for the 2.5% cutoff\n\nprint( stats.norm.ppf(q=0.975) ) # Find the quantile for the 97.5% cutoff", "The figures above show that approximately 5% of the data is further than 2 standard deviations from the mean.\nThe Binomial Distribution\nThe binomial distribution is a discrete probability distribution that models the outcomes of a given number of random trails of some experiment or event. The binomial distribution has two parameters: the probability of success in a trial and the number of trials. The binomial distribution represents the likelihood to achieve a given number of successes in n trials of an experiment. We could model flipping a fair coin 10 times with a binomial distribution where the number of trials is set to 10 and the probability of success is set to 0.5. 
\nThe scipy name for the binomial is binom.", "fair_coin_flips = stats.binom.rvs(n=10, # Number of flips per trial\n p=0.5, # Success probability\n size=10000) # Number of trials\n\nprint( pd.crosstab(index=\"counts\", columns= fair_coin_flips))\n\npd.DataFrame(fair_coin_flips).hist(range=(-0.5,10.5), bins=11)\n", "The binomial distribution is discrete and can be summarized with a frequency table (and a histogram). The histogram shows us that a binomial distribution with a 50% probability of success is roughly symmetric, but this is not the case in general.", "biased_coin_flips = stats.binom.rvs(n=10, # Number of flips per trial\n p=0.8, # Success probability\n size=10000) # Number of trials\n\n# Print table of counts\nprint( pd.crosstab(index=\"counts\", columns= biased_coin_flips))\n\n# Plot histogram\npd.DataFrame(biased_coin_flips).hist(range=(-0.5,10.5), bins=11)\n\n#cdf() check the probability of achieving a number of successes within a certain range\n\nprint(stats.binom.cdf(k=5, # Probability of k = 5 successes or less\n n=10, # With 10 flips\n p=0.8))# And success probability 0.8\n\nprint(1 - stats.binom.cdf(k=8, # Probability of k = 9 successes or more\n n=10, # With 10 flips\n p=0.8)) # And success probability 0.8", "For continuous probability density functions, we can use pdf() to check the probability density at a given x value. 
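Under the hood, the binomial probability mass evaluates $P(k) = \binom{n}{k} p^k (1-p)^{n-k}$; a quick sanity check using only the standard library (the helper name `binom_pmf` is ours, not scipy's):

```python
from math import comb

def binom_pmf(k, n, p):
    # Probability of exactly k successes in n independent trials
    return comb(n, k) * p**k * (1 - p)**(n - k)

print(binom_pmf(5, 10, 0.5))  # 0.24609375, matching stats.binom.pmf
```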
For discrete distributions like the binomial, we can use stats.distribution.pmf(), the probability mass function to check the proportion of observations at given number of successes k.", "print(stats.binom.pmf(k=5, # Probability of k = 5 successes\n n=10, # With 10 flips\n p=0.5)) # And success probability 0.5\n\nprint(stats.binom.pmf(k=8, # Probability of k = 8 successes\n n=10, # With 10 flips\n p=0.8)) # And success probability 0.8", "The Geometric and Exponential Distributions\nThe geometric distribution is discrete and models the number of trials it takes to achieve a success in repeated experiments with a given probability of success. The exponential distribution is continuous and models the amount of time before an event occurs given a certain occurrence rate.\nThe scipy nickname for the geometric distribution is \"geom\". Below we model the number of trials it takes to get a success (heads) when flipping a fair coin:", "random.seed(12)\n\nflips_till_heads = stats.geom.rvs(size=10000, # Generate geometric data\n p=0.5) # With success prob 0.5\n\n# Print table of counts\nprint( pd.crosstab(index=\"counts\", columns= flips_till_heads))\n\n# Plot histogram\npd.DataFrame(flips_till_heads).hist(range=(-0.5,max(flips_till_heads)+0.5)\n , bins=max(flips_till_heads)+1)", "In the 10,000 trails generated, the longest it took to get a heads was 13 flips.\nWe can use cdf() to check the probability of needing 3 flips or more to get a success:", "first_three = stats.geom.cdf(k=3, # Prob of success in first 5 flips\n p=0.5)\n\n1 - first_three\n\n#Use pmf() to check the probability of seeing a specific number of flips before a successes\nstats.geom.pmf(k=2, # Prob of needing exactly 2 flips to get the first success\n p=0.5)", "The scipy name for the exponential distribution is \"expon\".", "# Get the probability of waiting more than 1 time unit before a success\n\nprob_1 = stats.expon.cdf(x=1, \n scale=1) # Arrival rate\n\n1 - prob_1", "Note: The average arrival time for 
the exponential distribution is equal to 1/arrival_rate.\nLet's plot this exponential distribution:", "plt.fill_between(x=np.arange(0,1,0.01), \n y1= stats.expon.pdf(np.arange(0,1,0.01)) ,\n facecolor='blue',\n alpha=0.35)\n\nplt.fill_between(x=np.arange(1,7,0.01), \n y1= stats.expon.pdf(np.arange(1,7,0.01)) ,\n facecolor='red',\n alpha=0.35)\n\n\nplt.text(x=0.3, y=0.2, s= round(prob_1,3))\nplt.text(x=1.5, y=0.08, s= round(1 - prob_1,3))", "The Poisson Distribution\nThe Poisson distribution models the probability of seeing a certain number of successes within a time interval, where the time it takes for the next success is modeled by an exponential distribution. The Poisson distribution can be used to model the number of arrivals a hospital can expect in an hour's time, the number of emails one can expect to receive in a day, etc.\nThe scipy name for the Poisson distribution is \"poisson\". Below is a plot for data from a Poisson distribution with an arrival rate of 1 per time unit:", "random.seed(12)\n\narrival_rate_1 = stats.poisson.rvs(size=10000, # Generate Poisson data\n mu=1 ) # Average arrival time 1\n\n# Print table of counts\nprint( pd.crosstab(index=\"counts\", columns= arrival_rate_1))\n\n# Plot histogram\npd.DataFrame(arrival_rate_1).hist(range=(-0.5,max(arrival_rate_1)+0.5)\n , bins=max(arrival_rate_1)+1)\n", "The histogram shows that when arrivals are relatively infrequent, it is rare to see more than a couple of arrivals in each time period.", "random.seed(12)\n\narrival_rate_10 = stats.poisson.rvs(size=10000, # Generate Poisson data\n mu=10 ) # Average arrival time 10\n\n# Print table of counts\nprint( pd.crosstab(index=\"counts\", columns= arrival_rate_10))\n\n# Plot histogram\npd.DataFrame(arrival_rate_10).hist(range=(-0.5,max(arrival_rate_10)+0.5)\n , bins=max(arrival_rate_10)+1)\n", "We can use cdf() to check the probability of achieving more (or less) than a certain number of successes and pmf() to check the probability of obtaining a specific number of successes:", "print(stats.poisson.cdf(k=5, # Check the probability of 5 arrivals or less\n mu=10)) # With arrival rate 10\n\nprint(stats.poisson.pmf(k=10, # Check the prob of exactly 10 arrivals\n mu=10)) # With arrival rate 10", "Material adapted from:\nhttp://hamelg.blogspot.ca/2015/11/python-for-data-analysis-part-22.html" ]
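As a closing sanity check (a sketch using only the standard library; the helper name `poisson_pmf` is ours), the Poisson probability mass printed above can be reproduced from $P(k;\mu) = \mu^k e^{-\mu}/k!$:

```python
from math import exp, factorial

def poisson_pmf(k, mu):
    # Probability of exactly k events when the mean count per interval is mu
    return mu**k * exp(-mu) / factorial(k)

print(round(poisson_pmf(10, 10), 4))  # ~0.1251, matching stats.poisson.pmf(k=10, mu=10)
```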
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
acamps/udacity_p2
Projects/P2_Investigate_a_dataset/submission_1/P2 - Investigate a Dataset - Titanic.ipynb
mit
[ "Investigate Titanic's Data\nQuestions to ask ourselves\nWhat factors made people more likely to survive?\n\nSex\nClass\nAge\nHow much they paid", "#imports\nimport pandas as pd\nimport numpy as np\n\nraw_data = pd.read_csv('titanic_data.csv')\n\nraw_data.head()", "Data Wrangling\nWe need to find the amount of nulls that our data has.\ndescribe function might be useful", "raw_data.describe()", "we realise however that in this way we are not able to see NA in non-numeric columns.\nWe move to another option:", "raw_data.isnull().sum()", "How do we treat nulls\nIn AGE\nOut of 891 rows, we have 177 NaN, which represent roughly a 20%. If we replace this NaN with some other value we should be guard value, so it does not affect the rest of the values. \nIn Cabin\nOut of 891 rows, 687 are nulls, representing an astounding 77%. Ignoring this column altogether makes more sense.\nIn Embarked\nOnly 2 NaN in this column make it possible to simply ignore this rows. We could also decide another value and see how they behave.\nCode\nAge", "clean_data = raw_data.copy()\nclean_data['Age'] = clean_data['Age'].fillna(-1)", "Cabin", "clean_data.drop('Cabin', axis=1, inplace=True)", "Embarked\nBefore deleting anything, let's check the rows", "raw_data[raw_data['Embarked'].isnull()]", "It looks a bit strange that they both survived, are in the same Cabin and we lack their Embarked information, using the same ticket.\nInstead of deleting them we will leave the rows for now.\nThis are configuration options for the charts.", "%pylab inline\nfigsize(47,20)", "Data Exploration\nWe want to be able to see all this data depicted in this ways:\n* How many people survived?\n* Survival by age\n* Survival by sex\n* Survival by age and sex\n* Survival by age and class\n* Survival by sex and class\nTo be able to see where the survival rates are most gathered.\nHow many people survived?\nAs a first data exploration trade we are interested first, in how many people survived.", "import 
matplotlib.pyplot as plt\nsurvivors = clean_data.groupby('Survived').count()['Name']\n\nplt.figure(figsize=(18,8))\ncmap = plt.cm.hsv\ncolors = ['grey','cyan']\nplt.pie(survivors, labels=['Died','Survived'], explode=[0,0.05], autopct='%1.1f%%', colors = colors)\n\nplt.axis(\"equal\")\nplt.title(\"Titanic Survivors\")\nplt.show();", "Survival by Age\nCode", "clean_data[clean_data['Survived'] == 1].groupby('Age').count().reset_index().plot(kind='bar',y='PassengerId', x='Age')\n#pd.pivot_table(clean_data[clean_data['Survived'] == 1], index='Age', aggfunc=np.count_nonzero", "But this is not very helpful, since we don't see how many people there was in each group. We can either represent both survivors or not, or calculate a ratio by age.\nLet's see which helps us more.", "#clean_data.groupby(['Age','Survived']).count().reset_index().plot(kind='bar',stacked = True, y='PassengerId', x='Age')\npivot_age = pd.pivot_table(clean_data, values='PassengerId', index='Age', columns='Survived', aggfunc=np.count_nonzero)\npivot_age.fillna(0).plot(kind='bar', stacked='True')", "From what we can see, not much information can be gained from age, but let's analyse by ratio, to be certain about that.", "pivot_age = pivot_age.fillna(0)\npivot_age['survival_ratio'] = pivot_age[1] / (pivot_age[0] + pivot_age[1])\npivot_age.plot(kind = 'bar', y='survival_ratio')", "From this plot we can extract that the higher ratios of survival are up to 9 years, and between 11 and 14. 
Some other interesting ranges of age have good survival rates, like from 47 to 55.\nSurvival by Sex\nLet's see which sex survived more.\nCode", "import matplotlib.pyplot as plt\n\nfig, axes = plt.subplots(nrows=1, ncols=2)\nsurvivors_male = clean_data[clean_data['Sex']=='male'].groupby('Survived').count()['Name']\nsurvivors_female = clean_data[clean_data['Sex']=='female'].groupby('Survived').count()['Name']\n\ncolors = ['grey','cyan']\n\nmale_plot = survivors_male.plot(kind='pie', labels=['Died','Survived'], explode=[0,0.05], autopct='%1.1f%%', colors = colors, ax=axes[0])\nmale_plot.axis(\"equal\")\nmale_plot.set_title(\"Male Titanic Survivors\")\n\nfemale_plot = survivors_female.plot(kind='pie', labels=['Died','Survived'], explode=[0,0.05], autopct='%1.1f%%', colors = colors, ax=axes[1])\nfemale_plot.axis(\"equal\")\nfemale_plot.set_title(\"Female Titanic Survivors\")", "As we can clearly see with this representation, a large share of females survived: around 74%.\nWith this information alone we could already make a pretty good prediction.\nSurvival by age and sex\nAn interesting set of visualizations might help us see if the highest survival ratios for males are skewed to one particular range of ages.
Checking the death ratio by age for females also looks interesting.", "survivors_male_age_pivot = clean_data[clean_data['Sex']=='male'].pivot_table(index='Age', columns='Survived', aggfunc=np.count_nonzero)\nsurvivors_male_age_pivot = survivors_male_age_pivot.fillna(0)['PassengerId']\nsurvivors_male_age_pivot['survival_ratio'] = survivors_male_age_pivot[1]/(survivors_male_age_pivot[1]+survivors_male_age_pivot[0])\nsurvivors_male_age_pivot.plot(kind='bar', y='survival_ratio')\n", "With this representation we can clearly see that the 0 to 6 year old males are the ones that survive the most.\nWith females we want to study which were the ages that died the most, since we have a lot more women surviving.", "survivors_female_age_pivot = clean_data[clean_data['Sex']=='female'].pivot_table(index='Age', columns='Survived', aggfunc=np.count_nonzero)\nsurvivors_female_age_pivot = survivors_female_age_pivot.fillna(0)['PassengerId']\nsurvivors_female_age_pivot['dead_ratio'] = survivors_female_age_pivot[0]/(survivors_female_age_pivot[1]+survivors_female_age_pivot[0])\nsurvivors_female_age_pivot.plot(kind='bar', y='dead_ratio')", "We would have expected something clearer, but this doesn't help us. There is no conclusion that we can draw from this data.\nSurvival by age and class\nFirst we need to explore the different values we have in class.", "clean_data['Pclass'].head()", "We see data is structured in values ranging from 1 to 3. 
Standing for 1st class (richer) to 3rd class (poorer).", "def get_survival_ratio_pivot(source, attribute, value):\n pivot = source[source[attribute]==value].pivot_table(index='Age', columns='Survived', aggfunc=np.count_nonzero)\n pivot = pivot.fillna(0)['PassengerId']\n pivot['survival_ratio'] = pivot[1]/(pivot[1]+pivot[0])\n return pivot\n\nsurvivors_first_age_pivot = get_survival_ratio_pivot(clean_data,'Pclass', 1)\nsurvivors_first_age_pivot.plot(kind='bar', y='survival_ratio')\n\nsurvivors_second_age_pivot = get_survival_ratio_pivot(clean_data,'Pclass', 2)\nsurvivors_second_age_pivot.plot(kind='bar', y='survival_ratio')", "This distribution is more revealing. People from second class only got saved if they were extremely young. At this point it would be helpful to know how many people this represented.", "survivors_second_age_pivot.columns = ['Died', 'Survived', 'Ratio']\nssap_plot = survivors_second_age_pivot.plot(kind='bar',stacked = True, y=[0,1])\n#ssap_plot.set_label(['Died','Survived'])\n\nsurvivors_third_age_pivot = get_survival_ratio_pivot(clean_data,'Pclass', 3)\nsurvivors_third_age_pivot.plot(kind='bar', y='survival_ratio')", "This distribution shows that just by being in 3rd class, your chances of surviving were a lot lower. Let's calculate how much lower.", "survived_by_class = clean_data.pivot_table(index='Pclass', columns='Survived', aggfunc=np.count_nonzero)['PassengerId']\nsurvived_by_class['ratio'] = survived_by_class[1]/(survived_by_class[1]+survived_by_class[0])\nsurvived_by_class", "The trend is clear. 
Less money, less chance of survival.\nSurvival by sex and class\nLet's get a pivot table representing this information as clearly as possible.", "from pivottablejs import pivot_ui\npivot_ui(clean_data)", "With the help of this tool we see that the best result is:", "class_gender_pivot = pd.pivot_table(clean_data, index=['Pclass','Sex'],columns='Survived', aggfunc=np.count_nonzero)['PassengerId']\nclass_gender_pivot['survival_ratio'] = class_gender_pivot[1]/(class_gender_pivot[1]+class_gender_pivot[0])\nclass_gender_pivot", "With this information we can say that a higher class means life, especially for men, who have their chances more than doubled. Women in upper and middle classes survived. And women in lower classes had exactly a 50% chance of surviving.\nConclusions\nAfter analysing the data, we can state that:\n* Females were more likely to survive than males.\n* Upper classes had higher survival ratios. First had the best survival ratio for men, while 1st and 2nd had the best survival ratios for women.\n* Age was a factor but difficult to pinpoint precisely." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jseabold/statsmodels
examples/notebooks/tsa_filters.ipynb
bsd-3-clause
[ "Time Series Filters", "%matplotlib inline\n\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nimport statsmodels.api as sm\n\ndta = sm.datasets.macrodata.load_pandas().data\n\nindex = pd.Index(sm.tsa.datetools.dates_from_range('1959Q1', '2009Q3'))\nprint(index)\n\ndta.index = index\ndel dta['year']\ndel dta['quarter']\n\nprint(sm.datasets.macrodata.NOTE)\n\nprint(dta.head(10))\n\nfig = plt.figure(figsize=(12,8))\nax = fig.add_subplot(111)\ndta.realgdp.plot(ax=ax);\nlegend = ax.legend(loc = 'upper left');\nlegend.prop.set_size(20);", "Hodrick-Prescott Filter\nThe Hodrick-Prescott filter separates a time-series $y_t$ into a trend $\\tau_t$ and a cyclical component $\\zeta_t$ \n$$y_t = \\tau_t + \\zeta_t$$\nThe components are determined by minimizing the following quadratic loss function\n$$\\min_{\\{ \\tau_{t}\\} }\\sum_{t}^{T}\\zeta_{t}^{2}+\\lambda\\sum_{t=1}^{T}\\left[\\left(\\tau_{t}-\\tau_{t-1}\\right)-\\left(\\tau_{t-1}-\\tau_{t-2}\\right)\\right]^{2}$$", "gdp_cycle, gdp_trend = sm.tsa.filters.hpfilter(dta.realgdp)\n\ngdp_decomp = dta[['realgdp']].copy()\ngdp_decomp[\"cycle\"] = gdp_cycle\ngdp_decomp[\"trend\"] = gdp_trend\n\nfig = plt.figure(figsize=(12,8))\nax = fig.add_subplot(111)\ngdp_decomp[[\"realgdp\", \"trend\"]][\"2000-03-31\":].plot(ax=ax, fontsize=16);\nlegend = ax.get_legend()\nlegend.prop.set_size(20);", "Baxter-King approximate band-pass filter: Inflation and Unemployment\nExplore the hypothesis that inflation and unemployment are counter-cyclical.\nThe Baxter-King filter is intended to explicitly deal with the periodicity of the business cycle. By applying their band-pass filter to a series, they produce a new series that does not contain fluctuations at higher or lower than those of the business cycle. 
Specifically, the BK filter takes the form of a symmetric moving average \n$$y_{t}^{*}=\\sum_{k=-K}^{k=K}a_ky_{t-k}$$\nwhere $a_{-k}=a_k$ and $\\sum_{k=-K}^{K}a_k=0$ to eliminate any trend in the series and render it stationary if the series is I(1) or I(2).\nFor completeness, the filter weights are determined as follows\n$$a_{j} = B_{j}+\\theta\\text{ for }j=0,\\pm1,\\pm2,\\dots,\\pm K$$\n$$B_{0} = \\frac{\\left(\\omega_{2}-\\omega_{1}\\right)}{\\pi}$$\n$$B_{j} = \\frac{1}{\\pi j}\\left(\\sin\\left(\\omega_{2}j\\right)-\\sin\\left(\\omega_{1}j\\right)\\right)\\text{ for }j=\\pm1,\\pm2,\\dots,\\pm K$$\nwhere $\\theta$ is a normalizing constant such that the weights sum to zero.\n$$\\theta=\\frac{-\\sum_{j=-K}^{K}b_{j}}{2K+1}$$\n$$\\omega_{1}=\\frac{2\\pi}{P_{H}}$$\n$$\\omega_{2}=\\frac{2\\pi}{P_{L}}$$\n$P_L$ and $P_H$ are the periodicity of the low and high cut-off frequencies. Following Burns and Mitchell's work on US business cycles, which suggests cycles last from 1.5 to 8 years, we use $P_L=6$ and $P_H=32$ by default.", "bk_cycles = sm.tsa.filters.bkfilter(dta[[\"infl\",\"unemp\"]])", "We lose K observations on both ends. It is suggested to use K=12 for quarterly data.", "fig = plt.figure(figsize=(12,10))\nax = fig.add_subplot(111)\nbk_cycles.plot(ax=ax, style=['r--', 'b-']);", "Christiano-Fitzgerald approximate band-pass filter: Inflation and Unemployment\nThe Christiano-Fitzgerald filter is a generalization of BK and can thus also be seen as a weighted moving average. However, the CF filter is asymmetric about $t$ as well as using the entire series. 
The implementation of their filter involves the\ncalculations of the weights in\n$$y_{t}^{*}=B_{0}y_{t}+B_{1}y_{t+1}+\\dots+B_{T-1-t}y_{T-1}+\\tilde B_{T-t}y_{T}+B_{1}y_{t-1}+\\dots+B_{t-2}y_{2}+\\tilde B_{t-1}y_{1}$$\nfor $t=3,4,...,T-2$, where\n$$B_{j} = \\frac{\\sin(jb)-\\sin(ja)}{\\pi j},j\\geq1$$\n$$B_{0} = \\frac{b-a}{\\pi},a=\\frac{2\\pi}{P_{u}},b=\\frac{2\\pi}{P_{L}}$$\n$\\tilde B_{T-t}$ and $\\tilde B_{t-1}$ are linear functions of the $B_{j}$'s, and the values for $t=1,2,T-1,$ and $T$ are also calculated in much the same way. $P_{U}$ and $P_{L}$ are as described above with the same interpretation.\nThe CF filter is appropriate for series that may follow a random walk.", "print(sm.tsa.stattools.adfuller(dta['unemp'])[:3])\n\nprint(sm.tsa.stattools.adfuller(dta['infl'])[:3])\n\ncf_cycles, cf_trend = sm.tsa.filters.cffilter(dta[[\"infl\",\"unemp\"]])\nprint(cf_cycles.head(10))\n\nfig = plt.figure(figsize=(14,10))\nax = fig.add_subplot(111)\ncf_cycles.plot(ax=ax, style=['r--','b-']);", "Filtering assumes a priori that business cycles exist. Due to this assumption, many macroeconomic models seek to create models that match the shape of impulse response functions rather than replicating properties of filtered series. See VAR notebook." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
GoogleCloudPlatform/training-data-analyst
quests/tpu/flowers_resnet.ipynb
apache-2.0
[ "Image Classification from scratch with TPUs on Cloud ML Engine using ResNet\nThis notebook demonstrates how to do image classification from scratch on a flowers dataset using TPUs and the resnet trainer.", "import os\nPROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID\nBUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME\nREGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1\n\n# do not change these\nos.environ['PROJECT'] = PROJECT\nos.environ['BUCKET'] = BUCKET\nos.environ['REGION'] = REGION\nos.environ['TFVERSION'] = '1.9'\n\n%%bash\ngcloud config set project $PROJECT\ngcloud config set compute/region $REGION", "Convert JPEG images to TensorFlow Records\nMy dataset consists of JPEG images in Google Cloud Storage. I have two CSV files that are formatted as follows:\n image-name, category\nInstead of reading the images from JPEG each time, we'll convert the JPEG data and store it as TF Records.", "%%bash\ngsutil cat gs://cloud-ml-data/img/flower_photos/train_set.csv | head -5 > /tmp/input.csv\ncat /tmp/input.csv\n\n%%bash\ngsutil cat gs://cloud-ml-data/img/flower_photos/train_set.csv | sed 's/,/ /g' | awk '{print $2}' | sort | uniq > /tmp/labels.txt\ncat /tmp/labels.txt", "Clone the TPU repo\nLet's git clone the repo and get the preprocessing and model files. The model code has imports of the form:\n<pre>\nimport resnet_model as model_lib\n</pre>\nWe will need to change this to:\n<pre>\nfrom . 
import resnet_model as model_lib\n</pre>", "%%writefile copy_resnet_files.sh\n#!/bin/bash\nrm -rf tpu\ngit clone https://github.com/tensorflow/tpu\ncd tpu\nTFVERSION=$1\necho \"Switching to version r$TFVERSION\"\ngit checkout r$TFVERSION\ncd ..\n \nMODELCODE=tpu/models/official/resnet\nOUTDIR=mymodel\nrm -rf $OUTDIR\n\n# preprocessing\ncp -r imgclass $OUTDIR # brings in setup.py and __init__.py\ncp tpu/tools/datasets/jpeg_to_tf_record.py $OUTDIR/trainer/preprocess.py\n\n# model: fix imports\nfor FILE in $(ls -p $MODELCODE | grep -v /); do\n CMD=\"cat $MODELCODE/$FILE \"\n for f2 in $(ls -p $MODELCODE | grep -v /); do\n MODULE=`echo $f2 | sed 's/.py//g'`\n CMD=\"$CMD | sed 's/^import ${MODULE}/from . import ${MODULE}/g' \"\n done\n CMD=\"$CMD > $OUTDIR/trainer/$FILE\"\n eval $CMD\ndone\nfind $OUTDIR\necho \"Finished copying files into $OUTDIR\"\n\n!bash ./copy_resnet_files.sh $TFVERSION", "Enable TPU service account\nAllow Cloud ML Engine to access the TPU and bill to your project", "%%writefile enable_tpu_mlengine.sh\nSVC_ACCOUNT=$(curl -H \"Authorization: Bearer $(gcloud auth print-access-token)\" \\\n https://ml.googleapis.com/v1/projects/${PROJECT}:getConfig \\\n | grep tpuServiceAccount | tr '\"' ' ' | awk '{print $3}' )\necho \"Enabling TPU service account $SVC_ACCOUNT to act as Cloud ML Service Agent\"\ngcloud projects add-iam-policy-binding $PROJECT \\\n --member serviceAccount:$SVC_ACCOUNT --role roles/ml.serviceAgent\necho \"Done\"\n\n!bash ./enable_tpu_mlengine.sh", "Try preprocessing locally", "%%bash\nexport PYTHONPATH=${PYTHONPATH}:${PWD}/mymodel\n \nrm -rf /tmp/out\npython -m trainer.preprocess \\\n --train_csv /tmp/input.csv \\\n --validation_csv /tmp/input.csv \\\n --labels_file /tmp/labels.txt \\\n --project_id $PROJECT \\\n --output_dir /tmp/out --runner=DirectRunner\n\n!ls -l /tmp/out", "Now run it over full training and evaluation datasets. 
This will happen in Cloud Dataflow.", "%%bash\nexport PYTHONPATH=${PYTHONPATH}:${PWD}/mymodel\ngsutil -m rm -rf gs://${BUCKET}/tpu/resnet/data\npython -m trainer.preprocess \\\n --train_csv gs://cloud-ml-data/img/flower_photos/train_set.csv \\\n --validation_csv gs://cloud-ml-data/img/flower_photos/eval_set.csv \\\n --labels_file /tmp/labels.txt \\\n --project_id $PROJECT \\\n --output_dir gs://${BUCKET}/tpu/resnet/data", "The above preprocessing step will take <b>15-20 minutes</b>. Wait for the job to finish before you proceed. Navigate to Cloud Dataflow section of GCP web console to monitor job progress. You will see something like this <img src=\"dataflow.png\" />\nAlternately, you can simply copy my already preprocessed files and proceed to the next step:\n<pre>\ngsutil -m cp gs://cloud-training-demos/tpu/resnet/data/* gs://${BUCKET}/tpu/resnet/copied_data\n</pre>", "%%bash\ngsutil ls gs://${BUCKET}/tpu/resnet/data", "Train on the Cloud", "%%bash\necho -n \"--num_train_images=$(gsutil cat gs://cloud-ml-data/img/flower_photos/train_set.csv | wc -l) \"\necho -n \"--num_eval_images=$(gsutil cat gs://cloud-ml-data/img/flower_photos/eval_set.csv | wc -l) \"\necho \"--num_label_classes=$(cat /tmp/labels.txt | wc -l)\"\n\n%%bash\nTOPDIR=gs://${BUCKET}/tpu/resnet\nOUTDIR=${TOPDIR}/trained\nJOBNAME=imgclass_$(date -u +%y%m%d_%H%M%S)\necho $OUTDIR $REGION $JOBNAME\ngsutil -m rm -rf $OUTDIR # Comment out this line to continue training from the last time\ngcloud ml-engine jobs submit training $JOBNAME \\\n --region=$REGION \\\n --module-name=trainer.resnet_main \\\n --package-path=$(pwd)/mymodel/trainer \\\n --job-dir=$OUTDIR \\\n --staging-bucket=gs://$BUCKET \\\n --scale-tier=BASIC_TPU \\\n --runtime-version=$TFVERSION --python-version=3.5 \\\n -- \\\n --data_dir=${TOPDIR}/data \\\n --model_dir=${OUTDIR} \\\n --resnet_depth=18 \\\n --train_batch_size=128 --eval_batch_size=32 --skip_host_call=True \\\n --steps_per_eval=250 --train_steps=1000 \\\n --num_train_images=3300 
--num_eval_images=370 --num_label_classes=5 \\\n --export_dir=${OUTDIR}/export", "The above training job will take 15-20 minutes. \nWait for the job to finish before you proceed. \nNavigate to Cloud ML Engine section of GCP web console \nto monitor job progress.\nThe model should finish with a 80-83% accuracy (results will vary):\nEval results: {'global_step': 1000, 'loss': 0.7359053, 'top_1_accuracy': 0.82954544, 'top_5_accuracy': 1.0}", "%%bash\ngsutil ls gs://${BUCKET}/tpu/resnet/trained/export/", "You can look at the training charts with TensorBoard:", "OUTDIR = 'gs://{}/tpu/resnet/trained/'.format(BUCKET)\nfrom google.datalab.ml import TensorBoard\nTensorBoard().start(OUTDIR)\n\nTensorBoard().stop(11531)\nprint(\"Stopped Tensorboard\")", "These were the charts I got (I set smoothing to be zero):\n<img src=\"resnet_traineval.png\" height=\"50\"/>\nAs you can see, the final blue dot (eval) is quite close to the lowest training loss, indicating that the model hasn't overfit. The top_1 accuracy on the evaluation dataset, however, is 80% which isn't that great. More data would help.\n<img src=\"resnet_accuracy.png\" height=\"50\"/>\nDeploying and predicting with model\nDeploy the model:", "%%bash\nMODEL_NAME=\"flowers\"\nMODEL_VERSION=resnet\nMODEL_LOCATION=$(gsutil ls gs://${BUCKET}/tpu/resnet/trained/export/ | tail -1)\necho \"Deleting/deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes\"\n\n# comment/uncomment the appropriate line to run. 
The first time around, you will need only the two create calls\n# But during development, you might need to replace a version by deleting the version and creating it again\n\n#gcloud ml-engine versions delete --quiet ${MODEL_VERSION} --model ${MODEL_NAME}\n#gcloud ml-engine models delete ${MODEL_NAME}\ngcloud ml-engine models create ${MODEL_NAME} --regions $REGION\ngcloud ml-engine versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version=$TFVERSION", "We can use saved_model_cli to find out what inputs the model expects:", "%%bash\nsaved_model_cli show --dir $(gsutil ls gs://${BUCKET}/tpu/resnet/trained/export/ | tail -1) --tag_set serve --signature_def serving_default", "As you can see, the model expects image_bytes. This is typically base64 encoded\nTo predict with the model, let's take one of the example images that is available on Google Cloud Storage <img src=\"http://storage.googleapis.com/cloud-ml-data/img/flower_photos/sunflowers/1022552002_2b93faf9e7_n.jpg\" /> and convert it to a base64-encoded array", "import base64, sys, json\nimport tensorflow as tf\nimport io\nwith tf.gfile.GFile('gs://cloud-ml-data/img/flower_photos/sunflowers/1022552002_2b93faf9e7_n.jpg', 'rb') as ifp:\n with io.open('test.json', 'w') as ofp:\n image_data = ifp.read()\n img = base64.b64encode(image_data).decode('utf-8')\n json.dump({\"image_bytes\": {\"b64\": img}}, ofp)\n\n!ls -l test.json", "Send it to the prediction service", "%%bash\ngcloud ml-engine predict --model=flowers --version=resnet --json-instances=./test.json", "What does CLASS no. 3 correspond to? 
(remember that classes is 0-based)", "%%bash\nhead -4 /tmp/labels.txt | tail -1", "Here's how you would invoke those predictions without using gcloud", "from googleapiclient import discovery\nfrom oauth2client.client import GoogleCredentials\nimport base64, sys, json\nimport tensorflow as tf\n\nwith tf.gfile.GFile('gs://cloud-ml-data/img/flower_photos/sunflowers/1022552002_2b93faf9e7_n.jpg', 'rb') as ifp:\n credentials = GoogleCredentials.get_application_default()\n api = discovery.build('ml', 'v1', credentials=credentials,\n discoveryServiceUrl='https://storage.googleapis.com/cloud-ml/discovery/ml_v1_discovery.json')\n \n request_data = {'instances':\n [\n {\"image_bytes\": {\"b64\": base64.b64encode(ifp.read()).decode('utf-8')}}\n ]}\n\n parent = 'projects/%s/models/%s/versions/%s' % (PROJECT, 'flowers', 'resnet')\n response = api.projects().predict(body=request_data, name=parent).execute()\n print(\"response={0}\".format(response))", "<pre>\n# Copyright 2018 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n</pre>" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jacksongomesbr/academia-md
probabilidade/probabilidade.ipynb
cc0-1.0
[ "Probabilidade\nDefinições iniciais\n\nExperimento: uma ocorrência com um resultado incerto. Também pode ser chamado \"processo\". Exemplo: rolar um dado.\nResultado: o resultado de um experimento; um \"estado\" particular do mundo (ambiente). Também pode ser chamado \"caso\". Exemplo: 4\nEspaço amostral: o conjunto de todos os resultados possíveis para um experimento. Exemplo: ${1, 2, 3, 4, 5, 6}$\nEvento: um subconjunto do espaço amostral. Exemplo: o evento \"dado com face par\" é o conjunto dos resultados ${2, 4, 6}$\nProbabilidade: a probabilidade de um evento em relação a um espaço amostral é a quantidade de resultados no evento dividida pela quantidade de resultados no espaço amostral. Como é uma razão, o valor de uma probabilidade será um número entre $0$ (um evento impossível) e $1$ (um evento certo). Exemplo: a probabilidade do evento \"dado com face par\" é $3/6 = 1/2 = 0.5$\n\nCódigo para P\n$P()$ é o nome tradicional para a função Probabilidade:", "from fractions import Fraction\n\ndef P(evento, espaco):\n \"A probabilidade de um evento, dado o espaco amostral\"\n return Fraction(len(evento & espaco), len(espaco))", "O código implementa o conceito \"Probabilidade é simplesmente uma fração cujo numerador é o número de casos favoráveis e cujo denominador é o número de todos os casos possíveis\" (Laplace).\nAquecimento: Rolar um dado\nQual é a probabilidade de rolar um dado e a face superior ser um número par? Consideramos, para isso, um dado com seis faces.\nPodemos definir o espaço amostral $A$, o evento $par$ e calcular a probabilidade:", "A = {1, 2, 3, 4, 5, 6}\npar = {2, 4, 6}\nP(par, A)", "Você pode perguntar: \"por que não usar len(evento) ao invés de len(evento &amp; espaco)?\" Isso é necessário porque não podem ser considerados resultados que não estejam presentes no espaço amostral. 
Veja só:", "par = {2, 4, 6, 8, 10, 12}\nP(par, A)", "Se considerasse apenas len(evento), o retorno seria outro:", "Fraction(len(par), len(A))", "Os \"casos favoráveis\", da definição de Laplace, são a interseção entre par e A, ou seja, $\\mbox{evento} \\cap A$. Em Python, isso é feito usando o operador &amp;, por isso len(evento &amp; espaco).\nSobre Fraction em Python\nO tipo Fraction do Python é usado para representar valores na forma de frações (racional). Por exemplo:", "print(Fraction(0.5))\nprint(Fraction(0.3))\nprint(Fraction(0.3).limit_denominator()) # usa limit_denominator() para encontrar a fração mais aproximada\nprint(Fraction(3, 10)) \nprint(Fraction(0.3333333))\nprint(Fraction(0.3333333).limit_denominator()) # limitando o denominador (padrão é max=1000000)\nprint(float(Fraction(0.3333333))) # convertendo o valor de Fraction() para float()", "Problemas com urnas\nNão estou falando de política! (ok, essa piada foi péssima)\nProblemas com urnas surgiram por volta de 1700, com o matemático Jacob Bernoulli. Por exemplo:\n\n\nUma urna contém 23 bolas: 8 brancas, 6 azuis e 9 vermelhas. Selecionamos seis bolas aleatoriamente (cada seleção com a mesma chance de acontecer). Qual é a probabilidade desses três resultados possíveis:\n\n\ntodas as bolas são vermelhas\n\n3 são azuis, 2 são vermelhas e 1 é branca\n4 são brancas\n\nEntão, um resultado é um conjunto com 6 bolas, enquanto o espaço amostral é o conjunto de todas as possíveis combinações (também conjuntos) com 6 bolas. 
Antes de continuar a solução, duas questões:\n\nhá múltiplas bolas da mesma cor (ex: 8 bolas brancas)\num resultado é um conjunto de bolas, então a ordem dos seus elementos não importa (diferentemente da lista -- ou sequência -- para a qual a ordem importa)\n\nPara a primeira questão, vamos nomear as bolas usando uma letra e um número, assim: \n* as 8 bolas brancas serão chamadas W1 até W8\n* as 6 bolas azuis serão chamadas B1 até B6\n* as 9 bolas vermelhas serão chamadas R1 até R9\nPara a segunda questão teremos que trabalhar com permutações e então encontrar combinações, dividindo o número de permutações por $c!$, onde $c$ é o número de bolas em uma combinação. \nPor exemplo, se eu precisar escolher 2 bolas de 8 disponíveis, há 8 formas de escolher a primeira, e 7 formas de escolher a segunda. Sendo assim, há $8 \\times 7 = 56$ permutações, mas $56/2 = 28$ combinações. Tudo isso porque ${W1, W2} = {W2, W1}$.\nVamos lá. O código a seguir define o conteúdo da urna:", "def cross(A, B):\n \"O conjunto de formas de concatenar os itens de A e B (produto cartesiano)\"\n return {a + b\n for a in A for b in B\n }\n\nurna = cross('W', '12345678') | cross('B', '123456') | cross('R', '123456789')\n\nurna\n\nlen(urna)", "Agora, vamos definir o espaço amostral, chamado U6, o conjunto de todas as combinações com 6 bolas:", "import itertools\n\ndef combos(items, n):\n \"Todas as combinações de n itens; cada combinação concatenada em uma string\"\n return {' '.join(combo)\n for combo in itertools.combinations(items, n)\n }\n\nU6 = combos(urna, 6)\n\nlen(U6)", "Para não mostrar todos os conjuntos, vamos mostrar uma pequena amostra:", "import random\n\nrandom.sample(U6, 10)", "Mas será que o $100,947$ é a quantidade correta de formas de escolher 6 de 23 bolas? Bem, o raciocínio é esse:\n\nEscolha uma de 23 (sobram 22)\nEscolha uma de 22 (sobram 21)\n...\nEscolha a sexta de 18\n\nComo não ligamos para a ordem (enfim, é um conjunto), então dividimos por $6!$. 
Isso dá:\n\\begin{align}\n23 \\mbox{ escolha } 6 = \\frac{23 \\times 22 \\times 21 \\times 20 \\times 19 \\times 18}{6!} = 100947\n\\end{align}\nNote que $23 \\times 22 \\times 21 \\times 20 \\times 19 \\times 18 = 23!/17!$, então, podemos generalizar:\n\\begin{align}\nn \\mbox{ escolha } c = \\frac{n!}{c!\\times(n - c)!} = \\binom{n}{c}\n\\end{align}\nPodemos traduzir isso para código, assim:", "from math import factorial\n\ndef escolha(n, c):\n \"Número de formas de escolher c itens de uma lista com n items\"\n return factorial(n) // (factorial(c) * factorial(n - c))\n\nescolha(23, 6)", "Agora podemos resolver vários problemas.\nProblema 1: qual a probabilidade de selecionar 6 bolas vermelhas?", "red6 = {s for s in U6 if s.count('R') == 6}\n\nP(red6, U6)", "ou", "len(red6)", "Por que 84 formas? Por que há 9 bolas vermelhas na urna, então estamos querendo saber quantas são as formas de escolher 6 delas:", "escolha(9, 6)", "Então a probabilidade de selecionar 6 bolas vermelhas da urna é escolha(9, 6) dividido pelo tamanho do espaço amostral:", "P(red6, U6) == Fraction(escolha(9, 6), len(U6))", "Problema 2: qual a probabilidade de escolher 3 azuis, 2 brancas 1 vermelha?", "b3w2r1 = {s for s in U6 if s.count('B') == 3 and s.count('W') == 2 and s.count('R') == 1}\n\nP(b3w2r1, U6)", "Podemos encontrar o mesmo resultado contando de quantas formas podemos escolher 3 de 6 azuis, 2 de 8 brancas, 1 de 9 vermelhas e dividindo o resultado pelo tamanho do espaço amostral:", "P(b3w2r1, U6) == Fraction(escolha(6, 3) * escolha(8, 2) * escolha(9, 1), len(U6))", "O raciocínio seguinte também é válido:\n* há 6 formas de encontrar a primeira bola azul\n* há 5 formas de encontrar a segunda azul\n* há 4 formas de encontrar a terceira azul\n* há 8 formas de encontrar a primeira bola branca\n* há 7 formas de encontrar a segunda branca\n* há 9 formas de encontrar a bola vermelha\nComo ${B1, B2, B3} = {B3, B2, B1}$ e ${W1, W2} = {W2, W1}$, teríamos:\n\\begin{align}\n\\frac{(6 \\times 5 
\\times 4) \\times (8 \\times 7) \\times (9)}{3! \\times 2! \\times |A|}\n\\end{align}", " P(b3w2r1, U6) == Fraction((6 * 5 * 4) * (8 * 7) * 9, \n factorial(3) * factorial(2) * len(U6))", "Problema 3: Qual a probabilidade de termos exatamente 4 bolas brancas?\nPodemos encontrar a resposta usando os mesmos raciocínios anteriores:", "w4 = {s for s in U6 if\n s.count('W') == 4}\n\nP(w4, U6)\n\nP(w4, U6) == Fraction(escolha(8, 4) * escolha(15, 2),\n len(U6))\n\nP(w4, U6) == Fraction((8 * 7 * 6 * 5) * (15 * 14),\n factorial(4) * factorial(2) * len(U6))", "Esse último raciocínio, em particular, é interpretado assim:\n* 8 escolhas para a primeira bola\n* 7 escolhas para a segunda\n* 6 escolhas para a terceira\n* 5 escolhas para a quarta\n* 15 escolhas para as outras bolas (não brancas)\n* 14 escolhas para as outras bolas (não brancas restantes)\nOu seja:\n\\begin{align}\n\\frac{(8 \\times 7 \\times 6 \\times 5) \\times (15 \\times 14)}{4! \\times 2! \\times |A|}\n\\end{align}\nRevisando a função P, com eventos mais gerais\nAté o momento, para calcular a probabilidade de um dado com face par, usamos:\npar = {2, 4, 6}\nMas enumerar um conjunto pequeno é fácil, o difícil seria enumerar um conjunto maior. Assim:", "def P(evento, espaco):\n \"\"\"A probabilidade de um evento, dado um espaco amostral. \n evento pode ser um conjunto ou um predicado\"\"\"\n if callable(evento):\n evento = tal_que(evento, espaco)\n return Fraction(len(evento & espaco), len(espaco))\n\ndef tal_que(predicado, colecao):\n \"O subconjunto de elementos da colecao para os quais o predicado é verdadeiro\"\n return {e for e in colecao if predicado(e)}", "Vamos verificar como isso se comporta.", "def eh_par(n):\n return n % 2 == 0\n\nP(eh_par, A)\n\nD12 = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}\n\ntal_que(eh_par, D12)\n\nP(eh_par, D12)", "Isso permite coisas interessantes. 
Por exemplo, como determinar a probabilidade de que a soma de três dados é um número primo?", "A3 = {(a1, a2, a3) for a1 in A for a2 in A for a3 in A}\n\ndef soma_eh_primo(r): return eh_primo(sum(r))\n\ndef eh_primo(n): return n > 1 and not any(n % i == 0 for i in range(2, n))\n\nP(soma_eh_primo, A3)", "Problemas com cartas\nDefinimos o problema assim:\n* 4 naipes: copas (C), ouro (O), paus (P) e espadas (E) (no inglês: hearts (H), diamond (D), clubs (C) and spades (S))\n* 13 graus: Ás, 2, 3, 4, 5, 6, 7, 8, 9, 10 (ou T), J, Q, K\nO baralho possui 52 cartas.", "naipes = 'SHDC'\ngraus = 'A23456789TJQK'\nbaralho = cross(graus, naipes)\nlen(baralho)", "Qual a quantidade de jogadas (mãos) com 5 cartas?", "jogadas = combos(baralho, 5)\n\nprint(len(jogadas) == escolha(52, 5))\n\nrandom.sample(jogadas, 5)", "Qual a probabilidade de dar 'flush' (5 cartas do mesmo naipe)?", "def flush(jogada):\n return any(jogada.count(n) == 5 for n in naipes)\n\nP(flush, jogadas)", "Ou... qual a probabilidade de dar 'four of a kind'?", "def four_kind(jogada):\n return any(jogada.count(g) == 4 for g in graus)\n\nP(four_kind, jogadas)", "Fermat e Pascal: Apostas, Triângulos e o nascimento da Probabilidade\nConsidere um jogo de apostas, consistindo do [simples] fato de jogar uma moeda. Dois jogadores: H aposta em cara; e T aposta em coroa. Ganha o jogador que conseguir 10 acertos primeiro. Se o jogo for interrompido quando H tiver acertado 8 e T, acertado 7? Como dividir o pote de dinheiro? 
Fermat e Pascal trocaram correspondências sobre esse problema, o que pode ser lido aqui.\nPodemos resolver o problema com as ferramentas que temos:", "def ganhar_jogo_incompleto(h_pontos, t_pontos):\n \"A probabilidade de que H vai ganhar o jogo não terminado, dados os pontos necessários para H e T ganharem.\"\n def h_ganha(r): return r.count('h') >= h_pontos\n return P(h_ganha, continuacoes(h_pontos, t_pontos))\n\ndef continuacoes(h_pontos, t_pontos):\n \"Todas as continuações possíveis quando H precisa de `h_pontos` e T precisa de `t_pontos`\"\n rodadas = ['ht' for _ in range(h_pontos + t_pontos - 1)]\n return set(itertools.product(*rodadas))\n\ncontinuacoes(2, 3)\n\nganhar_jogo_incompleto(2, 3)", "O resultado confirma o encontrado por Pascal e Fermat.\nResultados não equiprováveis: Distribuições de probabilidade\nAté aqui, lidamos com casos em que a probabilidade de um retorno no espaço amostral é a mesma (uniforme). No mundo real, isso não é sempre verdade. Por exemplo, a probabilidade de uma criança ser uma menina não é exatamente 1/2, e a probabilidade é um pouco diferente para a segunda criança. Uma pesquisa encontrou as seguintes contagens de famílias com dois filhos na Dinamarca (GB significa uma família em que o primeiro filho é uma garota e o segundo filho é um menino):\nGG: 121801 GB: 126840\nBG: 127123 BB: 135138\nVamos introduzir mais três definições:\n\nFrequência: um número que descreve o quão frequente um resultado ocorre. 
Pode ser uma contagem, como 121801, ou uma razão, como 0,515\nDistribuição: um mapeamento de resultado para frequência, para cada resultado no espaço amostral.\nDistribuição de Probabilidade: uma distribuição que foi normalizada de tal forma que a soma das frequências é 1\n\nDefinimos a classe ProbDist (um subtipo do dict do Python):", "class ProbDist(dict):\n \"Uma distribuição de probablidade; um mapeamento {resultado: probabilidade}\"\n def __init__(self, mapping=(), **kwargs):\n self.update(mapping, **kwargs)\n total = sum(self.values())\n for outcome in self:\n self[outcome] = self[outcome]/total\n assert self[outcome] >= 0", "Também redefinimos as funções P e tal_que:", "def P(evento, espaco):\n \"\"\"A probabilidade de um evento, dado um espaço amostral de resultados equiprováveis.\n evento: uma coleção de resultados, ou um predicado.\n espaco: um conjunto de resultados ou a distribuicao de probabilidade na forma de pares {resultado: frequencia}.\n \"\"\"\n if callable(evento):\n evento = tal_que(evento, espaco)\n if isinstance(espaco, ProbDist):\n return sum(espaco[o] for o in espaco if o in evento)\n else:\n return Fraction(len(evento & espaco), len(espaco))\n \ndef tal_que(predicado, espaco):\n \"\"\"Os resultados no espaço amostral para os quais o predicado é verdadeiro.\n Se espaco é um conjunto, retorna um subconjunto {resultado, ...}\n Se espaco é ProbDist, retorna um ProbDist{resultado, frequencia}\"\"\"\n if isinstance(espaco, ProbDist):\n return ProbDist({o:espaco[o] for o in espaco if predicado(o)})\n else:\n return {o for o in espaco if predicado(o)}\n ", "Aqui está a distribuição de probabilidade para as famílias Dinamarquesas com dois filhos:", "DK = ProbDist(GG=121801, GB=126840, BG=127123, BB=135138)\n\nDK", "Vamos entender o resultado por partes. 
Para isso, alguns predicados:", "def primeiro_menina(r): return r[0] == 'G'\ndef primeiro_menino(r): return r[0] == 'B'\ndef segundo_menina(r): return r[1] == 'G'\ndef segundo_menino(r): return r[1] == 'B'\ndef duas_meninas(r): return r == 'GG'\ndef dois_meninos(r): return r == 'BB'\n\nP(primeiro_menina, DK)\n\nP(segundo_menina, DK)", "Isso indica que a probabilidade de uma criança ser menina está entre 48% e 49%, mas isso é um pouco diferente entre o primeiro e o segundo filhos.", "P(segundo_menina, tal_que(primeiro_menina, DK)), P(segundo_menina, tal_que(primeiro_menino, DK))\n\nP(segundo_menino, tal_que(primeiro_menina, DK)), P(segundo_menino, tal_que(primeiro_menino, DK))", "Isso diz que é mais provável que o sexo do segundo filho seja igual ao do primeiro cerca de 50%.\nMais problemas de Urnas: M&Ms e Bayes\nOutro problema de urna (Allen Downey):\n\nO M&M azul foi introduzido em 1995. Antes disso, a mistura de cores em um pacote de M&Ms era formado por: 30% marrom, 20% amarelo, 20% vermelho, 10% verde, 10% laranja, 10% tostado. Depois, ficou: 24% azul, 20% verde, 16% laranja, 14% amarelo, 14% vermelho, 13% marrom. Um amigo meu possui dois pacotes de M&Ms e ele me diz que um é de 1994 e outro é de 1996. Ele não me diz qual é qual, mas me dá um M&M de cada pacote. Um é amarelho e outro é verde. Qual a probabilidade de que o M&M amarelo seja do pacote de 1994?\n\nPara resolver esse problema, primeiro representamos as distribuições de probabilidade de cada pacote:", "bag94 = ProbDist(brown=30, yellow=20, red=20, green=10, orange=10, tan=10)\nbag96 = ProbDist(blue=24, green=20, orange=16, yellow=14, red=13, brown=13)", "A seguir, definimos MM como a probabilidade conjunta -- o espaço amostral para escolher um M&M de cada pacote. O resultado yellow green significa que um M&M amarelo foi selecionado do pacote de 1994 e um verde, do pacote de 1996.", "def joint(A, B, sep=''):\n \"\"\"A probabilidade conjunta de duas distribuições de probabilidade independentes. 
\n Resultado é todas as entradas da forma {a+sep+b: P(a)*P(b)}\"\"\"\n return ProbDist({a + sep + b: A[a] * B[b]\n for a in A\n for b in B})\n\nMM = joint(bag94, bag96, ' ')\nMM", "Primeiro, o predicado que trata \"um é amarelho e o outro é verde\":", "def yellow_and_green(r): return 'yellow' in r and 'green' in r\n\ntal_que(yellow_and_green, MM)", "Agora podemos responder a pergunta: dado que tivemos amarelo e verde (mas não sabemos sabemos qual vem de qual pacote), qual é a probabilidade de que o amarelo tenha vido do pacote de 1994?", "def yellow94(r): return r.startswith('yellow')\n\nP(yellow94, tal_que(yellow_and_green, MM))", "Então, há 74% de chance de que o amarelo tenha vindo do pacote de 1994.\nA forma de resolver o problema foi semelhante ao que já vínhamos fazendo: criar um espaço amostral, usar P para escolher a probabilidade do evento em questão, dado que sabemos sobre o retorno. \nPoderíamos usar o Teorema de Bayes, mas por que? Porque queremos saber a probabilidade de um evento dada uma evidência, que não está imediatamente disponível; entretanto, a probabilidade da evidência dado o evento está [disponível].\nAntes de ver as cores dos M&Ms, há duas hipóteses, A e B, ambas com igual probabilidade:\nA: primeiro M&amp;M do pacote de 1994, segundo do pacote de 1996\nB: primeiro M&amp;M do pacote de 1996, segundo do pacote de 1994\n\\begin{align}\nP(A) = P(B) = 0.5\n\\end{align}\nEntão, temos uma evidência:\nE: primeiro M&amp;M amarelo, depois verde\nQueremos saber a probabilidade da hipótese A, dada a evidência E: $P(A \\mid E)$.\nIsso não é fácil de calcular (exceto numerando o espaço amostral), mas o Teorema de Bayes diz:\n\\begin{align}\nP(A \\mid E) = \\frac{P(E \\mid A) \\times P(A)}{P(E)} \n\\end{align}\nAs quantidades do lado direito são mais fáceis de calcular:\n\\begin{align}\nP(E \\mid A) &= P(Yellow94) \\times P(Green96) &= 0.20 \\times 0.20 &= 0.04 \\\nP(E \\mid B) &= P(Yellow96) \\times P(Green94) &= 0.10 \\times 0.14 &= 0.014 \\\nP(A) &= 
0.5 \\\nP(B) &= 0.5 \\\nP(E) &= P(E \\mid A) \\times P(A) + P(E \\mid B) \\times P(B) \\\n&= 0.04 \\times 0.5 + 0.014 \\times 0.5 = 0.027\n\\end{align}\nO resultado final:\n\\begin{align}\nP(A \\mid E) &= \\frac{P(E \\mid A) \\times P(A)}{P(E)} \\\n&= \\frac{0.4 \\times 0.5}{0.027} \\\n&= 0.7407407407\n\\end{align}\nEntão é isso. Você tem uma escolha: O Teorema de Bayes permite fazer menos cálculos, mas usa mais álgebra; é melhor custo-benefício se você estiver trabalhando com lápis e papel. Por outro lado, enumerar o espaço amostrar usa menos álgebra, pelo custo de requerer mais cálculos; é melhor custo-benefício se você estiver usando um computador. Idependentemente da abordagem utilizada, é importante conhecer o Teorema de Bayes e como ele funciona.\nMais importante ainda: você comeria M&Ms de 20 anos?" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
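The two-bag M&M calculation walked through in the notebook above can be checked end-to-end in plain Python 3. This is an illustrative sketch, not the notebook's own code: it re-implements the notebook's `ProbDist`/`joint` helpers under English names (`such_that`, `P`) so the example is self-contained.

```python
class ProbDist(dict):
    """A probability distribution; a mapping {outcome: probability}.
    Frequencies are normalized so that the values sum to 1."""
    def __init__(self, mapping=(), **kwargs):
        self.update(mapping, **kwargs)
        total = sum(self.values())
        for outcome in self:
            self[outcome] = self[outcome] / total
            assert self[outcome] >= 0

def such_that(predicate, space):
    "The outcomes in the distribution for which the predicate is true."
    return ProbDist({o: space[o] for o in space if predicate(o)})

def P(event, space):
    "Probability of an event (predicate or collection) under a ProbDist."
    if callable(event):
        event = such_that(event, space)
    return sum(space[o] for o in space if o in event)

def joint(A, B, sep=' '):
    "Joint distribution of two independent distributions."
    return ProbDist({a + sep + b: A[a] * B[b] for a in A for b in B})

# The two bags from the problem statement.
bag94 = ProbDist(brown=30, yellow=20, red=20, green=10, orange=10, tan=10)
bag96 = ProbDist(blue=24, green=20, orange=16, yellow=14, red=13, brown=13)
MM = joint(bag94, bag96)

def yellow_and_green(o): return 'yellow' in o and 'green' in o
def yellow94(o): return o.startswith('yellow')  # yellow drawn from the 1994 bag

answer = P(yellow94, such_that(yellow_and_green, MM))
print(round(answer, 4))  # prints 0.7407
```

The enumeration agrees with the Bayes'-theorem calculation: 0.04 / (0.04 + 0.014) = 20/27 ≈ 0.7407.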
ddtm/dl-course
Seminar4/bonus/Bonus-advanced-theano.ipynb
mit
[ "Theano, Lasagne\nand why they matter\ngot no lasagne?\nInstall the bleeding edge version from here: http://lasagne.readthedocs.org/en/latest/user/installation.html\nWarming up\n\nImplement a function that computes the sum of squares of numbers from 0 to N\nUse numpy or python\nAn array of numbers 0 to N - numpy.arange(N)", "import numpy as np\ndef sum_squares(N):\n return <student.Implement_me()>\n\n%%time\nsum_squares(10**8)", "theano teaser\nDoing the very same thing", "import theano\nimport theano.tensor as T\n\n\n\n#I am going to be a function parameter\nN = T.scalar(\"a dimension\",dtype='int32')\n\n\n#I am a recipe for producing the sum of squares of arange(N), given N\nresult = (T.arange(N)**2).sum()\n\n#Compiling the recipe of computing \"result\" given N\nsum_function = theano.function(inputs = [N],outputs=result)\n\n%%time\nsum_function(10**8)", "How does it work?\nIf you're currently in the classroom, chances are I am explaining this text wall right now\n* 1 You define the inputs of your future function;\n* 2 You write a recipe for some transformation of those inputs;\n* 3 You compile it;\n* You have just got a function!\n* The gobbledygook version: you define a function as a symbolic computation graph.\n\nThere are two main kinds of entities: \"Inputs\" and \"Transformations\"\nBoth can be numbers, vectors, matrices, tensors, etc.\n\nBoth can be integers, floats or booleans (uint8) of various sizes.\n\n\nAn input is a placeholder for function parameters.\n\n\nN from the example above\n\n\nTransformations are recipes for computing something given inputs and other transformations\n\n(T.arange(N)**2).sum() is a chain of 3 sequential transformations of N\nTheano mirrors most of numpy's vector syntax\nYou can almost always go with replacing \"np.function\" with \"T.function\" aka \"theano.tensor.function\"\nnp.mean -> T.mean\nnp.arange -> T.arange\nnp.cumsum -> T.cumsum\nand so on.\nBuilt-in operations also work that way\nnp.arange(10).mean() -> T.arange(10).mean()\nOnce upon a blue moon the functions have different names or locations (e.g. T.extra_ops)\nAsk us or google it\n\n\n\nStill confused? We are going to fix that.", "#Inputs\nexample_input_integer = T.scalar(\"scalar input\",dtype='float32')\n\nexample_input_tensor = T.tensor4(\"four dimensional tensor input\") #dtype = theano.config.floatX by default\n#don't worry, we won't need the tensor\n\n\n\ninput_vector = T.vector(\"\", dtype='int32') # vector of integers\n\n\n#Transformations\n\n#transformation: elementwise multiplication\ndouble_the_vector = input_vector*2\n\n#elementwise cosine\nelementwise_cosine = T.cos(input_vector)\n\n#difference between squared vector and vector itself\nvector_squares = input_vector**2 - input_vector\n\n\n#Practice time:\n#create two vectors of dtype float32\nmy_vector = student.init_float32_vector()\nmy_vector2 = student.init_one_more_such_vector()\n\n#Write a transformation (recipe):\n#(vec1)*(vec2) / (sin(vec1) +1)\nmy_transformation = student.implementwhatwaswrittenabove()\n\nprint my_transformation\n#it's okay that it isn't a number", "Compiling\n\nSo far we have been using \"symbolic\" variables and transformations\nDefining the recipe for a computation, but not computing anything\nTo use the recipe, one should compile it", "inputs = [<two vectors that my_transformation depends on>]\noutputs = [<What do we compute (can be a list of several transformations)>]\n\n# The next lines compile a function that takes two vectors and computes your transformation\nmy_function = theano.function(\n inputs,outputs,\n allow_input_downcast=True #automatic type casting for input parameters (e.g. 
float64 -> float32)\n )\n\n#using the function with lists:\nprint \"using python lists:\"\nprint my_function([1,2,3],[4,5,6])\nprint\n\n#Or using numpy arrays:\n#btw, the 'float' dtype is cast to the second parameter's dtype, which is float32\nprint \"using numpy arrays:\"\nprint my_function(np.arange(10),\n np.linspace(5,6,10,dtype='float'))\n", "Debugging\n\nCompilation can take a while for big functions\nTo avoid waiting, one can evaluate transformations without compiling\nWithout compilation, the code runs slower, so consider reducing input size", "#a dictionary of inputs\nmy_function_inputs = {\n my_vector:[1,2,3],\n my_vector2:[4,5,6]\n}\n\n# evaluate my_transformation\n# has to match the compiled function output\nprint my_transformation.eval(my_function_inputs)\n\n\n# can compute transformations on the fly\nprint \"add 2 vectors\", (my_vector + my_vector2).eval(my_function_inputs)\n\n#!WARNING! if your transformation only depends on some inputs,\n#do not provide the rest of them\nprint \"vector's shape:\", my_vector.shape.eval({\n my_vector:[1,2,3]\n })\n", "When debugging, one would generally want to reduce the computational complexity. For example, if you are about to feed a neural network a batch of 1000 samples, consider taking just the first 2.\nIf you really want to debug a graph of high computational complexity, you could just as well compile it (e.g. with optimizer='fast_compile')\n\nDo It Yourself\n[2 points max]", "# Quest #1 - implement a function that computes the mean squared error of two input vectors\n# Your function has to take 2 vectors and return a single number\n\n<student.define_inputs_and_transformations()>\n\ncompute_mse =<student.compile_function()>\n\n# Tests\nfrom sklearn.metrics import mean_squared_error\n\nfor n in [1,5,10,10**3]:\n \n elems = [np.arange(n),np.arange(n,0,-1), np.zeros(n),\n np.ones(n),np.random.random(n),np.random.randint(100,size=n)]\n \n for el in elems:\n for el_2 in elems:\n true_mse = np.array(mean_squared_error(el,el_2))\n my_mse = compute_mse(el,el_2)\n if not np.allclose(true_mse,my_mse):\n print 'Wrong result:'\n print 'mse(%s,%s)'%(el,el_2)\n print \"should be: %f, but your function returned %f\"%(true_mse,my_mse)\n raise ValueError,\"Something is wrong\"\n\nprint \"All tests passed\"\n \n ", "Shared variables\n\n\nThe inputs and transformations only exist when the function is called\n\n\nShared variables always stay in memory like global variables\n\nShared variables can be included into a symbolic graph\nThey can be set and evaluated using special methods\nbut they can't change value arbitrarily during symbolic graph computation\n\nwe'll cover that later;\n\n\nHint: such variables are a perfect place to store network parameters\n\ne.g. weights or some metadata", "#creating a shared variable\nshared_vector_1 = theano.shared(np.ones(10,dtype='float64'))\n\n\n#evaluating a shared variable (outside the symbolic graph)\nprint \"initial value\",shared_vector_1.get_value()\n\n# within the symbolic graph you use them just as any other input or transformation, no \"get_value\" needed\n\n#setting a new value\nshared_vector_1.set_value( np.arange(5) )\n\n#getting that new value\nprint \"new value\", shared_vector_1.get_value()\n\n#Note that the vector changed shape\n#This is entirely allowed... 
unless your graph is hard-wired to work with some fixed shape", "Your turn", "# Write a recipe (transformation) that computes an elementwise transformation of shared_vector and input_scalar\n#Compile it as a function of input_scalar\n\ninput_scalar = T.scalar('coefficient',dtype='float32')\n\nscalar_times_shared = <student.write_recipe()>\n\n\nshared_times_n = <student.compile_function()>\n\n\nprint \"shared:\", shared_vector_1.get_value()\n\nprint \"shared_times_n(5)\",shared_times_n(5)\n\nprint \"shared_times_n(-0.5)\",shared_times_n(-0.5)\n\n\n#Changing the value of vector 1 (output should change)\nshared_vector_1.set_value([-1,0,1])\nprint \"shared:\", shared_vector_1.get_value()\n\nprint \"shared_times_n(5)\",shared_times_n(5)\n\nprint \"shared_times_n(-0.5)\",shared_times_n(-0.5)\n", "T.grad - why theano matters\n\nTheano can compute derivatives and gradients automatically\nDerivatives are computed symbolically, not numerically\n\nLimitations:\n* You can only compute a gradient of a scalar transformation over one or several scalar or vector (or tensor) transformations or inputs.\n* A transformation has to have float32 or float64 dtype throughout the whole computation graph\n * derivative over an integer has no mathematical sense", "my_scalar = T.scalar(name='input',dtype='float64')\n\nscalar_squared = T.sum(my_scalar**2)\n\n#the derivative of scalar_squared with respect to my_scalar\nderivative = T.grad(scalar_squared,my_scalar)\n\nfun = theano.function([my_scalar],scalar_squared)\ngrad = theano.function([my_scalar],derivative) \n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n\nx = np.linspace(-3,3)\nx_squared = map(fun,x)\nx_squared_der = map(grad,x)\n\nplt.plot(x, x_squared,label=\"x^2\")\nplt.plot(x, x_squared_der, label=\"derivative\")\nplt.legend()", "Why that rocks", "\nmy_vector = T.vector('float64')\n\n#Compute the gradient of the following weird function over my_scalar and my_vector\n#warning! Trying to understand the meaning of that function may result in permanent brain damage\n\nweird_psychotic_function = ((my_vector+my_scalar)**(1+T.var(my_vector)) +1./T.arcsinh(my_scalar)).mean()/(my_scalar**2 +1) + 0.01*T.sin(2*my_scalar**1.5)*(T.sum(my_vector)* my_scalar**2)*T.exp((my_scalar-4)**2)/(1+T.exp((my_scalar-4)**2))*(1.-(T.exp(-(my_scalar-4)**2))/(1+T.exp(-(my_scalar-4)**2)))**2\n\n\nder_by_scalar,der_by_vector = <student.compute_grad_over_scalar_and_vector()>\n\n\ncompute_weird_function = theano.function([my_scalar,my_vector],weird_psychotic_function)\ncompute_der_by_scalar = theano.function([my_scalar,my_vector],der_by_scalar)\n\n\n#Plotting your derivative\nvector_0 = [1,2,3]\n\nscalar_space = np.linspace(0,7)\n\ny = [compute_weird_function(x,vector_0) for x in scalar_space]\nplt.plot(scalar_space,y,label='function')\ny_der_by_scalar = [compute_der_by_scalar(x,vector_0) for x in scalar_space]\nplt.plot(scalar_space,y_der_by_scalar,label='derivative')\nplt.grid();plt.legend()\n", "Almost done - Updates\n\n\nUpdates are a way of changing shared variables after a function call.\n\n\nTechnically it's a dictionary {shared_variable : a recipe for the new value} which has to be provided when the function is compiled\n\n\nThat's how it works:", "# Multiply the shared vector by a number and save the product back into the shared vector\n\ninputs = [input_scalar]\noutputs = [scalar_times_shared] #return vector times scalar\n\nmy_updates = {\n shared_vector_1:scalar_times_shared #and write this same result back into shared_vector_1\n}\n\ncompute_and_save = theano.function(inputs, outputs, updates=my_updates)\n\nshared_vector_1.set_value(np.arange(5))\n\n#initial shared_vector_1\nprint \"initial shared value:\" ,shared_vector_1.get_value()\n\n# evaluating the function (shared_vector_1 will be changed)\nprint \"compute_and_save(2) returns\",compute_and_save(2)\n\n#evaluate the new shared_vector_1\nprint \"new shared value:\" ,shared_vector_1.get_value()\n\n", "Logistic regression 
example\n[ 4 points max]\nImplement the regular logistic regression training algorithm\nTips:\n* Weights fit in as a shared variable\n* X and y are potential inputs\n* Compile 2 functions:\n * train_function(X,y) - returns the error and computes the weights' new values (through updates)\n * predict_fun(X) - just computes probabilities (\"y\") given data\nWe shall train on a two-class MNIST dataset\n* please note that the targets y are {0,1} and not {-1,1} as in some formulae", "from sklearn.datasets import load_digits\nmnist = load_digits(2)\n\nX,y = mnist.data, mnist.target\n\n\nprint \"y [shape - %s]:\"%(str(y.shape)),y[:10]\n\nprint \"X [shape - %s]:\"%(str(X.shape))\nprint X[:3]\nprint y[:10]\n\n# inputs and shareds\nshared_weights = <student.code_me()>\ninput_X = <student.code_me()>\ninput_y = <student.code_me()>\n\npredicted_y = <predicted probabilities for input_X>\nloss = <logistic loss (scalar, mean over sample)>\n\ngrad = <gradient of loss over model weights>\n\n\n\nupdates = {\n shared_weights: <new weights after gradient step>\n}\n\ntrain_function = <compile function that takes X and y, returns log loss and updates weights>\npredict_function = <compile function that takes X and computes probabilities of y>\n\nfrom sklearn.cross_validation import train_test_split\nX_train,X_test,y_train,y_test = train_test_split(X,y)\n\nfrom sklearn.metrics import roc_auc_score\n\nfor i in range(5):\n loss_i = train_function(X_train,y_train)\n print \"loss at iter %i:%.4f\"%(i,loss_i)\n print \"train auc:\",roc_auc_score(y_train,predict_function(X_train))\n print \"test auc:\",roc_auc_score(y_test,predict_function(X_test))\n\n \nprint \"resulting weights:\"\nplt.imshow(shared_weights.get_value().reshape(8,-1))\nplt.colorbar()", "my1stNN\n[basic part 4 points max]\nYour ultimate task for this week is to build your first neural network [almost] from scratch in pure theano.\nThis time you will solve the same digit recognition problem, but at a larger scale\n* images are now 28x28\n* 10 different digits\n* 50k samples\nNote that you are not required to build 152-layer monsters here. A 2-layer (one hidden, one output) NN should already give you an edge over logistic regression.\n[bonus score]\nIf you've already beaten logistic regression with a two-layer net, but enthusiasm still ain't gone, you can try improving the test accuracy even further! The milestones would be 95%/97.5%/98.5% accuracy on the test set.\nSPOILER!\nAt the end of the notebook you will find a few tips and frequently made mistakes. If you feel mighty enough to shoot yourself in the foot without external assistance, we encourage you to do so, but if you encounter any unsurpassable issues, please do look there before mailing us.", "from mnist import load_dataset\n\n#[down]loading the original MNIST dataset.\n#Please note that you should only train your NN on the _train sample,\n# _val can be used to evaluate out-of-sample error, compare models or perform early-stopping\n# _test should be hidden under a rock until final evaluation... But we both know it is near impossible to catch you evaluating on it.\nX_train,y_train,X_val,y_val,X_test,y_test = load_dataset()\n\nprint X_train.shape,y_train.shape\n\nplt.imshow(X_train[0,0])\n\n<here you could just as well create the computation graph>\n\n<this may or may not be a good place to evaluate loss and updates>\n\n<here one could compile all the required functions>\n\n<this may be a perfect cell to write a training&evaluation loop in>\n\n<predict & evaluate on test here, right? No cheating pls.>", "Report\nI did such and such, that did that cool thing and my stupid NN bloated out that stuff. Finally, i did that thingy and felt like Le'Cun. That cool article and that kind of weed helped me so much (if any).\n```\n```\n```\n```\n```\n```\n```\n```\n```\n```\n```\n```\n```\n```\n```\n```\nSPOILERS!\nRecommended pipeline\n\nAdapt logistic regression from the previous assignment to classify one digit against the others (e.g. zero vs nonzero)\nGeneralize it to multiclass logistic regression.\nEither try to remember lecture 0 or google it.\nInstead of a weight vector you'll have to use a matrix (feature_id x class_id)\nsoftmax (exp over sum of exps) can be implemented manually or as T.nnet.softmax (stable)\nprobably better to use STOCHASTIC gradient descent (minibatch)\nin which case the sample should probably be shuffled (or use random subsamples on each iteration)\n\n\nAdd a hidden layer. Now your logistic regression uses hidden neurons instead of inputs.\nThe hidden layer uses the same math as the output layer (ex-logistic regression), but uses some nonlinearity (sigmoid) instead of softmax\nYou need to train both layers, not just the output layer :)\nDo not initialize layers with zeros (due to symmetry effects). Gaussian noise with a small sigma will do.\n50 hidden neurons and a sigmoid nonlinearity will do for a start. Many ways to improve. \nIn the ideal case this totals to 2 .dot's, 1 softmax and 1 sigmoid\n\nmake sure this neural network works better than logistic regression\n\n\nNow's the time to try improving the network. Consider layers (size, neuron count), nonlinearities, optimization methods, initialization - whatever you want, but please avoid convolutions for now." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
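The logistic-regression pattern this notebook builds up to (weights kept as a shared variable, a train function that both returns the loss and applies gradient updates) can be mimicked without Theano, which is no longer maintained. Below is a hedged pure-NumPy sketch of the same idea: the closure over `w` plays the role of a shared variable and the in-place step plays the role of `updates=`. None of this is the Theano API itself, and the toy data is invented for illustration.

```python
import numpy as np

def make_logreg(n_features, lr=0.5, seed=0):
    """A tiny logistic-regression 'module': train() returns the log loss
    and updates the weights in place, mirroring theano.function(updates=...)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.01, size=n_features)  # the "shared" weights
    b = 0.0

    def predict(X):
        # Sigmoid probabilities, like the notebook's predict_function.
        return 1.0 / (1.0 + np.exp(-(X @ w + b)))

    def train(X, y):
        nonlocal w, b
        p = predict(X)
        eps = 1e-9  # avoid log(0)
        loss = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
        grad_w = X.T @ (p - y) / len(y)  # gradient of the mean log loss
        grad_b = np.mean(p - y)
        w -= lr * grad_w                 # the "update" step
        b -= lr * grad_b
        return loss

    return train, predict

# Toy separable data: the class is 1 exactly when the first feature is positive.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(float)

train, predict = make_logreg(n_features=2)
losses = [train(X, y) for _ in range(200)]
acc = np.mean((predict(X) > 0.5) == y)
```

After 200 full-batch steps the loss decreases and the learned boundary recovers the sign of the first feature.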
SciTools/courses
course_content/iris_course/1.The_Iris_Cube.ipynb
gpl-3.0
[ "%matplotlib inline", "Iris introduction course\n1. The Iris Cube\nLearning Outcome: by the end of this section, you will be able to explain the capabilities and functionality of Iris Cubes and Coordinates.\nDuration: 1 hour\nOverview:<br>\n1.1 Introduction to the Iris Cube<br>\n1.2 Working with a Cube<br>\n1.3 Cube Attributes<br>\n1.4 Coordinates<br>\n1.5 Exercise<br>\n1.6 Summary of the Section", "import iris", "1.1 Introduction to the Iris Cube<a id='intro_to_iris_cube'></a>\nThe top level object in Iris is called a Cube. A Cube contains data and metadata about a single phenomenon and is an implementation of the data model interpreted from the Climate and Forecast (CF) Metadata Conventions.\nEach cube has:\n\nA data array (typically a NumPy array).\nA \"name\", preferably a CF \"standard name\" to describe the phenomenon that the cube represents.\nA collection of coordinates to describe each of the dimensions of the data array. These coordinates are split into two types:\nDimension Coordinates are numeric, monotonic and represent a single dimension of the data array. 
There may be only one Dimension Coordinate per data dimension.\nAuxiliary Coordinates can be of any type, including discrete values such as strings, and may represent more than one data dimension.\n\n\n\nA fuller explanation is available in the Iris user guide.\nLet's take a simple example to demonstrate the Cube concept.\nSuppose we have a (3, 2, 4) NumPy array:\n\nWhere dimensions 0, 1, and 2 have lengths 3, 2 and 4 respectively.\nThe Iris Cube to represent this data may consist of:\n\n\na standard name of \"air_temperature\" and units of \"kelvin\"\n\n\na data array of shape (3, 2, 4)\n\n\na coordinate, mapping to dimension 0, consisting of:\n\na standard name of \"height\" and units of \"meters\"\nan array of length 3 representing the 3 height points\n\n\n\na coordinate, mapping to dimension 1, consisting of:\n\na standard name of \"latitude\" and units of \"degrees\"\nan array of length 2 representing the 2 latitude points\na coordinate system such that the latitude points could be fully located on the globe\n\n\n\na coordinate, mapping to dimension 2, consisting of:\n\na standard name of \"longitude\" and units of \"degrees\"\nan array of length 4 representing the 4 longitude points\na coordinate system such that the longitude points could be fully located on the globe\n\n\n\nPictorially the Cube has taken on more information than a simple array:\n\n\n1.2 Working with a Cube<a id='working_with_a_cube'></a>\nTo load in a Cube from a file, we make use of the iris.load function.\n<div class=\"alert alert-block alert-warning\">\n <b><font color='brown'>Exercise: </font></b>\n <p>Take a look at the above link to see how `iris.load` is called.</p>\n</div>\n\nFor the purpose of this course, we will be using the sample data provided with Iris. We use the utility function iris.sample_data_path which returns the filepath where the sample data is installed. 
We assign the output filepath returned by the iris.sample_data_path function to a variable called fname.", "fname = iris.sample_data_path('space_weather.nc')", "<div class=\"alert alert-block alert-warning\">\n <b><font color='brown'>Exercise: </font></b>\n <p>Try printing <b><font style='font-family: courier'>fname</font></b>, to see where the sample data is installed on your system.<p/>\n</div>", "#\n# edit space for user code ...\n#", "We load in the filepath fname with iris.load.", "cubes = iris.load(fname)\nprint(cubes)", "iris.load returns an iris.cube.CubeList of all the cubes found in the file. From the above print out, we can see that we have loaded two cubes from the file, one representing the \"total electron content\" and the other representing \"electron density\". We can infer further detail about the returned cubes from this printout, such as the units, dimensions and shape.\n<div class=\"alert alert-block alert-warning\">\n <b><font color='brown'>Exercise: </font></b>\n <p>What are the dimensions of the \"total electron content\" cube?\n <br>What are the units of the \"electron_density\" cube?</p>\n</div>", "#\n# edit space for user notes ...\n#", "<b><font color=\"brown\">SAMPLE SOLUTION:</font></b>\nUn-comment and execute the following, to view a possible solution, and some code.\nThen run it ...", "# SAMPLE SOLUTION\n# %load solutions/iris_exercise_1.2a", "To see more detail about a specific cube, we can print out a single cube from the cubelist. We can select the second cube in the cubelist with indexing, and then print out what it returns.", "air_pot_temp = cubes[1]\nprint(air_pot_temp)", "As before, we have an overview of the cube's dimensions as well as the cube's name and units. We also have further detail on the cube's metadata, such as the Dimension Coordinates, Auxiliary Coordinates and Attributes. \nIn the printout, the dimension marker 'x' shows which dimensions apply to each coordinate. 
For example, we can see that the latitude Auxiliary Coordinate varies along the grid_latitude and grid_longitude dimensions.\nWhilst the printout of a cube gives a nice overview of the cube's metadata, we can dig deeper by inspecting the attributes of our cube object, as covered in the next section.\n\n1.3 Cube Attributes<a id='cube_attributes'></a>\nWe load in a different file (using the iris.sample_data_path utility function, as before, to give us the path of the file) and index out the first cube from the cubelist that is returned.", "fname = iris.sample_data_path('A1B_north_america.nc')\ncubes = iris.load(fname)\ncube = cubes[0]\nprint(cube)", "We can see that we have loaded and selected an air_temperature cube with time, latitude and longitude dimensions and the associated Dimension coordinates. We also have a forecast_period Auxiliary coordinate which maps the time dimension. Our cube also has two scalar coordinates: forecast_reference_time and height, and a cell method of mean: time (6 hour) which means that the cube contains 6-hourly mean air temperatures.\nTo access the values of air temperature in the cube we use the data property. This is either a NumPy array or, in some cases, a NumPy masked array. It is very important to note that for most of the supported filetypes in Iris, the cube's data isn't actually loaded until you request it via this property (either directly or indirectly). After you've accessed the data once, it is stored on the cube and thus won't be loaded from disk again.\nTo find the shape of a cube's data it is possible to call cube.data.shape or cube.data.ndim, but this will trigger any unloaded data to be loaded. 
Therefore shape and ndim are properties available directly on the cube that do not unnecessarily load data.", "print(cube.shape)\nprint(cube.ndim)\nprint(type(cube.data))", "<div class=\"alert alert-block alert-warning\">\n <b><font color=\"brown\">Exercise: </font></b>\n <p>From the above output we can see that cube.data is a masked numpy array.\n <br>How would you find out the fill value of this masked array?</p>\n</div>", "#\n# edit space for user code ...\n#\n\n# SAMPLE SOLUTION\n# %load solutions/iris_exercise_1.3a", "The standard_name, long_name and to an extent var_name are all attributes to describe the phenomenon that the cube represents. The name() method is a convenience that looks at the name attributes in the order they are listed above, returning the first non-empty string.", "print(cube.standard_name)\nprint(cube.long_name)\nprint(cube.var_name)\nprint(cube.name())", "standard_name is restricted to be a CF standard name (see the CF standard name table). \nIf there is not a suitable CF standard name, cube.standard_name is set to None and the long_name is used instead.\nlong_name is less restrictive and can be set to be any string. \nvar_name is the name of a netCDF file variable in the input file, or to be used in output. This is normally unimportant, as CF data is identified by 'standard_name' instead. (Note: although they are often the same, some standard names are not valid as netCDF variable names.)\nTo rename a cube, it is possible to set the attributes manually, but it is generally easier to use the rename() method.\nBelow we rename the cube to a string that we know is not a valid CF standard name.", "cube.rename(\"A name that isn't a valid CF standard name\")\n\nprint(cube.standard_name)\nprint(cube.long_name)\nprint(cube.var_name)\nprint(cube.name())", "When renaming a cube, Iris will initially try to set cube.standard_name.\nIf the name is not a standard name, cube.long_name is set instead.\n<div class=\"alert alert-block alert-warning\">\n <b><font color=\"brown\">Exercise: </font></b>\n <p>Take a look at the <a href=http://cfconventions.org/standard-names.html> CF standard name table</a> and try renaming the cube to an accepted name.</p>\n</div>", "#\n# edit space for user code ...\n#\n\n# SAMPLE SOLUTION\n# %load solutions/iris_exercise_1.3b", "The units attribute on a cube tells us the units of the numbers held in the data array.", "print(cube.units)\nprint(cube.data.max())", "We can convert the cube to another unit using the convert_units method, which will automatically update the data array.", "cube.convert_units('Celsius')\nprint(cube.units)\nprint(cube.data.max())", "A cube also has a dictionary for extra general purpose attributes, which can be accessed with the cube.attributes attribute:", "print(cube.attributes)", "<div class=\"alert alert-block alert-warning\">\n <b><font color=\"brown\">Exercise: </font></b>\n <p>Update the `cube.attributes` dictionary with a new entry.\n <br>For example <b><font face=\"courier\" color=\"black\">{'comment':'Original data had units of degrees celsius'}</font></b>.</p>\n</div>", "#\n# edit space for user code ...\n#\n\n# SAMPLE SOLUTION\n# %load solutions/iris_exercise_1.3c", "1.4 Coordinates<a id='coordinates'></a>\nAs we've 
seen, cubes need coordinate information to help us describe the underlying phenomenon. Typically a cube's coordinates are accessed with the coords or coord methods. The latter must return exactly one coordinate for the given parameter filters, where the former returns a list of matching coordinates, possibly of length 0.\nFor example, to access the time coordinate, and print the first 4 times:", "time = cube.coord('time')\nprint(time[:4])", "The coordinate interface is very similar to that of a cube. The attributes that exist on both cubes and coordinates are: standard_name, long_name, var_name, units, attributes and shape. Similarly, the name(), rename() and convert_units() methods also exist on a coordinate.\n\nA coordinate does not have data, instead it has points and bounds (bounds may be None). In Iris, time coordinates are currently represented as \"a number since an epoch\":", "print(repr(time.units))\nprint(time.points[:4])\nprint(time.bounds[:4])", "These numbers can be converted to datetime objects with the unit's num2date method. Dates can be converted back again with the date2num method:", "import datetime\n\nprint(time.units.num2date(time.points[:4]))\nprint(time.units.date2num(datetime.datetime(1970, 2, 1)))", "Another important attribute on a coordinate is its coordinate system. Coordinate systems may be None for trivial coordinates, but particularly for spatial coordinates, they may be complex definitions of things such as the projection, ellipse and/or datum.", "lat = cube.coord('latitude')\nprint(lat.coord_system)", "In this case, the latitude's coordinate system is a simple geographic latitude on a spherical globe of radius 6371229 (meters).\n\n1.5 Section Review Exercise<a id='exercise'></a>\n1. Load the file in iris.sample_data_path('atlantic_profiles.nc') and print the cube list. 
Store these cubes in a variable called cubes.", "# EDIT for user code ...\n\n# SAMPLE SOLUTION : Un-comment and execute the following to see a possible solution ...\n\n# %load solutions/iris_exercise_1.5a", "2. Loop through each of the cubes (e.g. for cube in cubes) and print the standard name of each.", "# user code ...\n\n# SAMPLE SOLUTION\n# %load solutions/iris_exercise_1.5b", "3. Index cubes to retrieve the sea_water_potential_temperature cube. \nNote that indexing to extract single cubes is useful for EDA, but it is better practice to use constraints (See 3. Cube Control and Subsetting.ipynb for more information).", "# user code ...\n\n# SAMPLE SOLUTION\n# %load solutions/iris_exercise_1.5c", "4. Get hold of the latitude coordinate on the sea_water_potential_temperature cube. Identify whether this coordinate has bounds. Print the minimum and maximum latitude points in the cube.", "# user code ...\n\n# SAMPLE SOLUTION\n# %load solutions/iris_exercise_1.5d", "1.6 Summary of Section: The iris Cube<a id='summary'></a>\nIn this section we learnt:\n* An iris cube, which contains data and metadata, is based on the cf data model, containing dimension and auxiliary coordinates.\n* Printing out a cube gives an overview of its metadata. We can get more information on the cube by inspecting its attributes (e.g. cube.standard_name, cube.units)\n* A coordinate has a similar interface to a cube, but a coordinate has points and bounds, where a cube has data." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
phoebe-project/phoebe2-docs
development/examples/spot_transit.ipynb
gpl-3.0
[ "Spot Transit\nSetup\nLet's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).", "#!pip install -I \"phoebe>=2.4,<2.5\"", "As always, let's do imports and initialize a logger and a new bundle.", "import phoebe\nimport numpy as np\n\nb = phoebe.default_binary()", "Let's set reasonable (although not necessarily physical) values for the secondary component.", "b.flip_constraint('mass@secondary', solve_for='q')\nb.set_value(qualifier='mass', component='secondary', value=0.2)\nb.set_value(qualifier='requiv', component='secondary', value=0.2)\nb.set_value(qualifier='teff', component='secondary', value=300)\n", "We'll add a spot to the primary component.", "b.add_spot(component='primary', \n relteff=0.90, \n long=0, \n colat=90, \n radius=20, \n feature='spot01')", "Adding Datasets", "b.add_dataset('lc', compute_times=phoebe.linspace(-0.1, 0.1, 201))", "Because we have such a cool transiting object, we'll have to use blackbody atmospheres and manually provide limb-darkening.", "b.set_value(qualifier='atm', component='secondary', value='blackbody')\nb.set_value(qualifier='ld_mode', component='secondary', value='manual')\n\nanim_times = phoebe.linspace(-0.1, 0.1, 101)\n\nb.add_dataset('mesh', compute_times=anim_times, coordinates='uvw', columns='teffs')", "Running Compute", "b.run_compute(distortion_method='sphere', irrad_method='none')", "Plotting", "print(np.min(b.get_value('teffs', time=0.0, component='primary')), np.max(b.get_value('teffs', time=0.0, component='primary')))", "Let's go through these options (see also the plot API docs):\n* time: make the plot at this single time\n* fc: (will be ignored by everything but the mesh): set the facecolor to the teffs column.\n* fcmap: use 'plasma' colormap instead of the default to avoid whites.\n* fclim: set the limits on facecolor so that the much cooler transiting object doesn't drive the entire range.\n* ec: disable drawing the 
edges of the triangles in a separate color. We could also set this to 'none', but then we'd be able to \"see-through\" the triangle edges.\n* tight_layout: use matplotlib's tight layout to ensure we have enough padding between axes to see the labels.", "afig, mplfig = b.plot(time=0.0,\n fc='teffs', fcmap='plasma', fclim=(5000, 6000), \n ec='face', \n tight_layout=True,\n show=True)", "Now let's animate the same figure in time. We'll use the same arguments as the static plot above, with the following exceptions:\n\ntimes: pass our array of times that we want the animation to loop over.\nconsider_for_limits: for the mesh panel, keep the primary star centered and allow the transiting object to move in and out of the frame.\npad_aspect: pad_aspect doesn't work with animations, so we'll disable to avoid the warning messages.\nanimate: self-explanatory.\nsave: we could use show=True, but that doesn't always play nice with jupyter notebooks\nsave_kwargs: may need to change these for your setup, to create a gif, passing {'writer': 'imagemagick'} is often useful.", "afig, mplfig = b.plot(times=anim_times,\n fc='teffs', fcmap='plasma', fclim=(5000, 6000), \n ec='face', \n consider_for_limits={'primary': True, 'secondary': False},\n tight_layout=True, pad_aspect=False,\n animate=True, \n save='spot_transit.gif',\n save_kwargs={'writer': 'imagemagick'})", "" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
stephenliu1989/msmbuilder
examples/advanced/hmm-and-msm.ipynb
lgpl-2.1
[ "This example builds HMM and MSMs on the alanine_dipeptide dataset using varing lag times\nand numbers of states, and compares the relaxation timescales", "from __future__ import print_function\nimport os\n%matplotlib inline\nfrom matplotlib.pyplot import *\nfrom msmbuilder.featurizer import SuperposeFeaturizer\nfrom msmbuilder.example_datasets import AlanineDipeptide\nfrom msmbuilder.hmm import GaussianHMM\nfrom msmbuilder.cluster import KCenters\nfrom msmbuilder.msm import MarkovStateModel", "First: load and \"featurize\"\nFeaturization refers to the process of converting the conformational\nsnapshots from your MD trajectories into vectors in some space $\\mathbb{R}^N$ that can be manipulated and modeled by subsequent analyses. The Gaussian HMM, for instance, uses Gaussian emission distributions, so it models the trajectory as a time-dependent\nmixture of multivariate Gaussians.\nIn general, the featurization is somewhat of an art. For this example, we're using Mixtape's SuperposeFeaturizer, which superposes each snapshot onto a reference frame (trajectories[0][0] in this example), and then measure the distance from each\natom to its position in the reference conformation as the 'feature'", "print(AlanineDipeptide.description())\n\ndataset = AlanineDipeptide().get()\ntrajectories = dataset.trajectories\ntopology = trajectories[0].topology\n\nindices = [atom.index for atom in topology.atoms if atom.element.symbol in ['C', 'O', 'N']]\nfeaturizer = SuperposeFeaturizer(indices, trajectories[0][0])\nsequences = featurizer.transform(trajectories)", "Now sequences is our featurized data.", "lag_times = [1, 10, 20, 30, 40]\nhmm_ts0 = {}\nhmm_ts1 = {}\nn_states = [3, 5]\n\nfor n in n_states:\n hmm_ts0[n] = []\n hmm_ts1[n] = []\n for lag_time in lag_times:\n strided_data = [s[i::lag_time] for s in sequences for i in range(lag_time)]\n hmm = GaussianHMM(n_states=n, n_init=1).fit(strided_data)\n timescales = hmm.timescales_ * lag_time\n hmm_ts0[n].append(timescales[0])\n 
hmm_ts1[n].append(timescales[1])\n print('n_states=%d\\tlag_time=%d\\ttimescales=%s' % (n, lag_time, timescales))\n print()\n\nfigure(figsize=(14,3))\n\nfor i, n in enumerate(n_states):\n subplot(1,len(n_states),1+i)\n plot(lag_times, hmm_ts0[n])\n plot(lag_times, hmm_ts1[n])\n if i == 0:\n ylabel('Relaxation Timescale')\n xlabel('Lag Time')\n title('%d states' % n)\n\nshow()\n\nmsmts0, msmts1 = {}, {}\nlag_times = [1, 10, 20, 30, 40]\nn_states = [4, 8, 16, 32, 64]\n\nfor n in n_states:\n msmts0[n] = []\n msmts1[n] = []\n for lag_time in lag_times:\n assignments = KCenters(n_clusters=n).fit_predict(sequences)\n msm = MarkovStateModel(lag_time=lag_time, verbose=False).fit(assignments)\n timescales = msm.timescales_\n msmts0[n].append(timescales[0])\n msmts1[n].append(timescales[1])\n print('n_states=%d\\tlag_time=%d\\ttimescales=%s' % (n, lag_time, timescales[0:2]))\n print()\n\nfigure(figsize=(14,3))\n\nfor i, n in enumerate(n_states):\n subplot(1,len(n_states),1+i)\n plot(lag_times, msmts0[n])\n plot(lag_times, msmts1[n])\n if i == 0:\n ylabel('Relaxation Timescale')\n xlabel('Lag Time')\n title('%d states' % n)\n\nshow()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
nicoguaro/AdvancedMath
notebooks/sympy/ode.ipynb
mit
[ "Ordinary differential equations\nIntroduction\nEngineering systems are often represented as differential equations. sympy has robust support for the following types of ordinary differential equations (ODEs):\n\n1st order separable differential equations\n1st order differential equations whose coefficients or dx and dy are functions homogeneous of the same order.\n1st order exact differential equations.\n1st order linear differential equations\n1st order Bernoulli differential equations.\n2nd order Liouville differential equations.\nnth order linear homogeneous differential equation with constant coefficients.\nnth order linear inhomogeneous differential equation with constant coefficients using the method of undetermined coefficients.\nnth order linear inhomogeneous differential equation with constant coefficients using the method of variation of parameters.\n\nIn addition to ODE support, sympy can even solve separable (either multiplicative or additive) partial differential equations (PDEs).\nThe main functionality for ODE solving in sympy is the sympy.dsolve function. The call signature for this function is sympy.dsolve(eq, goal, **kwargs), where eq is the differential equation, goal is the function you want to end up with, and **kwargs is a placeholder for other arguments that could be passed to the function to help it out a bit. For more detail on what **kwargs could be, see the documentation.", "from sympy import *\n\ninit_session()\nplt.style.use(u\"seaborn-notebook\")", "To solve differential equations, use dsolve. 
First, create an undefined function by passing cls=Function to the symbols function.", "f, g = symbols('f g', cls=Function)", "Derivatives of f(x) are unevaluated.", "f(x).diff(x)", "To represent the differential equation $f''(x) - 2f'(x) + f(x) = \\sin(x)$, we would use", "ode = Eq(f(x).diff(x, x) - 2*f(x).diff(x) + f(x), sin(x))\n\node", "To solve the ODE, pass it and the function to solve for to dsolve.", "sol = dsolve(ode, f(x))\ndisplay(sol)", "dsolve returns an instance of Eq. This is because in general, solutions to differential equations cannot be solved explicitly for the function.", "dsolve(f(x).diff(x)*(1 - sin(f(x))), f(x))", "The arbitrary constants in the solutions from dsolve are symbols of the form $C_1, C_2, C_3$, and so on.\nApplying Boundary conditions\nThis isn't implemented yet in the stable version of dsolve, but it will be available in the next release (probably!).\nFor now, solve for constants on your own. For example, if\n$$f(0)=1,\\quad \\left.\\frac{d f}{d x}\\right\\vert_{x=0}=0,$$\nsolve the following equations:", "constants = solve([sol.rhs.subs(x,0) - 1, sol.rhs.diff(x,1).subs(x,0)- 0])\nconstants\n\nC1, C2 = symbols('C1,C2')\nsol = sol.subs(constants)\nsol", "Systems of equations", "g1, g2 = symbols('g1 g2', cls=Function)\n\neq1 = g1(x).diff(x) - 2*g1(x) - g2(x)\neq2 = g2(x).diff(x) - 3*g1(x) - 4*g2(x)\n\ndsolve((eq1, eq2), fun=(g1(x), g2(x)))", "Vector field", "import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline", "The **Lotka–Volterra equations**, also known as the **predator–prey equations**, are a pair of first-order, nonlinear differential equations frequently used to describe the dynamics of biological systems in which two species interact, one as a predator and the other as prey. 
The populations change through time according to the pair of equations:\n\\begin{align}\n\\frac{dx}{dt} & = \\alpha x - \\beta x y \\\\[6pt]\n\\frac{dy}{dt} & = \\delta x y - \\gamma y\n\\end{align}\nwhere\n\n$x$ is the number of prey (for example, rabbits);\n$y$ is the number of some predator (for example, foxes);\n$\\tfrac{dy}{dt}$ and $\\tfrac{dx}{dt}$ represent the growth rates of the two populations over time;\n$t$ represents time; and\n$\\alpha, \\beta, \\gamma, \\delta$ are positive real parameters describing the interaction of the two species.", "y, x = np.mgrid[0:2.5:100j, 0:4:100j]\n\nalpha = 2/3\nbeta = 4/3\ngamma = 1\ndelta = 1\ndxdt = alpha*x - beta*x*y\ndydt = delta*x*y - gamma*y\nspeed = np.sqrt(dxdt**2 + dydt**2)", "In the first case we just plot the velocity field.", "fig0 = plt.figure(figsize=(8,5))\nstrm = plt.streamplot(x, y, dxdt, dydt, linewidth=2)\nplt.xlabel(\"Prey\")\nplt.ylabel(\"Predators\")\nplt.xlim(0,4)\nplt.ylim(0,2.5)", "In this example we change the color of the lines according to the magnitude of the\nspeed at each point.", "fig1 = plt.figure(figsize=(8,5))\nstrm = plt.streamplot(x, y, dxdt, dydt, color=speed, linewidth=2, cmap=\"summer\")\nplt.colorbar(strm.lines)\nplt.xlabel(\"Prey\")\nplt.ylabel(\"Predators\")\nplt.xlim(0,4)\nplt.ylim(0,2.5)", "In this case the densities in $x$ and $y$ directions are different.", "fig2 = plt.figure(figsize=(8,5))\nplt.streamplot(x, y, dxdt, dydt, density=[0.5, 1])\nplt.xlabel(\"Prey\")\nplt.ylabel(\"Predators\")\nplt.xlim(0,4)\nplt.ylim(0,2.5)", "In this last case we vary the width of the lines according to the magnitude\nof the speed at each point.", "fig3 = plt.figure(figsize=(8,5))\nlw = 5*speed / speed.max()\nplt.streamplot(x, y, dxdt, dydt, density=0.6, color='k', linewidth=lw)\nplt.xlabel(\"Prey\")\nplt.ylabel(\"Predators\")\nplt.xlim(0,4)\nplt.ylim(0,2.5)", "Power series approximation", "def picard_iteration(f, t, y0, n):\n y = y0\n for cont in range(n):\n y = y0 + f(t, y, n).integrate((t, 0, t))\n\n return 
y\n\ndef fun(t, y, n):\n return Matrix([y[1], t*y[0]])\n\nx0, x1 = symbols(\"x0 x1\")\ny = Matrix([airyai(0), airyaiprime(0)])\ny_approx = picard_iteration(fun, t, y, 10)\n\np0 = plot(airyai(t), (t, -4, 4), line_color=\"black\");\np1 = plot(N(y_approx[0]), (t, -4, 4));\np0.extend(p1);\np0.show()", "References\n\nSymPy Development Team (2016). Sympy Tutorial: Matrices\nIvan Savov (2016). Taming math and physics using SymPy\n\nThe following cell change the style of the notebook.", "from IPython.core.display import HTML\ndef css_styling():\n styles = open('./styles/custom_barba.css', 'r').read()\n return HTML(styles)\ncss_styling()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
magenta/ddsp
ddsp/colab/tutorials/3_training.ipynb
apache-2.0
[ "<a href=\"https://colab.research.google.com/github/magenta/ddsp/blob/main/ddsp/colab/tutorials/3_training.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nCopyright 2021 Google LLC.\nLicensed under the Apache License, Version 2.0 (the \"License\");", "# Copyright 2021 Google LLC. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================", "DDSP Training\nThis notebook demonstrates the libraries in https://github.com/magenta/ddsp/tree/master/ddsp/training. It is a simple example, overfitting a single audio sample, for educational purposes. 
\nFor a full training pipeline please use ddsp/training/ddsp_run.py as in the train_autoencoder.ipynb.", "# Install and import dependencies\n%tensorflow_version 2.x\n!pip install -qU ddsp\n\n# Ignore a bunch of deprecation warnings\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n\nimport time\n\nimport ddsp\nfrom ddsp.training import (data, decoders, encoders, models, preprocessing, \n train_util, trainers)\nfrom ddsp.colab.colab_utils import play, specplot, DEFAULT_SAMPLE_RATE\nimport gin\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport tensorflow.compat.v2 as tf\nimport tensorflow_datasets as tfds\n\nsample_rate = DEFAULT_SAMPLE_RATE # 16000", "Get a batch of data", "# Get a single example from NSynth.\n# Takes a few seconds to load from GCS.\ndata_provider = data.NSynthTfds(split='test')\ndataset = data_provider.get_batch(batch_size=1, shuffle=False).take(1).repeat()\nbatch = next(iter(dataset))\naudio = batch['audio']\nn_samples = audio.shape[1]\n\nspecplot(audio)\nplay(audio)", "Get a distribution strategy", "strategy = train_util.get_strategy()", "Get model and trainer\npython", "TIME_STEPS = 1000\n\n# Create Neural Networks.\npreprocessor = preprocessing.F0LoudnessPreprocessor(time_steps=TIME_STEPS)\n\ndecoder = decoders.RnnFcDecoder(rnn_channels = 256,\n rnn_type = 'gru',\n ch = 256,\n layers_per_stack = 1,\n input_keys = ('ld_scaled', 'f0_scaled'),\n output_splits = (('amps', 1),\n ('harmonic_distribution', 45),\n ('noise_magnitudes', 45)))\n\n# Create Processors.\nharmonic = ddsp.synths.Harmonic(n_samples=n_samples, \n sample_rate=sample_rate,\n name='harmonic')\n\nnoise = ddsp.synths.FilteredNoise(window_size=0,\n initial_bias=-10.0,\n name='noise')\nadd = ddsp.processors.Add(name='add')\n\n# Create ProcessorGroup.\ndag = [(harmonic, ['amps', 'harmonic_distribution', 'f0_hz']),\n (noise, ['noise_magnitudes']),\n (add, ['noise/signal', 'harmonic/signal'])]\n\nprocessor_group = ddsp.processors.ProcessorGroup(dag=dag,\n 
name='processor_group')\n\n\n# Loss_functions\nspectral_loss = ddsp.losses.SpectralLoss(loss_type='L1',\n mag_weight=1.0,\n logmag_weight=1.0)\n\nwith strategy.scope():\n # Put it together in a model.\n model = models.Autoencoder(preprocessor=preprocessor,\n encoder=None,\n decoder=decoder,\n processor_group=processor_group,\n losses=[spectral_loss])\n trainer = trainers.Trainer(model, strategy, learning_rate=1e-3)", "or gin", "gin_string = \"\"\"\nimport ddsp\nimport ddsp.training\n\n# Preprocessor\nmodels.Autoencoder.preprocessor = @preprocessing.F0LoudnessPreprocessor()\npreprocessing.F0LoudnessPreprocessor.time_steps = 1000\n\n\n# Encoder\nmodels.Autoencoder.encoder = None\n\n# Decoder\nmodels.Autoencoder.decoder = @decoders.RnnFcDecoder()\ndecoders.RnnFcDecoder.rnn_channels = 256\ndecoders.RnnFcDecoder.rnn_type = 'gru'\ndecoders.RnnFcDecoder.ch = 256\ndecoders.RnnFcDecoder.layers_per_stack = 1\ndecoders.RnnFcDecoder.input_keys = ('ld_scaled', 'f0_scaled')\ndecoders.RnnFcDecoder.output_splits = (('amps', 1),\n ('harmonic_distribution', 20),\n ('noise_magnitudes', 20))\n\n# ProcessorGroup\nmodels.Autoencoder.processor_group = @processors.ProcessorGroup()\n\nprocessors.ProcessorGroup.dag = [\n (@harmonic/synths.Harmonic(),\n ['amps', 'harmonic_distribution', 'f0_hz']),\n (@noise/synths.FilteredNoise(),\n ['noise_magnitudes']),\n (@add/processors.Add(),\n ['noise/signal', 'harmonic/signal']),\n]\n\n# Harmonic Synthesizer\nharmonic/synths.Harmonic.name = 'harmonic'\nharmonic/synths.Harmonic.n_samples = 64000\nharmonic/synths.Harmonic.scale_fn = @core.exp_sigmoid\n\n# Filtered Noise Synthesizer\nnoise/synths.FilteredNoise.name = 'noise'\nnoise/synths.FilteredNoise.n_samples = 64000\nnoise/synths.FilteredNoise.window_size = 0\nnoise/synths.FilteredNoise.scale_fn = @core.exp_sigmoid\nnoise/synths.FilteredNoise.initial_bias = -10.0\n\n# Add\nadd/processors.Add.name = 'add'\n\nmodels.Autoencoder.losses = [\n @losses.SpectralLoss(),\n]\nlosses.SpectralLoss.loss_type = 
'L1'\nlosses.SpectralLoss.mag_weight = 1.0\nlosses.SpectralLoss.logmag_weight = 1.0\n\"\"\"\n\nwith gin.unlock_config():\n gin.parse_config(gin_string)\n\nwith strategy.scope():\n # Autoencoder arguments are filled by gin.\n model = ddsp.training.models.Autoencoder()\n trainer = trainers.Trainer(model, strategy, learning_rate=1e-4)", "Train\nBuild model", "# Build model, easiest to just run forward pass.\ndataset = trainer.distribute_dataset(dataset)\ntrainer.build(next(iter(dataset)))", "Train Loop", "dataset_iter = iter(dataset)\n\nfor i in range(300):\n losses = trainer.train_step(dataset_iter)\n res_str = 'step: {}\\t'.format(i)\n for k, v in losses.items():\n res_str += '{}: {:.2f}\\t'.format(k, v)\n print(res_str)", "Analyze results", "# Run a batch of predictions.\nstart_time = time.time()\ncontrols = model(next(dataset_iter))\naudio_gen = model.get_audio_from_outputs(controls)\nprint('Prediction took %.1f seconds' % (time.time() - start_time))\n\nprint('Original Audio')\nplay(audio)\nprint('Resynthesized Audio')\nplay(audio_gen)\nprint('Filtered Noise Audio')\naudio_noise = controls['noise']['signal']\nplay(audio_noise)\n\nspecplot(audio)\nspecplot(audio_gen)\nspecplot(audio_noise)\n\nbatch_idx = 0\nget = lambda key: ddsp.core.nested_lookup(key, controls)[batch_idx]\n\namps = get('harmonic/controls/amplitudes')\nharmonic_distribution = get('harmonic/controls/harmonic_distribution')\nnoise_magnitudes = get('noise/controls/magnitudes')\nf0_hz = get('f0_hz')\nloudness = get('loudness_db')\n\naudio_noise = get('noise/signal')\n\nf, ax = plt.subplots(1, 2, figsize=(14, 4))\nf.suptitle('Input Features', fontsize=16)\nax[0].plot(loudness)\nax[0].set_ylabel('Loudness')\nax[1].plot(f0_hz)\nax[1].set_ylabel('F0_Hz')\n\nf, ax = plt.subplots(1, 2, figsize=(14, 4))\nf.suptitle('Synth Params', fontsize=16)\nax[0].semilogy(amps)\nax[0].set_ylabel('Amps')\nax[0].set_ylim(1e-5, 2)\n# 
ax[0].semilogy(harmonic_distribution)\nax[1].matshow(np.rot90(np.log10(harmonic_distribution + 1e-6)),\n cmap=plt.cm.magma, \n aspect='auto')\nax[1].set_ylabel('Harmonic Distribution')\nax[1].set_xticks([])\n_ = ax[1].set_yticks([])\n\nf, ax = plt.subplots(1, 1, figsize=(7, 4))\n# f.suptitle('Filtered Noise Params', fontsize=16)\nax.matshow(np.rot90(np.log10(noise_magnitudes + 1e-6)), \n cmap=plt.cm.magma, \n aspect='auto')\nax.set_ylabel('Filtered Noise Magnitudes')\nax.set_xticks([])\n_ = ax.set_yticks([])\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
luofan18/deep-learning
batch-norm/Batch_Normalization_Lesson.ipynb
mit
[ "Batch Normalization – Lesson\n\nWhat is it?\nWhat are it's benefits?\nHow do we add it to a network?\nLet's see it work!\nWhat are you hiding?\n\nWhat is Batch Normalization?<a id='theory'></a>\nBatch normalization was introduced in Sergey Ioffe's and Christian Szegedy's 2015 paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. The idea is that, instead of just normalizing the inputs to the network, we normalize the inputs to layers within the network. It's called \"batch\" normalization because during training, we normalize each layer's inputs by using the mean and variance of the values in the current mini-batch.\nWhy might this help? Well, we know that normalizing the inputs to a network helps the network learn. But a network is a series of layers, where the output of one layer becomes the input to another. That means we can think of any layer in a neural network as the first layer of a smaller network.\nFor example, imagine a 3 layer network. Instead of just thinking of it as a single network with inputs, layers, and outputs, think of the output of layer 1 as the input to a two layer network. This two layer network would consist of layers 2 and 3 in our original network. \nLikewise, the output of layer 2 can be thought of as the input to a single layer network, consisting only of layer 3.\nWhen you think of it like that - as a series of neural networks feeding into each other - then it's easy to imagine how normalizing the inputs to each layer would help. It's just like normalizing the inputs to any other neural network, but you're doing it at every layer (sub-network).\nBeyond the intuitive reasons, there are good mathematical reasons why it helps the network learn better, too. It helps combat what the authors call internal covariate shift. This discussion is best handled in the paper and in Deep Learning a book you can read online written by Ian Goodfellow, Yoshua Bengio, and Aaron Courville. 
Specifically, check out the batch normalization section of Chapter 8: Optimization for Training Deep Models.\nBenefits of Batch Normalization<a id=\"benefits\"></a>\nBatch normalization optimizes network training. It has been shown to have several benefits:\n1. Networks train faster – Each training iteration will actually be slower because of the extra calculations during the forward pass and the additional hyperparameters to train during back propagation. However, it should converge much more quickly, so training should be faster overall. \n2. Allows higher learning rates – Gradient descent usually requires small learning rates for the network to converge. And as networks get deeper, their gradients get smaller during back propagation so they require even more iterations. Using batch normalization allows us to use much higher learning rates, which further increases the speed at which networks train. \n3. Makes weights easier to initialize – Weight initialization can be difficult, and it's even more difficult when creating deeper networks. Batch normalization seems to allow us to be much less careful about choosing our initial starting weights.\n4. Makes more activation functions viable – Some activation functions do not work well in some situations. Sigmoids lose their gradient pretty quickly, which means they can't be used in deep networks. And ReLUs often die out during training, where they stop learning completely, so we need to be careful about the range of values fed into them. Because batch normalization regulates the values going into each activation function, non-linearities that don't seem to work well in deep networks actually become viable again.\n5. Simplifies the creation of deeper networks – Because of the first 4 items listed above, it is easier to build and faster to train deeper neural networks when using batch normalization. And it's been shown that deeper networks generally produce better results, so that's great.\n6. 
Provides a bit of regularization – Batch normalization adds a little noise to your network. In some cases, such as in Inception modules, batch normalization has been shown to work as well as dropout. But in general, consider batch normalization as a bit of extra regularization, possibly allowing you to reduce some of the dropout you might add to a network. \n7. May give better results overall – Some tests seem to show batch normalization actually improves the training results. However, it's really an optimization to help train faster, so you shouldn't think of it as a way to make your network better. But since it lets you train networks faster, that means you can iterate over more designs more quickly. It also lets you build deeper networks, which are usually better. So when you factor in everything, you're probably going to end up with better results if you build your networks with batch normalization.\nBatch Normalization in TensorFlow<a id=\"implementation_1\"></a>\nThis section of the notebook shows you one way to add batch normalization to a neural network built in TensorFlow. \nThe following cell imports the packages we need in the notebook and loads the MNIST dataset to use in our experiments. However, the tensorflow package contains all the code you'll actually need for batch normalization.", "# Import necessary packages\nimport tensorflow as tf\nimport tqdm\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# Import MNIST data so we have something for our experiments\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets(\"MNIST_data/\", one_hot=True)", "Neural network classes for testing\nThe following class, NeuralNet, allows us to create identical neural networks with and without batch normalization. The code is heavily documented, but there is also some additional discussion later. 
You do not need to read through it all before going through the rest of the notebook, but the comments within the code blocks may answer some of your questions.\nAbout the code:\n\nThis class is not meant to represent TensorFlow best practices – the design choices made here are to support the discussion related to batch normalization.\nIt's also important to note that we use the well-known MNIST data for these examples, but the networks we create are not meant to be good for performing handwritten character recognition. We chose this network architecture because it is similar to the one used in the original paper, which is complex enough to demonstrate some of the benefits of batch normalization while still being fast to train.", "class NeuralNet:\n def __init__(self, initial_weights, activation_fn, use_batch_norm):\n \"\"\"\n Initializes this object, creating a TensorFlow graph using the given parameters.\n \n :param initial_weights: list of NumPy arrays or Tensors\n Initial values for the weights for every layer in the network. We pass these in\n so we can create multiple networks with the same starting weights to eliminate\n training differences caused by random initialization differences.\n The number of items in the list defines the number of layers in the network,\n and the shapes of the items in the list define the number of nodes in each layer.\n e.g. Passing in 3 matrices of shape (784, 256), (256, 100), and (100, 10) would \n create a network with 784 inputs going into a hidden layer with 256 nodes,\n followed by a hidden layer with 100 nodes, followed by an output layer with 10 nodes.\n :param activation_fn: Callable\n The function used for the output of each hidden layer. The network will use the same\n activation function on every hidden layer and no activation function on the output layer.\n e.g. 
Pass tf.nn.relu to use ReLU activations on your hidden layers.\n :param use_batch_norm: bool\n Pass True to create a network that uses batch normalization; False otherwise\n Note: this network will not use batch normalization on layers that do not have an\n activation function.\n \"\"\"\n # Keep track of whether or not this network uses batch normalization.\n self.use_batch_norm = use_batch_norm\n self.name = \"With Batch Norm\" if use_batch_norm else \"Without Batch Norm\"\n\n # Batch normalization needs to do different calculations during training and inference,\n # so we use this placeholder to tell the graph which behavior to use.\n self.is_training = tf.placeholder(tf.bool, name=\"is_training\")\n\n # This list is just for keeping track of data we want to plot later.\n # It doesn't actually have anything to do with neural nets or batch normalization.\n self.training_accuracies = []\n\n # Create the network graph, but it will not actually have any real values until after you\n # call train or test\n self.build_network(initial_weights, activation_fn)\n \n def build_network(self, initial_weights, activation_fn):\n \"\"\"\n Build the graph. The graph still needs to be trained via the `train` method.\n \n :param initial_weights: list of NumPy arrays or Tensors\n See __init__ for description. \n :param activation_fn: Callable\n See __init__ for description. \n \"\"\"\n self.input_layer = tf.placeholder(tf.float32, [None, initial_weights[0].shape[0]])\n layer_in = self.input_layer\n for weights in initial_weights[:-1]:\n layer_in = self.fully_connected(layer_in, weights, activation_fn) \n self.output_layer = self.fully_connected(layer_in, initial_weights[-1])\n \n def fully_connected(self, layer_in, initial_weights, activation_fn=None):\n \"\"\"\n Creates a standard, fully connected layer. Its number of inputs and outputs will be\n defined by the shape of `initial_weights`, and its starting weight values will be\n taken directly from that same parameter. 
If `self.use_batch_norm` is True, this\n        layer will include batch normalization, otherwise it will not. \n        \n        :param layer_in: Tensor\n            The Tensor that feeds into this layer. It's either the input to the network or the output\n            of a previous layer.\n        :param initial_weights: NumPy array or Tensor\n            Initial values for this layer's weights. The shape defines the number of nodes in the layer.\n            e.g. Passing in a matrix of shape (784, 256) would create a layer with 784 inputs and 256 \n            outputs. \n        :param activation_fn: Callable or None (default None)\n            The non-linearity used for the output of the layer. If None, this layer will not include \n            batch normalization, regardless of the value of `self.use_batch_norm`. \n            e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.\n        \"\"\"\n        # Since this class supports both options, only use batch normalization when\n        # requested. However, do not use it on the final layer, which we identify\n        # by its lack of an activation function.\n        if self.use_batch_norm and activation_fn:\n            # Batch normalization uses weights as usual, but does NOT add a bias term. This is because \n            # its calculations include gamma and beta variables that make the bias term unnecessary.\n            # (See later in the notebook for more details.)\n            weights = tf.Variable(initial_weights)\n            linear_output = tf.matmul(layer_in, weights)\n\n            # Apply batch normalization to the linear combination of the inputs and weights\n            batch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training)\n\n            # Now apply the activation function, *after* the normalization.\n            return activation_fn(batch_normalized_output)\n        else:\n            # When not using batch normalization, create a standard layer that multiplies\n            # the inputs and weights, adds a bias, and optionally passes the result \n            # through an activation function. 
\n weights = tf.Variable(initial_weights)\n biases = tf.Variable(tf.zeros([initial_weights.shape[-1]]))\n linear_output = tf.add(tf.matmul(layer_in, weights), biases)\n return linear_output if not activation_fn else activation_fn(linear_output)\n\n def train(self, session, learning_rate, training_batches, batches_per_sample, save_model_as=None):\n \"\"\"\n Trains the model on the MNIST training dataset.\n \n :param session: Session\n Used to run training graph operations.\n :param learning_rate: float\n Learning rate used during gradient descent.\n :param training_batches: int\n Number of batches to train.\n :param batches_per_sample: int\n How many batches to train before sampling the validation accuracy.\n :param save_model_as: string or None (default None)\n Name to use if you want to save the trained model.\n \"\"\"\n # This placeholder will store the target labels for each mini batch\n labels = tf.placeholder(tf.float32, [None, 10])\n\n # Define loss and optimizer\n cross_entropy = tf.reduce_mean(\n tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=self.output_layer))\n \n # Define operations for testing\n correct_prediction = tf.equal(tf.argmax(self.output_layer, 1), tf.argmax(labels, 1))\n accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n\n if self.use_batch_norm:\n # If we don't include the update ops as dependencies on the train step, the \n # tf.layers.batch_normalization layers won't update their population statistics,\n # which will cause the model to fail at inference time\n with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):\n train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)\n else:\n train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)\n \n # Train for the appropriate number of batches. 
(tqdm is only for a nice timing display)\n        for i in tqdm.tqdm(range(training_batches)):\n            # We use batches of 60 just because the original paper did. You can use any size batch you like.\n            batch_xs, batch_ys = mnist.train.next_batch(60)\n            session.run(train_step, feed_dict={self.input_layer: batch_xs, \n                                               labels: batch_ys, \n                                               self.is_training: True})\n        \n            # Periodically test accuracy against the 5k validation images and store it for plotting later.\n            if i % batches_per_sample == 0:\n                test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.validation.images,\n                                                                 labels: mnist.validation.labels,\n                                                                 self.is_training: False})\n                self.training_accuracies.append(test_accuracy)\n\n        # After training, report accuracy against test data\n        test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.validation.images,\n                                                         labels: mnist.validation.labels,\n                                                         self.is_training: False})\n        print('{}: After training, final accuracy on validation set = {}'.format(self.name, test_accuracy))\n\n        # If you want to use this model later for inference instead of having to retrain it,\n        # just construct it with the same parameters and then pass this file to the 'test' function\n        if save_model_as:\n            tf.train.Saver().save(session, save_model_as)\n\n    def test(self, session, test_training_accuracy=False, include_individual_predictions=False, restore_from=None):\n        \"\"\"\n        Tests a trained model on the MNIST testing dataset.\n\n        :param session: Session\n            Used to run the testing graph operations.\n        :param test_training_accuracy: bool (default False)\n            If True, perform inference with batch normalization using batch mean and variance;\n            if False, perform inference with batch normalization using estimated population mean and variance.\n            Note: in real life, *always* perform inference using the population mean and variance.\n                  This parameter exists just to support demonstrating what happens if you don't.\n        :param include_individual_predictions: bool (default False)\n            This function 
always performs an accuracy test against the entire test set. But if this parameter\n is True, it performs an extra test, doing 200 predictions one at a time, and displays the results\n and accuracy.\n :param restore_from: string or None (default None)\n Name of a saved model if you want to test with previously saved weights.\n \"\"\"\n # This placeholder will store the true labels for each mini batch\n labels = tf.placeholder(tf.float32, [None, 10])\n\n # Define operations for testing\n correct_prediction = tf.equal(tf.argmax(self.output_layer, 1), tf.argmax(labels, 1))\n accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n\n # If provided, restore from a previously saved model\n if restore_from:\n tf.train.Saver().restore(session, restore_from)\n\n # Test against all of the MNIST test data\n test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.test.images,\n labels: mnist.test.labels,\n self.is_training: test_training_accuracy})\n print('-'*75)\n print('{}: Accuracy on full test set = {}'.format(self.name, test_accuracy))\n\n # If requested, perform tests predicting individual values rather than batches\n if include_individual_predictions:\n predictions = []\n correct = 0\n\n # Do 200 predictions, 1 at a time\n for i in range(200):\n # This is a normal prediction using an individual test case. 
However, notice\n                # we pass `test_training_accuracy` to `feed_dict` as the value for `self.is_training`.\n                # Remember that will tell it whether it should use the batch mean & variance or\n                # the population estimates that were calculated while training the model.\n                pred, corr = session.run([tf.argmax(self.output_layer, 1), accuracy],\n                                         feed_dict={self.input_layer: [mnist.test.images[i]],\n                                                    labels: [mnist.test.labels[i]],\n                                                    self.is_training: test_training_accuracy})\n                correct += corr\n\n                predictions.append(pred[0])\n\n            print(\"200 Predictions:\", predictions)\n            print(\"Accuracy on 200 samples:\", correct/200)\n", "There are quite a few comments in the code, so those should answer most of your questions. However, let's take a look at the most important lines.\nWe add batch normalization to layers inside the fully_connected function. Here are some important points about that code:\n1. Layers with batch normalization do not include a bias term.\n2. We use TensorFlow's tf.layers.batch_normalization function to handle the math. (We show lower-level ways to do this later in the notebook.)\n3. We tell tf.layers.batch_normalization whether or not the network is training. This is an important step we'll talk about later.\n4. We add the normalization before calling the activation function.\nIn addition to that code, the training step is wrapped in the following with statement:\npython\nwith tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):\nThis line actually works in conjunction with the training parameter we pass to tf.layers.batch_normalization. 
Without it, TensorFlow's batch normalization layer will not operate correctly during inference.\nFinally, whenever we train the network or perform inference, we use the feed_dict to set self.is_training to True or False, respectively, like in the following line:\npython\nsession.run(train_step, feed_dict={self.input_layer: batch_xs, \n                                   labels: batch_ys, \n                                   self.is_training: True})\nWe'll go into more details later, but next we want to show some experiments that use this code and test networks with and without batch normalization.\nBatch Normalization Demos<a id='demos'></a>\nThis section of the notebook trains various networks with and without batch normalization to demonstrate some of the benefits mentioned earlier. \nWe'd like to thank the author of this blog post Implementing Batch Normalization in TensorFlow. That post provided the idea of - and some of the code for - plotting the differences in accuracy during training, along with the idea for comparing multiple networks using the same initial weights.\nCode to support testing\nThe following two functions support the demos we run in the notebook. \nThe first function, plot_training_accuracies, simply plots the values found in the training_accuracies lists of the NeuralNet objects passed to it. If you look at the train function in NeuralNet, you'll see that while it's training the network, it periodically measures validation accuracy and stores the results in that list. It does that just to support these plots.\nThe second function, train_and_test, creates two neural nets - one with and one without batch normalization. It then trains them both and tests them, calling plot_training_accuracies to plot how their accuracies changed over the course of training. The really important thing about this function is that it initializes the starting weights for the networks outside of the networks and then passes them in. 
This lets it train both networks from the exact same starting weights, which eliminates performance differences that might result from (un)lucky initial weights.", "def plot_training_accuracies(*args, **kwargs):\n    \"\"\"\n    Displays a plot of the accuracies calculated during training to demonstrate\n    how many iterations it took for the model(s) to converge.\n    \n    :param args: One or more NeuralNet objects\n        You can supply any number of NeuralNet objects as unnamed arguments \n        and this will display their training accuracies. Be sure to call `train` on\n        the NeuralNets before calling this function.\n    :param kwargs: \n        You can supply any named parameters here, but `batches_per_sample` is the only\n        one we look for. It should match the `batches_per_sample` value you passed\n        to the `train` function.\n    \"\"\"\n    fig, ax = plt.subplots()\n\n    batches_per_sample = kwargs['batches_per_sample']\n    \n    for nn in args:\n        ax.plot(range(0,len(nn.training_accuracies)*batches_per_sample,batches_per_sample),\n                nn.training_accuracies, label=nn.name)\n    ax.set_xlabel('Training steps')\n    ax.set_ylabel('Accuracy')\n    ax.set_title('Validation Accuracy During Training')\n    ax.legend(loc=4)\n    ax.set_ylim([0,1])\n    plt.yticks(np.arange(0, 1.1, 0.1))\n    plt.grid(True)\n    plt.show()\n\ndef train_and_test(use_bad_weights, learning_rate, activation_fn, training_batches=50000, batches_per_sample=500):\n    \"\"\"\n    Creates two networks, one with and one without batch normalization, then trains them\n    with identical starting weights, layers, batches, etc. Finally tests and plots their accuracies.\n    \n    :param use_bad_weights: bool\n        If True, initialize the weights of both networks to wildly inappropriate weights;\n        if False, use reasonable starting weights.\n    :param learning_rate: float\n        Learning rate used during gradient descent.\n    :param activation_fn: Callable\n        The function used for the output of each hidden layer. 
The network will use the same\n        activation function on every hidden layer and no activation function on the output layer.\n        e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.\n    :param training_batches: (default 50000)\n        Number of batches to train.\n    :param batches_per_sample: (default 500)\n        How many batches to train before sampling the validation accuracy.\n    \"\"\"\n    # Use identical starting weights for each network to eliminate differences in\n    # weight initialization as a cause for differences seen in training performance\n    #\n    # Note: The networks will use these weights to define the number of and shapes of\n    #       their layers. The original batch normalization paper used 3 hidden layers\n    #       with 100 nodes in each, followed by a 10 node output layer. These values\n    #       build such a network, but feel free to experiment with different choices.\n    #       However, the input size should always be 784 and the final output should be 10.\n    if use_bad_weights:\n        # These weights should be horrible because they have such a large standard deviation\n        weights = [np.random.normal(size=(784,100), scale=5.0).astype(np.float32),\n                   np.random.normal(size=(100,100), scale=5.0).astype(np.float32),\n                   np.random.normal(size=(100,100), scale=5.0).astype(np.float32),\n                   np.random.normal(size=(100,10), scale=5.0).astype(np.float32)\n                  ]\n    else:\n        # These weights should be good because they have such a small standard deviation\n        weights = [np.random.normal(size=(784,100), scale=0.05).astype(np.float32),\n                   np.random.normal(size=(100,100), scale=0.05).astype(np.float32),\n                   np.random.normal(size=(100,100), scale=0.05).astype(np.float32),\n                   np.random.normal(size=(100,10), scale=0.05).astype(np.float32)\n                  ]\n\n    # Just to make sure TensorFlow's default graph is empty before we start another\n    # test, because we don't bother using different graphs or scoping and naming \n    # elements carefully in this sample code.\n    tf.reset_default_graph()\n\n    # build two versions of same network, 1 without 
and 1 with batch normalization\n    nn = NeuralNet(weights, activation_fn, False)\n    bn = NeuralNet(weights, activation_fn, True)\n    \n    # train and test the two models\n    with tf.Session() as sess:\n        tf.global_variables_initializer().run()\n\n        nn.train(sess, learning_rate, training_batches, batches_per_sample)\n        bn.train(sess, learning_rate, training_batches, batches_per_sample)\n        \n        nn.test(sess)\n        bn.test(sess)\n    \n    # Display a graph of how validation accuracies changed during training\n    # so we can compare how the models trained and when they converged\n    plot_training_accuracies(nn, bn, batches_per_sample=batches_per_sample)\n", "Comparisons between identical networks, with and without batch normalization\nThe next series of cells train networks with various settings to show the differences with and without batch normalization. They are meant to clearly demonstrate the effects of batch normalization. We include a deeper discussion of batch normalization later in the notebook.\nThe following creates two networks using a ReLU activation function, a learning rate of 0.01, and reasonable starting weights.", "train_and_test(False, 0.01, tf.nn.relu)", "As expected, both networks train well and eventually reach similar test accuracies. However, notice that the model with batch normalization converges slightly faster than the other network, reaching accuracies over 90% almost immediately and nearing its max accuracy in 10 or 15 thousand iterations. The other network takes about 3 thousand iterations to reach 90% and doesn't near its best accuracy until 30 thousand or more iterations.\nIf you look at the raw speed, you can see that without batch normalization we were computing over 1100 batches per second, whereas with batch normalization that goes down to just over 500. However, batch normalization allows us to perform fewer iterations and converge in less time overall. 
(We only trained for 50 thousand batches here so we could plot the comparison.)\nThe following creates two networks with the same hyperparameters used in the previous example, but only trains for 2000 iterations.", "train_and_test(False, 0.01, tf.nn.relu, 2000, 50)", "As you can see, using batch normalization produces a model with over 95% accuracy in only 2000 batches, and it was above 90% at somewhere around 500 batches. Without batch normalization, the model takes 1750 iterations just to hit 80% – the network with batch normalization hits that mark after around 200 iterations! (Note: if you run the code yourself, you'll see slightly different results each time because the starting weights - while the same for each model - are different for each run.)\nIn the above example, you should also notice that the networks trained fewer batches per second than what you saw in the previous example. That's because much of the time we're tracking is actually spent periodically performing inference to collect data for the plots. In this example we perform that inference every 50 batches instead of every 500, so generating the plot for this example requires 10 times the overhead for the same 2000 iterations.\nThe following creates two networks using a sigmoid activation function, a learning rate of 0.01, and reasonable starting weights.", "train_and_test(False, 0.01, tf.nn.sigmoid)", "With the number of layers we're using and this small learning rate, using a sigmoid activation function takes a long time to start learning. It eventually starts making progress, but it took over 45 thousand batches just to get over 80% accuracy. Using batch normalization gets to 90% in around one thousand batches. \nThe following creates two networks using a ReLU activation function, a learning rate of 1, and reasonable starting weights.", "train_and_test(False, 1, tf.nn.relu)", "Now we're using ReLUs again, but with a larger learning rate. 
The plot shows how training started out pretty normally, with the network with batch normalization starting out faster than the other. But the higher learning rate bounces the accuracy around a bit more, and at some point the accuracy in the network without batch normalization just completely crashes. It's likely that too many ReLUs died off at this point because of the high learning rate.\nThe next cell shows the same test again. The network with batch normalization performs the same way, and the other suffers from the same problem again, but it manages to train longer before it happens.", "train_and_test(False, 1, tf.nn.relu)", "In both of the previous examples, the network with batch normalization manages to get over 98% accuracy, and get near that result almost immediately. The higher learning rate allows the network to train extremely fast.\nThe following creates two networks using a sigmoid activation function, a learning rate of 1, and reasonable starting weights.", "train_and_test(False, 1, tf.nn.sigmoid)", "In this example, we switched to a sigmoid activation function. It appears to handle the higher learning rate well, with both networks achieving high accuracy.\nThe cell below shows a similar pair of networks trained for only 2000 iterations.", "train_and_test(False, 1, tf.nn.sigmoid, 2000, 50)", "As you can see, even though these parameters work well for both networks, the one with batch normalization gets over 90% in 400 or so batches, whereas the other takes over 1700. When training larger networks, these sorts of differences become more pronounced.\nThe following creates two networks using a ReLU activation function, a learning rate of 2, and reasonable starting weights.", "train_and_test(False, 2, tf.nn.relu)", "With this very large learning rate, the network with batch normalization trains fine and almost immediately manages 98% accuracy. 
However, the network without normalization doesn't learn at all.\nThe following creates two networks using a sigmoid activation function, a learning rate of 2, and reasonable starting weights.", "train_and_test(False, 2, tf.nn.sigmoid)", "Once again, using a sigmoid activation function with the larger learning rate works well both with and without batch normalization.\nHowever, look at the plot below where we train models with the same parameters but only 2000 iterations. As usual, batch normalization lets it train faster.", "train_and_test(False, 2, tf.nn.sigmoid, 2000, 50)", "In the rest of the examples, we use really bad starting weights. That is, normally we would use very small values close to zero. However, in these examples we choose random values with a standard deviation of 5. If you were really training a neural network, you would not want to do this. But these examples demonstrate how batch normalization makes your network much more resilient. \nThe following creates two networks using a ReLU activation function, a learning rate of 0.01, and bad starting weights.", "train_and_test(True, 0.01, tf.nn.relu)", "As the plot shows, without batch normalization the network never learns anything at all. But with batch normalization, it actually learns pretty well and gets to almost 80% accuracy. The starting weights obviously hurt the network, but you can see how well batch normalization does in overcoming them. \nThe following creates two networks using a sigmoid activation function, a learning rate of 0.01, and bad starting weights.", "train_and_test(True, 0.01, tf.nn.sigmoid)", "Using a sigmoid activation function works better than the ReLU in the previous example, but without batch normalization it would take a tremendously long time to train the network, if it ever trained at all. 
\nThe following creates two networks using a ReLU activation function, a learning rate of 1, and bad starting weights.<a id=\"successful_example_lr_1\"></a>", "train_and_test(True, 1, tf.nn.relu)", "The higher learning rate used here allows the network with batch normalization to surpass 90% in about 30 thousand batches. The network without it never gets anywhere.\nThe following creates two networks using a sigmoid activation function, a learning rate of 1, and bad starting weights.", "train_and_test(True, 1, tf.nn.sigmoid)", "Using sigmoid works better than ReLUs for this higher learning rate. However, you can see that without batch normalization, the network takes a long time to train, bounces around a lot, and spends a long time stuck at 90%. The network with batch normalization trains much more quickly, seems to be more stable, and achieves a higher accuracy.\nThe following creates two networks using a ReLU activation function, a learning rate of 2, and bad starting weights.<a id=\"successful_example_lr_2\"></a>", "train_and_test(True, 2, tf.nn.relu)", "We've already seen that ReLUs do not do as well as sigmoids with higher learning rates, and here we are using an extremely high rate. As expected, without batch normalization the network doesn't learn at all. But with batch normalization, it eventually achieves 90% accuracy. Notice, though, how its accuracy bounces around wildly during training - that's because the learning rate is really much too high, so the fact that this worked at all is a bit of luck.\nThe following creates two networks using a sigmoid activation function, a learning rate of 2, and bad starting weights.", "train_and_test(True, 2, tf.nn.sigmoid)", "In this case, the network with batch normalization trained faster and reached a higher accuracy. 
Meanwhile, the high learning rate makes the network without normalization bounce around erratically and have trouble getting past 90%.\nFull Disclosure: Batch Normalization Doesn't Fix Everything\nBatch normalization isn't magic and it doesn't work every time. Weights are still randomly initialized and batches are chosen at random during training, so you never know exactly how training will go. Even for these tests, where we use the same initial weights for both networks, we still get different weights each time we run.\nThis section includes two examples that show runs when batch normalization did not help at all.\nThe following creates two networks using a ReLU activation function, a learning rate of 1, and bad starting weights.", "train_and_test(True, 1, tf.nn.relu)", "When we used these same parameters earlier, we saw the network with batch normalization reach 92% validation accuracy. This time we used different starting weights, initialized using the same standard deviation as before, and the network doesn't learn at all. (Remember, an accuracy around 10% is what the network gets if it just guesses the same value all the time.)\nThe following creates two networks using a ReLU activation function, a learning rate of 2, and bad starting weights.", "train_and_test(True, 2, tf.nn.relu)", "When we trained with these parameters and batch normalization earlier, we reached 90% validation accuracy. However, this time the network almost starts to make some progress in the beginning, but it quickly breaks down and stops learning. \nNote: Both of the above examples use extremely bad starting weights, along with learning rates that are too high. While we've shown batch normalization can overcome bad values, we don't mean to encourage actually using them. The examples in this notebook are meant to show that batch normalization can help your networks train better. 
But these last two examples should remind you that you still want to try to use good network design choices and reasonable starting weights. It should also remind you that the results of each attempt to train a network are a bit random, even when using otherwise identical architectures.\nBatch Normalization: A Detailed Look<a id='implementation_2'></a>\nThe layer created by tf.layers.batch_normalization handles all the details of implementing batch normalization. Many students will be fine just using that and won't care about what's happening at the lower levels. However, some students may want to explore the details, so here is a short explanation of what's really happening, starting with the equations you're likely to come across if you ever read about batch normalization. \nIn order to normalize the values, we first need to find the average value for the batch. If you look at the code, you can see that this is not the average value of the batch inputs, but the average value coming out of any particular layer before we pass it through its non-linear activation function and then feed it as an input to the next layer.\nWe represent the average as $\\mu_B$, which is simply the sum of all of the values $x_i$ divided by the number of values, $m$ \n$$\n\\mu_B \\leftarrow \\frac{1}{m}\\sum_{i=1}^m x_i\n$$\nWe then need to calculate the variance, or mean squared deviation, represented as $\\sigma_{B}^{2}$. If you aren't familiar with statistics, that simply means for each value $x_i$, we subtract the average value (calculated earlier as $\\mu_B$), which gives us what's called the \"deviation\" for that value. We square the result to get the squared deviation. 
Sum up the results of doing that for each of the values, then divide by the number of values, again $m$, to get the average, or mean, squared deviation.\n$$\n\\sigma_{B}^{2} \\leftarrow \\frac{1}{m}\\sum_{i=1}^m (x_i - \\mu_B)^2\n$$\nOnce we have the mean and variance, we can use them to normalize the values with the following equation. For each value, it subtracts the mean and divides by the (almost) standard deviation. (You've probably heard of standard deviation many times, but if you have not studied statistics you might not know that the standard deviation is actually the square root of the mean squared deviation.)\n$$\n\\hat{x_i} \\leftarrow \\frac{x_i - \\mu_B}{\\sqrt{\\sigma_{B}^{2} + \\epsilon}}\n$$\nAbove, we said \"(almost) standard deviation\". That's because the real standard deviation for the batch is calculated by $\\sqrt{\\sigma_{B}^{2}}$, but the above formula adds the term epsilon, $\\epsilon$, before taking the square root. The epsilon can be any small, positive constant - in our code we use the value 0.001. It is there partially to make sure we don't try to divide by zero, but it also acts to increase the variance slightly for each batch. \nWhy increase the variance? Statistically, this makes sense because even though we are normalizing one batch at a time, we are also trying to estimate the population distribution – the total training set, which is itself an estimate of the larger population of inputs your network wants to handle. The variance of a population is higher than the variance for any sample taken from that population, so increasing the variance a little bit for each batch helps take that into account. \nAt this point, we have a normalized value, represented as $\\hat{x_i}$. But rather than use it directly, we multiply it by a gamma value, $\\gamma$, and then add a beta value, $\\beta$. Both $\\gamma$ and $\\beta$ are learnable parameters of the network and serve to scale and shift the normalized value, respectively. 
Because they are learnable just like weights, they give your network some extra knobs to tweak during training to help it learn the function it is trying to approximate. \n$$\ny_i \\leftarrow \\gamma \\hat{x_i} + \\beta\n$$\nWe now have the final batch-normalized output of our layer, which we would then pass to a non-linear activation function like sigmoid, tanh, ReLU, Leaky ReLU, etc. In the original batch normalization paper (linked in the beginning of this notebook), they mention that there might be cases when you'd want to perform the batch normalization after the non-linearity instead of before, but it is difficult to find any uses like that in practice.\nIn NeuralNet's implementation of fully_connected, all of this math is hidden inside the following line, where linear_output serves as the $x_i$ from the equations:\npython\nbatch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training)\nThe next section shows you how to implement the math directly. \nBatch normalization without the tf.layers package\nOur implementation of batch normalization in NeuralNet uses the high-level abstraction tf.layers.batch_normalization, found in TensorFlow's tf.layers package.\nHowever, if you would like to implement batch normalization at a lower level, the following code shows you how.\nIt uses tf.nn.batch_normalization from TensorFlow's neural net (nn) package.\n1) You can replace the fully_connected function in the NeuralNet class with the below code and everything in NeuralNet will still work like it did before.", "def fully_connected(self, layer_in, initial_weights, activation_fn=None):\n \"\"\"\n Creates a standard, fully connected layer. Its number of inputs and outputs will be\n defined by the shape of `initial_weights`, and its starting weight values will be\n taken directly from that same parameter. If `self.use_batch_norm` is True, this\n layer will include batch normalization, otherwise it will not. 
\n    \n    :param layer_in: Tensor\n        The Tensor that feeds into this layer. It's either the input to the network or the output\n        of a previous layer.\n    :param initial_weights: NumPy array or Tensor\n        Initial values for this layer's weights. The shape defines the number of nodes in the layer.\n        e.g. Passing in a matrix of shape (784, 256) would create a layer with 784 inputs and 256 \n        outputs. \n    :param activation_fn: Callable or None (default None)\n        The non-linearity used for the output of the layer. If None, this layer will not include \n        batch normalization, regardless of the value of `self.use_batch_norm`. \n        e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.\n    \"\"\"\n    if self.use_batch_norm and activation_fn:\n        # Batch normalization uses weights as usual, but does NOT add a bias term. This is because \n        # its calculations include gamma and beta variables that make the bias term unnecessary.\n        weights = tf.Variable(initial_weights)\n        linear_output = tf.matmul(layer_in, weights)\n\n        num_out_nodes = initial_weights.shape[-1]\n\n        # Batch normalization adds additional trainable variables: \n        # gamma (for scaling) and beta (for shifting).\n        gamma = tf.Variable(tf.ones([num_out_nodes]))\n        beta = tf.Variable(tf.zeros([num_out_nodes]))\n\n        # These variables will store the mean and variance for this layer over the entire training set,\n        # which we assume represents the general population distribution.\n        # By setting `trainable=False`, we tell TensorFlow not to modify these variables during\n        # back propagation. Instead, we will assign values to these variables ourselves. 
\n pop_mean = tf.Variable(tf.zeros([num_out_nodes]), trainable=False)\n pop_variance = tf.Variable(tf.ones([num_out_nodes]), trainable=False)\n\n # Batch normalization requires a small constant epsilon, used to ensure we don't divide by zero.\n # This is the default value TensorFlow uses.\n epsilon = 1e-3\n\n def batch_norm_training():\n # Calculate the mean and variance for the data coming out of this layer's linear-combination step.\n # The [0] defines an array of axes to calculate over.\n batch_mean, batch_variance = tf.nn.moments(linear_output, [0])\n\n # Calculate a moving average of the training data's mean and variance while training.\n # These will be used during inference.\n # Decay should be some number less than 1. tf.layers.batch_normalization uses the parameter\n # \"momentum\" to accomplish this and defaults it to 0.99\n decay = 0.99\n train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))\n train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))\n\n # The 'tf.control_dependencies' context tells TensorFlow it must calculate 'train_mean' \n # and 'train_variance' before it calculates the 'tf.nn.batch_normalization' layer.\n # This is necessary because those two operations are not actually in the graph\n # connecting the linear_output and batch_normalization layers, \n # so TensorFlow would otherwise just skip them.\n with tf.control_dependencies([train_mean, train_variance]):\n return tf.nn.batch_normalization(linear_output, batch_mean, batch_variance, beta, gamma, epsilon)\n \n def batch_norm_inference():\n # During inference, use our estimated population mean and variance to normalize the layer\n return tf.nn.batch_normalization(linear_output, pop_mean, pop_variance, beta, gamma, epsilon)\n\n # Use `tf.cond` as a sort of if-check. 
When self.is_training is True, TensorFlow will execute \n # the operation returned from `batch_norm_training`; otherwise it will execute the graph\n # operation returned from `batch_norm_inference`.\n batch_normalized_output = tf.cond(self.is_training, batch_norm_training, batch_norm_inference)\n \n # Pass the batch-normalized layer output through the activation function.\n # The literature states there may be cases where you want to perform the batch normalization *after*\n # the activation function, but it is difficult to find any uses of that in practice.\n return activation_fn(batch_normalized_output)\n else:\n # When not using batch normalization, create a standard layer that multiplies\n # the inputs and weights, adds a bias, and optionally passes the result \n # through an activation function. \n weights = tf.Variable(initial_weights)\n biases = tf.Variable(tf.zeros([initial_weights.shape[-1]]))\n linear_output = tf.add(tf.matmul(layer_in, weights), biases)\n return linear_output if not activation_fn else activation_fn(linear_output)\n", "This version of fully_connected is much longer than the original, but once again has extensive comments to help you understand it. Here are some important points:\n\nIt explicitly creates variables to store gamma, beta, and the population mean and variance. These were all handled for us in the previous version of the function.\nIt initializes gamma to one and beta to zero, so they start out having no effect in this calculation: $y_i \\leftarrow \\gamma \\hat{x_i} + \\beta$. However, during training the network learns the best values for these variables using back propagation, just like networks normally do with weights.\nUnlike gamma and beta, the variables for population mean and variance are marked as untrainable. That tells TensorFlow not to modify them during back propagation. 
Instead, the lines that call tf.assign are used to update these variables directly.\nTensorFlow won't automatically run the tf.assign operations during training because it only evaluates operations that are required based on the connections it finds in the graph. To get around that, we add this line: with tf.control_dependencies([train_mean, train_variance]): before we run the normalization operation. This tells TensorFlow it needs to run those operations before running anything inside the with block. \nThe actual normalization math is still mostly hidden from us, this time using tf.nn.batch_normalization.\ntf.nn.batch_normalization does not have a training parameter like tf.layers.batch_normalization did. However, we still need to handle training and inference differently, so we run different code in each case using the tf.cond operation.\nWe use the tf.nn.moments function to calculate the batch mean and variance.\n\n2) The current version of the train function in NeuralNet will work fine with this new version of fully_connected. However, it uses these lines to ensure population statistics are updated when using batch normalization: \npython\nif self.use_batch_norm:\n with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):\n train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)\nelse:\n train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)\nOur new version of fully_connected handles updating the population statistics directly. 
That means you can also simplify your code by replacing the above if/else condition with just this line:\npython\ntrain_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)\n3) And just in case you want to implement every detail from scratch, you can replace this line in batch_norm_training:\npython\nreturn tf.nn.batch_normalization(linear_output, batch_mean, batch_variance, beta, gamma, epsilon)\nwith these lines:\npython\nnormalized_linear_output = (linear_output - batch_mean) / tf.sqrt(batch_variance + epsilon)\nreturn gamma * normalized_linear_output + beta\nAnd replace this line in batch_norm_inference:\npython\nreturn tf.nn.batch_normalization(linear_output, pop_mean, pop_variance, beta, gamma, epsilon)\nwith these lines:\npython\nnormalized_linear_output = (linear_output - pop_mean) / tf.sqrt(pop_variance + epsilon)\nreturn gamma * normalized_linear_output + beta\nAs you can see in each of the above substitutions, the two lines of replacement code simply implement the following two equations directly. The first line calculates the following equation, with linear_output representing $x_i$ and normalized_linear_output representing $\\hat{x_i}$: \n$$\n\\hat{x_i} \\leftarrow \\frac{x_i - \\mu_B}{\\sqrt{\\sigma_{B}^{2} + \\epsilon}}\n$$\nAnd the second line is a direct translation of the following equation:\n$$\ny_i \\leftarrow \\gamma \\hat{x_i} + \\beta\n$$\nWe still use the tf.nn.moments operation to implement the other two equations from earlier – the ones that calculate the batch mean and variance used in the normalization step. If you really wanted to do everything from scratch, you could replace that line, too, but we'll leave that to you. 
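Those two substitution equations are easy to sanity-check outside TensorFlow. The following standalone NumPy sketch (my own illustration, not part of the NeuralNet class; the variable names simply mirror the replacement lines above) normalizes a toy batch and shows that, with gamma initialized to one and beta to zero, each output column comes out approximately zero-mean and unit-variance:

```python
import numpy as np

# Toy stand-ins for one layer's linear output and its batch-norm parameters.
np.random.seed(0)
linear_output = np.random.normal(loc=5.0, scale=3.0, size=(128, 10))
gamma = np.ones(10)    # scale, initialized to 1 as in the layer code
beta = np.zeros(10)    # shift, initialized to 0 as in the layer code
epsilon = 1e-3

# Equation 1: normalize using the batch mean and variance.
batch_mean = linear_output.mean(axis=0)
batch_variance = linear_output.var(axis=0)
normalized_linear_output = (linear_output - batch_mean) / np.sqrt(batch_variance + epsilon)

# Equation 2: scale and shift with the learnable parameters.
y = gamma * normalized_linear_output + beta

# With gamma=1 and beta=0, each output column is (approximately) zero-mean, unit-variance.
print(y.mean(axis=0).round(6), y.std(axis=0).round(3))
```

Because epsilon is added to the variance before taking the square root, the output standard deviation lands slightly below 1 rather than exactly 1; that is expected.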
\nWhy the difference between training and inference?\nIn the original function that uses tf.layers.batch_normalization, we tell the layer whether or not the network is training by passing a value for its training parameter, like so:\npython\nbatch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training)\nAnd that forces us to provide a value for self.is_training in our feed_dict, like we do in this example from NeuralNet's train function:\npython\nsession.run(train_step, feed_dict={self.input_layer: batch_xs, \n labels: batch_ys, \n self.is_training: True})\nIf you looked at the low-level implementation, you probably noticed that, just like with tf.layers.batch_normalization, we need to do slightly different things during training and inference. But why is that?\nFirst, let's look at what happens when we don't. The following function is similar to train_and_test from earlier, but this time we are only testing one network and instead of plotting its accuracy, we perform 200 predictions on test inputs, 1 input at a time. 
We can use the test_training_accuracy parameter to test the network in training or inference modes (the equivalent of passing True or False to the feed_dict for is_training).", "def batch_norm_test(test_training_accuracy):\n \"\"\"\n :param test_training_accuracy: bool\n If True, perform inference with batch normalization using batch mean and variance;\n if False, perform inference with batch normalization using estimated population mean and variance.\n \"\"\"\n\n weights = [np.random.normal(size=(784,100), scale=0.05).astype(np.float32),\n np.random.normal(size=(100,100), scale=0.05).astype(np.float32),\n np.random.normal(size=(100,100), scale=0.05).astype(np.float32),\n np.random.normal(size=(100,10), scale=0.05).astype(np.float32)\n ]\n\n tf.reset_default_graph()\n\n # Train the model\n bn = NeuralNet(weights, tf.nn.relu, True)\n \n # First train the network\n with tf.Session() as sess:\n tf.global_variables_initializer().run()\n\n bn.train(sess, 0.01, 2000, 2000)\n\n bn.test(sess, test_training_accuracy=test_training_accuracy, include_individual_predictions=True)", "In the following cell, we pass True for test_training_accuracy, which performs the same batch normalization that we normally perform during training.", "batch_norm_test(True)", "As you can see, the network guessed the same value every time! But why? Because during training, a network with batch normalization adjusts the values at each layer based on the mean and variance of that batch. The \"batches\" we are using for these predictions have a single input each time, so their values are the means, and their variances will always be 0. That means the network will normalize the values at any layer to zero. (Review the equations from before to see why a value that is equal to the mean would always normalize to zero.) So we end up with the same result for every input we give the network, because it's the value the network produces when it applies its learned weights to zeros at every layer. 
\nNote: If you re-run that cell, you might get a different value from what we showed. That's because the specific weights the network learns will be different every time. But whatever value it is, it should be the same for all 200 predictions.\nTo overcome this problem, the network does not just normalize the batch at each layer. It also maintains an estimate of the mean and variance for the entire population. So when we perform inference, instead of letting it \"normalize\" all the values using their own means and variance, it uses the estimates of the population mean and variance that it calculated while training. \nSo in the following example, we pass False for test_training_accuracy, which tells the network that we want to perform inference with the population statistics it calculated during training.", "batch_norm_test(False)", "As you can see, now that we're using the estimated population mean and variance, we get a 97% accuracy. That means it guessed correctly on 194 of the 200 samples – not too bad for something that trained in under 4 seconds. :)\nConsiderations for other network types\nThis notebook demonstrates batch normalization in a standard neural network with fully connected layers. You can also use batch normalization in other types of networks, but there are some special considerations.\nConvNets\nConvolution layers consist of multiple feature maps. (Remember, the depth of a convolutional layer refers to its number of feature maps.) And the weights for each feature map are shared across all the inputs that feed into the layer. Because of these differences, batch normalizing convolutional layers requires batch/population mean and variance per feature map rather than per node in the layer.\nWhen using tf.layers.batch_normalization, be sure to pay attention to the order of your convolutional dimensions.\nSpecifically, you may want to set a different value for the axis parameter if your layers have their channels first instead of last. 
\nIn our low-level implementations, we used the following line to calculate the batch mean and variance:\npython\nbatch_mean, batch_variance = tf.nn.moments(linear_output, [0])\nIf we were dealing with a convolutional layer, we would calculate the mean and variance with a line like this instead:\npython\nbatch_mean, batch_variance = tf.nn.moments(conv_layer, [0,1,2], keep_dims=False)\nThe second parameter, [0,1,2], tells TensorFlow to calculate the batch mean and variance over each feature map. (The three axes are the batch, height, and width.) And setting keep_dims to False tells tf.nn.moments not to return values with the same size as the inputs. Specifically, it ensures we get one mean/variance pair per feature map.\nRNNs\nBatch normalization can work with recurrent neural networks, too, as shown in the 2016 paper Recurrent Batch Normalization. It's a bit more work to implement, but basically involves calculating the means and variances per time step instead of per layer. You can find an example where someone extended tf.nn.rnn_cell.RNNCell to include batch normalization in this GitHub repo." ]
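The per-feature-map statistics that the ConvNets discussion above calls for can be checked the same way: reducing an NHWC activation tensor over axes (0, 1, 2) leaves exactly one mean/variance pair per feature map, which is what tf.nn.moments(conv_layer, [0,1,2], keep_dims=False) returns. A NumPy sketch (the shapes here are illustrative assumptions, not taken from the notebook):

```python
import numpy as np

# A fake batch of conv-layer outputs in NHWC order:
# (batch=8, height=14, width=14, feature maps=32)
np.random.seed(1)
conv_layer = np.random.normal(size=(8, 14, 14, 32))

# Reduce over batch, height, and width -> one statistic per feature map,
# the NumPy analogue of tf.nn.moments(conv_layer, [0, 1, 2], keep_dims=False).
batch_mean = conv_layer.mean(axis=(0, 1, 2))
batch_variance = conv_layer.var(axis=(0, 1, 2))

print(batch_mean.shape, batch_variance.shape)  # one value per feature map
```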
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/test-institute-1/cmip6/models/sandbox-2/aerosol.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Aerosol\nMIP Era: CMIP6\nInstitute: TEST-INSTITUTE-1\nSource ID: SANDBOX-2\nTopic: Aerosol\nSub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model. \nProperties: 69 (37 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:43\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'test-institute-1', 'sandbox-2', 'aerosol')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Software Properties\n3. Key Properties --&gt; Timestep Framework\n4. Key Properties --&gt; Meteorological Forcings\n5. Key Properties --&gt; Resolution\n6. Key Properties --&gt; Tuning Applied\n7. Transport\n8. Emissions\n9. Concentrations\n10. Optical Radiative Properties\n11. Optical Radiative Properties --&gt; Absorption\n12. Optical Radiative Properties --&gt; Mixtures\n13. Optical Radiative Properties --&gt; Impact Of H2o\n14. Optical Radiative Properties --&gt; Radiative Scheme\n15. Optical Radiative Properties --&gt; Cloud Interactions\n16. Model \n1. Key Properties\nKey properties of the aerosol model\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of aerosol model.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of aerosol model code", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Scheme Scope\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nAtmospheric domains covered by the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.scheme_scope') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"troposhere\" \n# \"stratosphere\" \n# \"mesosphere\" \n# \"mesosphere\" \n# \"whole atmosphere\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBasic approximations made in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.basic_approximations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.5. Prognostic Variables Form\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPrognostic variables in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"3D mass/volume ratio for aerosols\" \n# \"3D number concenttration for aerosols\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.6. 
Number Of Tracers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of tracers in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.number_of_tracers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "1.7. Family Approach\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre aerosol calculations generalized into families of species?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.family_approach') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Software Properties\nSoftware properties of aerosol code\n2.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. 
Key Properties --&gt; Timestep Framework\nPhysical properties of seawater in ocean\n3.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMathematical method deployed to solve the time evolution of the prognostic variables", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses atmospheric chemistry time stepping\" \n# \"Specific timestepping (operator splitting)\" \n# \"Specific timestepping (integrated)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Split Operator Advection Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for aerosol advection (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. Split Operator Physical Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for aerosol physics (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.4. Integrated Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep for the aerosol model (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.5. 
Integrated Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the type of timestep scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Implicit\" \n# \"Semi-implicit\" \n# \"Semi-analytic\" \n# \"Impact solver\" \n# \"Back Euler\" \n# \"Newton Raphson\" \n# \"Rosenbrock\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Meteorological Forcings\n**\n4.1. Variables 3D\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nThree dimensionsal forcing variables, e.g. U, V, W, T, Q, P, conventive mass flux", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Variables 2D\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTwo dimensionsal forcing variables, e.g. land-sea mask definition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Frequency\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nFrequency with which meteological forcings are applied (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Resolution\nResolution in the aersosol model grid\n5.1. 
Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Canonical Horizontal Resolution\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Number Of Horizontal Gridpoints\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5.4. Number Of Vertical Levels\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nNumber of vertical levels resolved on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5.5. Is Adaptive Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDefault is False. Set true if grid resolution changes during execution.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Tuning Applied\nTuning methodology for aerosol model\n6.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics of mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.4. 
Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Transport\nAerosol transport\n7.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of transport in atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod for aerosol transport modeling", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Specific transport scheme (eulerian)\" \n# \"Specific transport scheme (semi-lagrangian)\" \n# \"Specific transport scheme (eulerian and semi-lagrangian)\" \n# \"Specific transport scheme (lagrangian)\" \n# TODO - please enter value(s)\n", "7.3. Mass Conservation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMethod used to ensure mass conservation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Mass adjustment\" \n# \"Concentrations positivity\" \n# \"Gradients monotonicity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "7.4. 
Convention\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTransport by convention", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.convention') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Convective fluxes connected to tracers\" \n# \"Vertical velocities connected to tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8. Emissions\nAtmospheric aerosol emissions\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of emissions in atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMethod used to define aerosol species (several methods allowed because the different species may not use the same method).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Prescribed (climatology)\" \n# \"Prescribed CMIP6\" \n# \"Prescribed above surface\" \n# \"Interactive\" \n# \"Interactive above surface\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.3. Sources\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSources of the aerosol species are taken into account in the emissions scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Vegetation\" \n# \"Volcanos\" \n# \"Bare ground\" \n# \"Sea surface\" \n# \"Lightning\" \n# \"Fires\" \n# \"Aircraft\" \n# \"Anthropogenic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.4. Prescribed Climatology\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify the climatology type for aerosol emissions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_climatology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Interannual\" \n# \"Annual\" \n# \"Monthly\" \n# \"Daily\" \n# TODO - please enter value(s)\n", "8.5. Prescribed Climatology Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and prescribed via a climatology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.6. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and prescribed as spatially uniform", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.7. Interactive Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and specified via an interactive method", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.8. Other Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and specified via an &quot;other method&quot;", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.9. Other Method Characteristics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCharacteristics of the &quot;other method&quot; used for aerosol emissions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.other_method_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Concentrations\nAtmospheric aerosol concentrations\n9.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of concentrations in atmospheric aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Prescribed Lower Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the lower boundary.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Prescribed Upper Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the upper boundary.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.4. Prescribed Fields Mmr\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed as mass mixing ratios.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.5. Prescribed Fields Mmr\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed as AOD plus CCNs.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Optical Radiative Properties\nAerosol optical and radiative properties\n10.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of optical and radiative properties", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Optical Radiative Properties --&gt; Absorption\nAbsorption properties in aerosol scheme\n11.1. Black Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAbsorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.2. 
Dust\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAbsorption mass coefficient of dust at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Organics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAbsorption mass coefficient of organics at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12. Optical Radiative Properties --&gt; Mixtures\n**\n12.1. External\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there external mixing with respect to chemical composition?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.2. Internal\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there internal mixing with respect to chemical composition?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.3. Mixing Rule\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf there is internal mixing with respect to chemical composition then indicate the mixing rule", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Optical Radiative Properties --&gt; Impact Of H2o\n**\n13.1. Size\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes H2O impact size?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "13.2. Internal Mixture\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes H2O impact internal mixture?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14. Optical Radiative Properties --&gt; Radiative Scheme\nRadiative scheme for aerosol\n14.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of radiative scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Shortwave Bands\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of shortwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.3. 
Longwave Bands\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of longwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15. Optical Radiative Properties --&gt; Cloud Interactions\nAerosol-cloud interactions\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of aerosol-cloud interactions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Twomey\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the Twomey effect included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.3. Twomey Minimum Ccn\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf the Twomey effect is included, then what is the minimum CCN number?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.4. Drizzle\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the scheme affect drizzle?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.5. Cloud Lifetime\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the scheme affect cloud lifetime?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.6. Longwave Bands\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of longwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "16. Model\nAerosol model\n16.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmospheric aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16.2. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProcesses included in the Aerosol model.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.model.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dry deposition\" \n# \"Sedimentation\" \n# \"Wet deposition (impaction scavenging)\" \n# \"Wet deposition (nucleation scavenging)\" \n# \"Coagulation\" \n# \"Oxidation (gas phase)\" \n# \"Oxidation (in cloud)\" \n# \"Condensation\" \n# \"Ageing\" \n# \"Advection (horizontal)\" \n# \"Advection (vertical)\" \n# \"Heterogeneous chemistry\" \n# \"Nucleation\" \n# TODO - please enter value(s)\n", "16.3. Coupling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOther model components coupled to the Aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Radiation\" \n# \"Land surface\" \n# \"Heterogeneous chemistry\" \n# \"Clouds\" \n# \"Ocean\" \n# \"Cryosphere\" \n# \"Gas phase chemistry\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.4. Gas Phase Precursors\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of gas phase aerosol precursors.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.gas_phase_precursors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"DMS\" \n# \"SO2\" \n# \"Ammonia\" \n# \"Iodine\" \n# \"Terpene\" \n# \"Isoprene\" \n# \"VOC\" \n# \"NOx\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.5. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nType(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.model.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bulk\" \n# \"Modal\" \n# \"Bin\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.6. Bulk Scheme Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of species covered by the bulk scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.bulk_scheme_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Nitrate\" \n# \"Sea salt\" \n# \"Dust\" \n# \"Ice\" \n# \"Organic\" \n# \"Black carbon / soot\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"Polar stratospheric ice\" \n# \"NAT (Nitric acid trihydrate)\" \n# \"NAD (Nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particule)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
julianogalgaro/udacity
nd101/c2l8-sentiment-analysis/sentiment_network/Sentiment Classification - Mini Project 3.ipynb
mit
[ "Sentiment Classification & How To \"Frame Problems\" for a Neural Network\nby Andrew Trask\n\nTwitter: @iamtrask\nBlog: http://iamtrask.github.io\n\nWhat You Should Already Know\n\nneural networks, forward and back-propagation\nstochastic gradient descent\nmean squared error\nand train/test splits\n\nWhere to Get Help if You Need it\n\nRe-watch previous Udacity Lectures\nLeverage the recommended Course Reading Material - Grokking Deep Learning (40% Off: traskud17)\nShoot me a tweet @iamtrask\n\nTutorial Outline:\n\n\nIntro: The Importance of \"Framing a Problem\"\n\n\nCurate a Dataset\n\nDeveloping a \"Predictive Theory\"\n\nPROJECT 1: Quick Theory Validation\n\n\nTransforming Text to Numbers\n\n\nPROJECT 2: Creating the Input/Output Data\n\n\nPutting it all together in a Neural Network\n\n\nPROJECT 3: Building our Neural Network\n\n\nUnderstanding Neural Noise\n\n\nPROJECT 4: Making Learning Faster by Reducing Noise\n\n\nAnalyzing Inefficiencies in our Network\n\n\nPROJECT 5: Making our Network Train and Run Faster\n\n\nFurther Noise Reduction\n\n\nPROJECT 6: Reducing Noise by Strategically Reducing the Vocabulary\n\n\nAnalysis: What's going on in the weights?\n\n\nLesson: Curate a Dataset", "def pretty_print_review_and_label(i):\n print(labels[i] + \"\\t:\\t\" + reviews[i][:80] + \"...\")\n\ng = open('reviews.txt','r') # What we know!\nreviews = list(map(lambda x:x[:-1],g.readlines()))\ng.close()\n\ng = open('labels.txt','r') # What we WANT to know!\nlabels = list(map(lambda x:x[:-1].upper(),g.readlines()))\ng.close()\n\nlen(reviews)\n\nreviews[0]\n\nlabels[0]", "Lesson: Develop a Predictive Theory", "print(\"labels.txt \\t : \\t reviews.txt\\n\")\npretty_print_review_and_label(2137)\npretty_print_review_and_label(12816)\npretty_print_review_and_label(6267)\npretty_print_review_and_label(21934)\npretty_print_review_and_label(5297)\npretty_print_review_and_label(4998)", "Project 1: Quick Theory Validation", "from collections import Counter\nimport numpy as 
np\n\npositive_counts = Counter()\nnegative_counts = Counter()\ntotal_counts = Counter()\n\nfor i in range(len(reviews)):\n if(labels[i] == 'POSITIVE'):\n for word in reviews[i].split(\" \"):\n positive_counts[word] += 1\n total_counts[word] += 1\n else:\n for word in reviews[i].split(\" \"):\n negative_counts[word] += 1\n total_counts[word] += 1\n\npositive_counts.most_common()\n\npos_neg_ratios = Counter()\n\nfor term,cnt in list(total_counts.most_common()):\n if(cnt > 100):\n pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)\n pos_neg_ratios[term] = pos_neg_ratio\n\nfor word,ratio in pos_neg_ratios.most_common():\n if(ratio > 1):\n pos_neg_ratios[word] = np.log(ratio)\n else:\n pos_neg_ratios[word] = -np.log((1 / (ratio+0.01)))\n\n# words most frequently seen in a review with a \"POSITIVE\" label\npos_neg_ratios.most_common()\n\n# words most frequently seen in a review with a \"NEGATIVE\" label\nlist(reversed(pos_neg_ratios.most_common()))[0:30]", "Transforming Text into Numbers", "from IPython.display import Image\n\nreview = \"This was a horrible, terrible movie.\"\n\nImage(filename='sentiment_network.png')\n\nreview = \"The movie was excellent\"\n\nImage(filename='sentiment_network_pos.png')", "Project 2: Creating the Input/Output Data", "vocab = set(total_counts.keys())\nvocab_size = len(vocab)\nprint(vocab_size)\n\nlist(vocab)\n\nimport numpy as np\n\nlayer_0 = np.zeros((1,vocab_size))\nlayer_0\n\nfrom IPython.display import Image\nImage(filename='sentiment_network.png')\n\nword2index = {}\n\nfor i,word in enumerate(vocab):\n word2index[word] = i\nword2index\n\ndef update_input_layer(review):\n \n global layer_0\n \n # clear out previous state, reset the layer to be all 0s\n layer_0 *= 0\n for word in review.split(\" \"):\n layer_0[0][word2index[word]] += 1\n\nupdate_input_layer(reviews[0])\n\nlayer_0\n\ndef get_target_for_label(label):\n if(label == 'POSITIVE'):\n return 1\n else:\n return 
0\n\nlabels[0]\n\nget_target_for_label(labels[0])\n\nlabels[1]\n\nget_target_for_label(labels[1])", "Project 3: Building a Neural Network\n\nStart with your neural network from the last chapter\n3 layer neural network\nno non-linearity in hidden layer\nuse our functions to create the training data\ncreate a \"pre_process_data\" function to create vocabulary for our training data generating functions\nmodify \"train\" to train over the entire corpus\n\nWhere to Get Help if You Need it\n\nRe-watch previous week's Udacity Lectures\nChapters 3-5 - Grokking Deep Learning - (40% Off: traskud17)", "%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom collections import Counter\n\nclass NeuralNetwork(object):\n def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):\n \n self.pre_process_data(input_nodes)\n # Set number of nodes in input, hidden and output layers.\n self.input_nodes = len(self.layer_0)\n self.hidden_nodes = hidden_nodes\n self.output_nodes = output_nodes\n\n # Initialize weights\n self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5, \n (self.hidden_nodes, self.input_nodes))\n\n self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5, \n (self.output_nodes, self.hidden_nodes))\n self.lr = learning_rate\n \n #### TODO: Set self.activation_function to your implemented sigmoid function ####\n #\n # Note: in Python, you can define a function with a lambda expression,\n # as shown below.\n #self.activation_function = lambda x : 1 / (1 + np.exp(-x)) # Replace 0 with your sigmoid calculation.\n \n ### If the lambda code above is not something you're familiar with,\n # You can uncomment out the following three lines and put your \n # implementation there instead.\n #\n #def sigmoid(x):\n # return 0 # Replace 0 with your sigmoid calculation here\n #self.activation_function = sigmoid\n \n \n def train(self, 
inputs_list, targets_list):\n \n \n self.update_input_layer(inputs_list)\n # Convert inputs list to 2d array\n inputs = np.array(self.layer_0, ndmin=2)\n targets = np.array(self.get_target_for_label(targets_list), ndmin=2)\n\n\n #### Implement the forward pass here ####\n ### Forward pass ###\n # TODO: Hidden layer - Replace these values with your calculations.\n hidden_inputs = np.dot(self.weights_input_to_hidden,inputs) # signals into hidden layer\n hidden_outputs = hidden_inputs # signals from hidden layer\n\n # TODO: Output layer - Replace these values with your calculations.\n final_inputs = np.dot(self.weights_hidden_to_output,hidden_outputs) # signals into final output layer\n final_outputs = final_inputs # signals from final output layer\n\n #### Implement the backward pass here ####\n ### Backward pass ###\n\n # TODO: Output error - Replace this value with your calculations.\n output_errors = targets - final_outputs # Output layer error is the difference between desired target and actual output.\n print(output_errors)\n # TODO: Backpropagated error - Replace these values with your calculations.\n hidden_errors = np.dot(output_errors.T,self.weights_hidden_to_output)\n hidden_grad = 1 # hidden layer gradients\n hidden_error_term = hidden_grad * hidden_errors\n\n # TODO: Update the weights - Replace these values with your calculations.\n self.weights_hidden_to_output += self.lr * output_errors * hidden_outputs # update hidden-to-output weights with gradient descent step\n self.weights_input_to_hidden += self.lr * hidden_error_term * inputs.T # update input-to-hidden weights with gradient descent step\n\n \n def run(self, inputs_list):\n # Run a forward pass through the network\n inputs = np.array(inputs_list, ndmin=2).T\n \n #### Implement the forward pass here ####\n # TODO: Hidden layer - replace these values with the appropriate calculations.\n hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) # signals into hidden layer\n hidden_outputs = 
hidden_inputs # signals from hidden layer\n \n # TODO: Output layer - Replace these values with the appropriate calculations.\n final_inputs = np.dot(self.weights_hidden_to_output,hidden_outputs) # signals into final output layer\n final_outputs = final_inputs # signals from final output layer \n \n return final_outputs\n def pre_process_data(self, reviews):\n total_counts=Counter()\n for i in range(len(reviews)):\n for word in reviews[i].split(\" \"):\n total_counts[word] += 1\n\n self.vocab = set(total_counts.keys())\n vocab_size = len(self.vocab)\n\n self.word2index = {}\n for i,word in enumerate(self.vocab):\n self.word2index[word] = i\n \n self.layer_0 = np.zeros((1,vocab_size))\n list(self.layer_0)\n \n def update_input_layer(self, review):\n \n # clear out previous state, reset the layer to be all 0s\n self.layer_0 *= 0\n for word in review.split(\" \"):\n self.layer_0[0][self.word2index[word]] += 1\n\n def get_target_for_label(self, label):\n if(label == 'POSITIVE'):\n return 1\n else:\n return 0\n\ndef MSE(y, Y):\n return np.mean((y-Y)**2)", "Training the network", "import sys\n\n### Set the hyperparameters here ###\nepochs = 1500\nlearning_rate = 0.01\nhidden_nodes = 8\noutput_nodes = 1\n\n#N_i = len(reviews)\nnetwork = NeuralNetwork(reviews, hidden_nodes, output_nodes, learning_rate)\n\n\nlosses = {'train':[], 'validation':[]}\nfor e in range(epochs):\n # Go through a random batch of 128 records from the training data set\n #batch = np.random.choice(train_features.index, size=128)\n for record, target in zip(reviews, \n labels):\n print(target)\n network.train(record, target)\n \n # Printing out the training progress\n #train_loss = MSE(network.run(train_features), train_targets['cnt'].values)\n #val_loss = MSE(network.run(val_features), val_targets['cnt'].values)\n sys.stdout.write(\"\\rProgress: \" + str(100 * e/float(epochs))[:4] \\\n + \"% ... Training loss: \" + str(train_loss)[:5] \\\n + \" ... 
Validation loss: \" + str(val_loss)[:5])\n \n losses['train'].append(train_loss)\n losses['validation'].append(val_loss)\n\nplt.plot(losses['train'], label='Training loss')\nplt.plot(losses['validation'], label='Validation loss')\nplt.legend()\nplt.ylim(ymax=1)", "Run", "\nprint(network.run())" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
xpmanoj/content
HW2.ipynb
mit
[ "Homework 2: Desperately Seeking Silver\nDue Thursday, Oct 3, 11:59 PM\n<center>\n<img src=\"http://www.scribewise.com/Portals/202647/images/photo.jpg\">\n</center>\n<br>\nIn HW1, we explored how to make predictions (with uncertainties) about upcoming elections based on the Real Clear Politics poll. This assignment also focuses on election prediction, but we are going to implement and evaluate a number of more sophisticated forecasting techniques. \nWe are going to focus on the 2012 Presidential election. Analysts like Nate Silver, Drew Linzer, and Sam Wang developed highly accurate models that correctly forecasted most or all of the election outcomes in each of the 50 states. We will explore how hard it is to recreate similarly successful models. The goals of this assignment are:\n\nTo practice data manipulation with Pandas\nTo develop intuition about the interplay of precision, accuracy, and bias when making predictions\nTo better understand how election forecasts are constructed\n\nThe data for our analysis will come from demographic and polling data. We will simulate building our model on October 2, 2012 -- approximately one month before the election. \nInstructions\nThe questions in this assignment are numbered. The questions are also usually italicised, to help you find them in the flow of this notebook. At some points you will be asked to write functions to carry out certain tasks. It's worth reading a little ahead to see how the function whose body you will fill in will be used.\nThis is a long homework. Please do not wait until the last minute to start it!\nThe data for this homework can be found at this link. Download it to the same folder where you are running this notebook, and uncompress it. 
You should find the following files there:\n\nus-states.json\nelectoral_votes.csv\npredictwise.csv\ng12.csv\ng08.csv\n2008results.csv\nnat.csv\np04.csv\n2012results.csv\ncleaned-state_data2012.csv\n\nSetup and Plotting code", "%matplotlib inline\nfrom collections import defaultdict\nimport json\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\nfrom matplotlib import rcParams\nimport matplotlib.cm as cm\nimport matplotlib as mpl\n\n#colorbrewer2 Dark2 qualitative color table\ndark2_colors = [(0.10588235294117647, 0.6196078431372549, 0.4666666666666667),\n (0.8509803921568627, 0.37254901960784315, 0.00784313725490196),\n (0.4588235294117647, 0.4392156862745098, 0.7019607843137254),\n (0.9058823529411765, 0.1607843137254902, 0.5411764705882353),\n (0.4, 0.6509803921568628, 0.11764705882352941),\n (0.9019607843137255, 0.6705882352941176, 0.00784313725490196),\n (0.6509803921568628, 0.4627450980392157, 0.11372549019607843)]\n\nrcParams['figure.figsize'] = (10, 6)\nrcParams['figure.dpi'] = 150\nrcParams['axes.color_cycle'] = dark2_colors\nrcParams['lines.linewidth'] = 2\nrcParams['axes.facecolor'] = 'white'\nrcParams['font.size'] = 14\nrcParams['patch.edgecolor'] = 'white'\nrcParams['patch.facecolor'] = dark2_colors[0]\nrcParams['font.family'] = 'StixGeneral'\n\n\ndef remove_border(axes=None, top=False, right=False, left=True, bottom=True):\n \"\"\"\n Minimize chartjunk by stripping out unnecesasry plot borders and axis ticks\n \n The top/right/left/bottom keywords toggle whether the corresponding plot border is drawn\n \"\"\"\n ax = axes or plt.gca()\n ax.spines['top'].set_visible(top)\n ax.spines['right'].set_visible(right)\n ax.spines['left'].set_visible(left)\n ax.spines['bottom'].set_visible(bottom)\n \n #turn off all ticks\n ax.yaxis.set_ticks_position('none')\n ax.xaxis.set_ticks_position('none')\n \n #now re-enable visibles\n if top:\n ax.xaxis.tick_top()\n if bottom:\n ax.xaxis.tick_bottom()\n if left:\n ax.yaxis.tick_left()\n if 
right:\n ax.yaxis.tick_right()\n \npd.set_option('display.width', 500)\npd.set_option('display.max_columns', 100)\n\n#this mapping between states and abbreviations will come in handy later\nstates_abbrev = {\n 'AK': 'Alaska',\n 'AL': 'Alabama',\n 'AR': 'Arkansas',\n 'AS': 'American Samoa',\n 'AZ': 'Arizona',\n 'CA': 'California',\n 'CO': 'Colorado',\n 'CT': 'Connecticut',\n 'DC': 'District of Columbia',\n 'DE': 'Delaware',\n 'FL': 'Florida',\n 'GA': 'Georgia',\n 'GU': 'Guam',\n 'HI': 'Hawaii',\n 'IA': 'Iowa',\n 'ID': 'Idaho',\n 'IL': 'Illinois',\n 'IN': 'Indiana',\n 'KS': 'Kansas',\n 'KY': 'Kentucky',\n 'LA': 'Louisiana',\n 'MA': 'Massachusetts',\n 'MD': 'Maryland',\n 'ME': 'Maine',\n 'MI': 'Michigan',\n 'MN': 'Minnesota',\n 'MO': 'Missouri',\n 'MP': 'Northern Mariana Islands',\n 'MS': 'Mississippi',\n 'MT': 'Montana',\n 'NA': 'National',\n 'NC': 'North Carolina',\n 'ND': 'North Dakota',\n 'NE': 'Nebraska',\n 'NH': 'New Hampshire',\n 'NJ': 'New Jersey',\n 'NM': 'New Mexico',\n 'NV': 'Nevada',\n 'NY': 'New York',\n 'OH': 'Ohio',\n 'OK': 'Oklahoma',\n 'OR': 'Oregon',\n 'PA': 'Pennsylvania',\n 'PR': 'Puerto Rico',\n 'RI': 'Rhode Island',\n 'SC': 'South Carolina',\n 'SD': 'South Dakota',\n 'TN': 'Tennessee',\n 'TX': 'Texas',\n 'UT': 'Utah',\n 'VA': 'Virginia',\n 'VI': 'Virgin Islands',\n 'VT': 'Vermont',\n 'WA': 'Washington',\n 'WI': 'Wisconsin',\n 'WV': 'West Virginia',\n 'WY': 'Wyoming'\n}", "Here is some code to plot State Choropleth maps in matplotlib. 
make_map is the function you will use.", "#adapted from https://github.com/dataiap/dataiap/blob/master/resources/util/map_util.py\n\n#load in state geometry\nstate2poly = defaultdict(list)\n\ndata = json.load(file(\"data/us-states.json\"))\nfor f in data['features']:\n state = states_abbrev[f['id']]\n geo = f['geometry']\n if geo['type'] == 'Polygon':\n for coords in geo['coordinates']:\n state2poly[state].append(coords)\n elif geo['type'] == 'MultiPolygon':\n for polygon in geo['coordinates']:\n state2poly[state].extend(polygon)\n\n \ndef draw_state(plot, stateid, **kwargs):\n \"\"\"\n draw_state(plot, stateid, color=..., **kwargs)\n \n Automatically draws a filled shape representing the state in\n subplot.\n The color keyword argument specifies the fill color. It accepts keyword\n arguments that plot() accepts\n \"\"\"\n for polygon in state2poly[stateid]:\n xs, ys = zip(*polygon)\n plot.fill(xs, ys, **kwargs)\n\n \ndef make_map(states, label):\n \"\"\"\n Draw a cloropleth map, that maps data onto the United States\n \n Inputs\n -------\n states : Column of a DataFrame\n The value for each state, to display on a map\n label : str\n Label of the color bar\n\n Returns\n --------\n The map\n \"\"\"\n fig = plt.figure(figsize=(12, 9))\n ax = plt.gca()\n\n if states.max() < 2: # colormap for election probabilities \n cmap = cm.RdBu\n vmin, vmax = 0, 1\n else: # colormap for electoral votes\n cmap = cm.binary\n vmin, vmax = 0, states.max()\n norm = mpl.colors.Normalize(vmin=vmin, vmax=vmax)\n \n skip = set(['National', 'District of Columbia', 'Guam', 'Puerto Rico',\n 'Virgin Islands', 'American Samoa', 'Northern Mariana Islands'])\n for state in states_abbrev.values():\n if state in skip:\n continue\n color = cmap(norm(states.ix[state]))\n draw_state(ax, state, color = color, ec='k')\n\n #add an inset colorbar\n ax1 = fig.add_axes([0.45, 0.70, 0.4, 0.02]) \n cb1=mpl.colorbar.ColorbarBase(ax1, cmap=cmap,\n norm=norm,\n orientation='horizontal')\n ax1.set_title(label)\n 
remove_border(ax, left=False, bottom=False)\n ax.set_xticks([])\n ax.set_yticks([])\n ax.set_xlim(-180, -60)\n ax.set_ylim(15, 75)\n return ax", "Today: the day we make the prediction", "# We are pretending to build our model 1 month before the election\nimport datetime\ntoday = datetime.datetime(2012, 10, 2)\ntoday", "Background: The Electoral College\nUS Presidential elections revolve around the <a href=\"http://en.wikipedia.org/wiki/Electoral_College_(United_States)\"> Electoral College </a>. In this system, each state receives a number of Electoral College votes depending on its population -- there are 538 votes in total. In most states, all of the electoral college votes are awarded to the presidential candidate who receives the most votes in that state. A candidate needs 269 votes to be elected President. \nThus, to calculate the total number of votes a candidate gets in the election, we add the electoral college votes in the states that he or she wins. (This is not entirely true, with Nebraska and Maine splitting their electoral college votes, but, for the purposes of this homework, we shall assume that the winner of the most votes in Maine and Nebraska gets ALL the electoral college votes there.) \nHere is the electoral vote breakdown by state:\nAs a matter of convention, we will index all our dataframes by the state name", "electoral_votes = pd.read_csv(\"data/electoral_votes.csv\").set_index('State')\nelectoral_votes.head(50)", "To illustrate the use of make_map we plot the Electoral College", "make_map(electoral_votes.Votes, \"Electoral Votes\");", "Question 1: Simulating elections\nThe PredictWise Baseline\nWe will start by examining a successful forecast that PredictWise made on October 2, 2012. This will give us a point of comparison for our own forecast models.\nPredictWise aggregated polling data and, for each state, estimated the probability that Obama or Romney would win. 
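Before looking at those probabilities, the winner-take-all bookkeeping described above can be sketched directly. The three state outcomes below are hypothetical; the electoral vote counts are the real 2012 figures:

```python
# Hypothetical state outcomes under the winner-take-all rule:
# a candidate receives ALL of a state's electoral votes or none of them.
electoral = {'Florida': 29, 'Ohio': 18, 'Texas': 38}           # 2012 vote counts
dem_carried = {'Florida': True, 'Ohio': True, 'Texas': False}  # made-up outcomes

# Total Democratic electoral votes: sum the counts of the states carried
dem_total = sum(votes for state, votes in electoral.items() if dem_carried[state])
print(dem_total)  # 29 + 18 = 47
```

The full simulation later in the notebook does exactly this sum over all 51 rows (50 states plus DC).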
Here are those estimated probabilities:", "predictwise = pd.read_csv('data/predictwise.csv').set_index('States')\npredictwise.head(10)", "1.1 Each row lists the probability predicted by PredictWise that Romney or Obama would win a state. The votes column lists the number of electoral college votes in that state. Use make_map to plot a map of the probability that Obama wins each state, according to this prediction.", "#your code here\nmake_map(predictwise.Obama, \"Probability of Obama Winning\");\n", "Later on in this homework we will explore some approaches to estimating probabilities like these and quantifying our uncertainty about them. But for the time being, we will focus on how to make a prediction assuming these probabilities are known.\nEven when we assume the win probabilities in each state are known, there is still uncertainty left in the election. We will use simulations from a simple probabilistic model to characterize this uncertainty. From these simulations, we will be able to make a prediction about the expected outcome of the election, and make a statement about how sure we are about it.\n1.2 We will assume that the outcome in each state is the result of an independent coin flip whose probability of coming up Obama is given by a DataFrame of state-wise win probabilities. Write a function that uses this predictive model to simulate the outcome of the election given a DataFrame of probabilities.", "\"\"\"\nFunction\n--------\nsimulate_election\n\nInputs\n------\nmodel : DataFrame\n A DataFrame summarizing an election forecast. 
The dataframe has 51 rows -- one for each state and DC\n It has the following columns:\n Obama : Forecasted probability that Obama wins the state\n Votes : Electoral votes for the state\n The DataFrame is indexed by state (i.e., model.index is an array of state names)\n \nn_sim : int\n Number of simulations to run\n \nReturns\n-------\nresults : Numpy array with n_sim elements\n Each element stores the number of electoral college votes Obama wins in each simulation. \n\"\"\"\n\n#Your code here\ndef simulate_election(model, n_sim):\n simulations = np.random.uniform(size=(51, n_sim))\n obama_votes = (simulations < model.Obama.values.reshape(-1, 1)) * model.Votes.values.reshape(-1, 1)\n results = obama_votes.sum(axis=0)\n return results\n\n", "The following cell takes the necessary DataFrame for the PredictWise data, and runs 10000 simulations. We use the results to compute the probability, according to this predictive model, that Obama wins the election (i.e., the probability that he receives 269 or more electoral college votes)", "result = simulate_election(predictwise, 10000)\nresult\n\n#compute the probability of an Obama win, given this simulation\n#Your code here\nprint((result >= 269).mean())\n", "1.3 Now, write a function called plot_simulation to visualize the simulation. 
This function should:\n\nBuild a histogram from the result of simulate_election\nOverplot the \"victory threshold\" of 269 votes as a vertical black line (hint: use axvline)\nOverplot the result (Obama winning 332 votes) as a vertical red line\nCompute the number of votes at the 5th and 95th quantiles, and display the difference (this is an estimate of the outcome's uncertainty)\nDisplay the probability of an Obama victory", "\"\"\"\nFunction\n--------\nplot_simulation\n\nInputs\n------\nsimulation: Numpy array with n_sim (see simulate_election) elements\n Each element stores the number of electoral college votes Obama wins in each simulation.\n \nReturns\n-------\nNothing \n\"\"\"\n\n#your code here\ndef plot_simulation(simulation): \n plt.hist(simulation, bins=np.arange(200, 538, 1), \n label='simulations', align='left', normed=True)\n plt.axvline(332, 0, .5, color='r', label='Actual Outcome')\n plt.axvline(269, 0, .5, color='k', label='Victory Threshold')\n p05 = np.percentile(simulation, 5.)\n p95 = np.percentile(simulation, 95.)\n iq = int(p95 - p05)\n pwin = ((simulation >= 269).mean() * 100)\n plt.title(\"Chance of Obama Victory: %0.2f%%, Spread: %d votes\" % (pwin, iq))\n plt.legend(frameon=False, loc='upper left')\n plt.xlabel(\"Obama Electoral College Votes\")\n plt.ylabel(\"Probability\")\n remove_border()\n", "Let's plot the result of the PredictWise simulation. Your plot should look something like this:\n<img src=\"http://i.imgur.com/uCOFXHp.png\">", "plot_simulation(result)\nplt.xlim(240,380)", "Evaluating and Validating our Forecast\nThe point of creating a probabilistic predictive model is to simultaneously make a forecast and give an estimate of how certain we are about it. \nHowever, in order to trust our prediction or our reported level of uncertainty, the model needs to be correct. 
We say a model is correct if it honestly accounts for all of the mechanisms of variation in the system we're forecasting.\nIn this section, we evaluate our prediction to get a sense of how useful it is, and we validate the predictive model by comparing it to real data.\n1.4 Suppose that we believe the model is correct. Under this assumption, we can evaluate our prediction by characterizing its accuracy and precision (see here for an illustration of these ideas). What does the above plot reveal about the accuracy and precision of the PredictWise model?\nYour Answer Here\n1.5 Unfortunately, we can never be absolutely sure that a model is correct, just as we can never be absolutely sure that the sun will rise tomorrow. But we can test a model by making predictions assuming that it is true and comparing it to real events -- this constitutes a hypothesis test. After testing a large number of predictions, if we find no evidence that says the model is wrong, we can have some degree of confidence that the model is right (the same reason we're still quite confident about the sun being here tomorrow). We call this process model checking, and use it to validate our model.\nDescribe how the graph provides one way of checking whether the prediction model is correct. How many predictions have we checked in this case? How could we increase our confidence in the model's correctness?\nYour Answer Here\nGallup Party Affiliation Poll\nNow we will try to estimate our own win probabilities to plug into our predictive model.\nWe will start with a simple forecast model. We will try to predict the outcome of the election based on the estimated proportion of people in each state who identify with one political party or the other.\nGallup measures the political leaning of each state, based on asking random people which party they identify or affiliate with. 
Here's the data they collected from January-June of 2012:", "gallup_2012=pd.read_csv(\"data/g12.csv\").set_index('State')\ngallup_2012[\"Unknown\"] = 100 - gallup_2012.Democrat - gallup_2012.Republican\ngallup_2012.head()\n", "Each row lists a state, the percent of surveyed individuals who identify as Democrat/Republican, the percent whose identification is unknown or who haven't made an affiliation yet, the margin between Democrats and Republicans (Dem_Adv: the percentage identifying as Democrats minus the percentage identifying as Republicans), and the number N of people surveyed.\n1.6 This survey can be used to predict the outcome of each State's election. The simplest forecast model assigns 100% probability that the state will vote for the majority party. Implement this simple forecast.", "\"\"\"\nFunction\n--------\nsimple_gallup_model\n\nA simple forecast that predicts an Obama (Democratic) victory with\n0 or 100% probability, depending on whether a state\nleans Republican or Democrat.\n\nInputs\n------\ngallup : DataFrame\n The Gallup dataframe above\n\nReturns\n-------\nmodel : DataFrame\n A dataframe with the following column\n * Obama: probability that the state votes for Obama. 
All values should be 0 or 1\n model.index should be set to gallup.index (that is, it should be indexed by state name)\n \nExamples\n---------\n>>> simple_gallup_model(gallup_2012).ix['Florida']\nObama 1\nName: Florida, dtype: float64\n>>> simple_gallup_model(gallup_2012).ix['Arizona']\nObama 0\nName: Arizona, dtype: float64\n\"\"\"\n\n#your code here\ndef simple_gallup_model(gallup):\n return pd.DataFrame(dict(Obama=(gallup.Dem_Adv > 0).astype(float)))", "Now, we run the simulation with this model, and plot it.", "\nmodel = simple_gallup_model(gallup_2012)\nmodel = model.join(electoral_votes)\n\nprediction = simulate_election(model, 10000)\n\nplot_simulation(prediction)\nplt.show()\nmake_map(model.Obama, \"P(Obama): Simple Model\")", "1.7 Attempt to validate the predictive model using the above simulation histogram. Does the evidence contradict the predictive model?\nThe predictive model assigns zero probability to the actual outcome (in red). This contradicts the evidence, and hence we should reject the predictive model.\nAdding Polling Uncertainty to the Predictive Model\nThe model above is brittle -- it includes no accounting for uncertainty, and thus makes predictions with 100% confidence. This is clearly wrong -- there are numerous sources of uncertainty in estimating election outcomes from a poll of affiliations. \nThe most obvious source of error in the Gallup data is the finite sample size -- Gallup did not poll everybody in America, and thus the party affiliations are subject to sampling errors. How much uncertainty does this introduce?\nOn their webpage discussing these data, Gallup notes that the sampling error for the states is between 3 and 6%, with it being 3% for most states. (The calculation of the sampling error itself is an exercise in statistics. It's fun to think of how you could arrive at the sampling error if it was not given to you. 
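For instance, the binomial standard error of a single polled proportion is straightforward to compute. The numbers below are purely illustrative, not Gallup's actual sample sizes:

```python
import math

# Standard error of an estimated proportion p from N respondents,
# in percentage points: 100 * sqrt(p * (1 - p) / N)
p, N = 0.5, 1000  # illustrative: a 50% proportion from 1000 respondents
se_pct = 100 * math.sqrt(p * (1 - p) / N)
print(round(se_pct, 2))  # 1.58 percentage points
```

The error on a margin such as Dem_Adv (a difference of two proportions) would be somewhat larger than this single-proportion error, which is roughly consistent with Gallup's quoted 3%.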
One way to do it would be to assume this was a two-choice situation and use binomial sampling error for the non-unknown answers, and further model the error for those who answered 'Unknown'.)\n1.8 Use Gallup's estimate of 3% to build a Gallup model with some uncertainty. Assume that the Dem_Adv column represents the mean of a Gaussian, whose standard deviation is 3%. Build the model in the function uncertain_gallup_model. Return a forecast where the probability of an Obama victory is given by the probability that a sample from the Dem_Adv Gaussian is positive.\nHint\nThe probability that a sample from a Gaussian with mean $\mu$ and standard deviation $\sigma$ exceeds a threshold $z$ is $1 - CDF(z)$, where $CDF$ is the Cumulative Distribution Function of the Gaussian:\n$$\nCDF(z) = \frac1{2}\left(1 + {\rm erf}\left(\frac{z - \mu}{\sqrt{2 \sigma^2}}\right)\right) \n$$", "\"\"\"\nFunction\n--------\nuncertain_gallup_model\n\nA forecast that predicts an Obama (Democratic) victory if the random variable drawn\nfrom a Gaussian with mean Dem_Adv and standard deviation 3% is >0\n\nInputs\n------\ngallup : DataFrame\n The Gallup dataframe above\n\nReturns\n-------\nmodel : DataFrame\n A dataframe with the following column\n * Obama: probability that the state votes for Obama.\n model.index should be set to gallup.index (that is, it should be indexed by state name)\n\"\"\"\n# your code here\nfrom scipy.special import erf\ndef uncertain_gallup_model(gallup):\n sigma = 3.0\n prob = 0.5*(1 + erf(gallup.Dem_Adv/np.sqrt(2*sigma**2)))\n return pd.DataFrame(dict(Obama=prob), index=gallup.index)\n", "We construct the model by estimating the probabilities:", "model = uncertain_gallup_model(gallup_2012)\nmodel = model.join(electoral_votes)\n", "Once again, we plot a map of these probabilities, run the simulation, and display the results", "make_map(model.Obama, \"P(Obama): Gallup + Uncertainty\")\nplt.show()\nprediction = simulate_election(model, 
10000)\nplot_simulation(prediction)\nplt.xlim(150,350)", "1.9 Attempt to validate the above model using the histogram. Does the predictive distribution appear to be consistent with the real data? Comment on the accuracy and precision of the prediction.\nYour answers here\nBiases\nWhile accounting for uncertainty is one important part of making predictions, we also want to avoid systematic errors. We call systematic over- or under-estimation of an unknown quantity bias. In the case of this forecast, our predictions would be biased if the estimates from this poll systematically over- or under-estimate vote proportions on election day. There are several reasons this might happen:\n\nGallup is wrong. The poll may systematically over- or under-estimate party affiliation. This could happen if the people who answer Gallup phone interviews are not a representative sample of people who actually vote, if Gallup's methodology is flawed, or if people lie during a Gallup poll.\nOur assumption about party affiliation is wrong. Party affiliation may systematically over- or under-estimate vote proportions. This could happen if people identify with one party, but strongly prefer the candidate from the other party, or if undecided voters do not end up splitting evenly between Democrats and Republicans on election day.\nOur assumption about equilibrium is wrong. This poll was released in August, with more than two months left for the elections. If there is a trend in the way people change their affiliations during this time period (for example, because one candidate is much worse at televised debates), an estimate in August could systematically miss the true value in November.\n\nOne way to account for bias is to calibrate our model by estimating the bias and adjusting for it. 
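To see why even a small bias matters, here is a standalone sketch (not part of the assignment's solution code) of how a one-point shift in the Gaussian mean from the 1.8 model moves a single state's win probability, keeping the same 3% standard deviation:

```python
from math import erf, sqrt

def p_win(mean, sigma=3.0):
    # P(X > 0) for X ~ Normal(mean, sigma^2), as in the uncertainty model above
    return 0.5 * (1 + erf(mean / sqrt(2 * sigma**2)))

# A state with a 2-point Democratic advantage, before and after a 1-point shift:
# the win probability drops by roughly 12 percentage points.
print(round(p_win(2.0), 2), round(p_win(1.0), 2))
```

A single point of bias moves a close state's probability substantially, and those shifts compound across many close states in the electoral college sum.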
Before we do this, let's explore how sensitive our prediction is to bias.\n1.10 Implement a biased_gallup forecast, which assumes the vote share for the Democrat on election day will be equal to Dem_Adv shifted by a fixed negative amount. We will call this shift the \"bias\", so a bias of 1% means that the expected vote share on election day is Dem_Adv-1.\nHint You can do this by wrapping the uncertain_gallup_model in a function that modifies its inputs.", "\"\"\"\nFunction\n--------\nbiased_gallup_poll\n\nSubtracts a fixed amount from Dem_Adv, before computing the uncertain_gallup_model.\nThis simulates correcting a hypothetical bias towards Democrats\nin the original Gallup data.\n\nInputs\n-------\ngallup : DataFrame\n The Gallup party affiliation data frame above\nbias : float\n The amount by which to shift each prediction\n \nExamples\n--------\n>>> model = biased_gallup_poll(gallup, 1.)\n>>> model.ix['Florida']\n0.460172\n\"\"\"\n#your code here\ndef biased_gallup_poll(gallup, bias):\n g2 = gallup.copy()\n g2.Dem_Adv -= bias\n return uncertain_gallup_model(g2)\n", "1.11 Simulate elections assuming a bias of 1% and 5%, and plot histograms for each one.", "#your code here\nmodel = biased_gallup_poll(gallup_2012, 1)\nmodel = model.join(electoral_votes)\nprediction = simulate_election(model, 10000)\nplot_simulation(prediction)\nplt.show()\n\n\nmodel = biased_gallup_poll(gallup_2012, 5)\nmodel = model.join(electoral_votes)\nprediction = simulate_election(model, 10000)\nplot_simulation(prediction)\nplt.show()\n\n", "Note that even a small bias can have a dramatic effect on the predictions. Pundits made a big fuss about bias during the last election, and for good reason -- it's an important effect, and the models are clearly sensitive to it. Forecasters like Nate Silver would have had an easier time convincing a wide audience about their methodology if bias wasn't an issue.\nFurthermore, because of the nature of the electoral college, biases get blown up large. 
For example, suppose you mis-predict the party Florida elects. We've possibly done this as a nation in the past :-). That's 29 votes right there. So, the penalty for even one misprediction is high.\nEstimating the size of the bias from the 2008 election\nWhile bias can lead to serious inaccuracy in our predictions, it is fairly easy to correct if we are able to estimate the size of the bias and adjust for it. This is one form of calibration.\nOne approach to calibrating a model is to use historical data to estimate the bias of a prediction model. We can use our same prediction model on historical data and compare our historical predictions to what actually occurred and see if, on average, the predictions missed the truth by a certain amount. Under some assumptions (discussed in a question below), we can use the estimate of the bias to adjust our current forecast.\nIn this case, we can use data from the 2008 election. (The Gallup data from 2008 are from the whole of 2008, including after the election):", "gallup_08 = pd.read_csv(\"data/g08.csv\").set_index('State')\nresults_08 = pd.read_csv('data/2008results.csv').set_index('State')\nprediction_08 = gallup_08[['Dem_Adv']]\nprediction_08['Dem_Win']=results_08[\"Obama Pct\"] - results_08[\"McCain Pct\"]\nprediction_08.head()", "1.12 Make a scatter plot using the prediction_08 dataframe of the Democratic advantage in the 2008 Gallup poll (X axis) compared to the Democratic win percentage -- the difference between Obama and McCain's vote percentage -- in the election (Y Axis). 
Overplot a linear fit to these data.\nHint\nThe np.polyfit function can compute linear fits, as can sklearn.linear_model.LinearModel", "#your code here\nplt.plot(prediction_08.Dem_Adv, prediction_08.Dem_Win, 'o');\nplt.xlabel('Democratic Advantage in 2008');\nplt.ylabel('Democratic Win in 2008');\nplt.title('Democratic Advantage vs Win in 2008');\n\n#plot fit\nfit = np.polyfit(prediction_08.Dem_Adv, prediction_08.Dem_Win, 1)\nx = np.linspace(-40, 80, 10)\ny = np.polyval(fit, x)\nplt.plot(x, x, '--k', alpha=.3, label='x=y')\nplt.plot(x, y)\nprint(fit)\n \n", "Notice that in a lot of states in which Gallup reported a Democratic affiliation, the results were strongly in the opposite direction. Why might that be? You can read more about the reasons for this here.\nA quick look at the graph will show you a number of states where Gallup showed a Democratic advantage, but where the elections were lost by the Democrats. Use Pandas to list these states.", "#your code here\nprediction_08[(prediction_08.Dem_Adv > 0) & (prediction_08.Dem_Win < 0)]", "We compute the average difference between the Democrat advantages in the election and Gallup poll", "print((prediction_08.Dem_Adv - prediction_08.Dem_Win).mean())", "The bias was roughly 8% in favor of the Democrats in the Gallup Poll, meaning that you would want to adjust predictions based on this poll down by that amount. This was the result of people in a number of Southern and Western states claiming to be affiliated as Democrats, then voting the other way. Or, since Gallup kept polling even after the elections, it could also represent people swept away by the 2008 election euphoria in those states. This is an illustration of why one needs to be careful with polls.\n1.13 *Calibrate your forecast of the 2012 election using the estimated bias from 2008. Validate the resulting model against the real 2012 outcome. 
Did the calibration help or hurt your prediction?*", "#your code here\nmodel = biased_gallup_poll(gallup_2012, 8.06)\nmodel = model.join(electoral_votes)\nprediction = simulate_election(model, 10000)\nplot_simulation(prediction)\nplt.xlim(150,350)", "1.14 Finally, given that we know the actual outcome of the 2012 race, and what you saw from the 2008 race, would you trust the results of an election forecast based on the 2012 Gallup party affiliation poll?\nAnswer\nThis was a disaster. The 8% calibration completely destroys the accuracy of our prediction in 2012. Our calibration made the assumptions that a) the bias in 2008 was the same as in 2012, and b) the bias in each state was the same.\nThere are several ways in which these assumptions may have been violated. Gallup may have changed their methodology to account for this bias already, leading to a different bias in 2012 from what there was in 2008. The state-by-state biases may have also been different -- voters in highly conservative states may have responded to polls differently from voters in liberal states, for instance. It might have been better to calibrate the bias on a state-wide or clustered basis.\nQuestion 2: Logistic Considerations\nIn the previous forecast, we used the strategy of taking some side-information about an election (the partisan affiliation poll) and relating that to the predicted outcome of the election. We tied these two quantities together using a very simplistic assumption, namely that the vote outcome is deterministically related to estimated partisan affiliation.\nIn this section, we use a more sophisticated approach to link side information -- usually called features or predictors -- to our prediction. This approach has several advantages, including the fact that we may use multiple features to perform our predictions. 
Such data may include demographic data, exit poll data, and data from previous elections.\nFirst, we'll construct a new feature called PVI, and use it and the Gallup poll to build predictions. Then, we'll use logistic regression to estimate win probabilities, and use these probabilities to build a prediction.\nThe Partisan Voting Index\nThe Partisan Voting Index (PVI) is defined as the excess swing towards a party in the previous election in a given state. In other words:\n$$\nPVI_{2008} (state) = \nDemocratic.Percent_{2004} ( state ) - Republican.Percent_{2004} ( state) - \\ \n \Big ( Democratic.Percent_{2004} (national) - Republican.Percent_{2004} (national) \Big )\n$$\nTo calculate it, let us first load the national percent results for Republicans and Democrats in the last 3 elections and convert it to the usual Democratic - Republican format.", "national_results=pd.read_csv(\"data/nat.csv\")\nnational_results.set_index('Year',inplace=True)\nnational_results.head()", "Let us also load in data about the 2004 elections from p04.csv which gets the results in the above form for the 2004 election for each state.", "polls04=pd.read_csv(\"data/p04.csv\")\npolls04.State=polls04.State.replace(states_abbrev)\npolls04.set_index(\"State\", inplace=True);\npolls04.head()\n\npvi08=polls04.Dem - polls04.Rep - (national_results.xs(2004)['Dem'] - national_results.xs(2004)['Rep'])\npvi08.head()", "2.1 Build a new DataFrame called e2008. 
The dataframe e2008 must have the following columns:\n\na column named pvi with the contents of the partisan vote index pvi08\na column named Dem_Adv which has the Democratic advantage from the frame prediction_08 of the last question with the mean subtracted out\na column named obama_win which has a 1 for each state Obama won in 2008, and 0 otherwise\na column named Dem_Win which has the 2008 election Obama percentage minus McCain percentage, also from the frame prediction_08\nThe DataFrame should be indexed and sorted by State", "#your code here\ne2008 = pd.DataFrame(dict(pvi=pvi08, Dem_Adv=(prediction_08.Dem_Adv - prediction_08.Dem_Adv.mean()),\n obama_win=(prediction_08.Dem_Win > 0).astype(float), Dem_Win=prediction_08.Dem_Win))\ne2008.head()\n", "We construct a similar frame for 2012, obtaining pvi using the 2008 Obama win data which we already have. There is no obama_win column since, well, our job is to predict it!", "pvi12 = e2008.Dem_Win - (national_results.xs(2008)['Dem'] - national_results.xs(2008)['Rep'])\ne2012 = pd.DataFrame(dict(pvi=pvi12, Dem_Adv=gallup_2012.Dem_Adv - gallup_2012.Dem_Adv.mean()))\ne2012 = e2012.sort_index()\ne2012.head()", "We load in the actual 2012 results so that we can compare our results to the predictions.", "results2012 = pd.read_csv(\"data/2012results.csv\")\nresults2012.set_index(\"State\", inplace=True)\nresults2012 = results2012.sort_index()\nresults2012.head()", "Exploratory Data Analysis\n2.2 Let's do a little exploratory data analysis. Plot a scatter plot of the two PVIs against each other. What are your findings? 
Is the partisan vote index relatively stable from election to election?", "#your code here\nplt.plot(e2008.pvi, e2012.pvi, 'o', label='Data')\nfit = np.polyfit(e2008.pvi, e2012.pvi, 1)\nx = np.linspace(-40, 80, 10)\ny = np.polyval(fit, x)\nplt.plot(x, x, '--k', alpha=.3, label='x=y')\nplt.plot(x, y, label='Linear fit')\nplt.xlabel('2008 Partisan Vote Index (PVI)');\nplt.ylabel('2012 Partisan Vote Index (PVI)');\nplt.title('PVI 2008 vs PVI 2012');\nplt.legend(loc='upper left')\n\n", "your answer here\n2.3 Let's do a bit more exploratory data analysis. Using a scatter plot, plot Dem_Adv against pvi in both 2008 and 2012. Use colors red and blue depending upon obama_win for the 2008 data points. Plot the 2012 data using gray color. Is it possible to draw a linear separation (a line of separation) between the red and the blue points on the graph?", "#your code here\nplt.xlabel('Gallup Democrat Advantage (from mean)')\nplt.ylabel('pvi')\ncolors = [\"red\",\"blue\"]\naxes = plt.gca()\nfor label in [0,1]:\n color = colors[label]\n mask = e2008.obama_win == label\n l = '2008 McCain States' if label == 0 else '2008 Obama States'\n axes.scatter(e2008[mask]['Dem_Adv'], e2008[mask]['pvi'], c=color, s=80, label=l)\naxes.scatter(e2012.Dem_Adv, e2012.pvi, c='black', s=80, marker='s', alpha=.4)\nplt.legend(frameon=False, scatterpoints=1, loc=\"upper left\")\nremove_border()\n", "your answer here\nThe Logistic Regression\nLogistic regression is a probabilistic model that links observed binary data to a set of features.\nSuppose that we have a set of binary (that is, taking the values 0 or 1) observations $Y_1,\cdots,Y_n$, and for each observation $Y_i$ we have a vector of features $X_i$. 
The logistic regression model assumes that there is some set of weights, coefficients, or parameters $\beta$, one for each feature, so that the data were generated by flipping a weighted coin whose probability of giving a 1 is given by the following equation:\n$$\nP(Y_i = 1) = \mathrm{logistic}(\sum_j \beta_j X_{ij}),\n$$\nwhere\n$$\n\mathrm{logistic}(x) = \frac{e^x}{1+e^x}.\n$$\nWhen we fit a logistic regression model, we determine values for each $\beta$ that allow the model to best fit the training data we have observed (the 2008 election). Once we do this, we can use these coefficients to make predictions about data we have not yet observed (the 2012 election).\nSometimes this estimation procedure will overfit the training data yielding predictions that are difficult to generalize to unobserved data. Usually, this occurs when the magnitudes of the components of $\beta$ become too large. To prevent this, we can use a technique called regularization to make the procedure prefer parameter vectors that have smaller magnitude. We can adjust the strength of this regularization to reduce the error in our predictions.\nWe now write some code as technology for doing logistic regression. By the time you start doing this homework, you will have learnt the basics of logistic regression, but not all the mechanisms of cross-validation of data sets. 
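The logistic link defined above is easy to check numerically. The weights and features below are arbitrary illustrative numbers, not fitted values:

```python
import math

def logistic(x):
    # logistic(x) = e^x / (1 + e^x), mapping any real number into (0, 1)
    return math.exp(x) / (1 + math.exp(x))

beta = (0.5, 0.25)      # illustrative weights, one per feature
features = (2.0, -4.0)  # illustrative feature vector for one observation
z = sum(b * f for b, f in zip(beta, features))  # 0.5*2 + 0.25*(-4) = 0.0
print(logistic(z))  # logistic(0) = 0.5, i.e. a fair coin
```

Large positive weighted sums push the probability toward 1, large negative sums toward 0, and a sum of exactly zero sits on the decision boundary.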
Thus we provide here the code for you to do the logistic regression, and the accompanying cross-validation.\nWe first build the features from the 2008 data frame, returning y, the vector of labels, and X the feature-sample matrix where the columns are the features in order from the list featurelist, and each row is a data \"point\".", "from sklearn.linear_model import LogisticRegression\n\ndef prepare_features(frame2008, featureslist):\n y = frame2008.obama_win.values\n X = frame2008[featureslist].values\n if len(X.shape) == 1:\n X = X.reshape(-1, 1)\n return y, X", "We use the above function to get the label vector and feature-sample matrix for feeding to scikit-learn. We then use the usual scikit-learn incantation fit to fit a logistic regression model with regularization parameter C. The parameter C is a hyperparameter of the model, and is used to penalize too high values of the parameter coefficients in the loss function that is minimized to perform the logistic regression. We build a new dataframe with the usual Obama column, that holds the probabilities used to make the prediction. Finally we return a tuple of the dataframe and the classifier instance, in that order.", "def fit_logistic(frame2008, frame2012, featureslist, reg=0.0001):\n y, X = prepare_features(frame2008, featureslist)\n clf2 = LogisticRegression(C=reg)\n clf2.fit(X, y)\n X_new = frame2012[featureslist]\n obama_probs = clf2.predict_proba(X_new)[:, 1]\n \n df = pd.DataFrame(index=frame2012.index)\n df['Obama'] = obama_probs\n return df, clf2", "We are not done yet. In order to estimate C, we perform a grid search over many C to find the best C that minimizes the loss function. For each point on that grid, we carry out an n_folds-fold cross-validation. What does this mean?\nSuppose n_folds=10. Then we will repeat the fit 10 times, each time randomly choosing 51/10 ~ 5 states out as a test set, and using the remaining 45 or 46 as the training set. 
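The fold arithmetic in that description can be sketched by hand (GridSearchCV does the splitting internally; the round-robin assignment below is just for illustration):

```python
# 10-fold cross-validation over 51 states: each fold holds out 5 or 6 states,
# leaving 45 or 46 states for training.
n_states, n_folds = 51, 10
indices = list(range(n_states))
folds = [indices[i::n_folds] for i in range(n_folds)]  # round-robin assignment

test_sizes = sorted(len(f) for f in folds)
train_sizes = sorted(n_states - s for s in test_sizes)
print(test_sizes)   # [5, 5, 5, 5, 5, 5, 5, 5, 5, 6]
print(train_sizes)  # [45, 46, 46, 46, 46, 46, 46, 46, 46, 46]
```

Every state appears in exactly one test fold, so each prediction is scored on data the fit never saw.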
We use the average score on the test set to score each particular choice of C, and choose the one with the best performance.", "from sklearn.grid_search import GridSearchCV\n\ndef cv_optimize(frame2008, featureslist, n_folds=10, num_p=100):\n y, X = prepare_features(frame2008, featureslist)\n clf = LogisticRegression()\n parameters = {\"C\": np.logspace(-4, 3, num=num_p)}\n gs = GridSearchCV(clf, param_grid=parameters, cv=n_folds)\n gs.fit(X, y)\n return gs.best_params_, gs.best_score_\n", "Finally we write the function that we use to make our fits. It takes both the 2008 and 2012 frames as arguments, as well as the featurelist, and the number of cross-validation folds to do. It uses the above defined cv_optimize to find the best-fit C, and then uses this value to return the tuple of result dataframe and classifier described above. This is the function you will be using.", "def cv_and_fit(frame2008, frame2012, featureslist, n_folds=5):\n bp, bs = cv_optimize(frame2008, featureslist, n_folds=n_folds)\n predict, clf = fit_logistic(frame2008, frame2012, featureslist, reg=bp['C'])\n return predict, clf", "2.4 *Carry out a logistic fit using the cv_and_fit function developed above. 
As your featurelist, use the features we have: Dem_Adv and pvi.*", "#your code here\n\nres, clf = cv_and_fit(e2008, e2012, ['Dem_Adv', 'pvi'])\npredict2012_logistic = res.join(electoral_votes)\npredict2012_logistic.head()\n", "2.5 As before, plot a histogram and map of the simulation results, and interpret the results in terms of accuracy and precision.", "#code to make the histogram\n#your code here\n\nprediction_2012 = simulate_election(predict2012_logistic, 10000)\nplot_simulation(prediction_2012)\nplt.xlim(200,450)\n\n\n#code to make the map\n#your code here\nmake_map(predict2012_logistic.Obama, \"P(Obama): Logistic\")\nplt.show()", "your answer here\nClassifier Decision boundary\nOne nice way to visualize a 2-dimensional logistic regression is to plot the probability as a function of each dimension. This shows the decision boundary -- the set of parameter values where the logistic fit yields P=0.5, and shifts between a preference for Obama or McCain/Romney.\nThe function below draws such a figure (it is adapted from the scikit-learn website), and overplots the data.", "from matplotlib.colors import ListedColormap\ndef points_plot(e2008, e2012, clf):\n \"\"\"\n e2008: The e2008 data\n e2012: The e2012 data\n clf: classifier\n \"\"\"\n Xtrain = e2008[['Dem_Adv', 'pvi']].values\n Xtest = e2012[['Dem_Adv', 'pvi']].values\n ytrain = e2008['obama_win'].values == 1\n \n X=np.concatenate((Xtrain, Xtest))\n \n # evenly sampled points\n x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5\n y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5\n xx, yy = np.meshgrid(np.linspace(x_min, x_max, 50),\n np.linspace(y_min, y_max, 50))\n plt.xlim(xx.min(), xx.max())\n plt.ylim(yy.min(), yy.max())\n\n #plot background colors\n ax = plt.gca()\n Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]\n Z = Z.reshape(xx.shape)\n cs = ax.contourf(xx, yy, Z, cmap='RdBu', alpha=.5)\n cs2 = ax.contour(xx, yy, Z, cmap='RdBu', alpha=.5)\n plt.clabel(cs2, fmt = '%2.1f', colors = 'k', 
fontsize=14)\n \n # Plot the 2008 points\n ax.plot(Xtrain[ytrain == 0, 0], Xtrain[ytrain == 0, 1], 'ro', label='2008 McCain')\n ax.plot(Xtrain[ytrain == 1, 0], Xtrain[ytrain == 1, 1], 'bo', label='2008 Obama')\n \n # and the 2012 points\n ax.scatter(Xtest[:, 0], Xtest[:, 1], c='k', marker=\"s\", s=50, facecolors=\"k\", alpha=.5, label='2012')\n plt.legend(loc='upper left', scatterpoints=1, numpoints=1)\n\n return ax", "2.6 Plot your results on the classification space boundary plot. How sharp is the classification boundary, and how does this translate into accuracy and precision of the results?", "#your code here\npoints_plot(e2008, e2012, clf)\nplt.xlabel(\"Dem_Adv (from mean)\")\nplt.ylabel(\"PVI\")\n", "Answer:\nThe sharpness of the classifier boundary, as defined by the closeness of the contours near a probability of 0.5, gives us a sense of precision. Imagine that the boundary is very tight, tighter than what you can see in the graph. Then most states will be away from the 0.5 line, and the spread in the results will be less, or the precision higher. This is not the only consideration: indeed one must ask, how many states fall smack into the middle, say between 0.3 and 0.7. The more that do, the less precise the results will be, as there will be a greater number of simulations in which they will cross over to another party.\nTo assess accuracy, we would have to see the actual outcome of the states in 2012. Accuracy would be a function of the number of states that end up on the \"wrong\" side of the 0.5 line, and how deep they end up on the wrong side. In terms of characterizing the 2008 outcomes, it seems that the classifier is quite accurate, with most misclassifications appearing in the grey area of the classification boundary.\nQuestion 3: Trying to catch Silver: Poll Aggregation\nIn the previous section, we tried to use heterogeneous side-information to build predictions of the election outcome. 
In this section, we switch gears to bringing together homogeneous information about the election by aggregating different polling results.\nThis approach -- used by the professional poll analysts -- involves combining many polls about the election itself. One advantage of this approach is that it addresses the problem of bias in individual polls, a problem we found difficult to deal with in problem 1. If we assume that the polls are all attempting to estimate the same quantity, any individual biases should cancel out when averaging many polls (pollsters also try to correct for known biases). This is often a better assumption than assuming constant bias between election cycles, as we did above.\nThe following table aggregates many of the pre-election polls available as of October 2, 2012. We are most interested in the column \"obama_spread\". We will clean the data for you:", "multipoll = pd.read_csv('data/cleaned-state_data2012.csv', index_col=0)\n\n#convert state abbreviation to full name\nmultipoll.State.replace(states_abbrev, inplace=True)\n\n#convert dates from strings to date objects, and compute midpoint\n\nmultipoll.start_date = pd.to_datetime(multipoll.start_date)\nmultipoll.end_date = pd.to_datetime(multipoll.end_date)\nmultipoll['poll_date'] = multipoll.start_date + (multipoll.end_date - multipoll.start_date).values / 2\n\n#compute the poll age relative to Oct 2, in days\nmultipoll['age_days'] = (today - multipoll['poll_date']).values / np.timedelta64(1, 'D')\n\n#drop any rows with data from after oct 2\nmultipoll = multipoll[multipoll.age_days > 0]\n\n#drop unneeded columns\nmultipoll = multipoll.drop(['Date', 'start_date', 'end_date', 'Spread'], axis=1)\n\n#add electoral vote counts\nmultipoll = multipoll.join(electoral_votes, on='State')\n\n#drop rows with missing data (dropna is not in-place, so reassign)\nmultipoll = multipoll.dropna()\n\nmultipoll.head()", "3.1 Using this data, compute a new data frame that averages the obama_spread for each state. 
Also compute the standard deviation of the obama_spread in each state, and the number of polls for each state.\nDefine a function state_average which returns this dataframe.\nHint\npd.GroupBy could come in handy", "groups = multipoll.groupby('State')\nmean = groups.mean()['obama_spread']\nstd = groups.std()['obama_spread']\nsize = groups.count()['Pollster']\nprint std[std.isnull()]\nstd[std.isnull()] = 0.05 * mean[std.isnull()]\n\naverages = pd.DataFrame(dict(N = size, poll_mean = mean, poll_std = std))\naverages\n\n\"\"\"\nFunction\n--------\nstate_average\n\nInputs\n------\nmultipoll : DataFrame\n The multipoll data above\n \nReturns\n-------\naverages : DataFrame\n A dataframe, indexed by State, with the following columns:\n N: Number of polls averaged together\n poll_mean: The average value for obama_spread for all polls in this state\n poll_std: The standard deviation of obama_spread\n \nNotes\n-----\nFor states where poll_std isn't finite (because N is too small), estimate the\npoll_std value as .05 * poll_mean\n\"\"\"\n#your code here\ndef state_average(multipoll):\n groups = multipoll.groupby('State')\n mean = groups.mean()['obama_spread']\n std = groups.std()['obama_spread']\n size = groups.count()['Pollster']\n std[std.isnull()] = 0.05 * mean[std.isnull()]\n \n averages = pd.DataFrame(dict(N = size, poll_mean = mean, poll_std = std))\n return averages", "Let's call the function on the multipoll data frame, and join it with the electoral_votes frame.", "avg = state_average(multipoll).join(electoral_votes, how='outer')\navg.head()", "Some of the reddest and bluest states are not present in this data (people don't bother polling there as much). 
The default_missing function gives them strong Democratic/Republican advantages.", "def default_missing(results):\n red_states = [\"Alabama\", \"Alaska\", \"Arkansas\", \"Idaho\", \"Wyoming\"]\n blue_states = [\"Delaware\", \"District of Columbia\", \"Hawaii\"]\n results.ix[red_states, [\"poll_mean\"]] = -100.0\n results.ix[red_states, [\"poll_std\"]] = 0.1\n results.ix[blue_states, [\"poll_mean\"]] = 100.0\n results.ix[blue_states, [\"poll_std\"]] = 0.1\ndefault_missing(avg)\navg", "Unweighted aggregation\n3.2 Build an aggregated_poll_model function that takes the avg DataFrame as input, and returns a forecast DataFrame\nin the format you've been using to simulate elections. Assume that the probability that Obama wins a state\nis given by the probability that a draw from a Gaussian with $\mu=$poll_mean and $\sigma=$poll_std is positive.", "\"\"\"\nFunction\n--------\naggregated_poll_model\n\nInputs\n------\npolls : DataFrame\n DataFrame indexed by State, with the following columns:\n poll_mean\n poll_std\n Votes\n\nReturns\n-------\nA DataFrame indexed by State, with the following columns:\n Votes: Electoral votes for that state\n Obama: Estimated probability that Obama wins the state\n\"\"\"\n#your code here\ndef aggregated_poll_model(polls):\n sigma = polls.poll_std\n prob = 0.5*(1 + erf(polls.poll_mean / np.sqrt(2 * sigma ** 2)))\n return pd.DataFrame(dict(Obama=prob, Votes=polls.Votes))\n ", "3.3 Run 10,000 simulations with this model, and plot the results. Describe the results in a paragraph -- compare the methodology and the simulation outcome to the Gallup poll. Also plot the usual map of the probabilities.", "#your code here\nmodel = aggregated_poll_model(avg)\npredict_poll_aggregation = simulate_election(model, 10000)\nplot_simulation(predict_poll_aggregation)\nplt.xlim(250, 400)\n", "Answer: The accuracy of this poll is somewhat greater than just taking the Gallup poll. 
This is probably because\n\nWe're using as inputs polls that are trying to predict the election, so there is nothing \"lost in translation\", and\nWe are averaging many polls together, so some of their biases are likely to offset each other.\n\nOne problem is that we treated all polls as equal. Thus a bad poll with a small sample size is given equal footing with a good poll with a large one. In this way we are introducing bias both due to individual poll biases and individual poll sampling errors.\nFurthermore, we estimate the standard deviation by simply taking the spread of the percentages in the polls, without considering their individual margins of error. In the limit of very small sampling error per poll, this might be ok. However, in states with some polls with large margins of error (Kansas, for example), this can lead to an overestimate of the standard deviation, pushing the predicted probability toward 0.5. This, in turn, can hurt precision since it increases the chance that a state will flip sides in a simulation.", "#your code here\nmake_map(model.Obama, \"P(Obama): Poll Aggregation\")\nplt.show()", "Weighted Aggregation\nNot all polls are equally valuable. A poll with a larger margin of error should not influence a forecast as heavily. Likewise, a poll further in the past is a less valuable indicator of current (or future) public opinion. For this reason, polls are often weighted when building forecasts. \nA weighted estimate of Obama's advantage in a given state is given by\n$$\n\mu = \frac{\sum w_i \times \mu_i}{\sum w_i}\n$$\nwhere $\mu_i$ are individual polling measurements for a state, and $w_i$ are the weights assigned to each poll. 
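As a quick sketch (with made-up spread values, weights, and per-poll variances, not numbers from the dataset), the weighted mean and its variance can be computed directly with numpy:

```python
import numpy as np

# hypothetical per-poll spreads mu_i, weights w_i, and variances Var(mu_i)
spreads = np.array([2.0, 3.5, 1.0])
weights = np.array([4.0, 1.0, 2.0])
var_i = np.array([1.5, 2.5, 2.0])

# mu = sum(w_i * mu_i) / sum(w_i)
mu = np.sum(weights * spreads) / np.sum(weights)

# Var(mu) = sum(w_i^2 * Var(mu_i)) / (sum(w_i))^2, assuming independent polls
var_mu = np.sum(weights**2 * var_i) / np.sum(weights)**2
```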
Assuming each measurement is independent and that the $\mu_i$ are unbiased estimators of $\mu$, the estimate of the variance of the weighted mean is\n$$\textrm{Var}(\mu) = \frac{1}{(\sum_i w_i)^2} \sum_{i=1}^n w_i^2 \textrm{Var}(\mu_i).$$\nWhat's the matter with Kansas?\nWe need to find an estimator of the variance of $\mu_i$, $Var(\mu_i)$. In the case of states that have a lot of polls, we expect the bias in $\mu$ to be negligible, and then the above formula for the variance of $\mu$ holds. However, let's take a look at the case of Kansas.", "multipoll[multipoll.State==\"Kansas\"]\n", "There are only two polls in the last year! And, the results in the two polls are far, very far from the mean.\nNow, Kansas is a safely Republican state, so this doesn't really matter, but if it were a swing state, we'd be in a pickle. We'd have no unbiased estimator of the variance in Kansas. So, to be conservative, and play it safe, we follow the same tack we did with the unweighted averaging of polls, and simply assume that the variance in a state is the square of the standard deviation of obama_spread.\nThis will overestimate the errors for a lot of states, but unless we do a detailed state-by-state analysis, it's better to be conservative. Thus, we use:\n$\textrm{Var}(\mu)$ = obama_spread.std()$^2$ .\nThe weights $w_i$ should combine the uncertainties from the margin of error and the age of the forecast. One such combination is:\n$$\nw_i = \frac1{MoE^2} \times \lambda_{\rm age}\n$$\nwhere\n$$\n\lambda_{\rm age} = 0.5^{\frac{{\rm age}}{30 ~{\rm days}}}\n$$\nThis model makes a few ad-hoc assumptions:\n\nThe equation for $\sigma$ assumes that every measurement is independent. This is not true in the case that a given pollster in a state makes multiple polls, perhaps with some of the same respondents (a longitudinal survey). 
But it's a good assumption to start with.\nThe equation for $\lambda_{\rm age}$ assumes that a 30-day old poll is half as valuable as a current one.\n\n3.4 Nevertheless, it's worth exploring how these assumptions affect the forecast model. Implement the model in the function weighted_state_average", "\"\"\"\nFunction\n--------\nweighted_state_average\n\nInputs\n------\nmultipoll : DataFrame\n The multipoll data above\n \nReturns\n-------\naverages : DataFrame\n A dataframe, indexed by State, with the following columns:\n N: Number of polls averaged together\n poll_mean: The average value for obama_spread for all polls in this state\n poll_std: The standard deviation of obama_spread\n \nNotes\n-----\nFor states where poll_std isn't finite (because N is too small), estimate the\npoll_std value as .05 * poll_mean\n\"\"\"\n\n#your code here\ndef weights(df):\n decay = 0.5**(df.age_days/30)\n w = decay / df.MoE ** 2\n return w\n\ndef wmean(df):\n w = weights(df)\n result = (df.obama_spread*w).sum() / w.sum()\n return result\n\ndef wsig(df):\n return df.obama_spread.std()\n\ndef weighted_state_average(multipoll):\n groups = multipoll.groupby('State')\n poll_mean = groups.apply(wmean)\n poll_std = groups.apply(wsig)\n poll_std[poll_std.isnull()] = 0.05*poll_mean[poll_std.isnull()]\n return pd.DataFrame(dict(poll_mean = poll_mean, poll_std = poll_std))\n \n", "3.5 Put this all together -- compute a new estimate of poll_mean and poll_std for each state, apply the default_missing function to handle missing rows, build a forecast with aggregated_poll_model, run 10,000 simulations, and plot the results, both as a histogram and as a map.", "#your code here\naverage = weighted_state_average(multipoll).join(electoral_votes, how='outer')\ndefault_missing(average)\nmodel = aggregated_poll_model(average)\npredict_poll_weighted = simulate_election(model, 10000)\nplot_simulation(predict_poll_weighted)\nplt.xlim(250, 400)\n\n#your map code here\nmake_map(model.Obama, \"P(Obama): Weighted 
Polls\")", "3.6 Discuss your results in terms of bias, accuracy and precision, as before\nAnswer: The accuracy of this poll is higher than before, primarily as a result of removing bias in the calculation of the weighted means. As per the discussion earlier, the precision is really not much better, as we use the same method to calculate the standrd deviations.\nThis points to the importance of getting a better grip on these standard deviations to improve the precisions of one's forecasts. Pollsters engage in trend analysis, a state by state weighting of the standard deviation, weighting pollsters, and other methods to estimate the standard deviations more accurately.\nFor fun, but not to hand in, play around with turning off the time decay weight and the sample error weight individually.\nParting Thoughts: What do the pros do?\nThe models we have explored in this homework have been fairly ad-hoc. Still, we have seen predicting by simulation, prediction using heterogeneous side-features, and finally by weighting polls that are made in the election season. The pros pretty much start from poll-averaging, adding in demographics and economic information, and moving onto trend-estimation as the election gets closer. They also employ models of likely voters vs registered voters, and how independents might break. At this point, you are prepared to go and read more about these techniques, so let us leave you with some links to read:\n\n\nSkipper Seabold's reconstruction of parts of Nate Silver's model: https://github.com/jseabold/538model . We've drawn direct inspiration from his work , and indeed have used some of the data he provides in his repository\n\n\nThe simulation techniques are partially drawn from Sam Wang's work at http://election.princeton.edu . 
Be sure to check out the FAQ, Methods section, and MATLAB code on his site.\n\n\nNate Silver, whom we are still desperately seeking, has written a lot about his techniques: http://www.fivethirtyeight.com/2008/03/frequently-asked-questions-last-revised.html. Start there and look around.\n\n\nDrew Linzer uses Bayesian techniques; check out his work at: http://votamatic.org/evaluating-the-forecasting-model/\n\n\nHow to submit\nTo submit your homework, create a folder named lastname_firstinitial_hw2 and place this notebook file in the folder. Also put the data folder in this folder. Make sure everything still works! Select Kernel->Restart Kernel to restart Python, Cell->Run All to run all cells. You shouldn't hit any errors. Compress the folder (please use .zip compression) and submit to the CS109 dropbox in the appropriate folder. If we cannot access your work because these directions are not followed correctly, we will not grade your work.\n\ncss tweaks in this cell\n<style>\ndiv.text_cell_render {\n line-height: 150%;\n font-size: 110%;\n width: 850px;\n margin-left:50px;\n margin-right:auto;\n }\n</style>" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
wakeful-sun/imageprocessor
OnroadLanesDetector.ipynb
mit
[ "Welcome to on road lane detection program\nThe program has an image processing pipeline that supports both RGB images and BGR video input.\nRGB image processing consists of the following steps:\n - image file reading. File expected to exist on disk\n - RGB to BGR conversion\n - common image processing pipeline\n - processing result conversion (BGR to RGB)\n - result image output\n\nBGR video processing consists of the following steps:\n - video file capturing. File expected to exist on disk\n - video frame processing loop:\n * common image processing pipeline\n * result frame output\n - resources release\n\nImage processing pipeline does:\n - BGR to HSV conversion\n - white and yellow colors filter, creates b/w image. Makes other colored objects black\n - detection area filter. Creates new b/w image output of color filter result\n - edge detection\n - lines detection with help of Hough transform method (cv2.HoughLines tool)\n - lanes detection with help of custom algorithm that does line groups detection and median line\n calculation for each found group\n - lane lines Polar coordinate to Cartesian coordinate conversion, drawing lane lines\n - result image composition from initial frame and detected lane lines images\n - result image returned to program for output/further operations\n\nColor detection on the HSV image allows us to efficiently get rid of noise caused by shadows and road surface color artifacts. Further b/w image processing might save processor time.\nAs the next improvement I would extract configuration parameters into a separate entity. And would use the same instance of it for on-the-fly configuration/adjustment. 
It can become an interface for another system :)\nImports and a helper function to compose the full file path", "import matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimport numpy as np\nimport cv2\nimport pandas\nimport os\n\ndef getPathFor(file_path):\n current_directory = %pwd\n path = os.path.join(current_directory, file_path)\n \n print(\"About to open file: {}\\n\".format(path))\n return path\n\n", "The next class is responsible for filtering out the lane detection area:", "class DetectionAreaFilter:\n\n def __init__(self):\n self._lower_yellow = np.array([20, 0, 170], dtype=np.uint8)\n self._upper_yellow = np.array([55, 255, 255], dtype=np.uint8)\n\n self._lower_white = np.array([0, 0, 220], dtype=np.uint8)\n self._upper_white = np.array([255, 25, 255], dtype=np.uint8)\n\n self._ignore_mask_color = 255\n\n def getColorMask(self, hsv_image):\n mask_yellow = cv2.inRange(hsv_image, self._lower_yellow, self._upper_yellow)\n mask_white = cv2.inRange(hsv_image, self._lower_white, self._upper_white)\n\n mask = cv2.add(mask_white, mask_yellow)\n return mask\n\n def applyDetectionArea(self, bw_image, width_adjustment=60, height_adjustment=65):\n im_height = bw_image.shape[0]\n im_half_height = im_height // 2\n im_width = bw_image.shape[1]\n im_half_width = im_width // 2\n\n area_left_bottom = (0, im_height)\n area_left_top = (im_half_width - width_adjustment, im_half_height + height_adjustment)\n area_right_top = (im_half_width + width_adjustment, im_half_height + height_adjustment)\n area_right_bottom = (im_width, im_height)\n\n detection_area = [area_left_bottom, area_left_top, area_right_top, area_right_bottom]\n vertices = np.array([detection_area], dtype=np.int32)\n\n mask = np.zeros_like(bw_image)\n cv2.fillPoly(mask, vertices, self._ignore_mask_color)\n\n masked_image = cv2.bitwise_and(bw_image, mask)\n return masked_image\n", "Here is the result of the getColorMask function, which turns all white and yellow objects into white ones and makes other colored objects black:\n\nAnd 
the result of the applyDetectionArea function, which creates a new b/w image with the applied trapezium-shaped mask out of the given b/w image:\n\nThen goes Canny edge detection:", "def getEdges(image, low_threshold=50, high_threshold=150):\n edges = cv2.Canny(image, low_threshold, high_threshold)\n return edges", "The result of Canny edge detection:\n\nLine detection with the help of the Hough transform method", "def getLaneLines(edges):\n deg = np.pi/180\n lines = cv2.HoughLines(edges, 1, 1*deg, 40)\n\n if lines is None:\n return np.array([])\n\n points_array = list()\n for line in lines:\n for rho, theta in line:\n points_array.append((rho, theta))\n\n return np.array(points_array)", "Result of getLaneLines transformed into lines\n\nThen goes line group detection, which accepts the getLaneLines result as an input parameter.\nThe CoordinateSorter class does line group detection and median line calculation for each found group. \nThe influence of its input parameters on the 'sort' function behavior can be described with the help of the Gherkin language:\nCoordinateSorter(max_distance_delta, max_angle_delta, threshold)\n\n Scenario: 'max_distance_delta' and 'max_angle_delta' parameters allow to control line group detection\n Given: 5 lines have been given to sort\n And: it is possible to create a chain 'chain_1' of lines line1, line2, line3\n Where: distance between links is less than (or equal to) (max_distance_delta, max_angle_delta)\n And: it is possible to create a chain 'chain_2' of lines line4, line5\n Where: distance between links is less than (or equal to) (max_distance_delta, max_angle_delta)\n And: distance between chain_1 and chain_2 edges is more than (max_distance_delta, max_angle_delta)\n Then: chain_1 and chain_2 considered as two separate lines\n\n Scenario: 'threshold' parameter allows to filter out noise lines\n Given: threshold = 4, set of lines\n When: sorter found 3 groups of lines\n And: the first set of lines contains 10 lines, second - 5 lines\n But: the third set of lines contains 3 lines\n 
Then: the third is considered as noise and will not be present in the sorting result\n\nThe resulting line is calculated as the median of all lines in a group", "class CoordinateSorter:\n\n def __init__(self, max_distance_delta, max_angle_delta, threshold):\n if max_distance_delta < 0:\n raise ValueError(\"[max_distance_delta] must be positive number\")\n\n if max_angle_delta < 0:\n raise ValueError(\"[max_angle_delta] must be positive number\")\n\n if threshold < 1 or type(threshold) != int:\n raise ValueError(\"[threshold] expected to be integer greater than or equal to 1\")\n\n self._max_point_distance = (max_distance_delta, max_angle_delta)\n self._min_points_amount = threshold\n\n def _sortPointsByDistance(self, points_dict):\n set_list = list()\n\n for key, value in points_dict.items():\n indexes_set = set()\n set_list.append(indexes_set)\n indexes_set.add(key)\n\n for inner_key, inner_value in points_dict.items():\n point_distance = abs(np.subtract(value, inner_value))\n\n if point_distance[0] <= self._max_point_distance[0] \\\n and point_distance[1] <= self._max_point_distance[1]:\n indexes_set.add(inner_key)\n\n return set_list\n\n def _splitOnGroups(self, set_list_source):\n\n sorted_source = list(set_list_source)\n sorted_source.sort(key=len, reverse=True)\n\n extremums = list()\n\n def find_extremums(ordered_list_of_set_items):\n if len(ordered_list_of_set_items) == 0:\n return\n\n first_extremum = ordered_list_of_set_items[0]\n items_for_further_sorting = list()\n\n for dot_set in ordered_list_of_set_items:\n if dot_set.issubset(first_extremum):\n continue\n else:\n if len(first_extremum.intersection(dot_set)):\n first_extremum = first_extremum.union(dot_set)\n else:\n items_for_further_sorting.append(dot_set)\n\n extremums.append(first_extremum)\n find_extremums(items_for_further_sorting)\n\n find_extremums(sorted_source)\n\n filtered_extremums = filter(lambda x: len(x) >= self._min_points_amount, extremums)\n return filtered_extremums\n\n @staticmethod\n def 
_getMedian(source_dict, key_set):\n point_array = [source_dict[item] for item in key_set]\n data_frame = pandas.DataFrame(data=point_array, columns=[\"distance\", \"angle\"])\n\n return data_frame[\"distance\"].median(), data_frame[\"angle\"].median()\n\n def sort(self, points_array):\n\n if len(points_array) < self._min_points_amount:\n return []\n\n points_dictionary = dict()\n\n for index, coordinates in enumerate(points_array):\n points_dictionary[index] = (int(coordinates[0]), coordinates[1])\n\n point_set_list = self._sortPointsByDistance(points_dictionary)\n point_groups = self._splitOnGroups(point_set_list)\n resulting_points = [self._getMedian(points_dictionary, point_group) for point_group in point_groups]\n\n return resulting_points", "Result of lines sorting function in Cartesian coordinate system:\n\nDrawing of lines with needed length:", "def convert(rho, theta, y_min, y_max):\n\n def create_point(y):\n x = (rho - y*np.sin(theta))/np.cos(theta)\n return int(x), int(y)\n\n d1 = create_point(y_max)\n d2 = create_point(y_min)\n\n return d1, d2\n\n\ndef drawLines(polar_coordinates_array, image, color, line_weight = 10):\n\n y_max = image.shape[0]\n y_min = int(y_max * 2 / 3)\n\n lines = [convert(rho, theta, y_min, y_max) for rho, theta in polar_coordinates_array]\n\n for d1, d2 in lines:\n cv2.line(image, d1, d2, color, line_weight)", "The result:\n\nThe pipeline itself:", "class ImageProcessor:\n\n def __init__(self, detection_area_filter, coordinate_sorter):\n self._bgr_line_color = (0, 0, 255)\n self._detection_area_filter = detection_area_filter\n self._coordinate_sorter = coordinate_sorter\n\n def processFrame(self, bgr_frame):\n frame = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)\n\n bw_color_mask = self._detection_area_filter.getColorMask(frame)\n bw_area = self._detection_area_filter.applyDetectionArea(bw_color_mask)\n\n bw_edges = getEdges(bw_area)\n\n polar_lane_coordinates = getLaneLines(bw_edges)\n average_polar_lane_coordinates = 
self._coordinate_sorter.sort(polar_lane_coordinates)\n\n lines_image = np.zeros(bgr_frame.shape, dtype=np.uint8)\n drawLines(average_polar_lane_coordinates, lines_image, self._bgr_line_color)\n\n result_image = cv2.addWeighted(lines_image, 0.9, bgr_frame, 1, 0)\n\n return result_image\n\n def _convert_bw_2_color(self, bw_image):\n return np.dstack((bw_image, bw_image, bw_image))", "And the result of processFrame:\n\nRGB image processing entry point", "def showImage(file_path):\n def convert(image):\n return image[..., [2, 1, 0]]\n\n image_path = getPathFor(file_path)\n rgb_image = mpimg.imread(image_path)\n\n bgr_frame = convert(rgb_image)\n frame = img_processor.processFrame(bgr_frame)\n rgb_frame = convert(frame)\n\n plt.imshow(rgb_frame)\n plt.show()", "Video processing entry point", "def playVideo(file_path):\n video_path = getPathFor(file_path)\n video = cv2.VideoCapture(video_path)\n print(\"About to start video playback...\")\n \n while video.isOpened(): \n _, bgr_frame = video.read()\n\n if not isinstance(bgr_frame, np.ndarray):\n # workaround to handle end of video stream. 
\n break\n \n frame = img_processor.processFrame(bgr_frame)\n cv2.imshow(\"output\", frame)\n \n key = cv2.waitKey(1) & 0xFF\n # stop video on ESC key pressed\n if key == 27:\n break\n \n print(\"Video has been closed successfully.\")\n video.release()\n cv2.destroyAllWindows()", "Constants with image/video paths for testing, pipeline initialization", "image1 = \"input/test_images/solidWhiteCurve.jpg\"\nimage2 = \"input/test_images/solidWhiteRight.jpg\"\nimage3 = \"input/test_images/solidYellowCurve.jpg\"\nimage4 = \"input/test_images/solidYellowCurve2.jpg\"\nimage5 = \"input/test_images/solidYellowLeft.jpg\"\nimage6 = \"input/test_images/whiteCarLaneSwitch.jpg\"\n\nvideo1 = \"input/test_videos/challenge.mp4\"\nvideo2 = \"input/test_videos/solidYellowLeft.mp4\"\nvideo3 = \"input/test_videos/solidWhiteRight.mp4\" \n\ndetection_area_filter = DetectionAreaFilter()\n\nmax_distance_delta = 40 # max distance between lines (rho1 - rho2) in polar coordinate system\nmax_angle_delta = np.radians(4) # max angle between lines (theta1 - theta2) in polar coordinate system\nthreshold = 3 # min amount of lines in set filter\ncoordinate_sorter = CoordinateSorter(max_distance_delta, max_angle_delta, threshold)\n\nimg_processor = ImageProcessor(detection_area_filter, coordinate_sorter)\n\nshowImage(image4)\n#playVideo(video1)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
openhep/ackp16
H750.ipynb
gpl-3.0
[ "<center><b><font size=\"5\">H(750) decays to gauge boson pairs</font></b></center>\nIntro and definitions\nPackages and constants", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport matplotlib.ticker as ticker\nimport numpy as np\nimport math\nfrom sympy import *\nfrom scipy.optimize import root, brentq\nfrom sympy.abc import tau, sigma, x, D, T, Q, Y, N\nT3, sigmaprime = symbols('T3, sigmaprime')\n\n# local packages\nfrom plothelp import label_line\nimport smgroup\nfrom constants import *\nsmgroup.GUTU1 = False # we don't work here with GUT-unified value for alpha_1", "One-loop decays to pairs of gauge bosons\nFactors different for different VV channels", "def VVfact(S1, S2, S3):\n \"\"\"Factors for loop decays to VV channels.\n Phase space factors for identical particles accounted for here.\n \n \"\"\"\n gg = S1 + S2\n GG = (2*S3)*(alphas/alpha)*sqrt(Kfactor)\n ZZ = (cw2/sw2)*S2 + (sw2/cw2)*S1\n Zg = sqrt(2)*( (cw/sw)*S2 - (sw/cw)*S1 )\n WW = sqrt(2) * S2 / sw2\n return {'gg':gg, 'GG':GG, 'ZZ':ZZ, 'Zg':Zg, 'WW':WW}\n\n\ndef VVfactW(D=7, Y=0, real=True, wght=False):\n \"\"\"Factors for loop decays to VV channels.\n Phase space factors for identical particles accounted for here.\n \n wght --- T3-weight factor entering sum over multiplet\n \n \"\"\"\n r, wNC, wCC = (1, 1, 1)\n if real:\n r = 2\n if wght:\n # Weights for CKP model quintuplet scalars which don't\n # couple universally to H\n wNC = (2-T3)/4\n wCC = (3-2*T3)/8 # average of (2-T3)/4 and (1-T3)/4\n T = (D-S(1))/2 # weak isospin\n gg = summation((wNC*Q**2).subs(Q,T3+Y/2).evalf(), (T3, -T, T))/r\n GG = 0\n ZZ = summation((wNC*(T3-sw2*Q)**2).subs(Q,T3+Y/2).evalf(), (T3, -T, T))/sw2/cw2/r\n Zg = sqrt(2)*summation((wNC*Q*(T3-sw2*Q)).subs(Q,T3+Y/2).evalf(), (T3, -T, T))/sw/cw/r\n WW = sqrt(2) * summation((wCC*(T-T3)*(T+T3+1)/2).evalf(), (T3, -T, T))/sw2/r\n return {'gg':gg, 'GG':GG, 'ZZ':ZZ, 'Zg':Zg, 'WW':WW}\n\ndef Rtogg(reps, prt=False):\n \"\"\"Ratios of VV to gamma-gamma channels.\"\"\"\n VVs 
= VVfact(*smgroup.SMDynkin(reps))\n gg = VVs['gg']\n RGG = float((VVs['GG']/gg)**2)\n RZg = float((VVs['Zg']/gg)**2)\n RZZ = float((VVs['ZZ']/gg)**2)\n RWW = float((VVs['WW']/gg)**2)\n if prt:\n print(\"RGG = {:.3f}, RZg = {:.3f}, RZZ = {:.3f}, RWW = {:.3f}\".format(RGG,\n RZg, RZZ , RWW) )\n return RGG, RZg, RZZ, RWW\n \ndef RtoggW(D=7, Y=0, real=True, wght=False, prt=False):\n \"\"\"Ratios of VV to gamma-gamma channels if T3-weights are needed.\"\"\"\n VVs = VVfactW(D, Y, real, wght)\n gg = VVs['gg']\n RGG = float((VVs['GG']/gg)**2)\n RZg = float((VVs['Zg']/gg)**2)\n RZZ = float((VVs['ZZ']/gg)**2)\n RWW = float((VVs['WW']/gg)**2)\n if prt:\n print(\"RGG = {:.3f}, RZg = {:.3f}, RZZ = {:.3f}, RWW = {:.3f}\".format(RGG,\n RZg, RZZ , RWW) )\n return RGG, RZg, RZZ, RWW\n\n# Check of consistency of two formulas\nres = Rtogg([smgroup.RealScalar(1,7,0)], prt=True)\nres = RtoggW(D=7, Y=0, real=True, prt=True)\n\nres = Rtogg([smgroup.ComplexScalar(1,5,-1)], prt=True)\nres = RtoggW(D=5, Y=-1, real=False, prt=True)", "Note that first set is consistent with table below Fig. 4 of Strumia's arXiv:1605.09401, which has RZg=7, RZZ=12, RWW=40. 
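For a colorless multiplet with Y=0, only the SU(2) Dynkin factor S2 enters, so the ratios reduce to closed forms that can be cross-checked without the notebook's machinery. A standalone sketch (the value of sin²θ_W here is an assumption and may differ slightly from the one in constants.py, which is why the numbers only roughly match the rounded 7, 12, 40):

```python
# cross-check for a colorless, Y=0 multiplet (only S2 contributes):
# gg ~ S2, Zg ~ sqrt(2)(cw/sw)S2, ZZ ~ (cw2/sw2)S2, WW ~ sqrt(2)S2/sw2
sw2 = 0.2312          # assumed sin^2(theta_W); constants.py may use another value
cw2 = 1.0 - sw2

RZg = 2 * cw2 / sw2       # (Zg/gg)^2, ~ 6.7
RZZ = (cw2 / sw2) ** 2    # (ZZ/gg)^2, ~ 11
RWW = 2 / sw2 ** 2        # (WW/gg)^2, ~ 37
```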
We can reproduce second row of his table with some SU(2) singlet:", "res = Rtogg([smgroup.ComplexScalar(1,1,-1)], prt=True)\nres = RtoggW(D=1, Y=-1, real=False, prt=True)", "Final expression for $H\\to VV$ width\n$$\\Gamma(h\\to\\gamma\\gamma) = B \\left|\\sum_i Q_i^2 A_{i}(\\tau_i) \\right|^2$$\n$$ B = \\frac{\\alpha^2 g^2 m_h^3}{1024 \\pi^3 m_W^2} = \\frac{G_F \\alpha^2 m_h^3}{\n128\\sqrt{2} \\pi^3}$$\n$$\\tau_i = \\frac{4m_i^2}{m_{H}^2} $$\n$$A_{0}(\\tau) = -\\tau(1-\\tau f(\\tau)) \\to \\frac{1}{3} \\quad \\text{for} \\quad \\tau\\to\\infty$$\n$$A_{1/2}(\\tau) = 2\\tau\\big(1+(1-\\tau)f(\\tau)\\big) = 2 + (4 m^2 -m_{H}^2)C_0(0,0,m_{H}^2,m^2,m^2,m^2)$$\n$$f(\\tau) = \\arcsin^2(\\sqrt{\\frac{1}{\\tau}}) \\quad \\text{for} \\quad \\tau\\ge 1$$\n$$f(\\tau) = -\\frac{m_H^2}{2} C_0 (0,0,m_H^2; m, m, m) $$", "# Loop functions\n\ndef f(tau):\n return asin(1/sqrt(tau))**2\n\ndef A0(tau):\n return -tau*(1-tau*f(tau))\n\ndef A1(tau):\n return -2-3*tau-3*tau*(2-tau)*f(tau)\n\ndef A12(tau):\n return 2*tau*(1+(1-tau)*f(tau))\n\n# numpy-approved versions\nfN = lambdify(x, f(x), 'numpy')\nA0N = lambdify(x, A0(x), 'numpy')\nA1N = lambdify(x, A1(x), 'numpy')\nA12N = lambdify(x, A12(x), 'numpy')\n\ndef tauN(m, mH=750):\n return 4*m**2/mH**2\n\nprint(\" A0 --> {}\".format(limit(A0(tau), tau, oo)))\nprint(\"A12 --> {}\".format(limit(A12(tau), tau, oo)))\nprint(\" A1 --> {}\".format(limit(A1(tau), tau, oo)))\n\n# Numerical check of the relation of C0 and f(tau):\n# LoopTools for mH=125, m=375\n# fLT = 0.028038859\nf(4*375**2/125**2)", "SM $h(125)\\to\\gamma\\gamma$ width in GeV (just W and top contributions)", "Bh = (GF * alpha**2 * mh**3)/(128 * sqrt(2) * pi**3).evalf()\nBh * (A1N(tauN(mW,mh)) + 3*(2/3)**2*A12N(tauN(mt,mh)))**2", "This is about right. (2HDMC gives 8.3e-6 GeV). 
\nNow we define $H\\to VV$ decay widths expressions for generic BSM and for ČKP model.", "def GAMHVV(VV='gg', \n BSMfermions=[], BSMscalars=[], gHFF=v, mF=400, gHSS=v, mS=400):\n \"\"\"Decay width of scalar H to pair of gauge bosons (generic model)\"\"\"\n B = float((alpha**2 * mH**3)/(1024 * pi**3).evalf())\n VVf = VVfact(*smgroup.SMDynkin(BSMfermions))[VV]\n VVs = VVfact(*smgroup.SMDynkin(BSMscalars))[VV]\n amp = - (2*gHFF/mF)*VVf*A12N(tauN(mF))\n amp += - (gHSS/mS**2)*VVs*A0N(tauN(mS))\n return B * amp**2\n\ndef GAMHckp(VV='gg', tau=1, sig=1, sigpri=1, mchi=400, mphi=400):\n \"\"\"Decay width of scalar H to pair of gauge bosons (CKP model)\"\"\"\n B = float((alpha**2 * mH**3)/(1024 * pi**3).evalf())\n VVtau = VVfactW(D=7, Y=0, real=True)[VV]\n VVsig = VVfactW(D=5, Y=-2, real=False)[VV]\n VVsigpri = VVfactW(D=5, Y=-2, real=False, wght=True)[VV]\n amp = tau*v*VVtau/mchi**2 * A0N(tauN(mchi))\n amp += (sig*v*VVsig/mphi**2 + sigpri*v*VVsigpri/mphi**2) * A0N(tauN(mphi))\n return B * amp**2\n\nGAMHckp('gg'), GAMHckp('WW')", "These numbers agree with my older notebook used for initial plots. Another check: CKP model widths with $\\sigma'=0$ (i.e. septuplet plus universal quintuplet contributions) can also be calculated with generic function GAMHVV.", "[GAMHckp(VV, sigpri=0) for VV in ['gg', 'Zg', 'ZZ', 'WW']]\n\n[GAMHVV(VV, BSMscalars=[smgroup.RealScalar(1,7,0), smgroup.ComplexScalar(1,5,-2)], gHSS=v) for VV in ['gg', 'Zg', 'ZZ', 'WW']]", "Models\n(For check) Bhupal Dev et al. 
[1512.08507]", "bdrep = [smgroup.Dirac(3,1,S(4)/3), smgroup.Dirac(3,1,-S(2)/3), smgroup.Dirac(1,1,-2)]\nRtogg(bdrep)", "This is in good agreement with their Table 1:\nRGG = 220, RZg = 0.61, RZZ = 0.091\nLet's also check the decay width formula (again in ratios only)", "# RGG\nGAMHVV('GG', BSMfermions=bdrep, gHFF=246, mF=400)/GAMHVV(BSMfermions=bdrep, gHFF=246, mF=400)\n\n# RZg\nGAMHVV('Zg', BSMfermions=bdrep, gHFF=246, mF=400)/GAMHVV(BSMfermions=bdrep, gHFF=246, mF=400)\n\n# RZZ\nGAMHVV('ZZ', BSMfermions=bdrep, gHFF=246, mF=400)/GAMHVV(BSMfermions=bdrep, gHFF=246, mF=400)\n\n# RWW\nGAMHVV('WW', BSMfermions=bdrep, gHFF=246, mF=400)/GAMHVV(BSMfermions=bdrep, gHFF=246, mF=400)", "(For check) Ellis and Ellis et al. 1512.05327", "# Model 1 \nRtogg([smgroup.Dirac(3,1,S(4)/3), smgroup.Dirac(3,1,-S(4)/3)])\n\n# Zg above agrees with Eq. (32) of 1512.07616\n2*sw2/cw2\n\n# Model 2 \nRtogg([smgroup.Dirac(3,2,S(1)/3), smgroup.Dirac(3,2,-S(1)/3)])\n\n# Model 3 \nRtogg([smgroup.Dirac(3,1,S(4)/3), smgroup.Dirac(3,1,-S(4)/3), smgroup.Dirac(3,2,S(1)/3), smgroup.Dirac(3,2,-S(1)/3), smgroup.Dirac(3,1,-S(2)/3), smgroup.Dirac(3,1,S(2)/3)])", "They have e.g. for the Model 3:\nRGG = 460, RZg = 1.1, RZZ = 2.8, RWW = 15.\nSo, their RZg and RWW look a factor of 2 too large.\n(For check) Benbrik et al. 
1512.06028", "# VLTQ model of Benbrik et al.\nRtogg([smgroup.Dirac(3,3,S(4)/3), smgroup.Dirac(3,3,-S(2)/3)])", "Benbrik et al. have:\nRGG = 40, RZg = 2.29, RZZ = 5.59, RWW = 8.88\nSo their RWW looks a factor of 2 too small.\n\"Our\" one-loop (BPR) model\nFor the purposes of couplings to H(750) and gauge bosons, we have one Dirac doublet (times the number of generations, of course)", "bpr = [smgroup.Dirac(1,2,-1)]\nsmgroup.SMDynkin(bpr)\n\nRGGbpr, RZgbpr, RZZbpr, RWWbpr = Rtogg(bpr, prt=True)\n\n# Branching ratio to gamma gamma:\nBrBPRgg = GAMHVV('gg', BSMfermions=bpr)/(GAMHVV('gg', BSMfermions=bpr)+GAMHVV('Zg', BSMfermions=bpr)+GAMHVV('ZZ', BSMfermions=bpr)+GAMHVV('WW', BSMfermions=bpr))\nBrBPRgg\n\n# OA's factor \n10.8*(750/45)**2 /(64*pi.evalf()**3)**2 * 1000 # fb", "\"Our\" three-loop (ČKP) model", "# Factors relevant for H-->gamma gamma\nsigmaprime = symbols('sigmaprime')\nprint(tau*VVfactW(D=7, Y=0, real=True)['gg'])\nprint(sigma*VVfactW(D=5, Y=-2, real=False)['gg'])\nprint(sigmaprime*VVfactW(D=5, Y=-2, real=False, wght=True)['gg'])\n\n# Factors relevant for H--> W+ W- (with sqrt(2)/sw2 factor extracted)\nprint((tau*VVfactW(D=7, Y=0, real=True)['WW']*sw2/sqrt(2)).evalf())\nprint((sigma*VVfactW(D=5, Y=-2, real=False)['WW']*sw2/sqrt(2)).evalf())\nprint((sigmaprime*VVfactW(D=5, Y=-2, real=False, wght=True)['WW']*sw2/sqrt(2)).evalf())", "One possible check is the known fact that for a Y=0 model the ratio of WW to $\\gamma\\gamma$ decay widths is $2/s_{W}^4=36.5$. 
We have such a model for $\\sigma=\\sigma'=0$.", "GAMHckp('WW', sig=0, sigpri=0)/GAMHckp('gg', sig=0, sigpri=0)\n\n# Final ratios to gamma gamma channel for tau=sig=sigpri\nGAMHckp('Zg')/GAMHckp('gg'), GAMHckp('ZZ')/GAMHckp('gg'), GAMHckp('WW')/GAMHckp('gg')\n\n# Width for H(750) --> t tbar\nGAMHtt = 3*(1/126.5)*mH*mt**2/(8*sw2*mW**2)*(1-tauN(mt))**(3/2); GAMHtt\n\ndef GAMTOTckp(tau, sig, sigpri, mchi, mphi):\n    \"\"\"Total width of H(750) in CKP model.\"\"\"\n    WW = GAMHckp('WW', tau, sig, sigpri, mchi, mphi)\n    ZZ = GAMHckp('ZZ', tau, sig, sigpri, mchi, mphi)\n    Zg = GAMHckp('Zg', tau, sig, sigpri, mchi, mphi)\n    gg = GAMHckp('gg', tau, sig, sigpri, mchi, mphi)\n    TOT = GAMHtt+WW+ZZ+Zg+gg\n    return TOT\n\ndef GAMTOTckpD(lam, mS):\n    \"\"\"Total width of H(750) in CKP model with degenerate couplings and masses and Br(gg).\"\"\"\n    WW = GAMHckp('WW', tau=lam, sig=lam, sigpri=lam, mchi=mS, mphi=mS)\n    ZZ = GAMHckp('ZZ', tau=lam, sig=lam, sigpri=lam, mchi=mS, mphi=mS)\n    Zg = GAMHckp('Zg', tau=lam, sig=lam, sigpri=lam, mchi=mS, mphi=mS)\n    gg = GAMHckp('gg', tau=lam, sig=lam, sigpri=lam, mchi=mS, mphi=mS)\n    TOT = GAMHtt+WW+ZZ+Zg+gg\n    return TOT, gg/TOT\n\nprint( \"GAMHTOT = {:.1f} GeV; Br(H-->gamma gamma) = {:.4}\".format(*GAMTOTckpD(8, 375)) )", "Experimental constraints\nGeneralities\n750 GeV diphoton excess:\n$$\\sigma(pp\\to H\\to \\gamma\\gamma)_{\\rm CMS} = 4.47 \\pm 1.86\\;{\\rm fb}$$\n$$\\sigma(pp\\to H\\to \\gamma\\gamma)_{\\rm ATLAS} = 10.6 \\pm 2.9\\;{\\rm fb}$$\nCombination by Di Chiara et al.:\n$$\\sigma(pp\\to H\\to \\gamma\\gamma)_{\\rm LHC} = 6.26 \\pm 3.32\\; {\\rm fb}$$\nSo one could scan the 3-10 fb region.\n\nThe width measured by ATLAS is 45 GeV; CMS prefers a narrower one.", "xs_low = 3 # fb\nxs_high = 9 ", "2HDM: for $m_H = 750\\;{\\rm GeV}$, $A$ and $H^\\pm$ are also close to 750 GeV.\nSignal is $10^4$ times stronger than for a SM-like Higgs, so pure 2HDM is hopeless.\n\nHiggs production by gluon-gluon fusion: 7 and 8 TeV by LHC Higgs xs WG, and 13 TeV by gluon luminosity ratios 
(13 TeV/8 TeV) being 2.296 for 125 GeV and 4.693 for 750 GeV.\n| | 7 TeV | 8 TeV | 13 TeV\n| ---- | ----- | ------ | -----\n| h(125) | 15.13 pb | 19.27 pb| 44.2 pb\n| h(750) | 93 fb | 157 fb | 737 fb\nand we have\n$$\\sigma_{\\gamma\\gamma} = 737\\,{\\rm fb}\\; Br(H\\to\\gamma\\gamma)$$", "ggF13 = 737. # fb\nggF8 = 157.", "For production via photon fusion $pp \\to \\gamma \\gamma \\to H \\to \\gamma \\gamma$ at 13 TeV, Harland-Lang et al. have\n$$ \\sigma = 4.1\\,{\\rm pb}\\, \\left(\\frac{\\Gamma_H}{45\\, {\\rm GeV}}\\right) {\\rm Br}(H\\to\\gamma\\gamma)^2$$\nwhile Csaki et al. have\n$$ \\sigma = 10.8 \\,{\\rm pb}\\, \\left(\\frac{\\Gamma_H}{45\\, {\\rm GeV}}\\right) {\\rm Br}(H\\to\\gamma\\gamma)^2$$.\nThey both include elastic and inelastic contributions, the latter also mixed, and in narrow width approximation. For $\\sigma = 3-9\\,{\\rm fb}$, we get from the second choice\nbranching ratio 1.7-2.9 %, or $\\Gamma(H\\to\\gamma\\gamma) = 0.75-1.3\\,{\\rm GeV}$.", "# Same prefactor in picobarn from Franceschini et al. Eq. 
(2)\n(45*54)/750/(13000)**2 * GeV2fb / 1000 ", "one-loop BPR model", "## So in the pure photon fusion production and pure VV decay scenario, we have for the total H(750) width range in GeV\nGAMbpr_low, GAMbpr_high = [(siggg/10800)*45/BrBPRgg**2 for siggg in (xs_low, xs_high)]\nGAMbpr_low, GAMbpr_high\n\n# Translating this into H->gamma gamma width in GeV:\nGAMbpr_HGG_low, GAMbpr_HGG_high = sqrt((xs_low/10800)*GAMbpr_low*45), sqrt((xs_high/10800)*GAMbpr_high*45)\nGAMbpr_HGG_low, GAMbpr_HGG_high\n\ndef lam(msig, GAM):\n    \"\"\"Coupling to get given width H->gamma gamma for given loop fermion mass.\"\"\"\n    ss = ( sqrt(256 * pi**3 * GAM / (alpha**2 * mH**3)) ).evalf()\n    return float(msig * ss / A12N(tauN(msig)))\n\nlamN = np.frompyfunc(lam, 2, 1)\n\n# Checking that above inverted formula lambda(sigma) is consistent\n# with \"master\" formula sigma(lambda):\nlamN(400, GAMHVV('gg', BSMfermions=bpr, gHFF=42, mF=400))", "So, the minimal possible coupling would be:", "lam(375, GAMbpr_HGG_low)", "So, for $N_E=3$, and $\\cos\\theta_0\\sim 1$ we have $\\lambda/(4\\pi)$ = 1.1 and we are at the border of perturbativity?\nConstraints from 8 TeV VV bounds. First, cross sections for pp->H->VV at 8 TeV if pp->H->gg at 13 TeV is in (3 fb, 9 fb)", "#rgg = 1.9 # gain for photon fusion xs going from 8 to 13 TeV in Franceschini et al.\n# rgg = 3.9 # value from Fichet et al. 1512.05751 \nrgg = 3 # average value\n# For 3 fb\n[(xs_low*RVV)/rgg for RVV in [RWWbpr, RZZbpr, RZgbpr, 1]]\n\n# And for 9 fb\n[(xs_high*RVV)/rgg for RVV in [RWWbpr, RZZbpr, RZgbpr, 1]]\n\n# Bounds on pp->H->VV xs from LHC 8 TeV\nsig8 = {'WW' : 40, 'ZZ' : 12, 'Zg' : 11, 'gg' : 1.5} # in fb, from Franceschini", "So we see that the strongest bound comes from the photon-photon final state. 
This can be relaxed\nif one takes a larger rgg, advocated by some.", "def lambound(VV, msig, reps=bpr):\n    \"\"\"Boundary value of gHFF to violate 8 TeV VV xs constraint\"\"\"\n    VVs = VVfact(*smgroup.SMDynkin(reps))\n    RVV = float((VVs[VV]/VVs['gg'])**2)\n    # diphoton 13 TeV xs that would mean boundary VV 8 TeV xs\n    gamgg = sig8[VV]*rgg/RVV\n    Brgg = BrBPRgg  # FIXME: hardwired BPR\n    # H->gamma gamma width that would give above diphoton 13 TeV xs\n    GAMgg = gamgg*45/10800/Brgg\n    return lamN(msig, GAMgg)", "three-loop ČKP model\nWith ggF as the dominant production mechanism we have\n$$ \\sigma_{VV}^{8\\,{\\rm TeV}} = 157\\,{\\rm fb} \\; Br(H\\to VV)$$", "def sig8CKP(VV, tau, sig, sigpri, mchi, mphi):\n    \"\"\"xs in fb for pp-->H-->VV at 8 TeV in CKP model with degenerate couplings and masses\"\"\"\n    TOT = GAMTOTckp(tau, sig, sigpri, mchi, mphi)\n    BrVV = GAMHckp(VV, tau, sig, sigpri, mchi, mphi)/TOT\n    # print('GAMH = {:.1f} GeV, Br(H->{}) = {}'.format(TOT, VV, BrVV))\n    return ggF8 * BrVV\n\ndef fun(VV, tau, s):\n    return sig8CKP(VV, tau=tau, sig=s, sigpri=s, mchi=375, mphi=375) - sig8[VV]\n\ndef sigboundCKP(VV, tau, init=6):\n    \"\"\"Boundary value of sig=sig' to violate 8 TeV VV xs constraint\"\"\"\n    return root(lambda s: fun(VV, tau, s), init).x[0]\n\n[sigboundCKP(VV, 8) for VV in ['WW', 'Zg', 'ZZ', 'gg']]", "So it is again the photon-photon channel that is most restrictive.", "def funm(VV, mchi, s):\n    return sig8CKP(VV, tau=10, sig=10, sigpri=10, mchi=mchi, mphi=s) - sig8[VV]\n\ndef mboundCKP(VV, mchi, init=390):\n    \"\"\"Boundary value of mphi to violate 8 TeV VV xs constraint\"\"\"\n    return root(lambda s: funm(VV, mchi, s), init).x[0]", "Plots", "SAVEPDFS = True", "[Fig. 1a] Enhancement of $h(125)\\to\\gamma\\gamma$ in one-loop BPR model\nEnhancement from the lighter of the two charged components of the triplet scalar (cf. Eq. 
(10) from Brdar et al.)", "def Rgg(lam=1, m=375):\n    \"\"\"Triplet scalar h(125)->gamma gamma enhancement.\"\"\"\n    SM = A1N(tauN(mW,mh)) + 3*(2/3)**2*A12N(tauN(mt,mh))\n    BSM = lam * v**2 * A0N(tauN(m,mh)) / (2 * m**2)\n    return (1 + BSM/SM)**2\n\nms = np.linspace(375, 1000)\nfig, ax = plt.subplots(figsize=(4,4))\nax.plot(ms, Rgg(lam=-20, m=ms), 'r--', label=r\"$c_S=-20$\")\nax.plot(ms, Rgg(lam=-10, m=ms), 'b-', label=r\"$c_S=-10$\")\nax.plot(ms, Rgg(lam=-5, m=ms), 'k:', label=r\"$c_S=-5$\")\nax.plot(ms, Rgg(lam=10, m=ms), 'g-.', label=r\"$c_S=\\;\\;10$\")\n#ax.plot(ms, Rgg(lam=20, m=ms), label=r\"$c_S=\\;\\;20$\")\nax.set_xlabel(r'$m_S \\;{\\rm [GeV]}$', fontsize=16)\nax.set_ylabel(r\"$R_{\\gamma\\gamma}$\", fontsize=16)\nprops = dict(color=\"red\", linestyle=\"-\", linewidth=2)\nax.axhline(0.9, **props)\nax.axhline(1.44, **props)\nax.legend(loc=(0.5, 0.55)).draw_frame(0)\nplt.tight_layout()\nif SAVEPDFS:\n    plt.savefig(\"/home/kkumer/h125gg.pdf\")", "[Fig 1b] $H(750)\\to\\gamma\\gamma$ in one-loop BPR model", "xmin, xmax = 375, 800\nymin, ymax = 0, 250\nms = np.linspace(xmin, xmax)\nlam3 = lamN(ms, GAMbpr_HGG_low).astype(np.float)  # 3 fb\nlam9 = lamN(ms, GAMbpr_HGG_high).astype(np.float)  # 9 fb\n# 8 TeV bounds\nlamWW = lambound('WW', ms).astype(np.float)\nlamZZ = lambound('ZZ', ms).astype(np.float)\nlamZg = lambound('Zg', ms).astype(np.float)\nlamgg = lambound('gg', ms).astype(np.float)\nfig, ax = plt.subplots(figsize=(4,4))\nax.fill_between(ms, lam3, lamgg, color='lightgreen', alpha='0.5')\nax.fill_between(ms, lamgg, ymax, color='gray', alpha='0.6')\nax.plot(ms, lam9, 'b--', label=r'$\\sigma_{\\gamma\\gamma}= 9\\,{\\rm fb}$')\nax.plot(ms, lam3, 'b-', label=r'$\\sigma_{\\gamma\\gamma}= 3\\,{\\rm fb}$')\nlWW, = ax.plot(ms, lamWW, 'r-', label=r'$\\sigma_{VV}^{8\\,{\\rm TeV}}\\,{\\rm bounds}$')\nlgg, = ax.plot(ms, lamgg, 'r-')\nlZg, = ax.plot(ms, lamZg, 'r-')\nax.set_ylabel(r'$g_3\\, \\cos\\theta_0\\, N_E$', fontsize=16)\nax.set_xlabel(r'$m_{E}\\:{\\rm [GeV]}$', 
fontsize=16)\nax.xaxis.set_major_locator(ticker.MultipleLocator(100))\nax.legend(loc=4).draw_frame(0)\nax.set_xlim(xmin, xmax)\nax.set_ylim(ymin, ymax)\nplt.tight_layout()\n# Put labels on exclusion lines\nlabel_line(lWW, r\"$WW$\", near_x=650)\nlabel_line(lgg, r\"$\\gamma\\gamma$\", near_x=650)\nlabel_line(lZg, r\"$Z\\gamma$\", near_x=420)\nif SAVEPDFS:\n plt.savefig(\"/home/kkumer/triplet.pdf\")", "[Fig. 2] Allowed mass/coupling parameter ranges for ČKP model", "resolution = 60 # of calculation grid \nbmax=15\nxs = np.linspace(-bmax, bmax, resolution)\nys = np.linspace(-bmax, bmax, resolution)\nlevels = [3, 6, 9]\nX, Y = np.meshgrid(xs, ys)\n#Z = X**2 + Y**2\nZ = GAMHckp('gg', tau=X, sig=Y, sigpri=Y, mchi=375, mphi=375)*737./30.\nfig, ax = plt.subplots(figsize=(4.5,4.5))\n#ax.contour(X, Y, Z, cmap=plt.cm.viridis)\nCS = plt.contour(X, Y, Z, levels, cmap=plt.cm.Dark2, linestyles='dashed')\nfor c in CS.collections:\n c.set_dashes([(0, (8.0, 3.0))])\nfig = plt.clabel(CS, inline=1, fmt=r'$%.0f \\;{\\rm fb}$', fontsize=12, colors='black')\nax.annotate(r'$m_\\chi = m_\\phi = 375\\;{\\rm GeV}$', xy=(0.05, 0.58), xycoords='axes fraction', fontsize=12)\nax.set_xlabel(r'$\\tau$', fontsize=16)\nax.set_ylabel(r\"$\\sigma=\\sigma'$\", fontsize=16)\nprops = dict(color=\"green\", linestyle=\"-.\", linewidth=1)\nax.axvline(x=0, **props)\nax.axhline(y=0, **props)\ncut = 20 # by eyeballing\ngs = [sigboundCKP('gg', t, init=-20) for t in xs]\nlggL, = plt.plot(xs, gs, 'r-')\nax.fill_between(xs, -bmax, gs, color='gray', alpha='0.6')\ngs = [sigboundCKP('gg', t, init=20) for t in xs]\nlggH, = plt.plot(xs, gs, 'r-')\nax.fill_between(xs, gs, bmax, color='gray', alpha='0.6')\nplt.tight_layout()\nfig = plt.ylim(-bmax, bmax)\n# Put labels on exclusion lines\nlabel_line(lggL, r\"$\\gamma\\gamma$\", near_x=-12)\nlabel_line(lggH, r\"$\\gamma\\gamma$\", near_x=12)\nif SAVEPDFS:\n plt.savefig(\"/home/kkumer/tausig.pdf\")\n\nresolution = 60 # of calculation grid \nxs = np.linspace(375, 450, 
resolution)\nys = np.linspace(375, 450, resolution)\nlevels = [3, 6, 9]\nX, Y = np.meshgrid(xs, ys)\n#Z = X**2 + Y**2\nZ = GAMHckp('gg', tau=10, sig=10, sigpri=10, mchi=X, mphi=Y)*737./30.\nfig, ax = plt.subplots(figsize=(4.5,4.5))\nCS = plt.contour(X, Y, Z, levels, cmap=plt.cm.Dark2, linestyles='dashed')\nfor c in CS.collections:\n c.set_dashes([(0, (8.0, 3.0))])\nfig = plt.clabel(CS, inline=1, fmt=r'$%.0f \\;{\\rm fb}$', fontsize=12, colors='black')\nax.annotate(r\"$\\tau=\\sigma=\\sigma' = 10$\", xy=(0.6, 0.88), xycoords='axes fraction', fontsize=14)\nax.set_xlabel(r'$m_\\chi \\; {\\rm [GeV]}$', fontsize=16)\nax.set_ylabel(r\"$m_\\phi \\; {\\rm [GeV]}$\", fontsize=16)\n#props = dict(color=\"green\", linestyle=\"-.\", linewidth=1)\n#ax.axvline(x=375, **props)\n#ax.axhline(y=375, **props)\ngs = [mboundCKP('gg', m, init=380) for m in xs]\nlgg, = plt.plot(xs, gs, 'r-')\nax.fill_between(xs, 375, gs, color='gray', alpha='0.6')\nplt.tight_layout()\n# Put labels on exclusion lines\nlabel_line(lgg, r\"$\\gamma\\gamma$\", near_x=380)\nif SAVEPDFS:\n plt.savefig(\"/home/kkumer/mm.pdf\")", "[Fig 2] $\\sigma(pp \\to H(750)\\to\\gamma\\gamma)$ in three-loop ČKP model", "xmin, xmax = 375, 400\nymin, ymax = 0, 13.5\nxs = np.linspace(xmin,xmax)\nplt.figure(figsize=(4,4))\nTOT, BR = GAMTOTckpD(8, xs)\nplt.plot(xs , ggF13 * BR, label=r\"$\\tau=\\sigma=\\sigma' = 8$\")\nTOT, BR = GAMTOTckpD(4, xs)\nplt.plot(xs , ggF13* BR, 'r--', label=r\"$\\tau=\\sigma=\\sigma' = 4$\")\nplt.ylabel(r'$\\sigma(pp\\to H\\to\\gamma\\gamma)\\;{\\rm [fb]}$', fontsize=16)\nplt.xlabel(r'$m_{\\chi}=m_{\\phi}\\;{\\rm [GeV]}$', fontsize=16)\nplt.fill_between(xs, xs_low*np.ones(xs.shape), xs_high*np.ones(xs.shape), facecolor='lightgreen', alpha=0.5)\nplt.text(385, 8, r'${\\rm ATLAS+CMS}\\; \\sigma_{\\gamma\\gamma}\\; {\\rm range}$')\nplt.legend(loc=1).draw_frame(0)\nfig = plt.ylim(ymin, ymax)\nplt.tight_layout()\nif SAVEPDFS:\n plt.savefig('/home/kkumer/diphm.pdf')\n\nxmin, xmax = 0.2, 12\nxs = 
np.linspace(xmin,xmax)\nplt.figure(figsize=(3.7,4))\nTOT, BR = GAMTOTckpD(xs, 375)\nplt.plot(xs , ggF13*BR, label=r'$m_{\\chi} = m_{\\phi} = 375 \\;{\\rm GeV}$')\nTOT, BR = GAMTOTckpD(xs, 400)\nplt.plot(xs , ggF13*BR, 'r--', label=r'$m_{\\chi} = m_{\\phi} = 400 \\;{\\rm GeV}$')\n#plt.ylabel(r'$\\sigma(pp\\to H\\to\\gamma\\gamma)\\;{\\rm [fb]}$', fontsize=16)\nplt.xlabel(r\"$\\tau=\\sigma=\\sigma'$\", fontsize=16)\nplt.fill_between(xs, xs_low*np.ones(xs.shape), xs_high*np.ones(xs.shape), facecolor='lightgreen', alpha=0.5)\nplt.text(2.5, 8, r'${\\rm ATLAS+CMS}\\; \\sigma_{\\gamma\\gamma}\\; {\\rm range}$')\nplt.legend(loc=2).draw_frame(0)\nplt.xlim(2, 8.5)\nfig = plt.ylim(ymin, ymax)\nplt.tight_layout()\nif SAVEPDFS:\n plt.savefig('/home/kkumer/diphlam.pdf')", "[Fig 3] $\\Gamma_{H(750)}$ in three-loop ČKP model", "xmin, xmax = 375, 400\nymin, ymax = 20, 60\nxs = np.linspace(xmin,xmax)\nplt.figure(figsize=(4,4))\nTOT, BR = GAMTOTckpD(8, xs)\nplt.plot(xs , TOT, label=r\"$\\tau=\\sigma=\\sigma' = 8$\")\nTOT, BR = GAMTOTckpD(4, xs)\nplt.plot(xs , TOT, 'r--', label=r\"$\\tau=\\sigma=\\sigma' = 4$\")\nplt.ylabel(r'$\\Gamma_H\\;{\\rm [GeV]}$', fontsize=16)\nplt.xlabel(r'$m_{\\chi}=m_{\\phi}\\;{\\rm [GeV]}$', fontsize=16)\n#plt.fill_between(xs, xs_low*np.ones(xs.shape), xs_high*np.ones(xs.shape), facecolor='lightgreen', alpha=0.5)\n#plt.text(388, 8, r'${\\rm ATLAS+CMS}\\; \\gamma\\gamma\\; {\\rm range}$')\nplt.legend(loc=1).draw_frame(0)\nfig = plt.ylim(ymin, ymax)\nplt.tight_layout()\nif SAVEPDFS:\n plt.savefig('/home/kkumer/diphGAMm.pdf')\n\nxmin, xmax = 0.2, 12\nxs = np.linspace(xmin,xmax)\nplt.figure(figsize=(3.7,4))\nTOT, BR = GAMTOTckpD(xs, 375)\nplt.plot(xs , TOT, label=r'$m_{\\chi} = m_{\\phi} = 375 \\;{\\rm GeV}$')\nTOT, BR = GAMTOTckpD(xs, 400)\nplt.plot(xs , TOT, 'r--', label=r'$m_{\\chi} = m_{\\phi} = 400 \\;{\\rm GeV}$')\n#plt.ylabel(r'$\\sigma(pp\\to H\\to\\gamma\\gamma)\\;{\\rm [fb]}$', fontsize=16)\nplt.xlabel(r\"$\\tau=\\sigma=\\sigma'$\", 
fontsize=16)\n#plt.fill_between(xs, xs_low*np.ones(xs.shape), xs_high*np.ones(xs.shape), facecolor='lightgreen', alpha=0.5)\n#plt.text(2.5, 8, r'${\\rm ATLAS+CMS}\\; \\gamma\\gamma\\; {\\rm range}$')\nplt.legend(loc=2).draw_frame(0)\n#plt.xlim(xmin, xmax)\nfig = plt.ylim(ymin, ymax)\nplt.tight_layout()\nif SAVEPDFS:\n plt.savefig('/home/kkumer/diphGAMlam.pdf')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive/09_sequence/sinewaves.ipynb
apache-2.0
[ "Time Series Prediction\nObjectives\n 1. Build a linear, DNN and CNN model in Keras.\n 2. Build a simple RNN model and a multi-layer RNN model in Keras.\nIn this lab we will begin with a linear, DNN and CNN model.\nSince the features of our model are sequential in nature, we'll next look at how to build various RNN models in Keras. We'll start with a simple RNN model and then see how to create a multi-layer RNN in Keras.\nWe will be exploring a lot of different model types in this notebook.", "!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst\n\n!pip install --user google-cloud-bigquery==1.25.0", "Note: Restart your kernel to use updated packages.\nKindly ignore the deprecation warnings and incompatibility errors related to google-cloud-storage.\nLoad necessary libraries and set up environment variables", "PROJECT = \"your-gcp-project-here\" # REPLACE WITH YOUR PROJECT NAME\nBUCKET = \"your-gcp-bucket-here\" # REPLACE WITH YOUR BUCKET\nREGION = \"us-central1\" # REPLACE WITH YOUR BUCKET REGION e.g. 
us-central1\n\n%env \nPROJECT = PROJECT\nBUCKET = BUCKET\nREGION = REGION\n\nimport os\n\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport tensorflow as tf\n\nfrom google.cloud import bigquery\nfrom tensorflow.keras.utils import to_categorical\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import (Dense, DenseFeatures,\n                                     Conv1D, MaxPool1D,\n                                     Reshape, RNN,\n                                     LSTM, GRU, Bidirectional)\nfrom tensorflow.keras.callbacks import TensorBoard, ModelCheckpoint\nfrom tensorflow.keras.optimizers import Adam\n\n# To plot pretty figures\n%matplotlib inline\nmpl.rc('axes', labelsize=14)\nmpl.rc('xtick', labelsize=12)\nmpl.rc('ytick', labelsize=12)\n\n# For reproducible results.\nfrom numpy.random import seed\nseed(1)\ntf.random.set_seed(2)", "Explore time series data\nWe'll start by pulling a small sample of the time series data from BigQuery and writing some helper functions to clean up the data for modeling. We'll use the data from the eps_percent_change_sp500 table in BigQuery. The close_values_prior_260 column contains the close values for any given stock for the previous 260 days.", "%%time\nbq = bigquery.Client(project=PROJECT)\n\nbq_query = '''\n#standardSQL\nSELECT\n  symbol,\n  Date,\n  direction,\n  close_values_prior_260\nFROM\n  `stock_market.eps_percent_change_sp500`\nLIMIT\n  100\n'''\n", "The function clean_data below does three things:\n 1. First, we'll remove any inf or NA values.\n 2. Next, we parse the Date field to read it as a string.\n 3. 
Lastly, we convert the label direction into a numeric quantity, mapping 'DOWN' to 0, 'STAY' to 1 and 'UP' to 2.", "def clean_data(input_df):\n \"\"\"Cleans data to prepare for training.\n\n Args:\n input_df: Pandas dataframe.\n Returns:\n Pandas dataframe.\n \"\"\"\n df = input_df.copy()\n\n # Remove inf/na values.\n real_valued_rows = ~(df == np.inf).max(axis=1)\n df = df[real_valued_rows].dropna()\n\n # TF doesn't accept datetimes in DataFrame.\n df['Date'] = pd.to_datetime(df['Date'], errors='coerce')\n df['Date'] = df['Date'].dt.strftime('%Y-%m-%d')\n\n # TF requires numeric label.\n df['direction_numeric'] = df['direction'].apply(lambda x: {'DOWN': 0,\n 'STAY': 1,\n 'UP': 2}[x])\n return df", "Read data and preprocessing\nBefore we begin modeling, we'll preprocess our features by scaling to the z-score. This will ensure that the range of the feature values being fed to the model are comparable and should help with convergence during gradient descent.", "STOCK_HISTORY_COLUMN = 'close_values_prior_260'\nCOL_NAMES = ['day_' + str(day) for day in range(0, 260)]\nLABEL = 'direction_numeric'\n\ndef _scale_features(df):\n \"\"\"z-scale feature columns of Pandas dataframe.\n\n Args:\n features: Pandas dataframe.\n Returns:\n Pandas dataframe with each column standardized according to the\n values in that column.\n \"\"\"\n avg = df.mean()\n std = df.std()\n return (df - avg) / std\n\n\ndef create_features(df, label_name):\n \"\"\"Create modeling features and label from Pandas dataframe.\n\n Args:\n df: Pandas dataframe.\n label_name: str, the column name of the label.\n Returns:\n Pandas dataframe\n \"\"\"\n # Expand 1 column containing a list of close prices to 260 columns.\n time_series_features = df[STOCK_HISTORY_COLUMN].apply(pd.Series)\n\n # Rename columns.\n time_series_features.columns = COL_NAMES\n time_series_features = _scale_features(time_series_features)\n\n # Concat time series features with static features and label.\n label_column = df[LABEL]\n\n return 
pd.concat([time_series_features,\n                      label_column], axis=1)", "Make train-eval-test split\nNext, we'll make repeatable splits for our train/validation/test datasets and save these datasets to local csv files. The query below will take a subsample of the entire dataset and then create a roughly 70-15-15 split, by date, for the train/validation/test sets.", "def _create_split(phase):\n    \"\"\"Create string to produce train/valid/test splits for a SQL query.\n\n    Args:\n        phase: str, either TRAIN, VALID, or TEST.\n    Returns:\n        String.\n    \"\"\"\n    floor, ceiling = '2002-11-01', '2010-07-01'\n    if phase == 'VALID':\n        floor, ceiling = '2010-07-01', '2011-09-01'\n    elif phase == 'TEST':\n        floor, ceiling = '2011-09-01', '2012-11-30'\n    return '''\n    WHERE Date >= '{0}'\n    AND Date < '{1}'\n    '''.format(floor, ceiling)\n\n\ndef create_query(phase):\n    \"\"\"Create SQL query to create train/valid/test splits on subsample.\n\n    Args:\n        phase: str, either TRAIN, VALID, or TEST.\n    Returns:\n        String.\n    \"\"\"\n    basequery = \"\"\"\n    #standardSQL\n    SELECT\n        symbol,\n        Date,\n        direction,\n        close_values_prior_260\n    FROM\n        `stock_market.eps_percent_change_sp500`\n    \"\"\"\n    \n    return basequery + _create_split(phase)", "Modeling\nFor experimentation purposes, we'll train various models using data we can fit in memory using the .csv files we created above.", "N_TIME_STEPS = 260\nN_LABELS = 3\n\nXtrain = pd.read_csv('stock-train.csv')\nXvalid = pd.read_csv('stock-valid.csv')\n\nytrain = Xtrain.pop(LABEL)\nyvalid = Xvalid.pop(LABEL)\n\nytrain_categorical = to_categorical(ytrain.values)\nyvalid_categorical = to_categorical(yvalid.values)", "To monitor training progress and compare evaluation metrics for different models, we'll use the function below to plot metrics captured from the training job such as training and validation loss or accuracy.", "def plot_curves(train_data, val_data, label='Accuracy'):\n    \"\"\"Plot training and validation metrics on a single 
axis.\n\n    Args:\n        train_data: list, metrics obtained from training data.\n        val_data: list, metrics obtained from validation data.\n        label: str, title and label for plot.\n    Returns:\n        Matplotlib plot.\n    \"\"\"\n    plt.plot(np.arange(len(train_data)) + 0.5,\n             train_data,\n             \"b.-\", label=\"Training \" + label)\n    plt.plot(np.arange(len(val_data)) + 1,\n             val_data, \"r.-\",\n             label=\"Validation \" + label)\n    plt.gca().xaxis.set_major_locator(mpl.ticker.MaxNLocator(integer=True))\n    plt.legend(fontsize=14)\n    plt.xlabel(\"Epochs\")\n    plt.ylabel(label)\n    plt.grid(True) ", "Baseline\nBefore we begin modeling in Keras, let's create a benchmark using a simple heuristic. Let's see what kind of accuracy we would get on the validation set if we predict the majority class of the training set.", "sum(yvalid == ytrain.value_counts().idxmax()) / yvalid.shape[0]", "Ok. So just naively guessing the most common outcome UP will give about 29.5% accuracy on the validation set. \nLinear model\nWe'll start with a simple linear model, mapping our sequential input to a single fully connected dense layer.", "model = Sequential()\nmodel.add(Dense(units=N_LABELS,\n                activation='softmax',\n                kernel_regularizer=tf.keras.regularizers.l1(l=0.1)))\n\nmodel.compile(optimizer=Adam(learning_rate=0.001),\n              loss='categorical_crossentropy',\n              metrics=['accuracy'])\n\nhistory = model.fit(x=Xtrain.values,\n                    y=ytrain_categorical,\n                    batch_size=Xtrain.shape[0],\n                    validation_data=(Xvalid.values, yvalid_categorical),\n                    epochs=30,\n                    verbose=0)\n\nplot_curves(history.history['loss'],\n            history.history['val_loss'],\n            label='Loss')\n\nplot_curves(history.history['accuracy'],\n            history.history['val_accuracy'],\n            label='Accuracy')", "The accuracy seems to level out pretty quickly. To report the accuracy, we'll average the accuracy on the validation set across the last few epochs of training.", "np.mean(history.history['val_accuracy'][-5:])", "Deep Neural Network\nThe linear model is an improvement on our naive benchmark. 
Perhaps we can do better with a more complicated model. Next, we'll create a deep neural network with Keras. We'll experiment with a two-layer DNN here but feel free to try a more complex model or add other techniques to try to improve your performance.", "dnn_hidden_units = [16, 8]\n\nmodel = Sequential()\nfor layer in dnn_hidden_units:\n    model.add(Dense(units=layer,\n                    activation=\"relu\"))\n\nmodel.add(Dense(units=N_LABELS,\n                activation=\"softmax\",\n                kernel_regularizer=tf.keras.regularizers.l1(l=0.1)))\n\nmodel.compile(optimizer=Adam(learning_rate=0.001),\n              loss='categorical_crossentropy',\n              metrics=['accuracy'])\n\nhistory = model.fit(x=Xtrain.values,\n                    y=ytrain_categorical,\n                    batch_size=Xtrain.shape[0],\n                    validation_data=(Xvalid.values, yvalid_categorical),\n                    epochs=10,\n                    verbose=0)\n\nplot_curves(history.history['loss'],\n            history.history['val_loss'],\n            label='Loss')\n\nplot_curves(history.history['accuracy'],\n            history.history['val_accuracy'],\n            label='Accuracy')\n\nnp.mean(history.history['val_accuracy'][-5:])", "Convolutional Neural Network\nThe DNN does slightly better. Let's see how a convolutional neural network performs. \nA 1-dimensional convolution can be useful for extracting features from sequential data or deriving features from shorter, fixed-length segments of the data set. Check out the documentation for how to implement a Conv1D in TensorFlow. Max pooling is a downsampling strategy commonly used in conjunction with convolutional neural networks. 
Next, we'll build a CNN model in Keras using the Conv1D to create convolution layers and MaxPool1D to perform max pooling before passing to a fully connected dense layer.", "model = Sequential()\n\n# Convolutional layer\nmodel.add(Reshape(target_shape=[N_TIME_STEPS, 1]))\nmodel.add(Conv1D(filters=5,\n kernel_size=5,\n strides=2,\n padding=\"valid\",\n input_shape=[None, 1]))\nmodel.add(MaxPool1D(pool_size=2,\n strides=None,\n padding='valid'))\n\n\n# Flatten the result and pass through DNN.\nmodel.add(tf.keras.layers.Flatten())\nmodel.add(Dense(units=N_TIME_STEPS//4,\n activation=\"relu\"))\n\nmodel.add(Dense(units=N_LABELS, \n activation=\"softmax\",\n kernel_regularizer=tf.keras.regularizers.l1(l=0.1)))\n\nmodel.compile(optimizer=Adam(learning_rate=0.01),\n loss='categorical_crossentropy',\n metrics=['accuracy'])\n\nhistory = model.fit(x=Xtrain.values,\n y=ytrain_categorical,\n batch_size=Xtrain.shape[0],\n validation_data=(Xvalid.values, yvalid_categorical),\n epochs=10,\n verbose=0)\n\nplot_curves(history.history['loss'],\n history.history['val_loss'],\n label='Loss')\n\nplot_curves(history.history['accuracy'],\n history.history['val_accuracy'],\n label='Accuracy')\n\nnp.mean(history.history['val_accuracy'][-5:])", "Recurrent Neural Network\nRNNs are particularly well-suited for learning sequential data. They retain state information from one iteration to the next by feeding the output from one cell as input for the next step. In the cell below, we'll build a RNN model in Keras. 
The final state of the RNN is captured and then passed through a fully connected layer to produce a prediction.", "model = Sequential()\n\n# Reshape inputs to pass through RNN layer.\nmodel.add(Reshape(target_shape=[N_TIME_STEPS, 1]))\nmodel.add(LSTM(N_TIME_STEPS // 8,\n activation='relu',\n return_sequences=False))\n\nmodel.add(Dense(units=N_LABELS,\n activation='softmax',\n kernel_regularizer=tf.keras.regularizers.l1(l=0.1)))\n\n# Create the model.\nmodel.compile(optimizer=Adam(learning_rate=0.001),\n loss='categorical_crossentropy',\n metrics=['accuracy'])\n\nhistory = model.fit(x=Xtrain.values,\n y=ytrain_categorical,\n batch_size=Xtrain.shape[0],\n validation_data=(Xvalid.values, yvalid_categorical),\n epochs=40,\n verbose=0)\n\nplot_curves(history.history['loss'],\n history.history['val_loss'],\n label='Loss')\n\nplot_curves(history.history['accuracy'],\n history.history['val_accuracy'],\n label='Accuracy')\n\nnp.mean(history.history['val_accuracy'][-5:])", "Multi-layer RNN\nNext, we'll build a multi-layer RNN. Just as multiple layers of a deep neural network allow for more complicated features to be learned during training, additional RNN layers can potentially learn complex features in sequential data.
For a multi-layer RNN the output of the first RNN layer is fed as the input into the next RNN layer.", "rnn_hidden_units = [N_TIME_STEPS // 16,\n N_TIME_STEPS // 32]\n\nmodel = Sequential()\n\n# Reshape inputs to pass through RNN layer.\nmodel.add(Reshape(target_shape=[N_TIME_STEPS, 1]))\n\nfor layer in rnn_hidden_units[:-1]:\n model.add(GRU(units=layer,\n activation='relu',\n return_sequences=True))\n\nmodel.add(GRU(units=rnn_hidden_units[-1],\n return_sequences=False))\nmodel.add(Dense(units=N_LABELS,\n activation=\"softmax\",\n kernel_regularizer=tf.keras.regularizers.l1(l=0.1)))\n\nmodel.compile(optimizer=Adam(learning_rate=0.001),\n loss='categorical_crossentropy',\n metrics=['accuracy'])\n\nhistory = model.fit(x=Xtrain.values,\n y=ytrain_categorical,\n batch_size=Xtrain.shape[0],\n validation_data=(Xvalid.values, yvalid_categorical),\n epochs=50,\n verbose=0)\n\nplot_curves(history.history['loss'],\n history.history['val_loss'],\n label='Loss')\n\nplot_curves(history.history['accuracy'],\n history.history['val_accuracy'],\n label='Accuracy')\n\nnp.mean(history.history['val_accuracy'][-5:])", "Copyright 2020 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
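The strided "valid" 1-D convolution and max pooling that the Conv1D/MaxPool1D layers perform in the notebook above can be sketched without TensorFlow. This is an illustrative pure-Python version, not the notebook's code; the signal and kernel values below are invented:

```python
# Minimal sketch of Conv1D (valid padding, stride) followed by MaxPool1D.

def conv1d_valid(signal, kernel, stride=1):
    """1-D convolution (really cross-correlation, as in deep learning
    frameworks) with 'valid' padding: no zero-padding at the edges."""
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(0, len(signal) - k + 1, stride)
    ]

def max_pool1d(signal, pool_size=2):
    """Non-overlapping max pooling with 'valid' padding."""
    return [
        max(signal[i:i + pool_size])
        for i in range(0, len(signal) - pool_size + 1, pool_size)
    ]

features = conv1d_valid([1, 2, 3, 4, 5, 6], kernel=[1, 0, -1], stride=2)
pooled = max_pool1d(features, pool_size=2)
# features == [-2, -2], pooled == [-2]
```

With a length-6 input, a size-5 kernel, and stride 2 (the notebook's Conv1D settings would not even fit here), the output length follows the same `(n - k) // stride + 1` rule the framework applies.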
vzg100/Post-Translational-Modification-Prediction
.ipynb_checkpoints/Phosphorylation Chemical Tests - SVC-checkpoint.ipynb
mit
[ "Template for test", "from pred import Predictor\nfrom pred import sequence_vector\nfrom pred import chemical_vector", "Controlling for Random Negative vs Sans Random in Imbalanced Techniques using S, T, and Y Phosphorylation.\nIncluded is N Phosphorylation; however, no benchmarks are available yet. \nTraining data is from phospho.elm and benchmarks are from dbptm.", "par = [\"pass\", \"ADASYN\", \"SMOTEENN\", \"random_under_sample\", \"ncl\", \"near_miss\"]\nfor i in par:\n print(\"y\", i)\n y = Predictor()\n y.load_data(file=\"Data/Training/clean_s.csv\")\n y.process_data(vector_function=\"chemical\", amino_acid=\"S\", imbalance_function=i, random_data=0)\n y.supervised_training(\"svc\")\n y.benchmark(\"Data/Benchmarks/phos.csv\", \"S\")\n del y\n print(\"x\", i)\n x = Predictor()\n x.load_data(file=\"Data/Training/clean_s.csv\")\n x.process_data(vector_function=\"chemical\", amino_acid=\"S\", imbalance_function=i, random_data=1)\n x.supervised_training(\"svc\")\n x.benchmark(\"Data/Benchmarks/phos.csv\", \"S\")\n del x\n", "Y Phosphorylation", "par = [\"pass\", \"ADASYN\", \"SMOTEENN\", \"random_under_sample\", \"ncl\", \"near_miss\"]\nfor i in par:\n print(\"y\", i)\n y = Predictor()\n y.load_data(file=\"Data/Training/clean_Y.csv\")\n y.process_data(vector_function=\"chemical\", amino_acid=\"Y\", imbalance_function=i, random_data=0)\n y.supervised_training(\"svc\")\n y.benchmark(\"Data/Benchmarks/phos.csv\", \"Y\")\n del y\n print(\"x\", i)\n x = Predictor()\n x.load_data(file=\"Data/Training/clean_Y.csv\")\n x.process_data(vector_function=\"chemical\", amino_acid=\"Y\", imbalance_function=i, random_data=1)\n x.supervised_training(\"svc\")\n x.benchmark(\"Data/Benchmarks/phos.csv\", \"Y\")\n del x\n", "T Phosphorylation", "par = [\"pass\", \"ADASYN\", \"SMOTEENN\", \"random_under_sample\", \"ncl\", \"near_miss\"]\nfor i in par:\n print(\"y\", i)\n y = Predictor()\n y.load_data(file=\"Data/Training/clean_t.csv\")\n y.process_data(vector_function=\"chemical\",
amino_acid=\"T\", imbalance_function=i, random_data=0)\n y.supervised_training(\"svc\")\n y.benchmark(\"Data/Benchmarks/phos.csv\", \"T\")\n del y\n print(\"x\", i)\n x = Predictor()\n x.load_data(file=\"Data/Training/clean_t.csv\")\n x.process_data(vector_function=\"chemical\", amino_acid=\"T\", imbalance_function=i, random_data=1)\n x.supervised_training(\"svc\")\n x.benchmark(\"Data/Benchmarks/phos.csv\", \"T\")\n del x\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tigerstat46/Deep-Learning
StateFarmDeep.ipynb
mit
[ "Thank you Mr.Vikramank for split data code in Python. You can see original code in https://github.com/Vikramank/Deep-Learning-/blob/master/Cats-and-Dogs/Classification-%20Cats%20and%20Dogs.ipynb", "#importing necessary packages\nimport numpy as np\nimport matplotlib.pyplot as plt \nimport tensorflow as tf\nimport tflearn\nimport tensorflow as tf\nfrom PIL import Image\n%matplotlib inline\n#for writing text files\nimport glob\nimport os \nimport random \n#reading images from a text file\nfrom tflearn.data_utils import image_preloader\nimport math\n\nIMAGE_FOLDER = '/home/tiger/Desktop/train'\nTRAIN_DATA = '/home/tiger/Desktop/train_data.txt'\nTEST_DATA = '/home/tiger/Desktop/test_data.txt'\ntrain_proportion=0.8\ntest_proportion=0.2\n\nrandom.seed(123)\n#read the image directories\nfilenames_image = os.listdir(IMAGE_FOLDER)\n#shuffling the data is important otherwise the model will be fed with a single class data for a long time and \n#network will not learn properly\nrandom.shuffle(filenames_image)\n\n#total number of images\ntotal=len(filenames_image)\n## *****training data******** \nfr = open(TRAIN_DATA, 'w')\ntrain_files=filenames_image[0: int(train_proportion*total)]\nfor filename in train_files:\n if filename[0:2] == 'c0':\n fr.write(IMAGE_FOLDER + '/'+ filename + ' 0\\n')\n elif filename[0:2] == 'c1':\n fr.write(IMAGE_FOLDER + '/'+ filename + ' 1\\n')\n elif filename[0:2] == 'c2':\n fr.write(IMAGE_FOLDER + '/'+ filename + ' 2\\n')\n elif filename[0:2] == 'c3':\n fr.write(IMAGE_FOLDER + '/'+ filename + ' 3\\n')\n elif filename[0:2] == 'c4':\n fr.write(IMAGE_FOLDER + '/'+ filename + ' 4\\n')\n elif filename[0:2] == 'c5':\n fr.write(IMAGE_FOLDER + '/'+ filename + ' 5\\n')\n elif filename[0:2] == 'c6':\n fr.write(IMAGE_FOLDER + '/'+ filename + ' 6\\n')\n elif filename[0:2] == 'c7':\n fr.write(IMAGE_FOLDER + '/'+ filename + ' 7\\n')\n elif filename[0:2] == 'c8':\n fr.write(IMAGE_FOLDER + '/'+ filename + ' 8\\n')\n elif filename[0:2] == 'c9':\n 
fr.write(IMAGE_FOLDER + '/'+ filename + ' 9\\n')\n\nfr.close()\n## *****testing data******** \nfr = open(TEST_DATA, 'w')\ntest_files=filenames_image[int(math.ceil(train_proportion*total)):int(math.ceil((train_proportion+test_proportion)*total))]\nfor filename in test_files:\n if filename[0:2] == 'c0':\n fr.write(IMAGE_FOLDER + '/'+ filename + ' 0\\n')\n elif filename[0:2] == 'c1':\n fr.write(IMAGE_FOLDER + '/'+ filename + ' 1\\n')\n elif filename[0:2] == 'c2':\n fr.write(IMAGE_FOLDER + '/'+ filename + ' 2\\n')\n elif filename[0:2] == 'c3':\n fr.write(IMAGE_FOLDER + '/'+ filename + ' 3\\n')\n elif filename[0:2] == 'c4':\n fr.write(IMAGE_FOLDER + '/'+ filename + ' 4\\n')\n elif filename[0:2] == 'c5':\n fr.write(IMAGE_FOLDER + '/'+ filename + ' 5\\n')\n elif filename[0:2] == 'c6':\n fr.write(IMAGE_FOLDER + '/'+ filename + ' 6\\n')\n elif filename[0:2] == 'c7':\n fr.write(IMAGE_FOLDER + '/'+ filename + ' 7\\n')\n elif filename[0:2] == 'c8':\n fr.write(IMAGE_FOLDER + '/'+ filename + ' 8\\n')\n elif filename[0:2] == 'c9':\n fr.write(IMAGE_FOLDER + '/'+ filename + ' 9\\n')\nfr.close()\n\nX_train, Y_train = image_preloader(TRAIN_DATA, image_shape=(30,40),mode='file', categorical_labels=True,normalize=True)\nX_test, Y_test = image_preloader(TEST_DATA, image_shape=(30,40),mode='file', categorical_labels=True,normalize=True)\n\nprint(\"Dataset\")\nprint(\"Number of training images {}\".format(len(X_train)))\nprint(\"Number of testing images {}\".format(len(X_test)))\nprint(\"Shape of an image {}\" .format(X_train[1].shape))\nprint(\"Shape of label:{} ,number of classes: {}\".format(Y_train[1].shape,len(Y_train[1])))\n\nplt.imshow(X_train[0])\nplt.show()", "แปลงข้อมูลจากรูปแบบ PIL เป็น Numpy Array เพื่อให้สามารถทำงานกับตัวแบบที่เขียนด้วย Keras<br> \nConvert image from PIL to Numpy Array to work with Keras", "train_x = np.array(X_train)\ntest_x = np.array(X_test)\n\ntrain_y = np.array(Y_train)\ntest_y = np.array(Y_test)", "Setting NVIDIA GPU", "import 
theano.sandbox.cuda\ntheano.sandbox.cuda.use(\"gpu0\")\n\nfrom keras.models import Model\nfrom keras.layers import Dense,Dropout,Flatten,Input, merge \nfrom keras.layers.convolutional import Convolution2D,MaxPooling2D\nfrom keras import backend as K\nfrom keras.layers.normalization import BatchNormalization\nK.set_image_dim_ordering('th')", "In the Statefarm case, I construct a Residual Network with a Convolutional Neural Network. Keras has 2 ways to write models (Sequential and Functional), but the Functional API makes it easier to implement ResNet than the Sequential API", "np.random.seed(123)\ninputs = Input(shape=(40, 30, 3))\nx = Convolution2D(64,3,3,activation='relu',border_mode='same')(inputs)\nx = Dropout(0.2)(x)\nx = MaxPooling2D(pool_size=(2,2))(x)\ny = Convolution2D(64,3,3,activation='relu',border_mode='same')(x)\ny = Dropout(0.2)(y)\ny = Convolution2D(64,3,3,activation='relu',border_mode='same')(y)\ny = Dropout(0.2)(y)\nz = merge([x,y],mode = 'sum')\nz = Flatten()(z)\nz = Dense(1024,activation='relu')(z)\nz = Dropout(0.5)(z)\npredictions = Dense(10, activation='softmax')(z)\nmodel = Model(input=inputs,output=predictions)\nmodel.compile(loss='categorical_crossentropy',optimizer='adam',metrics=['accuracy'])\nhistory = model.fit(train_x, train_y, validation_data=(test_x, test_y), nb_epoch=10, batch_size=32)\nscores = model.evaluate(test_x, test_y, verbose=0)\nprint(\"Accuracy: %.2f%%\" % (scores[1]*100))\n\nfrom keras.utils.visualize_util import plot\nplot(model, to_file='Desktop/model.png')\n\nplt.plot(history.history[\"loss\"])\nplt.plot(history.history[\"val_loss\"])\nplt.title(\"Model Loss\")\nplt.ylabel(\"Loss\")\nplt.xlabel(\"epoch\")\nplt.legend([\"train\",\"test\"],loc=\"upper right\")\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
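The train/test split logic in the StateFarm notebook above (shuffle the filenames, cut by proportion, derive the class label from the `c0`..`c9` prefix) can be expressed far more compactly than the long if/elif chain. A hedged sketch with invented filenames — `split_and_label` and its seed are illustrative, not part of the original:

```python
# Shuffle filenames deterministically, split by proportion, and read the
# label from the second character of the 'c<digit>' class prefix.
import math
import random

def split_and_label(filenames, train_proportion=0.8, seed=123):
    files = list(filenames)
    random.Random(seed).shuffle(files)
    cut = int(math.ceil(train_proportion * len(files)))

    def label(name):
        # 'c3_img_001.jpg' -> 3
        return int(name[1])

    train = [(f, label(f)) for f in files[:cut]]
    test = [(f, label(f)) for f in files[cut:]]
    return train, test

names = ['c%d_img_%03d.jpg' % (c, i) for c in range(10) for i in range(3)]
train, test = split_and_label(names)
# len(train) == 24, len(test) == 6
```

Shuffling before the cut matters for the same reason the notebook states: without it, the model would see a single class for long stretches of training.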
mari-linhares/tensorflow-workshop
code_samples/RNN/colorbot/colorbot.ipynb
apache-2.0
[ "Colorbot\nSpecial thanks to @MarkDaoust, who helped us with this material\nIn order to have a better experience, follow these steps:\n\nRead the whole notebook, try to understand what each part of the code is doing and get familiar with the implementation.\nFor each exercise in this notebook, make a copy of this notebook and try to implement what is expected. We suggest the following order for the exercises: HYPERPARAMETERS, EXPERIMENT, DATASET\nTroubles or doubts about the code/exercises? Ask the instructor about it or check colorbot_solutions.ipynb for a possible implementation/instruction if available\n\nContent of this notebook\nIn this notebook you'll find a full implementation of an RNN model using the TensorFlow Estimators, including comments and details about how to do it. \nOnce you finish this notebook, you'll have a better understanding of:\n * TensorFlow Estimators\n * TensorFlow DataSets\n * RNNs\nWhat is colorbot?\nColorbot is an RNN model that receives a word (sequence of characters) as input and learns to predict an rgb value that better represents this word.
As a result we have a color generator!\n\nDependencies", "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\n# Tensorflow\nimport tensorflow as tf\nprint('Use TensorFlow v1.2 or higher')\nprint('Your TensorFlow version:', tf.__version__) \n\n# Feeding function for enqueue data\nfrom tensorflow.python.estimator.inputs.queues import feeding_functions as ff\n\n# Rnn common functions\nfrom tensorflow.contrib.learn.python.learn.estimators import rnn_common\n\n# Run an experiment\nfrom tensorflow.contrib.learn.python.learn import learn_runner\n\n# Model builder\nfrom tensorflow.python.estimator import model_fn as model_fn_lib\n\n# Plot images with pyplot\n%matplotlib inline\nfrom matplotlib import pyplot as plt\n\n# Helpers for data processing\nimport pandas as pd\nimport numpy as np\nimport argparse", "Parameters", "# Data files\nTRAIN_INPUT = 'data/train.csv'\nTEST_INPUT = 'data/test.csv'\nMY_TEST_INPUT = 'data/mytest.csv'\n\n# Parameters for training\nBATCH_SIZE = 64\n\n# Parameters for data processing\nVOCAB_SIZE = 256\nCHARACTERS = [chr(i) for i in range(VOCAB_SIZE)]\nSEQUENCE_LENGTH_KEY = 'sequence_length'\nCOLOR_NAME_KEY = 'color_name'", "Helper functions", "# Returns the column values from a CSV file as a list\ndef _get_csv_column(csv_file, column_name):\n with open(csv_file, 'r') as f:\n df = pd.read_csv(f)\n return df[column_name].tolist()\n\n# Plots a color image\ndef _plot_rgb(rgb):\n data = [[rgb]]\n plt.figure(figsize=(2,2))\n plt.imshow(data, interpolation='nearest')\n plt.show()", "Input function\nHere we are defining the input pipeline using the Dataset API.\nOne special operation that we're doing is called group_by_window, what this function does is to map each consecutive element in this dataset to a key using key_func and then groups the elements by key. It then applies reduce_func to at most window_size elements matching the same key. 
All except the final window for each key will contain window_size elements; the final window may be smaller.\nIn the code below, what we're doing is using group_by_window to batch color names that have similar length together; this makes the code more efficient since the RNN will be unfolded (approximately) the same number of steps in each batch.\n\nImage from Sequence Models and the RNN API (TensorFlow Dev Summit 2017)\n EXERCISE DATASET (first complete the EXERCISE EXPERIMENT: change the input function below so it will just use a normal padded_batch instead of sorting the batches. Then run each model using experiments and compare the efficiency (time, global_step/sec) using TensorBoard.\nhint: to compare the implementations using tensorboard just copy the model_dir folder of both executions to the same directory (the model dir should be different each time you run the model) and point tensorboard to it with: tensorboard --logdir=path_to_model_dirs_par)", "def get_input_fn(csv_file, batch_size, num_epochs=1, shuffle=True):\n def _parse(line):\n '''\n This function will parse each line of the text,\n returning 3 variables.\n \n Each line contains: name, red, green, blue separated by \",\"\n Where:\n name: string\n red, green, blue: int [0, 255]\n \n The variables returned are:\n color: tensor containing the normalized rgb values that represent the color name.\n Each rgb value is a float in [0, 1].\n \n color_name: a sequence of characters. Example: if name is \"blue\"\n color_name will be [\"b\", \"l\", \"u\", \"e\"]\n \n length = len(color_name).
Example: if color_name = [\"b\", \"l\", \"u\", \"e\"], then length = 4 \n '''\n \n # split line\n items = tf.string_split([line],',').values\n\n # get 3 last values in the line that are the color rgb values\n color = tf.string_to_number(items[1:], out_type=tf.float32) / 255.0\n\n # split color_name (first value in the line)\n # into a sequence of characters and calculates the length\n color_name = tf.string_split([items[0]], '')\n length = color_name.indices[-1, 1] + 1 # length = index of last char + 1\n color_name = color_name.values\n \n return color, color_name, length\n\n def _length_bin(length, cast_value=5, max_bin_id=10):\n '''\n Chooses a bin for a word given it's length.\n The goal is to use group_by_window to group words\n with the ~ same ~ length in the same bin.\n\n Each bin will have the size of a batch, so it can train faster.\n '''\n bin_id = tf.cast(length / cast_value, dtype=tf.int64)\n return tf.minimum(bin_id, max_bin_id)\n\n def _pad_batch(ds, batch_size):\n return ds.padded_batch(batch_size, \n padded_shapes=([None], [None], []),\n padding_values=(0.0, chr(0), tf.cast(0, tf.int64)))\n\n def input_fn():\n # https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/data\n dataset = (\n tf.contrib.data.TextLineDataset(csv_file) # reading from the HD\n .skip(1) # skip header\n .repeat(num_epochs) # repeat dataset the number of epochs\n .map(_parse) # parse each line of text to variables\n .group_by_window(key_func=lambda color, color_name, length: _length_bin(length), # choose a bin\n reduce_func=lambda key, ds: _pad_batch(ds, batch_size), # apply reduce funtion\n window_size=batch_size)\n )\n \n # for our \"manual\" test we don't want to shuffle the data\n if shuffle:\n dataset = dataset.shuffle(buffer_size=100000)\n\n # create iterator\n color, color_name, length = dataset.make_one_shot_iterator().get_next()\n\n features = {\n COLOR_NAME_KEY: color_name,\n SEQUENCE_LENGTH_KEY: length,\n }\n\n return features, color\n return 
input_fn\n\ntrain_input_fn = get_input_fn(TRAIN_INPUT, BATCH_SIZE)\ntest_input_fn = get_input_fn(TEST_INPUT, BATCH_SIZE)", "Creating the Estimator model", "def get_model_fn(rnn_cell_sizes,\n label_dimension,\n dnn_layer_sizes=[],\n optimizer='SGD',\n learning_rate=0.01):\n \n def model_fn(features, labels, mode):\n \n color_name = features[COLOR_NAME_KEY]\n sequence_length = tf.cast(features[SEQUENCE_LENGTH_KEY], dtype=tf.int32) # int64 -> int32\n \n # ----------- Preparing input --------------------\n # Creating a tf constant to hold the characters used in the data\n mapping = tf.constant(CHARACTERS, name=\"mapping\")\n table = tf.contrib.lookup.index_table_from_tensor(mapping, dtype=tf.string)\n int_color_name = table.lookup(color_name)\n \n # converting color names to one hot representation\n color_name_onehot = tf.one_hot(int_color_name, depth=len(CHARACTERS) + 1)\n \n # ---------- RNN -------------------\n # Each RNN layer will consist of a LSTM cell\n rnn_layers = [tf.nn.rnn_cell.LSTMCell(size) for size in rnn_cell_sizes]\n \n # Construct the layers\n multi_rnn_cell = tf.nn.rnn_cell.MultiRNNCell(rnn_layers)\n \n # Runs the RNN model dynamically\n # more about it at: \n # https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn\n outputs, final_state = tf.nn.dynamic_rnn(cell=multi_rnn_cell,\n inputs=color_name_onehot,\n sequence_length=sequence_length,\n dtype=tf.float32)\n\n # Slice to keep only the last cell of the RNN\n last_activations = rnn_common.select_last_activations(outputs,\n sequence_length)\n\n # ------------ Dense layers -------------------\n # Construct dense layers on top of the last cell of the RNN\n for units in dnn_layer_sizes:\n last_activations = tf.layers.dense(\n last_activations, units, activation=tf.nn.relu)\n \n # Final dense layer for prediction\n predictions = tf.layers.dense(last_activations, label_dimension)\n\n # ----------- Loss and Optimizer ----------------\n loss = None\n train_op = None\n\n if mode != 
tf.estimator.ModeKeys.PREDICT: \n loss = tf.losses.mean_squared_error(labels, predictions)\n \n if mode == tf.estimator.ModeKeys.TRAIN: \n train_op = tf.contrib.layers.optimize_loss(\n loss,\n tf.contrib.framework.get_global_step(),\n optimizer=optimizer,\n learning_rate=learning_rate)\n \n return model_fn_lib.EstimatorSpec(mode,\n predictions=predictions,\n loss=loss,\n train_op=train_op)\n return model_fn", "EXERCISE HYPERPARAMETERS: try making changes to the model and see if you can improve the results.\nRun the original model, run yours, and compare them using TensorBoard. What improvements do you see?\nhint 0: change the type of RNNCell, maybe a GRUCell? Change the number of hidden layers, or add dnn layers.\nhint 1: to compare the implementations using tensorboard just copy the model_dir folder of both executions to the same directory (the model dir should be different each time you run the model) and point tensorboard to it with: tensorboard --logdir=path_to_model_dirs_par", "model_fn = get_model_fn(rnn_cell_sizes=[256, 128], # size of the hidden layers\n label_dimension=3, # since it is RGB\n dnn_layer_sizes=[128], # size of units in the dense layers on top of the RNN\n optimizer='Adam', # changing optimizer to Adam\n learning_rate=0.01)\n\nestimator = tf.estimator.Estimator(model_fn=model_fn, model_dir='colorbot')", "Training and Evaluating\n EXERCISE EXPERIMENT: The code below works, but we can use an experiment instead.
Add a cell that runs an experiment instead of interacting directly with the estimator.\nhint 0: you'll need to change the train_input_fn definition, think about it...\nhint 1: the change is related to the for loop", "NUM_EPOCHS = 40\nfor i in range(NUM_EPOCHS):\n print('Training epoch %d' % i)\n print('-' * 20)\n estimator.train(input_fn=train_input_fn)\nprint('Evaluating epoch %d' % i)\nprint('-' * 20)\nestimator.evaluate(input_fn = test_input_fn)", "Making Predictions", "def predict(estimator, input_file):\n preds = estimator.predict(input_fn=get_input_fn(input_file, 1, shuffle=False))\n color_names = _get_csv_column(input_file, 'name')\n\n print()\n for p, name in zip(preds, color_names):\n color = tuple(map(int, p * 255))\n print(name + ',', 'rgb:', color)\n _plot_rgb(p)\n\npredict(estimator, MY_TEST_INPUT)", "Pre-trained model predictions\nIn order to load the pre-trained model, we can just create an estimator using the model_fn and use the model_dir that contains the pre-trained model files; in this case it's 'pretrained'", "pre_estimator = tf.estimator.Estimator(model_dir='pretrained', model_fn=model_fn)\npredict(pre_estimator, MY_TEST_INPUT)", "This is hacky code from \"play_colorbot.py\" to interactively make predictions", "# Creating my own input function for a given color\ndef get_input_fn(color):\n def input_fn():\n seq_len = len(color)\n # color is now a sequence of chars\n color_split = tf.string_split([color], '').values\n\n # creating dataset\n dataset = tf.contrib.data.Dataset.from_tensors((color_split))\n # generating a batch, so it has the right rank\n dataset = dataset.batch(1)\n\n # creating iterator\n color_name = dataset.make_one_shot_iterator().get_next()\n\n features = {\n COLOR_NAME_KEY: color_name,\n SEQUENCE_LENGTH_KEY: [seq_len]\n }\n\n # we're just predicting, so the label can be None\n # if you're training make sure to return a label\n return features, None\n return input_fn\n\n# Making Predictions\nprint('Colorbot is ready to
generate colors!')\n\nEXIT_COMMAND = '<exit>'\nwhile True:\n color_name = raw_input('give me a color name (or %s): ' % (EXIT_COMMAND))\n if color_name == EXIT_COMMAND:\n break\n\n print('Generating color...')\n preds = estimator.predict(input_fn=get_input_fn(color_name))\n for p, name in zip(preds, [color_name]):\n color = tuple(map(int, p * 255))\n _plot_rgb(p)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
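The length-bucketing idea behind `_length_bin` and `group_by_window` in the colorbot input pipeline above — group words into bins by length so each batch needs roughly the same padding — amounts to the following. A pure-Python sketch with made-up color names (`bucket_by_length` is not part of the notebook); the bin parameters mirror `_length_bin`'s defaults:

```python
# Assign each word to a bin by length, capped at max_bin_id, then group.

def length_bin(length, cast_value=5, max_bin_id=10):
    return min(length // cast_value, max_bin_id)

def bucket_by_length(words, cast_value=5, max_bin_id=10):
    buckets = {}
    for w in words:
        b = length_bin(len(w), cast_value, max_bin_id)
        buckets.setdefault(b, []).append(w)
    return buckets

buckets = bucket_by_length(['red', 'blue', 'turquoise', 'midnight blue'])
# {0: ['red', 'blue'], 1: ['turquoise'], 2: ['midnight blue']}
```

Padding each bucket to its own longest member then wastes far fewer padded steps than padding every batch to the longest word in the whole dataset, which is exactly the efficiency argument the notebook makes for `group_by_window`.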
arcyfelix/Courses
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/02-NumPy/.ipynb_checkpoints/Numpy Exercises-checkpoint.ipynb
apache-2.0
[ "<a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>\n\n<center>Copyright Pierian Data 2017</center>\n<center>For more information, visit us at www.pieriandata.com</center>\nNumPy Exercises\nNow that we've learned about NumPy let's test your knowledge. We'll start off with a few simple tasks and then you'll be asked some more complicated questions.\n IMPORTANT NOTE! Make sure you don't run the cells directly above the example output shown, otherwise you will end up writing over the example output! \nImport NumPy as np\nCreate an array of 10 zeros", "# CODE HERE", "Create an array of 10 ones", "# CODE HERE", "Create an array of 10 fives", "# CODE HERE", "Create an array of the integers from 10 to 50", "# CODE HERE", "Create an array of all the even integers from 10 to 50", "# CODE HERE", "Create a 3x3 matrix with values ranging from 0 to 8", "# CODE HERE", "Create a 3x3 identity matrix", "# CODE HERE", "Use NumPy to generate a random number between 0 and 1", "# CODE HERE", "Use NumPy to generate an array of 25 random numbers sampled from a standard normal distribution", "# CODE HERE", "Create the following matrix:\nCreate an array of 20 linearly spaced points between 0 and 1:\nNumpy Indexing and Selection\nNow you will be given a few matrices, and be asked to replicate the resulting matrix outputs:", "# HERE IS THE GIVEN MATRIX CALLED MAT\n# USE IT FOR THE FOLLOWING TASKS\nmat = np.arange(1,26).reshape(5,5)\nmat\n\n# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW\n# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T\n# BE ABLE TO SEE THE OUTPUT ANY MORE\n\n# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW\n# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T\n# BE ABLE TO SEE THE OUTPUT ANY MORE\n\n# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW\n# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T\n# BE ABLE TO SEE THE OUTPUT ANY MORE\n\n# WRITE CODE HERE THAT REPRODUCES 
THE OUTPUT OF THE CELL BELOW\n# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T\n# BE ABLE TO SEE THE OUTPUT ANY MORE\n\n# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW\n# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T\n# BE ABLE TO SEE THE OUTPUT ANY MORE", "Now do the following\nGet the sum of all the values in mat", "# CODE HERE", "Get the standard deviation of the values in mat", "# CODE HERE", "Get the sum of all the columns in mat", "# CODE HERE", "Bonus Question\nWe worked a lot with random data with numpy, but is there a way we can ensure that we always get the same random numbers? Click Here for a Hint\nGreat Job!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
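The bonus question at the end of the NumPy exercises above is about reproducible random numbers; in NumPy the usual answer is seeding the generator (`np.random.seed`, or a seeded `np.random.default_rng()`). The same idea illustrated with the standard library's `random` module so the sketch runs without NumPy:

```python
# Re-seeding a pseudo-random generator replays the exact same sequence.
import random

random.seed(101)
first = [random.random() for _ in range(3)]

random.seed(101)  # same seed -> same numbers
second = [random.random() for _ in range(3)]

# first == second
```

This is what makes "random" experiments repeatable: the generator is deterministic given its seed.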
gdhungana/desispec
doc/nb/Bootstrap_tests.ipynb
bsd-3-clause
[ "Tests for the Bootstrap code", "# import", "Generate test files", "def pix_sub(infil, outfil, rows=(80,310)):\n hdu = fits.open(infil)\n # Trim\n img = hdu[0].data\n sub_img = img[:,rows[0]:rows[1]]\n # New\n newhdu = fits.PrimaryHDU(sub_img)\n # Header\n for key in ['CAMERA','VSPECTER','RDNOISE','EXPTIME']:\n newhdu.header[key] = hdu[0].header[key]\n # Write\n newhdulist = fits.HDUList([newhdu])\n newhdulist.writeto(outfil,clobber=True)\n print('Wrote: {:s}'.format(outfil))", "b camera\nArcs", "arc_fil = '/u/xavier/DESI/Wavelengths/pix-b0-00000000.fits'\n\nout_arc = '/u/xavier/DESI/Wavelengths/pix-sub_b0-00000000.fits'\n\npix_sub(arc_fil, out_arc)", "Flats", "flat_fil = '/u/xavier/DESI/Wavelengths/pix-b0-00000001.fits'\nout_flat = '/u/xavier/DESI/Wavelengths/pix-sub_b0-00000001.fits'\n\npix_sub(flat_fil, out_flat)", "Test via script\ndesi_bootcalib.py \\\n --fiberflat /Users/xavier/DESI/Wavelengths/pix-sub_b0-00000001.fits \\\n --arcfile /Users/xavier/DESI/Wavelengths/pix-sub_b0-00000000.fits \\\n --outfile /Users/xavier/DESI/Wavelengths/boot_psf-sub_b0.fits \\\n --qafile /Users/xavier/DESI/Wavelengths/qa_boot-sub_b0.pdf\n\nSuccess\nSetting up for unit tests\nPushing files to NERSC\n scp pix-sub_b0-00000000.fits.gz hopper.nersc.gov:/project/projectdirs/desi/www/data/spectest\n scp pix-sub_b0-00000001.fits.gz hopper.nersc.gov:/project/projectdirs/desi/www/data/spectest\n\nTesting the read", "import urllib2\n\nurl_arc = 'https://portal.nersc.gov/project/desi/data/spectest/pix-sub_b0-00000000.fits.gz'\n\nf = urllib2.urlopen(url_arc)\ntst_fil = 'tmp_arc.fits.gz'\nwith open(tst_fil, \"wb\") as code:\n code.write(f.read())\n\nurl_flat = 'https://portal.nersc.gov/project/desi/data/spectest/pix-sub_b0-00000001.fits.gz'\n\nf = urllib2.urlopen(url_flat)\ntst_fil = 'tmp_flat.fits.gz'\nwith open(tst_fil, \"wb\") as code:\n code.write(f.read())", "Now a unit test" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
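The `pix_sub` helper in the bootstrap-test notebook above trims a range of columns out of a 2-D image with `img[:, rows[0]:rows[1]]`. The same slice can be sketched with plain nested lists; this is illustrative only (the 4x6 "image" below is invented, and real FITS data would of course go through NumPy/astropy as in the notebook):

```python
# Keep only columns lo..hi-1 of every row, like img[:, lo:hi] in NumPy.

def trim_columns(img, rows=(80, 310)):
    lo, hi = rows
    return [row[lo:hi] for row in img]

# 4x6 toy image where pixel (r, c) has value 10*r + c.
img = [[c + 10 * r for c in range(6)] for r in range(4)]
sub = trim_columns(img, rows=(2, 5))
# sub[0] == [2, 3, 4] and sub[3] == [32, 33, 34]
```

Note that `pix_sub`'s `rows` parameter actually selects a *column* range (the second axis of the image array); the sketch keeps that naming for consistency with the notebook.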
m2dsupsdlclass/lectures-labs
labs/03_neural_recsys/Implicit_Feedback_Recsys_with_the_triplet_loss.ipynb
mit
[ "Triplet Loss for Implicit Feedback Neural Recommender Systems\nThe goal of this notebook is first to demonstrate how it is possible to build a bi-linear recommender system only using positive feedback data.\nIn a latter section we show that it is possible to train deeper architectures following the same design principles.\nThis notebook is inspired by Maciej Kula's Recommendations in Keras using triplet loss. Contrary to Maciej we won't use the BPR loss but instead will introduce the more common margin-based comparator.\nLoading the movielens-100k dataset\nFor the sake of computation time, we will only use the smallest variant of the movielens reviews dataset. Beware that the architectural choices and hyperparameters that work well on such a toy dataset will not necessarily be representative of the behavior when run on a more realistic dataset such as Movielens 10M or the Yahoo Songs dataset with 700M rating.", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport os.path as op\n\nfrom zipfile import ZipFile\ntry:\n from urllib.request import urlretrieve\nexcept ImportError: # Python 2 compat\n from urllib import urlretrieve\n\n\nML_100K_URL = \"http://files.grouplens.org/datasets/movielens/ml-100k.zip\"\nML_100K_FILENAME = ML_100K_URL.rsplit('/', 1)[1]\nML_100K_FOLDER = 'ml-100k'\n\nif not op.exists(ML_100K_FILENAME):\n print('Downloading %s to %s...' % (ML_100K_URL, ML_100K_FILENAME))\n urlretrieve(ML_100K_URL, ML_100K_FILENAME)\n\nif not op.exists(ML_100K_FOLDER):\n print('Extracting %s to %s...' 
% (ML_100K_FILENAME, ML_100K_FOLDER))\n ZipFile(ML_100K_FILENAME).extractall('.')\n\ndata_train = pd.read_csv(op.join(ML_100K_FOLDER, 'ua.base'), sep='\\t',\n names=[\"user_id\", \"item_id\", \"rating\", \"timestamp\"])\ndata_test = pd.read_csv(op.join(ML_100K_FOLDER, 'ua.test'), sep='\\t',\n names=[\"user_id\", \"item_id\", \"rating\", \"timestamp\"])\n\ndata_train.describe()\n\ndef extract_year(release_date):\n if hasattr(release_date, 'split'):\n components = release_date.split('-')\n if len(components) == 3:\n return int(components[2])\n # Missing value marker\n return 1920\n\n\nm_cols = ['item_id', 'title', 'release_date', 'video_release_date', 'imdb_url']\nitems = pd.read_csv(op.join(ML_100K_FOLDER, 'u.item'), sep='|',\n names=m_cols, usecols=range(5), encoding='latin-1')\nitems['release_year'] = items['release_date'].map(extract_year)\n\ndata_train = pd.merge(data_train, items)\ndata_test = pd.merge(data_test, items)\n\ndata_train.head()\n\n# data_test.describe()\n\nmax_user_id = max(data_train['user_id'].max(), data_test['user_id'].max())\nmax_item_id = max(data_train['item_id'].max(), data_test['item_id'].max())\n\nn_users = max_user_id + 1\nn_items = max_item_id + 1\n\nprint('n_users=%d, n_items=%d' % (n_users, n_items))", "Implicit feedback data\nConsider ratings >= 4 as positive feedback and ignore the rest:", "pos_data_train = data_train.query(\"rating >= 4\")\npos_data_test = data_test.query(\"rating >= 4\")", "Because the median rating is around 3.5, this cut will remove approximately half of the ratings from the datasets:", "pos_data_train['rating'].count()\n\npos_data_test['rating'].count()", "The Triplet Loss\nThe following section demonstrates how to build a low-rank quadratic interaction model between users and items.
The similarity score between a user and an item is defined by the unnormalized dot product of their respective embeddings.\nThe matching scores can be used to rank items to recommend to a specific user.\nTraining of the model parameters is achieved by randomly sampling negative items not seen by a pre-selected anchor user. We want the model embedding matrices to be such that the similarity between the user vector and the negative item vector is smaller than the similarity between the user vector and the positive item vector. Furthermore, we use a margin to push the negative item further apart from the anchor user.\nHere is the architecture of such a triplet model. The triplet name comes from the fact that the loss to optimize is defined for a triple (anchor_user, positive_item, negative_item):\n<img src=\"images/rec_archi_implicit_2.svg\" style=\"width: 600px;\" />\nWe call this model a triplet model with bi-linear interactions because the similarity between a user and an item is captured by a dot product of the first-level embedding vectors. This is therefore not a deep architecture.", "import tensorflow as tf\nfrom tensorflow.keras import layers\n\n\ndef identity_loss(y_true, y_pred):\n \"\"\"Ignore y_true and return the mean of y_pred\n \n This is a hack to work around the design of the Keras API, which is\n not really suited to training networks with a triplet loss by default.\n \"\"\"\n return tf.reduce_mean(y_pred)\n\n\nclass MarginLoss(layers.Layer):\n\n def __init__(self, margin=1.):\n super().__init__()\n self.margin = margin\n \n def call(self, inputs):\n pos_pair_similarity = inputs[0]\n neg_pair_similarity = inputs[1]\n \n diff = neg_pair_similarity - pos_pair_similarity\n return tf.maximum(diff + self.margin, 0.)", "Here is the actual code that builds the model(s) with shared weights.
Note that here we use the cosine similarity instead of unnormalized dot products (both seem to yield comparable results).\nThe triplet model is used to train the weights of the companion\nsimilarity model. The triplet model takes one user, one positive item\n(relative to the selected user) and one negative item, and is\ntrained with the margin comparator loss.\nThe similarity model takes one user and one item as input and returns a\ncompatibility score (aka the match score).", "from tensorflow.keras.models import Model\nfrom tensorflow.keras.layers import Embedding, Flatten, Input, Dense\nfrom tensorflow.keras.layers import Lambda, Dot\nfrom tensorflow.keras.regularizers import l2\n\n\nclass TripletModel(Model):\n def __init__(self, n_users, n_items, latent_dim=64,\n l2_reg=None, margin=1.):\n super().__init__(name=\"TripletModel\")\n \n self.margin = margin\n \n l2_reg = None if l2_reg == 0 else l2(l2_reg)\n\n self.user_layer = Embedding(n_users, latent_dim,\n input_length=1,\n input_shape=(1,),\n name='user_embedding',\n embeddings_regularizer=l2_reg)\n \n # The following embedding parameters will be shared to\n # encode both the positive and negative items.\n self.item_layer = Embedding(n_items, latent_dim,\n input_length=1,\n name=\"item_embedding\",\n embeddings_regularizer=l2_reg)\n \n # The 2 following layers are without parameters, and can\n # therefore be used for both positive and negative items.\n self.flatten = Flatten()\n self.dot = Dot(axes=1, normalize=True)\n\n self.margin_loss = MarginLoss(margin)\n \n def call(self, inputs, training=False):\n user_input = inputs[0]\n pos_item_input = inputs[1]\n neg_item_input = inputs[2]\n \n user_embedding = self.user_layer(user_input)\n user_embedding = self.flatten(user_embedding)\n \n pos_item_embedding = self.item_layer(pos_item_input)\n pos_item_embedding = self.flatten(pos_item_embedding)\n \n neg_item_embedding = self.item_layer(neg_item_input)\n neg_item_embedding = self.flatten(neg_item_embedding)\n \n # Similarity
computation between embeddings\n pos_similarity = self.dot([user_embedding, pos_item_embedding])\n neg_similarity = self.dot([user_embedding, neg_item_embedding])\n \n return self.margin_loss([pos_similarity, neg_similarity])\n \n\ntriplet_model = TripletModel(n_users, n_items,\n latent_dim=64, l2_reg=1e-6)\n\nclass MatchModel(Model):\n\n def __init__(self, user_layer, item_layer):\n super().__init__(name=\"MatchModel\")\n \n # Reuse shared weights for those layers:\n self.user_layer = user_layer\n self.item_layer = item_layer\n \n self.flatten = Flatten()\n self.dot = Dot(axes=1, normalize=True)\n \n def call(self, inputs):\n user_input = inputs[0]\n pos_item_input = inputs[1]\n \n user_embedding = self.user_layer(user_input)\n user_embedding = self.flatten(user_embedding)\n\n pos_item_embedding = self.item_layer(pos_item_input)\n pos_item_embedding = self.flatten(pos_item_embedding)\n \n pos_similarity = self.dot([user_embedding,\n pos_item_embedding])\n \n return pos_similarity\n \n\nmatch_model = MatchModel(triplet_model.user_layer,\n triplet_model.item_layer)", "Note that triplet_model and match_model have exactly as many parameters, since they share both the user and item embeddings. Their only difference is that the latter doesn't compute the negative similarity.\nQuality of Ranked Recommendations\nNow that we have a randomly initialized model we can start computing random recommendations.
To assess their quality we do the following for each user:\n\ncompute matching scores for items (except the movies that the user has already seen in the training set),\ncompare to the positive feedback actually collected on the test set using the ROC AUC ranking metric,\naverage ROC AUC scores across users to get the average performance of the recommender model on the test set.", "from sklearn.metrics import roc_auc_score\n\n\ndef average_roc_auc(model, data_train, data_test):\n \"\"\"Compute the ROC AUC for each user and average over users\"\"\"\n max_user_id = max(data_train['user_id'].max(),\n data_test['user_id'].max())\n max_item_id = max(data_train['item_id'].max(),\n data_test['item_id'].max())\n user_auc_scores = []\n for user_id in range(1, max_user_id + 1):\n pos_item_train = data_train[data_train['user_id'] == user_id]\n pos_item_test = data_test[data_test['user_id'] == user_id]\n \n # Rank all items except those already seen in the training set\n all_item_ids = np.arange(1, max_item_id + 1)\n items_to_rank = np.setdiff1d(\n all_item_ids, pos_item_train['item_id'].values)\n \n # Ground truth: return 1 for each item positively present in\n # the test set and 0 otherwise.\n expected = np.in1d(\n items_to_rank, pos_item_test['item_id'].values)\n \n if np.sum(expected) >= 1:\n # At least one positive test value to rank\n repeated_user_id = np.empty_like(items_to_rank)\n repeated_user_id.fill(user_id)\n\n predicted = model.predict(\n [repeated_user_id, items_to_rank], batch_size=4096)\n \n user_auc_scores.append(roc_auc_score(expected, predicted))\n\n return sum(user_auc_scores) / len(user_auc_scores)", "By default the model should make predictions that rank the items in random order.
The ROC AUC score is a ranking score that represents the expected value of correctly ordering uniformly sampled pairs of recommendations.\nA random (untrained) model should yield 0.50 ROC AUC on average.", "average_roc_auc(match_model, pos_data_train, pos_data_test)", "Training the Triplet Model\nLet's now fit the parameters of the model by sampling triplets: for each user, select a movie in the positive feedback set of that user and randomly sample another movie to serve as a negative item.\nNote that this sampling scheme could be improved by removing items that are marked as positive in the data to remove some label noise. In practice this does not seem to be a problem though.", "def sample_triplets(pos_data, max_item_id, random_seed=0):\n \"\"\"Sample negatives at random\"\"\"\n rng = np.random.RandomState(random_seed)\n user_ids = pos_data['user_id'].values\n pos_item_ids = pos_data['item_id'].values\n\n neg_item_ids = rng.randint(low=1, high=max_item_id + 1,\n size=len(user_ids))\n\n return [user_ids, pos_item_ids, neg_item_ids]", "Let's train the triplet model:", "# we plug the identity loss and a fake target variable ignored by\n# the model to be able to use the Keras API to train the triplet model\nfake_y = np.ones_like(pos_data_train[\"user_id\"])\n\ntriplet_model.compile(loss=identity_loss, optimizer=\"adam\")\n\nn_epochs = 10\nbatch_size = 64\n\nfor i in range(n_epochs):\n # Sample new negatives to build different triplets at each epoch\n triplet_inputs = sample_triplets(pos_data_train, max_item_id,\n random_seed=i)\n\n # Fit the model incrementally by doing a single pass over the\n # sampled triplets.\n triplet_model.fit(x=triplet_inputs, y=fake_y, shuffle=True,\n batch_size=batch_size, epochs=1)\n\n\n# Evaluate the convergence of the model.
Ideally we should prepare a\n# validation set and compute this at each epoch but this is too slow.\ntest_auc = average_roc_auc(match_model, pos_data_train, pos_data_test)\nprint(\"Epoch %d/%d: test ROC AUC: %0.4f\"\n % (i + 1, n_epochs, test_auc))", "Exercise:\nCount the number of parameters in match_model and triplet_model. Which model has the largest number of parameters?", "# print(match_model.summary())\n\n# print(triplet_model.summary())\n\n# %load solutions/triplet_parameter_count.py", "Training a Deep Matching Model on Implicit Feedback\nInstead of using hard-coded cosine similarities to predict the match of a (user_id, item_id) pair, we can instead specify a deep neural network based parametrisation of the similarity. The parameters of that matching model are also trained with the margin comparator loss:\n<img src=\"images/rec_archi_implicit_1.svg\" style=\"width: 600px;\" />\nExercise to complete as a home assignment:\n\n\nImplement a deep_match_model, deep_triplet_model pair of models\n for the architecture described in the schema. 
The last layer of\n the embedded Multi Layer Perceptron outputs a single scalar that\n encodes the similarity between a user and a candidate item.\n\n\nEvaluate the resulting model by computing the per-user average\n ROC AUC score on the test feedback data.\n\n\nCheck that the AUC ROC score is close to 0.50 for a randomly\n initialized model.\n\n\nCheck that you can reach at least 0.91 ROC AUC with this deep\n model (you might need to adjust the hyperparameters).\n\n\nHints:\n\n\nit is possible to reuse the code to create embeddings from the previous model\n definition;\n\n\nthe concatenation between user and the positive item embedding can be\n obtained with the Concatenate layer:\n\n\n```py\n concat = Concatenate()\npositive_embeddings_pair = concat([user_embedding,\n positive_item_embedding])\nnegative_embeddings_pair = concat([user_embedding,\n negative_item_embedding])\n\n```\n\nthose embedding pairs should be fed to a shared MLP instance to compute the similarity scores.", "from tensorflow.keras.models import Model\nfrom tensorflow.keras.layers import Embedding, Flatten, Dense\nfrom tensorflow.keras.layers import Concatenate, Dropout\nfrom tensorflow.keras.regularizers import l2\n\n\nclass MLP(layers.Layer):\n def __init__(self, n_hidden=1, hidden_size=64, dropout=0.,\n l2_reg=None):\n super().__init__()\n # TODO\n\n \nclass DeepTripletModel(Model):\n def __init__(self, n_users, n_items, user_dim=32, item_dim=64,\n margin=1., n_hidden=1, hidden_size=64, dropout=0,\n l2_reg=None):\n super().__init__()\n # TODO\n \n\nclass DeepMatchModel(Model):\n def __init__(self, user_layer, item_layer, mlp):\n super().__init__(name=\"MatchModel\")\n # TODO\n\n# %load solutions/deep_implicit_feedback_recsys.py", "Exercise:\nCount the number of parameters in deep_match_model and deep_triplet_model. 
Which model has the largest number of parameters?", "# print(deep_match_model.summary())\n\n# print(deep_triplet_model.summary())\n\n# %load solutions/deep_triplet_parameter_count.py", "Possible Extensions\nYou can implement any of the following ideas if you want to get a deeper understanding of recommender systems.\nLeverage User and Item metadata\nAs we did for the Explicit Feedback model, it's also possible to extend our models to take additional user and item metadata as side information when computing the match score.\nBetter Ranking Metrics\nIn this notebook we evaluated the quality of the ranked recommendations using the ROC AUC metric. This score reflects the ability of the model to correctly rank any pair of items (sampled uniformly at random among all possible items).\nIn practice recommender systems will only display a few recommendations to the user (typically 1 to 10). It is typically more informative to use an evaluation metric that characterizes the quality of the top-ranked items and attributes little or no importance to items that are not good recommendations for a specific user. Popular ranking metrics therefore include the Precision at k and the Mean Average Precision.\nYou can read up online about those metrics and try to implement them here.\nHard Negative Sampling\nIn this experiment we sampled negative items uniformly at random. However, after training the model for a while, it is possible that the vast majority of sampled negatives already have a similarity much lower than that of the positive pair, so that the margin comparator loss sets the majority of the gradients to zero, effectively wasting a lot of computation.\nGiven the current state of the recsys model, we could preferentially sample harder negatives (those the model still scores too high) in order to train the model closer to its decision boundary.
This strategy is implemented in the WARP loss [1].\nThe main drawback of hard negative sampling is that it increases the risk of severe overfitting if a significant fraction of the labels are noisy.\nFactorization Machines\nA very popular recommender system model is called Factorization Machines [2][3]. They too use low-rank vector representations of the inputs, but they do not use a cosine similarity or a neural network to model user/item compatibility.\nIt is possible to adapt our previous Keras code to replace the cosine similarities / MLP with the low-rank FM quadratic interactions by reading through this gentle introduction.\nIf you choose to do so, you can compare the quality of the predictions with those obtained by the pywFM project, which provides a Python wrapper for the official libFM C++ implementation. Maciej Kula also maintains lightfm, which implements an efficient and well-documented variant in Cython and Python.\nReferences:\n[1] Wsabie: Scaling Up To Large Vocabulary Image Annotation\nJason Weston, Samy Bengio, Nicolas Usunier, 2011\nhttps://research.google.com/pubs/pub37180.html\n\n[2] Factorization Machines, Steffen Rendle, 2010\nhttps://www.ismll.uni-hildesheim.de/pub/pdfs/Rendle2010FM.pdf\n\n[3] Factorization Machines with libFM, Steffen Rendle, 2012\nin ACM Trans. Intell. Syst. Technol., 3(3), May.\nhttp://doi.acm.org/10.1145/2168752.2168771" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mkarakoc/aim
examples/01_AIMpy_exp_cos_screened_coulomb_potential.ipynb
gpl-3.0
[ "Application of the Asymptotic Iteration Method to <br> the Exponential Cosine Screened Coulomb Potential\nO. Bayrak, et al. Int. J. Quant. Chem., 107 (2007), p. 1040\nhttp://onlinelibrary.wiley.com/doi/10.1002/qua.21240/epdf\nAtomic orbitals\n1s\n2s 2p 2p 2p\n3s 3p 3p 3p\n4s 3d 3d 3d 3d 3d 4p 4p 4p\n5s 4d 4d 4d 4d 4d 5p 5p 5p\n6s 4f 4f 4f 4f 4f 4f 4f 5d 5d 5d 5d 5d 6p 6p 6p\n7s 5f 5f 5f 5f 5f 5f 5f 6d 6d 6d 6d 6d 7p 7p 7p\nhttps://en.wikipedia.org/wiki/Atomic_orbital#Electron_placement_and_the_periodic_table\nImport AIM library", "# Python program to use AIM tools\nfrom asymptotic import *", "Definitions\nVariables", "En, m, hbar, L, r, r0 = se.symbols(\"En, m, hbar, L, r, r0\")\nbeta, delta, A, A1, A2, A3, A4, A5, A6 = se.symbols(\"beta, delta, A, A1, A2, A3, A4, A5, A6\")", "$\\lambda_0$ and $s_0$", "l0 = 2*(beta - (L+1)/r)\ns0 = -2*m*En/hbar**2 + A2 - beta**2 + (2*L*beta + 2*beta - A1)/r - A3*r**2 + A4*r**3 - A5*r**4 + A6*r**6", "Case: $\\delta=0.01$\ns states (1s, 2s, 3s, 4s)\nNumerical values for variables", "nL = o* 0\nndelta = o* 1/100\nnbeta = o* 6/10\n\nnA, nhbar, nm = o* 1, o* 1, o* 1\nnr0 = o* (nL+1)/nbeta\n\nnA1 = 2*nm*nA/nhbar**2\nnA2 = nA1*ndelta\nnA3 = nA1*ndelta**3/3\nnA4 = nA1*ndelta**4/6\nnA5 = nA1*ndelta**5/30\nnA6 = nA1*ndelta**7/630\n\npl0 = {beta:nbeta, L:nL}\nps0 = {hbar:nhbar, m:nm, delta:ndelta,\n beta:nbeta, L:nL, r0:nr0, \n A1:nA1, A2:nA2, A3:nA3, \n A4:nA4, A5:nA5, A6:nA6}", "Initialize AIM solver", "%%time\n# pass lambda_0, s_0 and variable values to aim class\necsc_d01L0 = aim(l0, s0, pl0, ps0)\necsc_d01L0.display_parameters()\necsc_d01L0.display_l0s0(0)\necsc_d01L0.parameters(En, r, nr0, nmax=101, nstep=10, dprec=500, tol=1e-101)", "Calculation of Taylor series coefficients of $\\lambda_0$ and $s_0$", "%%time\n# create coefficients for improved AIM\necsc_d01L0.c0()\necsc_d01L0.d0()\necsc_d01L0.cndn()", "The solution", "%%time\necsc_d01L0.get_arb_roots(showRoots='-r', printFormat=\"{:22.17f}\")", "p states", "%%time\n\nnL = o* 
1\nnr0 = o* (nL+1)/nbeta\n\npl0 = {beta:nbeta, L:nL}\nps0 = {hbar:nhbar, m:nm, delta:ndelta,\n beta:nbeta, L:nL, r0:nr0, \n A1:nA1, A2:nA2, A3:nA3, \n A4:nA4, A5:nA5, A6:nA6}\n\n# pass lambda_0, s_0 and variable values to aim class\necsc_d01L1 = aim(l0, s0, pl0, ps0)\necsc_d01L1.display_parameters()\necsc_d01L1.display_l0s0(0)\necsc_d01L1.parameters(En, r, nr0, nmax=101, nstep=10, dprec=500, tol=1e-101)\n\n# create coefficients for improved AIM\necsc_d01L1.c0()\necsc_d01L1.d0()\necsc_d01L1.cndn()\n\n# the solution\necsc_d01L1.get_arb_roots(showRoots='-r', printFormat=\"{:22.17f}\")", "d states", "%%time\n\nnL = o* 2\nnr0 = o* (nL+1)/nbeta\n\npl0 = {beta:nbeta, L:nL}\nps0 = {hbar:nhbar, m:nm, delta:ndelta,\n beta:nbeta, L:nL, r0:nr0, \n A1:nA1, A2:nA2, A3:nA3, \n A4:nA4, A5:nA5, A6:nA6}\n\n# pass lambda_0, s_0 and variable values to aim class\necsc_d01L2 = aim(l0, s0, pl0, ps0)\necsc_d01L2.display_parameters()\necsc_d01L2.display_l0s0(0)\necsc_d01L2.parameters(En, r, nr0, nmax=101, nstep=10, dprec=500, tol=1e-101)\n\n# create coefficients for improved AIM\necsc_d01L2.c0()\necsc_d01L2.d0()\necsc_d01L2.cndn()\n\n# the solution\necsc_d01L2.get_arb_roots(showRoots='-r', printFormat=\"{:22.17f}\")", "f states", "%%time\n\nnL = o* 3\nnr0 = o* (nL+1)/nbeta\n\npl0 = {beta:nbeta, L:nL}\nps0 = {hbar:nhbar, m:nm, delta:ndelta,\n beta:nbeta, L:nL, r0:nr0, \n A1:nA1, A2:nA2, A3:nA3, \n A4:nA4, A5:nA5, A6:nA6}\n\n# pass lambda_0, s_0 and variable values to aim class\necsc_d01L3 = aim(l0, s0, pl0, ps0)\necsc_d01L3.display_parameters()\necsc_d01L3.display_l0s0(0)\necsc_d01L3.parameters(En, r, nr0, nmax=101, nstep=10, dprec=500, tol=1e-101)\n\n# create coefficients for improved AIM\necsc_d01L3.c0()\necsc_d01L3.d0()\necsc_d01L3.cndn()\n\n# the solution\necsc_d01L3.get_arb_roots(showRoots='-r', printFormat=\"{:22.17f}\")", "Case: $\\delta=0.02$\ns states (1s, 2s, 3s, 4s)", "%%time\n\nnL = o* 0\nndelta = o* 2/100\nnbeta = o* 6/10\n\nnA, nhbar, nm = o* 1, o* 1, o* 1\nnr0 = o* 
(nL+1)/nbeta\n\nnA1 = 2*nm*nA/nhbar**2\nnA2 = nA1*ndelta\nnA3 = nA1*ndelta**3/3\nnA4 = nA1*ndelta**4/6\nnA5 = nA1*ndelta**5/30\nnA6 = nA1*ndelta**7/630\n\npl0 = {beta:nbeta, L:nL}\nps0 = {hbar:nhbar, m:nm, delta:ndelta,\n beta:nbeta, L:nL, r0:nr0, \n A1:nA1, A2:nA2, A3:nA3, \n A4:nA4, A5:nA5, A6:nA6}\n\n# pass lambda_0, s_0 and variable values to aim class\necsc_d02L0 = aim(l0, s0, pl0, ps0)\necsc_d02L0.display_parameters()\necsc_d02L0.display_l0s0(0)\necsc_d02L0.parameters(En, r, nr0, nmax=101, nstep=10, dprec=500, tol=1e-101)\n\n# create coefficients for improved AIM\necsc_d02L0.c0()\necsc_d02L0.d0()\necsc_d02L0.cndn()\n\n# the solution\necsc_d02L0.get_arb_roots(showRoots='-r', printFormat=\"{:22.17f}\")", "p states", "%%time\n\nnL = o* 1\nnr0 = o* (nL+1)/nbeta\n\npl0 = {beta:nbeta, L:nL}\nps0 = {hbar:nhbar, m:nm, delta:ndelta,\n beta:nbeta, L:nL, r0:nr0, \n A1:nA1, A2:nA2, A3:nA3, \n A4:nA4, A5:nA5, A6:nA6}\n\n# pass lambda_0, s_0 and variable values to aim class\necsc_d02L1 = aim(l0, s0, pl0, ps0)\necsc_d02L1.display_parameters()\necsc_d02L1.display_l0s0(0)\necsc_d02L1.parameters(En, r, nr0, nmax=101, nstep=10, dprec=500, tol=1e-101)\n\n# create coefficients for improved AIM\necsc_d02L1.c0()\necsc_d02L1.d0()\necsc_d02L1.cndn()\n\n# the solution\necsc_d02L1.get_arb_roots(showRoots='-r', printFormat=\"{:22.17f}\")", "d states", "%%time\n\nnL = o* 2\nnr0 = o* (nL+1)/nbeta\n\npl0 = {beta:nbeta, L:nL}\nps0 = {hbar:nhbar, m:nm, delta:ndelta,\n beta:nbeta, L:nL, r0:nr0, \n A1:nA1, A2:nA2, A3:nA3, \n A4:nA4, A5:nA5, A6:nA6}\n\n# pass lambda_0, s_0 and variable values to aim class\necsc_d02L2 = aim(l0, s0, pl0, ps0)\necsc_d02L2.display_parameters()\necsc_d02L2.display_l0s0(0)\necsc_d02L2.parameters(En, r, nr0, nmax=101, nstep=10, dprec=500, tol=1e-101)\n\n# create coefficients for improved AIM\necsc_d02L2.c0()\necsc_d02L2.d0()\necsc_d02L2.cndn()\n\n# the solution\necsc_d02L2.get_arb_roots(showRoots='-r', printFormat=\"{:22.17f}\")", "f states", "%%time\n\nnL = o* 3\nnr0 
= o* (nL+1)/nbeta\n\npl0 = {beta:nbeta, L:nL}\nps0 = {hbar:nhbar, m:nm, delta:ndelta,\n beta:nbeta, L:nL, r0:nr0, \n A1:nA1, A2:nA2, A3:nA3, \n A4:nA4, A5:nA5, A6:nA6}\n\n# pass lambda_0, s_0 and variable values to aim class\necsc_d02L3 = aim(l0, s0, pl0, ps0)\necsc_d02L3.display_parameters()\necsc_d02L3.display_l0s0(0)\necsc_d02L3.parameters(En, r, nr0, nmax=101, nstep=10, dprec=500, tol=1e-101)\n\n# create coefficients for improved AIM\necsc_d02L3.c0()\necsc_d02L3.d0()\necsc_d02L3.cndn()\n\n# the solution\necsc_d02L3.get_arb_roots(showRoots='-r', printFormat=\"{:22.17f}\")", "Case: $\\delta=0.06$\ns states (1s, 2s, 3s)", "%%time\n\nnL = o* 0\nndelta = o* 6/100\nnbeta = o* 6/10\n\nnA, nhbar, nm = o* 1, o* 1, o* 1\nnr0 = o* (nL+1)/nbeta\n\nnA1 = 2*nm*nA/nhbar**2\nnA2 = nA1*ndelta\nnA3 = nA1*ndelta**3/3\nnA4 = nA1*ndelta**4/6\nnA5 = nA1*ndelta**5/30\nnA6 = nA1*ndelta**7/630\n\npl0 = {beta:nbeta, L:nL}\nps0 = {hbar:nhbar, m:nm, delta:ndelta,\n beta:nbeta, L:nL, r0:nr0, \n A1:nA1, A2:nA2, A3:nA3, \n A4:nA4, A5:nA5, A6:nA6}\n\n# pass lambda_0, s_0 and variable values to aim class\necsc_d06L0 = aim(l0, s0, pl0, ps0)\necsc_d06L0.display_parameters()\necsc_d06L0.display_l0s0(0)\necsc_d06L0.parameters(En, r, nr0, nmax=101, nstep=10, dprec=500, tol=1e-101)\n\n# create coefficients for improved AIM\necsc_d06L0.c0()\necsc_d06L0.d0()\necsc_d06L0.cndn()\n\n# the solution\necsc_d06L0.get_arb_roots(showRoots='-r', printFormat=\"{:22.17f}\")", "p states", "%%time\n\nnL = o* 1\nnr0 = o* (nL+1)/nbeta\n\npl0 = {beta:nbeta, L:nL}\nps0 = {hbar:nhbar, m:nm, delta:ndelta,\n beta:nbeta, L:nL, r0:nr0, \n A1:nA1, A2:nA2, A3:nA3, \n A4:nA4, A5:nA5, A6:nA6}\n\n# pass lambda_0, s_0 and variable values to aim class\necsc_d06L1 = aim(l0, s0, pl0, ps0)\necsc_d06L1.display_parameters()\necsc_d06L1.display_l0s0(0)\necsc_d06L1.parameters(En, r, nr0, nmax=101, nstep=10, dprec=500, tol=1e-101)\n\n# create coefficients for improved AIM\necsc_d06L1.c0()\necsc_d06L1.d0()\necsc_d06L1.cndn()\n\n# the 
solution\necsc_d06L1.get_arb_roots(showRoots='-r', printFormat=\"{:22.17f}\")", "d states", "%%time\n\nnL = o* 2\nnr0 = o* (nL+1)/nbeta\n\npl0 = {beta:nbeta, L:nL}\nps0 = {hbar:nhbar, m:nm, delta:ndelta,\n beta:nbeta, L:nL, r0:nr0, \n A1:nA1, A2:nA2, A3:nA3, \n A4:nA4, A5:nA5, A6:nA6}\n\n# pass lambda_0, s_0 and variable values to aim class\necsc_d06L2 = aim(l0, s0, pl0, ps0)\necsc_d06L2.display_parameters()\necsc_d06L2.display_l0s0(0)\necsc_d06L2.parameters(En, r, nr0, nmax=101, nstep=10, dprec=500, tol=1e-101)\n\n# create coefficients for improved AIM\necsc_d06L2.c0()\necsc_d06L2.d0()\necsc_d06L2.cndn()\n\n# the solution\necsc_d06L2.get_arb_roots(showRoots='-r', printFormat=\"{:22.17f}\")", "f states", "%%time\n\nnL = o* 3\nnr0 = o* (nL+1)/nbeta\n\npl0 = {beta:nbeta, L:nL}\nps0 = {hbar:nhbar, m:nm, delta:ndelta,\n beta:nbeta, L:nL, r0:nr0, \n A1:nA1, A2:nA2, A3:nA3, \n A4:nA4, A5:nA5, A6:nA6}\n\n# pass lambda_0, s_0 and variable values to aim class\necsc_d06L3 = aim(l0, s0, pl0, ps0)\necsc_d06L3.display_parameters()\necsc_d06L3.display_l0s0(0)\necsc_d06L3.parameters(En, r, nr0, nmax=101, nstep=10, dprec=500, tol=1e-101)\n\n# create coefficients for improved AIM\necsc_d06L3.c0()\necsc_d06L3.d0()\necsc_d06L3.cndn()\n\n# the solution\necsc_d06L3.get_arb_roots(showRoots='-r', printFormat=\"{:22.17f}\")", "Case: $\\delta=0.1$\ns states (1s, 2s)", "%%time\n\nnL = o* 0\nndelta = o* 10/100\nnbeta = o* 6/10\n\nnA, nhbar, nm = o* 1, o* 1, o* 1\nnr0 = o* (nL+1)/nbeta\n\nnA1 = 2*nm*nA/nhbar**2\nnA2 = nA1*ndelta\nnA3 = nA1*ndelta**3/3\nnA4 = nA1*ndelta**4/6\nnA5 = nA1*ndelta**5/30\nnA6 = nA1*ndelta**7/630\n\npl0 = {beta:nbeta, L:nL}\nps0 = {hbar:nhbar, m:nm, delta:ndelta,\n beta:nbeta, L:nL, r0:nr0, \n A1:nA1, A2:nA2, A3:nA3, \n A4:nA4, A5:nA5, A6:nA6}\n\n# pass lambda_0, s_0 and variable values to aim class\necsc_d10L0 = aim(l0, s0, pl0, ps0)\necsc_d10L0.display_parameters()\necsc_d10L0.display_l0s0(0)\necsc_d10L0.parameters(En, r, nr0, nmax=101, nstep=10, dprec=500, 
tol=1e-101)\n\n# create coefficients for improved AIM\necsc_d10L0.c0()\necsc_d10L0.d0()\necsc_d10L0.cndn()\n\n# the solution\necsc_d10L0.get_arb_roots(showRoots='-r', printFormat=\"{:22.17f}\")", "p states", "%%time\n\nnL = o* 1\nnr0 = o* (nL+1)/nbeta\n\npl0 = {beta:nbeta, L:nL}\nps0 = {hbar:nhbar, m:nm, delta:ndelta,\n beta:nbeta, L:nL, r0:nr0, \n A1:nA1, A2:nA2, A3:nA3, \n A4:nA4, A5:nA5, A6:nA6}\n\n# pass lambda_0, s_0 and variable values to aim class\necsc_d10L1 = aim(l0, s0, pl0, ps0)\necsc_d10L1.display_parameters()\necsc_d10L1.display_l0s0(0)\necsc_d10L1.parameters(En, r, nr0, nmax=101, nstep=10, dprec=500, tol=1e-101)\n\n# create coefficients for improved AIM\necsc_d10L1.c0()\necsc_d10L1.d0()\necsc_d10L1.cndn()\n\n# the solution\necsc_d10L1.get_arb_roots(showRoots='-r', printFormat=\"{:22.17f}\")", "d states", "%%time\n\nnL = o* 2\nnr0 = o* (nL+1)/nbeta\n\npl0 = {beta:nbeta, L:nL}\nps0 = {hbar:nhbar, m:nm, delta:ndelta,\n beta:nbeta, L:nL, r0:nr0, \n A1:nA1, A2:nA2, A3:nA3, \n A4:nA4, A5:nA5, A6:nA6}\n\n# pass lambda_0, s_0 and variable values to aim class\necsc_d10L2 = aim(l0, s0, pl0, ps0)\necsc_d10L2.display_parameters()\necsc_d10L2.display_l0s0(0)\necsc_d10L2.parameters(En, r, nr0, nmax=101, nstep=10, dprec=500, tol=1e-101)\n\n# create coefficients for improved AIM\necsc_d10L2.c0()\necsc_d10L2.d0()\necsc_d10L2.cndn()\n\n# the solution\necsc_d10L2.get_arb_roots(showRoots='-r', printFormat=\"{:22.17f}\")", "f states", "%%time\n\nnL = o* 3\nnr0 = o* (nL+1)/nbeta\n\npl0 = {beta:nbeta, L:nL}\nps0 = {hbar:nhbar, m:nm, delta:ndelta,\n beta:nbeta, L:nL, r0:nr0, \n A1:nA1, A2:nA2, A3:nA3, \n A4:nA4, A5:nA5, A6:nA6}\n\n# pass lambda_0, s_0 and variable values to aim class\necsc_d10L3 = aim(l0, s0, pl0, ps0)\necsc_d10L3.display_parameters()\necsc_d10L3.display_l0s0(0)\necsc_d10L3.parameters(En, r, nr0, nmax=101, nstep=10, dprec=500, tol=1e-101)\n\n# create coefficients for improved AIM\necsc_d10L3.c0()\necsc_d10L3.d0()\necsc_d10L3.cndn()\n\n# the 
solution\necsc_d10L3.get_arb_roots(showRoots='-r', printFormat=\"{:22.17f}\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/tensorflow
tensorflow/lite/g3doc/models/modify/model_maker/speech_recognition.ipynb
apache-2.0
[ "Copyright 2022 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Retrain a speech recognition model with TensorFlow Lite Model Maker\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/lite/models/modify/model_maker/speech_recognition\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/models/modify/model_maker/speech_recognition.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/models/modify/model_maker/speech_recognition.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/models/modify/model_maker/speech_recognition.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n\n</table>\n\nIn this colab notebook, you'll learn how to use the TensorFlow Lite Model Maker to train a speech recognition model that can classify spoken words or short phrases using one-second sound samples. 
The Model Maker library uses transfer learning to retrain an existing TensorFlow model with a new dataset, which reduces the amount of sample data and time required for training. \nBy default, this notebook retrains the model (BrowserFft, from the TFJS Speech Command Recognizer) using a subset of words from the speech commands dataset (such as \"up,\" \"down,\" \"left,\" and \"right\"). Then it exports a TFLite model that you can run on a mobile device or embedded system (such as a Raspberry Pi). It also exports the trained model as a TensorFlow SavedModel.\nThis notebook is also designed to accept a custom dataset of WAV files, uploaded to Colab in a ZIP file. The more samples you have for each class, the better your accuracy will be, but because the transfer learning process uses feature embeddings from the pre-trained model, you can still get a fairly accurate model with only a few dozen samples in each of your classes.\nNote: The model we'll be training is optimized for speech recognition with one-second samples. If you want to perform more generic audio classification (such as detecting different types of music), we suggest you instead follow this Colab to retrain an audio classifier.\nIf you want to run the notebook with the default speech dataset, you can run the whole thing now by clicking Runtime > Run all in the Colab toolbar. 
However, if you want to use your own dataset, then continue down to Prepare the dataset and follow the instructions there.\nImport the required packages\nYou'll need TensorFlow, TFLite Model Maker, and some modules for audio manipulation, playback, and visualizations.", "!sudo apt -y install libportaudio2\n!pip install tflite-model-maker\n\nimport os\nimport glob\nimport random\nimport shutil\n\nimport librosa\nimport soundfile as sf\nfrom IPython.display import Audio\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nimport tensorflow as tf\nimport tflite_model_maker as mm\nfrom tflite_model_maker import audio_classifier\nfrom tflite_model_maker.config import ExportFormat\n\nprint(f\"TensorFlow Version: {tf.__version__}\")\nprint(f\"Model Maker Version: {mm.__version__}\")", "Prepare the dataset\nTo train with the default speech dataset, just run all the code below as-is.\nBut if you want to train with your own speech dataset, follow these steps:\nNote: \nThe model you'll retrain expects input data to be roughly one second of audio at 44.1 kHz. Model Maker performs automatic resampling for the training dataset, so there's no need to resample your dataset if it has a sample rate other than 44.1 kHz. But beware that audio samples longer than one second will be split into multiple one-second chunks, and the final chunk will be discarded if it's shorter than one second.\n\nBe sure each sample in your dataset is in WAV file format, about one second long. Then create a ZIP file with all your WAV files, organized into separate subfolders for each classification. For example, each sample for a speech command \"yes\" should be in a subfolder named \"yes\". Even if you have only one class, the samples must be saved in a subdirectory with the class name as the directory name.
(This script assumes your dataset is not split into train/validation/test sets and performs that split for you.)\nClick the Files tab in the left panel and just drag-drop your ZIP file there to upload it.\nUse the following drop-down option to set use_custom_dataset to True.\nThen skip to Prepare a custom audio dataset to specify your ZIP filename and dataset directory name.", "use_custom_dataset = False #@param [\"False\", \"True\"] {type:\"raw\"}", "Generate a background noise dataset\nWhether you're using the default speech dataset or a custom dataset, you should have a good set of background noises so your model can distinguish speech from other noises (including silence). \nBecause the following background samples are provided in WAV files that are a minute long or longer, we need to split them up into smaller one-second samples so we can reserve some for our test dataset. We'll also combine a couple different sample sources to build a comprehensive set of background noises and silence:", "tf.keras.utils.get_file('speech_commands_v0.01.tar.gz',\n 'http://download.tensorflow.org/data/speech_commands_v0.01.tar.gz',\n cache_dir='./',\n cache_subdir='dataset-speech',\n extract=True)\ntf.keras.utils.get_file('background_audio.zip',\n 'https://storage.googleapis.com/download.tensorflow.org/models/tflite/sound_classification/background_audio.zip',\n cache_dir='./',\n cache_subdir='dataset-background',\n extract=True)\n", "Note: Although there is a newer version available, we're using v0.01 of the speech commands dataset because it's a smaller download. 
v0.01 includes 30 commands, while v0.02 adds five more (\"backward\", \"forward\", \"follow\", \"learn\", and \"visual\").", "# Create a list of all the background wav files\nfiles = glob.glob(os.path.join('./dataset-speech/_background_noise_', '*.wav'))\nfiles = files + glob.glob(os.path.join('./dataset-background', '*.wav'))\n\nbackground_dir = './background'\nos.makedirs(background_dir, exist_ok=True)\n\n# Loop through all files and split each into several one-second wav files\nfor file in files:\n filename = os.path.basename(os.path.normpath(file))\n print('Splitting', filename)\n name = os.path.splitext(filename)[0]\n rate = librosa.get_samplerate(file)\n length = round(librosa.get_duration(filename=file))\n for i in range(length - 1):\n start = i * rate\n stop = (i * rate) + rate\n data, _ = sf.read(file, start=start, stop=stop)\n sf.write(os.path.join(background_dir, name + str(i) + '.wav'), data, rate)", "Prepare the speech commands dataset\nWe already downloaded the speech commands dataset, so now we just need to prune the number of classes for our model.\nThis dataset includes over 30 speech command classifications, and most of them have over 2,000 samples. But because we're using transfer learning, we don't need that many samples. 
So the following code does a few things:\n\nSpecify which classifications we want to use, and delete the rest.\nKeep only 150 samples of each class for training (to prove that transfer learning works well with smaller datasets and simply to reduce the training time).\nCreate a separate directory for a test dataset so we can easily run inference with them later.", "if not use_custom_dataset:\n  commands = [ \"up\", \"down\", \"left\", \"right\", \"go\", \"stop\", \"on\", \"off\", \"background\"]\n  dataset_dir = './dataset-speech'\n  test_dir = './dataset-test'\n\n  # Move the processed background samples\n  shutil.move(background_dir, os.path.join(dataset_dir, 'background'))   \n\n  # Delete all directories that are not in our commands list\n  dirs = glob.glob(os.path.join(dataset_dir, '*/'))\n  for dir in dirs:\n    name = os.path.basename(os.path.normpath(dir))\n    if name not in commands:\n      shutil.rmtree(dir)\n\n  # Count is per class\n  sample_count = 150\n  test_data_ratio = 0.2\n  test_count = round(sample_count * test_data_ratio)\n\n  # Loop through child directories (each class of wav files)\n  dirs = glob.glob(os.path.join(dataset_dir, '*/'))\n  for dir in dirs:\n    files = glob.glob(os.path.join(dir, '*.wav'))\n    random.seed(42)\n    random.shuffle(files)\n    # Move test samples:\n    for file in files[sample_count:sample_count + test_count]:\n      class_dir = os.path.basename(os.path.normpath(dir))\n      os.makedirs(os.path.join(test_dir, class_dir), exist_ok=True)\n      os.rename(file, os.path.join(test_dir, class_dir, os.path.basename(file)))\n    # Delete remaining samples\n    for file in files[sample_count + test_count:]:\n      os.remove(file)", "Prepare a custom dataset\nIf you want to train the model with your own speech dataset, you need to upload your samples as WAV files in a ZIP (as described above) and modify the following variables to specify your dataset:", "if use_custom_dataset:\n  # Specify the ZIP file you uploaded:\n  !unzip YOUR-FILENAME.zip\n  # Specify the unzipped path to your custom 
dataset\n  # (this path contains all the subfolders with classification names):\n  dataset_dir = './YOUR-DIRNAME'", "After changing the filename and path name above, you're ready to train the model with your custom dataset. In the Colab toolbar, select Runtime > Run all to run the whole notebook.\nThe following code integrates our new background noise samples into your dataset and then separates a portion of all samples to create a test set.", "def move_background_dataset(dataset_dir):\n  dest_dir = os.path.join(dataset_dir, 'background')\n  if os.path.exists(dest_dir):\n    files = glob.glob(os.path.join(background_dir, '*.wav'))\n    for file in files:\n      shutil.move(file, dest_dir)\n  else:\n    shutil.move(background_dir, dest_dir)\n\nif use_custom_dataset:\n  # Move background samples into custom dataset\n  move_background_dataset(dataset_dir)\n\n  # Now we separate some of the files that we'll use for testing:\n  test_dir = './dataset-test'\n  test_data_ratio = 0.2\n  dirs = glob.glob(os.path.join(dataset_dir, '*/'))\n  for dir in dirs:\n    files = glob.glob(os.path.join(dir, '*.wav'))\n    test_count = round(len(files) * test_data_ratio)\n    random.seed(42)\n    random.shuffle(files)\n    # Move test samples:\n    for file in files[:test_count]:\n      class_dir = os.path.basename(os.path.normpath(dir))\n      os.makedirs(os.path.join(test_dir, class_dir), exist_ok=True)\n      os.rename(file, os.path.join(test_dir, class_dir, os.path.basename(file)))\n    print('Moved', test_count, 'files from', class_dir)", "Play a sample\nTo be sure the dataset looks correct, let's play a random sample from the test set:", "def get_random_audio_file(samples_dir):\n  files = os.path.abspath(os.path.join(samples_dir, '*/*.wav'))\n  files_list = glob.glob(files)\n  random_audio_path = random.choice(files_list)\n  return random_audio_path\n\ndef show_sample(audio_path):\n  audio_data, sample_rate = sf.read(audio_path)\n  class_name = os.path.basename(os.path.dirname(audio_path))\n  print(f'Class: {class_name}')\n  print(f'File: 
{audio_path}')\n print(f'Sample rate: {sample_rate}')\n print(f'Sample length: {len(audio_data)}')\n\n plt.title(class_name)\n plt.plot(audio_data)\n display(Audio(audio_data, rate=sample_rate))\n\nrandom_audio = get_random_audio_file(test_dir)\nshow_sample(random_audio)", "Define the model\nWhen using Model Maker to retrain any model, you have to start by defining a model spec. The spec defines the base model from which your new model will extract feature embeddings to begin learning new classes. The spec for this speech recognizer is based on the pre-trained BrowserFft model from TFJS.\nThe model expects input as an audio sample that's 44.1 kHz, and just under a second long: the exact sample length must be 44034 frames.\nYou don't need to do any resampling with your training dataset. Model Maker takes care of that for you. But when you later run inference, you must be sure that your input matches that expected format.\nAll you need to do here is instantiate the BrowserFftSpec:", "spec = audio_classifier.BrowserFftSpec()", "Load your dataset\nNow you need to load your dataset according to the model specifications. Model Maker includes the DataLoader API, which will load your dataset from a folder and ensure it's in the expected format for the model spec.\nWe already reserved some test files by moving them to a separate directory, which makes it easier to run inference with them later. 
Now we'll create a DataLoader for each split: the training set, the validation set, and the test set.\nLoad the speech commands dataset", "if not use_custom_dataset:\n train_data_ratio = 0.8\n train_data = audio_classifier.DataLoader.from_folder(\n spec, dataset_dir, cache=True)\n train_data, validation_data = train_data.split(train_data_ratio)\n test_data = audio_classifier.DataLoader.from_folder(\n spec, test_dir, cache=True)", "Load a custom dataset\nNote: Setting cache=True is important to make training faster (especially when the dataset must be re-sampled) but it will also require more RAM to hold the data. If you use a very large custom dataset, caching might exceed your RAM capacity.", "if use_custom_dataset:\n train_data_ratio = 0.8\n train_data = audio_classifier.DataLoader.from_folder(\n spec, dataset_dir, cache=True)\n train_data, validation_data = train_data.split(train_data_ratio)\n test_data = audio_classifier.DataLoader.from_folder(\n spec, test_dir, cache=True)\n", "Train the model\nNow we'll use the Model Maker create() function to create a model based on our model spec and training dataset, and begin training.\nIf you're using a custom dataset, you might want to change the batch size as appropriate for the number of samples in your train set.\nNote: The first epoch takes longer because it must create the cache.", "# If your dataset has fewer than 100 samples per class,\n# you might want to try a smaller batch size\nbatch_size = 25\nepochs = 25\nmodel = audio_classifier.create(train_data, spec, validation_data, batch_size, epochs)", "Review the model performance\nEven if the accuracy/loss looks good from the training output above, it's important to also run the model using test data that the model has not seen yet, which is what the evaluate() method does here:", "model.evaluate(test_data)", "View the confusion matrix\nWhen training a classification model such as this one, it's also useful to inspect the confusion matrix. 
The confusion matrix gives you a detailed visual representation of how well your classifier performs for each classification in your test data.", "def show_confusion_matrix(confusion, test_labels):\n  \"\"\"Compute confusion matrix and normalize.\"\"\"\n  confusion_normalized = confusion.astype(\"float\") / confusion.sum(axis=1)\n  sns.set(rc = {'figure.figsize':(6,6)})\n  sns.heatmap(\n      confusion_normalized, xticklabels=test_labels, yticklabels=test_labels,\n      cmap='Blues', annot=True, fmt='.2f', square=True, cbar=False)\n  plt.title(\"Confusion matrix\")\n  plt.ylabel(\"True label\")\n  plt.xlabel(\"Predicted label\")\n\nconfusion_matrix = model.confusion_matrix(test_data)\nshow_confusion_matrix(confusion_matrix.numpy(), test_data.index_to_label)", "Export the model\nThe last step is exporting your model into the TensorFlow Lite format for execution on mobile/embedded devices and into the SavedModel format for execution elsewhere.\nWhen exporting a .tflite file from Model Maker, it includes model metadata that describes various details that can later help during inference. It even includes a copy of the classification labels file, so you don't need a separate labels.txt file. (In the next section, we show how to use this metadata to run an inference.)", "TFLITE_FILENAME = 'browserfft-speech.tflite'\nSAVE_PATH = './models'\n\nprint(f'Exporting the model to {SAVE_PATH}')\nmodel.export(SAVE_PATH, tflite_filename=TFLITE_FILENAME)\nmodel.export(SAVE_PATH, export_format=[mm.ExportFormat.SAVED_MODEL, mm.ExportFormat.LABEL])", "Run inference with TF Lite model\nNow your TFLite model can be deployed and run using any of the supported inferencing libraries or with the new TFLite AudioClassifier Task API. The following code shows how you can run inference with the .tflite model in Python.", "# This library provides the TFLite metadata API\n!
pip install -q tflite_support\n\nfrom tflite_support import metadata\nimport json\n\ndef get_labels(model):\n \"\"\"Returns a list of labels, extracted from the model metadata.\"\"\"\n displayer = metadata.MetadataDisplayer.with_model_file(model)\n labels_file = displayer.get_packed_associated_file_list()[0]\n labels = displayer.get_associated_file_buffer(labels_file).decode()\n return [line for line in labels.split('\\n')]\n\ndef get_input_sample_rate(model):\n \"\"\"Returns the model's expected sample rate, from the model metadata.\"\"\"\n displayer = metadata.MetadataDisplayer.with_model_file(model)\n metadata_json = json.loads(displayer.get_metadata_json())\n input_tensor_metadata = metadata_json['subgraph_metadata'][0][\n 'input_tensor_metadata'][0]\n input_content_props = input_tensor_metadata['content']['content_properties']\n return input_content_props['sample_rate']", "To observe how well the model performs with real samples, run the following code block over and over. Each time, it will fetch a new test sample and run inference with it, and you can listen to the audio sample below.", "# Get a WAV file for inference and list of labels from the model\ntflite_file = os.path.join(SAVE_PATH, TFLITE_FILENAME)\nlabels = get_labels(tflite_file)\nrandom_audio = get_random_audio_file(test_dir)\n\n# Ensure the audio sample fits the model input\ninterpreter = tf.lite.Interpreter(tflite_file)\ninput_details = interpreter.get_input_details()\noutput_details = interpreter.get_output_details()\ninput_size = input_details[0]['shape'][1]\nsample_rate = get_input_sample_rate(tflite_file)\naudio_data, _ = librosa.load(random_audio, sr=sample_rate)\nif len(audio_data) < input_size:\n audio_data.resize(input_size)\naudio_data = np.expand_dims(audio_data[:input_size], axis=0)\n\n# Run inference\ninterpreter.allocate_tensors()\ninterpreter.set_tensor(input_details[0]['index'], audio_data)\ninterpreter.invoke()\noutput_data = 
interpreter.get_tensor(output_details[0]['index'])\n\n# Display prediction and ground truth\ntop_index = np.argmax(output_data[0])\nlabel = labels[top_index]\nscore = output_data[0][top_index]\nprint('---prediction---')\nprint(f'Class: {label}\\nScore: {score}')\nprint('----truth----')\nshow_sample(random_audio)", "Download the TF Lite model\nNow you can deploy the TF Lite model to your mobile or embedded device. You don't need to download the labels file because you can instead retrieve the labels from the .tflite file metadata, as shown in the previous inferencing example.", "try:\n  from google.colab import files\nexcept ImportError:\n  pass\nelse:\n  files.download(tflite_file)", "Check out our end-to-end example apps that perform inferencing with TFLite audio models on Android and iOS." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
bearing/dosenet-analysis
Programming Lesson Modules/Module 2- Import Web CSVs.ipynb
mit
[ "Module 2- Import Web CSVs\nauthor: Radley Rigonan\nThis is an example of reading and importing .CSV files from a direct download link (DDL). DDLs are hyperlinks that point to a file that will immediately be downloaded by your internet browser.\nIn this module, I will be using lbl.csv which can be accessed from the following link.\nhttp://radwatch.berkeley.edu/sites/default/files/dosenet/lbl.csv", "import csv\nimport io\nimport urllib.request \nurl = 'https://radwatch.berkeley.edu/sites/default/files/dosenet/lbl.csv'", "The io module allows Python to deal with objects formatted as bytes. Web sources are usually formatted in HTTP/bytes, rendering them incompatible with default Python modules.\nThe urllib module provides an interface to fetch data from the Internet.\nThe following lines will access the DDL, make the file compatible with Python, and print the CSV. Take note that only the first two lines are different from reading a .CSV from your local disk storage.", "def printwebCSV(url):\n    response = urllib.request.urlopen(url) \n    # This line will fail without internet access.\n    csvfile = io.TextIOWrapper(response)\n    # io.TextIOWrapper decodes HTTP data and encodes the data as string \n    # objects that can be understood by Python \n    reader = csv.reader(csvfile)\n    for row in reader:\n        print(', '.join(row))", "The next commands are an example of importing a CSV from a DDL. It also uses more compact syntax in order to reduce the number of lines:", "def importwebCSV(url):\n    response = urllib.request.urlopen(url)\n    reader = csv.reader(io.TextIOWrapper(response))  \n    datetime = []    \n    cpm = []\n    line = 0\n    for row in reader:\n        if line != 0:\n            datetime.append(row[0])\n            cpm.append(float(row[6]))\n        line += 1    \n        # Python syntax for line = line + 1 (+1 to current value for line)\n    return (datetime,cpm)", "This example typifies the overwhelming amount of data that you can handle with Python! 
In only a few seconds, this script can record over 60,000 data points in your computer's memory." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
gabrielusvicente/data-science-playground
develop/gs-ISL_advertising.ipynb
mit
[ "import numpy as np\nimport pandas as pd\n\n# dataset from Chapter 3 of An Introduction to Statistical Learning\ndata = pd.read_csv('http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv', index_col=0)\n\n# check the shape of the dataframe (rows, columns)\ndata.shape\ndata.head()\n\nimport seaborn as sns\n\n%matplotlib inline\n\n# visualize the relationship between the features and the response using scatterplots\nsns.pairplot(data, x_vars=['TV','radio','newspaper'], y_vars=['sales'], height=5, aspect=0.8, kind='reg')\n\n# create a list of feature names\nfeature_cols = ['TV','radio','newspaper']\n\n# select a subset of the original dataframe\nX = data[feature_cols]\n\n# create a list of response\nresponse_cols = ['sales']\n\n# select a subset of the original dataframe\ny = data[response_cols]\n\n#from sklearn.cross_validation import train_test_split\nfrom sklearn.model_selection import train_test_split\n\n# default split is 75% for training and 25% for testing\nX_train, X_test, y_train, y_test = train_test_split(X, y)\n\nfrom sklearn.linear_model import LinearRegression\n\n# instantiate\nlinreg = LinearRegression()\n\n# fit the model to the training dataset\nlinreg.fit(X_train, y_train)\n\n# print the intercept and coefficients\nprint(linreg.intercept_)\nprint(linreg.coef_)", "Model evaluation metrics for regression\nEvaluation metrics for classification problems, such as accuracy, are not useful for regression.\nLet's create some example numeric predictions, and calculate three common evaluation metrics for regression.", "# define numerical examples\ntrue = [100, 50, 30, 20]\npred = [90, 50, 50, 30]", "Mean Absolute Error (MAE) is the mean of the absolute value of the errors: \n$$\\frac{1}{n} \\sum_{i=1}^{n} | y_i - \\hat{y}_i| $$", "# calculate MAE using scikit-learn\nfrom sklearn import metrics\nprint(metrics.mean_absolute_error(true, pred))", "Mean Squared Error (MSE) is the mean of the squared errors: \n$$\\frac{1}{n} \\sum_{i=1}^{n} ( y_i - \\hat{y}_i)^2 $$", 
"# calculate MSE using scikit-learn\nprint(metrics.mean_squared_error(true, pred))", "Root Mean Squared Error (RMSE) is the square root of the mean of the squared errors: \n$$\\sqrt{\\frac{1}{n} \\sum_{i=1}^{n} ( y_i - \\hat{y}_i)^2}$$", "# calculate RMSE using scikit-learn\nimport numpy as np\nprint(np.sqrt(metrics.mean_squared_error(true, pred)))", "Comparing these metrics:\n\nMAE is the easiest to understand, because it's the average error.\nMSE is more popular than MAE, because MSE \"punishes\" larger errors.\nRMSE is even more popular than MSE, because RMSE is interpretable in the \"y\" units.\n\nRMSE for our sales predictions:", "# make predictions on the testing subset\ny_pred = linreg.predict(X_test)\n\n# RMSE\nprint(np.sqrt(metrics.mean_squared_error(y_test, y_pred)))", "Feature Selection", "# select first two features\nfeature_cols = ['TV', 'radio']\n\n# select subset\nX = data[feature_cols]\n\n#y = data.Sales\ny = data[response_cols]\n\n# split into training and testing\nX_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)\n\n# fit the model\nlinreg.fit(X_train, y_train)\n\n# make predictions\ny_pred = linreg.predict(X_test)\n\n# compute RMSE\nprint(np.sqrt(metrics.mean_squared_error(y_test, y_pred)))" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
fastai/course-v3
nbs/dl2/translation.ipynb
apache-2.0
[ "from fastai.text import *", "Reduce original dataset to questions", "path = Config().data_path()/'giga-fren'", "You only need to execute the setup cells once, uncomment to run. The dataset can be downloaded here.", "#! wget https://s3.amazonaws.com/fast-ai-nlp/giga-fren.tgz -P {path}\n#! tar xf {path}/giga-fren.tgz -C {path} \n\n# with open(path/'giga-fren.release2.fixed.fr') as f:\n# fr = f.read().split('\\n')\n\n# with open(path/'giga-fren.release2.fixed.en') as f:\n# en = f.read().split('\\n')\n\n# re_eq = re.compile('^(Wh[^?.!]+\\?)')\n# re_fq = re.compile('^([^?.!]+\\?)')\n# en_fname = path/'giga-fren.release2.fixed.en'\n# fr_fname = path/'giga-fren.release2.fixed.fr'\n\n# lines = ((re_eq.search(eq), re_fq.search(fq)) \n# for eq, fq in zip(open(en_fname, encoding='utf-8'), open(fr_fname, encoding='utf-8')))\n# qs = [(e.group(), f.group()) for e,f in lines if e and f]\n\n# qs = [(q1,q2) for q1,q2 in qs]\n# df = pd.DataFrame({'fr': [q[1] for q in qs], 'en': [q[0] for q in qs]}, columns = ['en', 'fr'])\n# df.to_csv(path/'questions_easy.csv', index=False)\n\n# del en, fr, lines, qs, df # free RAM or restart the nb \n\n### fastText pre-trained word vectors https://fasttext.cc/docs/en/crawl-vectors.html\n#! wget https://dl.fbaipublicfiles.com/fasttext/vectors-crawl/cc.fr.300.bin.gz -P {path}\n#! wget https://dl.fbaipublicfiles.com/fasttext/vectors-crawl/cc.en.300.bin.gz -P {path}\n#! gzip -d {path}/cc.fr.300.bin.gz \n#! 
gzip -d {path}/cc.en.300.bin.gz\n\npath.ls()", "Put them in a DataBunch\nOur questions look like this now:", "df = pd.read_csv(path/'questions_easy.csv')\ndf.head()", "To make it simple, we lowercase everything.", "df['en'] = df['en'].apply(lambda x:x.lower())\ndf['fr'] = df['fr'].apply(lambda x:x.lower())", "The first thing is that we will need to collate inputs and targets in a batch: they have different lengths, so we need to add padding to make the sequence lengths the same.", "def seq2seq_collate(samples:BatchSamples, pad_idx:int=1, pad_first:bool=True, backwards:bool=False) -> Tuple[LongTensor, LongTensor]:\n    \"Function that collects samples and adds padding. Flips token order if needed\"\n    samples = to_data(samples)\n    max_len_x,max_len_y = max([len(s[0]) for s in samples]),max([len(s[1]) for s in samples])\n    res_x = torch.zeros(len(samples), max_len_x).long() + pad_idx\n    res_y = torch.zeros(len(samples), max_len_y).long() + pad_idx\n    if backwards: pad_first = not pad_first\n    for i,s in enumerate(samples):\n        if pad_first: \n            res_x[i,-len(s[0]):],res_y[i,-len(s[1]):] = LongTensor(s[0]),LongTensor(s[1])\n        else:         \n            res_x[i,:len(s[0]):],res_y[i,:len(s[1]):] = LongTensor(s[0]),LongTensor(s[1])\n    if backwards: res_x,res_y = res_x.flip(1),res_y.flip(1)\n    return res_x,res_y", "Then we create a special DataBunch that uses this collate function.", "class Seq2SeqDataBunch(TextDataBunch):\n    \"Create a `TextDataBunch` suitable for training an RNN classifier.\"\n    @classmethod\n    def create(cls, train_ds, valid_ds, test_ds=None, path:PathOrStr='.', bs:int=32, val_bs:int=None, pad_idx=1,\n               pad_first=False, device:torch.device=None, no_check:bool=False, backwards:bool=False, **dl_kwargs) -> DataBunch:\n        \"Function that transforms the `datasets` in a `DataBunch` for classification. 
Passes `**dl_kwargs` on to `DataLoader()`\"\n        datasets = cls._init_ds(train_ds, valid_ds, test_ds)\n        val_bs = ifnone(val_bs, bs)\n        collate_fn = partial(seq2seq_collate, pad_idx=pad_idx, pad_first=pad_first, backwards=backwards)\n        train_sampler = SortishSampler(datasets[0].x, key=lambda t: len(datasets[0][t][0].data), bs=bs//2)\n        train_dl = DataLoader(datasets[0], batch_size=bs, sampler=train_sampler, drop_last=True, **dl_kwargs)\n        dataloaders = [train_dl]\n        for ds in datasets[1:]:\n            lengths = [len(t) for t in ds.x.items]\n            sampler = SortSampler(ds.x, key=lengths.__getitem__)\n            dataloaders.append(DataLoader(ds, batch_size=val_bs, sampler=sampler, **dl_kwargs))\n        return cls(*dataloaders, path=path, device=device, collate_fn=collate_fn, no_check=no_check)", "And a subclass of TextList that will use this DataBunch class in the call .databunch and will use TextList to label (since our targets are other texts).", "class Seq2SeqTextList(TextList):\n    _bunch = Seq2SeqDataBunch\n    _label_cls = TextList", "That's all we need to use the data block API!", "src = Seq2SeqTextList.from_df(df, path = path, cols='fr').split_by_rand_pct().label_from_df(cols='en', label_cls=TextList)\n\nnp.percentile([len(o) for o in src.train.x.items] + [len(o) for o in src.valid.x.items], 90)\n\nnp.percentile([len(o) for o in src.train.y.items] + [len(o) for o in src.valid.y.items], 90)", "We remove the items where one of the targets is more than 30 tokens long.", "src = src.filter_by_func(lambda x,y: len(x) > 30 or len(y) > 30)\n\nlen(src.train) + len(src.valid)\n\ndata = src.databunch()\n\ndata.save()\n\ndata = load_data(path)\n\ndata.show_batch()", "Model\nPretrained embeddings\nTo install fastText:\n$ git clone https://github.com/facebookresearch/fastText.git\n$ cd fastText\n$ pip install .", "# Installation: https://github.com/facebookresearch/fastText#building-fasttext-for-python\nimport fastText as ft\n\nfr_vecs = ft.load_model(str((path/'cc.fr.300.bin')))\nen_vecs = 
ft.load_model(str((path/'cc.en.300.bin')))", "We create an embedding module with the pretrained vectors and random data for the missing parts.", "def create_emb(vecs, itos, em_sz=300, mult=1.):\n    emb = nn.Embedding(len(itos), em_sz, padding_idx=1)\n    wgts = emb.weight.data\n    vec_dic = {w:vecs.get_word_vector(w) for w in vecs.get_words()}\n    miss = []\n    for i,w in enumerate(itos):\n        try: wgts[i] = tensor(vec_dic[w])\n        except: miss.append(w)\n    return emb\n\nemb_enc = create_emb(fr_vecs, data.x.vocab.itos)\nemb_dec = create_emb(en_vecs, data.y.vocab.itos)\n\ntorch.save(emb_enc, path/'models'/'fr_emb.pth')\ntorch.save(emb_dec, path/'models'/'en_emb.pth')", "Free some RAM", "del fr_vecs\ndel en_vecs", "QRNN seq2seq\nOur model uses QRNNs at its base (you can use GRUs or LSTMs by adapting a little bit). Using QRNNs requires that you have properly installed cuda (a version that matches your PyTorch install).", "from fastai.text.models.qrnn import QRNN, QRNNLayer", "The model itself consists of an encoder and a decoder.\n\nThe encoder is a (quasi) recurrent neural net and we feed it our input sentence, producing an output (that we discard for now) and a hidden state. That hidden state is then given to the decoder (another RNN) which uses it in conjunction with the outputs it predicts to produce the translation. 
We loop until the decoder produces a padding token (or at 30 iterations to make sure it's not an infinite loop at the beginning of training).", "class Seq2SeqQRNN(nn.Module):\n def __init__(self, emb_enc, emb_dec, n_hid, max_len, n_layers=2, p_inp:float=0.15, p_enc:float=0.25, \n p_dec:float=0.1, p_out:float=0.35, p_hid:float=0.05, bos_idx:int=0, pad_idx:int=1):\n super().__init__()\n self.n_layers,self.n_hid,self.max_len,self.bos_idx,self.pad_idx = n_layers,n_hid,max_len,bos_idx,pad_idx\n self.emb_enc = emb_enc\n self.emb_enc_drop = nn.Dropout(p_inp)\n self.encoder = QRNN(emb_enc.weight.size(1), n_hid, n_layers=n_layers, dropout=p_enc)\n self.out_enc = nn.Linear(n_hid, emb_enc.weight.size(1), bias=False)\n self.hid_dp = nn.Dropout(p_hid)\n self.emb_dec = emb_dec\n self.decoder = QRNN(emb_dec.weight.size(1), emb_dec.weight.size(1), n_layers=n_layers, dropout=p_dec)\n self.out_drop = nn.Dropout(p_out)\n self.out = nn.Linear(emb_dec.weight.size(1), emb_dec.weight.size(0))\n self.out.weight.data = self.emb_dec.weight.data\n \n def forward(self, inp):\n bs,sl = inp.size()\n self.encoder.reset()\n self.decoder.reset()\n hid = self.initHidden(bs)\n emb = self.emb_enc_drop(self.emb_enc(inp))\n enc_out, hid = self.encoder(emb, hid)\n hid = self.out_enc(self.hid_dp(hid))\n\n dec_inp = inp.new_zeros(bs).long() + self.bos_idx\n outs = []\n for i in range(self.max_len):\n emb = self.emb_dec(dec_inp).unsqueeze(1)\n out, hid = self.decoder(emb, hid)\n out = self.out(self.out_drop(out[:,0]))\n outs.append(out)\n dec_inp = out.max(1)[1]\n if (dec_inp==self.pad_idx).all(): break\n return torch.stack(outs, dim=1)\n \n def initHidden(self, bs): return one_param(self).new_zeros(self.n_layers, bs, self.n_hid)", "Loss function\nThe loss pads output and target so that they are of the same size before using the usual flattened version of cross entropy. 
We do the same for accuracy.", "def seq2seq_loss(out, targ, pad_idx=1):\n bs,targ_len = targ.size()\n _,out_len,vs = out.size()\n if targ_len>out_len: out = F.pad(out, (0,0,0,targ_len-out_len,0,0), value=pad_idx)\n if out_len>targ_len: targ = F.pad(targ, (0,out_len-targ_len,0,0), value=pad_idx)\n return CrossEntropyFlat()(out, targ)\n\ndef seq2seq_acc(out, targ, pad_idx=1):\n bs,targ_len = targ.size()\n _,out_len,vs = out.size()\n if targ_len>out_len: out = F.pad(out, (0,0,0,targ_len-out_len,0,0), value=pad_idx)\n if out_len>targ_len: targ = F.pad(targ, (0,out_len-targ_len,0,0), value=pad_idx)\n out = out.argmax(2)\n return (out==targ).float().mean()", "Bleu metric (see dedicated notebook)\nIn translation, the metric usually used is BLEU, see the corresponding notebook for the details.", "class NGram():\n def __init__(self, ngram, max_n=5000): self.ngram,self.max_n = ngram,max_n\n def __eq__(self, other):\n if len(self.ngram) != len(other.ngram): return False\n return np.all(np.array(self.ngram) == np.array(other.ngram))\n def __hash__(self): return int(sum([o * self.max_n**i for i,o in enumerate(self.ngram)]))\n\ndef get_grams(x, n, max_n=5000):\n return x if n==1 else [NGram(x[i:i+n], max_n=max_n) for i in range(len(x)-n+1)]\n\ndef get_correct_ngrams(pred, targ, n, max_n=5000):\n pred_grams,targ_grams = get_grams(pred, n, max_n=max_n),get_grams(targ, n, max_n=max_n)\n pred_cnt,targ_cnt = Counter(pred_grams),Counter(targ_grams)\n return sum([min(c, targ_cnt[g]) for g,c in pred_cnt.items()]),len(pred_grams)\n\nclass CorpusBLEU(Callback):\n def __init__(self, vocab_sz):\n self.vocab_sz = vocab_sz\n self.name = 'bleu'\n \n def on_epoch_begin(self, **kwargs):\n self.pred_len,self.targ_len,self.corrects,self.counts = 0,0,[0]*4,[0]*4\n \n def on_batch_end(self, last_output, last_target, **kwargs):\n last_output = last_output.argmax(dim=-1)\n for pred,targ in zip(last_output.cpu().numpy(),last_target.cpu().numpy()):\n self.pred_len += len(pred)\n self.targ_len += 
len(targ)\n            for i in range(4):\n                c,t = get_correct_ngrams(pred, targ, i+1, max_n=self.vocab_sz)\n                self.corrects[i] += c\n                self.counts[i]   += t\n    \n    def on_epoch_end(self, last_metrics, **kwargs):\n        precs = [c/t for c,t in zip(self.corrects,self.counts)]\n        len_penalty = exp(1 - self.targ_len/self.pred_len) if self.pred_len < self.targ_len else 1\n        bleu = len_penalty * ((precs[0]*precs[1]*precs[2]*precs[3]) ** 0.25)\n        return add_metrics(last_metrics, bleu)", "We load our pretrained embeddings to create the model.", "emb_enc = torch.load(path/'models'/'fr_emb.pth')\nemb_dec = torch.load(path/'models'/'en_emb.pth')\n\nmodel = Seq2SeqQRNN(emb_enc, emb_dec, 256, 30, n_layers=2)\nlearn = Learner(data, model, loss_func=seq2seq_loss, metrics=[seq2seq_acc, CorpusBLEU(len(data.y.vocab.itos))])\n\nlearn.lr_find()\n\nlearn.recorder.plot()\n\nlearn.fit_one_cycle(8, 1e-2)", "So how good is our model? Let's see a few predictions.", "def get_predictions(learn, ds_type=DatasetType.Valid):\n    learn.model.eval()\n    inputs, targets, outputs = [],[],[]\n    with torch.no_grad():\n        for xb,yb in progress_bar(learn.dl(ds_type)):\n            out = learn.model(xb)\n            for x,y,z in zip(xb,yb,out):\n                inputs.append(learn.data.train_ds.x.reconstruct(x))\n                targets.append(learn.data.train_ds.y.reconstruct(y))\n                outputs.append(learn.data.train_ds.y.reconstruct(z.argmax(1)))\n    return inputs, targets, outputs\n\ninputs, targets, outputs = get_predictions(learn)\n\ninputs[700], targets[700], outputs[700]\n\ninputs[701], targets[701], outputs[701]\n\ninputs[2513], targets[2513], outputs[2513]\n\ninputs[4000], targets[4000], outputs[4000]", "It usually begins well, but falls back on easy words at the end of the question.\nTeacher forcing\nOne way to help training is to help the decoder by feeding it the real targets instead of its predictions (if it starts with wrong words, it's very unlikely to give us the right translation). 
We do that all the time at the beginning, then progressively reduce the amount of teacher forcing.", "class TeacherForcing(LearnerCallback):\n \n def __init__(self, learn, end_epoch):\n super().__init__(learn)\n self.end_epoch = end_epoch\n \n def on_batch_begin(self, last_input, last_target, train, **kwargs):\n if train: return {'last_input': [last_input, last_target]}\n \n def on_epoch_begin(self, epoch, **kwargs):\n self.learn.model.pr_force = 1 - 0.5 * epoch/self.end_epoch\n\nclass Seq2SeqQRNN(nn.Module):\n def __init__(self, emb_enc, emb_dec, n_hid, max_len, n_layers=2, p_inp:float=0.15, p_enc:float=0.25, \n p_dec:float=0.1, p_out:float=0.35, p_hid:float=0.05, bos_idx:int=0, pad_idx:int=1):\n super().__init__()\n self.n_layers,self.n_hid,self.max_len,self.bos_idx,self.pad_idx = n_layers,n_hid,max_len,bos_idx,pad_idx\n self.emb_enc = emb_enc\n self.emb_enc_drop = nn.Dropout(p_inp)\n self.encoder = QRNN(emb_enc.weight.size(1), n_hid, n_layers=n_layers, dropout=p_enc)\n self.out_enc = nn.Linear(n_hid, emb_enc.weight.size(1), bias=False)\n self.hid_dp = nn.Dropout(p_hid)\n self.emb_dec = emb_dec\n self.decoder = QRNN(emb_dec.weight.size(1), emb_dec.weight.size(1), n_layers=n_layers, dropout=p_dec)\n self.out_drop = nn.Dropout(p_out)\n self.out = nn.Linear(emb_dec.weight.size(1), emb_dec.weight.size(0))\n self.out.weight.data = self.emb_dec.weight.data\n self.pr_force = 0.\n \n def forward(self, inp, targ=None):\n bs,sl = inp.size()\n hid = self.initHidden(bs)\n emb = self.emb_enc_drop(self.emb_enc(inp))\n enc_out, hid = self.encoder(emb, hid)\n hid = self.out_enc(self.hid_dp(hid))\n\n dec_inp = inp.new_zeros(bs).long() + self.bos_idx\n res = []\n for i in range(self.max_len):\n emb = self.emb_dec(dec_inp).unsqueeze(1)\n outp, hid = self.decoder(emb, hid)\n outp = self.out(self.out_drop(outp[:,0]))\n res.append(outp)\n dec_inp = outp.data.max(1)[1]\n if (dec_inp==self.pad_idx).all(): break\n if (targ is not None) and (random.random()<self.pr_force):\n if 
i>=targ.shape[1]: break\n dec_inp = targ[:,i]\n return torch.stack(res, dim=1)\n \n def initHidden(self, bs): return one_param(self).new_zeros(self.n_layers, bs, self.n_hid)\n\nemb_enc = torch.load(path/'models'/'fr_emb.pth')\nemb_dec = torch.load(path/'models'/'en_emb.pth')\n\nmodel = Seq2SeqQRNN(emb_enc, emb_dec, 256, 30, n_layers=2)\nlearn = Learner(data, model, loss_func=seq2seq_loss, metrics=[seq2seq_acc, CorpusBLEU(len(data.y.vocab.itos))],\n callback_fns=partial(TeacherForcing, end_epoch=8))\n\nlearn.fit_one_cycle(8, 1e-2)\n\ninputs, targets, outputs = get_predictions(learn)\n\ninputs[700],targets[700],outputs[700]\n\ninputs[2513], targets[2513], outputs[2513]\n\ninputs[4000], targets[4000], outputs[4000]\n\n#get_bleu(learn)", "Bidir\nA second thing that might help is to use a bidirectional model for the encoder.", "class Seq2SeqQRNN(nn.Module):\n def __init__(self, emb_enc, emb_dec, n_hid, max_len, n_layers=2, p_inp:float=0.15, p_enc:float=0.25, \n p_dec:float=0.1, p_out:float=0.35, p_hid:float=0.05, bos_idx:int=0, pad_idx:int=1):\n super().__init__()\n self.n_layers,self.n_hid,self.max_len,self.bos_idx,self.pad_idx = n_layers,n_hid,max_len,bos_idx,pad_idx\n self.emb_enc = emb_enc\n self.emb_enc_drop = nn.Dropout(p_inp)\n self.encoder = QRNN(emb_enc.weight.size(1), n_hid, n_layers=n_layers, dropout=p_enc, bidirectional=True)\n self.out_enc = nn.Linear(2*n_hid, emb_enc.weight.size(1), bias=False)\n self.hid_dp = nn.Dropout(p_hid)\n self.emb_dec = emb_dec\n self.decoder = QRNN(emb_dec.weight.size(1), emb_dec.weight.size(1), n_layers=n_layers, dropout=p_dec)\n self.out_drop = nn.Dropout(p_out)\n self.out = nn.Linear(emb_dec.weight.size(1), emb_dec.weight.size(0))\n self.out.weight.data = self.emb_dec.weight.data\n self.pr_force = 0.\n \n def forward(self, inp, targ=None):\n bs,sl = inp.size()\n hid = self.initHidden(bs)\n emb = self.emb_enc_drop(self.emb_enc(inp))\n enc_out, hid = self.encoder(emb, hid)\n \n hid = hid.view(2,self.n_layers, bs, 
self.n_hid).permute(1,2,0,3).contiguous()\n hid = self.out_enc(self.hid_dp(hid).view(self.n_layers, bs, 2*self.n_hid))\n\n dec_inp = inp.new_zeros(bs).long() + self.bos_idx\n res = []\n for i in range(self.max_len):\n emb = self.emb_dec(dec_inp).unsqueeze(1)\n outp, hid = self.decoder(emb, hid)\n outp = self.out(self.out_drop(outp[:,0]))\n res.append(outp)\n dec_inp = outp.data.max(1)[1]\n if (dec_inp==self.pad_idx).all(): break\n if (targ is not None) and (random.random()<self.pr_force):\n if i>=targ.shape[1]: break\n dec_inp = targ[:,i]\n return torch.stack(res, dim=1)\n \n def initHidden(self, bs): return one_param(self).new_zeros(2*self.n_layers, bs, self.n_hid)\n\nemb_enc = torch.load(path/'models'/'fr_emb.pth')\nemb_dec = torch.load(path/'models'/'en_emb.pth')\n\nmodel = Seq2SeqQRNN(emb_enc, emb_dec, 256, 30, n_layers=2)\nlearn = Learner(data, model, loss_func=seq2seq_loss, metrics=[seq2seq_acc, CorpusBLEU(len(data.y.vocab.itos))],\n callback_fns=partial(TeacherForcing, end_epoch=8))\n\nlearn.lr_find()\n\nlearn.recorder.plot()\n\nlearn.fit_one_cycle(8, 1e-2)\n\ninputs, targets, outputs = get_predictions(learn)\n\ninputs[700], targets[700], outputs[700]\n\ninputs[701], targets[701], outputs[701]\n\ninputs[4001], targets[4001], outputs[4001]\n\n#get_bleu(learn)", "Attention\nAttention is a technique that uses the output of our encoder: instead of discarding it entirely, we use it with our hidden state to pay attention to specific words in the input sentence for the predictions in the output sentence. 
Specifically, we compute attention weights, then add to the input of the decoder the linear combination of the output of the encoder, with those attention weights.", "def init_param(*sz): return nn.Parameter(torch.randn(sz)/math.sqrt(sz[0]))\n\nclass Seq2SeqQRNN(nn.Module):\n def __init__(self, emb_enc, emb_dec, n_hid, max_len, n_layers=2, p_inp:float=0.15, p_enc:float=0.25, \n p_dec:float=0.1, p_out:float=0.35, p_hid:float=0.05, bos_idx:int=0, pad_idx:int=1):\n super().__init__()\n self.n_layers,self.n_hid,self.max_len,self.bos_idx,self.pad_idx = n_layers,n_hid,max_len,bos_idx,pad_idx\n self.emb_enc = emb_enc\n self.emb_enc_drop = nn.Dropout(p_inp)\n self.encoder = QRNN(emb_enc.weight.size(1), n_hid, n_layers=n_layers, dropout=p_enc, bidirectional=True)\n self.out_enc = nn.Linear(2*n_hid, emb_enc.weight.size(1), bias=False)\n self.hid_dp = nn.Dropout(p_hid)\n self.emb_dec = emb_dec\n emb_sz = emb_dec.weight.size(1)\n self.decoder = QRNN(emb_sz + 2*n_hid, emb_dec.weight.size(1), n_layers=n_layers, dropout=p_dec)\n self.out_drop = nn.Dropout(p_out)\n self.out = nn.Linear(emb_sz, emb_dec.weight.size(0))\n self.out.weight.data = self.emb_dec.weight.data #Try tying\n self.enc_att = nn.Linear(2*n_hid, emb_sz, bias=False)\n self.hid_att = nn.Linear(emb_sz, emb_sz)\n self.V = init_param(emb_sz)\n self.pr_force = 0.\n \n def forward(self, inp, targ=None):\n bs,sl = inp.size()\n hid = self.initHidden(bs)\n emb = self.emb_enc_drop(self.emb_enc(inp))\n enc_out, hid = self.encoder(emb, hid)\n \n hid = hid.view(2,self.n_layers, bs, self.n_hid).permute(1,2,0,3).contiguous()\n hid = self.out_enc(self.hid_dp(hid).view(self.n_layers, bs, 2*self.n_hid))\n\n dec_inp = inp.new_zeros(bs).long() + self.bos_idx\n res = []\n enc_att = self.enc_att(enc_out)\n for i in range(self.max_len):\n hid_att = self.hid_att(hid[-1])\n u = torch.tanh(enc_att + hid_att[:,None])\n attn_wgts = F.softmax(u @ self.V, 1)\n ctx = (attn_wgts[...,None] * enc_out).sum(1)\n emb = self.emb_dec(dec_inp)\n outp, 
hid = self.decoder(torch.cat([emb, ctx], 1)[:,None], hid)\n outp = self.out(self.out_drop(outp[:,0]))\n res.append(outp)\n dec_inp = outp.data.max(1)[1]\n if (dec_inp==self.pad_idx).all(): break\n if (targ is not None) and (random.random()<self.pr_force):\n if i>=targ.shape[1]: break\n dec_inp = targ[:,i]\n return torch.stack(res, dim=1)\n \n def initHidden(self, bs): return one_param(self).new_zeros(2*self.n_layers, bs, self.n_hid)\n\nemb_enc = torch.load(path/'models'/'fr_emb.pth')\nemb_dec = torch.load(path/'models'/'en_emb.pth')\n\nmodel = Seq2SeqQRNN(emb_enc, emb_dec, 256, 30, n_layers=2)\nlearn = Learner(data, model, loss_func=seq2seq_loss, metrics=[seq2seq_acc, CorpusBLEU(len(data.y.vocab.itos))],\n callback_fns=partial(TeacherForcing, end_epoch=8))\n\nlearn.lr_find()\n\nlearn.recorder.plot()\n\nlearn.fit_one_cycle(8, 3e-3)\n\ninputs, targets, outputs = get_predictions(learn)\n\ninputs[700], targets[700], outputs[700]\n\ninputs[701], targets[701], outputs[701]\n\ninputs[4002], targets[4002], outputs[4002]" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ctools/ctools
doc/source/users/tutorials/cta/howto/howto_scs.ipynb
gpl-3.0
[ "How to perform spectral component separation?\nIn this tutorial you will learn how to use spectral component separation to disentangle the morphological properties of overlapping sources.\nAs usual we start by importing the gammalib, ctools, and cscripts Python modules.", "import gammalib\nimport ctools\nimport cscripts", "We also import the matplotlib package for plotting.", "%matplotlib inline\nimport matplotlib.pyplot as plt", "Simulated dataset\nFor the tutorial we will simulate a small CTA dataset. We start by defining the Instrument Response Functions and energy range that will be used all along.", "caldb = 'prod2'\nirf = 'South_5h'\nemin = 0.1 # TeV\nemax = 160.0 # TeV", "We will simulate an observation of the region around the famous sources HESS J1825-137 and HESS J1826-130 based on a very simple sky model. We will consider two pointings of a few hours wobbling around the sources' position.\nWe start by writing the pointing directions and times to an ASCII file.", "pointing_file = 'pointings.txt'\n\n# open file\nf = open(pointing_file, 'w')\n# header\nf.write('id,ra,dec,tmin,tmax\\n')\n# pointings\nf.write('0001,275.65,-13.78,0.,10800.\\n')\nf.write('0002,277.25,-13.78,11000.,21800.\\n')\n\n# close file\nf.close()", "Then we use the csobsdef script to convert the list of pointings into an observation definition XML file.", "obsdef = cscripts.csobsdef()\nobsdef['inpnt'] = pointing_file\nobsdef['caldb'] = caldb\nobsdef['irf'] = irf\nobsdef['emin'] = emin\nobsdef['emax'] = emax\nobsdef['rad'] = 5.\nobsdef.run()", "Finally we use ctobssim to perform the observation simulation.", "obssim = ctools.ctobssim(obsdef.obs())\nobssim['inmodel'] = '$CTOOLS/share/models/hess1825_26.xml'\n\nobssim.run()", "Skymap inspection and preliminary likelihood fit\nThis section showcases a possible path to get to perform a spectral component separation. 
If you are here because you have an application in which you know the spectrum of your sources of interest, and you just want to learn how to perform the spectral component separation, please jump to the next section.\nAs usual a great way to inspect the data is making a skymap using ctskymap.", "skymap = ctools.ctskymap(obssim.obs())\nskymap['emin'] = emin\nskymap['emax'] = emax\nskymap['nxpix'] = 200\nskymap['nypix'] = 200\nskymap['binsz'] = 0.02\nskymap['proj'] = 'TAN'\nskymap['coordsys'] = 'CEL'\nskymap['xref'] = 276.45\nskymap['yref'] = -13.78\nskymap['bkgsubtract'] = 'NONE'\nskymap.run()", "Below we inspect the skymap by using matplotlib.", "# Slightly smooth the map for display to suppress statistical fluctuations\nskymap.skymap().smooth('GAUSSIAN',0.1)\n\nfig = plt.figure()\nax = plt.subplot()\nplt.imshow(skymap.skymap().array(),origin='lower',\n extent=[276.45+0.02*100,276.45-0.02*100,-13.78-0.02*100,-13.78+0.02*100])\n # boundaries of the coord grid\nax.set_xlabel('R.A. (deg)')\nax.set_ylabel('Dec (deg)')\ncbar = plt.colorbar()\ncbar.set_label('Counts')", "A large blob of emission appears at the center of the map. It coincides with the source HESS J1825-137. Past observations have indicated that HESS J1826-130 has a harder spectrum than HESS J1825-137. Let's peek at a skymap above 10 TeV.", "skymap = ctools.ctskymap(obssim.obs())\nskymap['emin'] = 10. #TeV\nskymap['emax'] = emax\nskymap['nxpix'] = 200\nskymap['nypix'] = 200\nskymap['binsz'] = 0.02\nskymap['proj'] = 'TAN'\nskymap['coordsys'] = 'CEL'\nskymap['xref'] = 276.45\nskymap['yref'] = -13.78\nskymap['bkgsubtract'] = 'NONE'\nskymap.run()\n\n# Slightly smooth the map for display to suppress statistical fluctuations\nskymap.skymap().smooth('GAUSSIAN',0.1)\n\nfig = plt.figure()\nax = plt.subplot()\nplt.imshow(skymap.skymap().array(),origin='lower',\n extent=[276.45+0.02*100,276.45-0.02*100,-13.78-0.02*100,-13.78+0.02*100])\n # boundaries of the coord grid\nax.set_xlabel('R.A. 
(deg)')\nax.set_ylabel('Dec (deg)')\ncbar = plt.colorbar()\ncbar.set_label('Counts')", "Indeed, emission above 10 TeV is concentrated on a spot North of the low-energy blob coincident with the position of HESS J1826-130. How can we disentangle the morphology of the two sources based on our observations?\nOne of the traditional techniques to disentangle multiple overlapping sources is using a 3D likelihood analysis based on some guess of the sources' morphology and spectra. However, this often implies choosing a simplistic (analytical) model for the sources' morphology. Below we will test another approach, which consists in determining the morphology from the data using some knowledge of their spectra.\nFor our exercise we guess the spectra of the two components via a preliminary likelihood analysis. For this we use a tentative sky model inspired by the skymaps we have inspected. We include in the model two disks with centre and extension eye-balled from the low-energy blob and high-energy spot. Their parameters are not fit to the data here for the sake of a shorter execution time. The morphology will be determined from the data later. However, at the cost of a longer computing time you could fit the spatial model parameters in this step.\nFor both components we will assume a power-law spectrum, with an index of 2.5 for the soft component and an index of 1.5 for the hard component, and a flux at 1 TeV approximately 10% and 1% of the Crab nebula, respectively. 
These spectral parameters are going to be fit to the data.", "# model container\nmodels = gammalib.GModels()\n\n# low-energy blob\ncentre = gammalib.GSkyDir()\ncentre.radec_deg(276.5,-13.75)\nspatial = gammalib.GModelSpatialRadialDisk(centre,0.5)\n# fix source centre and radius\nspatial['RA'].fix()\nspatial['DEC'].fix()\nspatial['Radius'].fix()\nspectral = gammalib.GModelSpectralPlaw(4.e-18,-2.5,gammalib.GEnergy(1.,'TeV'))\nsource = gammalib.GModelSky(spatial,spectral)\nsource.name('HESS J1825-137')\nmodels.append(source)\n\n# high-energy spot\ncentre = gammalib.GSkyDir()\ncentre.radec_deg(276.5,-13)\nspatial = gammalib.GModelSpatialRadialDisk(centre,0.1)\n# fix source centre and radius\nspatial['RA'].fix()\nspatial['DEC'].fix()\nspatial['Radius'].fix()\nspectral = gammalib.GModelSpectralPlaw(4.e-19,-1.5,gammalib.GEnergy(1.,'TeV'))\nsource = gammalib.GModelSky(spatial,spectral)\nsource.name('HESS J1826-130')\nmodels.append(source)\n\n# instrumental background\n# power law spectral correction with pivot energy at 1 TeV\nspectral = gammalib.GModelSpectralPlaw(1, 0, gammalib.GEnergy(1, 'TeV'))\nbkgmodel = gammalib.GCTAModelIrfBackground(spectral)\nbkgmodel.name('Background')\nbkgmodel.instruments('CTA')\n# append to models\nmodels.append(bkgmodel)", "We copy the simulated observations and append to them our initial sky model.", "obs = obssim.obs().copy()\nobs.models(models)", "We are going to start with a stacked analysis. We bin the events, and then attach to the stacked observations the stacked response. 
Note that if the dataset is small it may be convenient to use an unbinned analysis in lieu of the stacked analysis for this step.", "# Bin events\ncntcube = ctools.ctbin(obs)\ncntcube['usepnt'] = False\ncntcube['ebinalg'] = 'LOG'\ncntcube['xref'] = 276.45\ncntcube['yref'] = -13.78\ncntcube['binsz'] = 0.02\ncntcube['nxpix'] = 200\ncntcube['nypix'] = 200\ncntcube['enumbins'] = 40\ncntcube['emin'] = emin\ncntcube['emax'] = emax\ncntcube['coordsys'] = 'CEL'\ncntcube['proj'] = 'TAN'\ncntcube.run()\n\n# Extract counts cube\ncube = cntcube.cube()\n\n# Compute stacked response\nresponse = cscripts.obsutils.get_stacked_response(obs,cube)\n\n# Copy stacked observations\nstacked_obs = cntcube.obs().copy()\n\n# Append stacked response\nstacked_obs[0].response(response['expcube'], response['psfcube'],response['bkgcube'])\n\n# Set stacked models\nstacked_obs.models(response['models'])", "Now we can run the preliminary likelihood analysis in which the spectral parameters for the two sources are fit to the data.", "like = ctools.ctlike(stacked_obs)\nlike.run()", "Let's check that the fit was successful.", "print(like.opt())", "We also check the fitted models.", "print(like.obs().models())", "As guessed from the skymaps HESS J1826-130 is fainter than HESS J1825-137, but its spectrum is rather harder. We will use the values of the spectral indices obtained from this likelihood analysis to derive the source morphology below.\nHowever, we first check the fit residuals. 
Let's start by inspecting the spectral residuals using csresspec.", "resspec = cscripts.csresspec(like.obs())\nresspec['algorithm'] = 'SIGNIFICANCE'\nresspec['components'] = True\nresspec['outfile'] = 'resspec.fits'\nresspec.execute()", "We can use an example script to display the residuals.", "import sys\nimport os\nsys.path.append(os.environ['CTOOLS']+'/share/examples/python/')\n\nfrom show_residuals import plot_residuals\nplot_residuals('resspec.fits','',0)", "The model reproduces reasonably well the data spectrally (although not perfectly). We will also check the spatial residuals using csresmap.", "resmap = cscripts.csresmap(like.obs())\nresmap['algorithm'] = 'SIGNIFICANCE'\nresmap.run()", "We inspect the map to check the spatial residuals.", "# Slightly smooth the map for display to suppress statistical fluctuations\nresid = resmap._resmap.copy()\nresid.smooth('GAUSSIAN',0.1)\n# Plotting\nfig = plt.figure()\nax = plt.subplot()\nplt.imshow(resid.array(),origin='lower', cmap='bwr',\n extent=[276.45+0.02*100,276.45-0.02*100,-13.78-0.02*100,-13.78+0.02*100])\n # Boundaries of the coord grid\nax.set_xlabel('R.A. (deg)')\nax.set_ylabel('Dec (deg)')\ncbar = plt.colorbar()\ncbar.set_label('Significance ($\\sigma$)')", "The spatial residuals are small, indicating that the model is close enough to the data. However, the structures in the residuals indicate that the morphological models adopted may not accurately represent the data.\nStacked spectral component separation\nIf we are confident that the spectral models we have at hand represent the data well, we can use this information to determine the morphology of the emission associated with each spectral component from the data by using csscs. First of all we will perform the spectral component separation using the stacked dataset. A stacked analysis is usually the fastest option for csscs. However, an unbinned analysis is also possible. 
\nAs a preliminary step we will change the spatial models for the two components to be separated. The script uses the spatial model provided as prior, so for a disk the flux outside the disk radius would always be null. If you have a reasonable prior you can use it in the component separation, but it is recommended to use a spatial model without sharp boundaries (e.g., Gaussian). Since in this case we do not know what the morphology of the two sources is, we will start with an isotropic distribution.", "# copy the fitted models\nfit_obs = like.obs().copy()\n\n# replace disks with isotropic model\nfor model in fit_obs.models():\n if model.name() == 'HESS J1825-137' or model.name() == 'HESS J1826-130': \n model.spatial(gammalib.GModelSpatialDiffuseConst())", "The essential information to be provided to csscs is:\n\n\nthe geometry of the output maps; for every bin in the output map a dedicated likelihood analysis will be used to determine the fluxes of the sources of interest; make sure to have a region just big enough and a grid step just fine enough for your purposes since increasing the number of bins will proportionally increase the computation time; \n\n\nthe list of the sources of interest;\n\n\nthe energy range for the analysis;\n\n\nthe radius (rad) of the region of interest to be used for the spectral component separation centred on each bin of the output maps; note that the value of this parameter sets the correlation scale between neighbour pixels in the output maps, and it must be at least sqrt(2) times binsz to fully cover the input dataset; furthermore, rad must be large enough to ensure that you have enough statistics to perform a likelihood fit.\n\n\nYou can decide if you want to leave the background model free in the fit (true by default) along with other background gamma-ray sources (false by default, we have none in the example we are considering).", "scs1 = cscripts.csscs(fit_obs)\nscs1['srcnames'] = 'HESS J1825-137;HESS J1826-130'\nscs1['emin'] = 
emin\nscs1['emax'] = emax\nscs1['nxpix'] = 20\nscs1['nypix'] = 20\nscs1['binsz'] = 0.1\nscs1['rad'] = 0.2\nscs1['proj'] = 'TAN'\nscs1['coordsys'] = 'CEL'\nscs1['xref'] = 276.45\nscs1['yref'] = -13.78\nscs1.run()", "Now we can inspect the maps of the fluxes from the two sources. We will set a minimum flux for display to avoid being confused by noisy bins. In fact the script has calculated also the flux uncertainty in each bin (accessible through the flux_error method) and the detection significance (accessible through the ts method), that you can use to filter more intelligently the maps. We did not request this, but one could also have flux upper limits computed (calc_ulimit parameter).", "flux_1826 = scs1.flux('HESS J1826-130')\n\n# Plotting\nfig = plt.figure()\nax = plt.subplot()\nplt.imshow(flux_1826.array(),origin='lower', vmin = 1.e-8,\n extent=[276.45+0.1*10,276.45-0.1*10,-13.78-0.1*10,-13.78+0.1*10])\n # Boundaries of the coord grid\nax.set_xlabel('R.A. (deg)')\nax.set_ylabel('Dec (deg)')\ncbar = plt.colorbar()\ncbar.set_label('Flux (photons/cm$^2$/s/sr)')\n\nflux_1825 = scs1.flux('HESS J1825-137')\n\n# Plotting\nfig = plt.figure()\nax = plt.subplot()\nplt.imshow(flux_1825.array(),origin='lower', vmin = 1.e-8,\n extent=[276.45+0.1*10,276.45-0.1*10,-13.78-0.1*10,-13.78+0.1*10])\n # Boundaries of the coord grid\nax.set_xlabel('R.A. (deg)')\nax.set_ylabel('Dec (deg)')\ncbar = plt.colorbar()\ncbar.set_label('Flux (photons/cm$^2$/s/sr)')", "As you can see the hard emission is confined in the North. The soft emission blob seems to have an elongated shape.\nOn/Off spectral component separation\nWhat if you don't have a reliable background model to perform a 3D analysis? 
You can still perform a spectral component separation using the On/Off technique as long as you have a prior on the sources' spectra.\nFor this we need first of all to compute an exclusion map, i.e., a map of the region where there is significant gamma-ray emission, so that we can exclude it from background computation. We can do this by using ctskymap with the RING background subtraction method. Since we are dealing with a large source we will use rather large ROI and ring regions.", "skymap = ctools.ctskymap(obssim.obs())\nskymap['emin'] = emin\nskymap['emax'] = emax\nskymap['nxpix'] = 200\nskymap['nypix'] = 200\nskymap['binsz'] = 0.02\nskymap['proj'] = 'TAN'\nskymap['coordsys'] = 'CEL'\nskymap['xref'] = 276.45\nskymap['yref'] = -13.78\nskymap['bkgsubtract'] = 'RING'\nskymap['roiradius'] = 0.5\nskymap['inradius'] = 1.0\nskymap['outradius'] = 1.5\nskymap['iterations'] = 3\nskymap['threshold'] = 5 # sigma\nskymap.run()", "Let's inspect the exclusion map.", "fig = plt.figure()\nax = plt.subplot()\nplt.imshow(skymap.exclusion_map().map().array(), origin='lower', cmap = 'binary',\n extent=[276.45+0.02*100,276.45-0.02*100,-13.78-0.02*100,-13.78+0.02*100])\n # boundaries of the coord grid\nax.set_xlabel('R.A. (deg)')\nax.set_ylabel('Dec (deg)')", "To use csscs in On/Off mode we need to go back to using the event lists. We modify the associated models to the latest version obtained from the global likelihood fit.", "obs.models(like.obs().models())", "Note that in On/Off mode if there are multiple sources csscs will use only the spectral models for the sources, and their emission within each ROI for component separation will be assumed by default to be isotropic. We will not use the background model, we'll just assume that the background rate of the reflected background regions is the same as in the ROI.\nThis step is rather time consuming, and should take at least a couple of hours to complete on a normal laptop. 
This is a good time for you to grab lunch or to read that review paper you always meant to.", "scs2 = cscripts.csscs(obs)\nscs2['srcnames'] = 'HESS J1825-137;HESS J1826-130'\nscs2['emin'] = emin\nscs2['emax'] = emax\nscs2['nxpix'] = 20\nscs2['nypix'] = 20\nscs2['binsz'] = 0.1\nscs2['rad'] = 0.2\nscs2['proj'] = 'TAN'\nscs2['coordsys'] = 'CEL'\nscs2['xref'] = 276.45\nscs2['yref'] = -13.78\nscs2['method'] = 'ONOFF'\nscs2['use_model_bkg'] = False\nscs2['enumbins'] = 30\nscs2.exclusion_map(skymap.exclusion_map())\nscs2.run()", "We visualize below the flux maps, which are quite consistent with those obtained using the stacked analysis.", "flux_1826 = scs2.flux('HESS J1826-130')\n\n# Plotting\nfig = plt.figure()\nax = plt.subplot()\nplt.imshow(flux_1826.array(),origin='lower', vmin = 1.e-8,\n extent=[276.45+0.1*10,276.45-0.1*10,-13.78-0.1*10,-13.78+0.1*10])\n # Boundaries of the coord grid\nax.set_xlabel('R.A. (deg)')\nax.set_ylabel('Dec (deg)')\ncbar = plt.colorbar()\ncbar.set_label('Flux (photons/cm$^2$/s/sr)')\n\nflux_1825 = scs2.flux('HESS J1825-137')\n\n# Plotting\nfig = plt.figure()\nax = plt.subplot()\nplt.imshow(flux_1825.array(),origin='lower', vmin = 1.e-8,\n extent=[276.45+0.1*10,276.45-0.1*10,-13.78-0.1*10,-13.78+0.1*10])\n # Boundaries of the coord grid\nax.set_xlabel('R.A. (deg)')\nax.set_ylabel('Dec (deg)')\ncbar = plt.colorbar()\ncbar.set_label('Flux (photons/cm$^2$/s/sr)')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
HaoMood/cs231n
assignment2/assignment2/BatchNormalization.ipynb
gpl-3.0
[ "Batch Normalization\nOne way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. One idea along these lines is batch normalization which was recently proposed by [3].\nThe idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. However even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.\nThe authors of [3] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [3] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. 
A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.\nIt is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.\n[3] Sergey Ioffe and Christian Szegedy, \"Batch Normalization: Accelerating Deep Network Training by Reducing\nInternal Covariate Shift\", ICML 2015.", "# As usual, a bit of setup\n\nimport time\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom cs231n.classifiers.fc_net import *\nfrom cs231n.data_utils import get_CIFAR10_data\nfrom cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array\nfrom cs231n.solver import Solver\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading external modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\" returns relative error \"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\n# Load the (preprocessed) CIFAR10 data.\n\ndata = get_CIFAR10_data()\nfor k, v in data.iteritems():\n print '%s: ' % k, v.shape", "Batch normalization: Forward\nIn the file cs231n/layers.py, implement the batch normalization forward pass in the function batchnorm_forward. 
Once you have done so, run the following to test your implementation.", "# Check the training-time forward pass by checking means and variances\n# of features both before and after batch normalization\n\n# Simulate the forward pass for a two-layer network\nN, D1, D2, D3 = 200, 50, 60, 3\nX = np.random.randn(N, D1)\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\na = np.maximum(0, X.dot(W1)).dot(W2)\n\nprint 'Before batch normalization:'\nprint ' means: ', a.mean(axis=0)\nprint ' stds: ', a.std(axis=0)\n\n# Means should be close to zero and stds close to one\nprint 'After batch normalization (gamma=1, beta=0)'\na_norm, _ = batchnorm_forward(a, np.ones(D3), np.zeros(D3), {'mode': 'train'})\nprint ' mean: ', a_norm.mean(axis=0)\nprint ' std: ', a_norm.std(axis=0)\n\n# Now means should be close to beta and stds close to gamma\ngamma = np.asarray([1.0, 2.0, 3.0])\nbeta = np.asarray([11.0, 12.0, 13.0])\na_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})\nprint 'After batch normalization (nontrivial gamma, beta)'\nprint ' means: ', a_norm.mean(axis=0)\nprint ' stds: ', a_norm.std(axis=0)\n\n# Check the test-time forward pass by running the training-time\n# forward pass many times to warm up the running averages, and then\n# checking the means and variances of activations after a test-time\n# forward pass.\n\nN, D1, D2, D3 = 200, 50, 60, 3\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\n\nbn_param = {'mode': 'train'}\ngamma = np.ones(D3)\nbeta = np.zeros(D3)\nfor t in xrange(50):\n X = np.random.randn(N, D1)\n a = np.maximum(0, X.dot(W1)).dot(W2)\n batchnorm_forward(a, gamma, beta, bn_param)\nbn_param['mode'] = 'test'\nX = np.random.randn(N, D1)\na = np.maximum(0, X.dot(W1)).dot(W2)\na_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)\n\n# Means should be close to zero and stds close to one, but will be\n# noisier than training-time forward passes.\nprint 'After batch normalization (test-time):'\nprint ' means: ', 
a_norm.mean(axis=0)\nprint ' stds: ', a_norm.std(axis=0)", "Batch Normalization: backward\nNow implement the backward pass for batch normalization in the function batchnorm_backward.\nTo derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.\nOnce you have finished, run the following to numerically check your backward pass.", "# Gradient check batchnorm backward pass\n\nN, D = 4, 5\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nbn_param = {'mode': 'train'}\nfx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]\nfg = lambda a: batchnorm_forward(x, gamma, beta, bn_param)[0]\nfb = lambda b: batchnorm_forward(x, gamma, beta, bn_param)[0]\n\ndx_num = eval_numerical_gradient_array(fx, x, dout)\nda_num = eval_numerical_gradient_array(fg, gamma, dout)\ndb_num = eval_numerical_gradient_array(fb, beta, dout)\n\n_, cache = batchnorm_forward(x, gamma, beta, bn_param)\ndx, dgamma, dbeta = batchnorm_backward(dout, cache)\nprint 'dx error: ', rel_error(dx_num, dx)\nprint 'dgamma error: ', rel_error(da_num, dgamma)\nprint 'dbeta error: ', rel_error(db_num, dbeta)", "Batch Normalization: alternative backward\nIn class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For the sigmoid function, it turns out that you can derive a very simple formula for the backward pass by simplifying gradients on paper.\nSurprisingly, it turns out that you can also derive a simple expression for the batch normalization backward pass if you work out derivatives on paper and simplify. 
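For checking a hand derivation, one commonly cited simplified form (per feature column, with \(\hat{x}_i\) the normalized value from the forward pass and \(N\) the batch size) is:

```latex
% Simplified batchnorm backward pass for one feature column (N = batch size).
% Forward pass: \hat{x}_i = (x_i - \mu)/\sqrt{\sigma^2 + \epsilon},
%               y_i = \gamma \hat{x}_i + \beta.
\frac{\partial L}{\partial \beta} = \sum_{i=1}^{N} \frac{\partial L}{\partial y_i},
\qquad
\frac{\partial L}{\partial \gamma} = \sum_{i=1}^{N} \frac{\partial L}{\partial y_i}\,\hat{x}_i,
\qquad
\frac{\partial L}{\partial x_i} =
  \frac{\gamma}{N\sqrt{\sigma^2 + \epsilon}}
  \left( N\frac{\partial L}{\partial y_i}
       - \sum_{j=1}^{N}\frac{\partial L}{\partial y_j}
       - \hat{x}_i \sum_{j=1}^{N}\frac{\partial L}{\partial y_j}\,\hat{x}_j \right)
```

The last expression is why the alternative implementation is faster: it needs only two sums over the batch instead of backpropagating through every intermediate node.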
After doing so, implement the simplified batch normalization backward pass in the function batchnorm_backward_alt and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.\nNOTE: You can still complete the rest of the assignment if you don't figure this part out, so don't worry too much if you can't get it.", "N, D = 100, 500\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nbn_param = {'mode': 'train'}\nout, cache = batchnorm_forward(x, gamma, beta, bn_param)\n\nt1 = time.time()\ndx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)\nt2 = time.time()\ndx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)\nt3 = time.time()\n\nprint 'dx difference: ', rel_error(dx1, dx2)\nprint 'dgamma difference: ', rel_error(dgamma1, dgamma2)\nprint 'dbeta difference: ', rel_error(dbeta1, dbeta2)\nprint 'speedup: %.2fx' % ((t2 - t1) / (t3 - t2))", "Fully Connected Nets with Batch Normalization\nNow that you have a working implementation for batch normalization, go back to your FullyConnectedNet in the file cs231n/classifiers/fc_net.py. Modify your implementation to add batch normalization.\nConcretely, when the flag use_batchnorm is True in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. Once you are done, run the following to gradient-check your implementation.\nHINT: You might find it useful to define an additional helper layer similar to those in the file cs231n/layer_utils.py. 
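Such a "sandwich" helper could be sketched as follows (forward pass only, train mode; `affine_forward`, `batchnorm_forward_train`, and `relu_forward` below are minimal self-contained stand-ins for the assignment's layers, and the real helper must also return the caches needed for the backward pass):

```python
import numpy as np

def affine_forward(x, w, b):
    # minimal stand-in for the assignment's affine layer
    return x.dot(w) + b

def batchnorm_forward_train(a, gamma, beta, eps=1e-5):
    # minimal stand-in: normalize with batch statistics, then scale/shift
    return gamma * (a - a.mean(axis=0)) / np.sqrt(a.var(axis=0) + eps) + beta

def relu_forward(a):
    return np.maximum(0, a)

def affine_bn_relu_forward(x, w, b, gamma, beta):
    """One sandwich layer: affine -> batch norm -> ReLU (train mode only)."""
    return relu_forward(batchnorm_forward_train(affine_forward(x, w, b), gamma, beta))
```

The point of the helper is just to keep the loss() method of FullyConnectedNet readable: each hidden layer becomes one call instead of three.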
If you decide to do so, do it in the file cs231n/classifiers/fc_net.py.", "N, D, H1, H2, C = 2, 15, 20, 30, 10\nX = np.random.randn(N, D)\ny = np.random.randint(C, size=(N,))\n\nfor reg in [0, 3.14]:\n print 'Running check with reg = ', reg\n model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,\n reg=reg, weight_scale=5e-2, dtype=np.float64,\n use_batchnorm=True)\n\n loss, grads = model.loss(X, y)\n print 'Initial loss: ', loss\n\n for name in sorted(grads):\n f = lambda _: model.loss(X, y)[0]\n grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)\n print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))\n if reg == 0: print", "Batchnorm for deep networks\nRun the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.", "# Try training a very deep net with batchnorm\nhidden_dims = [100, 100, 100, 100, 100]\n\nnum_train = 1000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nweight_scale = 2e-2\nbn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)\nmodel = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)\n\nbn_solver = Solver(bn_model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True, print_every=200)\nbn_solver.train()\n\nsolver = Solver(model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True, print_every=200)\nsolver.train()", "Run the following to visualize the results from two networks trained above. 
You should find that using batch normalization helps the network to converge much faster.", "plt.subplot(3, 1, 1)\nplt.title('Training loss')\nplt.xlabel('Iteration')\n\nplt.subplot(3, 1, 2)\nplt.title('Training accuracy')\nplt.xlabel('Epoch')\n\nplt.subplot(3, 1, 3)\nplt.title('Validation accuracy')\nplt.xlabel('Epoch')\n\nplt.subplot(3, 1, 1)\nplt.plot(solver.loss_history, 'o', label='baseline')\nplt.plot(bn_solver.loss_history, 'o', label='batchnorm')\n\nplt.subplot(3, 1, 2)\nplt.plot(solver.train_acc_history, '-o', label='baseline')\nplt.plot(bn_solver.train_acc_history, '-o', label='batchnorm')\n\nplt.subplot(3, 1, 3)\nplt.plot(solver.val_acc_history, '-o', label='baseline')\nplt.plot(bn_solver.val_acc_history, '-o', label='batchnorm')\n \nfor i in [1, 2, 3]:\n plt.subplot(3, 1, i)\n plt.legend(loc='upper center', ncol=4)\nplt.gcf().set_size_inches(15, 15)\nplt.show()", "Batch normalization and initialization\nWe will now run a small experiment to study the interaction of batch normalization and weight initialization.\nThe first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. 
The second layer will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.", "# Try training a very deep net with batchnorm\nhidden_dims = [50, 50, 50, 50, 50, 50, 50]\n\nnum_train = 1000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nbn_solvers = {}\nsolvers = {}\nweight_scales = np.logspace(-4, 0, num=20)\nfor i, weight_scale in enumerate(weight_scales):\n print 'Running weight scale %d / %d' % (i + 1, len(weight_scales))\n bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)\n model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)\n\n bn_solver = Solver(bn_model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=False, print_every=200)\n bn_solver.train()\n bn_solvers[weight_scale] = bn_solver\n\n solver = Solver(model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=False, print_every=200)\n solver.train()\n solvers[weight_scale] = solver\n\n# Plot results of weight scale experiment\nbest_train_accs, bn_best_train_accs = [], []\nbest_val_accs, bn_best_val_accs = [], []\nfinal_train_loss, bn_final_train_loss = [], []\n\nfor ws in weight_scales:\n best_train_accs.append(max(solvers[ws].train_acc_history))\n bn_best_train_accs.append(max(bn_solvers[ws].train_acc_history))\n \n best_val_accs.append(max(solvers[ws].val_acc_history))\n bn_best_val_accs.append(max(bn_solvers[ws].val_acc_history))\n \n final_train_loss.append(np.mean(solvers[ws].loss_history[-100:]))\n bn_final_train_loss.append(np.mean(bn_solvers[ws].loss_history[-100:]))\n \nplt.subplot(3, 1, 1)\nplt.title('Best val accuracy vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Best 
val accuracy')\nplt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')\nplt.legend(ncol=2, loc='lower right')\n\nplt.subplot(3, 1, 2)\nplt.title('Best train accuracy vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Best training accuracy')\nplt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')\nplt.legend()\n\nplt.subplot(3, 1, 3)\nplt.title('Final training loss vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Final training loss')\nplt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')\nplt.legend()\n\nplt.gcf().set_size_inches(10, 15)\nplt.show()", "Question:\nDescribe the results of this experiment, and try to give a reason why the experiment gave the results that it did.\nAnswer:\nBatch-normalization is more robust to weight initialization scale." ]
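The robustness claimed in the answer can be illustrated without the Solver machinery: push random data through a deep ReLU stack and compare the activation scale with and without a batchnorm-like standardization step (a sketch with made-up dimensions, not the assignment's FullyConnectedNet):

```python
import numpy as np

def final_activation_std(weight_scale, n_layers=8, dim=50, normalize=False, seed=0):
    """Std of the last layer's activations in a deep ReLU stack.
    With a poorly chosen weight scale the unnormalized signal collapses
    toward zero; standardizing each layer's features keeps it near unit scale."""
    rng = np.random.RandomState(seed)
    h = rng.randn(200, dim)
    for _ in range(n_layers):
        h = np.maximum(0, h.dot(rng.randn(dim, dim) * weight_scale))
        if normalize:
            # batchnorm-like step: zero mean, unit variance per feature
            h = (h - h.mean(axis=0)) / (h.std(axis=0) + 1e-8)
    return h.std()
```

With `weight_scale=1e-3` the unnormalized activations all but vanish after eight layers, while the normalized stack stays near unit scale — consistent with the baseline being far more sensitive to the weight initialization scale than the batchnorm networks.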
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jbocharov-mids/W207-Machine-Learning
John_Bocharov_p1.ipynb
apache-2.0
[ "Project 1: Digit Classification with KNN and Naive Bayes\nIn this project, you'll implement your own image recognition system for classifying digits. Read through the code and the instructions carefully and add your own code where indicated. Each problem can be addressed succinctly with the included packages -- please don't add any more. Grading will be based on writing clean, commented code, along with a few short answers.\nAs always, you're welcome to work on the project in groups and discuss ideas on the course wall, but <b> please prepare your own write-up (with your own code). </b>\nIf you're interested, check out these links related to digit recognition:\nYann Lecun's MNIST benchmarks: http://yann.lecun.com/exdb/mnist/\nStanford Streetview research and data: http://ufldl.stanford.edu/housenumbers/", "# This tells matplotlib not to try opening a new window for each plot.\n%matplotlib inline\n\n# Import a bunch of libraries.\nimport time\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib.ticker import MultipleLocator\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.datasets import fetch_mldata\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.naive_bayes import BernoulliNB\nfrom sklearn.naive_bayes import MultinomialNB\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.grid_search import GridSearchCV\nfrom sklearn.metrics import classification_report\n\n# Set the randomizer seed so results are the same each time.\nnp.random.seed(0)", "Load the data. Notice that we are splitting the data into training, development, and test. We also have a small subset of the training data called mini_train_data and mini_train_labels that you should use in all the experiments below, unless otherwise noted.", "# Load the digit data either from mldata.org, or once downloaded to data_home, from disk. 
The data is about 53MB so this cell\n# should take a while the first time your run it.\nmnist = fetch_mldata('MNIST original', data_home='~/datasets/mnist')\nX, Y = mnist.data, mnist.target\n\n# Rescale grayscale values to [0,1].\nX = X / 255.0\n\n# Shuffle the input: create a random permutation of the integers between 0 and the number of data points and apply this\n# permutation to X and Y.\n# NOTE: Each time you run this cell, you'll re-shuffle the data, resulting in a different ordering.\nshuffle = np.random.permutation(np.arange(X.shape[0]))\nX, Y = X[shuffle], Y[shuffle]\n\nprint 'data shape: ', X.shape\nprint 'label shape:', Y.shape\n\n# Set some variables to hold test, dev, and training data.\ntest_data, test_labels = X[61000:], Y[61000:]\ndev_data, dev_labels = X[60000:61000], Y[60000:61000]\ntrain_data, train_labels = X[:60000], Y[:60000]\nmini_train_data, mini_train_labels = X[:1000], Y[:1000]", "(1) Create a 10x10 grid to visualize 10 examples of each digit. Python hints:\n\nplt.rc() for setting the colormap, for example to black and white\nplt.subplot() for creating subplots\nplt.imshow() for rendering a matrix\nnp.array.reshape() for reshaping a 1D feature vector into a 2D matrix (for rendering)", "#def P1(num_examples=10):\n\n### STUDENT START ###\n# Credit where due... 
some inspiration drawn from:\n# https://github.com/mnielsen/neural-networks-and-deep-learning/blob/master/fig/mnist.py\n\n# example_as_pixel_matrix():\n# transforms a 784 element pixel into a 28 x 28 pixel matrix\ndef example_as_pixel_matrix(example):\n return np.reshape(example, (-1, 28))\n\n# add_example_to_figure():\n# given an existing figure, number of rows, columns, and position,\n# adds a subplot with the example to the figure\ndef add_example_to_figure(example, \n figure, \n subplot_rows, \n subplot_cols, \n subplot_number):\n matrix = example_as_pixel_matrix(example)\n\n subplot = figure.add_subplot(subplot_rows, subplot_cols, subplot_number)\n subplot.imshow(matrix, cmap='Greys', interpolation='Nearest')\n # disable tick marks\n subplot.set_xticks(np.array([]))\n subplot.set_yticks(np.array([]))\n\n# plot_examples():\n# given a matrix of examples (digit, example#) => example, \n# plots it with digits as rows and examples as columns\ndef plot_examples(examples):\n \n figure = plt.figure()\n \n shape = np.shape(examples)\n rows = shape[0]\n columns = shape[1]\n \n subplot_index = 1\n \n for digit, examples_for_digit in enumerate(examples):\n for example_index, example in enumerate(examples_for_digit):\n add_example_to_figure(example, \n figure, \n rows, \n columns, \n subplot_index\n )\n subplot_index = subplot_index + 1\n \n figure.tight_layout()\n plt.show()\n\n# plot_one_example():\n# given an example, plots only that example, typically\n# for debugging or diagnostics\ndef plot_one_example(example): \n examples = [ [ example ] ]\n plot_examples(examples)\n\n# select_indices_of_digit():\n# given an array of digit lables, selects the indices of\n# labels that match a desired digit\ndef select_indices_of_digit(labels, digit):\n return [i for i, label in enumerate(labels) if label == digit]\n\n# take_n_from():\n# code readability sugar for taking a number of elements from an array\ndef take_n_from(count, array):\n return array[:count]\n\n# 
take_n_examples_by_digit():\n# given a data set of examples, a label set, and a parameter n,\n# creates a matrix where the rows are the digits 0-9, and the\n# columns are the first n examples of each digit\ndef take_n_examples_by_digit(data, labels, n):\n examples = [\n data[take_n_from(n, select_indices_of_digit(labels, digit))]\n for digit in range(10)\n ]\n return examples\n\ndef P1(num_examples=10):\n examples = take_n_examples_by_digit(mini_train_data, mini_train_labels, num_examples)\n plot_examples(examples)\n\nP1(10)\n### STUDENT END ###\n\n#P1(10)", "(2) Evaluate a K-Nearest-Neighbors model with k = [1,3,5,7,9] using the mini training set. Report accuracy on the dev set. For k=1, show precision, recall, and F1 for each label. Which is the most difficult digit?\n\nKNeighborsClassifier() for fitting and predicting\nclassification_report() for producing precision, recall, F1 results", "#def P2(k_values):\n\n### STUDENT START ###\n\nfrom sklearn.metrics import accuracy_score\n\n# apply_k_nearest_neighbors():\n# given the parameter k, training data and labels, and development data and labels,\n# fit a k nearest neighbors classifier using the training data, \n# test using development data, and output a report\ndef apply_k_nearest_neighbors(k,\n training_data,\n training_labels,\n development_data,\n development_labels):\n\n neigh = KNeighborsClassifier(n_neighbors = k)\n neigh.fit(training_data, training_labels)\n \n predicted_labels = neigh.predict(development_data)\n \n target_names = [ str(i) for i in range(10) ]\n \n print '============ Classification report for k = ' + str(k) + ' ============'\n print ''\n print(classification_report(\n development_labels, \n predicted_labels, \n target_names = target_names))\n \n return accuracy_score(development_labels, predicted_labels, normalize = True)\n\ndef P2(k_values):\n return [\n apply_k_nearest_neighbors(k,\n mini_train_data,\n mini_train_labels,\n dev_data,\n dev_labels)\n for k in k_values\n ]\n\nk_values = 
[1, 3, 5, 7, 9]\nP2(k_values)\n\n### STUDENT END ###\n\n#k_values = [1, 3, 5, 7, 9]\n#P2(k_values)", "ANSWER: The most difficult digit is 9, as measured by f1-score\n(3) Using k=1, report dev set accuracy for the training set sizes below. Also, measure the amount of time needed for prediction with each training size.\n\ntime.time() gives a wall clock value you can use for timing operations", "#def P3(train_sizes, accuracies):\n\n### STUDENT START ###\n# k_nearest_neighbors_timed_accuracy():\n# given the parameter k, training data and labels, and development data and labels,\n# fit a k nearest neighbors classifier using the training data, \n# test using development data, and return the number of examples, prediction time,\n# and accuracy as a Python dictionary\ndef k_nearest_neighbors_timed_accuracy(k,\n training_data,\n training_labels,\n development_data,\n development_labels):\n\n neigh = KNeighborsClassifier(n_neighbors = k)\n neigh.fit(training_data, training_labels)\n \n start = time.time()\n predicted_labels = neigh.predict(development_data)\n end = time.time()\n \n examples, dimensions = np.shape(training_data)\n \n accuracy = accuracy_score(development_labels, predicted_labels, normalize = True)\n \n return { 'examples' : examples, 'time' : end-start, 'accuracy' : accuracy }\n\ndef P3(train_sizes, accuracies):\n k = 1\n for train_size in train_sizes:\n # sample train_size examples from the training set\n current_train_data, current_train_labels = X[:train_size], Y[:train_size]\n \n results = k_nearest_neighbors_timed_accuracy(k,\n current_train_data,\n current_train_labels,\n dev_data,\n dev_labels)\n print(results)\n accuracies.append(results['accuracy'])\n\ntrain_sizes = [100, 200, 400, 800, 1600, 3200, 6400, 12800, 25000]\naccuracies = [ ]\nP3(train_sizes, accuracies) \n### STUDENT END ###\n\n#train_sizes = [100, 200, 400, 800, 1600, 3200, 6400, 12800, 25000]\n#accuracies = []\n#P3(train_sizes, accuracies)", "(4) Fit a regression model that predicts 
accuracy from training size. What does it predict for n=60000? What's wrong with using regression here? Can you apply a transformation that makes the predictions more reasonable?\n\nRemember that the sklearn fit() functions take an input matrix X and output vector Y. So each input example in X is a vector, even if it contains only a single value.", "#def P4():\n\n### STUDENT START ###\n\nfrom sklearn.linear_model import LogisticRegression\n\n# fit_linear_regression():\n# given arrays of training data sizes and corresponding accuracies,\n# train and return a linear regression model for predicting accuracies\ndef fit_linear_regression(train_sizes, accuracies):\n    train_sizes_matrix = [ [ train_size ] for train_size in train_sizes ]\n    \n    linear = LinearRegression()\n    linear.fit(train_sizes_matrix, accuracies)\n    \n    return linear\n\n# fit_logistic_regression():\n# given arrays of training data sizes and corresponding accuracies,\n# train and return a logistic regression model for predicting accuracies\ndef fit_logistic_regression(train_sizes, accuracies):\n    train_sizes_matrix = [ [ train_size ] for train_size in train_sizes ]\n    \n    logistic = LogisticRegression()\n    logistic.fit(train_sizes_matrix, accuracies)\n    \n    return logistic\n\ndef P4():\n    full_training_size = 60000\n    \n    linear = fit_linear_regression(train_sizes, accuracies)\n    linear_prediction = linear.predict(full_training_size)\n    print('Linear model prediction for ' \n          + str(full_training_size) + ' : ' + str(linear_prediction[0]))\n    \n    logistic = fit_logistic_regression(train_sizes, accuracies)\n    logistic_prediction = logistic.predict(full_training_size)\n    print('Logistic model prediction for ' \n          + str(full_training_size) + ' : ' + str(logistic_prediction[0]))\n    \nP4()\n\n### STUDENT END ###\n\n#P4()", "ANSWER: OLS/Linear models aren't designed to respect the probability range (0,1) and can produce probabilities > 1 or < 0 (e.g. 1.24). 
A Logistic Regression model is a great straightforward fix as it produces predictions in the valid probability range (0.0 - 1.0) by design.\n(5) Fit a 1-NN and output a confusion matrix for the dev data. Use the confusion matrix to identify the most confused pair of digits, and display a few example mistakes.\n\nconfusion_matrix() produces a confusion matrix", "#def P5():\n\n### STUDENT START ###\n\n# train_k_nearest_neighbors():\n# given the parameter k and training data and labels,\n# fit a k nearest neighbors classifier using the training data\ndef train_k_nearest_neighbors(k,\n                              training_data,\n                              training_labels):\n\n    neigh = KNeighborsClassifier(n_neighbors = k)\n    neigh.fit(training_data, training_labels)\n    \n    return neigh\n\n# most_confused():\n# given a confusion matrix\n# returns a sequence that comprises the two most confused digits, and errors between them\ndef most_confused(confusion):\n    rows, columns = np.shape(confusion)\n    worst_row, worst_column, worst_errors = 0, 1, 0\n    \n    # iterate through the upper triangle, ignoring the diagonals\n    # confused is the sum for each pair of indices\n    for row in range(rows):\n        for column in range(row + 1, columns):\n            errors = confusion[row][column] + confusion[column][row]\n            if errors > worst_errors:\n                worst_row, worst_column, worst_errors = row, column, errors\n    \n    return ( worst_row, worst_column, worst_errors )\n\n# select_pairwire_error_indices():\n# given a predictions vector, actual label vector, and the digits of interest\n# returns an array of indices where the digits were confused\ndef select_pairwire_error_indices(predictions, labels, confused_low, confused_high):\n    error_indices = [ ]\n    for i, prediction in enumerate(predictions):\n        label = labels[i]\n        if ((prediction == confused_low and label == confused_high) or\n            (prediction == confused_high and label == confused_low)):\n            \n            error_indices.append(i)\n    \n    return error_indices\n\ndef P5():\n    k = 1\n    neigh = train_k_nearest_neighbors(k, 
train_data, train_labels)\n    development_predicted = neigh.predict(dev_data)\n    \n    confusion = confusion_matrix(dev_labels, development_predicted)\n    \n    confused_low, confused_high, confusion_errors = most_confused(confusion)\n    print('Most confused digits are: ' + str(confused_low) + ' and ' + str(confused_high)\n          + ', with ' + str(confusion_errors) + ' total confusion errors')\n    \n    error_indices = select_pairwire_error_indices(\n        development_predicted, dev_labels, confused_low, confused_high)\n    error_examples = [ dev_data[error_indices] ]\n    plot_examples(error_examples)\n    \n    return confusion\nP5()\n\n### STUDENT END ###\n\n#P5()", "(6) A common image processing technique is to smooth an image by blurring. The idea is that the value of a particular pixel is estimated as the weighted combination of the original value and the values around it. Typically, the blurring is Gaussian -- that is, the weight of a pixel's influence is determined by a Gaussian function over the distance to the relevant pixel.\nImplement a simplified Gaussian blur by just using the 8 neighboring pixels: the smoothed value of a pixel is a weighted combination of the original value and the 8 neighboring values. Try applying your blur filter in 3 ways:\n- preprocess the training data but not the dev data\n- preprocess the dev data but not the training data\n- preprocess both training and dev data\nNote that there are Gaussian blur filters available, for example in scipy.ndimage.filters. 
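As an aside, the simplified 8-neighbor blur can also be written with a padded accumulator instead of per-pixel loops; this sketch follows the same edge convention as the loop version below (edge pixels average only their in-bounds neighbors), and the function name is illustrative:

```python
import numpy as np

def blur_3x3(image_flat, size=28):
    """Simplified blur: each pixel becomes the mean of itself and its
    8 neighbors, where edge pixels use only the neighbors that exist."""
    img = image_flat.reshape(size, size).astype(float)
    padded = np.pad(img, 1, mode='constant')
    mask = np.pad(np.ones((size, size)), 1, mode='constant')
    acc = np.zeros((size, size))
    cnt = np.zeros((size, size))
    # accumulate the 9 shifted copies and the count of valid neighbors
    for di in range(3):
        for dj in range(3):
            acc += padded[di:di + size, dj:dj + size]
            cnt += mask[di:di + size, dj:dj + size]
    return (acc / cnt).ravel()
```

Because it touches each image as a handful of array operations rather than 784 Python-level neighborhood computations, this should be much faster when blurring all 60000 training images.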
You're welcome to experiment with those, but you are likely to get the best results with the simplified version I described above.", "import itertools\n\n# blur():\n# blurs an image by averaging adjacent pixels\ndef blur(image):\n pixel_matrix = example_as_pixel_matrix(image)\n blurred_image = []\n rows, columns = np.shape(pixel_matrix)\n \n for row in range(rows):\n for column in range(columns):\n # take the mean of the 9-pixel neighborhood (in clause)\n # but guard against running off the edges of the matrix (if clause)\n value = np.mean(list( \n pixel_matrix[i][j] \n for i, j\n in itertools.product(\n range(row - 1, row + 2), \n range(column - 1, column + 2)\n )\n if (i >= 0) and (j >= 0) and (i < rows) and (j < columns)\n ))\n \n blurred_image.append(value)\n \n return blurred_image\n\n# blur_images():\n# blurs a collection of images\ndef blur_images(images): \n blurred = [ blur(image) for image in images ]\n return blurred\n\n# Do this in batches since iPythonNB seems to hang on large batches\ntrain_data_0k = train_data[:10000]\nblurred_train_data_0k = blur_images(train_data_0k)\n\ntrain_data_1k = train_data[10000:20000]\nblurred_train_data_1k = blur_images(train_data_1k)\n\ntrain_data_2k = train_data[20000:30000]\nblurred_train_data_2k = blur_images(train_data_2k)\n\ntrain_data_3k = train_data[30000:40000]\nblurred_train_data_3k = blur_images(train_data_3k)\n\ntrain_data_4k = train_data[40000:50000]\nblurred_train_data_4k = blur_images(train_data_4k)\n\ntrain_data_5k = train_data[50000:60000]\nblurred_train_data_5k = blur_images(train_data_5k)\n\nblurred_dev_data = blur_images(dev_data)\n\nblurred_train_data = (\n blurred_train_data_0k \n + blurred_train_data_1k\n + blurred_train_data_2k\n + blurred_train_data_3k\n + blurred_train_data_4k\n + blurred_train_data_5k\n)\n\n#def P6():\n \n### STUDENT START ###\n\ndef P6():\n k = 1\n neigh_blurred_train = train_k_nearest_neighbors(k, blurred_train_data, train_labels)\n neigh_unblurred_train = 
train_k_nearest_neighbors(k, train_data, train_labels)\n    \n    predicted_blurred_train_unblurred_dev = (\n        neigh_blurred_train.predict(dev_data)\n    )\n    \n    predicted_unblurred_train_blurred_dev = (\n        neigh_unblurred_train.predict(blurred_dev_data)\n    )\n    \n    predicted_blurred_train_blurred_dev = (\n        neigh_blurred_train.predict(blurred_dev_data)\n    )\n    \n    print 'Accuracy for blurred training, unblurred dev:'\n    print(accuracy_score(\n        dev_labels, predicted_blurred_train_unblurred_dev, normalize = True))\n    \n    print 'Accuracy for unblurred training, blurred dev:'\n    print(accuracy_score(\n        dev_labels, predicted_unblurred_train_blurred_dev, normalize = True))\n    \n    print 'Accuracy for blurred training, blurred dev:'\n    print(accuracy_score(\n        dev_labels, predicted_blurred_train_blurred_dev, normalize = True))\n\nP6()\n\n### STUDENT END ###\n\n#P6()", "ANSWER: Blurring the training data but not the development data gave the best accuracy of the three approaches.\n(7) Fit a Naive Bayes classifier and report accuracy on the dev data. Remember that Naive Bayes estimates P(feature|label). While sklearn can handle real-valued features, let's start by mapping the pixel values to either 0 or 1. You can do this as a preprocessing step, or with the binarize argument. With binary-valued features, you can use BernoulliNB. Next try mapping the pixel values to 0, 1, or 2, representing white, grey, or black. This mapping requires MultinomialNB. Does the multi-class version improve the results? 
Why or why not?", "#def P7():\n\n### STUDENT START ###\n\nfrom sklearn.metrics import accuracy_score\n\n# binarize_example():\n# Turn all pixels below 0.5 (or threshold) -> 0, greater -> 1\ndef binarize_example(example, threshold = 0.5):\n binarized = [ 1 if value > threshold else 0 for value in example ]\n return binarized\n \n# binarize_examples():\n# Apply binarization to a set of example\ndef binarize_examples(examples, threshold = 0.5):\n binarized = [ binarize_example(example, threshold) for example in examples ]\n return binarized\n\n# ternarize_example():\n# Turn all pixels below 1/3 (or threshold) -> 0, 1/3 through 2/3 -> 1, greater -> 2\ndef ternarize_example(example, threshold_low = 0.33333333, threshold_high = 0.66666666):\n ternarized = [ \n 0 if value < threshold_low else 1 if value < threshold_high else 2\n for value in example\n ]\n return ternarized\n\n# ternarize_examples():\n# Apply ternarization to a set of example\ndef ternarize_examples(examples, threshold_low = 0.33333333, threshold_high = 0.66666666):\n ternarized = [ \n ternarize_example(example, threshold_low, threshold_high) \n for example in examples \n ]\n return ternarized\n\ndef P7():\n binarized_train_data = binarize_examples(train_data)\n \n binary_naive_bayes = BernoulliNB()\n binary_naive_bayes.fit(binarized_train_data, train_labels)\n\n binarized_dev_data = binarize_examples(dev_data)\n binary_naive_bayes_predicted = binary_naive_bayes.predict(binarized_dev_data)\n \n target_names = [ str(i) for i in range(10) ]\n \n print '============ Classification report for binarized ============'\n print ''\n print(classification_report(\n dev_labels, \n binary_naive_bayes_predicted, \n target_names = target_names))\n print ' Accuracy score: '\n print(accuracy_score(dev_labels, binary_naive_bayes_predicted, normalize = True))\n \n ternarized_train_data = ternarize_examples(train_data)\n \n ternary_naive_bayes = MultinomialNB()\n ternary_naive_bayes.fit(ternarized_train_data, train_labels)\n 
\n    ternarized_dev_data = ternarize_examples(dev_data)\n    \n    ternary_naive_bayes_predicted = ternary_naive_bayes.predict(ternarized_dev_data)\n    print '============ Classification report for ternarized ============'\n    print ''\n    print(classification_report(\n        dev_labels, \n        ternary_naive_bayes_predicted, \n        target_names = target_names))\n    print ' Accuracy score: '\n    print(accuracy_score(dev_labels, ternary_naive_bayes_predicted, normalize = True))\n    \nP7()\n \n### STUDENT END ###\n\n#P7()", "ANSWER:\n(8) Use GridSearchCV to perform a search over values of alpha (the Laplace smoothing parameter) in a Bernoulli NB model. What is the best value for alpha? What is the accuracy when alpha=0? Is this what you'd expect?\n\nNote that GridSearchCV partitions the training data so the results will be a bit different than if you used the dev data for evaluation.", "#def P8(alphas):\n\n### STUDENT START ###\n\ndef P8(alphas):\n    binarized_train_data = binarize_examples(train_data)\n    \n    bernoulli_naive_bayes = BernoulliNB()\n    \n    grid_search = GridSearchCV(bernoulli_naive_bayes, alphas, verbose = 3)\n    grid_search.fit(binarized_train_data, train_labels)\n\n    return grid_search\n\nalphas = {'alpha': [0.0, 0.0001, 0.001, 0.01, 0.1, 0.5, 1.0, 2.0, 10.0]}\nnb = P8(alphas)\nprint nb.best_params_\n\n### STUDENT END ###\n\n#alphas = {'alpha': [0.0, 0.0001, 0.001, 0.01, 0.1, 0.5, 1.0, 2.0, 10.0]}\n#nb = P8(alphas)\n\n#print nb.best_params_", "ANSWER: The best value for alpha is 0.0001.\nWhen alpha is 0, the accuracy is about one tenth. With alpha = 0 there is no smoothing, so unseen feature/class combinations get zero probability and the model degenerates into essentially just picking a class. Since there are 10 (0-9) classes with roughly equal distributions, always picking the same class is expected to have about a 1/10 chance of being right.\n(9) Try training a model using GaussianNB, which is intended for real-valued features, and evaluate on the dev data. You'll notice that it doesn't work so well. Try to diagnose the problem. 
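The numeric effect behind the diagnosis can be seen directly: a pixel whose training variance is (near) zero assigns an astronomically negative log-likelihood to any test value that deviates at all, drowning out every informative pixel. A small variance floor tames it (the floor value 0.1 below is illustrative; newer sklearn versions expose the same idea as the var_smoothing parameter of GaussianNB):

```python
import numpy as np

def gaussian_log_likelihood(x, theta, sigma):
    """Per-feature Gaussian log-density that a Gaussian NB model sums over features."""
    return -0.5 * np.log(2.0 * np.pi * sigma) - (x - theta) ** 2 / (2.0 * sigma)

# A border pixel that is 0 in every training image has theta ~ 0 and sigma ~ 0,
# so a test value of 0.5 there gets a huge negative log-likelihood.
near_zero_var = 1e-9
ll_raw = gaussian_log_likelihood(0.5, 0.0, near_zero_var)

# Flooring the variance keeps one stray pixel from dominating the class score.
ll_floored = gaussian_log_likelihood(0.5, 0.0, near_zero_var + 0.1)

print('raw: %.3g  floored: %.3g' % (ll_raw, ll_floored))
```

The raw term is on the order of minus a hundred million, while the floored term is of order one — so without the floor a single always-blank pixel can outvote every other feature.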
You should be able to find a simple fix that returns the accuracy to around the same rate as BernoulliNB. Explain your solution.\nHint: examine the parameters estimated by the fit() method, theta_ and sigma_.", "#def P9():\n\n### STUDENT START ###\n\ndef train_and_score_gaussian(\n training_data, training_labels, development_data, development_labels):\n \n model = GaussianNB().fit(training_data, training_labels)\n predictions = model.predict(development_data)\n print(accuracy_score(development_labels, predictions, normalize = True))\n \n return model\n\ndef P9():\n print 'Accuracy score of Gaussian Naive Bayes (uncorrected): '\n gaussian_naive_bayes = train_and_score_gaussian(\n train_data, train_labels, dev_data, dev_labels)\n\n theta = gaussian_naive_bayes.theta_\n\n for digit in range(10):\n theta_figure = plt.figure()\n theta_hist = plt.hist(theta[digit], bins = 100)\n theta_hist_title = plt.title('Theta distribution for the digit ' + str(digit))\n plt.show()\n sigma = gaussian_naive_bayes.sigma_\n\n for digit in range(10):\n sigma_figure = plt.figure()\n sigma_hist = plt.hist(sigma[digit], bins = 100)\n sigma_hist_title = plt.title('Sigma distribution for the digit ' + str(digit))\n plt.show()\n\n return gaussian_naive_bayes\n\ngnb = P9()\n\n# Attempts to improve were unsuccessful, see attempts below\n\nprint('Issue: Many features have variance 0, ')\nprint('which means they \"contribute\" but the contribution is noise')\n\nexamples, pixels = np.shape(train_data)\ndef select_signal_pixel_indices(data):\n indices = [ ]\n examples, pixels = np.shape(data)\n \n for pixel in range(pixels):\n has_signal = False\n \n for example in range(examples):\n if data[example][pixel] > 0.0:\n has_signal = True\n \n if has_signal:\n indices.append(pixel)\n \n return indices\n \npixels_with_signal = select_signal_pixel_indices(train_data)\n\ndef select_pixels_with_signal(data, pixels_with_signal):\n examples, pixels = np.shape(data)\n selected = [\n
data[example][pixels_with_signal]\n for example in range(examples)\n ]\n \n return selected\n\nsignal_train_data = select_pixels_with_signal(train_data, pixels_with_signal)\n\nsignal_dev_data = select_pixels_with_signal(dev_data, pixels_with_signal)\n\nprint('Attempt #0 : only select non-0 pixels ')\ntrain_and_score_gaussian(signal_train_data, train_labels, signal_dev_data, dev_labels)\n\n\ndef transform_attempt1(pixel):\n return np.log(0.1 + pixel)\n\nvectorized_transform_attempt1 = np.vectorize(transform_attempt1)\n\nmapped_train_data = vectorized_transform_attempt1(train_data)\nmapped_dev_data = vectorized_transform_attempt1(dev_data)\n\n\nprint('Attempt #1 : transform each pixel with log(0.1 + pixel) ')\ntrain_and_score_gaussian(mapped_train_data, train_labels, mapped_dev_data, dev_labels)\n\ndef transform_attempt2(pixel):\n return 0.0 if pixel < 0.0001 else 1.0\n\nvectorized_transform_attempt2 = np.vectorize(transform_attempt2)\n\nmapped_train_data = vectorized_transform_attempt2(train_data)\nmapped_dev_data = vectorized_transform_attempt2(dev_data)\n\n\nprint('Attempt #2 : binarize all pixels with a very low threshold ')\ntrain_and_score_gaussian(mapped_train_data, train_labels, mapped_dev_data, dev_labels)\n\n### STUDENT END ###\n\n#gnb = P9()", "ANSWER:\n(10) Because Naive Bayes is a generative model, we can use the trained model to generate digits. Train a BernoulliNB model and then generate a 10x20 grid with 20 examples of each digit. Because you're using a Bernoulli model, each pixel output will be either 0 or 1. How do the generated digits compare to the training digits?\n\nYou can use np.random.rand() to generate random numbers from a uniform distribution\nThe estimated probability of each pixel is stored in feature_log_prob_.
You'll need to use np.exp() to convert a log probability back to a probability.", "#def P10(num_examples):\n\n### STUDENT START ###\n\ndef generate_example(log_probabilities):\n pixels = [\n 1.0 if np.random.rand() <= np.exp( log_probability ) else 0.0\n for log_probability in log_probabilities\n ]\n\n return pixels\n\n# more than 10 x 10 gets scaled too small\ndef plot_10_examples(binary_naive_bayes):\n per_digit_log_probabilities = binary_naive_bayes.feature_log_prob_\n \n examples = [\n [ \n generate_example(per_digit_log_probabilities[digit])\n for example in range(10)\n ]\n for digit in range(10)\n ]\n \n plot_examples(examples)\n\ndef P10(num_examples):\n binarized_train_data = binarize_examples(train_data)\n binary_naive_bayes = BernoulliNB().fit(binarized_train_data, train_labels)\n \n page = 0\n \n while page < num_examples:\n plot_10_examples(binary_naive_bayes)\n page = page + 10\n \nP10(20)\n\n### STUDENT END ###\n\n#P10(20)", "ANSWER: Many of the generated digits are recognizable. However, they lack the connected lines of hand-drawn digits because each pixel is sampled independently.\n(11) Remember that a strongly calibrated classifier is roughly 90% accurate when the posterior probability of the predicted class is 0.9. A weakly calibrated classifier is more accurate when the posterior is 90% than when it is 80%. A poorly calibrated classifier has no positive correlation between posterior and accuracy.\nTrain a BernoulliNB model with a reasonable alpha value. For each posterior bucket (think of a bin in a histogram), you want to estimate the classifier's accuracy.
So for each prediction, find the bucket the maximum posterior belongs to and update the \"correct\" and \"total\" counters.\nHow would you characterize the calibration for the Naive Bayes model?", "#def P11(buckets, correct, total):\n \n### STUDENT START ###\n\nbuckets = [0.5, 0.9, 0.999, 0.99999, 0.9999999, 0.999999999, 0.99999999999, 0.9999999999999, 1.0]\ncorrect = [0 for i in buckets]\ntotal = [0 for i in buckets]\n\ndef train_binarized_bernoulli(training_data, training_labels, alpha = 0.0001):\n binarized_train_data = binarize_examples(training_data)\n binary_naive_bayes = BernoulliNB(alpha = alpha)\n binary_naive_bayes.fit(binarized_train_data, training_labels)\n \n return binary_naive_bayes\n\ndef find_bucket_index(buckets, posterior):\n index = None\n \n for i in range(len(buckets)):\n if index == None and posterior <= buckets[i]:\n index = i\n\n return index\n \ndef score_by_posterior_buckets(\n binary_naive_bayes, test_data, test_labels,\n buckets, correct, total):\n \n predictions = binary_naive_bayes.predict(test_data)\n posteriors = binary_naive_bayes.predict_proba(test_data)\n confidences = [\n posteriors[index][predictions[index]]\n for index in range(len(predictions))\n ]\n \n for index, confidence in enumerate(confidences):\n bucket_index = find_bucket_index(buckets, confidence)\n \n total[bucket_index] = total[bucket_index] + 1\n \n if predictions[index] == test_labels[index]:\n correct[bucket_index] = correct[bucket_index] + 1\n \ndef P11(buckets, correct, total):\n binary_naive_bayes = train_binarized_bernoulli(\n train_data, train_labels)\n \n binarized_dev_data = binarize_examples(dev_data)\n score_by_posterior_buckets(binary_naive_bayes, binarized_dev_data, dev_labels,\n buckets, correct, total)\n\nP11(buckets, correct, total)\n\nfor i in range(len(buckets)):\n accuracy = 0.0\n if (total[i] > 0): accuracy = float(correct[i]) / float(total[i])\n print 'p(pred) <= %.13f total = %3d accuracy = %.3f' %(buckets[i], total[i], accuracy) \n \n### STUDENT END
###\n\n#buckets = [0.5, 0.9, 0.999, 0.99999, 0.9999999, 0.999999999, 0.99999999999, 0.9999999999999, 1.0]\n#correct = [0 for i in buckets]\n#total = [0 for i in buckets]\n\n#P11(buckets, correct, total)\n\n#for i in range(len(buckets)):\n# accuracy = 0.0\n# if (total[i] > 0): accuracy = correct[i] / total[i]\n# print 'p(pred) <= %.13f total = %3d accuracy = %.3f' %(buckets[i], total[i], accuracy)", "ANSWER: The model is poorly calibrated - all probability buckets are over-confident, many drastically so.\n(12) EXTRA CREDIT\nTry designing extra features to see if you can improve the performance of Naive Bayes on the dev set. Here are a few ideas to get you started:\n- Try summing the pixel values in each row and each column.\n- Try counting the number of enclosed regions; 8 usually has 2 enclosed regions, 9 usually has 1, and 7 usually has 0.\nMake sure you comment your code well!", "#def P12():\n\n### STUDENT START ###\n\n\n### STUDENT END ###\n\n#P12()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
EmissionsIndex/Emissions-Index
Code/Index and Generation Calculations.ipynb
gpl-3.0
[ "Import cleaned EIA/EPA data and calculate final Emissions Index/Generation by Fuel data\nThis notebook makes use of data created in the notebooks (nested levels indicate a chain of calculations):\n- EIA Bulk Download - extract facility generation\n - Emission factors\n- EIA bulk download - non-facility (distributed PV & state-level)\n - Emission factors\n- Group EPA emissions data by month and quarter\n - Load EPA Emissions Data", "%matplotlib inline\nfrom __future__ import division\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n# import plotly.plotly as py\n# import plotly.graph_objs as go\n# from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot\nimport pandas as pd\nimport os\nimport numpy as np\n# init_notebook_mode(connected=True)\nimport datetime as dt", "Contents\n\nImport data\nEIA facility data\nTotal EIA gen & emissions\nEPA emissions data\n\n\nCheck EIA facility against EIA total\nAdjust EPA facility emissions\nDifference between EIA facility and total\nCombine all data (monthly)\nCreate plots\nMonthly Index\nQuarterly Index\nAnnual Index\n\n\n\nImport cleaned data\n\nFacility generation and CO2 emissions\nTotal generation and CO2 emissions by fuel\nEPA CO2 emissions\n\nCreate some helper functions to add datetime and quarter columns", "def add_datetime(df, year='year', month='month'):\n df['datetime'] = pd.to_datetime(df[year].astype(str) + '-' + df[month].astype(str),\n format='%Y-%m')\n\ndef add_quarter(df, year='year', month='month'):\n add_datetime(df, year, month)\n df['quarter'] = df['datetime'].dt.quarter", "Facility generation and CO2 emissions", "path = os.path.join('Facility gen fuels and CO2.csv')\neia_facility = pd.read_csv(path, parse_dates=['datetime'], low_memory=False)", "EIA Facility level emissions (consolidate fuels/prime movers)\nBecause EIA tracks all fuel consumption at facilities that might produce both electricity and useful thermal output (CHP), CO<sub>2</sub> emissions can be from one of 4
categories:\n1. Total fuel consumption for all uses (fossil & non-fossil, electricity & thermal output)\n2. Fossil fuel consumption for all uses (electricity only or CHP)\n3. Total fuel consumption for electricity only\n4. Fossil fuel consumption for electricity only\nWe are interested in Category 4. EPA reports total emissions (Category 1), which need to be adjusted. To do this, we calculate a ratio\n$$CO_2 \ Ratio = \frac{Category \ 4}{Category \ 1}$$\nWe will apply the CO<sub>2</sub> ratio factors to EPA data later in this notebook.", "cols = ['all fuel fossil CO2 (kg)','elec fuel fossil CO2 (kg)',\n 'all fuel total CO2 (kg)','elec fuel total CO2 (kg)', 'generation (MWh)']\neia_facility_grouped = eia_facility.groupby(['year', 'month', 'plant id'])[cols].sum()\neia_facility_grouped.reset_index(inplace=True)\neia_facility_grouped['CO2 ratio'] = eia_facility_grouped['elec fuel fossil CO2 (kg)'] / eia_facility_grouped['all fuel total CO2 (kg)']\neia_facility_grouped['CO2 ratio'].fillna(0, inplace=True)\neia_facility_grouped.head()", "Total EIA generation and CO2 emissions", "path = os.path.join('EIA country-wide gen fuel CO2.csv')\neia_total = pd.read_csv(path, parse_dates=['datetime'], low_memory=False)\n\neia_total['type'].unique()", "Consolidate total EIA to monthly gen and emissions\nOnly keep non-overlapping fuel categories so that my totals are correct (e.g.
don't keep utility-scale photovoltaic, because it's already counted in utility-scale solar [SUN]).", "keep_types = [u'WWW', u'WND', u'WAS', u'SUN', 'DPV', u'NUC', u'NG',\n u'PEL', u'PC', u'OTH', u'COW', u'OOG', u'HPS', u'HYC', u'GEO']\nkeep_cols = ['generation (MWh)', 'total fuel (mmbtu)', 'elec fuel (mmbtu)',\n 'all fuel CO2 (kg)', 'elec fuel CO2 (kg)']\neia_total_monthly = eia_total.loc[(eia_total['type'].isin(keep_types))].groupby(['type', 'year', 'month'])[keep_cols].sum()\n\neia_total_monthly.head()", "Pretty sure that I don't need to keep eia_total_annual", "keep_types = [u'WWW', u'WND', u'WAS', u'TSN', u'NUC', u'NG',\n u'PEL', u'PC', u'OTH', u'COW', u'OOG', u'HPS', u'HYC', u'GEO']\n\neia_total_annual = eia_total_monthly.reset_index().groupby('year').sum()\neia_total_annual['index (g/kWh)'] = eia_total_annual['elec fuel CO2 (kg)'] / eia_total_annual['generation (MWh)']", "Load EPA data\nCheck to see if there are multiple rows per facility for a single month", "path = os.path.join('Monthly EPA emissions.csv')\nepa = pd.read_csv(path)\n\nadd_quarter(epa, year='YEAR', month='MONTH')\nepa.head()", "Fill nan's with 0", "epa.loc[:,'CO2_MASS (kg)'].fillna(0, inplace=True)", "Correct EPA facility emissions\nUse the EIA facility adjustment factors to correct for CHP and biomass emissions\nUse an inner merge rather than left\nJustification: a left merge will retain CO2 emissions from facilities that aren't included in 923. 
But the generation and emissions for those facilities are included in the state-level estimates.", "eia_keep = ['month', 'year', 'all fuel total CO2 (kg)', 'CO2 ratio', 'plant id']\n\nepa_adj = epa.merge(eia_facility_grouped[eia_keep], left_on=['ORISPL_CODE', 'YEAR', 'MONTH'],\n right_on=['plant id', 'year', 'month'], how='inner') # how='left'\n\nepa_adj.drop(['month', 'year', 'plant id'], axis=1, inplace=True)\nepa_adj['epa index'] = epa_adj.loc[:,'CO2_MASS (kg)'] / epa_adj.loc[:,'GLOAD (MW)']\nepa_adj.head()\n\nsns.jointplot('CO2_MASS (kg)', 'all fuel total CO2 (kg)', epa_adj, marker='.')", "Adjust CO2 emissions where we have a CO2 ratio value\nBecause of the inner merge above, all rows should have a valid CO2 ratio", "# Calculated with an \"inner\" merge of the dataframes\nfor year in range(2001, 2017):\n total_co2 = epa_adj.loc[epa_adj['YEAR']==year, 'CO2_MASS (kg)'].sum()\n union_co2 = epa_adj.loc[(epa_adj['YEAR']==year) & \n ~(epa_adj['CO2 ratio'].isnull()), 'CO2_MASS (kg)'].sum()\n missing = total_co2 - union_co2\n \n print year, '{:.3%}'.format(union_co2/total_co2), 'accounted for', \\\n missing/1000, 'metric tons missing'", "Look back at this to ensure that I'm correctly accounting for edge cases\n- Emissions reported to CEMS under a different code than EIA\n- Emissions reported to CEMS but not EIA monthly\n- Incorrect 0 value reported to CEMS\nStart by setting all adjusted CO2 (adj CO2 (kg)) values to the reported CO2 value", "epa_adj['adj CO2 (kg)'] = epa_adj.loc[:,'CO2_MASS (kg)']", "If CEMS reported CO2 emissions are 0 but heat inputs are >0 and calculated CO2 emissions are >0, change the adjusted CO2 to NaN. These NaN values will be replaced by the calculated value later. Do the same for low index records (<300 g/kWh).
If there is a valid CO2 ratio, multiply the adjusted CO2 column by the CO2 ratio.", "epa_adj.loc[~(epa_adj['CO2_MASS (kg)']>0) &\n (epa_adj['HEAT_INPUT (mmBtu)']>0) &\n (epa_adj['all fuel total CO2 (kg)']>0), 'adj CO2 (kg)'] = np.nan\nepa_adj.loc[(epa_adj['epa index']<300) &\n (epa_adj['HEAT_INPUT (mmBtu)']>0) &\n (epa_adj['all fuel total CO2 (kg)']>0), 'adj CO2 (kg)'] = np.nan\n\nepa_adj.loc[epa_adj['CO2 ratio'].notnull(), 'adj CO2 (kg)'] *= epa_adj.loc[epa_adj['CO2 ratio'].notnull(), 'CO2 ratio']\n\nfor year in range(2001,2017):\n num_missing = len(epa_adj.loc[(epa_adj['adj CO2 (kg)'].isnull()) & \n (epa_adj['YEAR']==year), 'ORISPL_CODE'].unique())\n total = len(epa_adj.loc[epa_adj['YEAR']==year, 'ORISPL_CODE'].unique())\n \n print 'In', str(year) + ',', num_missing, 'plants missing some data out of', total", "Emissions and gen not captured by facilities\nSubtract these from the top-line EIA values to get the amount not captured at facilities in each month. EIA natural gas fuel consumption does not include BFG or OG.\nConsolidate facility generation, fuel use, and CO2 emissions", "eia_facility['fuel'].unique()\n\n# OG and BFG are included in Other because I've included OOG in Other below\n# Pet liquids and pet coke are included here because they line up with how the state-level\n# EIA data are reported\nfacility_fuel_cats = {'COW' : ['SUB','BIT','LIG', 'WC','SC','RC','SGC'], \n 'NG' : ['NG'],\n 'PEL' : ['DFO', 'RFO', 'KER', 'JF', 'PG', 'WO', 'SGP'],\n 'PC' : ['PC'],\n 'HYC' : ['WAT'],\n 'HPS' : [],\n 'GEO' : ['GEO'],\n 'NUC' : ['NUC'],\n 'OOG' : ['BFG', 'OG', 'LFG'],\n 'OTH' : ['OTH', 'MSN', 'MSW', 'PUR', 'TDF', 'WH'],\n 'SUN' : ['SUN'],\n 'DPV' : [],\n 'WAS' : ['OBL', 'OBS', 'OBG', 'MSB', 'SLW'],\n 'WND' : ['WND'],\n 'WWW' : ['WDL', 'WDS', 'AB', 'BLQ']\n }", "Create a new df that groups the facility data into more general fuel types that match up with the EIA generation and fuel use totals.", "eia_facility_fuel = eia_facility.copy()\nfor key in 
facility_fuel_cats.keys():\n eia_facility_fuel.loc[eia_facility_fuel['fuel'].isin(facility_fuel_cats[key]),'type'] = key\neia_facility_fuel = eia_facility_fuel.groupby(['type', 'year', 'month']).sum()\n# eia_facility_fuel.reset_index(inplace=True)\n\neia_facility_fuel.head()", "Extra generation and fuel use", "eia_total_monthly.head()\n\niterables = [eia_total_monthly.index.levels[0], range(2001, 2017), range(1, 13)]\nindex = pd.MultiIndex.from_product(iterables=iterables, names=['type', 'year', 'month'])\neia_extra = pd.DataFrame(index=index, columns=['total fuel (mmbtu)', 'generation (MWh)',\n 'elec fuel (mmbtu)'])\n\nidx = pd.IndexSlice\n\nuse_columns=['total fuel (mmbtu)', 'generation (MWh)',\n 'elec fuel (mmbtu)']\neia_extra = (eia_total_monthly.loc[idx[:,:,:], use_columns] - \n eia_facility_fuel.loc[idx[:,:,:], use_columns])\n\n# I have lumped hydro pumped storage in with conventional hydro in the facility data.\n# Because of this, I need to add HPS rows so that the totals will add up correctly.\n# Also need to add DPV because it won't show up otherwise\neia_extra.loc[idx[['HPS', 'DPV'],:,:], use_columns] = eia_total_monthly.loc[idx[['HPS', 'DPV'],:,:], use_columns]\n# eia_extra = eia_extra.loc[idx[:, 2003:, :],:]\n\neia_extra.head()\n\neia_extra.loc[idx['DPV',:,:]]", "Calculate extra electric fuel CO2 emissions", "path = os.path.join('Final emission factors.csv')\nef = pd.read_csv(path, index_col=0)", "We need to approximate some of the emission factors because the state-level EIA data is only available in the bulk download at an aggregated level. 
Natural gas usually makes up the bulk of this extra electric generation/fuel use (consumption not reported by facilities, estimated by EIA), and it is still a single fuel here.", "fuel_factors = {'NG' : ef.loc['NG', 'Fossil Factor'],\n 'PEL': ef.loc[['DFO', 'RFO'], 'Fossil Factor'].mean(),\n 'PC' : ef.loc['PC', 'Fossil Factor'], \n 'COW' : ef.loc[['BIT', 'SUB'], 'Fossil Factor'].mean(),\n 'OOG' : ef.loc['OG', 'Fossil Factor']}\n\n# Start with 0 emissions in all rows\n# For fuels where we have an emission factor, replace the 0 with the calculated value\neia_extra['all fuel CO2 (kg)'] = 0\neia_extra['elec fuel CO2 (kg)'] = 0\nfor fuel in fuel_factors.keys():\n eia_extra.loc[idx[fuel,:,:],'all fuel CO2 (kg)'] = \\\n eia_extra.loc[idx[fuel,:,:],'total fuel (mmbtu)'] * fuel_factors[fuel]\n \n eia_extra.loc[idx[fuel,:,:],'elec fuel CO2 (kg)'] = \\\n eia_extra.loc[idx[fuel,:,:],'elec fuel (mmbtu)'] * fuel_factors[fuel]\n \n# eia_extra.reset_index(inplace=True)\n# add_quarter(eia_extra)\n\neia_extra.loc[idx['NG',:,:],].tail()", "Add EPA facility-level emissions back to the EIA facility df, use EIA emissions where EPA don't exist, add extra EIA emissions for state-level data\nThe dataframes start at a facility level. 
Extra EIA emissions for estimated state-level data are added after they are aggregated by year/month in the \"Monthly Index\" section below.", "epa_cols = ['ORISPL_CODE', 'YEAR', 'MONTH', 'adj CO2 (kg)']\nfinal_co2_gen = eia_facility_grouped.merge(epa_adj.loc[:,epa_cols], left_on=['plant id', 'year', 'month'], \n right_on=['ORISPL_CODE', 'YEAR', 'MONTH'], how='left')\nfinal_co2_gen.drop(['ORISPL_CODE', 'YEAR', 'MONTH'], axis=1, inplace=True)\nfinal_co2_gen['final CO2 (kg)'] = final_co2_gen['adj CO2 (kg)']\nfinal_co2_gen.loc[final_co2_gen['final CO2 (kg)'].isnull(), 'final CO2 (kg)'] = final_co2_gen.loc[final_co2_gen['final CO2 (kg)'].isnull(), 'elec fuel fossil CO2 (kg)']\nadd_quarter(final_co2_gen)\n\nfinal_co2_gen.head()", "Final index values\nStart with some helper functions to convert units and calculate % change from 2005 annual value", "def g2lb(df):\n \"\"\"\n Convert g/kWh to lb/MWh and add a column to the df\n \"\"\"\n kg2lb = 2.2046\n df['index (lb/MWh)'] = df['index (g/kWh)'] * kg2lb\n \ndef change_since_2005(df):\n \"\"\"\n Calculate the % difference from 2005 and add as a column in the df\n \"\"\"\n # first calculate the index in 2005 \n index_2005 = ((df.loc[df['year']==2005,'index (g/kWh)'] * \n df.loc[df['year']==2005,'generation (MWh)']) / \n df.loc[df['year']==2005,'generation (MWh)'].sum()).sum()\n \n # Calculated index value in 2005 is 599.8484560355034\n # If the value above is different throw an error\n if (index_2005 > 601) or (index_2005 < 599.5):\n raise ValueError('Calculated 2005 index value', index_2005, \n 'is outside expected range. 
Expected value is 599.848')\n if type(index_2005) != float:\n raise TypeError('index_2005 is', type(index_2005), 'rather than a float.')\n \n df['change since 2005'] = (df['index (g/kWh)'] - index_2005) / index_2005", "Monthly Index\nAdding generation and emissions not captured in the facility-level data", "monthly_index = final_co2_gen.groupby(['year', 'month'])['generation (MWh)', 'final CO2 (kg)'].sum()\nmonthly_index.reset_index(inplace=True)\n\n# Add extra generation and emissions not captured by facility-level data\nmonthly_index.loc[:,'final CO2 (kg)'] += eia_extra.reset_index().groupby(['year', 'month'])['elec fuel CO2 (kg)'].sum().values\nmonthly_index.loc[:,'generation (MWh)'] += eia_extra.reset_index().groupby(['year', 'month'])['generation (MWh)'].sum().values\nadd_quarter(monthly_index)\nmonthly_index['index (g/kWh)'] = monthly_index.loc[:, 'final CO2 (kg)'] / monthly_index.loc[:, 'generation (MWh)']\n\nchange_since_2005(monthly_index)\ng2lb(monthly_index)\nmonthly_index.dropna(inplace=True)\n\nmonthly_index.tail()\n\npath = os.path.join('Data for plots', 'Monthly index.csv')\nmonthly_index.to_csv(path, index=False)", "Quarterly Index\nBuilt up from the monthly index", "quarterly_index = monthly_index.groupby(['year', 'quarter'])['generation (MWh)', 'final CO2 (kg)'].sum()\nquarterly_index.reset_index(inplace=True)\nquarterly_index['index (g/kWh)'] = quarterly_index.loc[:, 'final CO2 (kg)'] / quarterly_index.loc[:, 'generation (MWh)']\nquarterly_index['year_quarter'] = quarterly_index['year'].astype(str) + ' Q' + quarterly_index['quarter'].astype(str)\nchange_since_2005(quarterly_index)\ng2lb(quarterly_index)\n\nquarterly_index.tail()\n\npath = os.path.join('Data for plots', 'Quarterly index.csv')\nquarterly_index.to_csv(path, index=False)", "Annual Index", "annual_index = quarterly_index.groupby('year')['generation (MWh)', 'final CO2 (kg)'].sum()\nannual_index.reset_index(inplace=True)\n\nannual_index['index (g/kWh)'] = annual_index.loc[:, 'final CO2 
(kg)'] / annual_index.loc[:, 'generation (MWh)']\n\nchange_since_2005(annual_index)\ng2lb(annual_index)\n\nannual_index.tail()\n\npath = os.path.join('Data for plots', 'Annual index.csv')\nannual_index.to_csv(path, index=False)", "Export to Excel file", "'US POWER SECTOR CO2 EMISSIONS INTENSITY'.title()\n\npath = os.path.join('..', 'Calculated values', 'US Power Sector CO2 Emissions Intensity.xlsx')\nwriter = pd.ExcelWriter(path)\n\nmonthly_index.to_excel(writer, sheet_name='Monthly', index=False)\nquarterly_index.to_excel(writer, sheet_name='Quarterly', index=False)\nannual_index.to_excel(writer, sheet_name='Annual', index=False)\nwriter.save()", "Generation by fuel", "fuel_cats = {'Coal' : [u'COW'], \n 'Natural Gas' : [u'NG'],\n 'Nuclear' : ['NUC'],\n 'Renewables' : [u'GEO', u'HYC', u'SUN', 'DPV', \n u'WAS', u'WND', u'WWW'],\n 'Other' : [u'OOG', u'PC', u'PEL', u'OTH', u'HPS']\n }\nkeep_types = [u'WWW', u'WND', u'WAS', u'SUN', 'DPV', u'NUC', u'NG',\n u'PEL', u'PC', u'OTH', u'COW', u'OOG', u'HPS', u'HYC', u'GEO']\n\neia_gen_monthly = eia_total.loc[eia_total['type'].isin(keep_types)].groupby(['type', 'year', 'month']).sum()\neia_gen_monthly.reset_index(inplace=True)\neia_gen_monthly.drop(['end', 'sector', 'start'], inplace=True, axis=1)\n\nfor key, values in fuel_cats.iteritems():\n eia_gen_monthly.loc[eia_gen_monthly['type'].isin(values),'fuel category'] = key\n\neia_gen_monthly = eia_gen_monthly.groupby(['fuel category', 'year', 'month']).sum()\neia_gen_monthly.reset_index(inplace=True) \n\nadd_quarter(eia_gen_monthly)\n\neia_gen_quarterly = eia_gen_monthly.groupby(['fuel category', 'year', 'quarter']).sum()\neia_gen_quarterly.reset_index(inplace=True)\neia_gen_quarterly['year_quarter'] = (eia_gen_quarterly['year'].astype(str) + \n ' Q' + eia_gen_quarterly['quarter'].astype(str))\neia_gen_quarterly.drop('month', axis=1, inplace=True)\n\neia_gen_annual = eia_gen_monthly.groupby(['fuel category', 
'year']).sum()\neia_gen_annual.reset_index(inplace=True)\neia_gen_annual.drop(['month', 'quarter'], axis=1, inplace=True)", "A function to estimate the emissions intensity of each fuel over time, making sure that they add up to the total emissions intensity.", "def generation_index(gen_df, index_df, group_by='year'):\n \"\"\"\n Calculate the emissions intensity of each fuel in each time period. Use the\n adjusted total emissions from the index dataframe to ensure that the weighted\n sum of fuel emission intensities will equal the total index value.\n \"\"\"\n final_adj_co2 = index_df.loc[:,'final CO2 (kg)'].copy()\n \n calc_total_co2 = gen_df.groupby(group_by)['elec fuel CO2 (kg)'].sum().values\n# calc_total_co2 = calc_total_co2.reset_index()\n \n \n for fuel in gen_df['fuel category'].unique():\n gen_df.loc[gen_df['fuel category']==fuel, 'adjusted CO2 (kg)'] = (gen_df.loc[gen_df['fuel category']==fuel, \n 'elec fuel CO2 (kg)'] / \n calc_total_co2 * \n final_adj_co2.values)\n \n gen_df['adjusted index (g/kWh)'] = gen_df['adjusted CO2 (kg)'] / gen_df['generation (MWh)']\n gen_df['adjusted index (lb/MWh)'] = gen_df['adjusted index (g/kWh)'] * 2.2046\n \n \n ", "Apply the function above to each generation dataframe", "generation_index(eia_gen_annual, annual_index, 'year')\n\ngeneration_index(eia_gen_monthly, monthly_index, ['year', 'month'])\n\ngeneration_index(eia_gen_quarterly, quarterly_index, 'year_quarter')\n\neia_gen_annual.head()", "Export files", "path = os.path.join('Data for plots', 'Monthly generation.csv')\neia_gen_monthly.to_csv(path, index=False)\n\npath = os.path.join('Data for plots', 'Quarterly generation.csv')\neia_gen_quarterly.to_csv(path, index=False)\n\npath = os.path.join('Data for plots', 'Annual generation.csv')\neia_gen_annual.to_csv(path, index=False)", "Export to Excel file", "path = os.path.join('..', 'Calculated values', 'US Generation By Fuel Type.xlsx')\nwriter = pd.ExcelWriter(path, 
engine='xlsxwriter')\n\neia_gen_monthly.to_excel(writer, sheet_name='Monthly', index=False)\n\neia_gen_quarterly.to_excel(writer, sheet_name='Quarterly', index=False)\n\neia_gen_annual.to_excel(writer, sheet_name='Annual', index=False)\nwriter.save()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/test-institute-3/cmip6/models/sandbox-2/landice.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Landice\nMIP Era: CMIP6\nInstitute: TEST-INSTITUTE-3\nSource ID: SANDBOX-2\nTopic: Landice\nSub-Topics: Glaciers, Ice. \nProperties: 30 (21 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:46\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'test-institute-3', 'sandbox-2', 'landice')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Software Properties\n3. Grid\n4. Glaciers\n5. Ice\n6. Ice --&gt; Mass Balance\n7. Ice --&gt; Mass Balance --&gt; Basal\n8. Ice --&gt; Mass Balance --&gt; Frontal\n9. Ice --&gt; Dynamics \n1. Key Properties\nLand ice key properties\n1.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of land surface model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of land surface model code", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.landice.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Ice Albedo\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify how ice albedo is modelled", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.ice_albedo') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"function of ice age\" \n# \"function of ice density\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Atmospheric Coupling Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhich variables are passed between the atmosphere and ice (e.g. orography, ice mass)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.5. Oceanic Coupling Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhich variables are passed between the ocean and ice", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.6. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich variables are prognostically calculated in the ice model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice velocity\" \n# \"ice thickness\" \n# \"ice temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2. 
Key Properties --&gt; Software Properties\nSoftware properties of land ice code\n2.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Grid\nLand ice grid\n3.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the grid in the land ice scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.2. Adaptive Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs an adaptive grid being used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.3. 

Base Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe base resolution (in metres), before any adaption", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.base_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.4. Resolution Limit\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf an adaptive grid is being used, what is the limit of the resolution (in metres)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.resolution_limit') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.5. Projection\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe projection of the land ice grid (e.g. albers_equal_area)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.projection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Glaciers\nLand ice glaciers\n4.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of glaciers in the land ice scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of glaciers, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. 
Dynamic Areal Extent\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDoes the model include a dynamic glacial extent?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "5. Ice\nIce sheet and ice shelf\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the ice sheet and ice shelf in the land ice scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Grounding Line Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.grounding_line_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grounding line prescribed\" \n# \"flux prescribed (Schoof)\" \n# \"fixed grid size\" \n# \"moving grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "5.3. Ice Sheet\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre ice sheets simulated?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.ice_sheet') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "5.4. Ice Shelf\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre ice shelves simulated?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.landice.ice.ice_shelf') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6. Ice --&gt; Mass Balance\nDescription of the surface mass balance treatment\n6.1. Surface Mass Balance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Ice --&gt; Mass Balance --&gt; Basal\nDescription of basal melting\n7.1. Bedrock\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of basal melting over bedrock", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Ocean\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of basal melting over the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Ice --&gt; Mass Balance --&gt; Frontal\nDescription of calving/melting from the ice shelf front\n8.1. Calving\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of calving from the front of the ice shelf", "# PROPERTY ID - DO NOT EDIT ! 

\nDOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Melting\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of melting from the front of the ice shelf", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Ice --&gt; Dynamics\n**\n9.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of ice sheet and ice shelf dynamics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Approximation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nApproximation type used in modelling ice dynamics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.approximation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SIA\" \n# \"SAA\" \n# \"full stokes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.3. Adaptive Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there an adaptive time scheme for the ice scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "9.4. Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep (in seconds) of the ice scheme. 

If the timestep is adaptive, then state a representative timestep.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
freininghaus/adventofcode
2016/day07-python.ipynb
mit
[ "Day 7: Internet Protocol Version 7", "with open(\"input/day7.txt\", \"r\") as f:\n inputLines = tuple(line.strip() for line in f)\n\nimport re", "Part 1: ABBA pattern in address, but not in hypernet sequences", "def isABBA(text):\n # Use a negative lookahead assertion to avoid matching four equal characters.\n return re.search(r\"(.)(?!\\1)(.)\\2\\1\", text) is not None\n\nassert isABBA(\"abba\")\nassert isABBA(\"xabba\")\nassert not isABBA(\"aaaa\")\nassert isABBA(\"abcoxxoxyz\")\nassert isABBA(\"aabba\")\nassert isABBA(\"aaabba\")\nassert isABBA(\"aaaabba\")\n\ndef ipAddressSequences(ipAddress):\n # We use a pattern for the hypernet sequences for splitting.\n # Moreover, we capture the letters in the hypernet sequences, such that\n # normal and hypernet sequences will be alternating in the result.\n sequences = re.split(r\"\\[([^\\]]+)\\]\", ipAddress)\n normalSequences = tuple(sequences[::2])\n hypernetSequences = tuple(sequences[1::2])\n return normalSequences, hypernetSequences\n \nassert ipAddressSequences(\"abba[mnop]qrst\") == ((\"abba\", \"qrst\"), (\"mnop\",))\nassert ipAddressSequences(\"abcd[bddb]xyyx\") == ((\"abcd\", \"xyyx\"), (\"bddb\",))\nassert ipAddressSequences(\"aaaa[qwer]tyui\") == ((\"aaaa\", \"tyui\"), (\"qwer\",))\nassert ipAddressSequences(\"ioxxoj[asdfgh]zxcvbn\") == ((\"ioxxoj\", \"zxcvbn\"), (\"asdfgh\",))\nassert ipAddressSequences(\"a[b]\") == ((\"a\", \"\"), (\"b\",))\nassert ipAddressSequences(\"[b]a\") == ((\"\", \"a\"), (\"b\",))\nassert ipAddressSequences(\"[b]\") == ((\"\", \"\"), (\"b\",))\n\ndef supportsTLS(ipAddress):\n normal, hypernet = ipAddressSequences(ipAddress)\n return any(isABBA(s) for s in normal) and not any(isABBA(s) for s in hypernet)\n \nassert supportsTLS(\"abba[mnop]qrst\")\nassert not supportsTLS(\"abcd[bddb]xyyx\")\nassert not supportsTLS(\"aaaa[qwer]tyui\")\nassert supportsTLS(\"ioxxoj[asdfgh]zxcvbn\")\n\nsum(1 for ipAddress in inputLines if supportsTLS(ipAddress))", "Part 2: ABA and corresponding BAB 
pattern in normal and hypernet parts", "def supportsSSL(ipAddress):\n # The idea is that the ABA and the BAB patterns are separated by an odd number of brackets.\n return re.search(# first the ABA pattern\n r\"([a-z])(?!\\1)([a-z])\\1\"\n # then an arbitrary number of letters\n + r\"[a-z]*\"\n # then an opening or closing bracket\n + r\"[\\[\\]]\"\n # then any number of blocks which contain letters, a bracket, more letters, and another bracket\n + r\"([a-z]*[\\[\\]][a-z]*[\\[\\]]]*)*\" \n # then an arbitrary number of letters\n + r\"[^\\[\\]]*\"\n # finally, the BAB pattern\n + r\"\\2\\1\\2\", \n ipAddress) is not None\n\nassert supportsSSL(\"aba[bab]xyz\")\nassert not supportsSSL(\"xyx[xyx]xyx\")\nassert supportsSSL(\"aaa[kek]eke\")\nassert supportsSSL(\"zazbz[bzb]cdb\")\n\nsum(1 for ipAddress in inputLines if supportsSSL(ipAddress))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
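The `isABBA` check in the day-7 notebook above hinges on a regex negative lookahead. A minimal standalone sketch of just that trick (a fresh re-implementation for illustration, not the notebook's exact helper):

```python
import re

def is_abba(text):
    # (.) captures one character; (?!\1) forbids the next character from
    # being the same one, so "aaaa" is rejected while "abba" still matches.
    return re.search(r"(.)(?!\1)(.)\2\1", text) is not None

print(is_abba("abba"))    # True
print(is_abba("aaaa"))    # False
print(is_abba("ioxxoj"))  # True ("oxxo")
```

Without the `(?!\1)` lookahead, the plain pattern `(.)(.)\2\1` would also match four identical characters such as `aaaa`.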
tammojan/lofarscripts
dipole_projection.ipynb
gpl-2.0
[ "from lofarantpos.db import LofarAntennaDatabase\nimport lofarantpos.geo as lofargeo\nimport astropy.units as u\nimport matplotlib.pyplot as plt\nfrom astropy.time import Time\nfrom astropy.coordinates import ITRS, EarthLocation, GCRS, SkyCoord, AltAz\nimport numpy as np\nfrom numpy.linalg import norm\nfrom typing import Union\n\ndb = LofarAntennaDatabase()", "Test / fiddle around", "t0 = Time(\"2019-01-01T01:00\")\nstationname = \"LV614LBA\"\n\ndef station_to_itrs(station_name):\n \"\"\"Returns an astropy ITRS coordinate for a given station (e.g. 'CS001LBA')\"\"\"\n x,y,z = db.phase_centres[station_name]\n return ITRS(x=x*u.m, y=y*u.m, z=z*u.m)\n\ndef station_to_earthlocation(station_name):\n \"\"\"Returns an astropy EarthLocation for a given station (e.g. 'CS001LBA')\"\"\"\n x,y,z = db.phase_centres[station_name]\n return EarthLocation(x=x*u.m, y=y*u.m, z=z*u.m)", "We will test at zenith:", "test_coord_skycoord = AltAz(location=station_to_earthlocation(stationname),\n alt=89.0*u.deg, az=0*u.deg,\n obstime=t0).transform_to(GCRS)\n\ntest_coord_gcrs = test_coord_skycoord.cartesian.get_xyz().value", "Let's make a matrix to go from sky (GCRS) to ITRS:", "def gcrs_to_itrs(t):\n return SkyCoord(x=np.array([1,0,0]),\n y=np.array([0,1,0]),\n z=np.array([0,0,1]),\n representation_type='cartesian'\n ).transform_to(ITRS(obstime=t)).cartesian.get_xyz().value", "Test that this matrix does the same thing as an astropy transformation:", "m31_skycoord = SkyCoord.from_name(\"M31\")\n\nm31_gcrs = m31_skycoord.cartesian.get_xyz().value\n\ngcrs_to_itrs(t0).dot(m31_gcrs)\n\nm31_skycoord.transform_to(ITRS(obstime=t0))", "Yay.\nTo test whether we computed zenith, let's test whether the ITRS-vector of zenith corresponds to the ITRS-vector that points towards the station phase center.", "station_itrs = station_to_itrs(stationname).cartesian.get_xyz().value\nstation_itrs /= norm(station_itrs)\n\nnp.rad2deg(np.arccos((gcrs_to_itrs(t0)@test_coord_gcrs)@station_itrs))", "Ok, somehow there 
is an error of about one degree. That's probably because the station plane is not exactly tangent to a spherical earth. Let's ignore for now.\nDefine the dipoles, as unit vectors in the PQR system, and convert them to ETRS.", "a = np.sqrt(.5)\ndipoles_pqr = np.array([[ a, a, 0],\n [ a, -a, 0],\n [-a, -a, 0],\n [-a, a, 0]]).T", "Note that we assume here that ITRS = ETRS, which is true to the meter level. (We could use Michiel's etrsitrs package here, but I think we've already got enough coordinate frames.)\nSo, now we have three coordinate frames:\n * GCRS (sky)\n * ITRS (Earth)\n * pqr (station)\nWe have two matrices to convert between these:\n * pqr_to_etrs\n * gcrs_to_itrs\nTo convert the other way around, these matrices need to be inverted, which is the same as transposing them because they are orthonormal.", "db.pqr_to_etrs[stationname]@(db.pqr_to_etrs[stationname].T)", "Project the dipoles to the plane orthogonal to the test_coord. Do this in ITRS frame.", "test_coord_itrs = gcrs_to_itrs(t0)@test_coord_gcrs\n\ndipoles_itrs = db.pqr_to_etrs[stationname]@dipoles_pqr\n\ndipoles_projected_itrs = dipoles_itrs - \\\n test_coord_itrs.dot(dipoles_itrs) * test_coord_itrs[:, np.newaxis]", "Check that the projected dipoles are indeed orthogonal to the test coordinate:", "dipoles_projected_itrs.T@test_coord_itrs", "Yay.", "dipoles_projected_gcrs = (gcrs_to_itrs(t0).T)@dipoles_projected_itrs", "These are the dipoles projected to GCRS. All we need to do now is plot them on the plane orthogonal to the vector test_coord_gcrs.\nUnfortunately I don't know how to do this properly. Idea for now: rotate along the z-axis to set RA to zero, then rotate along the y-axis to set DEC to zero. 
Then the vectors should point straight up.", "def rot_ra(phi):\n \"\"\"Rotates back along the z-axis with an angle phi\"\"\"\n return np.array([[np.cos(-phi), - np.sin(-phi), 0],\n [np.sin(-phi), np.cos(-phi), 0],\n [0, 0, 1]])", "Let's check this matrix by rotating test_coord back to ra=0:", "test_coord_gcrs_rotra = rot_ra(test_coord_skycoord.ra)@test_coord_gcrs\n\ntest_coord_skycoord_rotra = SkyCoord(x=test_coord_gcrs_rotra[0],\n y=test_coord_gcrs_rotra[1],\n z=test_coord_gcrs_rotra[2],\n representation_type='cartesian',frame=GCRS).transform_to(GCRS)\n\ntest_coord_skycoord.ra.degree\n\ntest_coord_skycoord_rotra.ra.degree", "Yay. Check that the declination does not change:", "test_coord_skycoord.dec.degree\n\ntest_coord_skycoord_rotra.dec.degree", "Yay.\nNow rotate the declination.", "def rot_dec(theta):\n \"\"\"Rotates back along the y-axis with an angle theta\"\"\"\n return np.array([[np.cos(-theta), 0, - np.sin(-theta)],\n [0, 1, 0],\n [np.sin(-theta), 0, np.cos(-theta)]])", "Test that this works (note we need to rotate back the RA first):", "test_coord_gcrs_rotdec = rot_dec(test_coord_skycoord_rotra.dec)@test_coord_gcrs_rotra\n\ntest_coord_skycoord_rotdec = SkyCoord(x=test_coord_gcrs_rotdec[0],\n y=test_coord_gcrs_rotdec[1],\n z=test_coord_gcrs_rotdec[2],\n representation_type='cartesian',frame=GCRS).transform_to(GCRS)\n\ntest_coord_skycoord_rotdec.dec.degree\n\ntest_coord_skycoord_rotra.ra.degree\n\ntest_coord_skycoord_rotdec.ra.degree", "Yay.\nNow let's introduce another coordinate system, which is the rotated system so that the viewing direction looks straight at the origin (and has the same north as GCRS, because of the order in which the rotations from GCRS are done). 
Let's call this system harry.", "def gcrs_to_harry(pointing_skycoord: SkyCoord):\n    \"\"\"Rotate back so that coord_skycoord is at ra=0, dec=0\"\"\"\n    return rot_dec(pointing_skycoord.dec)@rot_ra(pointing_skycoord.ra)\n\ngcrs_to_harry(test_coord_skycoord)@test_coord_gcrs\n\ngcrs_to_harry(m31_skycoord)@m31_gcrs\n\ndipoles_projected_gcrs\n\ngcrs_to_harry(test_coord_skycoord)@dipoles_projected_gcrs", "Ok, so we didn't need to project the dipoles, since in the harry coordinate frame, projection corresponds to setting the first coordinate to zero.", "dipoles_gcrs = (gcrs_to_itrs(t0).T)@db.pqr_to_etrs[stationname]@dipoles_pqr\n\ngcrs_to_harry(test_coord_skycoord)@dipoles_gcrs\n\ndipoles_harry = gcrs_to_harry(test_coord_skycoord)@dipoles_gcrs", "Ok, let's put all of this in a function:\nReal code", "def station_to_earthlocation(station_name):\n    \"\"\"Returns an astropy EarthLocation for a given station (e.g. 'CS001LBA')\"\"\"\n    x,y,z = db.phase_centres[station_name]\n    return EarthLocation(x=x*u.m, y=y*u.m, z=z*u.m)\n\ndef gcrs_to_itrs(t):\n    return SkyCoord(x=np.array([1,0,0]),\n                    y=np.array([0,1,0]),\n                    z=np.array([0,0,1]),\n                    representation_type='cartesian'\n                   ).transform_to(ITRS(obstime=t)).cartesian.get_xyz().value\n\ndef rot_ra(phi):\n    \"\"\"Rotates back along the z-axis with an angle phi\"\"\"\n    return np.array([[np.cos(-phi), - np.sin(-phi), 0],\n                     [np.sin(-phi),   np.cos(-phi), 0],\n                     [0, 0, 1]])\n\n\n\ndef rot_dec(theta):\n    \"\"\"Rotates back along the y-axis with an angle theta\"\"\"\n    return np.array([[np.cos(-theta), 0, - np.sin(-theta)],\n                     [0, 1, 0],\n                     [np.sin(-theta), 0, np.cos(-theta)]])\n\ndef gcrs_to_harry(pointing_skycoord: SkyCoord):\n    \"\"\"Rotate back so that coord_skycoord is at ra=0, dec=0\"\"\"\n    return rot_dec(pointing_skycoord.dec)@rot_ra(pointing_skycoord.ra)\n\ndef get_dipoles_harry(pointing_skycoord, time, stationname):\n    a = np.sqrt(.5)\n    dipoles_pqr = np.array([[ a,  a, 0],\n                            [ a, -a, 0],\n                            [-a, -a, 0],\n                            [-a,  a, 0]]).T\n    dipoles_gcrs = 

(gcrs_to_itrs(time).T) @ db.pqr_to_etrs[stationname] @ dipoles_pqr\n dipoles_harry = gcrs_to_harry(pointing_skycoord) @ dipoles_gcrs\n return dipoles_harry\n\ndipoles_harry = get_dipoles_harry(SkyCoord.from_name(\"M31\"), Time.now(), \"IE613LBA\")\n\ndef plot_dipoles_harry(pointing_str: str, timedelta: float, stationname: str):\n \"\"\"Plot projected dipoles\"\"\"\n fig, ax = plt.subplots(1)\n \n t0 = Time(\"2019-01-01T01:00\")\n \n if pointing_str == \"zenith\":\n pointing = AltAz(location=station_to_earthlocation(stationname),\n alt=89.0*u.deg, az=0*u.deg,\n obstime=t0).transform_to(GCRS)\n else:\n pointing = SkyCoord.from_name(pointing_str)\n\n time = t0 + timedelta * u.day\n \n dipoles_harry = get_dipoles_harry(pointing,\n time,\n stationname)\n \n x, y = dipoles_harry[1:3]\n ax.grid()\n ax.arrow(x[2], y[2], x[0] - x[2], y[0] - y[2], head_width=0.1, color='r',\n length_includes_head=True)\n ax.arrow(x[1], y[1], x[3] - x[1], y[3] - y[1], head_width=0.1, color='b',\n length_includes_head=True)\n ax.set_aspect(1)\n ax.set_xlim(-1, 1)\n ax.set_ylim(-1, 1)\n ax.set_xticks([-1,0,1])\n ax.set_yticks([-1,0,1])\n ax.tick_params(axis=\"x\", bottom=False, top=False, labelbottom=False)\n ax.tick_params(axis=\"y\", left=False, right=False, labelleft=False)\n ax.set_xlabel(r\"$\\Delta \\alpha$\", fontsize=16)\n ax.set_ylabel(r\"$\\Delta \\delta$\", fontsize=16)\n ax.set_title(f\"Dipoles of {stationname} as seen from \\n{pointing_str} at {time.iso[:16]}\")\n _ = ax.plot(0, 0, 'kx');\n return fig\n\nplot_dipoles_harry(\"M31\", 0, \"IE613LBA\");\n\nplot_dipoles_harry(\"zenith\", 0, \"IE613LBA\");", "Try from the superterp (where up should be North):", "plot_dipoles_harry(\"zenith\", 0, \"CS002LBA\");\n\nplot_dipoles_harry(\"zenith\", 0, \"LV614LBA\");", "Now make it interactive:", "from ipywidgets import interact, interactive, fixed, interact_manual\n\ninteractive_plot = interactive(plot_dipoles_harry,\n stationname=list(db.phase_centres.keys()),\n timedelta=(0., 1., 0.05),\n 
pointing_str=[\"M31\", \"Cas A\", \"zenith\"])\noutput = interactive_plot.children[-1]\noutput.layout.height = '350px'\ninteractive_plot\n\nfor framenum, time_delta in enumerate(np.linspace(0, 1, 120)):\n f = plot_dipoles_harry(\"zenith\", time_delta, \"CS002LBA\");\n _ = f.savefig(f\"frame{framenum:03d}.png\");" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
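The `rot_ra`/`rot_dec` helpers in the dipole-projection notebook above are ordinary rotations about the z- and y-axes, applied with a negated angle so that they rotate *back*. A self-contained NumPy sketch of that convention (the angle `0.7` is an arbitrary test value; no astropy is needed here):

```python
import numpy as np

def rot_z_back(phi):
    # rotate by -phi about the z-axis, matching the notebook's rot_ra convention
    return np.array([[np.cos(-phi), -np.sin(-phi), 0.0],
                     [np.sin(-phi),  np.cos(-phi), 0.0],
                     [0.0, 0.0, 1.0]])

phi = 0.7                                       # arbitrary test angle
v = np.array([np.cos(phi), np.sin(phi), 0.0])   # unit vector at azimuthal angle phi
w = rot_z_back(phi) @ v
print(np.allclose(w, [1.0, 0.0, 0.0]))          # True: the vector now points along x
```

Because the matrix is orthonormal, its transpose is its inverse, which is why the notebook can invert `gcrs_to_itrs` and `pqr_to_etrs` by transposing.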
mirjalil/pyclust
examples/bisecting_kmeans.ipynb
gpl-2.0
[ "import numpy as np\nimport scipy\nimport pandas\nimport treelib\nimport pyclust\n\nimport matplotlib\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\ndf = pandas.read_table('data/data_k5.csv', sep=',')\n\ndf.head(3)\n\ndef plot_scatter(X, labels=None, title=\"Scatter Plot\"):\n \n labels = np.zeros(shape=X.shape[0], dtype=int) if labels is None else labels\n colors = ['b', 'r', 'g', 'm', 'y']\n col_dict = {}\n i = 0\n for lab in np.unique(labels):\n col_dict[lab] = colors[i]\n i += 1 \n \n fig1 = plt.figure(1, figsize=(8,6))\n ax = fig1.add_subplot(1, 1, 1)\n\n for i in np.unique(labels):\n indx = np.where(labels == i)[0]\n plt.scatter(X[indx,0], X[indx,1], color=col_dict[i], marker='o', s=100, alpha=0.5)\n\n plt.setp(ax.get_xticklabels(), rotation='horizontal', fontsize=16)\n plt.setp(ax.get_yticklabels(), rotation='vertical', fontsize=16)\n\n plt.xlabel('$x_1$', size=20)\n plt.ylabel('$x_2$', size=20)\n plt.title(title, size=20)\n\n plt.show()\n \n## test plot original data\nplot_scatter(df.iloc[:,0:2].values, labels=df.iloc[:,2].values, title=\"Scatter Plot: Original Labels\")", "KMeans Clustering\n K = 5", "km = pyclust.KMeans(n_clusters=5)\n\nkm.fit(df.iloc[:,0:2].values)\n\nprint(km.centers_)\n\nplot_scatter(df.iloc[:,0:2].values, labels=km.labels_, title=\"Scatter Plot: K-Means\")", "Bisecting K-Means", "bkm = pyclust.BisectKMeans(n_clusters=5)\n\nbkm.fit(df.iloc[:,0:2].values)\n\nprint(bkm.labels_)\n\nplot_scatter(df.iloc[:,0:2].values, labels=bkm.labels_, title=\"Scatter Plot: Bisecting K-Means\")\n\nbkm.tree_.show(line_type='ascii')", "Cutting the tree structure\n\nCut the tree to get a clustering with a new number of clusters\nbkm.cut(n_clusters=4)\nIt returns a tuple:\nfirst element being the new cluster memberships\nsecond element is a dictionary for the centroid of each cluster\n\nExample\n```python\nbkm.cut(3)\n(array([4, 4, 2, 2, 4, 2, 2, 2, 2, 2, 2, 3, 2, 2, 2, 4, 2, 2, 2, 2, 4, 2, 3,\n 3, 2, 3, 4, 3, 4, 4, 3, 4, 3, 4, 2, 4, 2, 4, 3, 2, 3, 2, 2, 4, 

3, 2,\n 2, 4, 2, 4, 4, 2, 2, 4, 3, 3, 2, 4, 4, 4, 3, 3, 2, 2, 4, 2, 2, 2, 3,\n 4, 3, 2, 2, 4, 3, 2, 2, 3, 2, 3, 3, 2, 3, 3, 2, 3, 2, 2, 2, 2, 3, 2,\n 2, 2, 2, 3, 4, 4, 2, 3, 2, 4, 2, 2, 2, 2, 2, 3, 4, 4, 4, 2, 2, 3, 4,\n 2, 2, 2, 4, 3]),\n {2: [-8.1686500000000013, 4.1619483333333331],\n 3: [-9.2501724137931021, -2.0435517241379313],\n 4: [8.3429774193548365, -0.30114193548387092]})\n```", "plot_scatter(df.iloc[:,0:2].values, labels=bkm.cut(2)[0], title=\"Scatter Plot: Bisecting K-Means (2)\")\n\nplot_scatter(df.iloc[:,0:2].values, labels=bkm.cut(3)[0], title=\"Scatter Plot: Bisecting K-Means (3)\")\n\nplot_scatter(df.iloc[:,0:2].values, labels=bkm.cut(4)[0], title=\"Scatter Plot: Bisecting K-Means (4)\")" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
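`pyclust.BisectKMeans` is used as a black box in the notebook above. For intuition, here is a tiny NumPy-only sketch of the bisecting idea: repeatedly run 2-means on the currently largest cluster. The extreme-point initialization is a simplification chosen so the sketch is deterministic; the actual library's initialization differs.

```python
import numpy as np

def two_means(X, n_iter=20):
    # plain 2-means; initialize with the two x-extremes for determinism
    centers = np.array([X[X[:, 0].argmin()], X[X[:, 0].argmax()]], dtype=float)
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        lab = d.argmin(axis=1)
        for j in (0, 1):
            if np.any(lab == j):
                centers[j] = X[lab == j].mean(axis=0)
    return lab

def bisect_kmeans(X, n_clusters):
    labels = np.zeros(len(X), dtype=int)
    for new_label in range(1, n_clusters):
        big = np.bincount(labels).argmax()   # always split the largest cluster
        idx = np.where(labels == big)[0]
        labels[idx[two_means(X[idx]) == 1]] = new_label
    return labels

# three well-separated blobs along the x-axis
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.1, size=(30, 2)) for c in ((0, 0), (30, 0), (200, 0))])
labels = bisect_kmeans(X, 3)
print(np.bincount(labels))  # three clusters of 30 points each
```

Note how the split order induces the binary tree that `bkm.tree_.show()` prints, and why `cut()` can recover coarser clusterings from it.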
Alexoner/mooc
cs231n/assignment2/convnet.ipynb
apache-2.0
[ "Train a ConvNet!\nWe now have a generic solver and a bunch of modularized layers. It's time to put it all together, and train a ConvNet to recognize the classes in CIFAR-10. In this notebook we will walk you through training a simple two-layer ConvNet and then set you free to build the best net that you can to perform well on CIFAR-10.\nOpen up the file cs231n/classifiers/convnet.py; you will see that the two_layer_convnet function computes the loss and gradients for a two-layer ConvNet. Note that this function uses the \"sandwich\" layers defined in cs231n/layer_utils.py.", "# As usual, a bit of setup\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pickle\nfrom cs231n.classifier_trainer import ClassifierTrainer\nfrom cs231n.gradient_check import eval_numerical_gradient\nfrom cs231n.classifiers.convnet import *\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading external modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\" returns relative error \"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\nfrom cs231n.data_utils import load_CIFAR10\n\ndef get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):\n \"\"\"\n Load the CIFAR-10 dataset from disk and perform preprocessing to prepare\n it for the two-layer neural net classifier. These are the same steps as\n we used for the SVM, but condensed to a single function. 
\n \"\"\"\n # Load the raw CIFAR-10 data\n cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'\n X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)\n \n # Subsample the data\n mask = range(num_training, num_training + num_validation)\n X_val = X_train[mask]\n y_val = y_train[mask]\n mask = range(num_training)\n X_train = X_train[mask]\n y_train = y_train[mask]\n mask = range(num_test)\n X_test = X_test[mask]\n y_test = y_test[mask]\n\n # Normalize the data: subtract the mean image\n mean_image = np.mean(X_train, axis=0)\n X_train -= mean_image\n X_val -= mean_image\n X_test -= mean_image\n \n # Transpose so that channels come first\n X_train = X_train.transpose(0, 3, 1, 2).copy()\n X_val = X_val.transpose(0, 3, 1, 2).copy()\n X_test = X_test.transpose(0, 3, 1, 2).copy()\n\n return X_train, y_train, X_val, y_val, X_test, y_test\n\n\n# Invoke the above function to get our data.\nX_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()\nprint 'Train data shape: ', X_train.shape\nprint 'Train labels shape: ', y_train.shape\nprint 'Validation data shape: ', X_val.shape\nprint 'Validation labels shape: ', y_val.shape\nprint 'Test data shape: ', X_test.shape\nprint 'Test labels shape: ', y_test.shape", "Sanity check loss\nAfter you build a new network, one of the first things you should do is sanity check the loss. When we use the softmax loss, we expect the loss for random weights (and no regularization) to be about log(C) for C classes. 

When we add regularization this should go up.", "model = init_two_layer_convnet()\n\nX = np.random.randn(100, 3, 32, 32)\ny = np.random.randint(10, size=100)\n\nloss, _ = two_layer_convnet(X, model, y, reg=0)\n\n# Sanity check: Loss should be about log(10) = 2.3026\nprint 'Sanity check loss (no regularization): ', loss\n\n# Sanity check: Loss should go up when you add regularization\nloss, _ = two_layer_convnet(X, model, y, reg=1)\nprint 'Sanity check loss (with regularization): ', loss", "Gradient check\nAfter the loss looks reasonable, you should always use numeric gradient checking to make sure that your backward pass is correct. When you use numeric gradient checking you should use a small amount of artificial data and a small number of neurons at each layer.", "num_inputs = 2\ninput_shape = (3, 16, 16)\nreg = 0.0\nnum_classes = 10\nX = np.random.randn(num_inputs, *input_shape)\ny = np.random.randint(num_classes, size=num_inputs)\n\nmodel = init_two_layer_convnet(num_filters=3, filter_size=3, input_shape=input_shape)\nloss, grads = two_layer_convnet(X, model, y)\nfor param_name in sorted(grads):\n f = lambda _: two_layer_convnet(X, model, y)[0]\n param_grad_num = eval_numerical_gradient(f, model[param_name], verbose=False, h=1e-6)\n e = rel_error(param_grad_num, grads[param_name])\n print '%s max relative error: %e' % (param_name, e)", "Overfit small data\nA nice trick is to train your model with just a few training samples. 

You should be able to overfit small datasets, which will result in very high training accuracy and comparatively low validation accuracy.", "# Use a two-layer ConvNet to overfit 50 training examples.\n\nmodel = init_two_layer_convnet()\ntrainer = ClassifierTrainer()\nbest_model, loss_history, train_acc_history, val_acc_history = trainer.train(\n X_train[:50], y_train[:50], X_val, y_val, model, two_layer_convnet,\n reg=0.001, momentum=0.9, learning_rate=0.0001, batch_size=10, num_epochs=10,\n verbose=True)", "Plotting the loss, training accuracy, and validation accuracy should show clear overfitting:", "plt.subplot(2, 1, 1)\nplt.plot(loss_history)\nplt.xlabel('iteration')\nplt.ylabel('loss')\n\nplt.subplot(2, 1, 2)\nplt.plot(train_acc_history)\nplt.plot(val_acc_history)\nplt.legend(['train', 'val'], loc='upper left')\nplt.xlabel('epoch')\nplt.ylabel('accuracy')\nplt.show()", "Train the net\nOnce the above works, training the net is the next thing to try. You can set the acc_frequency parameter to change the frequency at which the training and validation set accuracies are tested. If your parameters are set properly, you should see the training and validation accuracy start to improve within a hundred iterations, and you should be able to train a reasonable model with just one epoch.\nUsing the parameters below you should be able to get around 50% accuracy on the validation set.", "model = init_two_layer_convnet(filter_size=7)\ntrainer = ClassifierTrainer()\nbest_model, loss_history, train_acc_history, val_acc_history = trainer.train(\n X_train, y_train, X_val, y_val, model, two_layer_convnet,\n reg=0.001, momentum=0.9, learning_rate=0.0001, batch_size=50, num_epochs=1,\n acc_frequency=50, verbose=True)", "Visualize weights\nWe can visualize the convolutional weights from the first layer. 
If everything worked properly, these will usually be edges and blobs of various colors and orientations.", "from cs231n.vis_utils import visualize_grid\n\ngrid = visualize_grid(best_model['W1'].transpose(0, 2, 3, 1))\nplt.imshow(grid.astype('uint8'))", "Experiment!\nExperiment and try to get the best performance that you can on CIFAR-10 using a ConvNet. Here are some ideas to get you started:\nThings you should try:\n\nFilter size: Above we used 7x7; this makes pretty pictures but smaller filters may be more efficient\nNumber of filters: Above we used 32 filters. Do more or fewer do better?\nNetwork depth: The network above has two layers of trainable parameters. Can you do better with a deeper network? You can implement alternative architectures in the file cs231n/classifiers/convnet.py. Some good architectures to try include:\n[conv-relu-pool]xN - conv - relu - [affine]xM - [softmax or SVM]\n[conv-relu-pool]xN - [affine]xM - [softmax or SVM]\n[conv-relu-conv-relu-pool]xN - [affine]xM - [softmax or SVM]\n\n\n\nTips for training\nFor each network architecture that you try, you should tune the learning rate and regularization strength. When doing this there are a couple of important things to keep in mind:\n\nIf the parameters are working well, you should see improvement within a few hundred iterations\nRemember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all.\nOnce you have found some sets of parameters that seem to work, search more finely around these parameters. You may need to train for more epochs.\n\nGoing above and beyond\nIf you are feeling adventurous there are many other features you can implement to try and improve your performance. 
You are not required to implement any of these; however they would be good things to try for extra credit.\n\nAlternative update steps: For the assignment we implemented SGD+momentum and RMSprop; you could try alternatives like AdaGrad or AdaDelta.\nOther forms of regularization such as L1 or Dropout\nAlternative activation functions such as leaky ReLU or maxout\nModel ensembles\nData augmentation\n\nWhat we expect\nAt the very least, you should be able to train a ConvNet that gets at least 65% accuracy on the validation set. This is just a lower bound - if you are careful it should be possible to get accuracies much higher than that! Extra credit points will be awarded for particularly high-scoring models or unique approaches.\nYou should use the space below to experiment and train your network. The final cell in this notebook should contain the training, validation, and test set accuracies for your final trained network. In this notebook you should also write an explanation of what you did, any additional features that you implemented, and any visualizations or graphs that you make in the process of training and evaluating your network.\nHave fun and happy training!", "# TODO: Train a ConvNet to do really well on CIFAR-10!\n\n# sanity check\nmodel = init_fun_convnet()\n\nX = np.random.randn(100, 3, 32, 32)\ny = np.random.randint(10, size=100)\n\nloss, _ = fun_convnet(X, model, y, reg=0)\n\n# Sanity check: Loss should be about log(10) = 2.3026\nprint 'Sanity check loss (no regularization): ', loss\n\n# Sanity check: Loss should go up when you add regularization\nloss, _ = fun_convnet(X, model, y, reg=1)\nprint 'Sanity check loss (with regularization): ', loss\n\n# gradient check\nnum_inputs = 10\ninput_shape = (3, 16, 16)\nreg = 0.0\nnum_classes = 10\nX = np.random.randn(num_inputs, *input_shape)\ny = np.random.randint(num_classes, size=num_inputs)\n\nmodel = init_fun_convnet(num_filters=3, filter_size=3, input_shape=input_shape)\nloss, grads = fun_convnet(X, 
model, y)\nfor param_name in sorted(grads):\n f = lambda _: fun_convnet(X, model, y)[0]\n param_grad_num = eval_numerical_gradient(f, model[param_name], verbose=False, h=1e-6)\n e = rel_error(param_grad_num, grads[param_name])\n print '%s max relative error: %e' % (param_name, e)\n\n# make sure we can overfit\n# Use a two-layer ConvNet to overfit 50 training examples.\nmodel = init_fun_convnet(weight_scale=2e-2, bias_scale=0, num_filters=64, filter_size=3)\ntrainer = ClassifierTrainer()\nbest_model, loss_history, train_acc_history, val_acc_history = trainer.train(\n X_train[:50], y_train[:50], X_val, y_val, model, fun_convnet,\n reg=0.001, momentum=0.9, learning_rate=1e-3, batch_size=10, num_epochs=10,\n verbose=True)\n\n# Plotting the loss, training accuracy, and validation accuracy \n# should show clear overfitting\nplt.subplot(2, 1, 1)\nplt.plot(loss_history)\nplt.xlabel('iteration')\nplt.ylabel('loss')\n\nplt.subplot(2, 1, 2)\nplt.plot(train_acc_history)\nplt.plot(val_acc_history)\nplt.legend(['train', 'val'], loc='upper left')\nplt.xlabel('epoch')\nplt.ylabel('accuracy')\nplt.show()\n\n# weight initialization may affect the training in the following ways:\n# 1. training speed: bad initialization makes training slow\n# 2. training loss may zigzag, much like when learning_rate is too large\nmodel = init_fun_convnet(weight_scale=1.5e-2, bias_scale=0, num_filters=64, filter_size=3)\nbest_model_pkl = 'best_model.pkl'\n# try:\n# with open(best_model_pkl, 'rb') as f:\n# model = pickle.load(f)\n# print(\"Loaded model from pickle file\")\n# except:\n# print(\"model pickle file doesn't exist! 
Initialized one from scratch\")\ntrainer = ClassifierTrainer()\nbest_model, loss_history, train_acc_history, val_acc_history = trainer.train(\n X_train, y_train, X_val, y_val, model, fun_convnet,\n reg=0.01, momentum=0.9, learning_rate=1e-3, learning_rate_decay=0.95,\n batch_size=50, num_epochs=1, # change num_epochs to 5\n acc_frequency=50, verbose=True)\n\n# with open(best_model_pkl, 'wb') as f:\n# pickle.dump(best_model, f)\n\n# plot the training history\nplt.subplot(2, 1, 1)\nplt.plot(loss_history)\nplt.xlabel('iteration')\nplt.ylabel('loss')\n\nplt.subplot(2, 1, 2)\nplt.plot(train_acc_history)\nplt.plot(val_acc_history)\nplt.legend(['train', 'val'], loc='upper left')\nplt.xlabel('epoch')\nplt.ylabel('accuracy')\nplt.show()\n\n\n# visualize the weights\ngrid = visualize_grid(best_model['W1'].transpose(0, 2, 3, 1))\nplt.imshow(grid.astype('uint8'))\n\n# best model accuracy: \n# train: 0.784, validation accuracy: 0.728000, test accuracy: 0.699\n\nwith open(best_model_pkl, 'rb') as f:\n best_model = pickle.load(f)\n\n# print X_val.shape\n# print X_test.shape\n\nscores_test = fun_convnet(X_test.transpose(0, 3, 1, 2) , best_model)\nprint 'Test accuracy: ', np.mean(np.argmax(scores_test, axis=1) == y_test)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
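The gradient checks in the notebook above lean on two helpers, `rel_error` and `eval_numerical_gradient`, imported from the course utilities. A minimal numpy sketch of such a centered-difference checker — the names mirror the cs231n helpers, but this re-implementation and the quadratic test function are illustrative assumptions, not the course code:

```python
import numpy as np

def rel_error(x, y):
    # maximum relative error between two arrays, guarded against division by zero
    return np.max(np.abs(x - y) / np.maximum(1e-8, np.abs(x) + np.abs(y)))

def eval_numerical_gradient(f, x, h=1e-6):
    # centered-difference estimate of df/dx for a scalar-valued f,
    # perturbing one entry of x at a time
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'])
    while not it.finished:
        ix = it.multi_index
        old = x[ix]
        x[ix] = old + h
        fxph = f(x)              # f(x + h) at this coordinate
        x[ix] = old - h
        fxmh = f(x)              # f(x - h)
        x[ix] = old              # restore the entry
        grad[ix] = (fxph - fxmh) / (2 * h)
        it.iternext()
    return grad

# check against the known analytic gradient of f(x) = sum(x**2), i.e. 2x
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3))
num_grad = eval_numerical_gradient(lambda v: np.sum(v ** 2), x)
print(rel_error(num_grad, 2 * x))  # a very small number: the gradients agree
```

This is the same pattern the checking loop above follows: wrap the loss in a lambda, then perturb one parameter tensor entry at a time and compare against the analytic gradient.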
opengeostat/pygslib
pygslib/Ipython_templates/nscore_ttable_raw.ipynb
mit
[ "Testing the nscore transformation table", "#general imports\nimport matplotlib.pyplot as plt \nimport pygslib \nfrom matplotlib.patches import Ellipse\nimport numpy as np\nimport pandas as pd\n\n#make the plots inline\n%matplotlib inline ", "Getting the data ready for work\nIf the data is in GSLIB format you can use the function pygslib.gslib.read_gslib_file(filename) to import the data into a Pandas DataFrame.", "#get the data in gslib format into a pandas DataFrame\nmydata= pygslib.gslib.read_gslib_file('../data/cluster.dat') \n\n# This is a 2D file, in this GSLIB version we require 3D data and drillhole name or domain code\n# so, we are adding constant elevation = 0 and a dummy BHID = 1 \nmydata['Zlocation']=0\nmydata['bhid']=1\n\n# printing to verify results\nprint (' \\n **** 5 first rows in my datafile \\n\\n ', mydata.head(n=5))\n\n#view data in a 2D projection\nplt.scatter(mydata['Xlocation'],mydata['Ylocation'], c=mydata['Primary'])\nplt.colorbar()\nplt.grid(True)\nplt.show()", "The nscore transformation table function", "print (pygslib.gslib.__dist_transf.ns_ttable.__doc__)\n", "Note that the input can be data or a reference distribution function\nNormal score transformation table using declustering weight", "dtransin,dtransout, error = pygslib.gslib.__dist_transf.ns_ttable(mydata['Primary'],mydata['Declustering Weight'])\n\ndttable= pd.DataFrame({'z': dtransin,'y': dtransout})\n\nprint (dttable.head(3))\nprint (dttable.tail(3) )\nprint ('was there any error?: ', error!=0)\n\ndttable.hist(bins=30)", "Normal score transformation table without declustering weight", "transin,transout, error = pygslib.gslib.__dist_transf.ns_ttable(mydata['Primary'],np.ones(len(mydata['Primary'])))\n\nttable= pd.DataFrame({'z': transin,'y': transout})\n\nprint (ttable.head(3))\nprint (ttable.tail(3))\n\nttable.hist(bins=30)", "Comparing results", "parameters_probplt = {\n 'iwt' : 0, #int, 1 use declustering weight\n 'va' : ttable.y, # array('d') with bounds (nd)\n 'wt' : 
np.ones(len(ttable.y))} # array('d') with bounds (nd), weight variable (obtained with declust?)\n\nparameters_probpltl = {\n 'iwt' : 0, #int, 1 use declustering weight\n 'va' : dttable.y, # array('d') with bounds (nd)\n 'wt' : np.ones(len(dttable.y))} # array('d') with bounds (nd), weight variable (obtained with declust?)\n\n\nbinval,cl,xpt025,xlqt,xmed,xuqt,xpt975,xmin,xmax, \\\nxcvr,xmen,xvar,error = pygslib.gslib.__plot.probplt(**parameters_probplt)\n\nbinvall,cll,xpt025l,xlqtl,xmedl,xuqtl,xpt975l,xminl, \\\nxmaxl,xcvrl,xmenl,xvarl,errorl = pygslib.gslib.__plot.probplt(**parameters_probpltl)\n\n\nfig = plt.figure()\nax = fig.add_subplot(1,1,1)\nplt.plot (cl, binval, label = 'gaussian non-declustered')\nplt.plot (cll, binvall, label = 'gaussian declustered')\nplt.legend(loc=4)\nplt.grid(True)\nfig.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
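`ns_ttable` above is a compiled GSLIB routine, but the transformation table it returns is conceptually a rank-based mapping from raw values to Gaussian quantiles. A rough numpy sketch of that idea — using the standard library's `NormalDist` and my own handling of weights, so this illustrates the concept rather than pygslib's actual algorithm:

```python
import numpy as np
from statistics import NormalDist

def ns_ttable_sketch(values, weights=None):
    # sort the data, build the (weighted) empirical CDF, and map each
    # cumulative probability to the corresponding standard-normal quantile
    z = np.asarray(values, dtype=float)
    w = np.ones_like(z) if weights is None else np.asarray(weights, dtype=float)
    order = np.argsort(z)
    z, w = z[order], w[order]
    cdf = np.cumsum(w) / w.sum()
    p = cdf - 0.5 * w / w.sum()   # step midpoints keep p strictly inside (0, 1)
    y = np.array([NormalDist().inv_cdf(pi) for pi in p])
    return z, y                   # transform table: sorted raw value -> normal score

z, y = ns_ttable_sketch([3.1, 0.5, 2.2, 9.7, 1.1])
print(y)  # symmetric scores around 0 for equal weights
```

Passing a declustering-weight array as `weights` shifts the empirical CDF, which is the difference between the two tables compared in the notebook above.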
dipanjank/ml
tensorflow/variable_scope.ipynb
gpl-3.0
[ "<h1 align=\"center\">TensorFlow - Variable Scope</h1>", "import numpy as np\nimport pandas as pd\n%pylab inline\npylab.style.use('ggplot')", "The idea of variable scoping in TensorFlow is to be able to organize the names and initializations of variables that play the same role in a multilayer network. For example, consider an ANN with multiple hidden layers. All of them have a weight matrix $w$. Using variable scoping allows us to structure and initialize them in a systematic way. \nVariable Scope mechanism in TensorFlow consists of two main functions:\n\ntf.get_variable(&lt;name&gt;, &lt;shape&gt;, &lt;initializer&gt;) Creates or returns a variable with a given name.\ntf.variable_scope(&lt;scope_name&gt;) Manages namespaces for names passed to tf.get_variable().", "import tensorflow as tf", "AND Gate with TensorFlow", "X_val = numpy.array([[0, 0], [0, 1], [1, 0], [1, 1]])\ny_val = np.atleast_2d(np.array([0, 0, 0, 1])).T\n\nX_val\n\ny_val\n\ntf.reset_default_graph()\n\nn_iter = 500\nthreshold = 0.5\n\nwith tf.variable_scope('inputs'):\n X = tf.placeholder(name='X', shape=(4, 2), dtype=np.float64)\n y = tf.placeholder(name='y', shape=y_val.shape, dtype=np.float64)\n\nwith tf.variable_scope('weights'):\n w = tf.get_variable(name='w', shape=(2, 1), dtype=np.float64, initializer=tf.truncated_normal_initializer())\n b = tf.get_variable(name='b', shape=(1, 1), dtype=np.float64, initializer=tf.constant_initializer(1.0))\n\nwith tf.variable_scope('train'):\n output = tf.matmul(X, w) + b\n loss_func = tf.reduce_mean(tf.squared_difference(y, output))\n optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.2)\n train_op = optimizer.minimize(loss_func)\n\ninit_op = tf.global_variables_initializer()\n\nwith tf.Session() as sess:\n sess.run(init_op)\n feed_dict = {X: X_val, y: y_val}\n \n for i in range(1, n_iter+1): \n _, out_val = sess.run([train_op, output], feed_dict=feed_dict) \n out_val = np.where(out_val > threshold, 1, 0)\n \n if i % 50 == 0:\n result = 
np.column_stack([X_val, y_val, out_val])\n result_df = pd.DataFrame(result, columns=['x1', 'x2', 'x1 and x2', 'output'])\n print('loss_function: {}'.format(loss_func.eval(session=sess, feed_dict=feed_dict)))\n print('iteration {}\\n{}'.format(i, result_df))\n \n", "XOR Gate with TensorFlow", "X_val = numpy.array([[0, 0], [0, 1], [1, 0], [1, 1]])\ny_val = np.atleast_2d(np.array([0, 1, 1, 0])).T\n\nX_val\n\ny_val\n\ntf.reset_default_graph()\n\nn_iter = 500\nthreshold = 0.5\n\ndef make_layer(name, x):\n with tf.variable_scope(name, reuse=None):\n if name == 'hidden':\n w_shape = (2, 3)\n b_shape = (4, 3)\n elif name == 'output':\n w_shape = (3, 1)\n b_shape = (1, 1)\n else:\n assert False\n \n w = tf.get_variable(name='w', shape=w_shape, dtype=np.float64, initializer=tf.truncated_normal_initializer())\n b = tf.get_variable(name='b', shape=b_shape, dtype=np.float64, initializer=tf.constant_initializer(1.0))\n\n mm = tf.matmul(x, w) + b\n return tf.sigmoid(mm) if name == 'hidden' else mm\n\nwith tf.variable_scope('inputs'):\n X = tf.placeholder(name='X', shape=(4, 2), dtype=np.float64)\n y = tf.placeholder(name='y', shape=(4, 1), dtype=np.float64)\n\nhidden = make_layer('hidden', X)\noutput = make_layer('output', hidden)\n\nwith tf.variable_scope('train'): \n loss_func = tf.reduce_mean(tf.squared_difference(y, output))\n optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.2)\n train_op = optimizer.minimize(loss_func)\n \ninit_op = tf.global_variables_initializer()\n\nwith tf.Session() as sess:\n sess.run(init_op)\n feed_dict = {X: X_val, y: y_val}\n \n for i in range(1, n_iter+1): \n _, out_val = sess.run([train_op, output], feed_dict=feed_dict) \n out_val = np.where(out_val > threshold, 1, 0)\n \n if i % 50 == 0:\n result = np.column_stack([X_val, y_val, out_val])\n result_df = pd.DataFrame(result, columns=['x1', 'x2', 'x1 XOR x2', 'output'])\n print('loss_function: {}'.format(loss_func.eval(session=sess, feed_dict=feed_dict)))\n print('iteration 
{}\\n{}'.format(i, result_df))\n \n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
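The create-or-reuse behavior of `tf.get_variable` is the heart of the scoping mechanism described above. A framework-free toy that imitates it — `Scope` and its method are hypothetical stand-ins invented to show the idea, not TensorFlow API:

```python
import numpy as np

class Scope:
    """Toy namespace: get_variable creates a named parameter once, then reuses it."""
    def __init__(self, name):
        self.name = name
        self._vars = {}

    def get_variable(self, name, shape, initializer):
        full_name = '{}/{}'.format(self.name, name)
        if full_name not in self._vars:          # create on first request
            self._vars[full_name] = initializer(shape)
        return self._vars[full_name]             # reuse on every later request

hidden = Scope('hidden')
w1 = hidden.get_variable('w', (2, 3), lambda s: np.zeros(s))
w2 = hidden.get_variable('w', (2, 3), lambda s: np.ones(s))  # initializer ignored: reused
print(w1 is w2)  # True -- both names refer to the same zero-filled array
```

This is why the notebook wraps layer construction in `make_layer(name, ...)`: each scope name (`hidden`, `output`) owns its own `w` and `b`, and asking for the same name twice hands back the same parameters instead of re-creating them.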
mne-tools/mne-tools.github.io
0.24/_downloads/9363cd44fa61e7532287bca17b028ef6/time_frequency_global_field_power.ipynb
bsd-3-clause
[ "%matplotlib inline", "Explore event-related dynamics for specific frequency bands\nThe objective is to show you how to explore spectrally localized\neffects. For this purpose we adapt the method described in\n:footcite:HariSalmelin1997 and use it on the somato dataset.\nThe idea is to track the band-limited temporal evolution\nof spatial patterns by using the :term:global field power (GFP).\nWe first bandpass filter the signals and then apply a Hilbert transform. To\nreveal oscillatory activity the evoked response is then subtracted from every\nsingle trial. Finally, we rectify the signals prior to averaging across trials\nby taking the magnitude of the Hilbert transform.\nThen the :term:GFP is computed as described in\n:footcite:EngemannGramfort2015, using the sum of the\nsquares but without normalization by the rank.\nBaselining is subsequently applied to make the :term:GFP comparable\nbetween frequencies.\nThe procedure is then repeated for each frequency band of interest and\nall :term:GFPs&lt;GFP&gt; are visualized. To estimate uncertainty, non-parametric\nconfidence intervals are computed as described in :footcite:EfronHastie2016\nacross channels.\nThe advantage of this method over summarizing the Space x Time x Frequency\noutput of a Morlet Wavelet in frequency bands is relative speed and, more\nimportantly, the clear-cut comparability of the spectral decomposition (the\nsame type of filter is used across all bands).\nWe will use this dataset: somato-dataset\nReferences\n.. footbibliography::", "# Authors: Denis A. 
Engemann <denis.engemann@gmail.com>\n# Stefan Appelhoff <stefan.appelhoff@mailbox.org>\n#\n# License: BSD-3-Clause\n\nimport os.path as op\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne.datasets import somato\nfrom mne.baseline import rescale\nfrom mne.stats import bootstrap_confidence_interval", "Set parameters", "data_path = somato.data_path()\nsubject = '01'\ntask = 'somato'\nraw_fname = op.join(data_path, 'sub-{}'.format(subject), 'meg',\n 'sub-{}_task-{}_meg.fif'.format(subject, task))\n\n# let's explore some frequency bands\niter_freqs = [\n ('Theta', 4, 7),\n ('Alpha', 8, 12),\n ('Beta', 13, 25),\n ('Gamma', 30, 45)\n]", "We create average power time courses for each frequency band", "# set epoching parameters\nevent_id, tmin, tmax = 1, -1., 3.\nbaseline = None\n\n# get the header to extract events\nraw = mne.io.read_raw_fif(raw_fname)\nevents = mne.find_events(raw, stim_channel='STI 014')\n\nfrequency_map = list()\n\nfor band, fmin, fmax in iter_freqs:\n # (re)load the data to save memory\n raw = mne.io.read_raw_fif(raw_fname)\n raw.pick_types(meg='grad', eog=True) # we just look at gradiometers\n raw.load_data()\n\n # bandpass filter\n raw.filter(fmin, fmax, n_jobs=1, # use more jobs to speed up.\n l_trans_bandwidth=1, # make sure filter params are the same\n h_trans_bandwidth=1) # in each band and skip \"auto\" option.\n\n # epoch\n epochs = mne.Epochs(raw, events, event_id, tmin, tmax, baseline=baseline,\n reject=dict(grad=4000e-13, eog=350e-6),\n preload=True)\n # remove evoked response\n epochs.subtract_evoked()\n\n # get analytic signal (envelope)\n epochs.apply_hilbert(envelope=True)\n frequency_map.append(((band, fmin, fmax), epochs.average()))\n del epochs\ndel raw", "Now we can compute the Global Field Power\nWe can track the emergence of spatial patterns compared to baseline\nfor each frequency band, with a bootstrapped confidence interval.\nWe see dominant responses in the Alpha and Beta bands.", "# Helper 
function for plotting spread\ndef stat_fun(x):\n \"\"\"Return sum of squares.\"\"\"\n return np.sum(x ** 2, axis=0)\n\n\n# Plot\nfig, axes = plt.subplots(4, 1, figsize=(10, 7), sharex=True, sharey=True)\ncolors = plt.get_cmap('winter_r')(np.linspace(0, 1, 4))\nfor ((freq_name, fmin, fmax), average), color, ax in zip(\n frequency_map, colors, axes.ravel()[::-1]):\n times = average.times * 1e3\n gfp = np.sum(average.data ** 2, axis=0)\n gfp = mne.baseline.rescale(gfp, times, baseline=(None, 0))\n ax.plot(times, gfp, label=freq_name, color=color, linewidth=2.5)\n ax.axhline(0, linestyle='--', color='grey', linewidth=2)\n ci_low, ci_up = bootstrap_confidence_interval(average.data, random_state=0,\n stat_fun=stat_fun)\n ci_low = rescale(ci_low, average.times, baseline=(None, 0))\n ci_up = rescale(ci_up, average.times, baseline=(None, 0))\n ax.fill_between(times, gfp + ci_up, gfp - ci_low, color=color, alpha=0.3)\n ax.grid(True)\n ax.set_ylabel('GFP')\n ax.annotate('%s (%d-%dHz)' % (freq_name, fmin, fmax),\n xy=(0.95, 0.8),\n horizontalalignment='right',\n xycoords='axes fraction')\n ax.set_xlim(-1000, 3000)\n\naxes.ravel()[-1].set_xlabel('Time [ms]')" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
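The two statistical pieces of the recipe above — the sum-of-squares GFP and the nonparametric confidence interval across channels — can be sketched with plain numpy on fake data. Resampling channels with replacement is my reading of what `bootstrap_confidence_interval` does here; MNE's implementation has more options:

```python
import numpy as np

rng = np.random.default_rng(0)

def gfp(data):
    # global field power as used above: sum of squares over channels,
    # mapping (n_channels, n_times) -> (n_times,)
    return np.sum(data ** 2, axis=0)

def bootstrap_ci(data, stat_fun, n_boot=200, ci=0.95):
    # resample channels with replacement, take percentiles of the statistic
    n_ch = data.shape[0]
    stats = np.empty((n_boot,) + stat_fun(data).shape)
    for b in range(n_boot):
        idx = rng.integers(0, n_ch, size=n_ch)
        stats[b] = stat_fun(data[idx])
    lo, hi = np.percentile(stats, [(1 - ci) / 2 * 100, (1 + ci) / 2 * 100], axis=0)
    return lo, hi

data = rng.standard_normal((8, 50))   # fake 8-channel envelope, 50 time points
power = gfp(data)
lo, hi = bootstrap_ci(data, gfp)
print(power.shape, bool((lo <= hi).all()))  # (50,) True
```

In the plotting cell above, the `lo`/`hi` band (after baselining) is what `fill_between` shades around each frequency band's GFP trace.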
UWSEDS/LectureNotes
PreFall2018/Objects/Building Software With Objects.ipynb
bsd-2-clause
[ "Why Objects?\n\nProvide modularity and reuse through hierarchical structures\n\nObject oriented programming is a different way of thinking.\nProgramming With Objects", "from IPython.display import Image\nImage(filename='Classes_vs_Objects.png') ", "Initial concepts\n\nAn object is a container of data (attributes) and code (methods)\nA class is a template for creating objects\n\nReuse is provided by:\n\nreusing the same class to create many objects\n\"inheriting\" data and code from other classes", "# Defining a Car class\nclass Car(object):\n pass", "Attributes", "from IPython.display import Image\nImage(filename='ClassAttributes.png') ", "Attributes are data associated with an object (instance) or class. Object attributes (and methods) are specified by using \"self\". Instance attributes and methods are accessed using the dot \".\" operator.", "class Car(object):\n \n # The following method is called when the class\n # is created or \"constructed\". The variable \"self.x\" refers\n # to the variable \"x\" in a created object.\n def __init__(self, color, car_type, speed):\n self.color = color\n self.car_type = car_type\n self.speed = speed\n\nclass Car(object):\n \n # The following method is called when the class\n # is created or \"constructed\". The variable \"self.x\" refers\n # to the variable \"x\" in a created object.\n def __init__(self, color, car_type, speed):\n self.color = color\n self.car_type = car_type\n self.speed = speed\n\n# Creating an object for a class with arguments in the __init__ method\ncar = Car(\"Blue\", \"HatchBack\", 100)\ncar.color\n\n# Creating an object for a class with arguments in the __init__ method\njoe_car = Car(\"Blue\", \"Sedan\", 100)\ndave_car = Car(\"Red\", \"Sports\", 150)\nprint (\"Type of joe_car is %s. 
Type of dave_car is %s\"% (type(joe_car), type(dave_car)))\n\n# Accessing instance attributes\njoe_car = Car(\"Blue\", \"Sedan\", 100)\nprint (\"Type of joe_car has (color, type, speed)=%s.\" % str((joe_car.color, joe_car.car_type, joe_car.speed)))", "EXERCISE: Change the constructor for Car to include the attribute \"doors\".\nInstance Methods", "from IPython.display import Image\nImage(filename='InstanceMethods.png') \n\n#Class diagram\nfrom IPython.display import Image\nImage(filename='SingleClassDiagram.png', width=200, height=200) ", "A class diagram provides a more compact representation of a class. There are three sections.\n- Class name\n- Attributes\n- Methods\nInstance methods\n- functions associated with the objects constructed for a class\n- provide a way to transform data in objects\n- use instance attributes (references to variables beginning with \"self.\")", "class Car(object):\n \n def __init__(self, color, car_type, speed):\n \"\"\"\n :param str color:\n :param str car_type:\n :param int speed:\n \"\"\"\n self.color = color\n self.car_type = car_type\n self.speed = speed\n \n def start(self):\n print (\"%s %s started!\" % (self.color, self.car_type))\n \n def stop(self):\n pass\n \n def turn(self, direction):\n \"\"\"\n :param str direction: left or right\n \"\"\"\n pass\n\ncar = Car(\"Blue\", \"Sedan\", 100)\ncar.start()", "EXERCISE: Implement the stop and turn methods. Run the methods.\nInheritance\nInheritance is a common way that classes reuse data and code from other classes. 
A child class or derived class gets attributes and methods from its parent class.\nProgrammatically:\n- Specify inheritance in the class statement\n- The constructor for a derived class (the class that inherits) has access to the constructor of its parent.\nInheritance is represented in diagrams as an arrow from the child class to its parent class.", "from IPython.display import Image\nImage(filename='SimpleClassHierarchy.png', width=400, height=400) \n\n# Code for inheritance\nclass Sedan(Car):\n # Sedan inherits from Car\n \n def __init__(self, color, speed):\n \"\"\"\n :param str color:\n :param int speed:\n \"\"\"\n super().__init__(color, \"Sedan\", speed)\n \n def play_cd(self):\n print (\"Playing cd in %s sedan\" % self.color)\n\nsedan = Sedan(\"Yellow\", 1e6)\nsedan.play_cd()\n\nsedan.car_type\n\njoe_car = Sedan(\"Blue\", 100)\nprint (\"Type of joe_car has (color, type, speed)=%s.\" % str((joe_car.color, joe_car.car_type, joe_car.speed)))", "Exercise: Implement SportsCar and create dave_car from SportsCar. Print attributes of dave_car.", "from IPython.display import Image\nImage(filename='ClassInheritance.png', width=400, height=400) ", "Subclasses can have their own methods.\nExercise: Add the play_cd() to Sedan and play_bluetooth() method to SportsCar. Construct a test to run these methods.\nWhat Else?\n\nClass attributes\nClass methods\n\nObject Oriented Design\nA design methodology must specify:\n- Components: What they do and how to build them\n- Interactions: How the components interact to implement use cases\nObject oriented design\n- Components are specified by class diagrams.\n- Interactions are specified by interaction diagrams.\nClass diagram for the ATM system", "from IPython.display import Image\nImage(filename='ATMClassDiagram.png', width=400, height=400) ", "The diamond arrow is a \"has-a\" relationship. For example, the Controller has-a ATMInput. 
This means that a Controller object has an instance variable for an ATMInput object.\nInteraction Diagram for the ATM System\nAn interaction diagram specifies how components interact to achieve a use case. \nInteractions are from one object to another object, indicating that the first object calls a method in the second object.\nRules for drawing lines in an interaction diagram:\n- The calling object must know about the called object.\n- The called object must have the method invoked by the calling object.", "from IPython.display import Image\nImage(filename='ATMAuthentication.png', width=800, height=800) ", "Look at Objects/ATMDiagrams.pdf for a solution.\nWhat Else in Design?\n\nOther diagrams: state diagrams, package diagrams, ...\nObject oriented design patterns\n\nComplex Example of Class Hierarchy", "from IPython.display import Image\nImage(filename='SciSheetsCoreClasses.png', width=300, height=30) " ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
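The exercises above ask for `stop` and `turn`, a `SportsCar` subclass, and a `play_bluetooth` method. One possible solution sketch in the style of the notebook's `Car` class — the method bodies are my guesses at the intended behavior, not the course solution:

```python
class Car(object):

    def __init__(self, color, car_type, speed):
        self.color = color
        self.car_type = car_type
        self.speed = speed

    def start(self):
        print("%s %s started!" % (self.color, self.car_type))

    def stop(self):
        print("%s %s stopped." % (self.color, self.car_type))

    def turn(self, direction):
        """:param str direction: left or right"""
        print("%s %s turned %s." % (self.color, self.car_type, direction))


class SportsCar(Car):
    # SportsCar inherits from Car, fixing car_type the same way Sedan does

    def __init__(self, color, speed):
        super().__init__(color, "Sports", speed)

    def play_bluetooth(self):
        print("Playing bluetooth in %s sports car" % self.color)


dave_car = SportsCar("Red", 150)
dave_car.turn("left")       # Red Sports turned left.
dave_car.play_bluetooth()   # Playing bluetooth in Red sports car
```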
swails/mdtraj
examples/centroids.ipynb
lgpl-2.1
[ "Finding centroids\nIn this example, we're going to find a \"centroid\" (representative structure) for a group of conformations. This group might potentially come from clustering, using a method like Ward hierarchical clustering.\nNote that there are many possible ways to define the centroids. This is just one.", "from __future__ import print_function\n%matplotlib inline\nimport mdtraj as md\nimport numpy as np", "Load up a trajectory to use for the example.", "traj = md.load('ala2.h5')\nprint(traj)", "Let's compute all pairwise RMSDs between conformations.", "atom_indices = [a.index for a in traj.topology.atoms if a.element.symbol != 'H']\ndistances = np.empty((traj.n_frames, traj.n_frames))\nfor i in range(traj.n_frames):\n distances[i] = md.rmsd(traj, traj, i, atom_indices=atom_indices)", "The algorithm we're going to use is relatively simple:\n- Compute all of the pairwise RMSDs between the conformations. This is O(N^2), so it's not going to\n scale extremely well to large datasets.\n- Transform these distances into similarity scores. Our similarities will be calculated as\n $$ s_{ij} = e^{-\\beta \\cdot d_{ij} / d_\\text{scale}} $$\n where $s_{ij}$ is the pairwise similarity, $d_{ij}$ is the pairwise distance, and $d_\\text{scale}$ is the standard deviation of\n the values of $d$, to make the computation scale invariant.\n- Then, we define the centroid as\n $$ \\text{argmax}_i \\sum_j s_{ij} $$\nUsing $\\beta=1$, this is implemented with the following code:", "beta = 1\nindex = np.exp(-beta*distances / distances.std()).sum(axis=1).argmax()\nprint(index)\n\ncentroid = traj[index]\nprint(centroid)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
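The argmax-of-summed-similarities rule is easy to sanity-check without a trajectory by handing it a small, hand-built distance matrix — a toy exercise of the same two lines of numpy used above:

```python
import numpy as np

def find_centroid(distances, beta=1.0):
    # similarity s_ij = exp(-beta * d_ij / std(d)); the centroid is the
    # frame with the largest total similarity to all other frames
    sim = np.exp(-beta * distances / distances.std())
    return sim.sum(axis=1).argmax()

# four "frames" on a line; index 1 sits closest, on average, to the rest
pts = np.array([0.0, 1.0, 2.0, 10.0])
distances = np.abs(pts[:, None] - pts[None, :])
print(find_centroid(distances))  # 1
```

The outlier at 10.0 drags the plain mean toward it, but the exponential kernel discounts large distances, so the pick stays inside the dense cluster — the scale-invariance that dividing by `distances.std()` buys.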
karlstroetmann/Algorithms
Python/Chapter-05/Dual-Pivot-Quicksort.ipynb
gpl-2.0
[ "from IPython.core.display import HTML\nwith open('../style.css') as file:\n css = file.read()\nHTML(css)", "Dual Pivot Quicksort\nThe function sort takes a list L that is to be sorted and returns the sorted list.\nIt implements dual pivot quicksort and uses the first and the last element as pivots p1 and p2.", "def sort(L):\n if len(L) <= 1:\n return L\n x, y, R = L[0], L[-1], L[1:-1]\n p1, p2 = min(x, y), max(x, y)\n L1, L2, L3 = partition(p1, p2, R)\n if p1 == p2:\n return sort(L1) + [p1] + L2 + [p2] + sort(L3)\n else:\n return sort(L1) + [p1] + sort(L2) + [p2] + sort(L3)", "The function partition receives three arguments:\n* p1 and p2 are comparable objects taken from some list that is to be sorted.\n Furthermore p1 &lt;= p2.\n* L is a list of comparable elements. \nThe function partitions the list into three sublists. Mathematically, we have:\n$$ \\texttt{partition}(p_1, p_2, L) = \\bigl\\langle[x \\in L \\mid x < p_1],\\; [x \\in L \\mid p_1 \\leq x \\leq p_2],\\; [x \\in L \\mid p_2 < x]\\bigr\\rangle $$", "def partition(p1, p2, L):\n if L == []:\n return [], [], []\n x, *R = L\n R1, R2, R3 = partition(p1, p2, R)\n if x < p1:\n return [x] + R1, R2, R3\n if x <= p2:\n return R1, [x] + R2, R3\n else:\n return R1, R2, [x] + R3\n\npartition(5, 13, [1, 19, 27, 2, 5, 6, 4, 7, 8, 5, 8, 17, 13])\n\nsort([1, 19, 27, 2, 5, 6, 4, 7, 8, 5, 8, 17, 13])" ]
[ "code", "markdown", "code", "markdown", "code" ]
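The recursive `partition` above allocates three fresh lists at every level and will hit Python's recursion limit on long inputs; the same three-way split can be written as a single loop. A sketch mirroring the notebook's functions, with a randomized check against the built-in `sorted`:

```python
import random

def partition(p1, p2, L):
    # one pass: split L into (< p1), (p1 <= x <= p2), (> p2)
    L1, L2, L3 = [], [], []
    for x in L:
        if x < p1:
            L1.append(x)
        elif x <= p2:
            L2.append(x)
        else:
            L3.append(x)
    return L1, L2, L3

def sort(L):
    if len(L) <= 1:
        return L
    x, y, R = L[0], L[-1], L[1:-1]
    p1, p2 = min(x, y), max(x, y)
    L1, L2, L3 = partition(p1, p2, R)
    if p1 == p2:
        # all of L2 equals the pivot, so it needs no further sorting
        return sort(L1) + [p1] + L2 + [p2] + sort(L3)
    return sort(L1) + [p1] + sort(L2) + [p2] + sort(L3)

data = [random.randrange(100) for _ in range(500)]
print(sort(data) == sorted(data))  # True
```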
bobflagg/deepER
deeper/part1-NER.ipynb
apache-2.0
[ "CS 224D Assignment #2\nPart [1]: Deep Networks: NER Window Model\nFor this first part of the assignment, you'll build your first \"deep\" networks. On problem set 1, you computed the backpropagation gradient $\\frac{\\partial J}{\\partial w}$ for a two-layer network; in this problem set you'll implement a slightly more complex network to perform named entity recognition (NER).\nBefore beginning the programming section, you should complete parts (a) and (b) of the corresponding section of the handout.", "import sys, os\nfrom numpy import *\nfrom matplotlib.pyplot import *\n%matplotlib inline\nmatplotlib.rcParams['savefig.dpi'] = 100\n\n%load_ext autoreload\n%autoreload 2", "(c): Random Initialization Test\nUse the cell below to test your code.", "from misc import random_weight_matrix\nrandom.seed(10)\nprint random_weight_matrix(3,5)", "(d): Implementation\nWe've provided starter code to load in the dataset and convert it to a list of \"windows\", consisting of indices into the matrix of word vectors. \nWe pad each sentence with begin and end tokens &lt;s&gt; and &lt;/s&gt;, which have their own word vector representations; additionally, we convert all words to lowercase, canonicalize digits (e.g. 
1.12 becomes DG.DGDG), and replace unknown words with a special token UUUNKKK.\nYou don't need to worry about the details of this, but you can inspect the docs variables or look at the raw data (in plaintext) in the ./data/ directory.", "import data_utils.utils as du\nimport data_utils.ner as ner\n\n# Load the starter word vectors\nwv, word_to_num, num_to_word = ner.load_wv('data/ner/vocab.txt', 'data/ner/wordVectors.txt')\ntagnames = [\"O\", \"LOC\", \"MISC\", \"ORG\", \"PER\"]\nnum_to_tag = dict(enumerate(tagnames))\ntag_to_num = du.invert_dict(num_to_tag)\n\n# Load the training set\ndocs = du.load_dataset('data/ner/train')\nX_train, y_train = du.docs_to_windows(docs, word_to_num, tag_to_num)\n\n# Load the dev set (for tuning hyperparameters)\ndocs = du.load_dataset('data/ner/dev')\nX_dev, y_dev = du.docs_to_windows(docs, word_to_num, tag_to_num)\n\n# Load the test set (dummy labels only)\ndocs = du.load_dataset('data/ner/test.masked')\nX_test, y_test = du.docs_to_windows(docs, word_to_num, tag_to_num)", "To avoid re-inventing the wheel, we provide a base class that handles a lot of the drudgery of managing parameters and running gradient descent. It's based on the classifier API used by scikit-learn, so if you're familiar with that library it should be easy to use. \nWe'll be using this class for the rest of this assignment, so it helps to get acquainted with a simple example that should be familiar from Assignment 1. 
To keep this notebook uncluttered, we've put the code in the softmax_example.py; take a look at it there, then run the cell below.", "from softmax_example import SoftmaxRegression\nsr = SoftmaxRegression(wv=zeros((10,100)), dims=(100,5))\n\n##\n# Automatic gradient checker!\n# this checks anything you add to self.grads or self.sgrads\n# using the method of Assignment 1\nsr.grad_check(x=5, y=4)", "In order to implement a model, you need to subclass NNBase, then implement the following methods:\n\n__init__() (initialize parameters and hyperparameters)\n_acc_grads() (compute and accumulate gradients)\ncompute_loss() (compute loss for a training example)\npredict(), predict_proba(), or other prediction method (for evaluation)\n\nNNBase provides you with a few others that will be helpful:\n\ngrad_check() (run a gradient check - calls _acc_grads and compute_loss)\ntrain_sgd() (run SGD training; more on this later)\n\nYour task is to implement the window model in nerwindow.py; a scaffold has been provided for you with instructions on what to fill in.\nWhen ready, you can test below:", "from nerwindow import WindowMLP\nclf = WindowMLP(wv, windowsize=3, dims=[None, 100, 5], reg=0.001, alpha=0.01)\nclf.grad_check(X_train[0], y_train[0]) # gradient check on single point", "Now we'll train your model on some data! You can implement your own SGD method, but we recommend that you just call clf.train_sgd. This takes the following arguments:\n\nX, y : training data\nidxiter: iterable (list or generator) that gives index (row of X) of training examples in the order they should be visited by SGD\nprintevery: int, prints progress after this many examples\ncostevery: int, computes mean loss after this many examples. 
This is a costly operation, so don't make this too frequent!\n\nThe implementation we give you supports minibatch learning; if idxiter is a list-of-lists (or yields lists), then gradients will be computed for all indices in a minibatch before modifying the parameters (this is why we have you write _acc_grad instead of applying them directly!).\nBefore training, you should generate a training schedule to pass as idxiter. If you know how to use Python generators, we recommend those; otherwise, just make a static list. Make the following in the cell below:\n\nAn \"epoch\" schedule that just iterates through the training set, in order, nepoch times.\nA random schedule of N examples sampled with replacement from the training set.\nA random schedule of N/k minibatches of size k, sampled with replacement from the training set.", "nepoch = 5\nn_train = len(y_train)\nN = nepoch * n_train\nk = 5 # minibatch size\n\nrandom.seed(10) # do not change this!\n#### YOUR CODE HERE ####\ndef epoch_schedule():\n for n in xrange(N):\n yield n % n_train\n\nimport numpy as np\ndef random_schedule():\n for idx in np.random.randint(n_train, size=N):\n yield idx\n\ndef minibatch_schedule():\n M = N / k\n for m in xrange(M):\n yield np.random.randint(n_train, size=k)\n\n#clf = WindowMLP(wv, windowsize=3, dims=[None, 100, 5], reg=0.001, alpha=0.01) \nclf.train_sgd(X_train, y_train, idxiter=minibatch_schedule())\n#### END YOUR CODE ###\n\nfrom nerwindow import compute_f1\nyp = clf.predict(X_dev)\nprint compute_f1(y_dev, yp, tagnames)\n\ndef minibatch_schedule(N=100, k=5):\n M = N / k\n for m in xrange(M):\n yield np.random.randint(n_train, size=k)\nfor x in minibatch_schedule(): print x\n\nclass RunSettings(object):\n def __init__(self, windowsize, n_hidden_units, reg, alpha, n_output_units=5):\n self.windowsize = windowsize\n self.n_hidden_units = n_hidden_units\n self.reg = reg\n self.alpha = alpha\n self.dims=[None, n_hidden_units, n_output_units]\n\n def __repr__(self):\n return \"# ws = 
%d, # n-hu = %d, reg = %0.5f, alpha = %0.5f\" % (self.windowsize, self.n_hidden_units, self.reg, self.alpha)\n\nn_runs = 20\nruns = []\nfor i in xrange(n_runs): \n windowsize = np.random.choice([3,5,7])\n n_hidden_units = np.random.choice(range(100,400,20)) \n reg = np.random.uniform(0.001, 0.003)\n alpha = np.random.uniform(0.03, 0.06)\n reg = 0.002\n alpha = 0.056\n windowsize = 3\n runs.append(RunSettings(windowsize, n_hidden_units, reg, alpha))\n#for run in runs: print run\n\nfrom nerwindow import compute_f1\nn_train = len(y_train)\ndef minibatch_schedule(N=100, k=5):\n M = N / k\n for m in xrange(M):\n yield np.random.randint(n_train, size=k)\nbest_f1 = 0.0\nbest_parameters = None\nbest_clf = None\nfor i, run in enumerate(runs): \n dims=[None, run.n_hidden_units, 5]\n print \"%d. %s\" % (i, run),\n schedule = minibatch_schedule(N=800000, k=5)\n clf = WindowMLP(wv, windowsize=run.windowsize, dims=run.dims, reg=run.reg, alpha=run.alpha) \n clf.train_sgd(X_train, y_train, idxiter=schedule, printevery=1000000, costevery=1000000, verbose=False)\n yp = clf.predict(X_dev)\n f1 = compute_f1(y_dev, yp, tagnames)\n print '-->> %0.4f' % f1\n if f1 > best_f1:\n best_f1 = f1\n best_clf = clf\n best_parameters = (windowsize, n_hidden_units, reg, alpha)\nprint best_f1, best_parameters\nfrom nerwindow import full_report, eval_performance\nyp = best_clf.predict(X_dev)\nfull_report(y_dev, yp, tagnames)\neval_performance(y_dev, yp, tagnames)", "Now call train_sgd to train on X_train, y_train. To verify that things work, train on 100,000 examples or so to start (with any of the above schedules). This shouldn't take more than a couple minutes, and you should get a mean cross-entropy loss around 0.4.\nNow, if this works well, it's time for production! You have three tasks here:\n\nTrain a good model\nPlot a learning curve (cost vs. # of iterations)\nUse your best model to predict the test set\n\nYou should train on the train data and evaluate performance on the dev set. 
The test data we provided has only dummy labels (everything is O); we'll compare your predictions to the true labels at grading time. \nScroll down to section (f) for the evaluation code.\nWe don't expect you to spend too much time doing an exhaustive search here; the default parameters should work well, although you can certainly do better. Try to achieve an F1 score of at least 76% on the dev set, as reported by eval_performance.\nFeel free to create new cells and write new code here, including new functions (helpers and otherwise) in nerwindow.py. When you have a good model, follow the instructions below to make predictions on the test set.\nA strong model may require 10-20 passes (or equivalent number of random samples) through the training set and could take 20 minutes or more to train - but it's also possible to be much, much faster!\nThings you may want to tune:\n- alpha (including using an \"annealing\" schedule to decrease the learning rate over time)\n- training schedule and minibatch size\n- regularization strength\n- hidden layer dimension\n- width of context window", "from nn.base import NNBase\n\n#### YOUR CODE HERE ####\n# Sandbox: build a good model by tuning hyperparameters\nn_train = len(y_train)\ndef minibatch_schedule(N=100, k=5):\n M = N / k\n for m in xrange(M):\n yield np.random.randint(n_train, size=k)\ndef anneal_schedule(a0, epoch=50000):\n ctr = 0\n while True:\n yield a0 * 1.0/((ctr+epoch)/epoch)\n ctr += 1\nschedule = minibatch_schedule(N=100000, k=5)\nalpha_schedule = anneal_schedule(0.54)\ndims=[None, 140, 5]\nclf = WindowMLP(wv, windowsize=3, dims=dims, reg=0.02, alpha=0.054) \n#clf.train_sgd(X_train, y_train, idxiter=schedule, printevery=100000, costevery=100000, verbose=True);\nclf.train_sgd(X_train, y_train, idxiter=schedule, alphaiter=alpha_schedule, printevery=100000, costevery=100000, verbose=True);\n\n#### END YOUR CODE ####\n\n#### YOUR CODE HERE ####\n# Sandbox: build a good model by tuning hyperparameters\n\nyp = 
clf.predict(X_dev)\nfull_report(y_dev, yp, tagnames)\neval_performance(y_dev, yp, tagnames)\n\n\n#### END YOUR CODE ####\n\n#### YOUR CODE HERE ####\n# Sandbox: build a good model by tuning hyperparameters\ntraincurvebest = [(100000, 1.6392305257753381),\n (200000, 0.9713501488847085),\n (300000, 0.92719162923623333),\n (400000, 0.65314617730524283),\n (500000, 0.69434574257551684),\n (600000, 0.67674408587791146),\n (700000, 0.63355849175379098),\n (800000, 0.60947476566166814),\n (900000, 0.60553916271791497),\n (1000000, 0.60838990305892149),\n (1100000, 0.58387614068123839),\n (1200000, 0.56528442563491521),\n (1300000, 0.56772383628962297),\n (1400000, 0.55818029785509871),\n (1500000, 0.56255721129937275),\n (1600000, 0.55059669338120609),\n (1700000, 0.5582590912960208),\n (1800000, 0.56455349952333056),\n (1900000, 0.54554620555117928),\n (2000000, 0.53827092990040815)]\n#### END YOUR CODE ####", "(e): Plot Learning Curves\nThe train_sgd function returns a list of points (counter, cost) giving the mean loss after that number of SGD iterations.\nIf the model is taking too long you can cut it off by going to Kernel->Interrupt in the IPython menu; train_sgd will return the training curve so-far, and you can restart without losing your training progress.\nMake two plots:\n\n\nLearning curve using reg = 0.001, and comparing the effect of changing the learning rate: run with alpha = 0.01 and alpha = 0.1. Use minibatches of size 5, and train for 10,000 minibatches with costevery=200. Be sure to scale up your counts (x-axis) to reflect the batch size. What happens if the model tries to learn too fast? Explain why this occurs, based on the relation of SGD to the true objective.\n\n\nLearning curve for your best model (print the hyperparameters in the title), as trained using your best schedule. 
Set costevery so that you get at least 100 points to plot.", "counts = [x for x, y in traincurvebest]\ncosts = [y for x, y in traincurvebest]\n\n##\n# Plot your best learning curve here\n#counts, costs = zip(*traincurvebest)\nfigure(figsize=(6,4))\nplot(5*array(counts), costs, color='b', marker='o', linestyle='-')\ntitle(r\"Learning Curve ($\\alpha$=%g, $\\lambda$=%g)\" % (clf.alpha, clf.lreg))\nxlabel(\"SGD Iterations\"); ylabel(r\"Average $J(\\theta)$\"); \nylim(ymin=0, ymax=max(1.1*max(costs),3*min(costs)));\n#ylim(0,2)\n\n# Don't change this filename!\nsavefig(\"ner.learningcurve.best.png\")\n\n##\n# Plot comparison of learning rates here\n# feel free to change the code below\n\nfigure(figsize=(6,4))\ncounts, costs = zip(*trainingcurve1)\nplot(5*array(counts), costs, color='b', marker='o', linestyle='-', label=r\"$\\alpha=0.01$\")\ncounts, costs = zip(*trainingcurve2)\nplot(5*array(counts), costs, color='g', marker='o', linestyle='-', label=r\"$\\alpha=0.1$\")\ntitle(r\"Learning Curve ($\\lambda=0.01$, minibatch k=5)\")\nxlabel(\"SGD Iterations\"); ylabel(r\"Average $J(\\theta)$\"); \nylim(ymin=0, ymax=max(1.1*max(costs),3*min(costs)));\nlegend()\n\n# Don't change this filename\nsavefig(\"ner.learningcurve.comparison.png\")", "(f): Evaluating your model\nEvaluate the model on the dev set using your predict function, and compute performance metrics below!", "# Predict labels on the dev set\nyp = clf.predict(X_dev)\n# Save predictions to a file, one per line\nner.save_predictions(yp, \"dev.predicted\")\n\nfrom nerwindow import full_report, eval_performance\nfull_report(y_dev, yp, tagnames) # full report, helpful diagnostics\neval_performance(y_dev, yp, tagnames) # performance: optimize this F1\n\n# Save your predictions on the test set for us to evaluate\n# IMPORTANT: make sure X_test is exactly as loaded \n# from du.docs_to_windows, so that your predictions \n# line up with ours.\nyptest = clf.predict(X_test)\nner.save_predictions(yptest, \"test.predicted\")", 
"Part [1.1]: Probing neuron responses\nYou might have seen some results from computer vision where the individual neurons learn to detect edges, shapes, or even cat faces. We're going to do the same for language.\nRecall that each \"neuron\" is essentially a logistic regression unit, with weights corresponding to rows of the corresponding matrix. So, if we have a hidden layer of dimension 100, then we can think of our matrix $W \\in \\mathbb{R}^{100 \\times 150}$ as representing 100 hidden neurons each with weights W[i,:] and bias b1[i].\n(a): Hidden Layer, Center Word\nFor now, let's just look at the center word, and ignore the rest of the window. This corresponds to columns W[:,50:100], although this could change if you altered the window size for your model. For each neuron, find the top 10 words that it responds to, as measured by the dot product between W[i,50:100] and L[j]. Use the provided code to print these words and their scores for 5 neurons of your choice. In your writeup, briefly describe what you notice here.\nThe num_to_word dictionary, loaded earlier, may be helpful.", "# Recommended function to print scores\n# scores = list of float\n# words = list of str\ndef print_scores(scores, words):\n for i in range(len(scores)):\n print \"[%d]: (%.03f) %s\" % (i, scores[i], words[i])\n\n#### YOUR CODE HERE ####\n\nneurons = [1,3,4,6,8] # change this to your chosen neurons\nfor i in neurons:\n print \"Neuron %d\" % i\n print_scores(topscores[i], topwords[i])\n \n#### END YOUR CODE ####", "(b): Model Output, Center Word\nNow, let's do the same for the output layer. Here we only have 5 neurons, one for each class. O isn't very interesting, but let's look at the other four.\nHere things get a little more complicated: since we take a softmax, we can't just look at the neurons separately. 
An input could cause several of these neurons to all have a strong response, so we really need to compute the softmax output and find the strongest inputs for each class.\nAs before, let's consider only the center word (W[:,50:100]). For each class ORG, PER, LOC, and MISC, find the input words that give the highest probability $P(\\text{class}\\ |\\ \\text{word})$.\nYou'll need to do the full feed-forward computation here - for efficiency, try to express this as a matrix operation on $L$. This is the same feed-forward computation as used to predict probabilities, just with $W$ replaced by W[:,50:100].\nAs with the hidden-layer neurons, print the top 10 words and their corresponding class probabilities for each class.", "#### YOUR CODE HERE ####\n\n\nfor i in range(1,5):\n print \"Output neuron %d: %s\" % (i, num_to_tag[i])\n print_scores(topscores[i], topwords[i])\n print \"\"\n\n#### END YOUR CODE ####", "(c): Model Output, Preceding Word\nNow for one final task: let's look at the preceding word. Repeat the above analysis for the output layer, but use the first part of $W$, i.e. W[:,:50].\nDescribe what you see, and include these results in your writeup.", "#### YOUR CODE HERE ####\n\n\nfor i in range(1,5):\n print \"Output neuron %d: %s\" % (i, num_to_tag[i])\n print_scores(topscores[i], topwords[i])\n print \"\"\n\n#### END YOUR CODE ####" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
SheffieldML/notebook
compbio/TomancakDataWithGPy.ipynb
bsd-3-clause
[ "Tomancak et al Drosophila Data with GPy and Pandas\npresented at the EBI BioPreDyn Course 'The Systems Biology Modelling Cycle'\nNeil Lawrence, 12th May 2014 with help from Marta Milo for the Bioconductor Portion\nUpdated 18th March 2015 for latest GPy edition.\nIntroduction\nIn this demonstration we are going to load pre-processed Affymetrix time series data. The data is pre-processed in R using bioconductor. The results are then loaded into python for further analysis with a Gaussian process.\nThe analysis we are going to do is to check whether any given set of replicates is 'valid'. In other words, were the replicates drawn from the same Gaussian process? To do this we will fit two Gaussian process models, one which assumes that the replicates are independent and one that assumes that they are drawn from the same process. We will evaluate the likelihood of each. The result could be used in a likelihood ratio test.\nPreparing the Environment\nFirst we load in the data set, downloading the cel files if the analysed files aren't already present on disk. If the processed data isn't present, we load in rmagic to do the preprocessing with the R PUMA package from bioconductor.", "import os\nimport numpy as np\nimport GPy\nimport pods\n%matplotlib inline\nimport matplotlib as plt\nplt.rcParams['figure.figsize'] = (10.0, 4.0)", "Use Bioconductor for Processing\nNow we process the cel files using the PUMA package from bioconductor in R. This step can take a long time. The R code checks whether the preprocessed results are present and skips the processing of the cel files if they are. If you want to run the processing from scratch, delete any processed results stored in datapath and set the variable process_data_in_R in the next box to True.", "# This downloads cel files if they are not present. 
\n# These cel files would be needed if you want to do \n# the full Bioconductor analysis below.\nprocess_data_in_R = False\nif process_data_in_R:\n %load_ext rmagic\n datapath = os.path.join(pods.datasets.data_path, 'fruitfly_tomancak')\n # download the original cel files and prepare to process!\n data_set = 'fruitfly_tomancak_cel_files'\n if not pods.datasets.data_available(data_set):\n data = pods.datasets.download_data(data_set)\nelse:\n # download the puma-processed affymetrix data.\n data = pods.datasets.fruitfly_tomancak()", "This portion of the code will do the PUMA analysis of the gene\nexpression data in R. To run, it needs to have the cel files\nfrom <a href=\"ftp://ftp.fruitfly.org/pub/embryo_tc_array_data/\">this fruitfly.org FTP server</a>. The code above downloads all the cel files and stores them in the path given by datapath.", "%%R -i datapath\nreturnpath <- getwd()\nsetwd(datapath)\nif(!file.exists(\"tomancak_exprs.csv\")) {\n source(\"http://www.bioconductor.org/biocLite.R\")\n biocLite(\"puma\")\n library(puma)\n print(\"Processing data with PUMA\")\n expfiles <- c(paste(\"embryo_tc_4_\", 1:12, \".CEL\", sep=\"\"), paste(\"embryo_tc_6_\", 1:12, \".CEL\", sep=\"\"), paste(\"embryo_tc_8_\", 1:12, \".CEL\", sep=\"\"))\n library(puma)\n drosophila_exp_set <- justmmgMOS(filenames=expfiles, celfile.path=datapath)\n pData(drosophila_exp_set) <- data.frame(\"time.h\" = rep(1:12, 3), row.names=rownames(pData(drosophila_exp_set))) \n write.reslts(drosophila_exp_set, file='tomancak')\n}\nelse {\n print(\"Processed data found on disk.\")\n}\nsetwd(returnpath)", "Read Gene Expression Data into Pandas Dataframe\nOnce the analysis is complete, the results are stored in the file tomancak_exprs.csv. To save you doing the full analysis, we can download a pre-processed version of this file using this command.", "data = pods.datasets.fruitfly_tomancak()\nX = data['X']\nY = data['Y']", "The gene expression data is now loaded into the python environment. 
We have made use of pandas, a python library for handling data structures. It provides us with a DataFrame object which gives some functionality similar to that of R for basic analysis.\nAre these Replicates Really Valid?\nNext we are going to write a couple of python functions that allow us to do some simple processing of the gene expression data. The idea will be to see if the data is best modelled through a series of identical Gaussian processes, or through a series of independent Gaussian processes.", "def fit_probe(id, independent=False):\n \"\"\"Fit a set of probe repeats as either independent or correlated.\"\"\"\n \n # set up the covariance function.\n lengthscale = 2.\n if independent:\n kern = GPy.kern.RBF(1,lengthscale=lengthscale)**GPy.kern.Coregionalize(1,3, rank=0)\n name = 'independent gp'\n else:\n kern = GPy.kern.RBF(1,lengthscale=lengthscale)**GPy.kern.Coregionalize(1,3, rank=1)\n kern.coregion.W[0] = 1.\n kern.coregion.W[1] = 1.\n kern.coregion.W[2] = 1.\n kern.coregion.W.constrain_fixed()\n name = 'joint gp'\n\n kern += GPy.kern.Bias(2)\n \n m = GPy.models.GPRegression(X, Y[id][:, None], kern)\n m.name = name\n m.optimize(messages=True)\n return m", "We'd like to display the model fits. The GPy software allows us to select which data to plot from the model. 
Below there's a function for plotting the data associated with each replicate alongside the fit.", "import matplotlib.pyplot as plt\nfrom IPython.display import display\n\ndef show_model_fit(m):\n display(m)\n fig, ax = plt.subplots(1, 3, sharex=False,sharey=True, figsize=(12, 3.5))\n symbols = ['x', 'x', 'x']\n replicate_str = 'replicate {}'\n #linecolors = [(1.,0.,0.), (0., 1., 0.), (0., 0., 1.)]\n mi, ma = np.inf, -np.inf\n for replicate in range(3):\n # Plot the result without noise (trying to estimate underlying gene expression)\n which_data_rows=np.nonzero(X[:,1]==replicate) \n data_symbol=symbols[replicate]\n pl = m.plot_f(ax=ax[replicate], fixed_inputs=[(1, replicate)], \n which_data_rows=which_data_rows,\n data_symbol=data_symbol,\n )\n ax[replicate].plot(m.X[which_data_rows, 0].flatten(), \n m.Y[which_data_rows].flatten(), \n data_symbol, c='k', mew=1.5,\n )\n ax[replicate].text(.98, .98, replicate_str.format(replicate), ha='right', va='top',\n transform=ax[replicate].transAxes)\n _mi, _ma = ax[replicate].get_ylim()\n if _mi < mi: mi = _mi\n if _ma > ma: ma = _ma\n for _ax in ax:\n _ax.set_ylim(mi, ma)\n ax[0].set_ylabel('gene expression [arbitrary]')\n ax[1].set_xlabel('time [hrs]')\n fig.text(0.5,1, m.name, ha='center', va='top', size=22, \n bbox=dict(facecolor='white', edgecolor='k', lw=.4, boxstyle='round'), clip_on=False)\n fig.tight_layout(rect=(0,0,1,1))\n #GPy.plotting.matplot_dep.base_plots.align_subplots(1, 3, xlim=ax[0].get_xlim(), ylim=ax[0].get_ylim())\n \ndef fit_and_display(probe_id):\n mc = fit_probe(probe_id, independent=True)\n mi = fit_probe(probe_id, independent=False)\n show_model_fit(mc)\n show_model_fit(mi)\n return mc, mi", "Now let's fit each of the two models to a given probe id. 
First we select a probe_id where the correspondence between the repeats is not very strong (even if it's there).", "mc, mi = fit_and_display('141201_at')", "Viewing the results here, for the independent model two of the fits adjudge there to be no signal. The constrained model shares information across the fits and can therefore determine better parameters. We can compute the log of the likelihood ratio:", "print mc.log_likelihood() - mi.log_likelihood()", "As a second gene, we consider 141200_at. This gene turns out to have a more consistent response across the three repeats.", "mc, mi = fit_and_display('141200_at')\n", "Naturally, the likelihood still favours the constrained model. \nOutlier Detection\nThe above model used Gaussian noise to fit the data (i.e. it assumed that gene expression has log-normally distributed noise). In the next example we will relax this assumption and use the Laplace approximation to fit a model with a Student-t noise model (degrees of freedom 5). This makes the analysis less sensitive to outliers.", "def fit_probe(id, independent=False):\n \"\"\"Fit a set of probe repeats as either independent or correlated.\"\"\"\n \n # set up a likelihood function, Student-t with 5 degrees of freedom.\n likelihood = GPy.likelihoods.StudentT(deg_free=5)\n # use the Laplace approximation for inference.\n inference = GPy.inference.latent_function_inference.laplace.Laplace()\n\n # set up the covariance function as before.\n lengthscale = 3.3\n if independent:\n kern = GPy.kern.RBF(1,lengthscale=lengthscale)**GPy.kern.Coregionalize(1,3, rank=0)\n name = 'independent gp'\n else:\n kern = GPy.kern.RBF(1,lengthscale=lengthscale)**GPy.kern.Coregionalize(1,3, rank=1)\n kern.coregion.W[0] = 1.\n kern.coregion.W[1] = 1.\n kern.coregion.W[2] = 1.\n kern.coregion.W.constrain_fixed()\n name = 'joint gp'\n\n kern += GPy.kern.Bias(2)\n \n m = GPy.core.GP(X, Y[id][:, None], likelihood=likelihood, inference_method=inference, kernel=kern, name=name)\n 
m.optimize(messages=True)\n return m\n\nmc, mi = fit_and_display('141201_at')\n\nmc, mi = fit_and_display('141200_at')", "Is a Gene Differentially Expressed?\nWhen testing whether the replicates are valid, we can reverse the question and ask whether two different genes are differentially expressed. <a href=\"http://online.liebertpub.com/doi/abs/10.1089/cmb.2009.0175\">Stegle et al</a> (in a more sophisticated way than our simple analysis) show how this idea can be used to determine whether a gene has been differentially expressed. In recent work with Paul Heath and Sura Zaki Al Rashir we are combining such differential expression studies with multiple conditions giving tests to determine under which of the conditions the genes are differentially expressed.\nwork funded by the BioPreDyn and RADIANT projects." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Python4AstronomersAndParticlePhysicists/PythonWorkshop-ICE
notebooks/02_01_TheLanguage.ipynb
mit
[ "The language (1h)\n1) Short historical overview (5m)\n2) Briefly talk about python 2/3 (5m)\n3) Getting started with python (15m)\n * Python terminal\n * iPython\n * Script with plain editors\n * Scripts with pycharm\n4) How python works (20m)\n * binaries, libraries and environments\n * exercises/examples\n * example: import library included\n * example: import library not installed\n * exercise: install library manually\n * example: install library pip\n5) Explain/show what jupyter notebooks are (15m)\n\nCode is running locally\nTypical shortcuts\nIt is possible to add comments, etc...\nPlots are properly shown, etc...\n\nThe language\nTo follow the first part of the tutorial (points 1 to 4), take a look at this presentation.\n5) Jupyter notebooks\nThe Jupyter Notebook is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and explanatory text.\nThese notebooks are composed of \"cells\" of code (or comments, as this one, in \"markdown\" format). \nYou can (and are encouraged to) take a look at the keyboard shortcut list, by going to Help -> \"Keyboard Shortcuts\"", "# While in a \"code cell\" (like this one) you may execute the content by pressing Ctrl+Enter\nimport sys\nprint(sys.version)", "By clicking (once) on a \"cell\" you are selecting it. You may use some keyboard shortcuts:\n* Add a new cell below by typing \"B\"\n* Add a new cell above by typing \"A\"\nYou may also change the cell type:\n* Turn selected cell into markdown: \"M\"\n* Turn selected cell into code: \"Y\"\nThe very basics\nThere are several things you should remember while using jupyter notebooks:\n\nCode is being executed locally on your machine, under the python environment in which jupyter is installed (even if you are using your browser)\nThe cells are supposed to be executed sequentially. 
If you execute the cells in a different order, errors may occur!\n\nA couple of examples:", "# If we try to get a random number from the \"random\" package:\nprint(random.randint(1,6))\n\n# Oops! We forgot to import the package (you are executing normal sequential python code)\n# If you execute this cell, you will import the random package\nimport random\n\n# If you try now to execute this cell again (identical to the one from before):\nprint(random.randint(1,6))", "Now it works!\nPlots\nWhen producing plots, jupyter notebooks nicely accommodate them within the notebook.\nAs an example, execute the following code:", "# To show plots inline:\n%matplotlib inline\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n# If you want a different plotting style, uncomment the following line. Useful to have large readable fonts\n#plt.style.use('seaborn-talk') \nx=np.linspace(0,100)\nplt.plot(x,x**2,'b.')", "Notebooks are also able to execute HTML code:", "%%HTML\n<img src=https://freethoughtblogs.com/affinity/files/2017/06/good-news-everyone.jpg >\n<b>Jupyter notebooks are able to render HTML</b>", "You can also show code in a formatted style (taking advantage of the power of the markdown format):\npython\nimport sys\nprint(sys.version)\nIn the following, the whole workshop will be using these notebooks. Before each lecture, remember several steps:\n* Organizers may update (many times, last minute) their notebooks, so remember to update your local project by executing:\n```bash\nFirst go to the path where you cloned the GitHub repository:\ncd PythonWorkshop-ICE\ngit pull\nIn case these commands show conflicts, and you **don't mind losing the changes you applied to the notebooks**, execute the following code to discard your changes and update the local notebooks to the latest version:bash\ngit reset --hard origin/master\ngit pull origin master\n```\nBack-up material\nMuch fancier things can be done with jupyter notebooks. 
If you want to check for more features, you can take a look at the following resources:\n* Most viewed notebooks: http://nb.bianp.net/sort/views/\n* https://github.com/cta-observatory/training-material/blob/master/tutorials/01_Intro_Python_fabio.ipynb\n* Advanced tricks: https://blog.dominodatalab.com/lesser-known-ways-of-using-notebooks/\n* Example: Interactive map https://app.dominodatalab.com/u/r00sj3/jupyter/view/batchdemo.ipynb" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mbdevpl/maildaemon
examples.ipynb
apache-2.0
[ "maildaemon package", "import maildaemon", "reading configuration", "cfg = maildaemon.load_config()\n\nconnections = cfg['connections']", "connecting, retrieving/sending messages", "from maildaemon.imap_daemon import IMAPDaemon\nfrom maildaemon.smtp_connection import SMTPConnection\nfrom maildaemon.pop_daemon import POPDaemon", "1and1", "one_and_one_imap = IMAPDaemon.from_dict(connections['1and1-imap'])\n\none_and_one_imap.connect()\n\none_and_one_imap.retrieve_folders_with_flags()\n\none_and_one_imap.retrieve_messages([1, 2, 3])\n\none_and_one_imap.disconnect()\n\none_and_one_smtp = SMTPConnection.from_dict(connections['1and1-smtp'])\n#one_and_one_pop = POPDaemon.from_dict(connections['1and1-pop'])\none_and_one_smtp.connect()\n#one_and_one_pop.connect()\n\none_and_one_smtp.disconnect()\n#one_and_one_pop.disconnect()", "Gmail", "gmail_imap = IMAPDaemon.from_dict(connections['gmail-imap'])\ngmail_imap.connect()\n\ngmail_imap.retrieve_folders_with_flags()\n\ngmail_imap.open_folder()\n\ngmail_imap.open_folder('[Gmail]/Sent Mail')\n\ngmail_imap.open_folder('[Gmail]/All Mail')\n\ngmail_imap.retrieve_message_ids()\n\ngmail_imap.retrieve_message_parts(1, ['UID', 'FLAGS'])\n\ngmail_imap.retrieve_messages_parts(range(1,100), ['ENVELOPE'], '[Gmail]/All Mail')\n\ngmail_imap.retrieve_messages_parts(range(1,100), ['ENVELOPE'])\n\nmessage_parts = gmail_imap.retrieve_message_parts(10, ['ENVELOPE'])\nmessage_parts\n\ngmail_imap.move_messages([10], '[Gmail]/Important')\n\ndef enable(self):\n self._link.enable('MOVE')\n\nenable(gmail_imap)\n\nmessage = gmail_imap.retrieve_message(1)\nmessage\n\ngmail_imap.disconnect()\n\ngmail_smtp = SMTPConnection.from_dict(connections['gmail-smtp'])\ngmail_smtp.connect()\n\ngmail_smtp.disconnect()", "iTSCOM", "#itscom_smtp = SMTPConnection.from_dict(connections['itscom-smtp'])\nitscom_pop = POPDaemon.from_dict(connections['itscom-pop'])\n\nitscom_pop.connect()\n\nitscom_pop.retrieve_message_ids()\n\nitscom_pop.disconnect()", "WIP: special 
handling of Gmail\nWIP: running daemon to maintain connections automatically\nWIP: filtering messages\nemail, email.message and email.parser packages\nhttps://docs.python.org/3/library/email.html\nhttps://docs.python.org/3/library/email.message.html\nhttps://docs.python.org/3/library/email.parser.html", "import email\n\nenvelope, body = message_parts\nemail_message = email.message_from_bytes(body) # type: email.message.Message\nprint(email_message.as_string())\nemail_message\n\nemail_message.defects\n\nparser = email.parser.BytesParser()\nmsg = parser.parsebytes(body, headersonly=False)\nprint(msg.as_string())\nmsg.items() == email_message.items()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
walkon302/CDIPS_Recommender
notebooks/Training_Network_to_Idenfitying_HandPicked_Classes.ipynb
apache-2.0
[ "Overview: Training Network for Useful Features.\nWe provide: \n- set of images that match along some interpretable feature. (e.g. striped dress)\n- a whole bunch of images that don't match\nCode: \n- estimates neural network features from trained resnet 50. \n- estimates weights for those neural network features to predict the interpretable feature class\n - do so with cross-validation. \n - regularized logistic regression. \n - other classifiers. \nEvaluation: \n- save out weights to use as new features (new features = w*original features)", "import sys \nimport os\nsys.path.append(os.getcwd()+'/../')\n\n# our lib\nfrom lib.resnet50 import ResNet50\nfrom lib.imagenet_utils import preprocess_input, decode_predictions\n\n#keras \nfrom keras.preprocessing import image\nfrom keras.models import Model\n\n# sklearn\nimport sklearn\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.model_selection import permutation_test_score\n\n# other\nimport numpy as np\nimport glob\nimport pandas as pd\nimport ntpath\n\n# plotting\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n\ndef preprocess_img(img_path):\n img = image.load_img(img_path, target_size=(224, 224))\n x = image.img_to_array(img)\n x = np.expand_dims(x, axis=0)\n x = preprocess_input(x)\n return(x,img)\n\n\ndef perf_measure(y_actual, y_hat):\n TP = 0\n FP = 0\n TN = 0\n FN = 0\n\n for i in range(len(y_hat)): \n if y_actual[i]==y_hat[i]==1:\n TP += 1\n for i in range(len(y_hat)): \n if (y_hat[i]==1) and (y_actual[i]!=y_hat[i]):\n FP += 1\n for i in range(len(y_hat)): \n if y_actual[i]==y_hat[i]==0:\n TN += 1\n for i in range(len(y_hat)): \n if (y_hat[i]==0) and (y_actual[i]!=y_hat[i]):\n FN += 1\n\n return(TP, FP, TN, FN)", "Extract NN Features", "# instantiate the model\nbase_model = ResNet50(include_top=False, weights='imagenet') #this will pull the weights from the folder \n\n# cut the model to lower levels only \nmodel = Model(input=base_model.input, 
output=base_model.get_layer('avg_pool').output)\n\n#img_paths = glob.glob('../img/baiyi/*')\n# \nimg_paths = glob.glob('../original_img/*')\nimg_paths[0:3]\n\n# single image\nx,img = preprocess_img(img_paths[0]) # preprocess\nmodel_output = model.predict(x)[0,0,0,:]\n\n\n\n\nlen(model_output)\n\n# create dataframe with all image features\nimg_feature_df = pd.DataFrame()\nfor i,img_path in enumerate(img_paths):\n    x,img = preprocess_img(img_path) # preprocess\n    model_output = model.predict(x)[0,0,0,:]\n    img_feature_df.loc[i,'img_path']=img_path\n    img_feature_df.loc[i,'nn_features']=str(list(model_output))\n\nimg_feature_df['img_name'] = img_feature_df['img_path'].apply(lambda x: ntpath.basename(x))\n\nimg_feature_df.head()\n\nimg_feature_df.to_csv('../data_nn_features/img_features_all.csv')", "Predicting Own Labels from Selected Images\n\nwithin a folder (find class 1, class 0). \n(split into test train)\nget matrix of img X features X class\nfit logistic regression (or other classifier) \nassess test set-fit. 
\nhtml (sample images used to define class; top and bottom predictions from test-set.)", "# get target and non-target lists\n\ndef create_image_class_dataframe(target_img_folder):\n\n\n    # all the image folders\n    non_target_img_folders = ['../original_img/']\n\n    \n    target_img_paths=glob.glob(target_img_folder+'*')\n    target_img_paths_stemless = [ntpath.basename(t) for t in target_img_paths]\n    non_target_img_paths =[]\n    for non_target_folder in non_target_img_folders:\n        for img_path in glob.glob(non_target_folder+'*'):\n            if ntpath.basename(img_path) not in target_img_paths_stemless: # remove targets from non-target list\n                non_target_img_paths.append(img_path)\n\n    # create data frame with image name and label\n    img_paths = np.append(target_img_paths,non_target_img_paths)\n    labels = np.append(np.ones(len(target_img_paths)),np.zeros(len(non_target_img_paths)))\n    df = pd.DataFrame(data=np.vstack((img_paths,labels)).T,columns=['img_path','label']) \n    df['img_name'] = df['img_path'].apply(lambda x: ntpath.basename(x)) # add image name\n    df['label'] = df['label'].apply(lambda x: float(x)) # add label \n\n    # load up features per image\n    img_feature_df = pd.read_csv('../data_nn_features/img_features_all.csv',index_col=0)\n    img_feature_df.head()\n\n\n    # create feature matrix out of loaded up features. 
\n for i,row in df.iterrows():\n features = img_feature_df.loc[img_feature_df.img_name==row['img_name'],'nn_features'].as_matrix()[0].replace(']','').replace('[','').split(',')\n features = [np.float(f) for f in features]\n lab = row['img_name']\n if i==0:\n X = features\n labs = lab\n else:\n X = np.vstack((X,features))\n labs = np.append(labs,lab)\n\n xcolumns = ['x'+str(i) for i in np.arange(X.shape[1])]\n X_df = pd.DataFrame(np.hstack((labs[:,np.newaxis],X)),columns=['img_name']+xcolumns)\n\n # merge together \n df = df.merge(X_df,on='img_name')\n \n # make sure there is only one instance per image in dataframe\n lens = np.array([])\n for img_name in df.img_name.unique():\n lens = np.append(lens,len(df.loc[df.img_name==img_name]))\n\n\n assert len(np.unique(lens)[:])==1\n \n return(df)\n \n\n# remove some non-targets to make dataset smaller #\n# i_class0 = np.where(df.label==0.0)[0]\n# i_class0_remove = np.random.choice(i_class0,int(np.round(len(i_class0)/1.1)))\n# df_smaller = df.drop(i_class0_remove)\n#df_smaller.to_csv('test.csv')", "Horizontal Striped Data", "# image folder \ntarget_img_folder ='../data_img_classes/class_horiztonal_striped/'\ndf = create_image_class_dataframe(target_img_folder)\ndf.head()\n\nprint('target class')\nplt.figure(figsize=(12,3))\nfor i in range(5):\n img_path= df['img_path'][i]\n img = image.load_img(img_path, target_size=(224, 224))\n plt.subplot(1,5,i+1)\n plt.imshow(img)\n plt.grid(b=False)\n\nxcolumns=['x'+str(i) for i in np.arange(2048)]\nX = df.loc[:,xcolumns].as_matrix().astype('float')\ny= df.loc[:,'label'].as_matrix().astype('float')\nX_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(X,y,stratify=y,test_size=.33)\nprint(' training shape {0} \\n testing shape {1}').format(X_train.shape,X_test.shape)\nprint('\\n target/non-target \\n (train) {0}\\{1} \\n (test) {2}\\{3}').format(y_train.sum(),(1-y_train).sum(),y_test.sum(),(1-y_test).sum())\n\n# classifiers \nC = 1.0\nclf_LR = 
LogisticRegression(C=C, penalty='l1', tol=0.01)\nclf_svm = sklearn.svm.SVC(C=C,kernel='linear')\n\n\nclf_LR.fit(X_train, y_train)\nclf_svm.fit(X_train, y_train)\n\ncoef = clf_LR.coef_[0,:]\nplt.figure(figsize=(12,3))\nsns.set_style('white')\nplt.scatter(np.arange(len(coef)),coef)\nplt.xlabel('nnet feature')\nplt.ylabel('LogReg coefficient')\nsns.despine()\n\n#len(coef)\n\ny_pred = clf_LR.predict(X_test)\n\n(TP,FP,TN,FN) =perf_measure(y_test,y_pred)\nprint('TruePos:{0}\\nFalsePos:{1}\\nTrueNeg:{2}\\nFalseNeg:{3}').format(TP,FP,TN,FN)\n\ny_pred = clf_svm.predict(X_test)\n\n(TP,FP,TN,FN) =perf_measure(y_test,y_pred)\nprint('TruePos:{0}\\nFalsePos:{1}\\nTrueNeg:{2}\\nFalseNeg:{3}').format(TP,FP,TN,FN)", "neither the svm nor the logistic reg is doing well", "# from sklearn.model_selection import StratifiedKFold\n# skf = StratifiedKFold(n_splits=5,shuffle=True)\n# for train, test in skf.split(X, y):\n#     #print(\"%s %s\" % (train, test))\n#     C=1.0\n#     clf_LR = LogisticRegression(C=C, penalty='l1', tol=0.01)\n#     clf_LR.fit(X[train], y[train])\n#     y_pred = clf_LR.predict(X[test])\n#     (TP,FP,TN,FN) =perf_measure(y[test],y_pred)\n#     print('\\nTruePos:{0}\\nFalsePos:{1}\\nTrueNeg:{2}\\nFalseNeg:{3}').format(TP,FP,TN,FN)\n\nfrom sklearn.model_selection import StratifiedKFold\nclf_LR = LogisticRegression(C=C, penalty='l1', tol=0.01)\nskf = StratifiedKFold(n_splits=5,shuffle=True)\nscore, permutation_scores, pvalue = permutation_test_score(\n    clf_LR, X, y, scoring=\"accuracy\", cv=skf, n_permutations=100)\n\n#\n\nplt.hist(permutation_scores)\nplt.axvline(score)\nsns.despine()\nplt.xlabel('accuracy')\nprint(pvalue)", "the accuracy achieved is above chance (as determined by permutation testing)\n\nRed / Pink Data", "# image folder \ntarget_img_folder ='../data_img_classes/class_red_pink/'\ndf = create_image_class_dataframe(target_img_folder)\ndf.head()\n\ndf.columns.values[-1]\n\nprint('target class')\nplt.figure(figsize=(12,3))\nfor i in range(5):\n    img_path= df['img_path'][i+1]\n    img = image.load_img(img_path, target_size=(224, 224))\n    
plt.subplot(1,5,i+1)\n    plt.imshow(img)\n    plt.grid(b=False)", "Split Set Assessment", "# split data \nxcolumns=['x'+str(i) for i in np.arange(2048)]\nX = df.loc[:,xcolumns].as_matrix().astype('float')\ny= df.loc[:,'label'].as_matrix().astype('float')\nX_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(X,y,stratify=y,test_size=.33)\nprint(' training shape {0} \\n testing shape {1}').format(X_train.shape,X_test.shape)\nprint('\\n target/non-target \\n (train) {0}\\{1} \\n (test) {2}\\{3}').format(y_train.sum(),(1-y_train).sum(),y_test.sum(),(1-y_test).sum())\n\n\n\n# Train\nC = 1.0\nclf_LR = LogisticRegression(C=C, penalty='l1', tol=0.01)\nclf_LR.fit(X_train, y_train)\n\n# test \ny_pred = clf_LR.predict(X_test)\n(TP,FP,TN,FN) =perf_measure(y_test,y_pred)\nprint('TruePos:{0}\\nFalsePos:{1}\\nTrueNeg:{2}\\nFalseNeg:{3}').format(TP,FP,TN,FN)", "classification performance is much better on this dataset\n\nPermutation Assessment", "from sklearn.model_selection import StratifiedKFold\nC = 1.0\nclf_LR = LogisticRegression(C=C, penalty='l1', tol=0.01)\nskf = StratifiedKFold(n_splits=5,shuffle=True)\nscore, permutation_scores, pvalue = permutation_test_score(\n    clf_LR, X, y, scoring=\"accuracy\", cv=skf, n_permutations=100)\n\nplt.hist(permutation_scores)\nplt.axvline(score)\nsns.despine()\nplt.xlabel('accuracy')\nplt.title('permutation test on test set classification')\nprint(pvalue)", "Re-train on whole dataset", "C = 1.0\nclf_LR = LogisticRegression(C=C, penalty='l1', tol=0.01)\nclf_LR.fit(X, y)\n\ncoef = clf_LR.coef_[0,:]\nplt.figure(figsize=(12,3))\nsns.set_style('white')\nplt.scatter(np.arange(len(coef)),coef)\nplt.xlabel('nnet feature')\nplt.ylabel('LogReg coefficient')\nsns.despine()\n\nlen(coef)", "Save out", "np.savetxt('../data_nn_features/class_weights_LR_redpink.txt',coef)", "Save", "%%bash\njupyter nbconvert --to html Training_Network_to_Idenfitying_HandPicked_Classes.ipynb && mv Training_Network_to_Idenfitying_HandPicked_Classes.html 
../notebook_htmls/Training_Network_to_Idenfitying_HandPicked_Classes_v2.html\ncp Training_Network_to_Idenfitying_HandPicked_Classes.ipynb ../notebook_versions/Training_Network_to_Idenfitying_HandPicked_Classes_v2.ipynb\n\n\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
cstrelioff/ARM-ipynb
Chapter2/chptr2.3.ipynb
mit
[ "2.3: Classical confidence intervals", "from __future__ import print_function, division\n%matplotlib inline\n\nimport matplotlib\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n# use matplotlib style sheet\nplt.style.use('ggplot')", "CI for continuous data, Pg 18", "# import the t-distribution from scipy.stats\nfrom scipy.stats import t\n\ny = np.array([35,34,38,35,37])\ny\n\nn = len(y)\nn\n\nestimate = np.mean(y)\nestimate", "Numpy uses a denominator of N in the standard deviation calculation by\ndefault, instead of N-1. To use N-1, the unbiased estimator-- and to\nagree with the R output, we have to give np.std() the argument ddof=1:", "se = np.std(y, ddof=1)/np.sqrt(n)\nse\n\nint50 = estimate + t.ppf([0.25, 0.75], n-1)*se\nint50\n\nint95 = estimate + t.ppf([0.025, 0.975], n-1)*se\nint95", "CI for proportions, Pg 18", "from scipy.stats import norm\n\ny = 700\ny\n\nn = 1000\nn\n\nestimate = y/n\nestimate\n\nse = np.sqrt(estimate*(1-estimate)/n)\nse\n\nint95 = estimate + norm.ppf([.025,0.975])*se\nint95", "CI for discrete data, Pg 18", "y = np.repeat([0,1,2,3,4], [600,300, 50, 30, 20])\ny\n\nn = len(y)\nn\n\nestimate = np.mean(y)\nestimate", "See the note above about the difference different defaults for standard\ndeviation in Python and R.", "se = np.std(y, ddof=1)/np.sqrt(n)\nse\n\nint50 = estimate + t.ppf([0.25, 0.75], n-1)*se\nint50\n\nint95 = estimate + t.ppf([0.025, 0.975], n-1)*se\nint95", "Plot Figure 2.3, Pg 19\nThe polls.dat file has an unusual format. 
The data that we would like to\nhave in a single row is split across 4 rows:\n\nyear month\npercentage support\npercentage against\npercentage no opinion\n\nThe data seems to be a subset of the Gallup data, available here:\nhttp://www.gallup.com/poll/1606/Death-Penalty.aspx\nWe can see the unusual layout using the bash command head (linux/osx only,\nsorry..)", "%%bash\nhead ../../ARM_Data/death.polls/polls.dat", "Using knowledge of the file layout we can read in the file and pre-process into\nappropriate rows/columns for passing into a pandas dataframe:", "# Data is available in death.polls directory of ARM_Data\ndata = []\ntemp = []\nncols = 5\nwith open(\"../../ARM_Data/death.polls/polls.dat\") as f:\n    for line in f.readlines():\n        for d in line.strip().split(' '):\n            temp.append(float(d))\n            if (len(temp) == ncols):\n                data.append(temp)\n                temp = []\n\npolls = pd.DataFrame(data, columns=[u'year', u'month', u'perc for', \n                                    u'perc against', u'perc no opinion'])\npolls.head()\n\n# --Note: this gives the (percent) support for those that have an opinion\n# --The percentage with no opinion is ignored\n# --This results in a difference between our plot (below) and the Gallup plot (link above)\npolls[u'support'] = polls[u'perc for']/(polls[u'perc for']+polls[u'perc against'])\npolls.head()\n\npolls[u'year_float'] = polls[u'year'] + (polls[u'month']-6)/12\npolls.head()\n\n# add error column -- symmetric so only add one column\n# assumes sample size N=1000\n# uses +/- 1 standard error, resulting in 68% confidence\npolls[u'support_error'] = np.sqrt(polls[u'support']*(1-polls[u'support'])/1000)\npolls.head()\n\nfig, ax = plt.subplots(figsize=(8, 6))\nplt.errorbar(polls[u'year_float'], 100*polls[u'support'],\n             yerr=100*polls[u'support_error'], fmt='ko',\n             ms=4, capsize=0)\nplt.ylabel(u'Percentage support for the death penalty')\nplt.xlabel(u'Year')\n\n# you can adjust y-limits with command like below\n# I will leave the default behavior\n#plt.ylim(np.min(100*polls[u'support'])-2, 
np.max(100*polls[u'support']+2))", "Weighted averages, Pg 19\nThe example R-code for this part is incomplete, so I will make up N, p and\nse loosely related to the text on page 19.", "N = np.array([66030000, 81083600, 60788845])\np = np.array([0.55, 0.61, 0.38])\nse = np.array([0.02, 0.03, 0.03])\n\nw_avg = np.sum(N*p)/np.sum(N)\nw_avg\n\nse_w_avg = np.sqrt(np.sum((N*se/np.sum(N))**2))\nse_w_avg\n\n# this uses +/- 2 std devs\nint_95 = w_avg + np.array([-2,2])*se_w_avg\nint_95", "CI using simulations, Pg 20", "# import the normal from scipy.stats\n# repeated to make sure that it is clear that it is needed for this section\nfrom scipy.stats import norm\n\n# also need this for estimating CI from samples\nfrom scipy.stats.mstats import mquantiles\n\nn_men = 500\nn_men\n\np_hat_men = 0.75\np_hat_men\n\nse_men = np.sqrt(p_hat_men*(1.-p_hat_men)/n_men)\nse_men\n\nn_women = 500\nn_women\n\np_hat_women = 0.65\np_hat_women\n\nse_women = np.sqrt(p_hat_women*(1.-p_hat_women)/n_women)\nse_women\n\nn_sims = 10000\nn_sims\n\np_men = norm.rvs(size=n_sims, loc=p_hat_men, scale=se_men)\np_men[:10] # show first ten\n\np_women = norm.rvs(size=n_sims, loc=p_hat_women, scale=se_women)\np_women[:10] # show first ten\n\nratio = p_men/p_women\nratio[:10] # show first ten\n\n# the values of alphap and betap replicate the R default behavior\n# see http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.mstats.mquantiles.html\nint95 = mquantiles(ratio, prob=[0.025,0.975], alphap=1., betap=1.)\nint95" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
cathywu/flow
tutorials/tutorial03_rllib.ipynb
mit
[ "Tutorial 03: Running RLlib Experiments\nThis tutorial walks you through the process of running traffic simulations in Flow with trainable RLlib-powered agents. Autonomous agents will learn to maximize a certain reward over the rollouts, using the RLlib library (citation) (installation instructions). Simulations of this form will depict the propensity of RL agents to influence the traffic of a human fleet in order to make the whole fleet more efficient (for some given metrics). \nIn this exercise, we simulate an initially perturbed single lane ring road, where we introduce a single autonomous vehicle. We witness that, after some training, the autonomous vehicle learns to dissipate the formation and propagation of \"phantom jams\" which form when only human driver dynamics are involved.\n1. Components of a Simulation\nAll simulations, both in the presence and absence of RL, require two components: a scenario, and an environment. Scenarios describe the features of the transportation network used in simulation. This includes the positions and properties of nodes and edges constituting the lanes and junctions, as well as properties of the vehicles, traffic lights, inflows, etc... in the network. Environments, on the other hand, initialize, reset, and advance simulations, and act as the primary interface between the reinforcement learning algorithm and the scenario. Moreover, custom environments may be used to modify the dynamical features of a scenario. Finally, in the RL case, it is in the environment that the state/action spaces and the reward function are defined. \n2. Setting up a Scenario\nFlow contains a plethora of pre-designed scenarios used to replicate highways, intersections, and merges in both closed and open settings. All these scenarios are located in flow/scenarios. 
For this exercise, which involves a single lane ring road, we will use the scenario LoopScenario.\n2.1 Setting up Scenario Parameters\nThe scenario mentioned at the start of this section, as well as all other scenarios in Flow, are parameterized by the following arguments: \n* name\n* vehicles\n* net_params\n* initial_config\nThese parameters are explained in detail in exercise 1. Moreover, all parameters excluding vehicles (covered in section 2.2) do not change from the previous exercise. Accordingly, we specify them nearly as we have before, and leave further explanations of the parameters to exercise 1.\nOne important difference between SUMO and RLlib experiments is that, in RLlib experiments, the scenario classes are not imported, but rather called via their string names which (for serialization and execution purposes) must be located within flow/scenarios/__init__.py. To check which scenarios are currently available, we execute the below command.", "import flow.scenarios as scenarios\n\nprint(scenarios.__all__)", "Accordingly, to use the ring road scenario for this tutorial, we specify its (string) name as follows:", "# ring road scenario class\nscenario_name = \"LoopScenario\"", "Another difference between SUMO and RLlib experiments is that, in RLlib experiments, the scenario classes do not need to be defined; instead users should simply name the scenario class they wish to use. 
Later on, an environment setup module will import the correct scenario class based on the provided names.", "# input parameter classes to the scenario class\nfrom flow.core.params import NetParams, InitialConfig\n\n# name of the scenario\nname = \"training_example\"\n\n# network-specific parameters\nfrom flow.scenarios.loop import ADDITIONAL_NET_PARAMS\nnet_params = NetParams(additional_params=ADDITIONAL_NET_PARAMS)\n\n# initial configuration to vehicles\ninitial_config = InitialConfig(spacing=\"uniform\", perturbation=1)", "2.2 Adding Trainable Autonomous Vehicles\nThe Vehicles class stores state information on all vehicles in the network. This class is used to identify the dynamical features of a vehicle and whether it is controlled by a reinforcement learning agent. Moreover, information pertaining to the observations and reward function can be collected from various get methods within this class.\nThe dynamics of vehicles in the Vehicles class can either be depicted by sumo or by the dynamical methods located in flow/controllers. For human-driven vehicles, we use the IDM model for acceleration behavior, with exogenous gaussian acceleration noise with std 0.2 m/s2 to induce perturbations that produce stop-and-go behavior. In addition, we use the ContinuousRouter routing controller so that the vehicles may maintain their routes in closed networks.\nAs we have done in exercise 1, human-driven vehicles are defined in the Vehicles class as follows:", "# vehicles class\nfrom flow.core.params import VehicleParams\n\n# vehicles dynamics models\nfrom flow.controllers import IDMController, ContinuousRouter\n\nvehicles = VehicleParams()\nvehicles.add(\"human\",\n             acceleration_controller=(IDMController, {}),\n             routing_controller=(ContinuousRouter, {}),\n             num_vehicles=21)", "The above addition to the Vehicles class only accounts for 21 of the 22 vehicles that are placed in the network. 
We now add an additional trainable autonomous vehicle whose actions are dictated by an RL agent. This is done by specifying an RLController as the acceleration controller to the vehicle.", "from flow.controllers import RLController", "Note that this controller serves primarily as a placeholder that marks the vehicle as a component of the RL agent, meaning that lane changing and routing actions can also be specified by the RL agent to this vehicle.\nWe finally add the vehicle as follows, while again using the ContinuousRouter to perpetually maintain the vehicle within the network.", "vehicles.add(veh_id=\"rl\",\n             acceleration_controller=(RLController, {}),\n             routing_controller=(ContinuousRouter, {}),\n             num_vehicles=1)", "3. Setting up an Environment\nSeveral environments in Flow exist to train RL agents of different forms (e.g. autonomous vehicles, traffic lights) to perform a variety of different tasks. The use of an environment allows us to view the cumulative reward simulation rollouts receive, and to specify the state/action spaces.\nEnvironments in Flow are parametrized by three components:\n* env_params\n* sumo_params\n* scenario\n3.1 SumoParams\nSumoParams specifies simulation-specific variables. These variables include the length of any simulation step and whether to render the GUI when running the experiment. For this example, we consider a simulation step length of 0.1s and activate the GUI. \nNote For training purposes, it is highly recommended to deactivate the GUI in order to avoid global slow down. In such a case, one just needs to specify the following: render=False", "from flow.core.params import SumoParams\n\nsumo_params = SumoParams(sim_step=0.1, render=False)", "3.2 EnvParams\nEnvParams specifies environment and experiment-specific parameters that either affect the training process or the dynamics of various components within the scenario. 
For the environment WaveAttenuationPOEnv, these parameters are used to dictate bounds on the accelerations of the autonomous vehicles, as well as the range of ring lengths (and accordingly network densities) the agent is trained on.\nFinally, it is important to specify here the horizon of the experiment, which is the duration of one episode (during which the RL agent acquires data).", "from flow.core.params import EnvParams\n\n# Define horizon as a variable to ensure consistent use across notebook\nHORIZON=100\n\nenv_params = EnvParams(\n    # length of one rollout\n    horizon=HORIZON,\n\n    additional_params={\n        # maximum acceleration of autonomous vehicles\n        \"max_accel\": 1,\n        # maximum deceleration of autonomous vehicles\n        \"max_decel\": 1,\n        # bounds on the ranges of ring road lengths the autonomous vehicle \n        # is trained on\n        \"ring_length\": [220, 270],\n    },\n)", "3.3 Initializing a Gym Environment\nNow, we have to specify our Gym Environment and the algorithm that our RL agents will use. To specify the environment, one has to use the environment's name (a simple string). A list of all environment names is located in flow/envs/__init__.py. The names of available environments can be seen below.", "import flow.envs as flowenvs\n\nprint(flowenvs.__all__)", "We will use the environment \"WaveAttenuationPOEnv\", which is used to train autonomous vehicles to attenuate the formation and propagation of waves in a partially observable variable density ring road. To create the Gym Environment, the only necessary parameters are the environment name plus the previously defined variables. These are defined as follows:", "env_name = \"WaveAttenuationPOEnv\"", "3.4 Setting up Flow Parameters\nRLlib and rllab experiments both generate a params.json file for each experiment run. For RLlib experiments, the parameters defining the Flow scenario and environment must be stored as well. 
As such, in this section we define the dictionary flow_params, which contains the variables required by the utility function make_create_env. make_create_env is a higher-order function which returns a function create_env that initializes a Gym environment corresponding to the Flow scenario specified.", "# Creating flow_params. Make sure the dictionary keys are as specified. \nflow_params = dict(\n # name of the experiment\n exp_tag=name,\n # name of the flow environment the experiment is running on\n env_name=env_name,\n # name of the scenario class the experiment uses\n scenario=scenario_name,\n # simulator that is used by the experiment\n simulator='traci',\n # sumo-related parameters (see flow.core.params.SumoParams)\n sim=sumo_params,\n # environment related parameters (see flow.core.params.EnvParams)\n env=env_params,\n # network-related parameters (see flow.core.params.NetParams and\n # the scenario's documentation or ADDITIONAL_NET_PARAMS component)\n net=net_params,\n # vehicles to be placed in the network at the start of a rollout \n # (see flow.core.vehicles.Vehicles)\n veh=vehicles,\n # (optional) parameters affecting the positioning of vehicles upon \n # initialization/reset (see flow.core.params.InitialConfig)\n initial=initial_config\n)", "4 Running RL experiments in Ray\n4.1 Import\nFirst, we must import modules required to run experiments in Ray. The json package is required to store the Flow experiment parameters in the params.json file, as is FlowParamsEncoder. 
Ray-related imports are required: the PPO algorithm agent, ray.tune's experiment runner, and environment helper methods register_env and make_create_env.", "import json\n\nimport ray\ntry:\n from ray.rllib.agents.agent import get_agent_class\nexcept ImportError:\n from ray.rllib.agents.registry import get_agent_class\nfrom ray.tune import run_experiments\nfrom ray.tune.registry import register_env\n\nfrom flow.utils.registry import make_create_env\nfrom flow.utils.rllib import FlowParamsEncoder", "4.2 Initializing Ray\nHere, we initialize Ray and experiment-based constant variables specifying parallelism in the experiment as well as experiment batch size in terms of number of rollouts. redirect_output sends stdout and stderr for non-worker processes to files if True.", "# number of parallel workers\nN_CPUS = 2\n# number of rollouts per training iteration\nN_ROLLOUTS = 1\n\nray.init(redirect_output=True, num_cpus=N_CPUS)", "4.3 Configuration and Setup\nHere, we copy and modify the default configuration for the PPO algorithm. The agent has the number of parallel workers specified, a batch size corresponding to N_ROLLOUTS rollouts (each of which has length HORIZON steps), a discount rate $\\gamma$ of 0.999, two hidden layers of size 16, uses Generalized Advantage Estimation, $\\lambda$ of 0.97, and other parameters as set below.\nOnce config contains the desired parameters, a JSON string corresponding to the flow_params specified in section 3 is generated. The FlowParamsEncoder maps objects to string representations so that the experiment can be reproduced later. That string representation is stored within the env_config section of the config dictionary. Later, config is written out to the file params.json. \nNext, we call make_create_env and pass in the flow_params to return a function we can use to register our Flow environment with Gym.", "# The algorithm or model to train. This may refer to \"\n# \"the name of a built-on algorithm (e.g. 
RLLib's DQN \"\n# \"or PPO), or a user-defined trainable function or \"\n# \"class registered in the tune registry.\")\nalg_run = \"PPO\"\n\nagent_cls = get_agent_class(alg_run)\nconfig = agent_cls._default_config.copy()\nconfig[\"num_workers\"] = N_CPUS - 1 # number of parallel workers\nconfig[\"train_batch_size\"] = HORIZON * N_ROLLOUTS # batch size\nconfig[\"gamma\"] = 0.999 # discount rate\nconfig[\"model\"].update({\"fcnet_hiddens\": [16, 16]}) # size of hidden layers in network\nconfig[\"use_gae\"] = True # using generalized advantage estimation\nconfig[\"lambda\"] = 0.97 \nconfig[\"sgd_minibatch_size\"] = min(16 * 1024, config[\"train_batch_size\"]) # stochastic gradient descent\nconfig[\"kl_target\"] = 0.02 # target KL divergence\nconfig[\"num_sgd_iter\"] = 10 # number of SGD iterations\nconfig[\"horizon\"] = HORIZON # rollout horizon\n\n# save the flow params for replay\nflow_json = json.dumps(flow_params, cls=FlowParamsEncoder, sort_keys=True,\n indent=4) # generating a string version of flow_params\nconfig['env_config']['flow_params'] = flow_json # adding the flow_params to config dict\nconfig['env_config']['run'] = alg_run\n\n# Call the utility function make_create_env to be able to \n# register the Flow env for this experiment\ncreate_env, gym_name = make_create_env(params=flow_params, version=0)\n\n# Register as rllib env with Gym\nregister_env(gym_name, create_env)", "4.4 Running Experiments\nHere, we use the run_experiments function from ray.tune. 
The function takes a dictionary with one key, a name corresponding to the experiment, and one value, itself a dictionary containing parameters for training.", "trials = run_experiments({\n    flow_params[\"exp_tag\"]: {\n        \"run\": alg_run,\n        \"env\": gym_name,\n        \"config\": {\n            **config\n        },\n        \"checkpoint_freq\": 1,  # number of iterations between checkpoints\n        \"checkpoint_at_end\": True,  # generate a checkpoint at the end\n        \"max_failures\": 999,\n        \"stop\": {  # stopping conditions\n            \"training_iteration\": 1,  # number of iterations to stop after\n        },\n    },\n})", "4.5 Visualizing the results\nThe simulation results are saved within the ray_results/training_example directory (we defined training_example at the start of this tutorial). The ray_results folder is by default located at your root ~/ray_results. \nYou can run tensorboard --logdir=~/ray_results/training_example (install it with pip install tensorboard) to visualize the different data outputted by your simulation.\nFor more instructions about visualizing, please see tutorial05_visualize.ipynb. \n4.6 Restart from a checkpoint / Transfer learning\nIf you wish to do transfer learning, or to resume a previous training, you will need to start the simulation from a previous checkpoint. To do that, you can add a restore parameter in the run_experiments argument, as follows:\npython\ntrials = run_experiments({\n    flow_params[\"exp_tag\"]: {\n        \"run\": alg_run,\n        \"env\": gym_name,\n        \"config\": {\n            **config\n        },\n        \"restore\": \"/ray_results/experiment/dir/checkpoint_50/checkpoint-50\",\n        \"checkpoint_freq\": 1,\n        \"checkpoint_at_end\": True,\n        \"max_failures\": 999,\n        \"stop\": {\n            \"training_iteration\": 1,\n        },\n    },\n})\nThe \"restore\" path should be such that the [restore]/.tune_metadata file exists.\nThere is also a \"resume\" parameter that you can set to True if you just wish to continue the training from a previously saved checkpoint, in case you are still training on the same experiment." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
AllenDowney/CompStats
sampling_soln.ipynb
mit
[ "Random Sampling\nCopyright 2016 Allen Downey\nLicense: Creative Commons Attribution 4.0 International", "from __future__ import print_function, division\n\nimport numpy\nimport scipy.stats\n\nimport matplotlib.pyplot as pyplot\n\nfrom ipywidgets import interact, interactive, fixed\nimport ipywidgets as widgets\n\n# seed the random number generator so we all get the same results\nnumpy.random.seed(18)\n\n# some nicer colors from http://colorbrewer2.org/\nCOLOR1 = '#7fc97f'\nCOLOR2 = '#beaed4'\nCOLOR3 = '#fdc086'\nCOLOR4 = '#ffff99'\nCOLOR5 = '#386cb0'\n\n%matplotlib inline", "Part One\nSuppose we want to estimate the average weight of men and women in the U.S.\nAnd we want to quantify the uncertainty of the estimate.\nOne approach is to simulate many experiments and see how much the results vary from one experiment to the next.\nI'll start with the unrealistic assumption that we know the actual distribution of weights in the population. Then I'll show how to solve the problem without that assumption.\nBased on data from the BRFSS, I found that the distribution of weight in kg for women in the U.S. is well modeled by a lognormal distribution with the following parameters:", "weight = scipy.stats.lognorm(0.23, 0, 70.8)\nweight.mean(), weight.std()", "Here's what that distribution looks like:", "xs = numpy.linspace(20, 160, 100)\nys = weight.pdf(xs)\npyplot.plot(xs, ys, linewidth=4, color=COLOR1)\npyplot.xlabel('weight (kg)')\npyplot.ylabel('PDF')\nNone", "make_sample draws a random sample from this distribution. The result is a NumPy array.", "def make_sample(n=100):\n sample = weight.rvs(n)\n return sample", "Here's an example with n=100. 
The mean and std of the sample are close to the mean and std of the population, but not exact.", "sample = make_sample(n=100)\nsample.mean(), sample.std()", "We want to estimate the average weight in the population, so the \"sample statistic\" we'll use is the mean:", "def sample_stat(sample):\n return sample.mean()", "One iteration of \"the experiment\" is to collect a sample of 100 women and compute their average weight.\nWe can simulate running this experiment many times, and collect a list of sample statistics. The result is a NumPy array.", "def compute_sampling_distribution(n=100, iters=1000):\n stats = [sample_stat(make_sample(n)) for i in range(iters)]\n return numpy.array(stats)", "The next line runs the simulation 1000 times and puts the results in\nsample_means:", "sample_means = compute_sampling_distribution(n=100, iters=1000)", "Let's look at the distribution of the sample means. This distribution shows how much the results vary from one experiment to the next.\nRemember that this distribution is not the same as the distribution of weight in the population. This is the distribution of results across repeated imaginary experiments.", "pyplot.hist(sample_means, color=COLOR5)\npyplot.xlabel('sample mean (n=100)')\npyplot.ylabel('count')\nNone", "The mean of the sample means is close to the actual population mean, which is nice, but not actually the important part.", "sample_means.mean()", "The standard deviation of the sample means quantifies the variability from one experiment to the next, and reflects the precision of the estimate.\nThis quantity is called the \"standard error\".", "std_err = sample_means.std()\nstd_err", "We can also use the distribution of sample means to compute a \"90% confidence interval\", which contains 90% of the experimental results:", "conf_int = numpy.percentile(sample_means, [5, 95])\nconf_int", "Now we'd like to see what happens as we vary the sample size, n. 
The following function takes n, runs 1000 simulated experiments, and summarizes the results.", "def plot_sampling_distribution(n, xlim=None):\n \"\"\"Plot the sampling distribution.\n \n n: sample size\n xlim: [xmin, xmax] range for the x axis \n \"\"\"\n sample_stats = compute_sampling_distribution(n, iters=1000)\n se = numpy.std(sample_stats)\n ci = numpy.percentile(sample_stats, [5, 95])\n \n pyplot.hist(sample_stats, color=COLOR2)\n pyplot.xlabel('sample statistic')\n pyplot.xlim(xlim)\n text(0.03, 0.95, 'CI [%0.2f %0.2f]' % tuple(ci))\n text(0.03, 0.85, 'SE %0.2f' % se)\n pyplot.show()\n \ndef text(x, y, s):\n \"\"\"Plot a string at a given location in axis coordinates.\n \n x: coordinate\n y: coordinate\n s: string\n \"\"\"\n ax = pyplot.gca()\n pyplot.text(x, y, s,\n horizontalalignment='left',\n verticalalignment='top',\n transform=ax.transAxes)", "Here's a test run with n=100:", "plot_sampling_distribution(100)", "Now we can use interact to run plot_sampling_distribution with different values of n. Note: xlim sets the limits of the x-axis so the figure doesn't get rescaled as we vary n.", "def sample_stat(sample):\n return sample.mean()\n\nslider = widgets.IntSlider(min=10, max=1000, value=100)\ninteract(plot_sampling_distribution, n=slider, xlim=fixed([55, 95]))\nNone", "Other sample statistics\nThis framework works with any other quantity we want to estimate. 
By changing sample_stat, you can compute the SE and CI for any sample statistic.\nExercise 1: Fill in sample_stat below with any of these statistics:\n\nStandard deviation of the sample.\nCoefficient of variation, which is the sample standard deviation divided by the sample mean.\nMin or Max\nMedian (which is the 50th percentile)\n10th or 90th percentile.\nInterquartile range (IQR), which is the difference between the 75th and 25th percentiles.\n\nNumPy array methods you might find useful include std, min, max, and percentile.\nDepending on the results, you might want to adjust xlim.", "def sample_stat(sample):\n # TODO: replace the following line with another sample statistic\n return sample.mean()\n\nslider = widgets.IntSlider(min=10, max=1000, value=100)\ninteract(plot_sampling_distribution, n=slider, xlim=fixed([0, 100]))\nNone", "STOP HERE\nWe will regroup and discuss before going on.\nPart Two\nSo far we have shown that if we know the actual distribution of the population, we can compute the sampling distribution for any sample statistic, and from that we can compute SE and CI.\nBut in real life we don't know the actual distribution of the population. If we did, we wouldn't be doing statistical inference in the first place!\nIn real life, we use the sample to build a model of the population distribution, then use the model to generate the sampling distribution. A simple and popular way to do that is \"resampling,\" which means we use the sample itself as a model of the population distribution and draw samples from it.\nBefore we go on, I want to collect some of the code from Part One and organize it as a class. 
This class represents a framework for computing sampling distributions.", "class Resampler(object):\n \"\"\"Represents a framework for computing sampling distributions.\"\"\"\n \n def __init__(self, sample, xlim=None):\n \"\"\"Stores the actual sample.\"\"\"\n self.sample = sample\n self.n = len(sample)\n self.xlim = xlim\n \n def resample(self):\n \"\"\"Generates a new sample by choosing from the original\n sample with replacement.\n \"\"\"\n new_sample = numpy.random.choice(self.sample, self.n, replace=True)\n return new_sample\n \n def sample_stat(self, sample):\n \"\"\"Computes a sample statistic using the original sample or a\n simulated sample.\n \"\"\"\n return sample.mean()\n \n def compute_sampling_distribution(self, iters=1000):\n \"\"\"Simulates many experiments and collects the resulting sample\n statistics.\n \"\"\"\n stats = [self.sample_stat(self.resample()) for i in range(iters)]\n return numpy.array(stats)\n \n def plot_sampling_distribution(self):\n \"\"\"Plots the sampling distribution.\"\"\"\n sample_stats = self.compute_sampling_distribution()\n se = sample_stats.std()\n ci = numpy.percentile(sample_stats, [5, 95])\n \n pyplot.hist(sample_stats, color=COLOR2)\n pyplot.xlabel('sample statistic')\n pyplot.xlim(self.xlim)\n text(0.03, 0.95, 'CI [%0.2f %0.2f]' % tuple(ci))\n text(0.03, 0.85, 'SE %0.2f' % se)\n pyplot.show()", "The following function instantiates a Resampler and runs it.", "def interact_func(n, xlim):\n sample = weight.rvs(n)\n resampler = Resampler(sample, xlim=xlim)\n resampler.plot_sampling_distribution()", "Here's a test run with n=100", "interact_func(n=100, xlim=[50, 100])", "Now we can use interact_func in an interaction:", "slider = widgets.IntSlider(min=10, max=1000, value=100)\ninteract(interact_func, n=slider, xlim=fixed([50, 100]))\nNone", "Exercise 2: write a new class called StdResampler that inherits from Resampler and overrides sample_stat so it computes the standard deviation of the resampled data.", "# Solution 
goes here\n\nclass StdResampler(Resampler): \n \"\"\"Computes the sampling distribution of the standard deviation.\"\"\"\n \n def sample_stat(self, sample):\n \"\"\"Computes a sample statistic using the original sample or a\n simulated sample.\n \"\"\"\n return sample.std()", "Test your code using the cell below:", "def interact_func2(n, xlim):\n sample = weight.rvs(n)\n resampler = StdResampler(sample, xlim=xlim)\n resampler.plot_sampling_distribution()\n \ninteract_func2(n=100, xlim=[0, 100])", "When your StdResampler is working, you should be able to interact with it:", "slider = widgets.IntSlider(min=10, max=1000, value=100)\ninteract(interact_func2, n=slider, xlim=fixed([0, 100]))\nNone", "STOP HERE\nWe will regroup and discuss before going on.\nPart Three\nWe can extend this framework to compute SE and CI for a difference in means.\nFor example, men are heavier than women on average. Here's the women's distribution again (from BRFSS data):", "female_weight = scipy.stats.lognorm(0.23, 0, 70.8)\nfemale_weight.mean(), female_weight.std()", "And here's the men's distribution:", "male_weight = scipy.stats.lognorm(0.20, 0, 87.3)\nmale_weight.mean(), male_weight.std()", "I'll simulate a sample of 100 men and 100 women:", "female_sample = female_weight.rvs(100)\nmale_sample = male_weight.rvs(100)", "The difference in means should be about 17 kg, but will vary from one random sample to the next:", "male_sample.mean() - female_sample.mean()", "Here's the function that computes Cohen's effect size again:", "def CohenEffectSize(group1, group2):\n \"\"\"Compute Cohen's d.\n\n group1: Series or NumPy array\n group2: Series or NumPy array\n\n returns: float\n \"\"\"\n diff = group1.mean() - group2.mean()\n\n n1, n2 = len(group1), len(group2)\n var1 = group1.var()\n var2 = group2.var()\n\n pooled_var = (n1 * var1 + n2 * var2) / (n1 + n2)\n d = diff / numpy.sqrt(pooled_var)\n return d", "The difference in weight between men and women is about 1 standard deviation:", 
"CohenEffectSize(male_sample, female_sample)", "Now we can write a version of the Resampler that computes the sampling distribution of $d$.", "class CohenResampler(Resampler):\n def __init__(self, group1, group2, xlim=None):\n self.group1 = group1\n self.group2 = group2\n self.xlim = xlim\n \n def resample(self):\n n, m = len(self.group1), len(self.group2)\n group1 = numpy.random.choice(self.group1, n, replace=True)\n group2 = numpy.random.choice(self.group2, m, replace=True)\n return group1, group2\n \n def sample_stat(self, groups):\n group1, group2 = groups\n return CohenEffectSize(group1, group2)", "Now we can instantiate a CohenResampler and plot the sampling distribution.", "resampler = CohenResampler(male_sample, female_sample)\nresampler.plot_sampling_distribution()", "This example demonstrates an advantage of the computational framework over mathematical analysis. Statistics like Cohen's $d$, which is the ratio of other statistics, are relatively difficult to analyze. But with a computational approach, all sample statistics are equally \"easy\".\nOne note on vocabulary: what I am calling \"resampling\" here is a specific kind of resampling called \"bootstrapping\". Other techniques that are also considered resampling include permutation tests, which we'll see in the next section, and \"jackknife\" resampling. You can read more at http://en.wikipedia.org/wiki/Resampling_(statistics)." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
peteWT/fcat_biomass
biomass_burning.ipynb
mit
[ "import utils as ut\nfrom pint import UnitRegistry\nimport pandas as pd\nfrom ggplot import *\nimport seaborn as sns\nfrom tabulate import tabulate\nfrom numpy import average as avg\nimport numpy as np", "Some useful functions and housekeeping\nInitialize a sqlite database", "sqdb = ut.sqlitedb('fcat_biomass')", "Functions for sampling totals from normal distributions defined by a min/max range", "def distFromRange(total, maxr = 32, minr = 2):\n av = (maxr + minr)/2\n stdev = (float(maxr) - float(minr))/4 \n d_frac = (total-np.floor(total))*np.random.normal(av, stdev, 1).clip(min=0)[0]\n t_bdt = np.random.normal(av,stdev,np.floor(total)).clip(min=0)\n return np.append(t_bdt, d_frac)\n\ndef sumFromDist(total, maxr = 0.32, minr = 0.02):\n av = (maxr + minr)/2\n stdev = (float(maxr) - float(minr))/4 \n d_frac = (total-np.floor(total))*np.random.normal(av, stdev, 1).clip(min=0)[0]\n t_bdt = np.sum(np.random.normal(av,stdev,np.floor(total)).clip(min=0))\n return (d_frac+t_bdt)", "Black Carbon from pile burning\nThe CARB criteria pollutant emissions inventory reports Particulate Matter (PM 2.5) emissions from anthropogenic burning of forest residuals. The following estimates elemental carbon (Black Carbon) based on empirically derived relationships between PM2.5 and EC from:\n\nWard DE, Hardy CC. Organic and elemental profiles for smoke from prescribed fires. In: Watson JG, editor. International specialty conference of the Air and Waste Management Association [Internet]. San Francisco: Air and Waste Management Association; 1989. 
Available from: http://www.frames.gov/documents/smoke/serdp/ward_hardy_1989a.pdf", "ward = ut.gData('13UQtRfNBSJ81PXxbYSnB2LrjHePNcvhJhrsxRBjHpoY', 475419971)\n#Units are ratio of EC to PM produced \nwardDF = ward[['source','pct_sm','pct_f','tc_f_est','tc_f_cv','tc_s_est','tc_s_cv','ec_f_est','ec_f_cv', 'ec_s_est','ec_s_cv','ecton-1_h_pm','ecton-1_l_pm']].transpose()\nwardDF.columns = wardDF.iloc[0]\neFact = wardDF.to_dict()\nd = dict(zip([eFact[i]['source'] for i in eFact.keys()],['ALL VEGETATION','WILDLAND FIRE USE (WFU)','FOREST MANAGEMENT']))\nfor k in eFact.keys():\n eFact[k]['arblink'] = d[k]\npd.DataFrame.from_dict(eFact).transpose().to_sql('bc_pm_ratio', sqdb['cx'], if_exists= 'replace')\n#print tabulate(pd.DataFrame.from_dict(eFact).transpose(), headers = ['CARB CPE Cat.','BC/t PM 2.5 (high)','BC/t PM 2.5 (low)', 'Source'], tablefmt=\"pipe\")\npd.DataFrame.from_dict(eFact).transpose()", "pd.DataFrame.from_dict(eFact)", "ward", "ec_rat_table = pd.DataFrame([ward.source, ward.tc_f_est*ward.ec_f_est, ward.tc_f_cv, ward.ec_f_cv,ward.tc_s_est*ward.ec_s_est, ward.tc_s_cv, ward.ec_s_cv]).transpose()\nec_rat_table.to_sql('ec_ratios', sqdb['cx'], if_exists = 'replace')", "ec_rat_table", "print tabulate(ec_rat_table, headers = ['Source',\n 'BC/t PM 2.5 (F, est.)',\n 'TC/t PM 2.5 (F, CV)',\n 'BC/t TC (F, CV)',\n 'BC/t PM 2.5 (S, est.)',\n 'TC/t PM 2.5 (S, CV)',\n 'BC/t TC (S, CV)'], tablefmt=\"pipe\")", "ecPct = ward[['ec_f_est', 'pct_f','pct_sm','tc_f_est','tc_f_cv','tc_s_est','tc_s_cv','ec_f_cv','ec_s_est', 'ec_s_cv']].set_index(ward.source).to_dict('index')", "Black Carbon Global Warming Potential\nSeveral estimates exist for the GWP of Black Carbon. We use GWP20 estimates for black carbon from the CARB Short-Lived Climate Pollutant Strategy and from Fuglestvedt et al. (2000).\nReferences\n\nCalifornia Air Resources Board. Short Lived Climate Pollutant Reduction Strategy. Sacramento, CA; 2015. \nFuglestvedt JS, Berntsen TK, Godal O, Skodvin T. 
Climate implications of GWP-based reductions in greenhouse gas emissions. Geophys Res Lett [Internet]. 2000 Feb 1 [cited 2015 Sep 5];27(3):409–12. Available from: http://doi.wiley.com/10.1029/1999GL010939", "bc_gwp = ut.gData('13UQtRfNBSJ81PXxbYSnB2LrjHePNcvhJhrsxRBjHpoY', 195715938)\nbc_gwp.to_sql('bc_gwp', sqdb['cx'], if_exists = 'replace')\n#print tabulate(bc_gwp.drop('est_id', 1), headers = [i for i in bc_gwp.drop('est_id', 1).columns],tablefmt=\"pipe\")\nbc_gwp = bc_gwp.set_index(bc_gwp.est_id).drop('est_id',1)\n\nbc_gwp", "CARB Criteria Air Pollutant (CAP) emissions data 2015\nParticulate Matter, Carbon Monoxide, Nitrogen Oxide, Sulfur Dioxide, Lead, and Reactive Organic Gases", "cpe_data = pd.read_csv('http://www.arb.ca.gov/app/emsinv/2013/emsbyeic.csv?F_YR=2015&F_DIV=0&F_SEASON=A&SP=2013&SPN=2013_Almanac&F_AREA=CA')\ncpe_data.columns = [i.lower() for i in cpe_data.columns]\ncpe_data.to_sql('cpe_2015', sqdb['cx'], if_exists = 'replace')\ncpe_2015 = pd.read_sql('''SELECT\n eicsoun,\n pm.source type,\n pm2_5 pm25_tpd,\n pm2_5*365*\"ecton-1_h_pm\" as t_ec_high,\n pm2_5*365*\"ecton-1_l_pm\" as t_ec_low,\n pm2_5*365*\"ecton-1_h_pm\"*gwp_20 as co2e_high,\n pm2_5*365*\"ecton-1_l_pm\"*gwp_20 as co2e_low,\n pm2_5*365*\"ecton-1_l_pm\"*(gwp_20-gwp_20_std) as co2e20_low_m1std,\n pm2_5*365*\"ecton-1_l_pm\"*(gwp_20+gwp_20_std) as co2e20_low_p1std,\n pm2_5*365*\"ecton-1_h_pm\"*(gwp_20-gwp_20_std) as co2e20_hi_m1std,\n pm2_5*365*\"ecton-1_h_pm\"*(gwp_20+gwp_20_std) as co2e20_hi_p1std,\n pm2_5*365*\"ecton-1_l_pm\"*(gwp_100-gwp_100_std) as co2e100_low_m1std,\n pm2_5*365*\"ecton-1_l_pm\"*(gwp_100+gwp_100_std) as co2e100_low_p1std,\n pm2_5*365*\"ecton-1_h_pm\"*(gwp_100-gwp_100_std) as co2e100_hi_m1std,\n pm2_5*365*\"ecton-1_h_pm\"*(gwp_100+gwp_100_std) as co2e100_hi_p1std,\n gwp.source\n FROM cpe_2015\n JOIN bc_pm_ratio pm on (arblink = eicsoun)\n CROSS JOIN bc_gwp gwp \n WHERE eicsoun in ('FOREST MANAGEMENT','WILDLAND FIRE USE (WFU)','ALL VEGETATION')''',\n con = 
sqdb['cx'])\n\nfoo = pd.melt(cpe_2015.drop('eicsoun',1), id_vars = ['type', 'source'])\nbar = foo[foo['variable'].str.contains(\"co2e\")].dropna()\nbar['type'] = bar.type.astype('category')\nbar.value = bar.value", "bar", "Make the BC GHG emissions plot\nA secondary approach based on Ward (1989) and Jenkins (1996) uses a range of 2-32% of PM as BC. \n\nJenkins BM, Turn SQ, Williams RB, Goronea M, Abd-el-Fattah H, Mehlschau J, et al. Atmospheric Pollutant Emissions Factors From Open Burning of Agricultural and Forest Biomass By Wind Tunnel Simulations [Internet]. Sacramento, CA; 1996. Available from: http://www.arb.ca.gov/ei/speciate/r01t20/rf9doc/refnum9.htm", "pm2015=pd.read_sql('''select eicsoun,\n pm2_5*365 as pm25\n from cpe_2015\n WHERE eicsoun in ('FOREST MANAGEMENT','WILDLAND FIRE USE (WFU)','ALL VEGETATION');''', con = sqdb['cx'])\npm2015=pm2015.set_index([['Wildfire','Pile Burn','Prescribed']])", "pm2015", "def ecDist(ecEst, ecCV, PM):\n '''\n PM is a mass measure, it's intended to be PM2.5\n ecEst is the estimate of the percentage of elemental carbon comprising the PM\n ecCV is the coefficient of variation around the estimate of elemental carbon\n -----\n returns a random selection from a normal distribution of size `len(pm)`\n centered on `ecEst` with standard deviation of `ecCV`*`ecEst`\n '''\n ecStdev = ecCV * ecEst\n# ec_frac = (PM-np.floor(PM))*np.random.normal(ecEst, ecStdev, 1).clip(min=0)[0]\n t_ec = np.random.normal(ecEst,ecStdev,PM).clip(min=0)#+ec_frac\n return t_ec #+ ec_frac", "ecPct", "for k in ecPct.keys():\n pm = pm2015.loc[k]['pm25']\n ecPct[k]['t_pm'] = pm\n #PM2.5 smoldering \n pm_sm = ecPct[k]['pct_sm']*pm\n ecPct[k]['pm_sm'] = pm_sm\n #TC smoldering\n tc_sm = ecPct[k]['tc_s_est']*pm_sm\n ecPct[k]['tc_sm'] = tc_sm\n #PM2.5 flaming\n pm_f = ecPct[k]['pct_f']*pm\n ecPct[k]['pm_f'] = pm_f\n #TC flaming\n tc_f = ecPct[k]['tc_f_est']*pm_f\n ecPct[k]['tc_f'] = tc_f\n res1k = []\n for t in range(1000):\n rnd = t\n # Total Carbon in PM\n tc_s 
= ecDist(ecPct[k]['tc_s_est'], ecPct[k]['tc_s_cv'], pm)\n tc_f = ecDist(ecPct[k]['tc_f_est'], ecPct[k]['tc_f_cv'], pm)\n #Elemental Carbon in Total Carbon\n ec_s = ecDist(ecPct[k]['ec_s_est'], ecPct[k]['ec_s_cv'], pm)\n ec_f = ecDist(ecPct[k]['ec_f_est'], ecPct[k]['ec_f_cv'], pm)\n ec1tpm = (tc_s*ec_s) + (tc_f*ec_f)\n ec_total_rnd = sum(ec1tpm)\n ec_total_gwp = ec_total_rnd * bc_gwp.loc['carb_slcp']['gwp_20']\n res1k.append([k,k+str(rnd),ec_total_rnd, ec_total_gwp])\n ecPct[k]['ec_tdist'] = pd.DataFrame(res1k)\n #ecPct[k]['ecSMDist'] = np.array([(ecDist(ecPct[k]['ec_s_est'], ecPct[k]['ec_s_cv'], pm)) for i in range(1000)])\n #ecPct[k]['ecFDist'] = np.array([(ecDist(ecPct[k]['ec_f_est'], ecPct[k]['ec_f_cv'], pm)) for i in range(1000)])\n #ecPct[k]['tECDist'] = ecPct[k]['ecSMDist']+ecPct[k]['ecFDist']\n #ecPct[k]['df'] = pd.DataFrame(np.column_stack((ecPct[k]['tECDist'], [k]*len(ecPct[k]['tECDist']))))\n\necPct['Prescribed']['tc_f']\n\n(pm_sm*0.2*3200)+(pm_f*0.2*3200)\n\ndf=pd.concat([ecPct[i]['ec_tdist'] for i in ecPct.keys()])\ndf.columns = ['source', 'sourcernd', 'mt_ec','co2e_ec']\n\nt=sns.FacetGrid(df,col='source', sharey=False)\nt.map(sns.boxplot, 'source','co2e_ec')\nt.set_ylabels('t $CO_2e$')\n[c.xaxis.set_visible(False) for c in t.axes[0]]\nsns.plt.subplots_adjust(top = 0.8)\nt.fig.suptitle('Black carbon emissions in CO2 equivalent units from burning in CA, 2015', fontsize= 15)\nt.fig.text(0.1, -0.08,'''Sources: CARB Criteria Pollutant Emissions Inventory(2015), Ward and Hardy (1989)''',\n fontsize=10)\nfor f in ['.png','.pdf']:\n t.savefig('graphics/bc_prob_gwp{0}'.format(f))\n\n122*0.66\n\npd.read_csv('fera_pile_cemissions.csv', header=1)", "Forested Lands and Wood Products Biodegradable Carbon Emissions & Sinks (MMTCO2)\nARB (2007). Technical support document for Land Use, Land Use Change & Forestry - Biodegradable Carbon Emissions & Sinks. 
Table available as a pdf document at: http://www.arb.ca.gov/cc/inventory/archive/tables/net_co2_flux_2007-11-19.pdf", "t=foo[foo['variable'].str.contains(\"co2e20\")].dropna()\nco2 = ut.gData('1GDdquzrCoq2cxVN2fbCpP4gwi2yrMnONNrWbfhZKZu4', 1636249481)\nco2.columns = co2.iloc[0]\nco2plot=pd.melt(co2.reindex(co2.index.drop(0)), id_vars = ['Year'])\nco2plot.columns = ['sc_cat','year','mmtco2e']\nco2plot.to_sql('arb_co2', sqdb['cx'], if_exists = 'replace')\ncdata = pd.read_sql('''select sc_cat, avg(mmtco2e) from arb_co2 where sc_cat in ('Forest and rangeland fires', 'Timber harvest slash') group by sc_cat''', con = sqdb['cx'])", "co2", "cdata\n#Output GitHub Markdown using the following:\n#print tabulate(cdata, headers = ['Source Category','MMTCO2'],tablefmt=\"pipe\")", "GHG equivalent emissions from management residuals\nTo arrive at an estimate of total annual emissions from burning forest management residuals in CO2 equivalent terms from published CARB estimates, we can combine the CO2 emissions reported for 2004 in the LULUC Biodegradable Carbon Emissions and Sinks with black carbon emissions extrapolated from the CARB Criteria Air Pollutant Emissions inventory estimates. The time discrepancy between the 2004 and 2015 estimates is acknowledged as an irreconcilable source of uncertainty in this estimation, among others. 
This does however reflect that a baseline of substantial emissions from forest management residuals has been reported in CARB emissions inventories and should be recognized as a baseline condition.", "t=foo[foo['variable'].str.contains(\"co2e20\")].dropna()\ntE = pd.DataFrame([cdata['avg(mmtco2e)'][1]*1.10231,\n avg(t[t['type'].str.contains('piles')]['value'])/1000000,\n (cdata['avg(mmtco2e)'][1]*1.10231)+(avg(t[t['type'].str.contains('piles')]['value'])/1000000)],columns = ['Mt CO2e'])\ntE['Source']=['CO2 pile burning', 'CO2e BC pile burning', 'Total Mt CO2e']\ntE.to_sql('pile_em', sqdb['cx'], if_exists = 'replace')\n\n#print tabulate(tE, headers = tE.columns.tolist(), tablefmt ='pipe')\ntE", "Timber Products Output\nThe TPO estimates logging residues produced from commercial timber harvesting operations.", "tpoData = ut.gData('1GDdquzrCoq2cxVN2fbCpP4gwi2yrMnONNrWbfhZKZu4', 872275354, hrow=1)\ntpoData", "Biomass residuals from non-commercial management activities\nData from TPO does not account for forest management activities that do not result in commercial products (timber sales, biomass sales). To estimate the amount of residual material produced from non-commercial management activities we use data from the US Forest Service (FACTS) and from CalFire's timber harvest plan data. \nForest Service Activity Tracking System (FACTS)\nData from TPO does not account for forest management activities that do not result in commercial products (timber sales, biomass sales). We use a range of 10-35 BDT/acre to convert acres reported in FACTS to volume.", "pd.read_excel('FACTS_Tabular_092115.xlsx', sheetname = 'CategoryCrosswalk').to_sql('facts_cat', sqdb['cx'], if_exists = 'replace')\n\npd.read_csv('pd/facts_notimber.csv').to_sql('facts_notimber', sqdb['cx'], if_exists='replace')", "Querying FACTS\nThe USFS reports Hazardous Fuels Treatment (HFT) activities as well as Timber Sales (TS) derived from the FACTS database. 
We use these two datasets to estimate the number of acres treated that did not produce commercial material (sawlogs or biomass) and where burning was not used. The first step is to eliminate all treatments in the HFT dataset that included timber sales. We accomplish this by eliminating all rows in the HFT dataset that have identical FACTS_ID fields in the TS dataset. We further filter the HFT dataset by removing any planned but not executed treatments (nbr_units1 &gt;0 below -- nbr_units1 references NBR_UNITS_ACCOMPLISHED in the USFS dataset, see metadata for HFT here), and use text matching in the 'ACTIVITY' and 'METHOD' fields to remove any rows that contain reference to 'burning' or 'fire'. Finally, we remove all rows that reference 'Biomass' in the method category as it is assumed that this means material was removed for bioenergy.", "usfs_acres = pd.read_sql('''select\n sum(nbr_units1) acres,\n method,\n strftime('%Y',date_compl) year,\n cat.\"ACTIVITY\" activity,\n cat.\"TENTATIVE_CATEGORY\" r5_cat\n from facts_notimber n \n join facts_cat cat\n on (n.activity = cat.\"ACTIVITY\") \n where date_compl is not null\n and nbr_units1 > 0\n and cat.\"TENTATIVE_CATEGORY\" != 'Burning'\n and cat.\"ACTIVITY\" not like '%ire%'\n and method not like '%Burn%'\n and method != 'Biomass'\n group by cat.\"ACTIVITY\",\n year,\n method,\n cat.\"TENTATIVE_CATEGORY\"\n order by year;''', con = sqdb['cx'])", "Converting acres to cubic feet\nFACTS reports in acres. To estimate the production of biomass from acres treated we use a range of 10-35 BDT/acre. 
We assume that actual biomass residuals per acre are normally distributed with a mean of 22.5 and a standard deviation of (35-10)/4 = 6.25", "def sumBDT(ac, maxbdt = 35, minbdt = 10):\n av = (maxbdt + minbdt)/2\n stdev = (float(maxbdt) - float(minbdt))/4 \n d_frac = (ac-np.floor(ac))*np.random.normal(av, stdev, 1).clip(min=0)[0]\n t_bdt = np.sum(np.random.normal(av,stdev,np.floor(ac)).clip(min=0))\n return d_frac+t_bdt\n\nusfs_acres['bdt'] = usfs_acres['acres'].apply(sumBDT)\nusfs_an_bdt = usfs_acres.groupby(['year']).sum()", "Weighted average wood density\nAverage wood density weighted by harvested species percent. Derived from McIver and Morgan, Table 4", "wood_dens = ut.gData('138FWlGeW57MKdcz2UkWxtWV4o50SZO8sduB1R6JOFp8', 1297253755)\nwavg_dens =sum(wood_dens.pct/100 * wood_dens.density_lbscuft)", "Annual unutilized management residuals\n\n[x] Public lands non-commercial management residuals \n[ ] Private land non-commercial management residuals\n[x] Public lands logging residuals\n[x] Private lands logging residuals", "cat_codes = {'nf_ncmr': 'Unburned, non-commercial management residuals from National Forest lands',\n 'nf_lr': 'Logging residuals generated from timber sales on National Forest lands',\n 'opriv_lr': 'Logging residuals generated from timber sales on non-industrial private forest lands',\n 'fi_lr': 'Logging residuals generated from timber sales on industrial private lands',\n 'opub_lr': 'Logging residuals generated from timber sales on other public lands'}\n\nusfs_an_bdt['cuft']= usfs_an_bdt.bdt *wavg_dens\nresid_stats=pd.DataFrame((usfs_an_bdt.iloc[6:,2]/1000000).describe())\nresid_stats.columns = ['nf_ncmr']\nresid_stats['nf_lr']=tpoData[tpoData.ownership.str.contains('National Forest')]['loggingresidues'].describe()\nresid_stats['opriv_lr']=tpoData[tpoData.ownership.str.contains('Other Private')]['loggingresidues'].describe()\nresid_stats['fi_lr']=tpoData[tpoData.ownership.str.contains('Forest 
Industry')]['loggingresidues'].describe()\nresid_stats['opub_lr']=tpoData[tpoData.ownership.str.contains('Other Public')]['loggingresidues'].describe()\nresid_stats\n\nprint tabulate(resid_stats, headers = resid_stats.columns.tolist(), tablefmt ='pipe')\n\nimport os\n\n[os.path.splitext(i)[0] for i in os.listdir('lf/') if os.path.splitext(i)[1] =='.csv']\n\nureg = UnitRegistry()\nureg.define('cubic foot = cubic_centimeter/ 3.53147e-5 = cubic_foot' )\nureg.define('million cubic foot = cubic_foot*1000000 = MMCF' )\nureg.define('board foot sawlog = cubic_foot / 5.44 = BF_saw')\nureg.define('board foot veneer = cubic_foot / 5.0 = BF_vo')\nureg.define('board foot bioenergy = cubic_foot / 1.0 = BF_bio')\nureg.define('bone-dry unit = cubic_foot * 96 = BDU')\n" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
DistrictDataLabs/yellowbrick
examples/zjpoh/stacked_feature_importance.ipynb
apache-2.0
[ "%matplotlib inline", "Yellowbrick Feature Importance Examples\nThis notebook is a sample of the feature importance examples that yellowbrick provides.", "import os\nimport sys\nsys.path.insert(0, \"../..\")\n\nimport importlib\nimport numpy as np\nimport pandas as pd\nimport yellowbrick\nimport yellowbrick as yb\nfrom yellowbrick.features.importances import FeatureImportances\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\n\nfrom sklearn import manifold, datasets\nfrom sklearn.linear_model import LogisticRegression, LinearRegression\n\nmpl.rcParams[\"figure.figsize\"] = (9,6)", "Load Iris Datasets for Example Code", "X_iris, y_iris = datasets.load_iris(True)\nX_iris_pd = pd.DataFrame(X_iris, columns=['f1', 'f2', 'f3', 'f4'])", "Logistic Regression with Mean of Feature Importances\nShould we normalize relative to maximum value or maximum absolute value?", "viz = FeatureImportances(LogisticRegression())\nviz.fit(X_iris, y_iris)\nviz.show()\n\nviz = FeatureImportances(LogisticRegression(), relative=False)\nviz.fit(X_iris, y_iris)\nviz.show()\n\nviz = FeatureImportances(LogisticRegression(), absolute=True)\nviz.fit(X_iris, y_iris)\nviz.show()\n\nviz = FeatureImportances(LogisticRegression(), relative=False, absolute=True)\nviz.fit(X_iris, y_iris)\nviz.show()", "Logistic Regression with Stacked Feature Importances\nNeed to decide how to scale feature importance when relative=True", "viz = FeatureImportances(LogisticRegression(), stack=True)\nviz.fit(X_iris, y_iris)\nviz.show()\n\nviz = FeatureImportances(LogisticRegression(), stack=True, relative=False)\nviz.fit(X_iris, y_iris)\nviz.show()\n\nviz = FeatureImportances(LogisticRegression(), stack=True, absolute=True)\nviz.fit(X_iris, y_iris)\nviz.show()\n\nviz = FeatureImportances(LogisticRegression(), stack=True, relative=False, absolute=True)\nviz.fit(X_iris, y_iris)\nviz.show()", "Load Digits Datasets for Example Code\nShould we add an option to show only top n features?", "X_digits, y_digits = 
datasets.load_digits(return_X_y=True)\n\nviz = FeatureImportances(LogisticRegression(), stack=True, relative=True)\nviz.fit(X_digits, y_digits)\nviz.show()", "Linear Regression", "viz = FeatureImportances(LinearRegression())\nviz.fit(X_iris, y_iris)\nviz.show()\n\nviz = FeatureImportances(LinearRegression(), stack=True)\nviz.fit(X_iris, y_iris)\nviz.show()", "Playground", "importlib.reload(yellowbrick.features.importances)\nfrom yellowbrick.features.importances import FeatureImportances\n\nviz = FeatureImportances(LogisticRegression(), relative=False, absolute=False, stack=True)\nviz.fit(X_iris_pd, y_iris)\nviz.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]